Dataset schema: title (string, length 7 to 239); abstract (string, length 7 to 2.76k); six binary topic labels, each int64 with values 0 or 1: cs, phy, math, stat, quantitative biology, quantitative finance.
Hermitian-Yang-Mills connections on collapsing elliptically fibered $K3$ surfaces
Let $X\rightarrow {\mathbb P}^1$ be an elliptically fibered $K3$ surface with a section, admitting a sequence of Ricci-flat metrics collapsing the fibers. Let $\mathcal E$ be a generic, holomorphic $SU(n)$ bundle over $X$ such that the restriction of $\mathcal E$ to each fiber is semi-stable. Given a sequence $\Xi_i$ of Hermitian-Yang-Mills connections on $\mathcal E$ corresponding to this degeneration, we prove that, if $E$ is a given fiber away from a finite set, the restricted sequence $\Xi_i|_{E}$ converges to a flat connection uniquely determined by the holomorphic structure on $\mathcal E$.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Highly Efficient Human Action Recognition with Quantum Genetic Algorithm Optimized Support Vector Machine
In this paper we propose the use of a quantum genetic algorithm to optimize the support vector machine (SVM) for human action recognition. The Microsoft Kinect sensor can be used for skeleton tracking, which provides the joints' position data. However, how to extract motion features that represent the dynamics of a human skeleton is still a challenge due to the complexity of human motion. We present a highly efficient feature extraction method for action classification: we use the joint angles to represent a human skeleton and calculate the variance of each angle during an action time window. Using the proposed representation, we compare the human action classification accuracy of two approaches: the SVM optimized by the quantum genetic algorithm and the conventional SVM with grid search. Experimental results on the MSR-12 dataset show that the conventional SVM achieved an accuracy of $ 93.85\% $. The proposed approach outperforms the conventional method with an accuracy of $ 96.15\% $.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
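As a minimal sketch of the grid-search SVM baseline described in the abstract above, with synthetic stand-ins for the joint-angle variance features (the feature dimensions, class count, and parameter grid here are illustrative assumptions, not the paper's setup):

```python
# Grid-search SVM baseline on simulated joint-angle variance features.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_angles, n_classes = 300, 19, 12    # hypothetical sizes
X = rng.random((n_samples, n_angles))           # variance of each joint angle
y = rng.integers(0, n_classes, n_samples)       # action labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Conventional grid search over the RBF-SVM hyperparameters (C, gamma);
# the paper replaces this search with a quantum genetic algorithm.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
                    cv=5)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_)
print("test accuracy:", grid.score(X_te, y_te))
```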
The Game Imitation: Deep Supervised Convolutional Networks for Quick Video Game AI
We present a vision-only model for gaming AI which uses a late integration deep convolutional network architecture trained in a purely supervised imitation learning context. Although state-of-the-art deep learning models for video game tasks generally rely on more complex methods such as deep-Q learning, we show that a supervised model which requires substantially fewer resources and training time can already perform well at human reaction speeds on the N64 classic game Super Smash Bros. We frame our learning task as a 30-class classification problem, and our CNN model achieves 80% top-1 and 95% top-3 validation accuracy. With slight test-time fine-tuning, our model is also competitive during live simulation with the highest-level AI built into the game. We further show evidence through network visualizations that the network successfully leverages temporal information during inference to aid in decision making. Our work demonstrates that supervised CNN models can provide good performance in challenging policy prediction tasks while being significantly simpler and more lightweight than alternatives.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Differential relations for almost Belyi maps
Several kinds of differential relations for polynomial components of almost Belyi maps are presented. Saito's theory of free divisors gives a particularly interesting (yet conjectural) logarithmic action of vector fields. The differential relations implied by Kitaev's construction of algebraic Painleve VI solutions through pull-back transformations are used to compute almost Belyi maps for the pull-backs giving all genus 0 and 1 Painleve VI solutions in the Lisovyy-Tykhyy classification.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Non-commutative holomorphic semicocycles
This paper studies holomorphic semicocycles over semigroups in the unit disk, which take values in an arbitrary unital Banach algebra. We prove that every such semicocycle is a solution to a corresponding evolution problem. We then investigate the linearization problem: which semicocycles are cohomologous to constant semicocycles? In contrast with the case of commutative semicocycles, in the non-commutative case non-linearizable semicocycles are shown to exist. Simple conditions for linearizability are derived and are shown to be sharp.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Comparative Study of Virtual Machines and Containers for DevOps Developers
In this work, we plan to develop a system to compare virtual machines with container technology. We devise ways to measure the administrator effort of containers vs. virtual machines (VMs). The metrics to be tested include the human effort required, ease of migration, resource utilization, and ease of use for both containers and VMs.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Developmental tendencies in the Academic Field of Intellectual Property through the Identification of Invisible Colleges
The emergence of intellectual property as an academic issue opens a large gate to a cross-disciplinary field. Different disciplines started a dialogue in the framework of the international multilateral treaties in the early 90's. As the global economy demands new knowledge on intellectual property, science grows at its own pace. However, the degree of consolidation of cross-disciplinary academic communities is not clear. In order to know how closely related these communities are, this paper proposes a mixed methodology to find invisible colleges in the production on intellectual property. The articles examined in this paper were extracted from Web of Science. The analyzed period was from 1994 to 2016, taking into account the signature of the Agreement on Trade-Related Aspects of Intellectual Property Rights in the early 90's. A total of 1580 papers were processed through co-citation network analysis. A special technique, which combines community-detection algorithms with the definition of populations of articles through thresholds of shared references, was applied. In order to contrast the invisible colleges that emerged with the existence of formal institutional relations, a qualitative tracking of the authors was made with respect to their institutional affiliation, lines of research and meeting places. Both methods show that the subjects of interest can be grouped into 13 different issues related to the intellectual property field. Even though most of them are related to law and economics, there are weak linkages between disciplines, which could indicate the construction of a cross-disciplinary field.
labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Periodic fourth-order cubic NLS: Local well-posedness and Non-squeezing property
In this paper, we consider the cubic fourth-order nonlinear Schrödinger equation (4NLS) under the periodic boundary condition. We prove two results. One is the local well-posedness in $H^s$ with $-1/3 \le s < 0$ for the Cauchy problem of the Wick ordered 4NLS. The other is the non-squeezing property for the flow map of 4NLS in the symplectic phase space $L^2(\mathbb{T})$. To prove the former we use the ideas introduced in [Takaoka and Tsutsumi 2004] and [Nakanishi et al. 2010], and to prove the latter we use the ideas in [Colliander et al. 2005].
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Feature Learning for Meta-Paths in Knowledge Graphs
In this thesis, we study the problem of feature learning on heterogeneous knowledge graphs. These features can be used to perform tasks such as link prediction, classification and clustering on graphs. Knowledge graphs provide rich semantics encoded in the edge and node types. Meta-paths consist of these types and abstract paths in the graph. Until now, meta-paths could only be used as categorical features with high redundancy and were therefore unsuitable for machine learning models. We propose meta-path embeddings to solve this problem by learning semantic and compact vector representations of them. Current graph embedding methods only embed nodes and edge types and therefore miss the semantics encoded in their combination. Our method embeds meta-paths using the skipgram model, with an extension to deal with the redundancy and the large number of meta-paths in big knowledge graphs. We critically evaluate our embedding approach by predicting links on Wikidata. The experiments indicate that we learn a sensible embedding of the meta-paths, but that it can be improved further.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
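The skipgram embedding of meta-paths described above can be sketched as follows; the toy corpus of node/edge-type sequences and all hyperparameters are illustrative assumptions, not the thesis's setup:

```python
# Skipgram embeddings of meta-path types, then a pooled path vector.
import numpy as np
from gensim.models import Word2Vec

# Each "sentence" is a meta-path: an alternating sequence of node/edge types.
meta_paths = [
    ["Person", "educatedAt", "University", "locatedIn", "City"],
    ["Person", "worksFor", "Company", "locatedIn", "City"],
    ["Film", "directedBy", "Person", "educatedAt", "University"],
] * 50  # repeat so the toy corpus is large enough to train on

model = Word2Vec(meta_paths, vector_size=32, window=2, min_count=1, sg=1)

# A compact vector for a whole meta-path can then be built, e.g. by
# averaging the type embeddings along the path.
path_vec = np.mean([model.wv[t] for t in meta_paths[0]], axis=0)
print(path_vec.shape)  # (32,)
```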
Closed-Form Exact Inverses of the Weakly Singular and Hypersingular Operators On Disks
We introduce new boundary integral operators which are the exact inverses of the weakly singular and hypersingular operators for the Laplacian on flat disks. Moreover, we provide explicit closed forms for them and prove the continuity and ellipticity of their corresponding bilinear forms in the natural Sobolev trace spaces. This permits us to derive new Calderón-type identities that can provide the foundation for optimal operator preconditioning in Galerkin boundary element methods.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Focus on Imaging Methods in Granular Physics
Granular materials are complex multi-particle ensembles in which macroscopic properties are largely determined by inter-particle interactions between their numerous constituents. In order to understand and to predict their macroscopic physical behavior, it is necessary to analyze the composition and interactions at the level of individual contacts and grains. To do so requires the ability to image individual particles and their local configurations to high precision. A variety of competing and complementary imaging techniques have been developed for that task. In this introductory paper accompanying the Focus Issue, we provide an overview of these imaging methods and discuss their advantages and drawbacks, as well as their limits of application.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Stochastic Matching Problem: Beating Half with a Non-Adaptive Algorithm
In the stochastic matching problem, we are given a general (not necessarily bipartite) graph $G(V,E)$, where each edge in $E$ is realized with some constant probability $p > 0$, and the goal is to compute a bounded-degree (bounded by a function depending only on $p$) subgraph $H$ of $G$ such that the expected maximum matching size in $H$ is close to the expected maximum matching size in $G$. The algorithms in this setting are considered non-adaptive as they have to choose the subgraph $H$ without knowing any information about the set of realized edges in $G$. Originally motivated by an application to kidney exchange, the stochastic matching problem and its variants have received significant attention in recent years. The state-of-the-art non-adaptive algorithms for stochastic matching achieve an approximation ratio of $\frac{1}{2}-\epsilon$ for any $\epsilon > 0$, naturally raising the question of whether $1/2$ is the limit of what can be achieved with a non-adaptive algorithm. In this work, we resolve this question by presenting the first algorithm for stochastic matching with an approximation guarantee that is strictly better than $1/2$: the algorithm computes a subgraph $H$ of $G$ with maximum degree $O(\frac{\log{(1/ p)}}{p})$ such that the ratio of the expected size of a maximum matching in realizations of $H$ and $G$ is at least $1/2+\delta_0$ for some absolute constant $\delta_0 > 0$. The degree bound on $H$ achieved by our algorithm is essentially the best possible (up to an $O(\log{(1/p)})$ factor) for any constant-factor approximation algorithm, since an $\Omega(\frac{1}{p})$ degree in $H$ is necessary for a vertex to acquire at least one incident edge in a realization.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
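A hedged sketch of how any non-adaptive subgraph can be evaluated by Monte Carlo; the subgraph built here is a simple union-of-matchings construction in the spirit of the $\frac{1}{2}-\epsilon$ baselines mentioned above, not the paper's improved algorithm:

```python
# Monte-Carlo evaluation of a non-adaptive subgraph for stochastic matching.
import random
import networkx as nx

def union_of_matchings(G, k):
    # H is the union of k edge-disjoint (near-)maximum matchings of G.
    H, rest = nx.Graph(), G.copy()
    for _ in range(k):
        M = nx.max_weight_matching(rest, maxcardinality=True)
        H.add_edges_from(M)
        rest.remove_edges_from(M)
    return H

def expected_matching_size(G, p, trials=200):
    total = 0
    for _ in range(trials):
        R = nx.Graph([e for e in G.edges if random.random() < p])
        total += len(nx.max_weight_matching(R, maxcardinality=True))
    return total / trials

random.seed(0)
G = nx.erdos_renyi_graph(30, 0.3, seed=0)
p = 0.3
H = union_of_matchings(G, k=5)
print("ratio:", expected_matching_size(H, p) / expected_matching_size(G, p))
```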
Measuring the unmeasurable - a project of domestic violence risk prediction and management
The prevention of domestic violence (DV) has raised serious concerns in Taiwan because of the disparity between the increasing number of reported DV cases, which doubled over the past decade, and the scarcity of social workers. Additionally, a large amount of data was collected when social workers used the predominant case-management approach to document case-report information. However, these data were not properly stored or organized. To improve the efficiency of DV prevention and risk management, we worked with the Taipei City Government and utilized the 2015 data from its DV database to perform a spatial pattern analysis of the reports of DV cases and build a DV risk map. However, during our map building process, the issue of confounding bias arose because we were not able to verify whether reported cases truly reflected real violence occurrences or were simply false reports from potential victims' neighbors. Therefore, we used the random forest method to build a repeat victimization risk prediction model. The accuracy and F1-measure of our model were 96.3% and 62.8%, respectively. This model helps social workers differentiate the risk level of new cases, which significantly reduces their major workload. To our knowledge, this is the first project to utilize machine learning in DV prevention. The research approach and results of this project not only can improve the DV prevention process, but can also be applied to other social work or crime prevention areas.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
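A minimal sketch of the modeling step described above (a random forest scored by accuracy and F1), using synthetic imbalanced data in place of the confidential case-report features:

```python
# Random-forest risk model evaluated by accuracy and F1 on stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9],
                           random_state=0)   # imbalanced, like real reports
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("F1:", f1_score(y_te, pred))
```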
Attaining Capacity with Algebraic Geometry Codes through the $(U|U+V)$ Construction and Koetter-Vardy Soft Decoding
In this paper we show how to attain the capacity of discrete symmetric channels with polynomial time decoding complexity by considering iterated $(U|U+V)$ constructions with Reed-Solomon code or algebraic geometry code components. These codes are decoded with a recursive computation of the {\em a posteriori} probabilities of the code symbols together with the Koetter-Vardy soft decoder used for decoding the code components in polynomial time. We show that when the number of levels of the iterated $(U|U+V)$ construction tends to infinity, we attain the capacity of any discrete symmetric channel in this way. This result follows from the polarization theorem together with a simple lemma explaining how the Koetter-Vardy decoder behaves for Reed-Solomon codes of rate close to $1$. However, even if this way of attaining the capacity of a symmetric channel is essentially the Ar{\i}kan polarization theorem, there are some differences with standard polar codes. Indeed, with this strategy we can operate successfully close to channel capacity even with a small number of levels of the iterated $(U|U+V)$ construction, and the probability of error decays quasi-exponentially with the codelength in such a case (i.e. exponentially if we forget about the logarithmic terms in the exponent). We can even improve on this result by considering the algebraic geometry codes constructed in \cite{TVZ82}. In such a case, the probability of error decays exponentially in the codelength for any rate below the capacity of the channel. Moreover, when comparing this strategy to Reed-Solomon codes (or more generally algebraic geometry codes) decoded with the Koetter-Vardy decoding algorithm, it not only improves the noise level that the code can tolerate, but also results in a significant complexity gain.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Embedded real-time monitoring using SystemC in IMA network
Avionics is a domain where prevention prevails. Nonetheless, failures occur: sometimes because a pilot, flooded with information, reacts incorrectly; sometimes because the information itself would be better verified than trusted. To avoid some kinds of failure, it has been proposed to add a new kind of monitoring in the midst of the ARINC664 aircraft data network.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
One pixel attack for fooling deep neural networks
Recent research has revealed that the output of Deep Neural Networks (DNN) can be easily altered by adding relatively small perturbations to the input vector. In this paper, we analyze an attack in an extremely limited scenario where only one pixel can be modified. For that we propose a novel method for generating one-pixel adversarial perturbations based on differential evolution (DE). It requires less adversarial information (a black-box attack) and can fool more types of networks due to the inherent features of DE. The results show that 68.36% of the natural images in the CIFAR-10 test dataset and 41.22% of the ImageNet (ILSVRC 2012) validation images can be perturbed to at least one target class by modifying just one pixel, with 73.22% and 5.52% confidence on average. Thus, the proposed attack explores a different take on adversarial machine learning in an extremely limited scenario, showing that current DNNs are also vulnerable to such low dimension attacks. Besides, we also illustrate an important application of DE (or broadly speaking, evolutionary computation) in the domain of adversarial machine learning: creating tools that can effectively generate low-cost adversarial attacks against neural networks for evaluating robustness. The code is available at: this https URL
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
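The one-pixel search can be sketched with SciPy's differential evolution; the toy linear softmax "network" below is an assumption standing in for a real CNN, while the (x, y, r, g, b) encoding of a candidate follows the abstract:

```python
# One-pixel attack driven by differential evolution on a toy classifier.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 32 * 32 * 3)) * 0.01   # toy classifier weights
image = rng.random((32, 32, 3))
true_class = 3

def class_probs(img):
    logits = W @ img.ravel()
    e = np.exp(logits - logits.max())
    return e / e.sum()

def objective(z):
    x, y, r, g, b = z
    img = image.copy()
    img[int(x), int(y)] = (r, g, b)          # modify exactly one pixel
    return class_probs(img)[true_class]      # confidence to be minimized

bounds = [(0, 31.99), (0, 31.99), (0, 1), (0, 1), (0, 1)]
res = differential_evolution(objective, bounds, maxiter=30, seed=0)
print("confidence in true class fell from",
      class_probs(image)[true_class], "to", res.fun)
```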
A computer simulation of the Volga River hydrological regime: a problem of water-retaining dam optimal location
We investigate the optimal location of a special dam on the Volga River in the area where the Akhtuba left branch begins (7 km south of the Volga Hydroelectric Power Station dam). We claim that a new water-retaining dam can resolve the key problem of the Volga-Akhtuba floodplain, namely the insufficient amount of water during the spring flooding due to the overregulation of the Lower Volga. By numerically integrating the Saint-Venant equations, we study the water dynamics across the northern part of the Volga-Akhtuba floodplain, taking into account its actual topography. As a result, we find the amount of water $V_A$ passing to the Akhtuba during the spring period for a given water flow through the Volga Hydroelectric Power Station (the so-called hydrograph, which characterises the water flow per unit of time). By varying the location $(x_d, y_d)$ of the water-retaining dam, we obtain various values of $V_A (x_d, y_d)$ as well as various spatial flow structures on the territory during the flood period. A gradient descent method provides the dam coordinates with the maximum value of ${V_A}$. This approach to choosing the dam location lets us find the best solution, for which the value $V_A$ increases by a factor of 2. Our analysis demonstrates the good potential of numerical simulations in the field of hydraulic works.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Multi-proton bunch driven hollow plasma wakefield acceleration in the nonlinear regime
Proton-driven plasma wakefield acceleration has been demonstrated in simulations to be capable of accelerating particles to the energy frontier in a single stage, but its potential is hindered by the fact that currently available proton bunches are orders of magnitude longer than the plasma wavelength. Fortunately, proton micro-bunching allows driving plasma waves resonantly. In this paper, we propose using a hollow plasma channel for multiple proton bunch driven plasma wakefield acceleration and demonstrate that it enables the operation in the nonlinear regime and resonant excitation of strong plasma waves. This new regime also involves beneficial features of hollow channels for the accelerated beam (such as emittance preservation and uniform accelerating field) and long buckets of stable deceleration for the drive beam. The regime is attained at a proper ratio among plasma skin depth, driver radius, hollow channel radius, and micro-bunch period.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Self corrective Perturbations for Semantic Segmentation and Classification
Convolutional Neural Networks have been a subject of great importance over the past decade, and great strides have been made in their utility for producing state-of-the-art performance in many computer vision problems. However, the behavior of deep networks is yet to be fully understood and is still an active area of research. In this work, we present an intriguing behavior: pre-trained CNNs can be made to improve their predictions by structurally perturbing the input. We observe that these perturbations, referred to as Guided Perturbations, enable a trained network to improve its prediction performance without any learning or change in network weights. We perform various ablative experiments to understand how these perturbations affect the local context and feature representations. Furthermore, we demonstrate that this idea can improve the performance of several existing approaches on semantic segmentation and scene labeling tasks on the PASCAL VOC dataset and on supervised classification tasks on the MNIST and CIFAR10 datasets.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Large-Scale Mapping of Human Activity using Geo-Tagged Videos
This paper is the first work to perform spatio-temporal mapping of human activity using the visual content of geo-tagged videos. We utilize a recent deep-learning based video analysis framework, termed hidden two-stream networks, to recognize a range of activities in YouTube videos. This framework is efficient and can run in real time or faster, which is important for recognizing events as they occur in streaming video or for reducing latency in analyzing already captured video. This is, in turn, important for using video in smart-city applications. We perform a series of experiments to show that our approach is able to accurately map activities both spatially and temporally. We also demonstrate the advantages of using the visual content over the tags/titles.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Structure of a Parabolic Partial Differential Equation on Graphs and Digital spaces. Solution of PDE on Digital Spaces: a Klein Bottle, a Projective Plane, a 4D Sphere and a Moebius Band
This paper studies the structure of a parabolic partial differential equation on graphs and digital n-dimensional manifolds, which are digital models of continuous n-manifolds. Conditions for the existence of solutions of the equation are determined and investigated. Numerical solutions of the equation on a Klein bottle, a projective plane, a 4D sphere and a Moebius strip are presented.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
GCN-GAN: A Non-linear Temporal Link Prediction Model for Weighted Dynamic Networks
In this paper, we generally formulate the dynamics prediction problem of various network systems (e.g., the prediction of mobility, traffic and topology) as the temporal link prediction task. Different from conventional techniques of temporal link prediction that ignore the potential non-linear characteristics and the informative link weights in the dynamic network, we introduce a novel non-linear model GCN-GAN to tackle the challenging temporal link prediction task of weighted dynamic networks. The proposed model leverages the benefits of the graph convolutional network (GCN), long short-term memory (LSTM) as well as the generative adversarial network (GAN). Thus, the dynamics, topology structure and evolutionary patterns of weighted dynamic networks can be fully exploited to improve the temporal link prediction performance. Concretely, we first utilize GCN to explore the local topological characteristics of each single snapshot and then employ LSTM to characterize the evolving features of the dynamic networks. Moreover, GAN is used to enhance the ability of the model to generate the next weighted network snapshot, which can effectively tackle the sparsity and the wide-value-range problem of edge weights in real-life dynamic networks. To verify the model's effectiveness, we conduct extensive experiments on four datasets of different network systems and application scenarios. The experimental results demonstrate that our model achieves impressive results compared to the state-of-the-art competitors.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Efficient Exact and Approximate Algorithms for Computing Betweenness Centrality in Directed Graphs
Graphs are an important tool to model data in different domains, including social networks, bioinformatics and the world wide web. Most of the networks formed in these domains are directed graphs, where all the edges have a direction and they are not symmetric. Betweenness centrality is an important index widely used to analyze networks. In this paper, given a directed network $G$ and a vertex $r \in V(G)$, we first propose a new exact algorithm to compute the betweenness score of $r$. Our algorithm pre-computes a set $\mathcal{RV}(r)$, which is used to prune a huge amount of computations that do not contribute to the betweenness score of $r$. The time complexity of our exact algorithm depends on $|\mathcal{RV}(r)|$ and is respectively $\Theta(|\mathcal{RV}(r)|\cdot|E(G)|)$ and $\Theta(|\mathcal{RV}(r)|\cdot|E(G)|+|\mathcal{RV}(r)|\cdot|V(G)|\log |V(G)|)$ for unweighted graphs and weighted graphs with positive weights. $|\mathcal{RV}(r)|$ is bounded from above by $|V(G)|-1$ and in most cases it is a small constant. Then, for the cases where $\mathcal{RV}(r)$ is large, we present a simple randomized algorithm that samples from $\mathcal{RV}(r)$ and performs computations only for the sampled elements. We show that this algorithm provides an $(\epsilon,\delta)$-approximation of the betweenness score of $r$. Finally, we perform extensive experiments over several real-world datasets from different domains, for several randomly chosen vertices as well as for the vertices with the highest betweenness scores. Our experiments reveal that in most cases our algorithm significantly outperforms the most efficient existing randomized algorithms, in terms of both running time and accuracy. Our experiments also show that our proposed algorithm computes the betweenness scores of all vertices in sets of sizes 5, 10 and 15 much faster and more accurately than the most efficient existing algorithms.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
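For intuition, here is a sketch contrasting exact betweenness with a sampled estimate; note that networkx's estimator samples source vertices, which differs from the paper's sampling over $\mathcal{RV}(r)$, but it illustrates the exact-versus-sampled trade-off:

```python
# Exact (Brandes) vs. sampled betweenness centrality for one vertex.
import networkx as nx

G = nx.gnp_random_graph(300, 0.05, seed=0, directed=True)

exact = nx.betweenness_centrality(G)                  # all sources
approx = nx.betweenness_centrality(G, k=50, seed=0)   # 50 sampled sources

r = max(exact, key=exact.get)                         # highest-score vertex
print("exact:", exact[r], " approx:", approx[r])
```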
A multi-instrument non-parametric reconstruction of the electron pressure profile in the galaxy cluster CLJ1226.9+3332
Context: In the past decade, sensitive, resolved Sunyaev-Zel'dovich (SZ) studies of galaxy clusters have become common. Whereas many previous SZ studies have parameterized the pressure profiles of galaxy clusters, non-parametric reconstructions will provide insights into the thermodynamic state of the intracluster medium (ICM). Aims: We seek to recover the non-parametric pressure profiles of the high redshift ($z=0.89$) galaxy cluster CLJ 1226.9+3332 as inferred from SZ data from the MUSTANG, NIKA, Bolocam, and Planck instruments, which all probe different angular scales. Methods: Our non-parametric algorithm makes use of logarithmic interpolation, which under the assumption of ellipsoidal symmetry is analytically integrable. For MUSTANG, NIKA, and Bolocam we derive a non-parametric pressure profile independently and find good agreement among the instruments. In particular, we find that the non-parametric profiles are consistent with a fitted gNFW profile. Given the ability of Planck to constrain the total signal, we include a prior on the integrated Compton Y parameter as determined by Planck. Results: For a given instrument, constraints on the pressure profile diminish rapidly beyond the field of view. The overlap in spatial scales probed by these four datasets is therefore critical in checking for consistency between instruments. By using multiple instruments, our analysis of CLJ 1226.9+3332 covers a large radial range, from the central regions to the cluster outskirts: $0.05 R_{500} < r < 1.1 R_{500}$. This is a wider range of spatial scales than is typically recovered by SZ instruments. Similar analyses will be possible with the new generation of SZ instruments such as NIKA2 and MUSTANG2.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Recognizing Union-Find trees built up using union-by-rank strategy is NP-complete
Disjoint-Set forests, consisting of Union-Find trees, are data structures with widespread practical applications due to their efficiency. Despite being well known, no exact structural characterization of these trees exists for the case of the union-by-rank merging strategy (such a characterization exists for Union trees, which are constructed without path compression). In this paper we provide such a characterization by means of a simple push operation, and show that the decision problem of whether a given tree (along with the rank info of its nodes) is a Union-Find tree is NP-complete, complementing our earlier similar result for the union-by-size strategy.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
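For reference, a sketch of the construction the recognition problem concerns: Union-Find with the union-by-rank merging strategy and path compression, keeping the rank info of every node (an illustration only, not part of the paper's proof):

```python
# Union-Find: union-by-rank merging with path compression on find.
class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:      # compress the traversed path
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:  # attach lower rank under higher
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

uf = UnionFind(8)
for a, b in [(0, 1), (2, 3), (0, 2), (4, 5), (0, 4)]:
    uf.union(a, b)
print(uf.parent, uf.rank)   # the resulting tree plus rank info
```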
A Competitive Algorithm for Online Multi-Robot Exploration of a Translating Plume
In this paper, we study the problem of exploring a translating plume with a team of aerial robots. The shape and the size of the plume are unknown to the robots. The objective is to find a tour for each robot such that they collectively explore the plume. Specifically, the tours must be such that each point in the plume is visible from the field-of-view of some robot along its tour. We propose a recursive Depth-First Search (DFS)-based algorithm that yields a constant competitive ratio for the exploration problem. The competitive ratio is $\frac{2(S_r+S_p)(R+\lfloor\log{R}\rfloor)}{(S_r-S_p)(1+\lfloor\log{R}\rfloor)}$ where $R$ is the number of robots, and $S_r$ and $S_p$ are the robot speed and the plume speed, respectively. We also consider a more realistic scenario where the plume shape is not restricted to grid cells but can be an arbitrary shape. We show that our algorithm has a $\frac{2(S_r+S_p)(18R+\lfloor\log{R}\rfloor)}{(S_r-S_p)(1+\lfloor\log{R}\rfloor)}$ competitive ratio under the fat condition. We empirically verify our algorithm using simulations.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
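The two competitive ratios quoted above can be evaluated numerically; the robot and plume speeds below, and the choice of log base 2, are assumptions made only for illustration:

```python
# Numeric evaluation of the competitive ratios as the team size R grows.
from math import floor, log2   # assuming the floors use log base 2

def ratio(R, Sr, Sp, c):       # c = 1 for grid cells, 18 under the fat condition
    L = floor(log2(R))
    return 2 * (Sr + Sp) * (c * R + L) / ((Sr - Sp) * (1 + L))

for R in (2, 4, 8, 16):
    print(R, ratio(R, Sr=2.0, Sp=0.5, c=1), ratio(R, Sr=2.0, Sp=0.5, c=18))
```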
Warped Riemannian metrics for location-scale models
The present paper shows that warped Riemannian metrics, a class of Riemannian metrics which play a prominent role in Riemannian geometry, are also of fundamental importance in information geometry. Precisely, the paper features a new theorem, which states that the Rao-Fisher information metric of any location-scale model, defined on a Riemannian manifold, is a warped Riemannian metric, whenever this model is invariant under the action of some Lie group. This theorem is a valuable tool in finding the expression of the Rao-Fisher information metric of location-scale models defined on high-dimensional Riemannian manifolds. Indeed, a warped Riemannian metric is fully determined by only two functions of a single variable, irrespective of the dimension of the underlying Riemannian manifold. Starting from this theorem, several original contributions are made. The expression of the Rao-Fisher information metric of the Riemannian Gaussian model is provided, for the first time in the literature. A generalised definition of the Mahalanobis distance is introduced, which is applicable to any location-scale model defined on a Riemannian manifold. The solution of the geodesic equation is obtained, for any Rao-Fisher information metric defined in terms of warped Riemannian metrics. Finally, using a mixture of analytical and numerical computations, it is shown that the parameter space of the von Mises-Fisher model of $n$-dimensional directional data, when equipped with its Rao-Fisher information metric, becomes a Hadamard manifold, a simply-connected complete Riemannian manifold of negative sectional curvature, for $n = 2,\ldots,8$. Hopefully, in upcoming work, this will be proved for any value of $n$.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Admissibility of solution estimators for stochastic optimization
We look at stochastic optimization problems through the lens of statistical decision theory. In particular, we address admissibility, in the statistical decision theory sense, of the natural sample average estimator for a stochastic optimization problem (which is also known as the empirical risk minimization (ERM) rule in learning literature). It is well known that for general stochastic optimization problems, the sample average estimator may not be admissible. This is known as Stein's paradox in the statistics literature. We show in this paper that for optimizing stochastic linear functions over compact sets, the sample average estimator is admissible.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Min-Max Regret Scheduling To Minimize the Total Weight of Late Jobs With Interval Uncertainty
We study the single machine scheduling problem with the objective to minimize the total weight of late jobs. It is assumed that the processing times of the jobs are not exactly known at the time when a complete schedule must be dispatched. Instead, only interval bounds for these parameters are given. In contrast to the stochastic optimization approach, we consider the problem of finding a robust schedule, which minimizes the maximum regret of a solution. A heuristic algorithm based on mixed-integer linear programming is presented and examined through computational experiments.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Correcting rural building annotations in OpenStreetMap using convolutional neural networks
Rural building mapping is paramount to support demographic studies and to plan actions in response to crises that affect those areas. Rural building annotations exist in OpenStreetMap (OSM), but their quality and quantity are not sufficient for training models that can create accurate rural building maps. The problems with these annotations essentially fall into three categories: (i) most commonly, many annotations are geometrically misaligned with the updated imagery; (ii) some annotations do not correspond to buildings in the images (they are misannotations or the buildings have been destroyed); and (iii) some annotations are missing for buildings in the images (the buildings were never annotated or were built between subsequent image acquisitions). First, we propose a method based on Markov Random Fields (MRF) to align the buildings with their annotations. The method maximizes the correlation between annotations and a building probability map while enforcing that nearby buildings have similar alignment vectors. Second, the annotations with no evidence in the building probability map are removed. Third, we present a method to detect non-annotated buildings with predefined shapes and add their annotations. The proposed methodology shows considerable improvement in the accuracy of the OSM annotations for two regions of Tanzania and Zimbabwe, being more accurate than state-of-the-art baselines.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Closed-form Harmonic Contrast Control with Surface Impedance Coatings for Conductive Objects
The problem of suppressing the scattering from conductive objects is addressed in terms of harmonic contrast reduction. A unique compact closed-form solution for a surface impedance $Z_s(m,kr)$ is found in a straightforward manner and without any approximation, as a function of the harmonic index $m$ (the scattering mode to suppress) and of the frequency regime $kr$ (the product of the wavenumber $k$ and the radius $r$ of the cloaked system). In the quasi-static limit, mantle cloaking is obtained as a particular case for $kr \ll 1$ and $m=0$. In addition, beyond the quasi-static regime, impedance coatings for a selected dominant harmonic wave can be designed with proper dispersive behaviour, resulting in improved reduction levels and harmonic filtering capability.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Numerical methods to prevent pressure oscillations in transcritical flows
The accurate and robust simulation of transcritical real-fluid effects is crucial for many engineering applications, such as fuel injection in internal combustion engines, rocket engines and gas turbines. For example, in diesel engines, the liquid fuel is injected into the ambient gas at a pressure that exceeds its critical value, and the fuel jet will be heated to a supercritical temperature before combustion takes place. This process is often referred to as transcritical injection. The largest thermodynamic gradient in the transcritical regime occurs as the fluid undergoes a liquid-like to a gas-like transition when crossing the pseudo-boiling line (Yang 2000, Oschwald et al. 2006, Banuti 2015). The complex processes during transcritical injection are still not well understood. Therefore, to provide insights into high-pressure combustion systems, accurate and robust numerical simulation tools are required for the characterization of supercritical and transcritical flows.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Shape Convergence for Aggregate Tiles in Conformal Tilings
Given a substitution tiling $T$ of the plane with subdivision operator $\tau$, we study the conformal tilings $\mathcal{T}_n$ associated with $\tau^n T$. We prove that aggregate tiles within $\mathcal{T}_n$ converge in shape as $n\rightarrow \infty$ to their associated Euclidean tiles in $T$.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Performance and sensitivity of vortex coronagraphs on segmented space telescopes
The detection of molecular species in the atmospheres of earth-like exoplanets orbiting nearby stars requires an optical system that suppresses starlight and maximizes the sensitivity to the weak planet signals at small angular separations. Achieving sufficient contrast performance on a segmented aperture space telescope is particularly challenging due to unwanted diffraction within the telescope from amplitude and phase discontinuities in the pupil. Apodized vortex coronagraphs are a promising solution that theoretically meet the performance needs for high contrast imaging with future segmented space telescopes. We investigate the sensitivity of apodized vortex coronagraphs to the expected aberrations, including segment co-phasing errors in piston and tip/tilt as well as other low-order and mid-spatial frequency aberrations. Coronagraph designs and their associated telescope requirements are identified for conceptual HabEx and LUVOIR telescope designs.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Classical Spacetime Structure
I discuss several issues related to "classical" spacetime structure. I review Galilean, Newtonian, and Leibnizian spacetimes, and briefly describe more recent developments. The target audience is undergraduates and early graduate students in philosophy; the presentation avoids mathematical formalism as much as possible.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Fractal curves from prime trigonometric series
We study the convergence of the parameter family of series $$V_{\alpha,\beta}(t)=\sum_{p}p^{-\alpha}\exp(2\pi i p^{\beta}t),\quad \alpha,\beta \in \mathbb{R}_{>0},\; t \in [0,1)$$ defined over prime numbers $p$, and subsequently, their differentiability properties. The visible fractal nature of the graphs as a function of $\alpha,\beta$ is analyzed in terms of Hölder continuity, self similarity and fractal dimension, backed with numerical results. We also discuss the link of this series to random walks and consequently, explore numerically its random properties.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Facets on the convex hull of $d$-dimensional Brownian and Lévy motion
For stationary, homogeneous Markov processes (viz., Lévy processes, including Brownian motion) in dimension $d\geq 3$, we establish an exact formula for the average number of $(d-1)$-dimensional facets that can be defined by $d$ points on the process's path. This formula defines a universality class in that it is independent of the increments' distribution, and it admits a closed form when $d=3$, a case which is of particular interest for applications in biophysics, chemistry and polymer science. We also show that the asymptotic average number of facets behaves as $\langle \mathcal{F}_T^{(d)}\rangle \sim 2\left[\ln \left( T/\Delta t\right)\right]^{d-1}$, where $T$ is the total duration of the motion and $\Delta t$ is the minimum time lapse separating points that define a facet.
labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
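As a quick worked instance of the asymptotic formula above, take $d = 3$ and an illustrative ratio $T/\Delta t = 10^4$:
$$\langle \mathcal{F}_T^{(3)} \rangle \sim 2\left[\ln\left(T/\Delta t\right)\right]^{2} = 2\,(\ln 10^4)^2 \approx 2 \times (9.21)^2 \approx 170 \text{ facets.}$$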
Local systems on complements of arrangements of smooth, complex algebraic hypersurfaces
We consider smooth, complex quasi-projective varieties $U$ which admit a compactification with a boundary which is an arrangement of smooth algebraic hypersurfaces. If the hypersurfaces intersect locally like hyperplanes, and the relative interiors of the hypersurfaces are Stein manifolds, we prove that the cohomology of certain local systems on $U$ vanishes. As an application, we show that complements of linear, toric, and elliptic arrangements are both duality and abelian duality spaces.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Some Open Problems in Random Matrix Theory and the Theory of Integrable Systems. II
We describe a list of open problems in random matrix theory and the theory of integrable systems that was presented at the conference Asymptotics in Integrable Systems, Random Matrices and Random Processes and Universality, Centre de Recherches Mathematiques, Montreal, June 7-11, 2015. We also describe progress that has been made on problems in an earlier list presented by the author on the occasion of his 60th birthday in 2005 (see [Deift P., Contemp. Math., Vol. 458, Amer. Math. Soc., Providence, RI, 2008, 419-430, arXiv:0712.0849]).
labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
A semi-parametric estimation for max-mixture spatial processes
We propose a semi-parametric estimation procedure for the parameters of a max-mixture model and also of a max-stable model (inverse max-stable model), as an alternative to composite likelihood. A good estimation by the proposed estimator requires the dependence measure to detect all dependence structures in the model, especially when dealing with the max-mixture model. We overcome this challenge by using the F-madogram. The semi-parametric estimation is then based on a quasi least squares method, minimizing the squared difference between the theoretical F-madogram and an empirical one. We evaluate the performance of this estimator through a simulation study, which shows that on average the estimation performs well, although in some cases it encounters some difficulties. We apply our estimation procedure to model daily rainfall over East Australia.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
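The quasi least squares step can be sketched as follows, assuming a one-dimensional Smith max-stable model with extremal coefficient $\theta(h) = 2\Phi(h/2\sigma)$ and the standard F-madogram identity $\nu_F = \frac{1}{2}\,\frac{\theta-1}{\theta+1}$; the "empirical" madogram values below are illustrative numbers, not the Australian rainfall data:

```python
# Quasi least-squares fit of a dependence parameter to an F-madogram.
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import norm

lags = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
nu_emp = np.array([0.04, 0.07, 0.11, 0.14, 0.16])    # illustrative values

def nu_theory(sigma, h):
    theta = 2 * norm.cdf(h / (2 * sigma))            # extremal coefficient
    return 0.5 * (theta - 1) / (theta + 1)           # F-madogram identity

fit = least_squares(lambda s: nu_theory(s[0], lags) - nu_emp, x0=[1.0],
                    bounds=(1e-6, np.inf))
print("fitted sigma:", fit.x[0])
```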
Spectroscopic Observation and Analysis of HII regions in M33 with MMT: Temperatures and Oxygen Abundances
The spectra of 413 star-forming (or HII) regions in M33 (NGC 598) were observed using the Hectospec multifiber spectrograph at the 6.5-m Multiple Mirror Telescope (MMT). Using this homogeneous spectral sample, we measured the intensities of emission lines and some physical parameters, such as electron temperatures, electron densities, and metallicities. Oxygen abundances were derived via the direct method (when available) and two empirical strong-line methods, namely O3N2 and N2. At the high-metallicity end, oxygen abundances derived from the O3N2 calibration were higher than those derived from the N2 index, indicating an inconsistency between the O3N2 and N2 calibrations. We present a detailed analysis of the spatial distribution of gas-phase oxygen abundances in M33 and confirm the existence of the axisymmetric global metallicity distribution widely assumed in the literature. Local variations were also observed and subsequently associated with spiral structures, providing evidence of radial migration driven by the arms. Our O/H gradient fitted out to 1.1 $R_{25}$ resulted in slopes of $-0.17\pm0.03$, $-0.19\pm0.01$, and $-0.16\pm0.17$ dex $R_{25}^{-1}$ utilizing abundances from the O3N2 and N2 diagnostics and the direct method, respectively.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Output-only parameter identification of a colored-noise-driven Van der Pol oscillator -- Thermoacoustic instabilities as an example
The problem of output-only parameter identification for nonlinear oscillators forced by colored noise is considered. In this context, it is often assumed that the forcing noise is white, since its actual spectral content is unknown. The impact of this white noise forcing assumption upon parameter identification is quantitatively analyzed. First, a Van der Pol oscillator forced by an Ornstein-Uhlenbeck process is considered. Second, the practical case of thermoacoustic limit cycles in combustion chambers with turbulence-induced forcing is investigated. It is shown that in both cases, the system parameters are accurately identified if time signals are appropriately band-pass filtered around the oscillator eigenfrequency.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Deep learning to achieve clinically applicable segmentation of head and neck anatomy for radiotherapy
Over half a million individuals are diagnosed with head and neck cancer each year worldwide. Radiotherapy is an important curative treatment for this disease, but it requires manually intensive delineation of radiosensitive organs at risk (OARs). This planning process can delay treatment commencement. While auto-segmentation algorithms offer a potentially time-saving solution, the challenges in defining, quantifying and achieving expert performance remain. Adopting a deep learning approach, we demonstrate a 3D U-Net architecture that achieves performance similar to experts in delineating a wide range of head and neck OARs. The model was trained on a dataset of 663 deidentified computed tomography (CT) scans acquired in routine clinical practice and segmented according to consensus OAR definitions. We demonstrate its generalisability through application to an independent test set of 24 CT scans available from The Cancer Imaging Archive collected at multiple international sites previously unseen to the model, each segmented by two independent experts and consisting of 21 OARs commonly segmented in clinical practice. With appropriate validation studies and regulatory approvals, this system could improve the effectiveness of radiotherapy pathways.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Anomaly Detection in Hierarchical Data Streams under Unknown Models
We consider the problem of detecting a few targets among a large number of hierarchical data streams. The data streams are modeled as random processes with unknown and potentially heavy-tailed distributions. The objective is an active inference strategy that determines, sequentially, which data stream to collect samples from in order to minimize the sample complexity under a reliability constraint. We propose an active inference strategy that induces a biased random walk on the tree-structured hierarchy based on confidence bounds of sample statistics. We then establish its order optimality in terms of both the size of the search space (i.e., the number of data streams) and the reliability requirement. The results find applications in hierarchical heavy hitter detection, noisy group testing, and adaptive sampling for active learning, classification, and stochastic root finding.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Intelligent Parameter Tuning in Optimization-based Iterative CT Reconstruction via Deep Reinforcement Learning
A number of image-processing problems can be formulated as optimization problems. The objective function typically contains several terms specifically designed for different purposes. Parameters in front of these terms are used to control the relative weights among them. It is of critical importance to tune these parameters, as the quality of the solution depends on their values. Parameter tuning is a relatively straightforward task for a human, as one can intelligently determine the direction of parameter adjustment based on the solution quality. Yet manual parameter tuning is not only tedious in many cases, but becomes impractical when a number of parameters exist in a problem. Aiming to solve this problem, this paper proposes an approach that employs deep reinforcement learning to train a system that can automatically adjust parameters in a human-like manner. We demonstrate our idea on an example problem of optimization-based iterative CT reconstruction with a pixel-wise total-variation regularization term. We set up a parameter tuning policy network (PTPN), which maps a CT image patch to an output that specifies the direction and amplitude by which the parameter at the patch center is adjusted. We train the PTPN via an end-to-end reinforcement learning procedure. We demonstrate that under the guidance of the trained PTPN for parameter tuning at each pixel, reconstructed CT images attain quality similar to or better than those reconstructed with manually tuned parameters.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Adversarial Examples that Fool Detectors
An adversarial example is an example that has been adjusted to produce a wrong label when presented to a system at test time. To date, adversarial example constructions have been demonstrated for classifiers, but not for detectors. If adversarial examples that could fool a detector exist, they could be used to (for example) maliciously create security hazards on roads populated with smart vehicles. In this paper, we demonstrate a construction that successfully fools two standard detectors, Faster RCNN and YOLO. The existence of such examples is surprising, as attacking a classifier is very different from attacking a detector, and the structure of detectors - which must search for their own bounding box, and which cannot estimate that box very accurately - makes it quite likely that adversarial patterns would be strongly disrupted. We show that our construction produces adversarial examples that generalize well across sequences digitally, even though large perturbations are needed. We also show that our construction yields physical objects that are adversarial.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
FLUX: Progressive State Estimation Based on Zakai-type Distributed Ordinary Differential Equations
We propose a homotopy continuation method called FLUX for approximating complicated probability density functions. It is based on progressive processing for smoothly morphing a given density into the desired one. Distributed ordinary differential equations (DODEs) with an artificial time $\gamma \in [0,1]$ are derived for describing the evolution from the initial density to the desired final density. For a finite-dimensional parametrization, the DODEs are converted to a system of ordinary differential equations (SODEs), which are solved for $\gamma \in [0,1]$ and return the desired result for $\gamma=1$. This includes parametric representations such as Gaussians or Gaussian mixtures and nonparametric setups such as sample sets. In the latter case, we obtain a particle flow between the two densities along the artificial time. FLUX is applied to state estimation in stochastic nonlinear dynamic systems by gradual inclusion of measurement information. The proposed approximation method (1) is fast, (2) can be applied to arbitrary nonlinear systems and is not limited to additive noise, (3) allows for target densities that are only known at certain points, (4) does not require optimization, (5) does not require the solution of partial differential equations, and (6) works with standard procedures for solving SODEs. This manuscript is limited to the one-dimensional case and a fixed number of parameters during the progression. Future extensions will include consideration of higher dimensions and on-the-fly adaptation of the number of parameters.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
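One concrete homotopy, for intuition only: the geometric bridge $p_\gamma \propto p_0^{1-\gamma} p_1^{\gamma}$, which stays Gaussian when the endpoints are Gaussian. This is a common progressive-processing choice and is not claimed to be the paper's exact DODEs:

```python
# Geometric bridge between two Gaussians along the artificial time gamma.
import numpy as np

def gaussian_bridge(mu0, var0, mu1, var1, gammas):
    lam0, lam1 = 1 / var0, 1 / var1            # precisions of the endpoints
    for g in gammas:
        lam = (1 - g) * lam0 + g * lam1        # precision interpolates linearly
        mu = ((1 - g) * lam0 * mu0 + g * lam1 * mu1) / lam
        yield g, mu, 1 / lam

for g, mu, var in gaussian_bridge(0.0, 1.0, 4.0, 0.25, np.linspace(0, 1, 5)):
    print(f"gamma={g:.2f}  mean={mu:.3f}  var={var:.3f}")
```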
Direct and mediating influences of user-developer perception gaps in requirements understanding on user participation
User participation is considered an effective way to conduct requirements engineering, but user-developer perception gaps in requirements understanding occur frequently. Since user participation in practice is not as active as we expect and the requirements perception gap has been recognized as a risk that negatively affects projects, exploring whether user-developer perception gaps in requirements understanding will hinder user participation is worthwhile. This will help develop a greater comprehension of the intertwined relationship between user participation and perception gap, a topic that has not yet been extensively examined. This study investigates the direct and mediating influences of user-developer requirements perception gaps on user participation by integrating requirements uncertainty and top management support. Survey data collected from 140 subjects were examined and analyzed using structural equation modeling. The results indicate that perception gaps have a direct negative effect on user participation and negate completely the positive effect of top management support on user participation. Additionally, perception gaps do not have a mediating effect between requirements uncertainty and user participation because requirements uncertainty does not significantly and directly affect user participation, but requirements uncertainty indirectly influences user participation due to its significant direct effect on perception gaps. The theoretical and practical implications are discussed, and limitations and possible future research areas are identified.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Equivariant Schrödinger maps from two dimensional hyperbolic space
In this article, we consider the equivariant Schrödinger map from $\Bbb H^2$ to $\Bbb S^2$ which converges to the north pole of $\Bbb S^2$ at the origin and at spatial infinity of the hyperbolic space. If the energy of the data is less than $4\pi$, we show the local existence of the Schrödinger map. Furthermore, if the energy of the data is sufficiently small, we prove that the solutions are global in time.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A fast numerical method for ideal fluid flow in domains with multiple stirrers
A collection of arbitrarily-shaped solid objects, each moving at a constant speed, can be used to mix or stir ideal fluid, and can give rise to interesting flow patterns. Assuming these systems of fluid stirrers are two-dimensional, the mathematical problem of resolving the flow field - given a particular distribution of any finite number of stirrers of specified shape and speed - can be formulated as a Riemann-Hilbert problem. We show that this Riemann-Hilbert problem can be solved numerically using a fast and accurate algorithm for any finite number of stirrers based around a boundary integral equation with the generalized Neumann kernel. Various systems of fluid stirrers are considered, and our numerical scheme is shown to handle highly multiply connected domains (i.e. systems of many fluid stirrers) with minimal computational expense.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A Short Survey on Probabilistic Reinforcement Learning
A reinforcement learning agent tries to maximize its cumulative payoff by interacting with an unknown environment. It is important for the agent to explore suboptimal actions as well as to pick actions with the highest known rewards. Yet, in sensitive domains, collecting more data through exploration is not always possible, and it is important to find a policy with a certain performance guarantee. In this paper, we present a brief survey of methods available in the literature for balancing the exploration-exploitation trade-off and for computing robust solutions from fixed samples in reinforcement learning.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
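A tiny illustration of the exploration-exploitation trade-off the survey discusses: an epsilon-greedy agent on a Bernoulli multi-armed bandit (the arm probabilities and epsilon are arbitrary example values):

```python
# Epsilon-greedy exploration on a 3-armed Bernoulli bandit.
import numpy as np

rng = np.random.default_rng(0)
true_p = np.array([0.2, 0.5, 0.75])      # unknown to the agent
counts, values = np.zeros(3), np.zeros(3)
eps, payoff = 0.1, 0.0

for t in range(5000):
    if rng.random() < eps:               # explore a random arm
        a = int(rng.integers(3))
    else:                                # exploit the best estimate
        a = int(values.argmax())
    r = float(rng.random() < true_p[a])
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]   # incremental mean update
    payoff += r

print("estimates:", values.round(3), " cumulative payoff:", payoff)
```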
Modification of low-temperature silicon dioxide films under the influence of technology factors
The structure, composition and electrophysical characteristics of low-temperature silicon dioxide films under the influence of various technology factors, such as ion implantation, laser irradiation, thermal and photonic annealing, have been studied. Silicon dioxide films were obtained by monosilane oxidation using a plasma chemical method, reactive cathode sputtering, and tetraethoxysilane pyrolysis. Germanium, silicon, gallium arsenide and gallium nitride were used as substrates. The structure and composition of the dielectric films were analyzed by infrared transmission spectroscopy and frustrated internal reflectance spectroscopy. The modification efficiency of low-temperature silicon dioxide films was analyzed depending on the substrate type, the structure and properties of the films, their moisture permeability, the dielectric deposition technique, the type and dose of implanted ions, and the temperature and kind of annealing.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
GelSlim: A High-Resolution, Compact, Robust, and Calibrated Tactile-sensing Finger
This work describes the development of a high-resolution tactile-sensing finger for robot grasping. This finger, inspired by previous GelSight sensing techniques, features an integration that is slimmer, more robust, and with more homogeneous output than previous vision-based tactile sensors. To achieve a compact integration, we redesign the optical path from illumination source to camera by combining light guides and an arrangement of mirror reflections. We parameterize the optical path with geometric design variables and describe the tradeoffs between the finger thickness, the depth of field of the camera, and the size of the tactile sensing area. The sensor sustains the wear from continuous use -- and abuse -- in grasping tasks by combining tougher materials for the compliant soft gel, a textured fabric skin, a structurally rigid body, and a calibration process that maintains homogeneous illumination and contrast of the tactile images during use. Finally, we evaluate the sensor's durability along four metrics that track the signal quality during more than 3000 grasping experiments.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Muon g-2 experiment at Fermilab
The upcoming Fermilab E989 experiment will measure the muon anomalous magnetic moment $a_{\mu}$ . This measurement is motivated by the previous measurement performed in 2001 by the BNL E821 experiment that reported a 3-4 standard deviation discrepancy between the measured value and the Standard Model prediction. The new measurement at Fermilab aims to improve the precision by a factor of four reducing the total uncertainty from 540 parts per billion (BNL E821) to 140 parts per billion (Fermilab E989). This paper gives the status of the experiment.
0
1
0
0
0
0
Ringel duality as an instance of Koszul duality
In their previous work, S. Koenig, S. Ovsienko and the second author showed that every quasi-hereditary algebra is Morita equivalent to the right algebra, i.e. the opposite algebra of the left dual, of a coring. Let $A$ be an associative algebra and $V$ an $A$-coring whose right algebra $R$ is quasi-hereditary. In this paper, we give a combinatorial description of an associative algebra $B$ and a $B$-coring $W$ whose right algebra is the Ringel dual of $R$. We apply our results in small examples to obtain restrictions on the $A_\infty$-structure of the $\textrm{Ext}$-algebra of standard modules over a class of quasi-hereditary algebras related to birational morphisms of smooth surfaces.
0
0
1
0
0
0
Bypass Fraud Detection: Artificial Intelligence Approach
Telecom companies are severely damaged by bypass fraud or SIM boxing. However, there is a shortage of published research tackling this problem. The traditional method of Test Call Generation is easily circumvented by fraudsters, so more sophisticated methods are needed. In this work, we develop intelligent algorithms that mine a huge amount of mobile operator data and detect the SIMs that are used to bypass international calls. This method makes it hard for fraudsters to generate revenue and hinders their work. Moreover, by reducing fraudulent activities, quality of service can be increased, as well as customer satisfaction. Our technique has been evaluated and tested on real-world mobile operator data, and proved to be very efficient.
1
0
0
0
0
0
Absence of cyclotron resonance in the anomalous metallic phase in InO$_x$
It is observed that many thin superconducting films with a not too high disorder level (generally $R_N/\Box \leq 2000\,\Omega$) placed in a magnetic field show an anomalous metallic phase where the resistance is low but still finite as temperature goes to zero. Here we report, in weakly disordered amorphous InO$_x$ thin films, that this "Bose metal" phase possesses no cyclotron resonance and hence non-Drude electrodynamics. Its microwave dynamical conductivity shows signatures of remaining short-range superconducting correlations and strong phase fluctuations through the whole anomalous regime. The absence of a finite frequency resonant mode can be associated with a vanishing downstream component of the vortex current parallel to the supercurrent and an emergent particle-hole symmetry of this anomalous metal, which establishes its non-Fermi liquid character.
0
1
0
0
0
0
Scenario Reduction Revisited: Fundamental Limits and Guarantees
The goal of scenario reduction is to approximate a given discrete distribution with another discrete distribution that has fewer atoms. We distinguish continuous scenario reduction, where the new atoms may be chosen freely, and discrete scenario reduction, where the new atoms must be chosen from among the existing ones. Using the Wasserstein distance as a measure of proximity between distributions, we identify those $n$-point distributions on the unit ball that are least susceptible to scenario reduction, i.e., that have maximum Wasserstein distance to their closest $m$-point distributions for some prescribed $m<n$. We also provide sharp bounds on the added benefit of continuous over discrete scenario reduction. Finally, to the best of our knowledge, we propose the first polynomial-time constant-factor approximations for both discrete and continuous scenario reduction as well as the first exact exponential-time algorithms for continuous scenario reduction.
0
0
1
0
0
0
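A small sketch of the discrete scenario reduction setup, assuming one-dimensional atoms and the type-1 Wasserstein distance, with all mass of a dropped atom transported to its nearest kept atom. The greedy selection rule below is a standard heuristic used only to illustrate the problem; it is not the constant-factor approximation proposed in the paper.

```python
import numpy as np

def greedy_discrete_reduction(atoms, probs, m):
    """Greedily keep m of the n atoms; each dropped atom's probability
    mass is reassigned to its nearest kept atom (type-1 Wasserstein cost)."""
    n = len(atoms)
    dist = np.abs(atoms[:, None] - atoms[None, :])   # pairwise distances (1-D)
    kept = []
    for _ in range(m):
        best, best_cost = None, np.inf
        for j in range(n):
            if j in kept:
                continue
            trial = kept + [j]
            # cost of transporting every atom to its closest kept atom
            cost = float(np.sum(probs * dist[:, trial].min(axis=1)))
            if cost < best_cost:
                best, best_cost = j, cost
        kept.append(best)
    return kept, best_cost

atoms = np.array([0.0, 0.1, 0.5, 0.9, 1.0])
probs = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
print(greedy_discrete_reduction(atoms, probs, m=2))
```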
Testing the science/technology relationship by analysis of patent citations of scientific papers after decomposition of both science and technology
The relationship of scientific knowledge development to technological development is widely recognized as one of the most important and complex aspects of technological evolution. This paper adds to our understanding of the relationship through the use of a more rigorous structure for differentiating among technologies, based upon technological domains (defined as consisting of the artifacts over time that fulfill a specific generic function using a specific body of technical knowledge).
1
1
0
0
0
0
The Informativeness of $k$-Means and Dimensionality Reduction for Learning Mixture Models
The learning of mixture models can be viewed as a clustering problem. Indeed, given data samples independently generated from a mixture of distributions, we often would like to find the correct target clustering of the samples according to which component distribution they were generated from. For a clustering problem, practitioners often choose to use the simple k-means algorithm. k-means attempts to find an optimal clustering that minimizes the sum-of-squared distance between each point and its cluster center. In this paper, we provide sufficient conditions for the closeness of any optimal clustering and the correct target clustering, assuming that the data samples are generated from a mixture of log-concave distributions. Moreover, we show that under similar or even weaker conditions on the mixture model, any optimal clustering for the samples with reduced dimensionality is also close to the correct target clustering. These results provide intuition for the informativeness of k-means (with and without dimensionality reduction) as an algorithm for learning mixture models. We verify the correctness of our theorems using numerical experiments and demonstrate that using datasets with reduced dimensionality yields significant speed-ups in the time required to perform clustering.
1
0
0
1
0
0
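A quick numerical illustration of the claim, assuming a two-component spherical Gaussian mixture (a special case of a log-concave mixture): k-means applied to the raw samples and to a low-dimensional PCA projection should both recover the target clustering, while the reduced data clusters much faster. All sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Two well-separated Gaussian components in 50 dimensions.
mu = rng.normal(size=50)
X = np.vstack([rng.normal(+mu, 1.0, size=(500, 50)),
               rng.normal(-mu, 1.0, size=(500, 50))])
y = np.array([0] * 500 + [1] * 500)    # the correct target clustering

# k-means on the raw data vs. on a 2-D PCA projection.
labels_full = KMeans(n_clusters=2, n_init=10).fit_predict(X)
X_red = PCA(n_components=2).fit_transform(X)
labels_red = KMeans(n_clusters=2, n_init=10).fit_predict(X_red)

def agreement(a, b):                   # clustering accuracy up to relabeling
    return max(np.mean(a == b), np.mean(a != b))

print(agreement(labels_full, y), agreement(labels_red, y))
```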
A Mention-Ranking Model for Abstract Anaphora Resolution
Resolving abstract anaphora is an important, but difficult task for text understanding. Yet, with recent advances in representation learning this task becomes a more tangible aim. A central property of abstract anaphora is that it establishes a relation between the anaphor embedded in the anaphoric sentence and its (typically non-nominal) antecedent. We propose a mention-ranking model that learns how abstract anaphors relate to their antecedents with an LSTM-Siamese Net. We overcome the lack of training data by generating artificial anaphoric sentence--antecedent pairs. Our model outperforms state-of-the-art results on shell noun resolution. We also report first benchmark results on an abstract anaphora subset of the ARRAU corpus. This corpus presents a greater challenge due to a mixture of nominal and pronominal anaphors and a greater range of confounders. We found model variants that outperform the baselines for nominal anaphors, without training on individual anaphor data, but still lag behind for pronominal anaphors. Our model selects syntactically plausible candidates and -- if disregarding syntax -- discriminates candidates using deeper features.
1
0
0
1
0
0
Harmonic density interpolation methods for high-order evaluation of Laplace layer potentials in 2D and 3D
We present an effective harmonic density interpolation method for the numerical evaluation of singular and nearly singular Laplace boundary integral operators and layer potentials in two and three spatial dimensions. The method relies on the use of Green's third identity and local Taylor-like interpolations of density functions in terms of harmonic polynomials. The proposed technique effectively regularizes the singularities present in boundary integral operators and layer potentials, and recasts the latter in terms of integrands that are bounded or even more regular, depending on the order of the density interpolation. The resulting boundary integrals can then be easily, accurately, and inexpensively evaluated by means of standard quadrature rules. A variety of numerical examples demonstrate the effectiveness of the technique when used in conjunction with the classical trapezoidal rule (to integrate over smooth curves) in two dimensions, and with a Chebyshev-type quadrature rule (to integrate over surfaces given as unions of non-overlapping quadrilateral patches) in three dimensions.
0
1
0
0
0
0
Stochastic Low-Rank Bandits
Many problems in computer vision and recommender systems involve low-rank matrices. In this work, we study the problem of finding the maximum entry of a stochastic low-rank matrix from sequential observations. At each step, a learning agent chooses pairs of row and column arms, and receives the noisy product of their latent values as a reward. The main challenge is that the latent values are unobserved. We identify a class of non-negative matrices whose maximum entry can be found statistically efficiently and propose an algorithm for finding them, which we call LowRankElim. We derive an $O((K + L)\,\mathrm{poly}(d)\,\Delta^{-1} \log n)$ upper bound on its $n$-step regret, where $K$ is the number of rows, $L$ is the number of columns, $d$ is the rank of the matrix, and $\Delta$ is the minimum gap. The bound depends on other problem-specific constants that clearly do not depend on $KL$. To the best of our knowledge, this is the first such result in the literature.
1
0
0
1
0
0
Temporal resolution of a pre-maximum halt in a Classical Nova: V5589 Sgr observed with STEREO HI-1B
Classical novae show a rapid rise in optical brightness over a few hours. Until recently the rise phase, particularly the phenomenon of a pre-maximum halt, was observed only sporadically. Solar observation satellites monitoring Coronal Mass Ejections enable us to observe the pre-maximum phase with unprecedented temporal resolution. We present observations of V5589 Sgr with STEREO HI-1B at a cadence of 40 min, the highest to date. We temporally resolve a pre-maximum halt for the first time, with two examples each rising over 40 min then declining within 80 min. Comparison with a grid of outburst models suggests this double peak, and the overall rise timescale, are consistent with a white dwarf mass, central temperature and accretion rate close to 1.0 solar mass, 5x10^7 K and 10^-10 solar masses per year, respectively. The modelling formally predicts mass loss onset at JD 2456038.2391+/-0.0139, 12 hrs before optical maximum. The model assumes a main-sequence donor, whereas observational evidence points to a subgiant companion, meaning the accretion rate is under-estimated. Post-maximum we see erratic variations commonly associated with much slower novae. Estimating the decline rate is difficult, but we place the time to decline two magnitudes at 2.1 < t_2(days) < 3.9, making V5589 Sgr a "very fast" nova. The brightest point defines "day 0" as JD 2456038.8224+/-0.0139, although at this high cadence the meaning of the observed maximum becomes difficult to define. We suggest that such erratic variability normally goes undetected in faster novae due to the low cadence of typical observations, implying the erratic behaviour is not necessarily related to the rate of decline.
0
1
0
0
0
0
The Thermophysical Properties of the Bagnold Dunes, Mars: Ground-truthing Orbital Data
In this work, we compare the thermophysical properties and particle sizes derived from the Mars Science Laboratory (MSL) rover's Ground Temperature Sensor (GTS) of the Bagnold dunes, specifically Namib dune, to those derived orbitally from the Thermal Emission Imaging System (THEMIS), ultimately linking these measurements to ground-truth particle sizes determined from Mars Hand Lens Imager (MAHLI) images. In general, we find that all three datasets report consistent particle sizes for the Bagnold dunes (~110-350 microns), within measurement and model uncertainties, indicating that particle sizes of homogeneous materials determined from orbit are reliable. Furthermore, we examine the effects of two physical characteristics that could influence the modeled thermal inertia and particle sizes: 1) fine-scale (cm-m scale) ripples, and 2) thin layering of indurated/armored materials. To first order, we find that small-scale ripples and thin (approximately centimeter scale) layers do not significantly affect the determination of bulk thermal inertia from orbital thermal data determined from a single nighttime temperature. Modeling of a layer of coarse or indurated material reveals that a thin layer (< ~5 mm; similar to what was observed by the Curiosity rover) would not significantly change the observed thermal properties of the surface and would be dominated by the properties of the underlying material. Thermal inertia and grain sizes of relatively homogeneous materials derived from nighttime orbital data should be considered reliable, as long as there are no significant sub-pixel anisothermality effects (e.g. lateral mixing of multiple thermophysically distinct materials).
0
1
0
0
0
0
Entrywise Eigenvector Analysis of Random Matrices with Low Expected Rank
Recovering low-rank structures via eigenvector perturbation analysis is a common problem in statistical machine learning, arising for example in factor analysis, community detection, ranking, and matrix completion. While a large variety of results provide tight bounds on the average errors between empirical and population statistics of eigenvectors, fewer results are tight for entrywise analyses, which are critical for a number of problems such as community detection and ranking. This paper investigates the entrywise perturbation analysis for a large class of random matrices whose expectations are low-rank, including community detection, synchronization ($\mathbb{Z}_2$-spiked Wigner model) and matrix completion models. Denoting by $\{u_k\}$, respectively $\{u_k^*\}$, the eigenvectors of a random matrix $A$, respectively $\mathbb{E} A$, the paper characterizes cases for which $$u_k \approx \frac{A u_k^*}{\lambda_k^*}$$ serves as a first-order approximation under the $\ell_\infty$ norm. The fact that the approximation is both tight and linear in the random matrix $A$ allows for sharp comparisons of $u_k$ and $u_k^*$. In particular, it allows one to compare the signs of $u_k$ and $u_k^*$ even when $\| u_k - u_k^*\|_{\infty}$ is large, which in turn allows one to settle the conjecture in Abbe et al. (2016) that the spectral algorithm achieves exact recovery in the stochastic block model without any trimming or cleaning steps. The results are further extended to the perturbation of eigenspaces, providing new bounds for $\ell_\infty$-type errors in noisy matrix completion.
0
0
1
1
0
0
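The displayed first-order approximation can be checked numerically. The sketch below, for an illustrative rank-one spiked Wigner-type model (a stand-in for the $\mathbb{Z}_2$-synchronization setting), compares the entrywise error of $A u^*/\lambda^*$ against the raw entrywise gap between $u$ and $u^*$; the size and signal strength are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
u_star = np.sign(rng.standard_normal(n)) / np.sqrt(n)   # Z2-type signal
lam_star = 20.0
noise = rng.standard_normal((n, n))
W = (noise + noise.T) / np.sqrt(2 * n)                   # Wigner noise
A = lam_star * np.outer(u_star, u_star) + W              # E[A] has rank one

vals, vecs = np.linalg.eigh(A)
u = vecs[:, -1]                                          # top eigenvector
u *= np.sign(u @ u_star)                                 # fix the sign
approx = A @ u_star / lam_star                           # first-order formula

print(np.max(np.abs(u - approx)))   # entrywise error of the approximation
print(np.max(np.abs(u - u_star)))   # raw entrywise gap, typically larger
```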
Algorithmic Trading with Fitted Q Iteration and Heston Model
We present the use of fitted Q iteration in algorithmic trading. We show that fitted Q iteration helps alleviate the dimensionality problem that the basic Q-learning algorithm faces in application to trading. Furthermore, we introduce a procedure, including model fitting and data simulation, to enrich the training data, as the lack of data is often a problem in realistic applications. We test our method both in a simulated environment that permits arbitrage opportunities and in a real-world environment, using prices of 450 stocks. In the former environment, the method performs well, implying that our method works in theory. To perform well in the real-world environment, the trained agents might require more training (iterations) and more meaningful variables with predictive value.
0
0
0
0
0
1
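A minimal sketch of fitted Q iteration on a batch of stored transitions, using a tree-ensemble regressor as the function approximator. The trading-specific interface below (states as market features, actions as positions in {-1, 0, 1}, rewards as a toy P&L signal) is an illustrative assumption, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(S, A, R, S_next, actions, n_iters=20, gamma=0.99):
    """Iteratively refit Q on targets r + gamma * max_a' Q(s', a')."""
    X = np.column_stack([S, A])
    Q = None
    for _ in range(n_iters):
        if Q is None:
            y = R                                  # Q_0 target: immediate reward
        else:                                      # max over the finite action set
            q_next = np.max(np.column_stack(
                [Q.predict(np.column_stack([S_next, np.full(len(S_next), a)]))
                 for a in actions]), axis=1)
            y = R + gamma * q_next
        Q = ExtraTreesRegressor(n_estimators=50).fit(X, y)
    return Q

# Synthetic batch of transitions for illustration only.
rng = np.random.default_rng(0)
N, d = 2000, 3
S = rng.standard_normal((N, d))
A = rng.choice([-1, 0, 1], size=N)                 # short / flat / long
R = A * S[:, 0] + 0.1 * rng.standard_normal(N)     # toy P&L signal
S_next = rng.standard_normal((N, d))
Q = fitted_q_iteration(S, A, R, S_next, actions=[-1, 0, 1])
print(Q.predict(np.column_stack([S[:5], A[:5]])))
```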
Superrigidity of actions on finite rank median spaces
Finite rank median spaces are a simultaneous generalisation of finite dimensional CAT(0) cube complexes and real trees. If $\Gamma$ is an irreducible lattice in a product of rank one simple Lie groups, we show that every action of $\Gamma$ on a complete, finite rank median space has a global fixed point. This is in sharp contrast with the behaviour of actions on infinite rank median spaces. The fixed point property is obtained as corollary to a superrigidity result; the latter holds for irreducible lattices in arbitrary products of compactly generated groups. In previous work, we introduced "Roller compactifications" of median spaces; these generalise a well-known construction in the case of cube complexes. We provide a reduced $1$-cohomology class that detects group actions with a finite orbit in the Roller compactification. Even for CAT(0) cube complexes, only second bounded cohomology classes were known with this property, due to Chatterji-Fernós-Iozzi. As a corollary, we observe that, in Gromov's density model, random groups at low density do not have Shalom's property $H_{FD}$.
0
0
1
0
0
0
Detecting and Explaining Causes From Text For a Time Series Event
Explaining underlying causes or effects of events is a challenging but valuable task. We define a novel problem of generating explanations of a time series event by (1) searching for cause and effect relationships of the time series with textual data and (2) constructing a connecting chain between them to generate an explanation. To detect causal features from text, we propose a novel method based on the Granger causality of time series between features extracted from text, such as N-grams, topics, sentiments, and their composition. The generation of the sequence of causal entities requires a commonsense causative knowledge base with efficient reasoning. To ensure good interpretability and appropriate lexical usage, we combine symbolic and neural representations, using a neural reasoning algorithm trained on commonsense causal tuples to predict the next cause step. Our quantitative and human analyses show empirical evidence that our method successfully extracts meaningful causality relationships between time series with textual features and generates appropriate explanations between them.
1
0
0
0
0
0
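The causal-feature detection step rests on pairwise Granger tests between the target time series and features extracted from text. A minimal sketch with a hypothetical textual feature (say, a daily n-gram frequency) that leads the target by one step:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
T = 300
# Hypothetical textual feature that leads the target series by one step.
text_feature = rng.standard_normal(T)
series = 0.8 * np.roll(text_feature, 1) + 0.2 * rng.standard_normal(T)

# Null hypothesis: the second column does NOT Granger-cause the first.
data = np.column_stack([series, text_feature])
res = grangercausalitytests(data, maxlag=3, verbose=False)
# p-values of the F-test at each lag; small values reject the null
print({lag: round(r[0]["ssr_ftest"][1], 4) for lag, r in res.items()})
```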
Mailbox Types for Unordered Interactions
We propose a type system for reasoning on protocol conformance and deadlock freedom in networks of processes that communicate through unordered mailboxes. We model these networks in the mailbox calculus, a mild extension of the asynchronous $\pi$-calculus with first-class mailboxes and selective input. The calculus subsumes the actor model and allows us to analyze networks with dynamic topologies and a varying number of processes, possibly mixing different concurrency abstractions. Well-typed processes are deadlock free and never fail because of unexpected messages. For a non-trivial class of them, junk freedom is also guaranteed. We illustrate the expressiveness of the calculus and of the type system by encoding instances of non-uniform, concurrent objects, binary sessions extended with joins and forks, and some known actor benchmarks.
1
0
0
0
0
0
The Complexity of Counting Surjective Homomorphisms and Compactions
A homomorphism from a graph G to a graph H is a function from the vertices of G to the vertices of H that preserves edges. A homomorphism is surjective if it uses all of the vertices of H and it is a compaction if it uses all of the vertices of H and all of the non-loop edges of H. Hell and Nesetril gave a complete characterisation of the complexity of deciding whether there is a homomorphism from an input graph G to a fixed graph H. A complete characterisation is not known for surjective homomorphisms or for compactions, though there are many interesting results. Dyer and Greenhill gave a complete characterisation of the complexity of counting homomorphisms from an input graph G to a fixed graph H. In this paper, we give a complete characterisation of the complexity of counting surjective homomorphisms from an input graph G to a fixed graph H and we also give a complete characterisation of the complexity of counting compactions from an input graph G to a fixed graph H. In an addendum we use our characterisations to point out a dichotomy for the complexity of the respective approximate counting problems (in the connected case).
1
0
0
0
0
0
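For intuition about the three counting problems, here is a brute-force counter over all $|V(H)|^{|V(G)|}$ vertex maps. It is exponential in the size of G and only illustrates the definitions; the paper's dichotomies concern the complexity for a fixed target graph H with G as input.

```python
from itertools import product

def count_homs(G, H, kind="all"):
    """Brute-force count of homomorphisms from G to H.
    G, H are (vertex list, set of edges); an edge is a frozenset of one
    (loop) or two vertices.  kind: 'all', 'surjective', or 'compaction'."""
    VG, EG = G
    VH, EH = H
    non_loops = {e for e in EH if len(e) == 2}
    total = 0
    for f in product(VH, repeat=len(VG)):
        phi = dict(zip(VG, f))
        images, ok = set(), True
        for e in EG:
            u, v = (list(e) * 2)[:2]               # handles loops uniformly
            img = frozenset({phi[u], phi[v]})
            if img not in EH:                      # edges must be preserved
                ok = False
                break
            images.add(img)
        if not ok:
            continue
        if kind in ("surjective", "compaction") and set(f) != set(VH):
            continue                               # must use all vertices of H
        if kind == "compaction" and not non_loops <= images:
            continue                               # must use all non-loop edges
        total += 1
    return total

# Homomorphisms from a 4-cycle to a single edge: the proper 2-colourings.
C4 = ([0, 1, 2, 3], {frozenset({0, 1}), frozenset({1, 2}),
                     frozenset({2, 3}), frozenset({3, 0})})
K2 = ([0, 1], {frozenset({0, 1})})
print(count_homs(C4, K2), count_homs(C4, K2, "surjective"))  # 2 2
```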
A recursive algorithm and a series expansion related to the homogeneous Boltzmann equation for hard potentials with angular cutoff
We consider the spatially homogeneous Boltzmann equation for hard potentials with angular cutoff. This equation has a unique conservative weak solution $(f_t)_{t\geq 0}$, once the initial condition $f_0$ with finite mass and energy is fixed. Taking advantage of the energy conservation, we propose a recursive algorithm that produces a $(0,\infty)\times\mathbb{R}^3$-valued random variable $(M_t,V_t)$ such that $E[M_t {\bf 1}_{\{V_t \in \cdot\}}]=f_t$. We also write down a series expansion of $f_t$. Although both the algorithm and the series expansion might be theoretically interesting in that they explicitly express $f_t$ in terms of $f_0$, we believe that the algorithm is not very efficient in practice and that the series expansion is rather intractable. This is a tedious extension to non-Maxwellian molecules of Wild's sum and of its interpretation by McKean.
0
0
1
0
0
0
Periodic auxetics: Structure and design
Materials science has adopted the term of auxetic behavior for structural deformations where stretching in some direction entails lateral widening, rather than lateral shrinking. Most studies, in the last three decades, have explored repetitive or cellular structures and used the notion of negative Poisson's ratio as the hallmark of auxetic behavior. However, no general auxetic principle has been established from this perspective. In the present paper, we show that a purely geometric approach to periodic auxetics is apt to identify essential characteristics of frameworks with auxetic deformations and can generate a systematic and endless series of periodic auxetic designs. The critical features refer to convexity properties expressed through families of homothetic ellipsoids.
0
0
1
0
0
0
Asymptotics of maximum likelihood estimation for stable law with $(M)$ parameterization
Asymptotics of maximum likelihood estimation for the $\alpha$-stable law are analytically investigated with the $(M)$ parameterization. Consistency and asymptotic normality are shown on the interior of the whole parameter space. Although these asymptotics have been proved with the $(B)$ parameterization, there are several gaps between the two. In particular, in the latter the density, and hence the scores and their derivatives, are discontinuous at $\alpha=1$ for $\beta\neq 0$, so the usual asymptotics are impossible, whereas in the $(M)$ form these quantities are shown to be continuous on the interior of the parameter space. We fill these gaps and provide a convenient theory for practitioners. We numerically approximate the Fisher information matrix around the Cauchy law $(\alpha,\beta)=(1,0)$. The results exhibit continuity at $\alpha=1,\,\beta\neq 0$, and this supports the accuracy of our calculations.
0
0
1
1
0
0
Enabling Massive Deep Neural Networks with the GraphBLAS
Deep Neural Networks (DNNs) have emerged as a core tool for machine learning. The computations performed during DNN training and inference are dominated by operations on the weight matrices describing the DNN. As DNNs incorporate more stages and more nodes per stage, these weight matrices may be required to be sparse because of memory limitations. The GraphBLAS.org math library standard was developed to provide high-performance manipulation of sparse weight matrices and input/output vectors. For sufficiently sparse matrices, a sparse matrix library requires significantly less memory than the corresponding dense matrix implementation. This paper provides a brief description of the mathematics underlying the GraphBLAS. In addition, the equations of a typical DNN are rewritten in a form designed to use the GraphBLAS. An implementation of the DNN is given using a preliminary GraphBLAS C library. The performance of the GraphBLAS implementation is measured relative to a standard dense linear algebra library implementation. For various sizes of DNN weight matrices, it is shown that the GraphBLAS sparse implementation outperforms a BLAS dense implementation as the weight matrix becomes sparser.
1
0
0
0
0
0
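In this formulation, inference is a chain of sparse matrix products interleaved with the nonlinearity. The sketch below uses scipy.sparse as a stand-in for a GraphBLAS library; the sizes, density and zero biases are illustrative.

```python
import numpy as np
import scipy.sparse as sp

# Inference through a sparse ReLU network: each layer is a sparse
# weight matrix, so a forward pass is a chain of sparse products.
rng = np.random.default_rng(0)
layers = [sp.random(256, 256, density=0.05, random_state=i, format="csr")
          for i in range(4)]                  # sparse weight matrices
biases = [np.zeros(256) for _ in layers]

def forward(y, weights, biases):
    for W, b in zip(weights, biases):
        y = (W.T @ y.T).T                     # sparse-dense product, y stays dense
        y = np.maximum(y + b, 0)              # ReLU nonlinearity
    return y

x = rng.random((8, 256))                      # batch of 8 input vectors
print(forward(x, layers, biases).shape)       # (8, 256)
```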
Univariate and Bivariate Geometric Discrete Generalized Exponential Distributions
Marshall and Olkin (1997, Biometrika, 84, 641 - 652) introduced a very powerful method of adding an additional parameter to a class of continuous distribution functions, which brings more flexibility to the model. They demonstrated their method for the exponential and Weibull classes. In the same paper they briefly indicated its bivariate extension. The main aim of this paper is to apply the same method, for the first time, to the class of discrete generalized exponential distributions, in both the univariate and bivariate cases. We investigate several properties of the proposed univariate and bivariate classes. The univariate class has three parameters, whereas the bivariate class has five parameters. It is observed that, depending on the parameter values, the univariate class can be both zero inflated and heavy tailed. We propose to use the EM algorithm to estimate the unknown parameters. Small simulation experiments have been performed to assess the effectiveness of the proposed EM algorithm, and a bivariate data set has been analyzed; it is observed that the proposed models and the EM algorithm work quite well in practice.
0
0
0
1
0
0
What Can Machine Learning Teach Us about Communications?
Rapid improvements in machine learning over the past decade are beginning to have far-reaching effects. For communications, engineers with limited domain expertise can now use off-the-shelf learning packages to design high-performance systems based on simulations. Prior to the current revolution in machine learning, the majority of communication engineers were quite aware that system parameters (such as filter coefficients) could be learned using stochastic gradient descent. It was not at all clear, however, that more complicated parts of the system architecture could be learned as well. In this paper, we discuss the application of machine-learning techniques to two communications problems and focus on what can be learned from the resulting systems. We were pleasantly surprised that the observed gains in one example have a simple explanation that only became clear in hindsight. In essence, deep learning discovered a simple and effective strategy that had not been considered earlier.
1
0
0
1
0
0
Global stability of a network-based SIRS epidemic model with nonmonotone incidence rate
This paper studies the dynamics of a network-based SIRS epidemic model with vaccination and a nonmonotone incidence rate. This type of nonlinear incidence can be used to describe the psychological or inhibitory effect of the behavioral change of susceptible individuals when the number of infective individuals on heterogeneous networks gets larger. Using analytical methods, the epidemic threshold $R_0$ is obtained. When $R_0$ is less than one, we prove the disease-free equilibrium is globally asymptotically stable and the disease dies out, while when $R_0$ is greater than one, there exists a unique endemic equilibrium. By constructing a suitable Lyapunov function, we also prove the endemic equilibrium is globally asymptotically stable if the inhibitory factor $\alpha$ is sufficiently large. Numerical experiments are given to support the theoretical results. It is shown both theoretically and numerically that a larger $\alpha$ can accelerate the extinction of the disease and reduce the level of disease.
0
0
0
0
1
0
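A sketch of the model's key ingredient, the nonmonotone incidence $kSI/(1+\alpha I^2)$, in a mean-field simplification with vaccination and waning immunity; the paper's network version replaces $S$ and $I$ by per-degree-class variables, and all parameter values below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

k, alpha, gamma, delta, v = 0.8, 10.0, 0.2, 0.05, 0.02

def sirs(t, y):
    S, I, R = y
    new_inf = k * S * I / (1 + alpha * I**2)   # inhibitory incidence
    dS = -new_inf - v * S + delta * R          # vaccination v, waning delta
    dI = new_inf - gamma * I
    dR = gamma * I + v * S - delta * R
    return [dS, dI, dR]

sol = solve_ivp(sirs, (0, 400), [0.99, 0.01, 0.0])
print(sol.y[:, -1])   # long-run state; a larger alpha damps the epidemic
```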
Slicewise definability in first-order logic with bounded quantifier rank
For every $q\in \mathbb N$ let $\textrm{FO}_q$ denote the class of sentences of first-order logic FO of quantifier rank at most $q$. If a graph property can be defined in $\textrm{FO}_q$, then it can be decided in time $O(n^q)$. Thus, minimizing $q$ has favorable algorithmic consequences. Many graph properties amount to the existence of a certain set of vertices of size $k$. Usually this can only be expressed by a sentence of quantifier rank at least $k$. We use the color-coding method to demonstrate that some (hyper)graph problems can be defined in $\textrm{FO}_q$ where $q$ is independent of $k$. This property of a graph problem is equivalent to the question of whether the corresponding parameterized problem is in the class $\textrm{para-AC}^0$. It is crucial for our results that the FO-sentences have access to built-in addition and multiplication. It is known that then FO corresponds to the circuit complexity class uniform $\textrm{AC}^0$. We explore the connection between the quantifier rank of FO-sentences and the depth of $\textrm{AC}^0$-circuits, and prove that $\textrm{FO}_q \subsetneq \textrm{FO}_{q+1}$ for structures with built-in addition and multiplication.
1
0
0
0
0
0
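For reference, here is the color-coding method in its original algorithmic form: randomly color the vertices with k colors and search for a "colorful" k-vertex path by dynamic programming over color sets, so that no quantification over k individual vertices is needed. The graph and trial count are illustrative.

```python
import random

def colorful_path_exists(adj, k, trials=200):
    """Randomized colour-coding test for a simple path on k vertices.
    adj: dict vertex -> set of neighbours.  Each trial k-colours the
    vertices and searches for a path using every colour once."""
    vertices = list(adj)
    for _ in range(trials):
        colour = {v: random.randrange(k) for v in vertices}
        # reachable[v] = colour sets of colourful paths ending at v
        reachable = {v: {frozenset([colour[v]])} for v in vertices}
        for _ in range(k - 1):                 # extend paths one edge at a time
            new = {v: set() for v in vertices}
            for u in vertices:
                for cs in reachable[u]:
                    for w in adj[u]:
                        if colour[w] not in cs:
                            new[w].add(cs | {colour[w]})
            reachable = new
        if any(len(cs) == k for sets in reachable.values() for cs in sets):
            return True
    return False

# A path on 5 vertices certainly contains a path on 4 vertices.
path5 = {i: {j for j in (i - 1, i + 1) if 0 <= j < 5} for i in range(5)}
print(colorful_path_exists(path5, 4))          # True (with high probability)
```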
The efficiency of community detection by most similar node pairs
Community analysis is an important way to ascertain whether or not a complex system consists of sub-structures with different properties. In this paper, we give a two-level community structure analysis of the SSCI journal system based on most similar co-citation patterns. Five different strategies for the selection of most similar node (journal) pairs are introduced. The efficiency is checked with the normalized mutual information technique. Statistical properties and comparisons of the community results show that both levels of detection give instructive information about the community structure of complex systems. Further comparison of the five strategies indicates that the most efficient strategy is to assign nodes with maximum similarity to the same community whether the similarity information is complete or not, while random selection generates small-world local communities with no inside order. These results give valuable indications for efficient community detection by most similar node pairs.
1
0
0
0
0
0
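A sketch of the "most similar node pair" idea: repeatedly place the currently most similar pair into the same community (via union-find) and score the resulting partition with normalized mutual information, as in the paper's evaluation. The similarity matrix below is a synthetic stand-in for journal co-citation similarities.

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)
n = 60
truth = np.repeat([0, 1, 2], n // 3)              # three planted groups
# Toy similarity: higher within groups than between, plus noise.
S = (truth[:, None] == truth[None, :]) * 0.5 + rng.random((n, n)) * 0.5
S = (S + S.T) / 2
np.fill_diagonal(S, -np.inf)                      # ignore self-similarity

parent = list(range(n))
def find(x):                                      # union-find with halving
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

# Node pairs sorted by decreasing similarity.
pairs = np.dstack(np.unravel_index(np.argsort(S, axis=None)[::-1], S.shape))[0]
merges = 0
for u, v in pairs:
    ru, rv = find(u), find(v)
    if ru != rv:
        parent[ru] = rv
        merges += 1
    if merges == n - 3:                           # stop at 3 communities
        break

labels = [find(v) for v in range(n)]
print(normalized_mutual_info_score(truth, labels))  # close to 1 here
```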
Composite Adaptive Control for Bilateral Teleoperation Systems without Persistency of Excitation
Composite adaptive control schemes, which use both the system tracking errors and the prediction error to drive the update laws, are widely used to improve system performance. However, a strong persistent-excitation (PE) condition must be satisfied to guarantee parameter convergence. This paper proposes a novel composite adaptive control scheme that guarantees parameter convergence without the PE condition for nonlinear teleoperation systems with dynamic uncertainties and time-varying communication delays. The stability criteria of the closed-loop teleoperation system are given in terms of linear matrix inequalities. New tracking performance measures are proposed to evaluate the position tracking between the master and the slave. Simulation studies are given to show the effectiveness of the proposed method.
1
0
0
0
0
0
Infinitely many periodic orbits just above the Mañé critical value on the 2-sphere
We introduce a new critical value $c_\infty(L)$ for Tonelli Lagrangians $L$ on the tangent bundle of the 2-sphere without minimizing measures supported on a point. We show that $c_\infty(L)$ is strictly larger than the Mañé critical value $c(L)$, and on every energy level $e\in(c(L),c_\infty(L))$ there exist infinitely many periodic orbits of the Lagrangian system of $L$, one of which is a local minimizer of the free-period action functional. This has applications to Finsler metrics of Randers type on the 2-sphere. We show that, under a suitable criticality assumption on a given Randers metric, after rescaling its magnetic part with a sufficiently large multiplicative constant, the new metric admits infinitely many closed geodesics, one of which is a waist. Examples of critical Randers metrics include the celebrated Katok metric.
0
0
1
0
0
0
Decentralized Random Walk-Based Data Collection in Networks
We analyze a decentralized random walk-based algorithm for data collection at the sink in a multi-hop sensor network. Our algorithm, Random-Collect, which involves data packets being passed to random neighbors in the network according to a random walk mechanism, requires no configuration and incurs no routing overhead. To analyze this method, we model the data generation process as independent Bernoulli arrivals at the source nodes. We analyze both latency and throughput in this setting, providing a theoretical lower bound for the throughput and a theoretical upper bound for the latency. The main contribution of our paper, however, is the throughput result: we present a general lower bound on the throughput achieved by our data collection method in terms of the underlying network parameters. In particular, we show that the rate at which our algorithm can collect data depends on the spectral gap of the given random walk's transition matrix and if the random walk is simple then it also depends on the maximum and minimum degrees of the graph modeling the network. For latency, we show that the time taken to collect data not only depends on the worst-case hitting time of the given random walk but also depends on the data arrival rate. In fact, our latency bound reflects the data rate-latency trade-off i.e., in order to achieve a higher data rate we need to compromise on latency and vice-versa. We also discuss some examples that demonstrate that our lower bound on the data rate is optimal up to constant factors, i.e., there exists a network topology and sink placement for which the maximum stable data rate is just a constant factor above our lower bound.
1
0
0
0
0
0
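A toy simulation of the random-walk collection mechanism: packets arrive as Bernoulli processes at source nodes and perform simple random walks until absorbed at the sink. For simplicity every packet moves at each step (the paper's Random-Collect passes one packet per node per step), and the ring topology and rates are illustrative.

```python
import random

n = 20                     # ring of n nodes; node 0 is the sink
p_arrival = 0.05           # Bernoulli arrival rate per source per step
packets, delivered, latencies = [], 0, []

random.seed(0)
for step in range(10_000):
    for v in range(1, n):                      # new arrivals at sources
        if random.random() < p_arrival:
            packets.append([v, step])          # [position, birth time]
    for pkt in packets[:]:                     # each packet takes a random step
        pkt[0] = (pkt[0] + random.choice((-1, 1))) % n
        if pkt[0] == 0:                        # absorbed at the sink
            latencies.append(step - pkt[1])
            packets.remove(pkt)
            delivered += 1

print(delivered, sum(latencies) / max(len(latencies), 1))  # throughput, latency
```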
The Fan Region at 1.5 GHz. I: Polarized synchrotron emission extending beyond the Perseus Arm
The Fan Region is one of the dominant features in the polarized radio sky, long thought to be a local (distance < 500 pc) synchrotron feature. We present 1.3-1.8 GHz polarized radio continuum observations of the region from the Global Magneto-Ionic Medium Survey (GMIMS) and compare them to maps of H$\alpha$ and polarized radio continuum intensity from 0.408-353 GHz. The high-frequency (> 1 GHz) and low-frequency (< 600 MHz) emission have different morphologies, suggesting a different physical origin. Portions of the 1.5 GHz Fan Region emission are depolarized by about 30% by ionized gas structures in the Perseus Arm, indicating that this fraction of the emission originates >2 kpc away. We argue for the same conclusion based on the high polarization fraction at 1.5 GHz (about 40%). The Fan Region is offset with respect to the Galactic plane, covering -5° < b < +10°; we attribute this offset to the warp in the outer Galaxy. We discuss origins of the polarized emission, including the spiral Galactic magnetic field. This idea is a plausible contributing factor although no model to date readily reproduces all of the observations. We conclude that models of the Galactic magnetic field should account for the > 1 GHz emission from the Fan Region as a Galactic-scale, not purely local, feature.
0
1
0
0
0
0
The redshift distribution of cosmological samples: a forward modeling approach
Determining the redshift distribution $n(z)$ of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global $n(z)$ of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the $MCCL$ (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of $n(z)$ distributions for the acceptable models. We demonstrate the method by determining $n(z)$ for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior $n(z)$ distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
0
1
0
0
0
0
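The ABC ingredient of the method can be sketched in a few lines: draw model parameters from a prior, simulate, and accept parameters whose simulated summary statistics land close to the observed ones. The toy simulator and summaries below stand in for the UFig image simulations and survey cuts; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=2000):
    mu, sigma = theta                      # toy redshift-like distribution
    return rng.lognormal(mu, sigma, n)

def summary(sample):                       # summary statistics of the sample
    return np.array([sample.mean(), sample.std()])

obs = simulate((0.0, 0.4))                 # pretend this is the data
s_obs = summary(obs)

accepted = []
for _ in range(20_000):
    theta = (rng.uniform(-0.5, 0.5), rng.uniform(0.1, 1.0))  # prior draw
    if np.linalg.norm(summary(simulate(theta)) - s_obs) < 0.1:
        accepted.append(theta)             # model consistent with the data

print(len(accepted), np.mean(accepted, axis=0))  # acceptable-model set
```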
DeepCodec: Adaptive Sensing and Recovery via Deep Convolutional Neural Networks
In this paper we develop a novel computational sensing framework for sensing and recovering structured signals. When trained on a set of representative signals, our framework learns to take undersampled measurements and recover signals from them using a deep convolutional neural network. In other words, it learns a transformation from the original signals to a near-optimal number of undersampled measurements and the inverse transformation from measurements to signals. This is in contrast to traditional compressive sensing (CS) systems that use random linear measurements and convex optimization or iterative algorithms for signal recovery. We compare our new framework with $\ell_1$-minimization from the phase transition point of view and demonstrate that it outperforms $\ell_1$-minimization in the regions of phase transition plot where $\ell_1$-minimization cannot recover the exact solution. In addition, we experimentally demonstrate how learning measurements enhances the overall recovery performance, speeds up training of recovery framework, and leads to having fewer parameters to learn.
1
0
0
1
0
0
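The core idea, learning the measurement operator jointly with the recovery map rather than fixing random linear measurements, fits in a short sketch. The tiny dense decoder below stands in for the paper's deep convolutional recovery network, and all sizes and the sparse signal model are illustrative.

```python
import torch

torch.manual_seed(0)
n, m = 64, 16                              # signal and measurement dimensions
Phi = torch.nn.Linear(n, m, bias=False)    # learned undersampling operator
decoder = torch.nn.Sequential(             # stand-in for the deep CNN decoder
    torch.nn.Linear(m, 128), torch.nn.ReLU(), torch.nn.Linear(128, n))
opt = torch.optim.Adam(list(Phi.parameters()) + list(decoder.parameters()),
                       lr=1e-3)

def sparse_batch(b=128, k=4):              # k-sparse training signals
    x = torch.zeros(b, n)
    idx = torch.randint(n, (b, k))
    x.scatter_(1, idx, torch.randn(b, k))
    return x

for step in range(2000):                   # train sensing + recovery jointly
    x = sparse_batch()
    loss = ((decoder(Phi(x)) - x) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(loss.item())                         # reconstruction MSE on last batch
```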
Robotic frameworks, architectures and middleware comparison
Nowadays, the construction of a complex robotic system requires a high level of specialization in a large number of diverse scientific areas. It is reasonable that a single researcher cannot create such a system from scratch, as it is impossible for one person to have the necessary skills in all the necessary fields. This obstacle is overcome with existing robotic frameworks. This paper tries to give an extensive review of the most well-known robotic frameworks and middleware, as well as to provide the means to effortlessly compare them. Additionally, we investigate the differences between the definitions of a robotic framework, a robotic middleware and a robotic architecture.
1
0
0
0
0
0
Analysis of equivalence relation in joint sparse recovery
The joint sparse recovery problem is a generalization of the single measurement vector problem widely studied in compressed sensing; it aims to recover a set of jointly sparse vectors, i.e., vectors whose nonzero entries are concentrated at common locations. Meanwhile, l_p-minimization subject to matrices is widely used in a large number of algorithms designed for this problem. The main contribution of this paper is two theoretical results about this technique. The first is to prove that for every multiple system of linear equations there exists a constant p* such that the original unique sparse solution can also be recovered by minimization in the l_p quasi-norm subject to matrices whenever 0 < p < p*. The other is to give an analytic expression for such a p*. Finally, we display the results of one example to confirm the validity of our conclusions.
1
0
1
0
0
0
The Stochastic Firefighter Problem
The dynamics of infectious disease spread are crucial in determining diseases' risk and offering ways to contain them. We study sequential vaccination of individuals in networks. In the original (deterministic) version of the Firefighter problem, a fire breaks out at some node of a given graph. At each time step, b nodes can be protected by a firefighter and then the fire spreads to all unprotected neighbors of the nodes on fire. The process ends when the fire can no longer spread. We extend the Firefighter problem to a probabilistic setting, where the infection is stochastic. We devise a simple policy that only vaccinates neighbors of infected nodes and is optimal on regular trees and on general graphs for a sufficiently large budget. We derive methods for calculating upper and lower bounds on the expected number of infected individuals, as well as provide estimates on the budget needed for containment in expectation. We calculate these explicitly on trees, d-dimensional grids, and Erdős-Rényi graphs. Finally, we construct a state-dependent budget allocation strategy and demonstrate its superiority over constant budget allocation on real networks following a first-order acquaintance vaccination policy.
1
0
0
0
0
0
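A simulation sketch of the stochastic setting with the neighbor-only vaccination policy: at each step, vaccinate up to b susceptible neighbors of infected nodes, then let the infection spread with probability p per edge. The graph, budget and infection probability below are illustrative.

```python
import random

def spread_with_vaccination(adj, source, budget, p):
    """One outbreak; returns the number of nodes ever infected."""
    infected, frontier, vaccinated = {source}, {source}, set()
    while frontier:
        # vaccinate up to `budget` susceptible neighbours of infected nodes
        candidates = list({v for u in infected for v in adj[u]
                           if v not in infected and v not in vaccinated})
        for v in random.sample(candidates, min(budget, len(candidates))):
            vaccinated.add(v)
        new = set()
        for u in frontier:                     # stochastic spread step
            for v in adj[u]:
                if (v not in infected and v not in vaccinated
                        and random.random() < p):
                    new.add(v)
        infected |= new
        frontier = new
    return len(infected)

# Erdos-Renyi graph on 200 nodes with mean degree about 6.
random.seed(1)
n, q = 200, 0.03
adj = {v: set() for v in range(n)}
for u in range(n):
    for v in range(u + 1, n):
        if random.random() < q:
            adj[u].add(v)
            adj[v].add(u)

runs = [spread_with_vaccination(adj, 0, budget=3, p=0.4) for _ in range(50)]
print(sum(runs) / len(runs))    # empirical expected number ever infected
```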
The phase space structure of the oligopoly dynamical system by means of Darboux integrability
We investigate the dynamical complexity of the Cournot oligopoly dynamics of three firms by using the qualitative methods of dynamical systems to study the phase structure of this model. The phase space is organized with one-dimensional and two-dimensional invariant submanifolds (for the monopoly and duopoly) and a unique stable node (global attractor) in the positive quadrant of the phase space (Cournot equilibrium). We also study the integrability of the system. We demonstrate the effectiveness of the method of Darboux polynomials in searching for first integrals of the oligopoly. The general method as well as examples of adopting this method are presented. We study the Darboux non-integrability of the oligopoly for linear demand functions and find first integrals of this system for special classes of the system; in particular, rational integrals can be found for a quite general set of model parameters. We show how a first integral can be useful in lowering the dimension of the system using the example of $n$ almost identical firms. This first integral also gives information about the structure of the phase space and the behaviour of trajectories in the neighbourhood of a Nash equilibrium.
0
1
0
0
0
0
Generalizations of the 'Linear Chain Trick': Incorporating more flexible dwell time distributions into mean field ODE models
Mathematical modelers have long known of a "rule of thumb" referred to as the Linear Chain Trick (LCT; aka the Gamma Chain Trick): a technique used to construct mean field ODE models from continuous-time stochastic state transition models where the time an individual spends in a given state (i.e., the dwell time) is Erlang distributed (i.e., gamma distributed with integer shape parameter). Despite the LCT's widespread use, we lack general theory to facilitate the easy application of this technique, especially for complex models. This has forced modelers to choose between constructing ODE models using heuristics with oversimplified dwell time assumptions, using time-consuming derivations from first principles, or instead using non-ODE models (like integro-differential equations or delay differential equations) which can be cumbersome to derive and analyze. Here, we provide analytical results that enable modelers to more efficiently construct ODE models using the LCT or related extensions. Specifically, we 1) provide novel extensions of the LCT to various scenarios found in applications; 2) provide formulations of the LCT and its extensions that bypass the need to derive ODEs from integral or stochastic model equations; and 3) introduce a novel Generalized Linear Chain Trick (GLCT) framework that extends the LCT to a much broader family of distributions, including the flexible phase-type distributions which can approximate distributions on $\mathbb{R}^+$ and be fit to data. These results give modelers more flexibility to incorporate appropriate dwell time assumptions into mean field ODEs, including conditional dwell time distributions, and these results help clarify connections between individual-level stochastic model assumptions and the structure of corresponding mean field ODEs.
0
0
0
0
1
0
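The LCT itself is easy to state in code: an Erlang(k, k/tau) dwell time (mean tau) in a stage is realized by k sub-compartments, each exited at rate k/tau, yielding a plain ODE system. A sketch for an SIR-type model with an Erlang-distributed infectious period, with illustrative parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, tau, k = 0.5, 4.0, 5    # transmission rate, mean dwell time, Erlang shape

def sir_erlang(t, y):
    S, R = y[0], y[-1]
    I = y[1:-1]                       # the k infectious sub-compartments
    I_tot = I.sum()
    rate = k / tau                    # per-stage exit rate
    dS = -beta * S * I_tot
    dI = np.empty(k)
    dI[0] = beta * S * I_tot - rate * I[0]
    dI[1:] = rate * I[:-1] - rate * I[1:]   # flow down the chain
    dR = rate * I[-1]
    return np.concatenate(([dS], dI, [dR]))

y0 = np.concatenate(([0.99], [0.01], np.zeros(k - 1), [0.0]))
sol = solve_ivp(sir_erlang, (0, 100), y0, max_step=0.5)
print(sol.y[-1, -1])                  # final epidemic size
```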
Life and work of Egbert Brieskorn (1936 - 2013)
Egbert Brieskorn died on July 11, 2013, a few days after his 77th birthday. He was an impressive personality who has left a lasting impression on all who knew him, whether inside or outside of mathematics. Brieskorn was a great mathematician, but his interests, his knowledge, and activities ranged far beyond mathematics. In this contribution, which is strongly influenced by many years of personal connectedness of the authors with Brieskorn, we try to give a deeper insight into the life and work of Brieskorn. We illuminate both his personal commitment to peace and the environment as well as his long-term study of the life and work of Felix Hausdorff and the publication of Hausdorff's collected works. However, the main focus of the article is on the presentation of his remarkable and influential mathematical work.
0
0
1
0
0
0
Resource Allocation for a Full-Duplex Base Station Aided OFDMA System
Exploiting full-duplex (FD) technology on base stations (BSs) is a promising solution for enhancing system performance. Motivated by this, we revisit a full-duplex base station (FD-BS) aided OFDMA system, which consists of one BS, several uplink/downlink users and multiple subcarriers. A joint 3-dimensional (3D) mapping scheme among subcarriers, downlink users (DUEs) and uplink users (UUEs) is considered, as well as an associated power allocation optimization. In detail, we first decompose the complex 3D mapping problem into three 2-dimensional sub-problems and solve them using the iterative Hungarian method. Then, based on the Lagrange dual method, we sequentially solve the power allocation and 3-dimensional mapping problems by fixing a dual point. Finally, the optimal solution can be obtained by utilizing the sub-gradient method. Unlike existing work that solves either the 3D mapping or the power allocation problem, and then only with high computation complexity, we tackle both and successfully reduce the computation complexity from exponential to polynomial order. Numerical simulations are conducted to verify the proposed scheme.
1
0
0
0
0
0
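Each 2-dimensional sub-problem is a classical assignment problem, which the Hungarian method solves exactly; the decomposition iterates this across the three pairings. A sketch for a single subcarrier-to-DUE assignment with an illustrative random rate matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_subcarriers, n_dues = 8, 8
# Achievable rate of each (subcarrier, DUE) pair; values are illustrative.
rate = rng.uniform(0.5, 3.0, size=(n_subcarriers, n_dues))  # bits/s/Hz

# Hungarian method: one-to-one mapping maximising the total rate.
rows, cols = linear_sum_assignment(rate, maximize=True)
print(list(zip(rows, cols)), rate[rows, cols].sum())
```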
Combining Neural Networks and Tree Search for Task and Motion Planning in Challenging Environments
We consider task and motion planning in complex dynamic environments for problems expressed in terms of a set of Linear Temporal Logic (LTL) constraints, and a reward function. We propose a methodology based on reinforcement learning that employs deep neural networks to learn low-level control policies as well as task-level option policies. A major challenge in this setting, both for neural network approaches and classical planning, is the need to explore future worlds of a complex and interactive environment. To this end, we integrate Monte Carlo Tree Search with hierarchical neural net control policies trained on expressive LTL specifications. This paper investigates the ability of neural networks to learn both LTL constraints and control policies in order to generate task plans in complex environments. We demonstrate our approach in a simulated autonomous driving setting, where a vehicle must drive down a road in traffic, avoid collisions, and navigate an intersection, all while obeying given rules of the road.
1
0
0
0
0
0
BT-Nets: Simplifying Deep Neural Networks via Block Term Decomposition
Recently, deep neural networks (DNNs) have been regarded as the state-of-the-art classification methods in a wide range of applications, especially in image classification. Despite the success, the huge number of parameters blocks its deployment to situations with light computing resources. Researchers resort to the redundancy in the weights of DNNs and attempt to find how fewer parameters can be chosen while preserving the accuracy at the same time. Although several promising results have been shown along this research line, most existing methods either fail to significantly compress a well-trained deep network or require a heavy fine-tuning process for the compressed network to regain the original performance. In this paper, we propose the \textit{Block Term} networks (BT-nets) in which the commonly used fully-connected layers (FC-layers) are replaced with block term layers (BT-layers). In BT-layers, the inputs and the outputs are reshaped into two low-dimensional high-order tensors, then block-term decomposition is applied as tensor operators to connect them. We conduct extensive experiments on benchmark datasets to demonstrate that BT-layers can achieve a very large compression ratio on the number of parameters while preserving the representation power of the original FC-layers as much as possible. Specifically, we can get a higher performance while requiring fewer parameters compared with the tensor train method.
1
0
0
1
0
0
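A sketch of a single BT-layer: the input is reshaped to a 16 x 16 tensor, the output to 8 x 8, and the 4-way weight tensor is a sum of R Tucker blocks of rank r, contracted via einsum. All shapes, R and r are illustrative choices, and training of the factors is omitted.

```python
import torch

torch.manual_seed(0)
i1, i2, o1, o2, R, r = 16, 16, 8, 8, 4, 3
G = torch.randn(R, r, r, r, r)        # one small Tucker core per block
A = torch.randn(R, i1, r)             # factor matrices, one per tensor mode
B = torch.randn(R, i2, r)
C = torch.randn(R, o1, r)
D = torch.randn(R, o2, r)

def bt_layer(x):                      # x: (batch, i1 * i2)
    xt = x.reshape(-1, i1, i2)        # reshape input into a 2-way tensor
    # contract input with every block's factors and core, summing blocks
    y = torch.einsum('bmn,kma,knc,kacef,kpe,kqf->bpq', xt, A, B, G, C, D)
    return y.reshape(-1, o1 * o2)

print(bt_layer(torch.randn(5, 256)).shape)       # torch.Size([5, 64])
n_bt = sum(t.numel() for t in (G, A, B, C, D))   # 900 parameters
print(n_bt, "vs", 256 * 64)                      # vs 16384 for the dense FC
```

With these sizes the layer uses 900 parameters against 16384 for the dense FC layer it replaces, which illustrates the kind of compression the paper targets.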
Resampling Strategy in Sequential Monte Carlo for Constrained Sampling Problems
Sequential Monte Carlo (SMC) methods are a class of Monte Carlo methods that are used to obtain random samples of a high dimensional random variable in a sequential fashion. Many problems encountered in applications often involve different types of constraints. These constraints can make the problem much more challenging. In this paper, we formulate a general framework of using SMC for constrained sampling problems based on forward and backward pilot resampling strategies. We review some existing methods under the framework and develop several new algorithms. It is noted that all information observed or imposed on the underlying system can be viewed as constraints. Hence the approach outlined in this paper can be useful in many applications.
0
0
0
1
0
0
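A sketch of one forward-pilot ingredient in this spirit: treating a constraint as a zero-one weight and resampling the surviving particles. Systematic resampling is shown as one common low-variance choice; the constraint and dynamics below are illustrative.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Map N normalised weights to N ancestor indices using a single
    uniform offset; a low-variance alternative to multinomial resampling."""
    N = len(weights)
    positions = (rng.random() + np.arange(N)) / N
    return np.searchsorted(np.cumsum(weights), positions)

# Toy constrained SMC: random-walk particles constrained to stay x >= 0.
rng = np.random.default_rng(0)
N = 1000
x = np.zeros(N)
for t in range(50):
    x = x + rng.standard_normal(N)          # propagate particles
    w = (x >= 0).astype(float)              # constraint as a 0/1 weight
    w /= w.sum()
    x = x[systematic_resample(w, rng)]      # resample the feasible particles

print(x.mean(), x.min())                    # all particles satisfy x >= 0
```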
Transverse Shift in Andreev Reflection
An incoming electron is reflected back as a hole at a normal-metal-superconductor interface, a process known as Andreev reflection. We predict that there exists a universal transverse shift in this process due to the effect of spin-orbit coupling in the normal metal. Particularly, using both the scattering approach and the argument of angular momentum conservation, we demonstrate that the shifts are pronounced for lightly-doped Weyl semimetals, and are opposite for incoming electrons with different chirality, generating a chirality-dependent Hall effect for the reflected holes. The predicted shift is not limited to Weyl systems, but exists for a general three-dimensional spin-orbit-coupled metal interfaced with a superconductor.
0
1
0
0
0
0
Production of Entanglement Entropy by Decoherence
We examine the dynamics of entanglement entropy of all parts in an open system consisting of a two-level dimer interacting with an environment of oscillators. The dimer-environment interaction is almost energy conserving. We find the precise link between decoherence and production of entanglement entropy. We show that not all environment oscillators carry significant entanglement entropy and we identify the oscillator frequency regions which contribute to the production of entanglement entropy. Our results hold for arbitrary strengths of the dimer-environment interaction, and they are mathematically rigorous.
0
1
0
0
0
0
Aggressive Economic Incentives and Physical Activity: The Role of Choice and Technology Decision Aids
Aggressive incentive schemes that allow individuals to impose economic punishment on themselves if they fail to meet health goals present a promising approach for encouraging healthier behavior. However, the element of choice inherent in these schemes introduces concerns that only non-representative sectors of the population will select aggressive incentives, leaving value on the table for those who don't opt in. In a field experiment conducted over a 29-week period on individuals wearing Fitbit activity trackers, we find modest and short-lived increases in physical activity for those provided the choice of aggressive incentives. In contrast, we find significant and persistent increases for those assigned (oftentimes against their stated preference) to the same aggressive incentives. The modest benefits for those provided a choice seem to emerge because those who benefited most from the aggressive incentives were the least likely to choose them, and it was those who did not need them who opted in. These results are confirmed in a follow-up lab experiment. We also find that benefits to individuals assigned to aggressive incentives were pronounced if they also updated their step target in the Fitbit mobile application to match the new activity goal we provided them. Our findings have important implications for incentive-based interventions to improve health behavior. For firms and policy makers, our results suggest that one effective strategy for encouraging sustained healthy behavior combines exposure to aggressive incentive schemes, to jolt individuals out of their comfort zones, with technology decision aids that help individuals sustain this behavior after incentives end.
0
0
0
0
0
1
Dynamics of resonances and equilibria of Low Earth Objects
The nearby space surrounding the Earth is densely populated by artificial satellites and instruments, whose orbits are distributed within the Low-Earth-Orbit region (LEO), ranging between 90 and 2000 km of altitude. As a consequence of collisions and fragmentations, much space debris of different sizes is left in the LEO region. Given the threat posed by possible collisions of debris with operational or manned satellites, the study of their dynamics is nowadays mandatory. This work is focused on the existence of equilibria and the dynamics of resonances in LEO. We base our results on a simplified model which includes the geopotential and the atmospheric drag. Using such a model, we make a qualitative study of the resonances and the equilibrium positions, including their location and stability. The dissipative effect due to the atmosphere provokes a tidal decay, but we give examples of different behaviors, namely a straightforward passage through the resonance or a temporary capture. We also investigate the effect of the solar cycle, which is responsible for fluctuations of the atmospheric density, and we analyze the influence of the Sun and Moon on LEO objects.
0
1
0
0
0
0