Column schema (name, type, observed range):
title: string, lengths 7 to 239
abstract: string, lengths 7 to 2.76k
cs: int64, 0 or 1
phy: int64, 0 or 1
math: int64, 0 or 1
stat: int64, 0 or 1
quantitative biology: int64, 0 or 1
quantitative finance: int64, 0 or 1
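The columns above describe a multi-label record: one title, one abstract, and six 0/1 category flags. A minimal sketch of how such records might be loaded and inspected, assuming they are exported to a CSV file named arxiv_abstracts.csv with exactly these column names (both the file name and the CSV export are assumptions, not part of the dataset documentation):

```python
# Minimal sketch: load the multi-label records and inspect the label columns.
# The file name "arxiv_abstracts.csv" and the CSV export are assumptions;
# substitute whatever serialization the dataset actually ships with.
import pandas as pd

LABEL_COLUMNS = ["cs", "phy", "math", "stat",
                 "quantitative biology", "quantitative finance"]

df = pd.read_csv("arxiv_abstracts.csv")

print(df[LABEL_COLUMNS].sum())                      # papers per category
print((df[LABEL_COLUMNS].sum(axis=1) > 1).mean())   # fraction with more than one label
print(df[["title", "abstract"]].head())
```

Because the flags are independent 0/1 columns, a single paper can carry several labels at once, as some of the records below do.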
Deep Depth From Focus
Depth from focus (DFF) is one of the classical ill-posed inverse problems in computer vision. Most approaches recover the depth at each pixel based on the focal setting which exhibits maximal sharpness. Yet, it is not obvious how to reliably estimate the sharpness level, particularly in low-textured areas. In this paper, we propose `Deep Depth From Focus (DDFF)' as the first end-to-end learning approach to this problem. One of the main challenges we face is the data hunger of deep neural networks. In order to obtain a significant amount of focal stacks with corresponding ground-truth depth, we propose to leverage a light-field camera with a co-calibrated RGB-D sensor. This allows us to digitally create focal stacks of varying sizes. Compared to existing benchmarks, our dataset is 25 times larger, enabling the use of machine learning for this inverse problem. We compare our results with state-of-the-art DFF methods and we also analyze the effect of several key deep architectural components. These experiments show that our proposed method `DDFFNet' achieves state-of-the-art performance in all scenes, reducing depth error by more than 75% compared to the classical DFF methods.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Deep Text Classification Can be Fooled
In this paper, we present an effective method to craft text adversarial samples, revealing one important yet underestimated fact that DNN-based text classifiers are also prone to adversarial sample attacks. Specifically, confronted with different adversarial scenarios, the text items that are important for classification are identified by computing the cost gradients of the input (white-box attack) or generating a series of occluded test samples (black-box attack). Based on these items, we design three perturbation strategies, namely insertion, modification, and removal, to generate adversarial samples. The experimental results show that the adversarial samples generated by our method can successfully fool both state-of-the-art character-level and word-level DNN-based text classifiers. The adversarial samples can be perturbed to any desired class without compromising their utility. At the same time, the introduced perturbation is difficult to perceive.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Two-Stream 3D Convolutional Neural Network for Skeleton-Based Action Recognition
It remains a challenge to efficiently extract spatial-temporal information from skeleton sequences for 3D human action recognition. Although most recent action recognition methods are based on Recurrent Neural Networks which present outstanding performance, one of the shortcomings of these methods is the tendency to overemphasize the temporal information. Since the 3D convolutional neural network (3D CNN) is a powerful tool to simultaneously learn features from both spatial and temporal dimensions through capturing the correlations between three-dimensional signals, this paper proposes a novel two-stream model using 3D CNN. To the best of our knowledge, this is the first application of 3D CNN in skeleton-based action recognition. Our method consists of three stages. First, skeleton joints are mapped into a 3D coordinate space, and the spatial and temporal information are then encoded, respectively. Second, 3D CNN models are separately adopted to extract deep features from the two streams. Third, to enhance the ability of deep features to capture global relationships, we extend every stream into a multi-temporal version. Extensive experiments on the SmartHome dataset and the large-scale NTU RGB-D dataset demonstrate that our method outperforms most RNN-based methods, which verifies the complementary property between spatial and temporal information and the robustness to noise.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Strong Bayesian Evidence for the Normal Neutrino Hierarchy
The configuration of the three neutrino masses can take two forms, known as the normal and inverted hierarchies. We compute the Bayesian evidence associated with these two hierarchies. Previous studies found a mild preference for the normal hierarchy, and this was driven by the asymmetric manner in which cosmological data has confined the available parameter space. Here we identify the presence of a second asymmetry, which is imposed by data from neutrino oscillations. By combining constraints on the squared-mass splittings with the limit on the sum of neutrino masses of $\Sigma m_\nu < 0.13$ eV, and using a minimally informative prior on the masses, we infer odds of 42:1 in favour of the normal hierarchy, which is classified as "strong" in the Jeffreys' scale. We explore how these odds may evolve in light of higher precision cosmological data, and discuss the implications of this finding with regards to the nature of neutrinos. Finally the individual masses are inferred to be $m_1 = 3.80^{+26.2}_{-3.73} \, \text{meV}, m_2 = 8.8^{+18}_{-1.2} \, \text{meV}, m_3 = 50.4^{+5.8}_{-1.2} \, \text{meV}$ ($95\%$ credible intervals).
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A fast reconstruction algorithm for geometric inverse problems using topological sensitivity analysis and Dirichlet-Neumann cost functional approach
This paper is concerned with the detection of objects immersed in anisotropic media from boundary measurements. We propose an accurate approach based on the Kohn-Vogelius formulation and the topological sensitivity analysis method. The inverse problem is formulated as a topology optimization problem minimizing an energy-like functional. A topological asymptotic expansion is derived for the anisotropic Laplace operator. The unknown object is reconstructed using a level-set curve of the topological gradient. The efficiency and accuracy of the proposed algorithm are illustrated by some numerical results. KEYWORDS: geometric inverse problem, anisotropic Laplace operator, Kohn-Vogelius formulation, sensitivity analysis, topological optimization.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Tomographic X-ray data of carved cheese
This is the documentation of the tomographic X-ray data of a carved cheese slice. Data are available at www.fips.fi/dataset.php, and can be freely used for scientific purposes with appropriate references to them, and to this document in this http URL. The data set consists of (1) the X-ray sinogram of a single 2D slice of the cheese slice with three different resolutions and (2) the corresponding measurement matrices modeling the linear operation of the X-ray transform. Each of these sinograms was obtained from a measured 360-projection fan-beam sinogram by down-sampling and taking logarithms. The original (measured) sinogram is also provided in its original form and resolution.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
SMARTies: Sentiment Models for Arabic Target Entities
We consider entity-level sentiment analysis in Arabic, a morphologically rich language with increasing resources. We present a system that is applied to complex posts written in response to Arabic newspaper articles. Our goal is to identify important entity "targets" within the post along with the polarity expressed about each target. We achieve significant improvements over multiple baselines, demonstrating that the use of specific morphological representations improves the performance of identifying both important targets and their sentiment, and that the use of distributional semantic clusters further boosts performance for these representations, especially when richer linguistic resources are not available.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Quantile Markov Decision Process
In this paper, we consider the problem of optimizing the quantiles of the cumulative rewards of Markov Decision Processes (MDP), which we refer to as Quantile Markov Decision Processes (QMDP). Traditionally, the goal of a Markov Decision Process (MDP) is to maximize expected cumulative reward over a defined horizon (possibly infinite). In many applications, however, a decision maker may be interested in optimizing a specific quantile of the cumulative reward instead of its expectation. Our framework of QMDP provides analytical results characterizing the optimal QMDP solution and presents algorithms for solving the QMDP. We illustrate the model with two experiments: a grid game and an HIV optimal treatment experiment.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Explaining the elongated shape of 'Oumuamua by the Eikonal abrasion model
The photometry of the minor body with extrasolar origin (1I/2017 U1) 'Oumuamua revealed an unprecedented shape: Meech et al. (2017) reported a shape elongation b/a close to 1/10, which calls for theoretical explanation. Here we show that the abrasion of a primordial asteroid by a huge number of tiny particles ultimately leads to such an elongated shape. The model (called the Eikonal equation) predicting this outcome was already suggested in Domokos et al. (2009) to play an important role in the evolution of asteroid shapes.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Generation of $1/f$ noise motivated by a model for musical melodies
We present a model to generate power spectrum noise with intensity proportional to 1/f as a function of frequency f. The model arises from a broken-symmetry variable which corresponds to absolute pitch, where fluctuations occur in an attempt to restore that symmetry, influenced by interactions in the creation of musical melodies.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
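The abstract above concerns a melody-inspired model that produces a 1/f power spectrum. As a point of comparison only (this is not the paper's broken-symmetry model), a generic way to synthesize approximately 1/f noise is to shape the spectrum of white noise:

```python
# Generic way to synthesize approximately 1/f ("pink") noise by shaping the
# spectrum of white noise; this is NOT the melody-based model of the paper,
# just a reference signal such a model could be compared against.
import numpy as np

def pink_noise(n: int, rng=None) -> np.ndarray:
    rng = np.random.default_rng() if rng is None else rng
    white = rng.normal(size=n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]            # avoid dividing by zero at the DC bin
    spectrum /= np.sqrt(freqs)     # amplitude ~ 1/sqrt(f), so power ~ 1/f
    return np.fft.irfft(spectrum, n)

x = pink_noise(2 ** 14, np.random.default_rng(4))
power = np.abs(np.fft.rfft(x)) ** 2
print("low/high frequency power ratio:",
      float(power[1:100].mean() / power[-100:].mean()))
```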
On the lattice of the $σ$-permutable subgroups of a finite group
Let $\sigma =\{\sigma_{i} | i\in I\}$ be some partition of the set of all primes $\Bbb{P}$, $G$ a finite group and $\sigma (G) =\{\sigma_{i} |\sigma_{i}\cap \pi (G)\ne \emptyset \}$. A set ${\cal H}$ of subgroups of $G$ is said to be a complete Hall $\sigma $-set of $G$ if every member $\ne 1$ of ${\cal H}$ is a Hall $\sigma_{i}$-subgroup of $G$ for some $\sigma_{i}\in \sigma $ and ${\cal H}$ contains exactly one Hall $\sigma_{i}$-subgroup of $G$ for every $\sigma_{i}\in \sigma (G)$. A subgroup $A$ of $G$ is said to be ${\sigma}$-permutable in $G$ if $G$ possesses a complete Hall $\sigma $-set and $A$ permutes with each Hall $\sigma_{i}$-subgroup $H$ of $G$, that is, $AH=HA$ for all $i \in I$. We characterize finite groups with distributive lattice of the ${\sigma}$-permutable subgroups.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The geometry of the generalized algebraic Riccati equation and of the singular Hamiltonian system
This paper analyzes the properties of the solutions of the generalized continuous algebraic Riccati equation from a geometric perspective. This analysis reveals the presence of a subspace that may provide an appropriate degree of freedom to stabilize the system in the related optimal control problem even in cases where the Riccati equation does not admit a stabilizing solution.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
High Order Numerical Integrators for Relativistic Charged Particle Tracking
In this paper, we extend several time reversible numerical integrators to solve the Lorentz force equations from second order accuracy to higher order accuracy for relativistic charged particle tracking in electromagnetic fields. A fourth order algorithm is given explicitly and tested with numerical examples. Such high order numerical integrators can significantly save the computational cost by using a larger step size in comparison to the second order integrators.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Astrophotonics: molding the flow of light in astronomical instruments
Since its emergence two decades ago, astrophotonics has found broad application in scientific instruments at many institutions worldwide. The case for astrophotonics becomes more compelling as telescopes push for AO-assisted, diffraction-limited performance, a mode of observing that is central to the next-generation of extremely large telescopes (ELTs). Even AO systems are beginning to incorporate advanced photonic principles as the community pushes for higher performance and more complex guide-star configurations. Photonic instruments like Gravity on the Very Large Telescope achieve milliarcsec resolution at 2000 nm which would be very difficult to achieve with conventional optics. While space photonics is not reviewed here, we foresee that remote sensing platforms will become a major beneficiary of astrophotonic components in the years ahead. The field has given back with the development of new technologies (e.g. photonic lantern, large area multi-core fibres) already finding widespread use in other fields; Google Scholar lists more than 400 research papers making reference to this technology. This short review covers representative key developments since the 2009 Focus issue on Astrophotonics.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Towards quantitative methods to assess network generative models
Assessing generative models is not an easy task. Generative models should synthesize graphs which are not replicates of real networks but show topological features similar to real graphs. We introduce an approach for assessing graph generative models using graph classifiers. The inability of an established graph classifier to distinguish real and synthesized graphs could be considered as a performance measurement for graph generators.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Soliton groups as the reason for extreme statistics of unidirectional sea waves
The results of the probabilistic analysis of the direct numerical simulations of irregular unidirectional deep-water waves are discussed. It is shown that an occurrence of large-amplitude soliton-like groups represents an extraordinary case, which is able to increase noticeably the probability of high waves even in moderately rough sea conditions. The ensemble of wave realizations should be large enough to take these rare events into account. Hence we provide a striking example when long-living coherent structures make the water wave statistics extreme.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Stack Overflow: A Code Laundering Platform?
Developers use Question and Answer (Q&A) websites to exchange knowledge and expertise. Stack Overflow is a popular Q&A website where developers discuss coding problems and share code examples. Although all Stack Overflow posts are free to access, code examples on Stack Overflow are governed by the Creative Commons Attribution-ShareAlike 3.0 Unported license that developers should obey when reusing code from Stack Overflow or posting code to Stack Overflow. In this paper, we conduct a case study with 399 Android apps to investigate whether developers respect license terms when reusing code from Stack Overflow posts (and the other way around). We found 232 code snippets in 62 Android apps from our dataset that were potentially reused from Stack Overflow, and 1,226 Stack Overflow posts containing code examples that are clones of code released in 68 Android apps, suggesting that developers may have copied the code of these apps to answer Stack Overflow questions. We investigated the licenses of these pieces of code and observed 1,279 cases of potential license violations (related to code posting to Stack Overflow or code reuse from Stack Overflow). This paper aims to raise the awareness of the software engineering community about potential unethical code reuse activities taking place on Q&A websites like Stack Overflow.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
New Abilities and Limitations of Spectral Graph Bisection
Spectral-based heuristics are well-known, commonly used methods which determine a provably minimal graph bisection or output "fail" when optimality cannot be certified. In this paper we focus on Boppana's algorithm, one of the most prominent methods of this type. It is well known that the algorithm works well in the random \emph{planted bisection model} -- the standard class of graphs for the analysis of minimum bisection and related problems. In 2001 Feige and Kilian posed the question of whether Boppana's algorithm works well in the semirandom model of Blum and Spencer. In our paper we answer this question affirmatively. We also show that the algorithm achieves similar performance on graph classes which extend the semirandom model. Since the behavior of Boppana's algorithm on semirandom graphs remained unknown, Feige and Kilian proposed a new semidefinite programming (SDP) based approach and proved that it works on this model. The relationship between the performance of the SDP-based algorithm and Boppana's approach was left as an open problem. In this paper we solve the problem completely by proving that the bisection algorithm of Feige and Kilian provides exactly the same results as Boppana's algorithm. As a consequence, we get that Boppana's algorithm achieves the optimal threshold for exact cluster recovery in the \emph{stochastic block model}. On the other hand, we prove some limitations of Boppana's approach: we show that if the density difference in the parameters of the planted bisection model is too small, then the algorithm fails with high probability in the model.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Deep Residual Learning for Accelerated MRI using Magnitude and Phase Networks
Accelerated magnetic resonance (MR) scan acquisition with compressed sensing (CS) and parallel imaging is a powerful method to reduce MR imaging scan time. However, many reconstruction algorithms have high computational costs. To address this, we investigate deep residual learning networks to remove aliasing artifacts from artifact corrupted images. The proposed deep residual learning networks are composed of magnitude and phase networks that are separately trained. If both phase and magnitude information are available, the proposed algorithm can work as an iterative k-space interpolation algorithm using framelet representation. When only magnitude data is available, the proposed approach works as an image domain post-processing algorithm. Even with strong coherent aliasing artifacts, the proposed network successfully learned and removed the aliasing artifacts, whereas current parallel and CS reconstruction methods were unable to remove these artifacts. Comparisons using single and multiple coils show that the proposed residual network provides good reconstruction results with orders of magnitude faster computational time than existing compressed sensing methods. The proposed deep learning framework may have great potential for accelerated MR reconstruction by generating accurate results immediately.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Quantification of market efficiency based on informational-entropy
Since the 1960s, the question of whether markets are efficient or not has been controversially discussed. One reason for the difficulty of overcoming the controversy is the lack of a universal, but also precise, quantitative definition of efficiency that is able to graduate between different states of efficiency. The main purpose of this article is to fill this gap by developing a measure for the efficiency of markets that fulfills all the stated requirements. It is shown that the new definition of efficiency, based on informational-entropy, is equivalent to the two most used definitions of efficiency from Fama and Jensen. The new measure therefore enables steps to settle the dispute over the state of efficiency in markets. Moreover, it is shown that inefficiency in a market can either arise from the possibility of using information to predict an event with higher than chance level, or can emerge from wrong pricing/quotes that do not reflect the right probabilities of possible events. Finally, the calculation of efficiency is demonstrated on a simple game (of coin tossing), to show how one could exactly quantify the efficiency in any market-like system, if all probabilities are known.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
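The abstract above builds an efficiency measure on informational entropy and demonstrates it on a coin-tossing game. A small, generic illustration of the entropy calculation such a measure starts from (the paper's exact normalization into an efficiency score is not reproduced here):

```python
# Shannon entropy of a (possibly biased) coin, the building block of an
# information-based efficiency measure.  The mapping from entropy to the
# paper's efficiency score is not reproduced here.
import numpy as np

def shannon_entropy(probabilities) -> float:
    p = np.asarray(probabilities, dtype=float)
    p = p[p > 0]                      # 0 * log(0) is taken to be 0
    return float(-(p * np.log2(p)).sum())

fair = shannon_entropy([0.5, 0.5])    # 1 bit: outcomes are unpredictable
biased = shannon_entropy([0.9, 0.1])  # < 1 bit: outcomes partly predictable
print(f"fair coin: {fair:.3f} bits, biased coin: {biased:.3f} bits")
```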
A critical analysis of resampling strategies for the regularized particle filter
We analyze the performance of different resampling strategies for the regularized particle filter regarding parameter estimation. We show in particular, building on analytical insight obtained in the linear Gaussian case, that resampling systematically can prevent the filtered density from converging towards the true posterior distribution. We discuss several means to overcome this limitation, including kernel bandwidth modulation, and provide evidence that the resulting particle filter clearly outperforms traditional bootstrap particle filters. Our results are supported by numerical simulations on a linear textbook example, the logistic map and a non-linear plant growth model.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Geometry of the free-sliding Bernoulli beam
If a variational problem comes with no boundary conditions prescribed beforehand, and yet these arise as a consequence of the variation process itself, we speak of a free boundary values variational problem. Such is, for instance, the problem of finding the shortest curve whose endpoints can slide along two prescribed curves. There exists a rigorous geometric way to formulate this sort of problems on smooth manifolds with boundary, which we review here in a friendly self-contained way. As an application, we study a particular free boundary values variational problem, the free-sliding Bernoulli beam.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Note on equivalences for degenerations of Calabi-Yau manifolds
This note studies the equivalencies among convergences of Ricci-flat Kähler-Einstein metrics on Calabi-Yau manifolds, cohomology classes and potential functions.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Improving DNN-based Music Source Separation using Phase Features
Music source separation with deep neural networks typically relies only on amplitude features. In this paper we show that additional phase features can improve the separation performance. Using the theoretical relationship between STFT phase and amplitude, we conjecture that derivatives of the phase are a good feature representation as opposed to the raw phase. We verify this conjecture experimentally and propose a new DNN architecture which combines amplitude and phase. This joint approach achieves a better signal-to-distortion ratio on the DSD100 dataset for all instruments compared to a network that uses only amplitude features. In particular, the bass instrument benefits from the phase information.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Rapid, User-Transparent, and Trustworthy Device Pairing for D2D-Enabled Mobile Crowdsourcing
Mobile Crowdsourcing is a promising service paradigm utilizing ubiquitous mobile devices to facilitate large-scale crowdsourcing tasks (e.g. urban sensing and collaborative computing). Many applications in this domain require Device-to-Device (D2D) communications between participating devices for interactive operations such as task collaborations and file transmissions. Considering the private participating devices and their opportunistic encountering behaviors, it is highly desired to establish secure and trustworthy D2D connections in a fast and autonomous way, which is vital for implementing practical Mobile Crowdsourcing Systems (MCSs). In this paper, we develop an efficient scheme, Trustworthy Device Pairing (TDP), which achieves user-transparent secure D2D connections and reliable peer device selections for trustworthy D2D communications. Through rigorous analysis, we demonstrate the effectiveness and security intensity of TDP in theory. The performance of TDP is evaluated based on both real-world prototype experiments and extensive trace-driven simulations. Evaluation results verify our theoretical analysis and show that TDP significantly outperforms existing approaches in terms of pairing speed, stability, and security.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
liquidSVM: A Fast and Versatile SVM package
liquidSVM is a package written in C++ that provides SVM-type solvers for various classification and regression tasks. Because of a fully integrated hyper-parameter selection, very carefully implemented solvers, multi-threading and GPU support, and several built-in data decomposition strategies it provides unprecedented speed for small training sizes as well as for data sets of tens of millions of samples. Besides the C++ API and a command line interface, bindings to R, MATLAB, Java, Python, and Spark are available. We present a brief description of the package and report experimental comparisons to other SVM packages.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Thermalizing sterile neutrino dark matter
Sterile neutrinos produced through oscillations are a well motivated dark matter candidate, but recent constraints from observations have ruled out most of the parameter space. We analyze the impact of new interactions on the evolution of keV sterile neutrino dark matter in the early Universe. Based on general considerations we find a mechanism which thermalizes the sterile neutrinos after an initial production by oscillations. The thermalization of sterile neutrinos is accompanied by dark entropy production which increases the yield of dark matter and leads to a lower characteristic momentum. This resolves the growing tensions with structure formation and X-ray observations and even revives simple non-resonant production as a viable way to produce sterile neutrino dark matter. We investigate the parameters required for the realization of the thermalization mechanism in a representative model and find that a simple estimate based on energy- and entropy conservation describes the mechanism well.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Information Elicitation for Bayesian Auctions
In this paper we design information elicitation mechanisms for Bayesian auctions. While in Bayesian mechanism design the distributions of the players' private types are often assumed to be common knowledge, information elicitation considers the situation where the players know the distributions better than the decision maker. To weaken the information assumption in Bayesian auctions, we consider an information structure where the knowledge about the distributions is arbitrarily scattered among the players. In such an unstructured information setting, we design mechanisms for unit-demand auctions and additive auctions that aggregate the players' knowledge, generating revenues that are constant approximations to the optimal Bayesian mechanisms with a common prior. Our mechanisms are 2-step dominant-strategy truthful and the revenue increases gracefully with the amount of knowledge the players collectively have.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Partially chaotic orbits in a perturbed cubic force model
Three types of orbits are theoretically possible in autonomous Hamiltonian systems with three degrees of freedom: fully chaotic (they only obey the energy integral), partially chaotic (they obey an additional isolating integral besides energy) and regular (they obey two isolating integrals besides energy). The existence of partially chaotic orbits has been denied by several authors, however, arguing either that there is a sudden transition from regularity to full chaoticity, or that a long enough follow up of a supposedly partially chaotic orbit would reveal a fully chaotic nature. This situation needs clarification, because partially chaotic orbits might play a significant role in the process of chaotic diffusion. Here we use numerically computed Lyapunov exponents to explore the phase space of a perturbed three dimensional cubic force toy model, and a generalization of the Poincaré maps to show that partially chaotic orbits are actually present in that model. They turn out to be double orbits joined by a bifurcation zone, which is the most likely source of their chaos, and they are encapsulated in regions of phase space bounded by regular orbits similar to each one of the components of the double orbit.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Measurable process selection theorem and non-autonomous inclusions
A semi-process is an analog of the semi-flow for non-autonomous differential equations or inclusions. We prove an abstract result on the existence of measurable semi-processes in the situations where there is no uniqueness. Also, we allow solutions to blow up in finite time and then obtain local semi-processes.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Just-infinite C*-algebras and their invariants
Just-infinite C*-algebras, i.e., infinite dimensional C*-algebras, whose proper quotients are finite dimensional, were investigated in [Grigorchuk-Musat-Rordam, 2016]. One particular example of a just-infinite residually finite dimensional AF-algebra was constructed in that article. In this paper we extend that construction by showing that each infinite dimensional metrizable Choquet simplex is affinely homeomorphic to the trace simplex of a just-infinite residually finite dimensional C*-algebra. The trace simplex of any unital residually finite dimensional C*-algebra is hence realized by a just-infinite one. We determine the trace simplex of the particular residually finite dimensional AF-algebra constructed in the above-mentioned article, and we show that it has precisely one extremal trace of type II_1. We give a complete description of the Bratteli diagrams corresponding to residually finite dimensional AF-algebras. We show that a modification of any such Bratteli diagram, similar to the modification that makes an arbitrary Bratteli diagram simple, will yield a just-infinite residually finite dimensional AF-algebra.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A Large-scale Dataset and Benchmark for Similar Trademark Retrieval
Trademark retrieval (TR) has become an important yet challenging problem due to an ever increasing trend in trademark applications and infringement incidents. There have been many promising attempts for the TR problem, which, however, proved impractical since they were evaluated with limited and mostly trivial datasets. In this paper, we provide a large-scale dataset with benchmark queries with which different TR approaches can be evaluated systematically. Moreover, we provide a baseline on this benchmark using the widely-used methods applied to TR in the literature. Furthermore, we identify and correct two important issues in TR approaches that were not addressed before: reversal of contrast and the presence of irrelevant text in trademarks, both of which severely affect TR methods. Lastly, we applied deep learning, namely, several popular Convolutional Neural Network models, to the TR problem. To the best of the authors' knowledge, this is the first attempt to do so.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Tracing Networks of Knowledge in the Digital Age
The emergence of new digital technologies has allowed the study of human behaviour at a scale and at a level of granularity that were unthinkable just a decade ago. In particular, by analysing the digital traces left by people interacting in the online and offline worlds, we are able to trace the spreading of knowledge and ideas at both local and global scales. In this article we will discuss how these digital traces can be used to map knowledge across the world, outlining both the limitations and the challenges in performing this type of analysis. We will focus on data collected from social media platforms, large-scale digital repositories and mobile data. Finally, we will provide an overview of the tools that are available to scholars and practitioners for understanding these processes using these emerging forms of data.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Observational evidence of galaxy assembly bias
We analyze the spectra of 300,000 luminous red galaxies (LRGs) with stellar masses $M_* \gtrsim 10^{11} M_{\odot}$ from the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS). By studying their star-formation histories, we find two main evolutionary paths converging into the same quiescent galaxy population at $z\sim0.55$. Fast-growing LRGs assemble $80\%$ of their stellar mass very early on ($z\sim5$), whereas slow-growing LRGs reach the same evolutionary state at $z\sim1.5$. Further investigation reveals that their clustering properties on scales of $\sim$1-30 Mpc are, at a high level of significance, also different. Fast-growing LRGs are found to be more strongly clustered and reside in overall denser large-scale structure environments than slow-growing systems, for a given stellar-mass threshold. Our results imply a dependence of clustering on stellar-mass assembly history (naturally connected to the mass-formation history of the corresponding halos) for a homogeneous population of similar mass and color, which constitutes strong observational evidence of galaxy assembly bias.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Data-Driven Tree Transforms and Metrics
We consider the analysis of high dimensional data given in the form of a matrix with columns consisting of observations and rows consisting of features. Often the data is such that the observations do not reside on a regular grid, and the given order of the features is arbitrary and does not convey a notion of locality. Therefore, traditional transforms and metrics cannot be used for data organization and analysis. In this paper, our goal is to organize the data by defining an appropriate representation and metric such that they respect the smoothness and structure underlying the data. We also aim to generalize the joint clustering of observations and features in the case the data does not fall into clear disjoint groups. For this purpose, we propose multiscale data-driven transforms and metrics based on trees. Their construction is implemented in an iterative refinement procedure that exploits the co-dependencies between features and observations. Beyond the organization of a single dataset, our approach enables us to transfer the organization learned from one dataset to another and to integrate several datasets together. We present an application to breast cancer gene expression analysis: learning metrics on the genes to cluster the tumor samples into cancer sub-types and validating the joint organization of both the genes and the samples. We demonstrate that using our approach to combine information from multiple gene expression cohorts, acquired by different profiling technologies, improves the clustering of tumor samples.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
NotiMind: Utilizing Responses to Smart Phone Notifications as Affective sensors
Today's mobile phone users are faced with large numbers of notifications on social media, ranging from new followers on Twitter and emails to messages received from WhatsApp and Facebook. These digital alerts continuously disrupt activities through instant calls for attention. This paper examines closely the way everyday users interact with notifications and their impact on users' emotion. Fifty users were recruited to download our application NotiMind and use it over a five-week period. Users' phones collected thousands of social and system notifications along with affect data collected via self-reported PANAS tests three times a day. Results showed a noticeable correlation between positive affective measures and keyboard activities. When large numbers of Post and Remove notifications occur, a corresponding increase in negative affective measures is detected. Our predictive model has achieved a good accuracy level using three different classifiers "in the wild" (F-measure 74-78% within-subject model, 72-76% global model). Our findings show that it is possible to automatically predict when people are experiencing positive, neutral or negative affective states based on interactions with notifications. We also show how our findings open the door to a wide range of applications in relation to emotion awareness on social and mobile communication.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Aktuelle Entwicklungen in der Automatischen Musikverfolgung
In this paper we present current trends in real-time music tracking (a.k.a. score following). Casually speaking, these algorithms "listen" to a live performance of music, compare the audio signal to an abstract representation of the score, and "read" along in the sheet music. In this way, at any given time the exact position of the musician(s) in the sheet music is computed. Here, we focus on the aspects of flexibility and usability of these algorithms. This comprises work on automatic identification and flexible tracking of the piece being played as well as current approaches based on Deep Learning. The latter enables direct learning of correspondences between complex audio data and images of the sheet music, avoiding the complicated and time-consuming definition of a mid-level representation. ----- This work deals with current developments in automatic music tracking by computer. These are algorithms that "listen" to a musical performance, compare the recorded audio signal with an (abstract) representation of the score and, so to speak, read along in it. At any point in time, the algorithm therefore knows the position of the musicians in the score. Besides giving a general overview, this work focuses on the aspects of flexibility and easier usability of these algorithms. It describes which steps have been taken (and are currently being taken) to make the process of automatic music tracking more easily accessible. This comprises work on the automatic identification of the pieces being played and their flexible tracking, as well as current approaches based on Deep Learning, which make it possible to connect image and sound directly, without the detour via abstract intermediate representations that can only be created at great expense of time.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Blind Demixing and Deconvolution at Near-Optimal Rate
We consider simultaneous blind deconvolution of r source signals from their noisy superposition, a problem also referred to as blind demixing and deconvolution. This signal processing problem occurs in the context of the Internet of Things where a massive number of sensors sporadically communicate only short messages over unknown channels. We show that robust recovery of message and channel vectors can be achieved via convex optimization when random linear encoding using i.i.d. complex Gaussian matrices is used at the devices and the number of required measurements at the receiver scales with the degrees of freedom of the overall estimation problem. Since the scaling is linear in r, our result significantly improves over recent works.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Visualization of the Classical Musical Tradition
A study of around 13,000 musical compositions from the Western classical tradition is carried out, spanning 33 major composers from the Baroque to the Romantic, with a focus on the usage of major/minor key signatures. A 2-dimensional chromatic diagram is proposed to succinctly visualize the data. The diagram is found to be useful not only in distinguishing style and period, but also in tracking the career development of a particular composer.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Multivariate Locally Stationary Wavelet Process Analysis with the mvLSW R Package
This paper describes the R package mvLSW. The package contains a suite of tools for the analysis of multivariate locally stationary wavelet (LSW) time series. Key elements include: (i) the simulation of multivariate LSW time series for a given multivariate evolutionary wavelet spectrum (EWS); (ii) estimation of the time-dependent multivariate EWS for a given time series; (iii) estimation of the time-dependent coherence and partial coherence between time series channels; and, (iv) estimation of approximate confidence intervals for multivariate EWS estimates. A demonstration of the package is presented via both a simulated example and a case study with EuStockMarkets from the datasets package. This paper has been accepted by the Journal of Statistical Software. The presented code extracts demonstrating the mvLSW package were run under version 1.2.1.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Using Posters to Recommend Anime and Mangas in a Cold-Start Scenario
Item cold-start is a classical issue in recommender systems that affects anime and manga recommendations as well. This problem can be framed as follows: how to predict whether a user will like a manga that received few ratings from the community? Content-based techniques can alleviate this issue but require extra information, that is usually expensive to gather. In this paper, we use a deep learning technique, Illustration2Vec, to easily extract tag information from the manga and anime posters (e.g., sword, or ponytail). We propose BALSE (Blended Alternate Least Squares with Explanation), a new model for collaborative filtering, that benefits from this extra information to recommend mangas. We show, using real data from an online manga recommender system called Mangaki, that our model improves substantially the quality of recommendations, especially for less-known manga, and is able to provide an interpretation of the taste of the users.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Generalized least squares can overcome the critical threshold in respondent-driven sampling
In order to sample marginalized and/or hard-to-reach populations, respondent-driven sampling (RDS) and similar techniques reach their participants via peer referral. Under a Markov model for RDS, previous research has shown that if the typical participant refers too many contacts, then the variance of common estimators does not decay like $O(n^{-1})$, where $n$ is the sample size. This implies that confidence intervals will be far wider than under a typical sampling design. Here we show that generalized least squares (GLS) can effectively reduce the variance of RDS estimates. In particular, a theoretical analysis indicates that the variance of the GLS estimator is $O(n^{-1})$. We then derive two classes of feasible GLS estimators. The first class is based upon a Degree Corrected Stochastic Blockmodel for the underlying social network. The second class is based upon a rank-two model. It might be of independent interest that in both model classes, the theoretical results show that it is possible to estimate the spectral properties of the population network from the sampled observations. Simulations on empirical social networks show that the feasible GLS (fGLS) estimators can have drastically smaller error and rarely increase the error. A diagnostic plot helps to identify where fGLS will aid estimation. The fGLS estimators continue to outperform standard estimators even when they are built from a misspecified model and when there is preferential recruitment.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
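The abstract above argues that generalized least squares reduces the variance of RDS estimates. As a reminder of the plain GLS estimator it builds on, $\hat\beta = (X^\top \Omega^{-1} X)^{-1} X^\top \Omega^{-1} y$, here is a small sketch with a made-up AR(1)-style error covariance standing in for the network-induced correlation (this is not the paper's feasible fGLS construction):

```python
# Textbook GLS estimator, beta_hat = (X' Omega^{-1} X)^{-1} X' Omega^{-1} y.
# This is NOT the paper's feasible fGLS for RDS; it only illustrates the
# generic estimator the abstract builds on, with a made-up covariance Omega.
import numpy as np

def gls(X: np.ndarray, y: np.ndarray, omega: np.ndarray) -> np.ndarray:
    """Generalized least squares with known error covariance `omega`."""
    omega_inv = np.linalg.inv(omega)
    xt_oi = X.T @ omega_inv
    return np.linalg.solve(xt_oi @ X, xt_oi @ y)

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# Correlated errors (e.g. induced by referral chains); AR(1)-style toy covariance.
rho = 0.6
omega = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
errors = rng.multivariate_normal(np.zeros(n), omega)
y = X @ np.array([2.0, 0.5]) + errors

print("GLS estimate:", gls(X, y, omega))
print("OLS estimate:", np.linalg.lstsq(X, y, rcond=None)[0])
```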
Electron-Hole Symmetry Breaking in Charge Transport in Nitrogen-Doped Graphene
Graphitic nitrogen-doped graphene is an excellent platform to study scattering processes of massless Dirac fermions by charged impurities, in which high mobility can be preserved due to the absence of lattice defects through direct substitution of carbon atoms in the graphene lattice by nitrogen atoms. In this work, we report on electrical and magnetotransport measurements of high-quality graphitic nitrogen-doped graphene. We show that the substitutional nitrogen dopants in graphene introduce atomically sharp scatterers for electrons but long-range Coulomb scatterers for holes and, thus, graphitic nitrogen-doped graphene exhibits clear electron-hole asymmetry in transport properties. Dominant scattering processes of charge carriers in graphitic nitrogen-doped graphene are analyzed. It is shown that the electron-hole asymmetry originates from a distinct difference in intervalley scattering of electrons and holes. We have also carried out the magnetotransport measurements of graphitic nitrogen-doped graphene at different temperatures and the temperature dependences of intervalley scattering, intravalley scattering and phase coherent scattering rates are extracted and discussed. Our results provide evidence for the electron-hole asymmetry in the intervalley scattering induced by substitutional nitrogen dopants in graphene and shine a light on versatile and potential applications of graphitic nitrogen-doped graphene in electronic and valleytronic devices.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Micro-sized cold atmospheric plasma source for brain and breast cancer treatment
Micro-sized cold atmospheric plasma (uCAP) has been developed to expand the applications of CAP in cancer therapy. In this paper, uCAP devices with different nozzle lengths were applied to investigate effects on both brain (glioblastoma U87) and breast (MDA-MB-231) cancer cells. Various diagnostic techniques were employed to evaluate the parameters of uCAP devices with different lengths such as potential distribution, electron density, and optical emission spectroscopy. The generation of short- and long-lived species (such as hydroxyl radical (.OH), superoxide (O2-), hydrogen peroxide (H2O2), nitrite (NO2-), etc.) was studied. These data revealed that uCAP treatment with a 20 mm length tube has a stronger effect than that of the 60 mm tube due to the synergetic effects of reactive species and free radicals. Reactive species generated by uCAP enhanced tumor cell death in a dose-dependent fashion and were not specific with regard to tumor cell type.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
How Many Random Seeds? Statistical Power Analysis in Deep Reinforcement Learning Experiments
Consistently checking the statistical significance of experimental results is one of the mandatory methodological steps to address the so-called "reproducibility crisis" in deep reinforcement learning. In this tutorial paper, we explain how the number of random seeds relates to the probabilities of statistical errors. For both the t-test and the bootstrap confidence interval test, we recall theoretical guidelines to determine the number of random seeds one should use to provide a statistically significant comparison of the performance of two algorithms. Finally, we discuss the influence of deviations from the assumptions usually made by statistical tests. We show that they can lead to inaccurate evaluations of statistical errors and provide guidelines to counter these negative effects. We make our code available to perform the tests.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
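The abstract above recalls the Welch t-test and the bootstrap confidence interval as tools for comparing two RL algorithms across random seeds. A minimal illustration with synthetic per-seed returns standing in for real experiment results (not the authors' released code):

```python
# Illustrative only: synthetic per-seed scores stand in for real experiment results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
algo_a = rng.normal(loc=100.0, scale=10.0, size=20)   # 20 random seeds per algorithm
algo_b = rng.normal(loc=105.0, scale=10.0, size=20)

# Welch's t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(algo_a, algo_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Bootstrap confidence interval for the difference in mean performance.
n_boot = 10_000
diffs = np.array([
    rng.choice(algo_b, size=algo_b.size, replace=True).mean()
    - rng.choice(algo_a, size=algo_a.size, replace=True).mean()
    for _ in range(n_boot)
])
low, high = np.percentile(diffs, [2.5, 97.5])
print(f"95% bootstrap CI for the mean difference: [{low:.2f}, {high:.2f}]")
```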
Training Quantized Nets: A Deeper Understanding
Currently, deep neural networks are deployed on low-power portable devices by first training a full-precision model using powerful hardware, and then deriving a corresponding low-precision model for efficient inference on such systems. However, training models directly with coarsely quantized weights is a key step towards learning on embedded platforms that have limited computing resources, memory capacity, and power consumption. Numerous recent publications have studied methods for training quantized networks, but these studies have mostly been empirical. In this work, we investigate training methods for quantized neural networks from a theoretical viewpoint. We first explore accuracy guarantees for training methods under convexity assumptions. We then look at the behavior of these algorithms for non-convex problems, and show that training algorithms that exploit high-precision representations have an important greedy search phase that purely quantized training methods lack, which explains the difficulty of training using low-precision arithmetic.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
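The abstract above contrasts purely quantized training with methods that keep a high-precision representation. A toy sketch of the latter idea on a linear least-squares problem: the forward pass uses sign-quantized weights while updates accumulate in a full-precision copy (a BinaryConnect-style scheme used here only for illustration, not the algorithms analyzed in the paper):

```python
# Toy illustration of training with quantized weights while keeping a
# full-precision copy for the updates (BinaryConnect-style); a generic
# sketch, not the algorithms analyzed in the paper.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(256, 8))
true_w = rng.normal(size=8)
y = X @ np.sign(true_w)                       # targets realizable with binary weights

w_full = np.zeros(8)                          # high-precision accumulator
lr = 0.01
for _ in range(500):
    w_q = np.sign(w_full + 1e-12)             # quantized weights used in the forward pass
    grad = 2 * X.T @ (X @ w_q - y) / len(X)   # gradient evaluated at the quantized weights
    w_full -= lr * grad                       # ...applied to the full-precision copy

loss = float(np.mean((X @ np.sign(w_full + 1e-12) - y) ** 2))
print(f"squared error with quantized weights: {loss:.4f}")
```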
Some Sharpening and Generalizations of a result of T. J. Rivlin
Let $p(z)=a_0+a_1z+a_2z^2+a_3z^3+\cdots+a_nz^n$ be a polynomial of degree $n$. Rivlin \cite{Rivlin} proved that if $p(z)\neq 0$ in the unit disk, then for $0<r\leq 1$, $\displaystyle{\max_{|z| = r}|p(z)|} \geq \Big(\dfrac{r+1}{2}\Big)^n \displaystyle{\max_{|z|=1} |p(z)|}.$ In this paper, we prove a sharpening and generalization of this result, and show by means of examples that for some polynomials our result can significantly improve the bound obtained by Rivlin's Theorem.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
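Rivlin's inequality quoted above is easy to check numerically. A quick sanity check for an arbitrarily chosen polynomial with no zeros in the unit disk, $p(z) = (z-2)(z+3)$:

```python
# Numerical sanity check of Rivlin's inequality for a polynomial with no
# zeros in the unit disk; the polynomial here is an arbitrary choice.
import numpy as np

def max_on_circle(coeffs, r, m=4000):
    z = r * np.exp(2j * np.pi * np.arange(m) / m)
    return np.abs(np.polyval(coeffs, z)).max()

coeffs = np.polymul([1, -2], [1, 3])   # p(z) = (z - 2)(z + 3), degree n = 2
n = len(coeffs) - 1
for r in (0.25, 0.5, 0.75, 1.0):
    lhs = max_on_circle(coeffs, r)
    rhs = ((r + 1) / 2) ** n * max_on_circle(coeffs, 1.0)
    print(f"r={r:.2f}: max|p| on |z|=r is {lhs:.3f}, bound is {rhs:.3f}, holds: {lhs >= rhs}")
```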
Computational Aided Design for Generating a Modular, Lightweight Car Concept
Developing an appropriate design process for a conceptual model is a stepping stone toward designing car bodies. This paper presents a methodology to design a lightweight and modular space frame chassis for a sedan electric car. Dual-phase high-strength steel with improved mechanical properties is employed to reduce the weight of the car body. Utilizing finite element analysis yields two models in order to predict the performance of each component. The first model is a beam structure with a rapid response in structural stiffness simulation. This model is used for performing the static tests including modal frequency, bending stiffness and torsional stiffness evaluation. The second model, i.e., a shell model, is proposed to illustrate every module's mechanical behavior as well as its crashworthiness efficiency. In order to perform the crashworthiness analysis, the explicit nonlinear dynamic solver provided by ABAQUS, a commercial finite element software, is used. The results of the finite element beam and shell models are in line with the concept design specifications. Implementation of this procedure leads to a lightweight and modular concept for an electric car.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Reaction-Diffusion Systems in Epidemiology
A key problem in modelling the evolution dynamics of infectious diseases is the mathematical representation of the mechanism of transmission of the contagion. Models with a finite number of subpopulations can be described via systems of ordinary differential equations. When dealing with populations with space structure the relevant quantities are spatial densities, whose evolution in time requires nonlinear partial differential equations, which are known as reaction-diffusion systems. Here we present an (historical) outline of mathematical epidemiology, with particular attention to the role of spatial heterogeneity and dispersal in the population dynamics of infectious diseases. Two specific examples are discussed, which have been the subject of intensive research by the authors, i.e. man-environment-man epidemics, and malaria. In addition to the epidemiological relevance of these epidemics all over the world, their treatment requires a large number of different sophisticated mathematical methods, and has even posed new non-trivial mathematical problems, as one can realize from the list of references. One of the most relevant problems posed by the authors, i.e. regional control, has been emphasized here: the public health concern consists of eradicating the disease in the relevant population, as fast as possible. On the other hand, very often the entire domain of interest for the epidemic is either unknown, or difficult to manage for an affordable implementation of suitable environmental programmes. For regional control instead it might be sufficient to implement such programmes only in a given subregion conveniently chosen so as to lead to an effective (exponentially fast) eradication of the epidemic in the whole habitat; it is evident that this practice may have enormous importance in real cases with respect to both financial and practical affordability.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Is charge order induced near an antiferromagnetic quantum critical point?
We investigate the interplay between charge order and superconductivity near an antiferromagnetic quantum critical point using sign-problem-free Quantum Monte Carlo simulations. We establish that, when the electronic dispersion is particle-hole symmetric, the system has an emergent SU(2) symmetry that implies a degeneracy between $d$-wave superconductivity and charge order with $d$-wave form factor. Deviations from particle-hole symmetry, however, rapidly lift this degeneracy, despite the fact that the SU(2) symmetry is preserved at low energies. As a result, we find a strong suppression of charge order caused by the competing, leading superconducting instability. Across the antiferromagnetic phase transition, we also observe a shift in the charge order wave-vector from diagonal to axial. We discuss the implications of our results to the universal phase diagram of antiferromagnetic quantum-critical metals and to the elucidation of the charge order experimentally observed in the cuprates.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Commissioning and performance results of the WFIRST/PISCES integral field spectrograph
The Prototype Imaging Spectrograph for Coronagraphic Exoplanet Studies (PISCES) is a high-contrast integral field spectrograph (IFS) whose design was driven by WFIRST coronagraph instrument requirements. We present commissioning and operational results using PISCES as a camera on the High Contrast Imaging Testbed at JPL. PISCES has demonstrated the ability to achieve high-contrast spectral retrieval with flight-like data reduction and analysis techniques.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Metal nanospheres under intense continuous wave illumination - a unique case of non-perturbative nonlinear nanophotonics
We show that the standard perturbative (i.e., cubic) description of the thermal nonlinear response of small metal nanospheres to intense continuous wave illumination is insufficient already beyond temperature rises of a few tens of degrees. In some cases, a cubic-quintic nonlinear response is sufficient to describe accurately the intensity dependence of the temperature, permittivity and field, while in other cases, a full non-perturbative description is required. We further analyze the relative importance of the various contributions to the thermal nonlinearity, identify a qualitative difference between Au and Ag, and show that the thermo-optical nonlinearity of the host typically plays a minor role, but its thermal conductivity is important.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Revealing strong bias in common measures of galaxy properties using new inclination-independent structures
Accurate measurement of galaxy structures is a prerequisite for quantitative investigation of galaxy properties or evolution. Yet, the impact of galaxy inclination and dust on commonly used metrics of galaxy structure is poorly quantified. We use infrared data sets to select inclination-independent samples of disc and flattened elliptical galaxies. These samples show strong variation in Sérsic index, concentration, and half-light radii with inclination. We develop novel inclination-independent galaxy structures by collapsing the light distribution in the near-infrared on to the major axis, yielding inclination-independent `linear' measures of size and concentration. With these new metrics we select a sample of Milky Way analogue galaxies with similar stellar masses, star formation rates, sizes and concentrations. Optical luminosities, light distributions, and spectral properties are all found to vary strongly with inclination: When inclining to edge-on, $r$-band luminosities dim by $>$1 magnitude, sizes decrease by a factor of 2, `dust-corrected' estimates of star formation rate drop threefold, metallicities decrease by 0.1 dex, and edge-on galaxies are half as likely to be classified as star forming. These systematic effects should be accounted for in analyses of galaxy properties.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Behind Every Great Tree is a Great (Phylogenetic) Network
In Francis and Steel (2015), it was shown that there exist non-trivial networks on $4$ leaves upon which the distance metric affords a metric on a tree which is not the base tree of the network. In this paper we extend this result in two directions. We show that for any tree $T$ there exists a family of non-trivial HGT networks $N$ for which the distance metric $d_N$ affords a metric on $T$. We additionally provide a class of networks on any number of leaves upon which the distance metric affords a metric on a tree which is not the base tree of the network. The family of networks consists of "floating" networks, a subclass of a novel family of networks introduced in this paper, and referred to as "versatile" networks. Versatile networks are then characterised. Additionally, we find a lower bound for the number of `useful' HGT arcs in such networks, in a sense explained in the paper. This lower bound is equal to the number of HGT arcs required for each floating network in the main results, and thus our networks are minimal in this sense.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Cosmic-ray induced destruction of CO in star-forming galaxies
We explore the effects of the expected higher cosmic ray (CR) ionization rates $\zeta_{\rm CR}$ on the abundances of carbon monoxide (CO), atomic carbon (C), and ionized carbon (C$^+$) in the H$_2$ clouds of star-forming galaxies. The study of Bisbas et al. (2015) is expanded by: a) using realistic inhomogeneous Giant Molecular Cloud (GMC) structures, b) a detailed chemical analysis behind the CR-induced destruction of CO, and c) exploring the thermal state of CR-irradiated molecular gas. CRs permeating the interstellar medium with $\zeta_{\rm CR}$$\gtrsim 10\times$(Galactic) are found to significantly reduce the [CO]/[H$_2$] abundance ratios throughout the mass of a GMC. CO rotational line imaging will then show much clumpier structures than the actual ones. For $\zeta_{\rm CR}$$\gtrsim 100\times$(Galactic) this bias becomes severe, limiting the utility of CO lines for recovering structural and dynamical characteristics of H$_2$-rich galaxies throughout the Universe, including many of the so-called Main Sequence (MS) galaxies where the bulk of cosmic star formation occurs. Both C$^+$ and C abundances increase with rising $\zeta_{\rm CR}$, with C remaining the most abundant of the two throughout H$_2$ clouds, when $\zeta_{\rm CR}\sim (1-100)\times$(Galactic). C$^+$ starts to dominate for $\zeta_{\rm CR}$$\gtrsim 10^3\times$(Galactic). The thermal state of the gas in the inner and denser regions of GMCs is invariant with $T_{\rm gas}\sim 10\,{\rm K}$ for $\zeta_{\rm CR}\sim (1-10)\times$(Galactic). For $\zeta_{\rm CR}$$\sim 10^3\times$(Galactic) this is no longer the case and $T_{\rm gas}\sim 30-50\,{\rm K}$ are reached. Finally we identify OH as the key species whose $T_{\rm gas}-$sensitive abundance could mitigate the destruction of CO at high temperatures.
0
1
0
0
0
0
News Session-Based Recommendations using Deep Neural Networks
News recommender systems aim to personalize users' experiences and help them discover relevant articles in a large and dynamic search space. The news domain is a challenging scenario for recommendations due to sparse user profiling, a fast-growing number of items, accelerated decay of item value, and dynamic shifts in user preferences. Some promising results have recently been achieved by the use of Deep Learning techniques in Recommender Systems, especially for item feature extraction and for session-based recommendations with Recurrent Neural Networks. In this paper, we propose an instantiation of CHAMELEON -- a Deep Learning Meta-Architecture for News Recommender Systems. This architecture is composed of two modules: the first is responsible for learning news article representations, based on their text and metadata, and the second provides session-based recommendations using Recurrent Neural Networks. The recommendation task addressed in this work is next-item prediction for user sessions: "what is the next most likely article a user might read in a session?" Session context is leveraged by the architecture to provide additional information in such an extreme cold-start scenario of news recommendation. Users' behavior and item features are both merged in a hybrid recommendation approach. A temporal offline evaluation method is also proposed as a complementary contribution, for a more realistic evaluation of this task, considering dynamic factors that affect global readership interests such as popularity, recency, and seasonality. Experiments with an extensive number of session-based recommendation methods were performed, and the proposed instantiation of the CHAMELEON meta-architecture obtained a significant relative improvement in top-n accuracy and ranking metrics (10% on Hit Rate and 13% on MRR) over the best benchmark methods.
0
0
0
1
0
0
Online Nonparametric Anomaly Detection based on Geometric Entropy Minimization
We consider the online and nonparametric detection of abrupt and persistent anomalies, such as a change in the regular system dynamics at a time instance due to an anomalous event (e.g., a failure or a malicious activity). Combining the simplicity of the nonparametric Geometric Entropy Minimization (GEM) method with the timely detection capability of the Cumulative Sum (CUSUM) algorithm, we propose a computationally efficient online anomaly detection method that is applicable to high-dimensional datasets and at the same time achieves a near-optimum average detection delay for a given false alarm constraint. We provide new insights into both GEM and CUSUM, including a new asymptotic analysis for GEM, which enables soft decisions for outlier detection, and a novel interpretation of CUSUM in terms of discrepancy theory, which helps us generalize it to the nonparametric GEM statistic. We numerically show, using both simulated and real datasets, that the proposed nonparametric algorithm attains performance close to the clairvoyant parametric CUSUM test.
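To make the flavor of such a combination concrete, the following minimal Python sketch couples a generic nearest-neighbour anomaly score (a stand-in for the GEM statistic) with a CUSUM-style recursion; the score, the drift offset, and the threshold are illustrative assumptions, not the authors' exact algorithm or tuning.

import numpy as np

def knn_score(x, nominal, k=5):
    # Distance to the k-th nearest nominal point: larger means more anomalous.
    d = np.sort(np.linalg.norm(nominal - x, axis=1))
    return d[k - 1]

def cusum_detect(stream, nominal, k=5, drift=0.1, threshold=5.0):
    # Accumulate positive drift of the anomaly score (CUSUM-style recursion)
    # and raise an alarm when the statistic crosses the threshold.
    w, alarms = 0.0, []
    baseline = np.median([knn_score(x, nominal, k) for x in nominal])
    for t, x in enumerate(stream):
        s = knn_score(x, nominal, k) - baseline - drift
        w = max(0.0, w + s)
        if w > threshold:
            alarms.append(t)
            w = 0.0  # restart after an alarm
    return alarms

rng = np.random.default_rng(0)
nominal = rng.normal(size=(200, 3))
stream = np.vstack([rng.normal(size=(100, 3)),           # nominal regime
                    rng.normal(loc=3.0, size=(50, 3))])  # persistent change
print(cusum_detect(stream, nominal))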
0
0
0
1
0
0
Learning latent structure of large random graphs
In this paper, we estimate the distribution of hidden node weights in large random graphs from the observation of very few edge weights. In this very sparse setting, the first non-asymptotic risk bounds for maximum likelihood estimators (MLE) are established. The proof relies on the construction of a graphical model encoding conditional dependencies that is extremely efficient for studying n-regular graphs obtained using a round-robin scheduling. This graphical model allows us to prove geometric loss-of-memory properties and deduce the asymptotic behavior of the likelihood function. Following a classical construction in learning theory, the asymptotic likelihood is used to define a measure of performance for the MLE. Risk bounds for the MLE are finally obtained via subgaussian deviation results derived from concentration inequalities for Markov chains applied to our graphical model.
0
0
1
1
0
0
Loop Tiling in Large-Scale Stencil Codes at Run-time with OPS
The key common bottleneck in most stencil codes is data movement, and prior research has shown that improving data locality through optimisations that schedule across loops do particularly well. However, in many large PDE applications it is not possible to apply such optimisations through compilers because there are many options, execution paths and data per grid point, many dependent on run-time parameters, and the code is distributed across different compilation units. In this paper, we adapt the data locality improving optimisation called iteration space slicing for use in large OPS applications both in shared-memory and distributed-memory systems, relying on run-time analysis and delayed execution. We evaluate our approach on a number of applications, observing speedups of 2$\times$ on the Cloverleaf 2D/3D proxy application, which contain 83/141 loops respectively, $3.5\times$ on the linear solver TeaLeaf, and $1.7\times$ on the compressible Navier-Stokes solver OpenSBLI. We demonstrate strong and weak scalability up to 4608 cores of CINECA's Marconi supercomputer. We also evaluate our algorithms on Intel's Knights Landing, demonstrating maintained throughput as the problem size grows beyond 16GB, and we do scaling studies up to 8704 cores. The approach is generally applicable to any stencil DSL that provides per loop data access information.
1
0
0
0
0
0
Second differentials in the Quillen spectral sequence
For an algebraic variety $X$ we introduce generalized first Chern classes, which are defined for coherent sheaves on $X$ with support in codimension $p$ and take values in $CH^p(X)$. We use them to provide an explicit formula for the differentials ${d_2^p: E_2^{p,-p-1} \to E_2^{p+2, -p-2} \cong CH^{p+2}(X)}$ in the Quillen spectral sequence.
0
0
1
0
0
0
General Dynamics of Spinors
In this paper, we consider a general twisted-curved space-time hosting Dirac spinors and we take into account the Lorentz covariant polar decomposition of the Dirac spinor field: the corresponding decomposition of the Dirac spinor field equation leads to a set of field equations that are real and where spinorial components have disappeared while still maintaining Lorentz covariance. We will see that the Dirac spinor will contain two real scalar degrees of freedom, the module and the so-called Yvon-Takabayashi angle, and we will display their field equations. This will permit us to study the coupling of curvature and torsion respectively to the module and the YT angle.
0
1
0
0
0
0
PT-Spike: A Precise-Time-Dependent Single Spike Neuromorphic Architecture with Efficient Supervised Learning
One of the most exciting advancements in AI over the last decade is the wide adoption of ANNs, such as DNNs and CNNs, in many real-world applications. However, the underlying massive amounts of computation and storage requirements greatly challenge their applicability in resource-limited platforms such as drones, mobile phones, and IoT devices. The third generation of neural network model--the Spiking Neural Network (SNN), inspired by the working mechanism and efficiency of the human brain, has emerged as a promising solution for achieving more impressive computing and power efficiency within lightweight devices (e.g. a single chip). However, the relevant research activities have been narrowly carried out on conventional rate-based spiking system designs for fulfilling practical cognitive tasks, underestimating SNN's energy efficiency, throughput, and system flexibility. Although the time-based SNN can be more attractive conceptually, its potential is not unleashed in realistic applications due to the lack of efficient coding and practical learning schemes. In this work, a Precise-Time-Dependent Single Spike Neuromorphic Architecture, namely "PT-Spike", is developed to bridge this gap. Three constituent hardware-favorable techniques: precise single-spike temporal encoding, efficient supervised temporal learning, and fast asymmetric decoding, are proposed accordingly to boost the energy efficiency and data processing capability of the time-based SNN at a more compact neural network model size when executing real cognitive tasks. Simulation results show that "PT-Spike" demonstrates significant improvements in network size, processing efficiency and power consumption with marginal classification accuracy degradation when compared with the rate-based SNN and ANN under similar network configurations.
0
0
0
0
1
0
Learning to Represent Edits
We introduce the problem of learning distributed representations of edits. By combining a "neural editor" with an "edit encoder", our models learn to represent the salient information of an edit and can be used to apply edits to new inputs. We experiment on natural language and source code edit data. Our evaluation yields promising results that suggest that our neural network models learn to capture the structure and semantics of edits. We hope that this interesting task and data source will inspire other researchers to work further on this problem.
1
0
0
0
0
0
Semiclassical Prediction of Large Spectral Fluctuations in Interacting Kicked Spin Chains
While plenty of results have been obtained for single-particle quantum systems with chaotic dynamics through a semiclassical theory, much less is known about quantum chaos in the many-body setting. We contribute to recent efforts to make a semiclassical analysis of many-body systems feasible. This is nontrivial due to both the enormous density of states and the exponential proliferation of periodic orbits with the number of particles. As a model system we study kicked interacting spin chains, employing semiclassical methods supplemented by a newly developed duality approach. We show that for this model the line between integrability and chaos becomes blurred. Due to the interaction structure the system features (non-isolated) manifolds of periodic orbits possessing highly correlated, collective dynamics. As with the invariant tori in integrable systems, their presence leads to significantly enhanced spectral fluctuations, which by order of magnitude lie in between the integrable and chaotic cases.
0
1
0
0
0
0
If it ain't broke, don't fix it: Sparse metric repair
Many modern data-intensive computational problems either require, or benefit from, distance or similarity data that adhere to a metric. The algorithms run faster or have better performance guarantees. Unfortunately, in real applications, the data are messy and values are noisy, and the distances between the data points are far from satisfying a metric. Indeed, there are a number of different algorithms for finding the closest set of distances to the given ones that also satisfy a metric (sometimes with the extra condition of being Euclidean). These algorithms can have unintended consequences: they can change a large number of the original data points and alter many other features of the data. The goal of sparse metric repair is to make as few changes as possible to the original data set or underlying distances so as to ensure the resulting distances satisfy the properties of a metric. In other words, we seek to minimize the sparsity (or the $\ell_0$ "norm") of the changes we make to the distances, subject to the new distances satisfying a metric. We give three different combinatorial algorithms to repair a metric sparsely. In one setting the algorithm is guaranteed to return the sparsest solution, and in the other settings the algorithms repair the metric. Without prior information, the algorithms run in time proportional to the cube of the number of input data points; with prior information, the running time can be reduced considerably.
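As a point of reference for what "repairing a metric" means in code, the sketch below applies a classical decrease-only repair: replacing every distance by the corresponding all-pairs shortest-path distance, which always restores the triangle inequality. This is only a baseline illustration, not one of the paper's three sparsity-aware algorithms.

import numpy as np

def decrease_only_repair(D):
    """Replace each entry of a symmetric dissimilarity matrix by the
    all-pairs shortest-path distance (Floyd-Warshall).  The result always
    satisfies the triangle inequality, and only entries that violated it
    are decreased -- a simple decrease-only repair, not the paper's
    sparsest-possible algorithm."""
    D = D.astype(float).copy()
    n = D.shape[0]
    for k in range(n):
        D = np.minimum(D, D[:, [k]] + D[[k], :])
    return D

D = np.array([[0, 1, 9],
              [1, 0, 1],
              [9, 1, 0]], dtype=float)    # 9 > 1 + 1 violates the triangle inequality
R = decrease_only_repair(D)
print(R)                                  # the 9 is repaired to 2
print(int(np.count_nonzero(R != D) // 2)) # number of changed distances (here 1)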
1
0
0
1
0
0
Optimal one-shot quantum algorithm for EQUALITY and AND
We study the computational complexity of Boolean functions in the quantum black box model. In this model our task is to compute a function $f:\{0,1\}^n\to\{0,1\}$ on an input $x\in\{0,1\}^n$ that can be accessed by querying the black box. Quantum algorithms are inherently probabilistic; we are interested in the lowest possible probability that the algorithm outputs an incorrect answer (the error probability) for a fixed number of queries. We show that the lowest possible error probability for $AND_n$ and $EQUALITY_{n+1}$ is $1/2-n/(n^2+1)$.
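For concreteness, the stated bound can be evaluated at small input sizes; the limit also shows that the advantage over random guessing vanishes as $n$ grows:
\[
\varepsilon(n)=\frac12-\frac{n}{n^2+1},\qquad
\varepsilon(2)=\frac12-\frac25=\frac1{10},\qquad
\varepsilon(3)=\frac12-\frac{3}{10}=\frac15,\qquad
\varepsilon(n)\to\frac12\ \text{as}\ n\to\infty.
\]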
1
0
0
0
0
0
Photon-gated spin transistor
Spin-polarized field-effect transistor (spin-FET), where a dielectric layer is generally employed for electrical gating as in the traditional FET, stands out as a seminal spintronic device under the miniaturization trend of electronics. It would be fundamentally transformative if optical gating were used for the spin-FET. We report a new type of spin-polarized field-effect transistor (spin-FET) with optical gating, which is fabricated by partial exposure of the (La,Sr)MnO3 channel to light-emitting diode (LED) light. The manipulation of the channel conductivity is ascribed to the enhanced scattering of the spin-polarized current by photon-excited antiparallel aligned spins. The photon-gated spin-FET shows strong light power dependence and reproducible enhancement of resistance under light illumination, indicating well-defined conductivity cycling features. Our finding would enrich the concept of the spin-FET and promote the use of optical means in spintronics for low power consumption and ultrafast data processing.
0
1
0
0
0
0
Polarisation of submillimetre lines from interstellar medium
Magnetic fields play important roles in many astrophysical processes. However, there is no universal diagnostic for magnetic fields in the interstellar medium (ISM) and each magnetic tracer has its limitations. Any new detection method is thus valuable. Theoretical studies have shown that submillimetre fine-structure lines are polarised due to atomic alignment by ultraviolet (UV) photon-excitation, which opens up a new avenue to probe interstellar magnetic fields. We, for the first time, perform synthetic observations on the simulated three-dimensional ISM to demonstrate the measurability of the polarisation of submillimetre atomic lines. The maximum polarisation for different absorption and emission lines expected from various sources, including star-forming regions (SFRs), is provided. Our results demonstrate that the polarisation of submillimetre atomic lines is a powerful magnetic tracer and adds great value to observational studies in submillimetre astronomy.
0
1
0
0
0
0
Two bosonic quantum walkers in one-dimensional optical lattices
Dynamical properties of two bosonic quantum walkers in a one-dimensional lattice are studied theoretically. Depending on the initial state, interactions, lattice tilting, and lattice disorder, a whole plethora of different behaviors is observed. In particular, it is shown that the two-boson system manifests many-body-localization-like behavior in the presence of a quenched disorder. The whole analysis is based on a specific decomposition of the temporal density profile into different contributions from singly and doubly occupied sites. In this way, the role of interactions is extracted. Since the contributions can be directly measured in experiments with ultra-cold atoms in optical lattices, the predictions presented may have some importance for upcoming experiments.
0
1
0
0
0
0
Photometric Redshifts with the LSST: Evaluating Survey Observing Strategies
In this paper we present and characterize a nearest-neighbors color-matching photometric redshift estimator that features a direct relationship between the precision and accuracy of the input magnitudes and the output photometric redshifts. This aspect makes our estimator an ideal tool for evaluating the impact of changes to LSST survey parameters that affect the measurement errors of the photometry, which is the main motivation of our work (i.e., it is not intended to provide the "best" photometric redshifts for LSST data). We show how the photometric redshifts will improve with time over the 10-year LSST survey and confirm that the nominal distribution of visits per filter provides the most accurate photo-$z$ results. The LSST survey strategy naturally produces observations over a range of airmass, which offers the opportunity of using an SED- and $z$-dependent atmospheric effect on the observed photometry as a color-independent redshift indicator. We show that measuring this airmass effect and including it as a prior has the potential to improve the photometric redshifts and can ameliorate extreme outliers, but that it will only be adequately measured for the brightest galaxies, which limits its overall impact on LSST photometric redshifts. We furthermore demonstrate how this airmass effect can induce a bias in the photo-$z$ results, and caution against survey strategies that prioritize high-airmass observations for the purpose of improving this prior. Ultimately, we intend for this work to serve as a guide for the expectations and preparations of the LSST science community with regard to the minimum quality of photo-$z$ as the survey progresses.
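To illustrate the general idea of a nearest-neighbours colour-matching photo-$z$ estimator (not the specific estimator or training set used in the paper), here is a minimal Python sketch in which a galaxy's redshift is estimated from the reference objects with the most similar error-normalised colours; the toy colour-redshift relation and the error model are assumptions for illustration only.

import numpy as np

def photoz_colour_match(colours, colour_errs, ref_colours, ref_z, k=20):
    """Estimate a redshift for each galaxy by matching its colours to a
    reference set with known redshifts, weighting neighbours by the
    error-normalised colour distance.  A minimal stand-in for a
    nearest-neighbour colour-matching photo-z estimator."""
    zs = np.empty(len(colours))
    for i, (c, e) in enumerate(zip(colours, colour_errs)):
        chi2 = np.sum(((ref_colours - c) / e) ** 2, axis=1)
        nn = np.argsort(chi2)[:k]
        w = np.exp(-0.5 * (chi2[nn] - chi2[nn].min()))
        zs[i] = np.average(ref_z[nn], weights=w)
    return zs

rng = np.random.default_rng(1)
ref_z = rng.uniform(0.1, 1.5, size=5000)
ref_colours = np.column_stack([0.8 * ref_z, -0.3 * ref_z + 0.5])  # toy colour-z relation
err = 0.05
obs = ref_colours[:100] + rng.normal(scale=err, size=(100, 2))
z_est = photoz_colour_match(obs, np.full((100, 2), err), ref_colours, ref_z)
print(np.std(z_est - ref_z[:100]))   # scatter of the toy photo-z estimates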
0
1
0
0
0
0
Modelling diverse sources of Clostridium difficile in the community: importance of animals, infants and asymptomatic carriers
Clostridium difficile infections (CDIs) affect patients in hospitals and in the community, but the relative importance of transmission in each setting is unknown. We developed a mathematical model of C. difficile transmission in a hospital and surrounding community that included infants, adults, and transmission from animal reservoirs. We assessed the role of these transmission routes in maintaining disease and evaluated the recommended classification system for hospital- and community-acquired CDIs. The reproduction number in the hospital was <1 (range: 0.16-0.46) for all scenarios. Outside the hospital, the reproduction number was >1 for nearly all scenarios without transmission from animal reservoirs (range: 1.0-1.34). However, the reproduction number for the human population was <1 if a minority (>3.5-26.0%) of human exposures originated from animal reservoirs. Symptomatic adults accounted for <10% of transmission in the community. Under conservative assumptions, infants accounted for 17% of community transmission. An estimated 33-40% of community-acquired cases were reported, but 28-39% of these reported cases were misclassified as hospital-acquired by recommended definitions. Transmission could be plausibly sustained by asymptomatically colonized adults and infants in the community or by exposure to animal reservoirs, but not by hospital transmission alone. Underreporting of community-onset cases and systematic misclassification underplay the role of community transmission.
0
0
0
0
1
0
Quivers with potentials for cluster varieties associated to braid semigroups
Let $C$ be a simply laced generalized Cartan matrix. Given an element $b$ of the generalized braid semigroup related to $C$, we construct a collection of mutation-equivalent quivers with potentials. A quiver with potential in such a collection corresponds to an expression of $b$ in terms of the standard generators. For two expressions that differ by a braid relation, the corresponding quivers with potentials are related by a mutation. The main application of this result is a construction of a family of $CY_3$ $A_\infty$-categories associated to elements of the braid semigroup related to $C$. In particular, we construct a canonical (up to equivalence) $CY_3$ $A_\infty$-category associated to the quotient of any double Bruhat cell $G^{u,v}/{\rm Ad} H$ in a simply laced reductive Lie group $G$. We describe the full set of parameters these categories depend on by defining a 2-dimensional CW-complex and proving that the set of parameters is identified with the second cohomology group of this complex.
0
0
1
0
0
0
Effect of stellar flares on the upper atmospheres of HD 189733b and HD 209458b
Stellar flares are a frequent occurrence on young low-mass stars around which many detected exoplanets orbit. Flares are energetic, impulsive events, and their impact on exoplanetary atmospheres needs to be taken into account when interpreting transit observations. We have developed a model to describe the upper atmosphere of Extrasolar Giant Planets (EGPs) orbiting flaring stars. The model simulates thermal escape from the upper atmospheres of close-in EGPs. Ionisation by solar radiation and electron impact is included and photochemical and diffusive transport processes are simulated. This model is used to study the effect of stellar flares from the solar-like G star HD209458 and the young K star HD189733 on their respective planets. A hypothetical HD209458b-like planet orbiting the active M star AU Mic is also simulated. We find that the neutral upper atmosphere of EGPs is not significantly affected by typical flares. Therefore, stellar flares alone would not cause large enough changes in planetary mass loss to explain the variations in HD189733b transit depth seen in previous studies, although we show that it may be possible that an extreme stellar proton event could result in the required mass loss. Our simulations do however reveal an enhancement in electron number density in the ionosphere of these planets, the peak of which is located in the layer where stellar X-rays are absorbed. Electron densities are found to reach 2.2 to 3.5 times pre-flare levels and enhanced electron densities last from about 3 to 10 hours after the onset of the flare. The strength of the flare and the width of its spectral energy distribution affect the range of altitudes that see enhancements in ionisation. A large broadband continuum component in the XUV portion of the flaring spectrum in very young flare stars, such as AU Mic, results in a broad range of altitudes affected in planets orbiting this star.
0
1
0
0
0
0
Gross-Hopkins Duals of Higher Real K-theory Spectra
We determine the Gross-Hopkins duals of certain higher real K-theory spectra. More specifically, let p be an odd prime, and consider the Morava E-theory spectrum of height n=p-1. It is known in expert circles that for certain finite subgroups G of the Morava stabilizer group, the homotopy fixed point spectra E_n^{hG} are Gross-Hopkins self-dual up to a shift. In this paper, we determine the shift for those finite subgroups G which contain p-torsion. This generalizes previous results for n=2 and p=3.
0
0
1
0
0
0
A Projection Method for Metric-Constrained Optimization
We outline a new approach for solving optimization problems which enforce triangle inequalities on output variables. We refer to this as metric-constrained optimization, and give several examples where problems of this form arise in machine learning applications and theoretical approximation algorithms for graph clustering. Although these problems are interesting from a theoretical perspective, they are challenging to solve in practice due to the high memory requirements of black-box solvers. In order to address this challenge we first prove that the metric-constrained linear program relaxation of correlation clustering is equivalent to a special case of the metric nearness problem. We then develop a general solver for metric-constrained linear and quadratic programs by generalizing and improving a simple projection algorithm originally developed for metric nearness. We give several novel approximation guarantees for using our framework to find lower bounds for optimal solutions to several challenging graph clustering problems. We also demonstrate the power of our framework by solving optimization problems involving up to $10^8$ variables and $10^{11}$ constraints.
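The projection idea can be illustrated with a small Python sketch: cyclically project a dissimilarity matrix onto the half-space of each violated triangle inequality. This plain cyclic-projection variant only finds a nearby feasible metric and omits the correction terms and the linear/quadratic-programming machinery the paper actually develops; it is a simplified stand-in, not the proposed solver.

import numpy as np
from itertools import combinations

def cyclic_triangle_projection(D, sweeps=100):
    """Cyclically project a symmetric dissimilarity matrix onto the half-space
    of each violated triangle inequality d_ij <= d_ik + d_kj.  Plain cyclic
    projection drives D toward a nearby metric; it is only an illustrative
    stand-in for the Dykstra-corrected projection method generalised in the
    paper."""
    M = D.astype(float).copy()
    n = M.shape[0]
    for _ in range(sweeps):
        changed = False
        for i, j in combinations(range(n), 2):
            for k in range(n):
                if k in (i, j):
                    continue
                v = M[i, j] - M[i, k] - M[k, j]
                if v > 1e-12:                       # constraint violated
                    M[i, j] = M[j, i] = M[i, j] - v / 3.0
                    M[i, k] = M[k, i] = M[i, k] + v / 3.0
                    M[k, j] = M[j, k] = M[k, j] + v / 3.0
                    changed = True
        if not changed:
            break
    return M

D = np.array([[0, 2, 9, 4],
              [2, 0, 3, 7],
              [9, 3, 0, 1],
              [4, 7, 1, 0]], dtype=float)
print(np.round(cyclic_triangle_projection(D), 2))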
1
0
0
1
0
0
Calibrating the Planck Cluster Mass Scale with Cluster Velocity Dispersions
We measure the Planck cluster mass bias using dynamical mass measurements based on velocity dispersions of a subsample of 17 Planck-detected clusters. The velocity dispersions were calculated using redshifts determined from spectra obtained at Gemini observatory with the GMOS multi-object spectrograph. We correct our estimates for effects due to finite aperture, Eddington bias and correlated scatter between velocity dispersion and the Planck mass proxy. The result for the mass bias parameter, $(1-b)$, depends on the value of the galaxy velocity bias $b_v$ adopted from simulations: $(1-b)=(0.51\pm0.09) b_v^3$. Using a velocity bias of $b_v=1.08$ from Munari et al., we obtain $(1-b)=0.64\pm 0.11$, i.e., an error of 17% on the mass bias measurement with 17 clusters. This mass bias value is consistent with most previous weak lensing determinations. It lies within $1\sigma$ of the value needed to reconcile the Planck cluster counts with the Planck primary CMB constraints. We emphasize that uncertainty in the velocity bias severely hampers precision measurements of the mass bias using velocity dispersions. On the other hand, when we fix the Planck mass bias using the constraints from Penna-Lima et al., based on weak lensing measurements, we obtain a positive velocity bias $b_v \gtrsim 0.9$ at $3\sigma$.
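For readers tracing the quoted numbers, the dependence on the velocity bias combines as follows (a simple arithmetic check, with the uncertainty scaled by the same factor):
\[
(1-b)=(0.51\pm0.09)\,b_v^3,\qquad b_v=1.08\;\Rightarrow\;b_v^3\simeq1.26,\qquad
(1-b)\simeq 0.51\times1.26\pm0.09\times1.26\approx0.64\pm0.11 .
\]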
0
1
0
0
0
0
Curie: Policy-based Secure Data Exchange
Data sharing among partners---users, organizations, companies---is crucial for the advancement of data analytics in many domains. Sharing through secure computation and differential privacy allows these partners to perform private computations on their sensitive data in controlled ways. However, in reality, there exist complex relationships among members: politics, regulations, interest, trust, and differing data demands and needs are among the many reasons. Thus, there is a need for a mechanism to accommodate these conflicting relationships in data sharing. This paper presents Curie, an approach to exchanging data among members whose relationships are complex. The CPL policy language, which allows members to define the specifications of data exchange requirements, is introduced. Members (partners) assert who and what to exchange through their local policies and negotiate a global sharing agreement. The agreement is implemented in a multi-party computation that guarantees sharing among members will comply with the policy as negotiated. The use of Curie is validated through an example health care application built on recently introduced secure multi-party computation and differential privacy frameworks, and policy and performance trade-offs are explored.
1
0
0
0
0
0
Boundedness in a fully parabolic chemotaxis system with nonlinear diffusion and sensitivity, and logistic source
In this paper we study the zero-flux chemotaxis-system \begin{equation*} \begin{cases} u_{ t}=\nabla \cdot ((u+1)^{m-1} \nabla u-(u+1)^\alpha \chi(v)\nabla v) + ku-\mu u^2 & x\in \Omega, t>0, \\ v_{t} = \Delta v-vu & x\in \Omega, t>0,\\ \end{cases} \end{equation*} $\Omega$ being a bounded and smooth domain of $\mathbb{R}^n$, $n\geq 1$, and where $m,k \in \mathbb{R}$, $\mu>0$ and $\alpha < \frac{m+1}{2}$. For any $v\geq 0$ the chemotactic sensitivity function is assumed to behave as the prototype $\chi(v) = \frac{\chi_0}{(1+av)^2}$, with $a\geq 0$ and $\chi_0>0$. We prove that for nonnegative and sufficiently regular initial data $u(x,0)$ and $v(x,0),$ the corresponding initial-boundary value problem admits a global bounded classical solution provided $\mu$ is large enough.
0
0
1
0
0
0
Gemini/GMOS Transmission Spectral Survey: Complete Optical Transmission Spectrum of the hot Jupiter WASP-4b
We present the complete optical transmission spectrum of the hot Jupiter WASP-4b from 440-940 nm at R ~ 400-1500 obtained with the Gemini Multi-Object Spectrograph (GMOS); this is the first result from a comparative exoplanetology survey program of close-in gas giants conducted with GMOS. WASP-4b has an equilibrium temperature of 1700 K and is favorable to study in transmission due to a large scale height (370 km). We derive the transmission spectrum of WASP-4b using 4 transits observed with the MOS technique. We demonstrate repeatable results across multiple epochs with GMOS, and derive a combined transmission spectrum at a precision about twice the photon noise, which is roughly equal to one atmospheric scale height. The transmission spectrum is well fitted with a uniform opacity as a function of wavelength. The uniform opacity and absence of a Rayleigh slope from molecular hydrogen suggest that the atmosphere is dominated by clouds with condensate grain size of ~1 um. This result is consistent with previous observations of hot Jupiters, since clouds have been seen in planets with equilibrium temperatures similar to that of WASP-4b. We describe a custom pipeline that we have written to reduce GMOS time-series data of exoplanet transits, and present a thorough analysis of the dominant noise sources in GMOS, which primarily consist of wavelength- and time-dependent displacements of the spectra on the detector, mainly due to a lack of atmospheric dispersion correction.
0
1
0
0
0
0
From Infinite to Finite Programs: Explicit Error Bounds with Applications to Approximate Dynamic Programming
We consider linear programming (LP) problems in infinite dimensional spaces that are in general computationally intractable. Under suitable assumptions, we develop an approximation bridge from the infinite-dimensional LP to tractable finite convex programs in which the performance of the approximation is quantified explicitly. To this end, we adopt recent developments in two areas of randomized optimization and first order methods, leading to a priori as well as a posteriori performance guarantees. We illustrate the generality and implications of our theoretical results in the special case of the long-run average cost and discounted cost optimal control problems for Markov decision processes on Borel spaces. The applicability of the theoretical results is demonstrated through a constrained linear quadratic optimal control problem and a fisheries management problem.
1
0
1
0
0
0
An overview and comparative analysis of Recurrent Neural Networks for Short Term Load Forecasting
The key component in forecasting demand and consumption of resources in a supply network is an accurate prediction of real-valued time series. Indeed, both service interruptions and resource waste can be reduced with the implementation of an effective forecasting system. Significant research has thus been devoted to the design and development of methodologies for short term load forecasting over the past decades. A class of mathematical models, called Recurrent Neural Networks, is nowadays gaining renewed interest among researchers and is replacing many practical implementations of forecasting systems previously based on static methods. Despite the undeniable expressive power of these architectures, their recurrent nature complicates their understanding and poses challenges in the training procedures. Recently, new important families of recurrent architectures have emerged and their applicability in the context of load forecasting has not yet been investigated completely. In this paper we perform a comparative study on the problem of Short-Term Load Forecasting, using different classes of state-of-the-art Recurrent Neural Networks. We test the reviewed models first on controlled synthetic tasks and then on different real datasets, covering important practical cases of study. We provide a general overview of the most important architectures and we define guidelines for configuring the recurrent networks to predict real-valued time series.
1
0
0
0
0
0
Periods of abelian differentials and dynamics
Given a closed oriented surface S we describe those cohomology classes which appear as the period characters of abelian differentials for some choice of complex structure on S consistent with the orientation. The proof is based upon Ratner's solution of Raghunathan's conjecture.
0
0
1
0
0
0
Existence of regular solutions for a certain type of non-Newtonian Navier-Stokes equations
We are concerned with existence of regular solutions for non-Newtonian fluids in dimension three. For a certain type of non-Newtonian fluids we prove local existence of unique regular solutions, provided that the initial data are sufficiently smooth. Moreover, if the $H^3$-norm of initial data is sufficiently small, then the regular solution exists globally in time.
0
0
1
0
0
0
Multiferroic Quantum Criticality
The zero-temperature limit of a continuous phase transition is marked by a quantum critical point, which can generate exotic physics that extends to elevated temperatures. Magnetic quantum criticality is now well known, and has been explored in systems ranging from heavy fermion metals to quantum Ising materials. Ferroelectric quantum critical behaviour has also been recently established, motivating a flurry of research investigating its consequences. Here, we introduce the concept of multiferroic quantum criticality, in which both magnetic and ferroelectric quantum criticality occur in the same system. We develop the phenomenology of multiferroic quantum critical behaviour, describe the associated experimental signatures, and propose material systems and schemes to realize it.
0
1
0
0
0
0
Carbon Nanotube Wools Directly from CO2 by Molten Electrolysis: Value Driven Pathways to Carbon Dioxide Greenhouse Gas Mitigation
A comprehensive climate mitigation solution is presented through the first high yield, low energy synthesis of macroscopic length carbon nanotube (CNT) wool from CO2 by molten carbonate electrolysis, suitable for weaving into carbon composites and textiles. Growing CO2 concentrations and the concurrent climate change and species extinction can be addressed if CO2 becomes a sought resource rather than a greenhouse pollutant. Inexpensive carbon composites formed from CNT wool, as a lighter replacement for metals, textiles and cement, comprise a major market sink to compactly store transformed anthropogenic CO2. 100x-longer CNTs grow on Monel versus steel. Monel, electrolyte equilibration, and mixed metal nucleation facilitate the synthesis. CO2, the sole reactant in this transformation, is directly extractable from dilute (atmospheric) or concentrated sources, and is cost constrained only by the (low) cost of electricity. Today's $100K per ton CNT valuation incentivizes CO2 removal.
0
1
0
0
0
0
Rock-Paper-Scissors Random Walks on Temporal Multilayer Networks
We study diffusion on a multilayer network where the contact dynamics between the nodes is governed by a random process and where the waiting time distribution differs for edges from different layers. We study the impact on a random walk of the competition that naturally emerges between the edges of the different layers. In contrast to previous studies, which have imposed inter-layer competition a priori, the competition here is induced by the heterogeneity of the activity on the different layers. We first study the precedence relation between different edges, and by extension between different layers, and show that it determines biased paths for the walker. We also discuss the emergence of cyclic, rock-paper-scissors random walks when the precedence between layers is non-transitive. Finally, we numerically show the slowing-down effect due to the competition on a heterogeneous multilayer, as the walker is likely to be trapped for a longer time either on a single layer or on an oriented cycle. Keywords: random walks; multilayer networks; dynamical systems on networks; models of networks; simulations of networks; competition between layers.
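A toy simulation conveys how layer heterogeneity alone can bias the walk: at each node, every incident edge draws a waiting time from its layer's distribution and the walker follows whichever edge fires first. The exponential waiting times, layer rates, and the small two-layer graph below are assumptions for illustration; the paper treats more general temporal dynamics.

import numpy as np

def multilayer_walk(adjacency_per_layer, rates, start, steps, rng):
    """Continuous-time walk on a multilayer network: at each node, every
    incident edge in layer l draws an exponential waiting time with that
    layer's rate, and the walker follows whichever edge fires first."""
    node, trajectory = start, [start]
    for _ in range(steps):
        best_t, best_next = np.inf, node
        for A, rate in zip(adjacency_per_layer, rates):
            for nbr in np.flatnonzero(A[node]):
                t = rng.exponential(1.0 / rate)
                if t < best_t:
                    best_t, best_next = t, nbr
        node = best_next
        trajectory.append(node)
    return trajectory

rng = np.random.default_rng(0)
# two layers on 4 nodes: a ring (fast layer) and a single chord (slow layer)
ring  = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])
chord = np.array([[0,0,1,0],[0,0,0,0],[1,0,0,0],[0,0,0,0]])
print(multilayer_walk([ring, chord], rates=[1.0, 0.2], start=0, steps=10, rng=rng))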
1
0
0
0
0
0
Marginally compact fractal trees with semiflexibility
We study marginally compact macromolecular trees that are created by means of two different fractal generators. In doing so, we assume Gaussian statistics for the vectors connecting nodes of the trees. Moreover, we introduce bond-bond correlations that make the trees locally semiflexible. The symmetry of the structures allows an iterative construction of full sets of eigenmodes (notwithstanding the additional interactions that are present due to semiflexibility constraints), enabling us to get physical insights about the trees' behavior and to consider larger structures. Due to the local stiffness the self-contact density gets drastically reduced.
0
1
0
0
0
0
Deep Neural Networks for Physics Analysis on low-level whole-detector data at the LHC
There has been considerable recent activity applying deep convolutional neural nets (CNNs) to data from particle physics experiments. Current approaches on ATLAS/CMS have largely focussed on a subset of the calorimeter, and for identifying objects or particular particle types. We explore approaches that use the entire calorimeter, combined with track information, for directly conducting physics analyses: i.e. classifying events as known-physics background or new-physics signals. We use an existing RPV-Supersymmetry analysis as a case study and explore CNNs on multi-channel, high-resolution sparse images: applied on GPU and multi-node CPU architectures (including Knights Landing (KNL) Xeon Phi nodes) on the Cori supercomputer at NERSC.
1
0
0
0
0
0
Time-frequency analysis of ship wave patterns in shallow water: modelling and experiments
A spectrogram of a ship wake is a heat map that visualises the time-dependent frequency spectrum of surface height measurements taken at a single point as the ship travels by. Spectrograms are easy to compute and, if properly interpreted, have the potential to provide crucial information about various properties of the ship in question. Here we use geometrical arguments and analysis of an idealised mathematical model to identify features of spectrograms, concentrating on the effects of a finite-depth channel. Our results depend heavily on whether the flow regime is subcritical or supercritical. To support our theoretical predictions, we compare with data taken from experiments we conducted in a model test basin using a variety of realistic ship hulls. Finally, we note that vessels with a high aspect ratio appear to produce spectrogram data that contains periodic patterns. We can reproduce this behaviour in our mathematical model by using a so-called two-point wavemaker. These results highlight the role of wave interference effects in spectrograms of ship wakes.
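For readers unfamiliar with the measurement, a spectrogram of a single-point surface-height record can be computed in a few lines; the synthetic chirp-like signal, sampling rate, and window sizes below are placeholders rather than the experimental settings used in the paper.

import numpy as np
from scipy.signal import spectrogram

# A synthetic stand-in for a single-point wave-probe record: a chirp-like
# signal whose instantaneous frequency drifts as the ship passes, plus noise.
fs = 50.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 120, 1 / fs)
freq = 0.3 + 0.01 * t                       # slowly rising apparent frequency
height = np.sin(2 * np.pi * np.cumsum(freq) / fs) + 0.1 * np.random.randn(t.size)

f, tt, Sxx = spectrogram(height, fs=fs, nperseg=512, noverlap=448)
peak_freq = f[np.argmax(Sxx, axis=0)]       # ridge of the heat map at each time
print(peak_freq[:5])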
0
1
0
0
0
0
HARE: Supporting efficient uplink multi-hop communications in self-organizing LPWANs
The emergence of low-power wide area networks (LPWANs) as a new agent in the Internet of Things (IoT) will result in the incorporation into the digital world of low-automated processes from a wide variety of sectors. The single-hop conception of typical LPWAN deployments, though simple and robust, overlooks the self-organization capabilities of network devices, suffers from lack of scalability in crowded scenarios, and pays little attention to energy consumption. Aiming to make the most of devices' capabilities, the HARE protocol stack is proposed in this paper as a new LPWAN technology flexible enough to adopt uplink multi-hop communications when they prove energetically more efficient. Results from a real testbed show energy savings of up to 15% when using the multi-hop approach while keeping the same network reliability. The system's self-organizing capability and resilience have also been validated after performing numerous iterations of the association mechanism and deliberately switching off network devices.
1
0
0
0
0
0
Tensor tomography in periodic slabs
The X-ray transform on the periodic slab $[0,1]\times\mathbb T^n$, $n\geq0$, has a non-trivial kernel due to the symmetry of the manifold and presence of trapped geodesics. For tensor fields gauge freedom increases the kernel further, and the X-ray transform is not solenoidally injective unless $n=0$. We characterize the kernel of the geodesic X-ray transform for $L^2$-regular $m$-tensors for any $m\geq0$. The characterization extends to more general manifolds, twisted slabs, including the Möbius strip as the simplest example.
0
0
1
0
0
0
Central limit theorem for linear spectral statistics of general separable sample covariance matrices with applications
In this paper, we consider the separable covariance model, which plays an important role in wireless communications and spatio-temporal statistics and describes a process where the time correlation does not depend on the spatial location and the spatial correlation does not depend on time. We establish a central limit theorem for linear spectral statistics of general separable sample covariance matrices of the form $\mathbf S_n=\frac1n\mathbf T_{1n}\mathbf X_n\mathbf T_{2n}\mathbf X_n^*\mathbf T_{1n}^*$, where $\mathbf X_n=(x_{jk})$ is of dimension $m_1\times m_2$, the entries $\{x_{jk}, j=1,...,m_1, k=1,...,m_2\}$ are independent and identically distributed complex variables with zero means and unit variances, $\mathbf T_{1n}$ is a $p\times m_1$ complex matrix and $\mathbf T_{2n}$ is an $m_2\times m_2$ Hermitian matrix. We then apply this general central limit theorem to the problem of testing white noise in time series.
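A small numerical sketch of the matrix model and of a linear spectral statistic is given below; the particular choices of $\mathbf T_{1n}$, $\mathbf T_{2n}$, the test function, and the identification of $n$ with $m_2$ are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)
p, m1, m2 = 100, 120, 200
n = m2                         # the role of n is not pinned down here; n = m2 is assumed

T1 = rng.normal(size=(p, m1)) / np.sqrt(m1)           # p x m1 (real used for simplicity)
T2_half = rng.normal(size=(m2, m2)) / np.sqrt(m2)
T2 = T2_half @ T2_half.T                              # m2 x m2 Hermitian (here symmetric) PSD
X = rng.normal(size=(m1, m2))                         # i.i.d. entries, zero mean, unit variance

S = (T1 @ X @ T2 @ X.conj().T @ T1.conj().T) / n      # separable sample covariance matrix
eig = np.linalg.eigvalsh((S + S.conj().T) / 2)        # symmetrise against round-off

# A linear spectral statistic: sum of f(lambda_i) for a chosen test function f.
f = lambda x: np.log(1.0 + x)
print(np.sum(f(eig)))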
0
0
1
1
0
0
Malnormality and join-free subgroups in right-angled Coxeter groups
In this paper, we prove that all finitely generated malnormal subgroups of one-ended right-angled Coxeter groups are strongly quasiconvex and they are in particular quasiconvex when the ambient groups are hyperbolic. The key idea is to prove all infinite proper malnormal subgroups of one-ended right-angled Coxeter groups are join-free and then prove the strong quasiconvexity and the virtual freeness of these subgroups. We also study the subgroup divergence of join-free subgroups in right-angled Coxeter groups and compare them with the analogous subgroups in right-angled Artin groups. We characterize almost malnormal parabolic subgroups in terms of their defining graphs and also recognize them as strongly quasiconvex subgroups by the recent work of Genevois and Russell-Spriano-Tran. Finally, we discuss some results on hyperbolically embedded subgroups in right-angled Coxeter groups.
0
0
1
0
0
0
Mapping the Americanization of English in Space and Time
As global political preeminence gradually shifted from the United Kingdom to the United States, so did the capacity to culturally influence the rest of the world. In this work, we analyze how the world-wide varieties of written English are evolving. We study both the spatial and temporal variations of vocabulary and spelling of English using a large corpus of geolocated tweets and the Google Books datasets corresponding to books published in the US and the UK. The advantage of our approach is that we can address both standard written language (Google Books) and the more colloquial forms of microblogging messages (Twitter). We find that American English is the dominant form of English outside the UK and that its influence is felt even within the UK borders. Finally, we analyze how this trend has evolved over time and the impact that some cultural events have had in shaping it.
1
0
0
1
0
0
Robot Localisation and 3D Position Estimation Using a Free-Moving Camera and Cascaded Convolutional Neural Networks
Many works in collaborative robotics and human-robot interaction focus on identifying and predicting human behaviour while considering the information about the robot itself as given. This can be the case when sensors and the robot are calibrated in relation to each other and often the reconfiguration of the system is not possible, or extra manual work is required. We present a deep learning based approach to remove the constraint of having the robot and the vision sensor fixed and calibrated in relation to each other. The system learns the visual cues of the robot body and is able to localise it, as well as estimate the position of robot joints in 3D space, by just using a 2D color image. The method uses a cascaded convolutional neural network, and we present the structure of the network, describe our own collected dataset, and explain the network training and achieved results. A fully trained system shows promising results in providing an accurate mask of where the robot is located and a good estimate of its joint positions in 3D. The accuracy is not yet good enough for visual servoing applications; however, it can be sufficient for general safety and some collaborative tasks not requiring very high precision. The main benefit of our method is that the vision sensor can move freely. This allows it to be mounted on moving objects, for example, the body of a person or a mobile robot working in the same environment in which the robots are operating.
1
0
0
0
0
0
Meromorphic Jacobi Forms of Half-Integral Index and Umbral Moonshine Modules
In this work we consider an association of meromorphic Jacobi forms of half-integral index to the pure D-type cases of umbral moonshine, and solve the module problem for four of these cases by constructing vertex operator superalgebras that realise the corresponding meromorphic Jacobi forms as graded traces. We also present a general discussion of meromorphic Jacobi forms with half-integral index and their relationship to mock modular forms.
0
0
1
0
0
0
Active Hypothesis Testing: Beyond Chernoff-Stein
An active hypothesis testing problem is formulated. In this problem, the agent can perform a fixed number of experiments and then decide on one of the hypotheses. The agent is also allowed to declare its experiments inconclusive if needed. The objective is to minimize the probability of making an incorrect inference (misclassification probability) while ensuring that the true hypothesis is declared conclusively with moderately high probability. For this problem, lower and upper bounds on the optimal misclassification probability are derived and these bounds are shown to be asymptotically tight. In the analysis, a sub-problem, which can be viewed as a generalization of the Chernoff-Stein lemma, is formulated and analyzed. A heuristic approach to strategy design is proposed and its relationship with existing heuristic strategies is discussed.
1
0
1
1
0
0
Data-driven regularization of Wasserstein barycenters with an application to multivariate density registration
We present a framework to simultaneously align and smooth data in the form of multiple point clouds sampled from unknown densities with support in a $d$-dimensional Euclidean space. This work is motivated by applications in bioinformatics where researchers aim to automatically homogenize large datasets to compare and analyze characteristics within a same cell population. Inconveniently, the information acquired is most certainly noisy due to mis-alignment caused by technical variations of the environment. To overcome this problem, we propose to register multiple point clouds by using the notion of regularized barycenters (or Fréchet mean) of a set of probability measures with respect to the Wasserstein metric. A first approach consists in penalizing a Wasserstein barycenter with a convex functional as recently proposed in Bigot and al. (2018). A second strategy is to transform the Wasserstein metric itself into an entropy regularized transportation cost between probability measures as introduced in Cuturi (2013). The main contribution of this work is to propose data-driven choices for the regularization parameters involved in each approach using the Goldenshluger-Lepski's principle. Simulated data sampled from Gaussian mixtures are used to illustrate each method, and an application to the analysis of flow cytometry data is finally proposed.
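To give a concrete feel for the entropy-regularised transport ingredient (the Cuturi-style regularisation mentioned above), here is a minimal Sinkhorn sketch between two discrete point clouds; the cost matrix, regularisation strength and iteration count are arbitrary illustrative choices, and the barycenter, registration and Goldenshluger-Lepski steps of the paper are not reproduced.

import numpy as np

def sinkhorn(a, b, C, eps=0.05, iters=500):
    """Entropy-regularised optimal transport between discrete measures a and b
    with cost matrix C (Sinkhorn iterations).  Illustrates the regularised
    transport cost only; barycenter computation and data-driven parameter
    choice are not shown."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]           # transport plan
    return P, np.sum(P * C)                   # plan and transport cost

x = np.linspace(0, 1, 50)[:, None]
y = np.linspace(0, 1, 60)[:, None] + 0.1      # a shifted point cloud to align
a = np.full(50, 1 / 50)
b = np.full(60, 1 / 60)
C = (x - y.T) ** 2                            # squared-distance cost matrix
P, cost = sinkhorn(a, b, C)
print(cost)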
0
0
0
1
0
0
BCS quantum critical phenomena
Theoretically, we recently showed that the scaling relation between the transition temperature T_c and the superfluid density at zero temperature n_s(0) might exhibit a parabolic pattern [Scientific Reports 6 (2016) 23863]. It is significantly different from the linear scaling described by Homes' law, which is well known as a mean-field result. More recently, Bozovic et al. have observed such a parabolic scaling in overdoped copper oxides with a sufficiently low transition temperature T_c [Nature 536 (2016) 309-311]. They further point out that this experimental finding is incompatible with the standard Bardeen-Cooper-Schrieffer (BCS) description. Here we report that if T_c is sufficiently low, applying the renormalization group approach to the BCS action at zero temperature naturally leads to the parabolic scaling. Our result indicates that when T_c sufficiently approaches zero, quantum fluctuations will be overwhelmingly amplified so that the mean-field approximation may break down at zero temperature.
0
1
0
0
0
0
Action preserving (weak) topologies on the category of presheaves
Let $\mathcal{C}$ be a finitely complete small category. In this paper, we first construct two weak (Lawvere-Tierney) topologies on the category of presheaves. One of them is established by means of a subfunctor of the Yoneda functor and the other is constructed by an admissible class on $\mathcal{C}$ and the internal existential quantifier in the presheaf topos $\widehat{\mathcal{C}}$. Moreover, by using an admissible class on $\mathcal{C},$ we are able to define an action on the subobject classifier $\Omega$ of $\widehat{\mathcal{C}}$. We then find some necessary conditions for the two weak topologies, and also the double negation topology $\neg\neg$ on $\widehat{\mathcal{C}}$, to be action-preserving maps. Finally, among other things, we construct an action-preserving weak topology on $\widehat{\mathcal{C}}$.
0
0
1
0
0
0