title: string (lengths 7–239)
abstract: string (lengths 7–2.76k)
cs: int64 (0–1)
phy: int64 (0–1)
math: int64 (0–1)
stat: int64 (0–1)
quantitative biology: int64 (0–1)
quantitative finance: int64 (0–1)
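The schema above can be illustrated with a minimal, hypothetical pandas sketch (the example row and its labels are invented placeholders, not records from the dataset):

```python
import pandas as pd

# Hypothetical example row matching the schema above: two string columns
# plus six binary (0/1) int64 topic labels.  The values are invented
# placeholders, not taken from the dataset.
df = pd.DataFrame(
    {
        "title": ["An example paper title"],
        "abstract": ["An example abstract."],
        "cs": [1],
        "phy": [0],
        "math": [0],
        "stat": [1],
        "quantitative biology": [0],
        "quantitative finance": [0],
    }
)

label_cols = ["cs", "phy", "math", "stat",
              "quantitative biology", "quantitative finance"]
df[label_cols] = df[label_cols].astype("int64")

# Multi-label: a paper may carry several 0/1 flags at once;
# the row above is tagged both cs and stat.
assert df[label_cols].isin([0, 1]).all().all()
```

Note the columns are independent binary flags rather than a single categorical label, so per-row label sums can exceed one.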
Statistical Latent Space Approach for Mixed Data Modelling and Applications
The analysis of mixed data has raised challenges in statistics and machine learning. One of the two most prominent challenges is to develop new statistical techniques and methodologies that effectively handle mixed data by making the data less heterogeneous with minimal loss of information. The other challenge is that such methods must scale to large tasks involving huge amounts of mixed data. To tackle these challenges, we introduce parameter sharing and balancing extensions to our recent model, the mixed-variate restricted Boltzmann machine (MV.RBM), which can transform heterogeneous data into a homogeneous representation. We also integrate structured sparsity and distance metric learning into RBM-based models. Our proposed methods are applied in various applications, including latent patient profile modelling in medical data analysis and representation learning for image retrieval. The experimental results demonstrate that the models perform better than baseline methods on medical data and outperform state-of-the-art rivals on image datasets.
1
0
0
1
0
0
Near-field coupling of gold plasmonic antennas for sub-100 nm magneto-thermal microscopy
The development of spintronic technology with increasingly dense, high-speed, and complex devices will be accelerated by accessible microscopy techniques capable of probing magnetic phenomena on picosecond time scales and at deeply sub-micron length scales. A recently developed time-resolved magneto-thermal microscope provides a path towards this goal if it is augmented with a picosecond, nanoscale heat source. We theoretically study adiabatic nanofocusing and near-field heat induction using conical gold plasmonic antennas to generate sub-100 nm thermal gradients for time-resolved magneto-thermal imaging. Finite element calculations of antenna-sample interactions reveal focused electromagnetic loss profiles that are either peaked directly under the antenna or are annular, depending on the sample's conductivity, the antenna's apex radius, and the tip-sample separation. We find that the thermal gradient is confined to 40 nm to 60 nm full width at half maximum for realistic ranges of sample conductivity and apex radius. To mitigate this variation, which is undesirable for microscopy, we investigate the use of a platinum capping layer on top of the sample as a thermal transduction layer to produce heat uniformly across different sample materials. After determining the optimal capping layer thickness, we simulate the evolution of the thermal gradient in the underlying sample layer, and find that the temporal width is below 10 ps. These results lay a theoretical foundation for nanoscale, time-resolved magneto-thermal imaging.
0
1
0
0
0
0
A reproducible effect size is more useful than an irreproducible hypothesis test to analyze high throughput sequencing datasets
Motivation: P values derived from the null hypothesis significance testing framework are strongly affected by sample size, and are known to be irreproducible in underpowered studies, yet no suitable replacement has been proposed. Results: Here we present implementations of non-parametric standardized median effect size estimates, dNEF, for high-throughput sequencing datasets. Case studies are shown for transcriptome and tag-sequencing datasets. The dNEF measure is shown to be more reproducible and robust than P values and requires sample sizes as small as 3 to reproducibly identify differentially abundant features. Availability: Source code and binaries freely available at: this https URL, omicplotR, and this https URL.
0
0
0
0
1
0
High temperature thermodynamics of the honeycomb-lattice Kitaev-Heisenberg model: A high temperature series expansion study
We develop high temperature series expansions for the thermodynamic properties of the honeycomb-lattice Kitaev-Heisenberg model. Numerical results for uniform susceptibility, heat capacity and entropy as a function of temperature for different values of the Kitaev coupling $K$ and Heisenberg exchange coupling $J$ (with $|J|\le |K|$) are presented. These expansions show good convergence down to a temperature of a fraction of $K$ and in some cases down to $T=K/10$. In the Kitaev exchange dominated regime, the inverse susceptibility has a nearly linear temperature dependence over a wide temperature range. However, we show that already at temperatures $10$-times the Curie-Weiss temperature, the effective Curie-Weiss constant estimated from the data can be off by a factor of 2. We find that the magnitude of the heat capacity maximum at the short-range order peak is substantially smaller for small $J/K$ than for $J$ of order or larger than $K$. We suggest that this itself represents a simple marker for the relative importance of the Kitaev terms in these systems. Somewhat surprisingly, both heat capacity and susceptibility data on Na$_2$IrO$_3$ are consistent with a dominant {\it antiferromagnetic} Kitaev exchange constant of about $300-400$ $K$.
0
1
0
0
0
0
Laplace Beltrami operator in the Baran metric and pluripotential equilibrium measure: the ball, the simplex and the sphere
The Baran metric $\delta_E$ is a Finsler metric on the interior of $E\subset \mathbb{R}^n$ arising from Pluripotential Theory. We consider the few instances, namely $E$ being the ball, the simplex, or the sphere, where $\delta_E$ is known to be Riemannian, and we prove that the eigenfunctions of the associated Laplace Beltrami operator (with no boundary conditions) are the orthogonal polynomials with respect to the pluripotential equilibrium measure $\mu_E$ of $E$. We conjecture that this may hold in wider generality. The differential operators considered here have already been introduced in the framework of orthogonal polynomials and studied in connection with certain symmetry groups. In this work we instead highlight the relationships between orthogonal polynomials with respect to $\mu_E$ and the Riemannian structure naturally arising from Pluripotential Theory.
0
0
1
0
0
0
Magnetic polarons in a nonequilibrium polariton condensate
We consider a condensate of exciton-polaritons in a diluted magnetic semiconductor microcavity. Such a system may exhibit magnetic self-trapping in the case of sufficiently strong coupling between polaritons and magnetic ions embedded in the semiconductor. We investigate the effect of the nonequilibrium nature of exciton-polaritons on the physics of the resulting self-trapped magnetic polarons. We find that multiple polarons can exist at the same time, and derive a critical condition for self-trapping which is different from the one predicted previously in the equilibrium case. Using the Bogoliubov-de Gennes approximation, we calculate the excitation spectrum and provide a physical explanation in terms of the effective magnetic attraction between polaritons, mediated by the ion subsystem.
0
1
0
0
0
0
Inference in Sparse Graphs with Pairwise Measurements and Side Information
We consider the statistical problem of recovering a hidden "ground truth" binary labeling for the vertices of a graph up to low Hamming error from noisy edge and vertex measurements. We present new algorithms and a sharp finite-sample analysis for this problem on trees and sparse graphs with poor expansion properties such as hypergrids and ring lattices. Our method generalizes and improves over that of Globerson et al. (2015), who introduced the problem for two-dimensional grid lattices. For trees we provide a simple, efficient algorithm that infers the ground truth with optimal Hamming error, has optimal sample complexity, and implies recovery results for all connected graphs. Here, the presence of side information is critical to obtain a non-trivial recovery rate. We then show how to adapt this algorithm to tree decompositions of edge-subgraphs of certain graph families such as lattices, resulting in optimal recovery error rates that can be obtained efficiently. The thrust of our analysis is to 1) use the tree decomposition along with edge measurements to produce a small class of viable vertex labelings and 2) apply an analysis influenced by statistical learning theory to show that we can infer the ground truth from this class using vertex measurements. We show the power of our method in several examples including hypergrids, ring lattices, and the Newman-Watts model for small world graphs. For two-dimensional grids, our results improve over Globerson et al. (2015) by obtaining optimal recovery in the constant-height regime.
1
0
0
0
0
0
Oracle Importance Sampling for Stochastic Simulation Models
We consider the problem of estimating an expected outcome from a stochastic simulation model using importance sampling. We propose a two-stage procedure that involves a regression stage and a sampling stage to construct our estimator. We introduce a parametric and a nonparametric regression estimator in the first stage and study how the allocation between the two stages affects the performance of the final estimator. We derive the oracle property for both approaches. We analyze the empirical performance of our approaches using two simulated datasets and a case study on wind turbine reliability evaluation.
0
0
0
1
0
0
The Generalized Cross Validation Filter
Generalized cross validation (GCV) is one of the most important approaches used to estimate parameters in the context of inverse problems and regularization techniques. A notable example is the determination of the smoothness parameter in splines. When the data are generated by a state space model, as in the spline case, efficient algorithms are available to evaluate the GCV score with complexity that scales linearly in the data set size. However, these methods are not amenable to on-line applications since they rely on forward and backward recursions. Hence, if the objective has been evaluated at time $t-1$ and new data arrive at time $t$, then $O(t)$ operations are needed to update the GCV score. In this paper we instead show that the update cost is $O(1)$, thus paving the way to the on-line use of GCV. This result is obtained by deriving the novel GCV filter which extends the classical Kalman filter equations to efficiently propagate the GCV score over time. We also illustrate applications of the new filter in the context of state estimation and on-line regularized linear system identification.
1
0
0
1
0
0
Coherence of Biochemical Oscillations is Bounded by Driving Force and Network Topology
Biochemical oscillations are prevalent in living organisms. Systems with a small number of constituents cannot sustain coherent oscillations for an indefinite time because of fluctuations in the period of oscillation. We show that the number of coherent oscillations that quantifies the precision of the oscillator is universally bounded by the thermodynamic force that drives the system out of equilibrium and by the topology of the underlying biochemical network of states. Our results are valid for arbitrary Markov processes, which are commonly used to model biochemical reactions. We apply our results to a model for a single KaiC protein and to an activator-inhibitor model that consists of several molecules. From a mathematical perspective, based on strong numerical evidence, we conjecture a universal constraint relating the imaginary and real parts of the first non-trivial eigenvalue of a stochastic matrix.
0
1
0
0
0
0
How Do Classifiers Induce Agents To Invest Effort Strategically?
Algorithms are often used to produce decision-making rules that classify or evaluate individuals. When these individuals have incentives to be classified a certain way, they may behave strategically to influence their outcomes. We develop a model for how strategic agents can invest effort in order to change the outcomes they receive, and we give a tight characterization of when such agents can be incentivized to invest specified forms of effort into improving their outcomes as opposed to "gaming" the classifier. We show that whenever any "reasonable" mechanism can do so, a simple linear mechanism suffices.
0
0
0
1
0
0
Guiding Reinforcement Learning Exploration Using Natural Language
In this work we present a technique to use natural language to help reinforcement learning generalize to unseen environments. This technique uses neural machine translation, specifically the use of encoder-decoder networks, to learn associations between natural language behavior descriptions and state-action information. We then use this learned model to guide agent exploration using a modified version of policy shaping to make it more effective at learning in unseen environments. We evaluate this technique using the popular arcade game, Frogger, under ideal and non-ideal conditions. This evaluation shows that our modified policy shaping algorithm improves over a Q-learning agent as well as a baseline version of policy shaping.
1
0
0
1
0
0
Of the People: Voting Is More Effective with Representative Candidates
In light of the classic impossibility results of Arrow and Gibbard and Satterthwaite regarding voting with ordinal rules, there has been recent interest in characterizing how well common voting rules approximate the social optimum. In order to quantify the quality of approximation, it is natural to consider the candidates and voters as embedded within a common metric space, and to ask how much further the chosen candidate is from the population as compared to the socially optimal one. We use this metric preference model to explore a fundamental and timely question: does the social welfare of a population improve when candidates are representative of the population? If so, then by how much, and how does the answer depend on the complexity of the metric space? We restrict attention to the most fundamental and common social choice setting: a population of voters, two independently drawn candidates, and a majority rule election. When candidates are not representative of the population, it is known that the candidate selected by the majority rule can be thrice as far from the population as the socially optimal one. We examine how this ratio improves when candidates are drawn independently from the population of voters. Our results are two-fold: When the metric is a line, the ratio improves from $3$ to $4-2\sqrt{2}$, roughly $1.1716$; this bound is tight. When the metric is arbitrary, we show a lower bound of $1.5$ and a constant upper bound strictly better than $2$ on the approximation ratio of the majority rule. The positive result depends in part on the assumption that candidates are independent and identically distributed. However, we show that independence alone is not enough to achieve the upper bound: even when candidates are drawn independently, if the population of candidates can be different from the voters, then an upper bound of $2$ on the approximation is tight.
1
0
0
0
0
0
Cell Coverage Extension with Orthogonal Random Precoding for Massive MIMO Systems
In this paper, we investigate a coverage extension scheme based on orthogonal random precoding (ORP) for the downlink of massive multiple-input multiple-output (MIMO) systems. In this scheme, a precoding matrix consisting of orthogonal vectors is employed at the transmitter to enhance the maximum signal-to-interference-plus-noise ratio (SINR) of the user. To analyze and optimize the ORP scheme in terms of cell coverage, we derive the analytical expressions of the downlink coverage probability for two receiver structures, namely, the single-antenna (SA) receiver and multiple-antenna receiver with antenna selection (AS). The simulation results show that the analytical expressions accurately capture the coverage behaviors of the systems employing the ORP scheme. It is also shown that the optimal coverage performance is achieved when a single precoding vector is used under the condition that the threshold of the signal-to-noise ratio of the coverage is greater than one. The performance of the ORP scheme is further analyzed when different random precoder groups are utilized over multiple time slots to exploit precoding diversity. The numerical results show that the proposed ORP scheme over multiple time slots provides a substantial coverage gain over the space-time coding scheme despite its low feedback overhead.
1
0
1
0
0
0
Hidden Community Detection in Social Networks
We introduce a new paradigm that is important for community detection in the realm of network analysis. Networks contain a set of strong, dominant communities, which interfere with the detection of weak, natural community structure. When most of the members of the weak communities also belong to stronger communities, they are extremely hard to uncover. We call the weak communities the hidden community structure. We present a novel approach called HICODE (HIdden COmmunity DEtection) that identifies the hidden community structure as well as the dominant community structure. By weakening the strength of the dominant structure, one can uncover the hidden structure beneath. Likewise, by reducing the strength of the hidden structure, one can more accurately identify the dominant structure. In this way, HICODE tackles both tasks simultaneously. Extensive experiments on real-world networks demonstrate that HICODE outperforms several state-of-the-art community detection methods in uncovering both the dominant and the hidden structure. In the Facebook university social networks, we find multiple non-redundant sets of communities that are strongly associated with residential hall, year of registration, or career position of the faculty or students, while the state-of-the-art algorithms mainly locate the dominant ground truth category. Due to the difficulty of labeling all ground truth communities in real-world datasets, HICODE provides a promising approach to pinpoint the existing latent communities and uncover communities for which there is no ground truth. Finding this unknown structure is an extremely important community detection problem.
1
1
0
1
0
0
Two-photon exchange correction to the hyperfine splitting in muonic hydrogen
We reevaluate the Zemach, recoil and polarizability corrections to the hyperfine splitting in muonic hydrogen expressing them through the low-energy proton structure constants and obtain the precise values of the Zemach radius and two-photon exchange (TPE) contribution. The uncertainty of TPE correction to S energy levels in muonic hydrogen of 105 ppm exceeds the ppm accuracy level of the forthcoming 1S hyperfine splitting measurements at PSI, J-PARC and RIKEN-RAL.
0
1
0
0
0
0
Ising Models with Latent Conditional Gaussian Variables
Ising models describe the joint probability distribution of a vector of binary feature variables. Typically, not all the variables interact with each other and one is interested in learning the presumably sparse network structure of the interacting variables. However, in the presence of latent variables, the conventional method of learning a sparse model might fail. This is because the latent variables induce indirect interactions of the observed variables. In the case of only a few latent conditional Gaussian variables these spurious interactions contribute an additional low-rank component to the interaction parameters of the observed Ising model. Therefore, we propose to learn a sparse + low-rank decomposition of the parameters of an Ising model using a convex regularized likelihood problem. We show that the same problem can be obtained as the dual of a maximum-entropy problem with a new type of relaxation, where the sample means collectively need to match the expected values only up to a given tolerance. The solution to the convex optimization problem has consistency properties in the high-dimensional setting, where the number of observed binary variables and the number of latent conditional Gaussian variables are allowed to grow with the number of training samples.
1
0
0
1
0
0
Quasiparticles and charge transfer at the two surfaces of the honeycomb iridate Na$_2$IrO$_3$
Direct experimental investigations of the low-energy electronic structure of the Na$_2$IrO$_3$ iridate insulator are sparse and draw two conflicting pictures. One relies on flat bands and a clear gap, the other involves dispersive states approaching the Fermi level, pointing to surface metallicity. Here, by a combination of angle-resolved photoemission, photoemission electron microscopy, and x-ray absorption, we show that the correct picture is more complex and involves an anomalous band, arising from charge transfer from Na atoms to Ir-derived states. Bulk quasiparticles do exist, but in one of the two possible surface terminations the charge transfer is smaller and they remain elusive.
0
1
0
0
0
0
Breaking Bivariate Records
We establish a fundamental property of bivariate Pareto records for independent observations uniformly distributed in the unit square. We prove that the asymptotic conditional distribution of the number of records broken by an observation given that the observation sets a record is Geometric with parameter 1/2.
0
0
1
1
0
0
A Bag-of-Paths Node Criticality Measure
This work compares several node (and network) criticality measures quantifying to what extent each node is critical with respect to the communication flow between nodes of the network, and introduces a new measure based on the Bag-of-Paths (BoP) framework. Network disconnection simulation experiments show that the new BoP measure outperforms all the other measures on a sample of Erdos-Renyi and Albert-Barabasi graphs. Furthermore, a faster (still O(n^3)), approximate BoP criticality relying on the Sherman-Morrison rank-one update of a matrix is introduced for tackling larger networks. This approximate measure shows performance similar to the original, exact one.
1
1
0
0
0
0
Generation and analysis of lamplighter programs
We consider a programming language based on the lamplighter group that uses only composition and iteration as control structures. We derive generating functions and counting formulas for this language and special subsets of it, establishing lower and upper bounds on the growth rate of semantically distinct programs. Finally, we show how to sample random programs and analyze the distribution of runtimes induced by such sampling.
1
0
1
0
0
0
A Projection-Based Reformulation and Decomposition Algorithm for Global Optimization of a Class of Mixed Integer Bilevel Linear Programs
We propose an extended variant of the reformulation and decomposition algorithm for solving a special class of mixed-integer bilevel linear programs (MIBLPs) where continuous and integer variables are involved in both upper- and lower-level problems. In particular, we consider MIBLPs with upper-level constraints that involve lower-level variables. We assume that the inducible region is nonempty and all variables are bounded. By using the reformulation and decomposition scheme, an MIBLP is first converted into its equivalent single-level formulation and then solved by a column-and-constraint generation based decomposition algorithm. The solution procedure is enhanced by a projection strategy that does not require the relatively complete response property. To ensure its performance, we prove that our new method converges to the global optimal solution in a finite number of iterations. A large-scale computational study on random instances and on instances of hierarchical supply chain planning is presented to demonstrate the effectiveness of the algorithm.
0
0
1
0
0
0
Preduals for spaces of operators involving Hilbert spaces and trace-class operators
Continuing the study of preduals of spaces $\mathcal{L}(H,Y)$ of bounded, linear maps, we consider the situation that $H$ is a Hilbert space. We establish a natural correspondence between isometric preduals of $\mathcal{L}(H,Y)$ and isometric preduals of $Y$. The main ingredient is a Tomiyama-type result which shows that every contractive projection that complements $\mathcal{L}(H,Y)$ in its bidual is automatically a right $\mathcal{L}(H)$-module map. As an application, we show that isometric preduals of $\mathcal{L}(\mathcal{S}_1)$, the algebra of operators on the space of trace-class operators, correspond to isometric preduals of $\mathcal{S}_1$ itself (and there is an abundance of them). On the other hand, the compact operators are the unique predual of $\mathcal{S}_1$ making its multiplication separately weak* continuous.
0
0
1
0
0
0
Computing maximum cliques in $B_2$-EPG graphs
EPG graphs, introduced by Golumbic et al. in 2009, are edge-intersection graphs of paths on an orthogonal grid. The class $B_k$-EPG is the subclass of EPG graphs where the path on the grid associated to each vertex has at most $k$ bends. Epstein et al. showed in 2013 that computing a maximum clique in $B_1$-EPG graphs is polynomial. As remarked in [Heldt et al., 2014], when the number of bends is at least $4$, the class contains $2$-interval graphs for which computing a maximum clique is an NP-hard problem. The complexity status of the Maximum Clique problem remains open for $B_2$ and $B_3$-EPG graphs. In this paper, we show that we can compute a maximum clique in polynomial time in $B_2$-EPG graphs given a representation of the graph. Moreover, we show that a simple counting argument provides a ${2(k+1)}$-approximation for the coloring problem on $B_k$-EPG graphs without knowing the representation of the graph. It generalizes a result of [Epstein et al, 2013] on $B_1$-EPG graphs (where the representation was needed).
1
0
0
0
0
0
Interactions between Health Searchers and Search Engines
The Web is an important resource for understanding and diagnosing medical conditions. Based on exposure to online content, people may develop undue health concerns, believing that common and benign symptoms are explained by serious illnesses. In this paper, we investigate potential strategies to mine queries and searcher histories for clues that could help search engines choose the most appropriate information to present in response to exploratory medical queries. To do this, we performed a longitudinal study of health search behavior using the logs of a popular search engine. We found that query variations which might appear innocuous (e.g. "bad headache" vs "severe headache") may hold valuable information about the searcher which could be used by search engines to improve performance. Furthermore, we investigated how medically concerned users respond differently to search engine result pages (SERPs) and found that their disposition for clicking on concerning pages is pronounced, potentially leading to a self-reinforcement of concern. Finally, we studied to what degree variations in the SERP impact future search and real-world health-seeking behavior and obtained some surprising results (e.g., viewing concerning pages may lead to a short-term reduction of real-world health seeking).
1
0
0
0
0
0
Effect algebras as presheaves on finite Boolean algebras
For an effect algebra $A$, we examine the category of all morphisms from finite Boolean algebras into $A$. This category can be described as a category of elements of a presheaf $R(A)$ on the category of finite Boolean algebras. We prove that some properties (being an orthoalgebra, the Riesz decomposition property, being a Boolean algebra) of an effect algebra $A$ can be characterized by properties of the category of elements of the presheaf $R(A)$. We prove that the tensor product of effect algebras arises as a left Kan extension of the free product of finite Boolean algebras along the inclusion functor. As a consequence, the tensor product of effect algebras can be expressed by means of the Day convolution of presheaves on finite Boolean algebras.
0
0
1
0
0
0
Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices
In a previous work we detailed the requirements for obtaining a maximal performance benefit by implementing fully connected deep neural networks (DNNs) as arrays of resistive devices for deep learning. Here we extend this concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits, as they can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that the combination of all these techniques enables a successful application of the RPU concept to training CNNs. The techniques discussed here are more general, can be applied beyond CNN architectures, and therefore extend the applicability of the RPU approach to a large class of neural network architectures.
1
0
0
1
0
0
Absolute versus convective helical magnetorotational instabilities in Taylor-Couette flows
We study magnetic Taylor-Couette flow in a system having nondimensional radii $r_i=1$ and $r_o=2$, and periodic in the axial direction with wavelengths $h\ge100$. The rotation ratio of the inner and outer cylinders is adjusted to be slightly in the Rayleigh-stable regime, where magnetic fields are required to destabilize the flow, in this case triggering the axisymmetric helical magnetorotational instability (HMRI). Two choices of imposed magnetic field are considered, both having the same azimuthal component $B_\phi=r^{-1}$, but differing axial components. The first choice has $B_z=0.1$, and yields the familiar HMRI, consisting of unidirectionally traveling waves. The second choice has $B_z\approx0.1\sin(2\pi z/h)$, and yields HMRI waves that travel in opposite directions depending on the sign of $B_z$. The first configuration corresponds to a convective instability, the second to an absolute instability. The two variants behave very similarly regarding both linear onset as well as nonlinear equilibration.
0
1
0
0
0
0
Symmetries and multipeakon solutions for the modified two-component Camassa-Holm system
Compared with the two-component Camassa-Holm system, the modified two-component Camassa-Holm system introduces a regularized density which makes possible the existence of solutions of lower regularity, and in particular of multipeakon solutions. In this paper, we derive a new pointwise invariant for the modified two-component Camassa-Holm system. The derivation of the invariant uses directly the symmetry of the system, following the classical argument of Noether's theorem. The existence of the multipeakon solutions can be directly inferred from this pointwise invariant. This derivation shows the strong connection between symmetries and the existence of special solutions. The observation also holds for the scalar Camassa-Holm equation and, for comparison, we have also included the corresponding derivation. Finally, we compute explicitly the solutions obtained for the peakon-antipeakon case. We observe the existence of a periodic solution which has not been reported in the literature previously. This case shows the attractive effect that the introduction of an elastic potential can have on the solutions.
0
0
1
0
0
0
Selection of quasi-stationary states in the Navier-Stokes equation on the torus
The two dimensional incompressible Navier-Stokes equation on $D_\delta := [0, 2\pi\delta] \times [0, 2\pi]$ with $\delta \approx 1$, periodic boundary conditions, and viscosity $0 < \nu \ll 1$ is considered. Bars and dipoles, two explicitly given quasi-stationary states of the system, evolve on the time scale $\mathcal{O}(e^{-\nu t})$ and have been shown to play a key role in its long-time evolution. Of particular interest is the role that $\delta$ plays in selecting which of these two states is observed. Recent numerical studies suggest that, after a transient period of rapid decay of the high Fourier modes, the bar state will be selected if $\delta \neq 1$, while the dipole will be selected if $\delta = 1$. Our results support this claim and seek to mathematically formalize it. We consider the system in Fourier space, project it onto a center manifold consisting of the lowest eight Fourier modes, and use this as a model to study the selection of bars and dipoles. It is shown for this ODE model that the value of $\delta$ controls the behavior of the asymptotic ratio of the low modes, thus determining the likelihood of observing a bar state or dipole after an initial transient period. Moreover, in our model, for all $\delta \approx 1$, there is an initial time period in which the high modes decay at the rapid rate $\mathcal{O}(e^{-t/\nu})$, while the low modes evolve at the slower $\mathcal{O}(e^{-\nu t})$ rate. The results for the ODE model are proven using energy estimates and invariant manifolds and further supported by formal asymptotic expansions and numerics.
0
0
1
0
0
0
Geometric Enclosing Networks
Training a model to generate data has increasingly attracted research attention and become important in modern world applications. We propose in this paper a new geometry-based optimization approach to address this problem. Orthogonal to current state-of-the-art density-based approaches, most notably VAE and GAN, we present a fresh new idea that borrows the principle of the minimal enclosing ball to train a generator $G(\mathbf{z})$ in such a way that both training and generated data, after being mapped to the feature space, are enclosed in the same sphere. We develop theory to guarantee that the mapping is bijective so that its inverse from feature space to data space results in expressive nonlinear contours to describe the data manifold, hence ensuring that generated data also lie on the data manifold learned from training data. Our model enjoys a nice geometric interpretation, hence the name Geometric Enclosing Networks (GEN), and possesses some key advantages over its rivals, namely a simple and easy-to-control optimization formulation, avoidance of mode collapse, and efficient learning of the data manifold representation in a completely unsupervised manner. We conducted extensive experiments on synthetic and real-world datasets to illustrate the behaviors, strengths and weaknesses of our proposed GEN, in particular its ability to handle multi-modal data and the quality of generated data.
1
0
0
1
0
0
A pliable lasso for the Cox model
We introduce a pliable lasso method for estimation of interaction effects in the Cox proportional hazards model framework. The pliable lasso is a linear model that includes interactions between covariates X and a set of modifying variables Z and assumes sparsity of the main effects and interaction effects. The hierarchical penalty excludes interaction effects when the corresponding main effects are zero: this avoids overfitting and an explosion of model complexity. We extend this method to the Cox model for survival data, incorporating modifiers that are either fixed or varying in time into the partial likelihood. For example, this allows modeling of survival times that differ based on interactions of genes with age, gender, or other demographic information. The optimization is done by blockwise coordinate descent on a second order approximation of the objective.
0
0
0
1
0
0
Localized magnetic moments with tunable spin exchange in a gas of ultracold fermions
We report on the experimental realization of a state-dependent lattice for a two-orbital fermionic quantum gas with strong interorbital spin exchange. In our state-dependent lattice, the ground and metastable excited electronic states of $^{173}$Yb take the roles of itinerant and localized magnetic moments, respectively. Repulsive on-site interactions in conjunction with the tunnel mobility lead to spin exchange between mobile and localized particles, modeling the coupling term in the well-known Kondo Hamiltonian. In addition, we find that this exchange process can be tuned resonantly by varying the on-site confinement. We attribute this to a resonant coupling to center-of-mass excited bound states of one interorbital scattering channel.
0
1
0
0
0
0
Khintchine's Theorem with random fractions
We prove versions of Khintchine's Theorem (1924) for approximations by rational numbers whose numerators lie in randomly chosen sets of integers, and we explore the extent to which the monotonicity assumption can be removed. Roughly speaking, we show that if the number of available fractions for each denominator grows too fast, then the monotonicity assumption cannot be removed. There are questions in this random setting which may be seen as cognates of the Duffin-Schaeffer Conjecture (1941), and are likely to be more accessible. We point out that the direct random analogue of the Duffin-Schaeffer Conjecture, like the Duffin-Schaeffer Conjecture itself, implies Catlin's Conjecture (1976). It is not obvious whether the Duffin-Schaeffer Conjecture and its random version imply one another, and it is not known whether Catlin's Conjecture implies either of them. The question of whether Catlin implies Duffin-Schaeffer has been unsettled for decades.
0
0
1
0
0
0
A Method of Generating Random Weights and Biases in Feedforward Neural Networks with Random Hidden Nodes
Neural networks with random hidden nodes have gained increasing interest from researchers and in practical applications. This is due to their unique features, such as very fast training and their universal approximation property. In these networks the weights and biases of hidden nodes determining the nonlinear feature mapping are set randomly and are not learned. Appropriate selection of the intervals from which the weights and biases are drawn is extremely important. This topic has not yet been sufficiently explored in the literature. In this work a method of generating random weights and biases is proposed. This method generates the parameters of the hidden nodes in such a way that nonlinear fragments of the activation functions are located in the input space regions with data and can be used to construct the surface approximating a nonlinear target function. The weights and biases depend on the input data range and the activation function type. The proposed method allows us to control the generalization degree of the model. All of these lead to an improvement in the approximation performance of the network. Several experiments show very promising results.
1
0
0
1
0
0
The Relative Performance of Ensemble Methods with Deep Convolutional Neural Networks for Image Classification
Artificial neural networks have been successfully applied to a variety of machine learning tasks, including image recognition, semantic segmentation, and machine translation. However, few studies have fully investigated ensembles of artificial neural networks. In this work, we investigated multiple widely used ensemble methods, including unweighted averaging, majority voting, the Bayes Optimal Classifier, and the (discrete) Super Learner, for image recognition tasks, with deep neural networks as candidate algorithms. We designed several experiments, with the candidate algorithms being the same network structure with different model checkpoints within a single training process, networks with the same structure but trained multiple times stochastically, and networks with different structures. In addition, we further studied the over-confidence phenomenon of the neural networks, as well as its impact on the ensemble methods. Across all of our experiments, the Super Learner achieved the best performance among all the ensemble methods in this study.
1
0
0
1
0
0
Representation of big data by dimension reduction
Suppose the data consist of a set $S$ of points $x_j, 1 \leq j \leq J$, distributed in a bounded domain $D \subset R^N$, where $N$ and $J$ are large numbers. In this paper an algorithm is proposed for checking whether there exists a manifold $\mathbb{M}$ of low dimension near which many of the points of $S$ lie and finding such $\mathbb{M}$ if it exists. There are many dimension reduction algorithms, both linear and non-linear. Our algorithm is simple to implement and has some advantages compared with the known algorithms. If there is a manifold of low dimension near which most of the data points lie, the proposed algorithm will find it. Some numerical results are presented illustrating the algorithm and analyzing its performance compared to the classical PCA (principal component analysis) and Isomap.
1
0
0
1
0
0
Out-degree reducing partitions of digraphs
Let $k$ be a fixed integer. We determine the complexity of finding a $p$-partition $(V_1, \dots, V_p)$ of the vertex set of a given digraph such that the maximum out-degree of each of the digraphs induced by $V_i$, ($1\leq i\leq p$) is at least $k$ smaller than the maximum out-degree of $D$. We show that this problem is polynomial-time solvable when $p\geq 2k$ and ${\cal NP}$-complete otherwise. The result for $k=1$ and $p=2$ answers a question posed in \cite{bangTCS636}. We also determine, for all fixed non-negative integers $k_1,k_2,p$, the complexity of deciding whether a given digraph of maximum out-degree $p$ has a $2$-partition $(V_1,V_2)$ such that the digraph induced by $V_i$ has maximum out-degree at most $k_i$ for $i\in [2]$. It follows from this characterization that the problem of deciding whether a digraph has a 2-partition $(V_1,V_2)$ such that each vertex $v\in V_i$ has at least as many neighbours in the set $V_{3-i}$ as in $V_i$, for $i=1,2$ is ${\cal NP}$-complete. This solves a problem from \cite{kreutzerEJC24} on majority colourings.
1
0
0
0
0
0
Introduction to Plasma Physics
These notes are intended to provide a brief primer in plasma physics, introducing common definitions, basic properties, and typical processes found in plasmas. These concepts are inherent in contemporary plasma-based accelerator schemes, and thus provide a foundation for the more advanced expositions that follow in this volume. No prior knowledge of plasma physics is required, but the reader is assumed to be familiar with basic electrodynamics and fluid mechanics.
0
1
0
0
0
0
Presymplectic convexity and (ir)rational polytopes
In this paper, we extend the Atiyah--Guillemin--Sternberg convexity theorem and Delzant's classification of symplectic toric manifolds to presymplectic manifolds. We also define and study the Morita equivalence of presymplectic toric manifolds and of their corresponding framed momentum polytopes, which may be rational or non-rational. Toric orbifolds, quasifolds and non-commutative toric varieties may be viewed as the quotient of our presymplectic toric manifolds by the kernel isotropy foliation of the presymplectic form.
0
0
1
0
0
0
Unsupervised Learning of Mixture Regression Models for Longitudinal Data
This paper is concerned with learning of mixture regression models for individuals that are measured repeatedly. The adjective "unsupervised" implies that the number of mixing components is unknown and has to be determined, ideally by data driven tools. For this purpose, a novel penalized method is proposed to simultaneously select the number of mixing components and to estimate the mixing proportions and unknown parameters in the models. The proposed method is capable of handling both continuous and discrete responses by only requiring the first two moment conditions of the model distribution. It is shown to be consistent in both selecting the number of components and estimating the mixing proportions and unknown regression parameters. Further, a modified EM algorithm is developed to seamlessly integrate model selection and estimation. Simulation studies are conducted to evaluate the finite sample performance of the proposed procedure. It is further illustrated via an analysis of a primary biliary cirrhosis data set.
0
0
0
1
0
0
Anomalous electron states
Under certain macroscopic perturbations in condensed matter, anomalous electron wells can be formed due to a local reduction of the electromagnetic zero-point energy. These wells are narrow, of width $\sim 10^{-11}cm$, and of depth $\sim 1MeV$. Such anomalous states, from the formal standpoint of quantum mechanics, correspond to a singular solution of a wave equation produced by the non-physical $\delta(\vec R)$ source. Resolving, on the level of the Standard Model, the tiny region around the formal singularity shows that the state is physical. The creation of those states in an atomic system has the formally small probability $\exp(-1000)$. The probability ceases to be small under a perturbation which varies rapidly in space, on the scale $10^{-11}cm$. In condensed matter such a perturbation may relate to acoustic shock waves. In this process the short scale is the length of the standing de Broglie wave of a reflected lattice atom. Under electron transitions in the anomalous well (anomalous atom), $keV$ X-rays are expected to be emitted. A macroscopic amount of anomalous atoms, each of size $10^{-11}cm$, can be formed in a solid, resulting in ${\it collapsed}$ ${\it matter}$ with a $10^9$ times enhanced density.
0
1
0
0
0
0
Theoretical calculation of the fine-structure constant and the permittivity of the vacuum
Light traveling through the vacuum interacts with virtual particles similarly to the way that light traveling through a dielectric interacts with ordinary matter. And just as the permittivity of a dielectric can be calculated, the permittivity $\epsilon_0$ of the vacuum can be calculated, yielding an equation for the fine-structure constant $\alpha$. The most important contributions to the value of $\alpha$ arise from interactions in the vacuum of photons with virtual, bound states of charged lepton-antilepton pairs. Considering only these contributions, the fully screened $\alpha \cong 1/(8^2\sqrt{3\pi/2}) \cong 1/139$.
0
1
0
0
0
0
LEADER: fast estimates of asteroid shape elongation and spin latitude distributions from scarce photometry
Many asteroid databases with lightcurve brightness measurements (e.g. WISE, Pan-STARRS1) contain enormous amounts of data for asteroid shape and spin modelling. While lightcurve inversion is not plausible for individual targets with scarce data, it is possible for large populations with thousands of asteroids, where the distributions of the shape and spin characteristics of the populations are obtainable. We aim to introduce a software implementation of a method that computes the joint shape elongation p and spin latitude beta distributions for a population, with the brightness observations given in an asteroid database. Other main goals are to include a method for performing validity checks of the algorithm, and a tool for a statistical comparison of populations. The LEADER software package read the brightness measurement data for a user-defined subpopulation from a given database. The observations were used to compute estimates of the brightness variations of the population members. A cumulative distribution function (CDF) was constructed of these estimates. A superposition of known analytical basis functions yielded this CDF as a function of the (shape, spin) distribution. The joint distribution can be reconstructed by solving a linear constrained inverse problem. To test the validity of the method, the algorithm can be run with synthetic asteroid models, where the shape and spin characteristics are known, and by using the geometries taken from the examined database. LEADER is a fast and robust software package for solving shape and spin distributions for large populations. There are major differences in the quality and coverage of measurements depending on the database used, so synthetic simulations are always necessary before a database can be reliably used. We show examples of differences in the results when switching to another database.
0
1
0
0
0
0
Calibrated Projection in MATLAB: Users' Manual
We present the calibrated-projection MATLAB package implementing the method to construct confidence intervals proposed by Kaido, Molinari and Stoye (2017). This manual provides details on how to use the package for inference on projections of partially identified parameters. It also explains how to use the MATLAB functions we developed to compute confidence intervals on solutions of nonlinear optimization problems with estimated constraints.
0
0
0
1
0
0
Atomic Clock Measurements of Quantum Scattering Phase Shifts Spanning Feshbach Resonances at Ultralow Fields
We use an atomic fountain clock to measure quantum scattering phase shifts precisely through a series of narrow, low-field Feshbach resonances at average collision energies below $1\,\mu$K. Our low spread in collision energy yields phase variations of order $\pm \pi/2$ for target atoms in several $F,m_F$ states. We compare them to a theoretical model and establish the accuracy of the measurements and the theoretical uncertainties from the fitted potential. We find overall excellent agreement, with small statistically significant differences that remain unexplained.
0
1
0
0
0
0
Temporal processing and context dependency in C. elegans mechanosensation
A quantitative understanding of how sensory signals are transformed into motor outputs places useful constraints on brain function and helps reveal the brain's underlying computations. We investigate how the nematode C. elegans responds to time-varying mechanosensory signals using a high-throughput optogenetic assay and automated behavior quantification. In the prevailing picture of the touch circuit, the animal's behavior is determined by which neurons are stimulated and by the stimulus amplitude. In contrast, we find that the behavioral response is tuned to temporal properties of mechanosensory signals, such as their integral and derivative, that extend over many seconds. Mechanosensory signals, even in the same neurons, can be tailored to elicit different behavioral responses. Moreover, we find that the animal's response also depends on its behavioral context. Most dramatically, the animal ignores all tested mechanosensory stimuli during turns. Finally, we present a linear-nonlinear model that predicts the animal's behavioral response to a stimulus.
0
0
0
0
1
0
On the putative essential discreteness of q-generalized entropies
It has been argued in [EPL {\bf 90} (2010) 50004], entitled {\it Essential discreteness in generalized thermostatistics with non-logarithmic entropy}, that "continuous Hamiltonian systems with long-range interactions and the so-called q-Gaussian momentum distributions are seen to be outside the scope of non-extensive statistical mechanics". The arguments are clever and appealing. We show here, however, that some mathematical subtleties render them unconvincing.
0
1
0
0
0
0
Spin Distribution of Primordial Black Holes
We estimate the spin distribution of primordial black holes based on the recent study of the critical phenomena in the gravitational collapse of a rotating radiation fluid. We find that primordial black holes are mostly slowly rotating.
0
1
0
0
0
0
Automated flow for compressing convolution neural networks for efficient edge-computation with FPGA
Deep convolutional neural networks (CNN) based solutions are the current state-of-the-art for computer vision tasks. Due to the large size of these models, they are typically run on clusters of CPUs or GPUs. However, power requirements and cost budgets can be a major hindrance in adoption of CNN for IoT applications. Recent research highlights that CNN contain significant redundancy in their structure and can be quantized to lower bit-width parameters and activations, while maintaining acceptable accuracy. Low bit-width and especially single bit-width (binary) CNN are particularly suitable for mobile applications based on FPGA implementation, due to the bitwise logic operations involved in binarized CNN. Moreover, the transition to lower bit-widths opens new avenues for performance optimizations and model improvement. In this paper, we present an automatic flow from trained TensorFlow models to FPGA system on chip implementation of binarized CNN. This flow involves quantization of model parameters and activations, generation of network and model in embedded-C, followed by automatic generation of the FPGA accelerator for binary convolutions. The automated flow is demonstrated through implementation of binarized "YOLOV2" on the low cost, low power Cyclone-V FPGA device. Experiments on object detection using binarized YOLOV2 demonstrate significant performance benefit in terms of model size and inference speed on FPGA as compared to CPU and mobile CPU platforms. Furthermore, the entire automated flow from trained models to FPGA synthesis can be completed within one hour.
1
0
0
0
0
0
Pulse rate estimation using imaging photoplethysmography: generic framework and comparison of methods on a publicly available dataset
Objective: to establish an algorithmic framework and a benchmark dataset for comparing methods of pulse rate estimation using imaging photoplethysmography (iPPG). Approach: first we reveal essential steps of pulse rate estimation from facial video and review methods applied at each of the steps. Then we investigate the performance of these methods on the DEAP dataset www.eecs.qmul.ac.uk/mmv/datasets/deap/ containing facial videos and reference contact photoplethysmograms. Main results: the best assessment precision is achieved when pulse rate is estimated using the continuous wavelet transform from iPPG extracted by the POS method (overall mean absolute error below 2 heart beats per minute). Significance: we provide a generic framework for theoretical comparison of methods for pulse rate estimation from iPPG and report results for the most popular methods on a publicly available dataset that can be used as a benchmark.
0
1
0
0
0
0
Deep Laplacian Pyramid Networks for Fast and Accurate Super-Resolution
Convolutional neural networks have recently demonstrated high-quality reconstruction for single-image super-resolution. In this paper, we propose the Laplacian Pyramid Super-Resolution Network (LapSRN) to progressively reconstruct the sub-band residuals of high-resolution images. At each pyramid level, our model takes coarse-resolution feature maps as input, predicts the high-frequency residuals, and uses transposed convolutions for upsampling to the finer level. Our method does not require the bicubic interpolation as the pre-processing step and thus dramatically reduces the computational complexity. We train the proposed LapSRN with deep supervision using a robust Charbonnier loss function and achieve high-quality reconstruction. Furthermore, our network generates multi-scale predictions in one feed-forward pass through the progressive reconstruction, thereby facilitating resource-aware applications. Extensive quantitative and qualitative evaluations on benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art methods in terms of speed and accuracy.
1
0
0
0
0
0
Foundation for a series of efficient simulation algorithms
Computing the coarsest simulation preorder included in an initial preorder is used to reduce the resources needed to analyze a given transition system. This technique is applied on many models like Kripke structures, labeled graphs, labeled transition systems or even word and tree automata. Let (Q, $\rightarrow$) be a given transition system and Rinit be an initial preorder over Q. Until now, algorithms to compute Rsim, the coarsest simulation included in Rinit, have been either memory efficient or time efficient but not both. In this paper we propose the foundation for a series of efficient simulation algorithms with the introduction of the notion of maximal transitions and the notion of stability of a preorder with respect to a coarser one. As an illustration we solve an open problem by providing the first algorithm with the best published time complexity, O(|Psim|.|$\rightarrow$|), and a bit-space complexity in O(|Psim|^2.log(|Psim|) + |Q|.log(|Q|)), with Psim the partition induced by Rsim.
1
0
0
0
0
0
A Review of Macroscopic Motion in Thermodynamic Equilibrium
A principle concerning the macroscopic motion of systems in thermodynamic equilibrium, rarely discussed in texts, is reviewed: very small but still macroscopic parts of a fully isolated system in thermal equilibrium move as if points of a rigid body, macroscopic energy being dissipated to increase internal energy, and entropy along with it. It appears particularly important in space physics, when dissipation involves the long-range fields of electromagnetism and gravitation, rather than short-range contact forces. It is shown how new physics, Special Relativity as regards electromagnetism, first Newtonian theory and then General Relativity as regards gravitation, determines the different dissipative processes involved in the approach to that equilibrium.
0
1
0
0
0
0
Emergent electronic structure of CaFe2As2
CaFe2As2 exhibits collapsed tetragonal (cT) structure and varied exotic behavior under pressure at low temperatures that led to debate on linking the structural changes to its exceptional electronic properties like superconductivity, magnetism, etc. Here, we investigate the electronic structure of CaFe2As2 forming in different structures employing density functional theory. The results indicate better stability of the cT phase with enhancement in hybridization induced effects and shift of the energy bands towards lower energies. The Fermi surface centered around $\Gamma$ point gradually vanishes with the increase in pressure. Consequently, the nesting between the hole and electron Fermi surfaces associated to the spin density wave state disappears indicating a pathway to achieve the proximity to quantum fluctuations. The magnetic moment at the Fe sites diminishes in the cT phase consistent with the magnetic susceptibility results. Notably, the hybridization of Ca 4s states (Ca-layer may be treated as a charge reservoir layer akin to those in cuprate superconductors) is significantly enhanced in the cT phase revealing its relevance in its interesting electronic properties.
0
1
0
0
0
0
Lord Kelvin's method of images approach to the Rotenberg model and its asymptotics
We study a mathematical model of cell populations dynamics proposed by M. Rotenberg and investigated by M. Boulanouar. Here, a cell is characterized by its maturity and speed of maturation. The growth of cell populations is described by a partial differential equation with a boundary condition. In the first part of the paper we exploit the semigroup theory approach and apply Lord Kelvin's method of images in order to give a new proof that the model is well posed. Next, we use a semi-explicit formula for the semigroup related to the model, obtained by the method of images, in order to give growth estimates for the semigroup. The main part of the paper is devoted to the asymptotic behaviour of the semigroup. We formulate conditions for the asymptotic stability of the semigroup in the case in which the average number of viable daughters per mitosis equals one. To this end we use methods developed by K. Pichór and R. Rudnicki.
0
0
1
0
0
0
Study of the Magnetizing Relationship of the Kickers for CSNS
The extraction system of CSNS mainly consists of two kinds of magnets: eight kickers and one lambertson magnet. In this paper, firstly, the magnetic test results of the eight kickers are introduced, and the field uniformity and the magnetizing relationship of the kickers are given. Secondly, for the future beam commissioning, in order to obtain a more accurate magnetizing relationship, a new method to measure the magnetizing coefficients of the kickers with the real extracted beam is given, and the corresponding data analysis procedure is also described.
0
1
0
0
0
0
Smart "Predict, then Optimize"
Many real-world analytics problems involve two significant challenges: prediction and optimization. Due to the typically complex nature of each challenge, the standard paradigm is to predict, then optimize. By and large, machine learning tools are intended to minimize prediction error and do not account for how the predictions will be used in a downstream optimization problem. In contrast, we propose a new and very general framework, called Smart "Predict, then Optimize" (SPO), which directly leverages the optimization problem structure, i.e., its objective and constraints, for designing successful analytics tools. A key component of our framework is the SPO loss function, which measures the quality of a prediction by comparing the objective values of the solutions generated using the predicted and observed parameters, respectively. Training a model with respect to the SPO loss is computationally challenging, and therefore we also develop a surrogate loss function, called the SPO+ loss, which upper bounds the SPO loss, has desirable convexity properties, and is statistically consistent under mild conditions. We also propose a stochastic gradient descent algorithm which allows for situations in which the number of training samples is large, model regularization is desired, and/or the optimization problem of interest is nonlinear or integer. Finally, we perform computational experiments to empirically verify the success of our SPO framework in comparison to the standard predict-then-optimize approach.
1
0
0
1
0
0
U-SLADS: Unsupervised Learning Approach for Dynamic Dendrite Sampling
Novel data acquisition schemes have been an emerging need for scanning microscopy based imaging techniques to reduce the time in data acquisition and to minimize probing radiation in sample exposure. Various sparse sampling schemes have been studied and are ideally suited for such applications, where the images can be reconstructed from a sparse set of measurements. Dynamic sparse sampling methods, particularly supervised learning based iterative sampling algorithms, have shown promising results for sampling pixel locations on the edges or boundaries during imaging. However, dynamic sampling for imaging skeleton-like objects such as metal dendrites remains difficult. Here, we present a new unsupervised learning approach using Hierarchical Gaussian Mixture Models (HGMM) to dynamically sample metal dendrites. This technique is very useful if the users are interested in fast imaging of the primary and secondary arms of metal dendrites in the solidification process in materials science.
0
0
0
1
0
0
On a registration-based approach to sensor network localization
We consider a registration-based approach for localizing sensor networks from range measurements. This is based on the assumption that one can find overlapping cliques spanning the network. That is, for each sensor, one can identify geometric neighbors for which all inter-sensor ranges are known. Such cliques can be efficiently localized using multidimensional scaling. However, since each clique is localized in some local coordinate system, we are required to register them in a global coordinate system. In other words, our approach is based on transforming the localization problem into a problem of registration. In this context, the main contributions are as follows. First, we describe an efficient method for partitioning the network into overlapping cliques. Second, we study the problem of registering the localized cliques, and formulate a necessary rigidity condition for uniquely recovering the global sensor coordinates. In particular, we present a method for efficiently testing rigidity, and a proposal for augmenting the partitioned network to enforce rigidity. A recently proposed semidefinite relaxation of global registration is used for registering the cliques. We present simulation results on random and structured sensor networks to demonstrate that the proposed method compares favourably with state-of-the-art methods in terms of run-time, accuracy, and scalability.
1
0
1
0
0
0
Density estimation on small datasets
How might a smooth probability distribution be estimated, with accurately quantified uncertainty, from a limited amount of sampled data? Here we describe a field-theoretic approach that addresses this problem remarkably well in one dimension, providing an exact nonparametric Bayesian posterior without relying on tunable parameters or large-data approximations. Strong non-Gaussian constraints, which require a non-perturbative treatment, are found to play a major role in reducing distribution uncertainty. A software implementation of this method is provided.
1
0
0
0
1
0
Generalized Euler classes, differential forms and commutative DGAs
In the context of commutative differential graded algebras over $\mathbb Q$, we show that an iteration of "odd spherical fibration" creates a "total space" commutative differential graded algebra with only odd degree cohomology. Then we show for such a commutative differential graded algebra that, for any of its "fibrations" with "fiber" of finite cohomological dimension, the induced map on cohomology is injective.
0
0
1
0
0
0
Episodic memory for continual model learning
Both the human brain and artificial learning agents operating in real-world or comparably complex environments are faced with the challenge of online model selection. In principle this challenge can be overcome: hierarchical Bayesian inference provides a principled method for model selection and it converges on the same posterior for both off-line (i.e. batch) and online learning. However, maintaining a parameter posterior for each model in parallel has in general an even higher memory cost than storing the entire data set and is consequently clearly infeasible. Alternatively, maintaining only a limited set of models in memory could limit memory requirements. However, sufficient statistics for one model will usually be insufficient for fitting a different kind of model, meaning that the agent loses information with each model change. We propose that episodic memory can circumvent the challenge of online model selection under limited memory capacity by retaining a selected subset of data points. We design a method to compute the quantities necessary for model selection even when the data is discarded and only statistics of one (or few) learnt models are available. We demonstrate on a simple model that a limited-sized episodic memory buffer, when the content is optimised to retain data with statistics not matching the current representation, can resolve the fundamental challenge of online model selection.
1
0
0
1
0
0
Security Trust Zone in 5G Networks
The Fifth Generation (5G) telecommunication system will deliver a flexible radio access network (RAN). Security functions such as authorization, authentication and accounting (AAA) are expected to be distributed from central clouds to edge clouds. We propose a novel architectural security solution for 5G networks, called the Trust Zone (TZ), designed as an enhancement of the 5G AAA in the edge cloud. TZ provides an autonomous and decentralized security policy for different tenants under variable network conditions. TZ also adds a disaster-cognition capability and extends the security functionalities to a set of flexible and highly available emergency services in the edge cloud.
1
0
0
0
0
0
Upper-Bounding the Regularization Constant for Convex Sparse Signal Reconstruction
Consider reconstructing a signal $x$ by minimizing a weighted sum of a convex differentiable negative log-likelihood (NLL) (data-fidelity) term and a convex regularization term that imposes a convex-set constraint on $x$ and enforces its sparsity using $\ell_1$-norm analysis regularization. We compute upper bounds on the regularization tuning constant beyond which the regularization term overwhelmingly dominates the NLL term so that the set of minimum points of the objective function does not change. Necessary and sufficient conditions for irrelevance of sparse signal regularization and a condition for the existence of finite upper bounds are established. We formulate an optimization problem for finding these bounds when the regularization term can be globally minimized by a feasible $x$ and also develop an alternating direction method of multipliers (ADMM) type method for their computation. Simulation examples show that the derived and empirical bounds match.
0
0
1
1
0
0
On the Privacy of the Opal Data Release: A Response
This document is a response to a report from the University of Melbourne on the privacy of the Opal dataset release. The Opal dataset was released by Data61 (CSIRO) in conjunction with the Transport for New South Wales (TfNSW). The data consists of two separate weeks of "tap-on/tap-off" data of individuals who used any of the four different modes of public transport from TfNSW: buses, light rail, trains and ferries. These taps are recorded through the smart ticketing system, known as Opal, available in the state of New South Wales, Australia.
1
0
0
0
0
0
Long time behavior of Gross-Pitaevskii equation at positive temperature
The stochastic Gross-Pitaevskii equation is used as a model to describe Bose-Einstein condensation at positive temperature. The equation is a complex Ginzburg-Landau equation with a trapping potential and an additive space-time white noise. Two important questions for this system are the global existence of solutions in the support of the Gibbs measure, and the convergence of those solutions to the equilibrium for large time. In this paper, we give a proof of these two results in one space dimension. In order to prove the convergence to equilibrium, we use the associated purely dissipative equation as an auxiliary equation, for which the convergence may be obtained using standard techniques. Global existence is obtained for all initial data, and not almost surely with respect to the invariant measure.
0
0
1
0
0
0
Isomorphism and Morita equivalence classes for crossed products of irrational rotation algebras by cyclic subgroups of $SL_2(\mathbb{Z})$
Let $\theta, \theta'$ be irrational numbers and $A, B$ be matrices in $SL_2(\mathbb{Z})$ of infinite order. We compute the $K$-theory of the crossed product $\mathcal{A}_{\theta}\rtimes_A \mathbb{Z}$ and show that $\mathcal{A}_{\theta} \rtimes_A\mathbb{Z}$ and $\mathcal{A}_{\theta'} \rtimes_B \mathbb{Z}$ are $*$-isomorphic if and only if $\theta = \pm\theta' \pmod{\mathbb{Z}}$ and $I-A^{-1}$ is matrix equivalent to $I-B^{-1}$. Combining this result and an explicit construction of equivariant bimodules, we show that $\mathcal{A}_{\theta} \rtimes_A\mathbb{Z}$ and $\mathcal{A}_{\theta'} \rtimes_B \mathbb{Z}$ are Morita equivalent if and only if $\theta$ and $\theta'$ are in the same $GL_2(\mathbb{Z})$ orbit and $I-A^{-1}$ is matrix equivalent to $I-B^{-1}$. Finally, we determine the Morita equivalence class of $\mathcal{A}_{\theta} \rtimes F$ for any finite subgroup $F$ of $SL_2(\mathbb{Z})$.
0
0
1
0
0
0
Model Predictive Control for Distributed Microgrid Battery Energy Storage Systems
This paper proposes a new convex model predictive control strategy for dynamic optimal power flow between battery energy storage systems distributed in an AC microgrid. The proposed control strategy uses a new problem formulation, based on a linear d-q reference frame voltage-current model and linearised power flow approximations. This allows the optimal power flows to be solved as a convex optimisation problem, for which fast and robust solvers exist. The proposed method does not assume real and reactive power flows are decoupled, allowing line losses, voltage constraints and converter current constraints to be addressed. In addition, non-linear variations in the charge and discharge efficiencies of lithium ion batteries are analysed and included in the control strategy. Real-time digital simulations were carried out for an islanded microgrid based on the IEEE 13 bus prototypical feeder, with distributed battery energy storage systems and intermittent photovoltaic generation. It is shown that the proposed control strategy approaches the performance of a strategy based on non-convex optimisation, while reducing the required computation time by a factor of 1000, making it suitable for a real-time model predictive control implementation.
1
0
0
0
0
0
On noncommutative geometry of the Standard Model: fermion multiplet as internal forms
We unveil the geometric nature of the multiplet of fundamental fermions in the Standard Model of fundamental particles as a noncommutative analogue of de Rham forms on the internal finite quantum space.
0
0
1
0
0
0
A Review of Dynamic Network Models with Latent Variables
We present a selective review of statistical modeling of dynamic networks. We focus on models with latent variables, specifically, the latent space models and the latent class models (or stochastic blockmodels), which investigate both the observed features and the unobserved structure of networks. We begin with an overview of the static models, and then we introduce the dynamic extensions. For each dynamic model, we also discuss its applications that have been studied in the literature, with the data sources listed in the Appendix. Based on the review, we summarize a list of open problems and challenges in dynamic network modeling with latent variables.
0
0
0
1
0
0
LevelHeaded: Making Worst-Case Optimal Joins Work in the Common Case
Pipelines combining SQL-style business intelligence (BI) queries and linear algebra (LA) are becoming increasingly common in industry. As a result, there is a growing need to unify these workloads in a single framework. Unfortunately, existing solutions either sacrifice the inherent benefits of exclusively using a relational database (e.g. logical and physical independence) or incur orders of magnitude performance gaps compared to specialized engines (or both). In this work we study applying a new type of query processing architecture to standard BI and LA benchmarks. To do this we present a new in-memory query processing engine called LevelHeaded. LevelHeaded uses worst-case optimal joins as its core execution mechanism for both BI and LA queries. With LevelHeaded, we show how crucial optimizations for BI and LA queries can be captured in a worst-case optimal query architecture. Using these optimizations, LevelHeaded outperforms other relational database engines (LogicBlox, MonetDB, and HyPer) by orders of magnitude on standard LA benchmarks, while performing on average within 31% of the best-of-breed BI (HyPer) and LA (Intel MKL) solutions on their own benchmarks. Our results show that such a single query processing architecture is capable of delivering competitive performance on both BI and LA queries.
1
0
0
0
0
0
Few-shot learning of neural networks from scratch by pseudo example optimization
In this paper, we propose a simple but effective method for training neural networks with a limited amount of training data. Our approach inherits the idea of knowledge distillation, which transfers knowledge from a deep or wide reference model to a shallow or narrow target model. The proposed method employs this idea to mimic predictions of reference estimators that are more robust against overfitting than the network we want to train. Unlike almost all previous work on knowledge distillation, which requires a large amount of labeled training data, the proposed method requires only a small amount of training data. Instead, we introduce pseudo training examples that are optimized as a part of the model parameters. Experimental results on several benchmark datasets demonstrate that the proposed method outperforms all the other baselines, such as naive training of the target model and standard knowledge distillation.
0
0
0
1
0
0
Identities and congruences involving the Fubini polynomials
In this paper, we investigate the umbral representation of the Fubini polynomials $F_{x}^{n}:=F_{n}(x)$ to derive some properties involving these polynomials. For any prime number $p$ and any polynomial $f$ with integer coefficients, we show $(f(F_{x}))^{p}\equiv f(F_{x})$ and we give other curious congruences.
0
0
1
0
0
0
Introduction to Delay Models and Their Wave Solutions
In this paper, a brief review of delay population models and their applications in ecology is provided. The inclusion of diffusion and nonlocality terms in delay models has given more capabilities to these models enabling them to capture several ecological phenomena such as the Allee effect, waves of invasive species and spatio-temporal competitions of interacting species. Moreover, recent advances in the studies of traveling and stationary wave solutions of delay models are outlined. In particular, the existence of stationary and traveling wave solutions of delay models, stability of wave solutions, formation of wavefronts in the spatial domain, and possible outcomes of delay models are discussed.
0
0
1
0
0
0
On Dummett's Pragmatist Justification Procedure
I show that propositional intuitionistic logic is complete with respect to an adaptation of Dummett's pragmatist justification procedure. In particular, given a pragmatist justification of an argument, I show how to obtain a natural deduction derivation of the conclusion of the argument from, at most, the same assumptions.
0
0
1
0
0
0
Evidence for a radiatively driven disc-wind in PDS 456?
We present a newly discovered correlation between the wind outflow velocity and the X-ray luminosity in the luminous ($L_{\rm bol}\sim10^{47}\,\rm erg\,s^{-1}$) nearby ($z=0.184$) quasar PDS\,456. All the contemporary XMM-Newton, NuSTAR and Suzaku observations from 2001--2014 were revisited and we find that the centroid energy of the blueshifted Fe\,K absorption profile increases with luminosity. This translates into a correlation between the wind outflow velocity and the hard X-ray luminosity (between 7--30\,keV) where we find that $v_{\rm w}/c \propto L_{7-30}^{\gamma}$ where $\gamma=0.22\pm0.04$. We also show that this is consistent with a wind that is predominately radiatively driven, possibly resulting from the high Eddington ratio of PDS\,456.
0
1
0
0
0
0
From a normal insulator to a topological insulator in plumbene
Plumbene, similar to silicene, has a buckled honeycomb structure with a large band gap ($\sim 400$ meV). All previous studies have shown that it is a normal insulator. Here, we perform first-principles calculations and employ a sixteen-band tight-binding model with nearest-neighbor and next-nearest-neighbor hopping terms to investigate electronic structures and topological properties of the plumbene monolayer. We find that it can become a topological insulator with a large bulk gap ($\sim 200$ meV) through electron doping, and the nontrivial state is very robust with respect to external strain. Plumbene can be an ideal candidate for realizing the quantum spin Hall effect at room temperature. By investigating effects of external electric and magnetic fields on electronic structures and transport properties of plumbene, we present two rich phase diagrams with and without electron doping, and propose a theoretical design for a four-state spin-valley filter.
0
1
0
0
0
0
High-sensitivity Kinetic Inductance Detectors for CALDER
Providing a background discrimination tool is crucial for enhancing the sensitivity of next-generation experiments searching for neutrinoless double-beta decay. The development of high-sensitivity (< 20 eV RMS) cryogenic light detectors allows simultaneous read-out of the light and heat signals and enables background suppression through particle identification. The Cryogenic wide-Area Light Detector with Excellent Resolution (CALDER) R&D already proved the potential of this technique using the phonon-mediated Kinetic Inductance Detectors (KIDs) approach. The first array prototype with 4 Aluminum KIDs on a 2 $\times$ 2 cm$^2$ Silicon substrate showed a baseline resolution of 154 $\pm$ 7 eV RMS. Improving the design and the readout of the resonator, the next CALDER prototype featured an energy resolution of 82 $\pm$ 4 eV, by sampling the same substrate with a single Aluminum KID.
0
1
0
0
0
0
Bounding the composition length of primitive permutation groups and completely reducible linear groups
We obtain upper bounds on the composition length of a finite permutation group in terms of the degree and the number of orbits, and analogous bounds for primitive, quasiprimitive and semiprimitive groups. Similarly, we obtain upper bounds on the composition length of a finite completely reducible linear group in terms of some of its parameters. In almost all cases we show that the bounds are sharp, and describe the extremal examples.
0
0
1
0
0
0
A Bernstein Inequality For Spatial Lattice Processes
In this article we present a Bernstein inequality for sums of random variables which are defined on a spatial lattice structure. The inequality can be used to derive concentration inequalities. It can be useful to obtain consistency properties for nonparametric estimators of conditional expectation functions.
0
0
1
1
0
0
An Exploration of Approaches to Integrating Neural Reranking Models in Multi-Stage Ranking Architectures
We explore different approaches to integrating a simple convolutional neural network (CNN) with the Lucene search engine in a multi-stage ranking architecture. Our models are trained using the PyTorch deep learning toolkit, which is implemented in C/C++ with a Python frontend. One obvious integration strategy is to expose the neural network directly as a service. For this, we use Apache Thrift, a software framework for building scalable cross-language services. In exploring alternative architectures, we observe that once trained, the feedforward evaluation of neural networks is quite straightforward. Therefore, we can extract the parameters of a trained CNN from PyTorch and import the model into Java, taking advantage of the Java Deeplearning4J library for feedforward evaluation. This has the advantage that the entire end-to-end system can be implemented in Java. As a third approach, we can extract the neural network from PyTorch and "compile" it into a C++ program that exposes a Thrift service. We evaluate these alternatives in terms of performance (latency and throughput) as well as ease of integration. Experiments show that feedforward evaluation of the convolutional neural network is significantly slower in Java, while the performance of the compiled C++ network does not consistently beat the PyTorch implementation.
1
0
0
0
0
0
Dispersive Regimes of the Dicke Model
We study two dispersive regimes in the dynamics of $N$ two-level atoms interacting with a bosonic mode for long interaction times. Firstly, we analyze the dispersive multiqubit quantum Rabi model for the regime in which the qubit frequencies are equal and smaller than the mode frequency, and for values of the coupling strength similar or larger than the mode frequency, namely, the deep strong coupling regime. Secondly, we address an interaction that is dependent on the photon number, where the coupling strength is comparable to the geometric mean of the qubit and mode frequencies. We show that the associated dynamics is analytically tractable and provide useful frameworks with which to analyze the system behavior. In the deep strong coupling regime, we unveil the structure of unexpected resonances for specific values of the coupling, present for $N\ge2$, and in the photon-number-dependent regime we demonstrate that all the nontrivial dynamical behavior occurs in the atomic degrees of freedom for a given Fock state. We verify these assertions with numerical simulations of the qubit population and photon-statistic dynamics.
0
1
0
0
0
0
ZebraLancer: Crowdsource Knowledge atop Open Blockchain, Privately and Anonymously
We design and implement the first private and anonymous decentralized crowdsourcing system ZebraLancer. It realizes fair exchange (i.e. security against malicious workers and dishonest requesters) without using any third-party arbiter. More importantly, it overcomes two fundamental challenges of decentralization, i.e. data leakage and identity breach. First, our outsource-then-prove methodology resolves the critical tension between blockchain transparency and data confidentiality without sacrificing the fairness of exchange. ZebraLancer ensures: a requester will not pay more than what the data deserve, according to a policy announced when her task is published through the blockchain; each worker indeed gets a payment based on the policy, if he submits data to the blockchain; the above properties are realized not only without a central arbiter, but also without leaking the data to the blockchain network. Furthermore, blockchain transparency might allow one to infer private information about workers/requesters through their participation history. ZebraLancer solves this problem by allowing anonymous participation without surrendering user accountability. Specifically, workers cannot misuse anonymity to submit multiple times to reap rewards, and an anonymous requester cannot maliciously submit colluded answers to herself to repudiate payments. The idea behind this is a subtle linkability: if one authenticates twice in a task, everybody can tell; otherwise, the user stays anonymous. To realize such delicate linkability, we put forth a novel cryptographic notion, the common-prefix-linkable anonymous authentication. Finally, we implement our protocol for a common image annotation task and deploy it on an Ethereum test net. The experimental results show the applicability of our protocol and highlight the subtleties of tailoring the protocol to be compatible with existing real-world open blockchains.
1
0
0
0
0
0
Fast, Better Training Trick -- Random Gradient
In this paper, we present a new method to accelerate training and improve performance, called random gradient (RG). This method makes training easier for any model without extra computational cost; we use image classification, semantic segmentation, and GANs to confirm that it can speed up model training in computer vision. The central idea is to multiply the loss by a random number so as to randomly reduce the back-propagation gradient. Using this method, we produce better results on the Pascal VOC, Cifar, and Cityscapes datasets.
0
0
0
1
0
0
Expressions of Sentiments During Code Reviews: Male vs. Female
Background: As most of the software development organizations are male-dominated, female developers encountering various negative workplace experiences reported feeling like they "do not belong". Exposures to discriminatory expletives or negative critiques from their male colleagues may further exacerbate those feelings. Aims: The primary goal of this study is to identify the differences in expressions of sentiments between male and female developers during various software engineering tasks. Method: Toward this goal, we mined the code review repositories of six popular open source projects. We used a semi-automated approach leveraging the name as well as multiple social networks to identify the gender of a developer. Using SentiSE, a customized and state-of-the-art sentiment analysis tool for the software engineering domain, we classify each communication as negative, positive, or neutral. We also compute the frequencies of sentiment words, emoticons, and expletives used by each developer. Results: Our results suggest that the likelihood of using sentiment words, emoticons, and expletives during code reviews varies based on the gender of a developer, as females are significantly less likely to express sentiments than males. Although female developers were more neutral to their male colleagues than to another female, male developers from three out of the six projects were not only writing more frequent negative comments but also withholding positive encouragements from their female counterparts. Conclusion: Our results provide empirical evidence of another factor behind the negative workplace experiences encountered by the female developers that may be contributing to the diminishing number of females in the SE industry.
1
0
0
0
0
0
Monotonicity patterns and functional inequalities for classical and generalized Wright functions
In this paper our aim is to present the complete monotonicity and convexity properties of the Wright function. As consequences of these results, we present some functional inequalities. Moreover, we derive monotonicity and log-convexity results for the generalized Wright functions. As applications, we present several new inequalities (such as Turán-type inequalities) and we prove some geometric properties for the four-parametric Mittag-Leffler functions.
0
0
1
0
0
0
Multiple VLAD encoding of CNNs for image classification
Despite the effectiveness of convolutional neural networks (CNNs), especially in image classification tasks, the effect of convolutional features on learned representations is still limited. They mostly focus on the salient objects of the images, but ignore the variation information from clutter and local regions. In this paper, we propose a special framework, the multiple VLAD encoding method with CNN features for image classification. Furthermore, in order to improve the performance of the VLAD encoding method, we explore the multiplicity of VLAD encoding with the extension of three kinds of encoding algorithms: the VLAD-SA, VLAD-LSA and VLAD-LLC methods. Finally, we apply the spatial pyramid patch (SPM) scheme to the VLAD encoding to add the spatial information of the CNN features. In particular, the power of SPM leads our framework to yield better performance compared to existing methods.
1
0
0
0
0
0
Index of Dirac operators and classification of topological insulators
Real and complex Clifford bundles and Dirac operators defined on them are considered. By using the index theorems of Dirac operators, a table of topological invariants is constructed from the Clifford chessboard. Through the relations between K-theory groups, Grothendieck groups and symmetric spaces, the periodic table of topological insulators and superconductors is obtained. This gives the result that the periodic table of real and complex topological phases originates from the Clifford chessboard and index theorems.
0
1
0
0
0
0
Centroid vetting of transiting planet candidates from the Next Generation Transit Survey
The Next Generation Transit Survey (NGTS), operating in Paranal since 2016, is a wide-field survey to detect Neptunes and super-Earths transiting bright stars, which are suitable for precise radial velocity follow-up and characterisation. Thereby, its sub-mmag photometric precision and ability to identify false positives are crucial. Particularly, variable background objects blended in the photometric aperture frequently mimic Neptune-sized transits and are costly in follow-up time. These objects can best be identified with the centroiding technique: if the photometric flux is lost off-centre during an eclipse, the flux centroid shifts towards the centre of the target star. Although this method has successfully been employed by the Kepler mission, it has previously not been implemented from the ground. We present a fully-automated centroid vetting algorithm developed for NGTS, enabled by our high-precision auto-guiding. Our method allows the detection of centroid shifts with an average precision of 0.75 milli-pixel, and down to 0.25 milli-pixel for specific targets, for a pixel size of 4.97 arcsec. The algorithm is now part of the NGTS candidate vetting pipeline and automatically employed for all detected signals. Further, we develop a joint Bayesian fitting model for all photometric and centroid data, allowing us to disentangle which object (target or background) is causing the signal, and what its astrophysical parameters are. We demonstrate our method on two NGTS objects of interest. These achievements make NGTS the first ground-based wide-field transit survey ever to successfully apply the centroiding technique for automated candidate vetting, enabling the production of a robust candidate list before follow-up.
0
1
0
0
0
0
Galaxy And Mass Assembly: the evolution of the cosmic spectral energy distribution from z = 1 to z = 0
We present the evolution of the Cosmic Spectral Energy Distribution (CSED) from $z = 1 - 0$. Our CSEDs originate from stacking individual spectral energy distribution fits based on panchromatic photometry from the Galaxy and Mass Assembly (GAMA) and COSMOS datasets in ten redshift intervals with completeness corrections applied. Below $z = 0.45$, we have credible SED fits from 100 nm to 1 mm. Due to the relatively low sensitivity of the far-infrared data, our far-infrared CSEDs contain a mix of predicted and measured fluxes above $z = 0.45$. Our results include appropriate errors to highlight the impact of these corrections. We show that the bolometric energy output of the Universe has declined by a factor of roughly four -- from $5.1 \pm 1.0$ at $z \sim 1$ to $1.3 \pm 0.3 \times 10^{35}~h_{70}$~W~Mpc$^{-3}$ at the current epoch. We show that this decrease is robust to cosmic variance, SED modelling and other various types of error. Our CSEDs are also consistent with an increase in the mean age of stellar populations. We also show that dust attenuation has decreased over the same period, with the photon escape fraction at 150~nm increasing from $16 \pm 3$ at $z \sim 1$ to $24 \pm 5$ per cent at the current epoch, equivalent to a decrease in $A_\mathrm{FUV}$ of 0.4~mag. Our CSEDs account for $68 \pm 12$ and $61 \pm 13$ per cent of the cosmic optical and infrared backgrounds respectively as defined from integrated galaxy counts and are consistent with previous estimates of the cosmic infrared background with redshift.
0
1
0
0
0
0
Large sums of Hecke eigenvalues of holomorphic cusp forms
Let $f$ be a Hecke cusp form of weight $k$ for the full modular group, and let $\{\lambda_f(n)\}_{n\geq 1}$ be the sequence of its normalized Fourier coefficients. Motivated by the problem of the first sign change of $\lambda_f(n)$, we investigate the range of $x$ (in terms of $k$) for which there are cancellations in the sum $S_f(x)=\sum_{n\leq x} \lambda_f(n)$. We first show that $S_f(x)=o(x\log x)$ implies that $\lambda_f(n)<0$ for some $n\leq x$. We also prove that $S_f(x)=o(x\log x)$ in the range $\log x/\log\log k\to \infty$ assuming the Riemann hypothesis for $L(s, f)$, and furthermore that this range is best possible unconditionally. More precisely, we establish the existence of many Hecke cusp forms $f$ of large weight $k$, for which $S_f(x)\gg_A x\log x$, when $x=(\log k)^A.$ Our results are $GL_2$ analogues of work of Granville and Soundararajan for character sums, and could also be generalized to other families of automorphic forms.
0
0
1
0
0
0
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Recent studies have highlighted the vulnerability of deep neural networks (DNNs) to adversarial examples - a visually indistinguishable adversarial image can easily be crafted to cause a well-trained model to misclassify. Existing methods for crafting adversarial examples are based on $L_2$ and $L_\infty$ distortion metrics. However, despite the fact that $L_1$ distortion accounts for the total variation and encourages sparsity in the perturbation, little has been developed for crafting $L_1$-based adversarial examples. In this paper, we formulate the process of attacking DNNs via adversarial examples as an elastic-net regularized optimization problem. Our elastic-net attacks to DNNs (EAD) feature $L_1$-oriented adversarial examples and include the state-of-the-art $L_2$ attack as a special case. Experimental results on MNIST, CIFAR10 and ImageNet show that EAD can yield a distinct set of adversarial examples with small $L_1$ distortion and attains similar attack performance to the state-of-the-art methods in different attack scenarios. More importantly, EAD leads to improved attack transferability and complements adversarial training for DNNs, suggesting novel insights on leveraging $L_1$ distortion in adversarial machine learning and security implications of DNNs.
1
0
0
1
0
0
Playtime Measurement with Survival Analysis
Maximizing product use is a central goal of many businesses, which makes retention and monetization two central analytics metrics in games. Player retention may refer to various duration variables quantifying product use: total playtime or session playtime are popular research targets, and active playtime is well-suited for subscription games. Such research often has the goal of increasing player retention or conversely decreasing player churn. Survival analysis is a framework of powerful tools well suited for retention type data. This paper contributes new methods to game analytics on how playtime can be analyzed using survival analysis without covariates. Survival and hazard estimates provide both a visual and an analytic interpretation of the playtime phenomena as a funnel type nonparametric estimate. Metrics based on the survival curve can be used to aggregate this playtime information into a single statistic. Comparison of survival curves between cohorts provides a scientific AB-test. All these methods work on censored data and enable computation of confidence intervals. This is especially important in time and sample limited data which occurs during game development. Throughout this paper, we illustrate the application of these methods to real world game development problems on the Hipster Sheep mobile game.
1
0
0
1
0
0
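The covariate-free survival estimation described above can be illustrated with a plain Kaplan-Meier estimator on censored playtime data (a minimal sketch of the standard estimator under assumed toy inputs, not code from the paper): each player contributes an observed quit time or a right-censored time if they were still playing when data collection ended.

```python
from collections import Counter

def kaplan_meier(times, observed):
    """Kaplan-Meier survival estimate for censored playtime data.
    times: playtime per player; observed: True if the player quit (event),
    False if still playing at data collection time (right-censored).
    Returns (time, survival probability) pairs at each event time."""
    events = Counter(t for t, o in zip(times, observed) if o)
    n_at_risk = len(times)
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        d = events.get(t, 0)
        if d:
            surv *= 1.0 - d / n_at_risk          # product-limit update
            curve.append((t, surv))
        n_at_risk -= sum(1 for u in times if u == t)  # drop events + censored
    return curve
```

Censored players still count toward the at-risk set until their censoring time, which is how the estimator extracts information from players who have not yet churned.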
Asymptotic formula of the number of Newton polygons
In this paper, we enumerate Newton polygons asymptotically. The number of Newton polygons is computable by a simple recurrence equation but, unexpectedly, the asymptotic formula for its logarithm contains growing oscillatory terms. As these terms come from the non-trivial zeros of the Riemann zeta function, estimating the amplitude of the oscillating part is equivalent to the Riemann hypothesis.
0
0
1
0
0
0
Invariant-based inverse engineering of crane control parameters
By applying invariant-based inverse engineering in the small-oscillations regime, we design the time dependence of the control parameters of an overhead crane (trolley displacement and rope length) to transport a load between two positions at different heights with minimal final energy excitation for a microcanonical ensemble of initial conditions. The analogies between ion transport in multisegmented traps or neutral-atom transport in moving optical lattices and load manipulation by cranes open a route for a useful transfer of techniques among these very different fields.
0
1
0
0
0
0
Leaf Space Isometries of Singular Riemannian Foliations and Their Spectral Properties
In this paper, the authors consider leaf spaces of singular Riemannian foliations $\mathcal{F}$ on compact manifolds $M$ and the associated $\mathcal{F}$-basic spectrum on $M$, $spec_B(M, \mathcal{F}),$ counted with multiplicities. Recently, a notion of smooth isometry $\varphi: M_1/\mathcal{F}_1\rightarrow M_2/\mathcal{F}_2$ between the leaf spaces of such singular Riemannian foliations $(M_1,\mathcal{F}_1)$ and $(M_2,\mathcal{F}_2)$ has appeared in the literature. In this paper, the authors provide an example showing that the existence of a smooth isometry of leaf spaces as above is not sufficient to guarantee the equality of $spec_B(M_1,\mathcal{F}_1)$ and $spec_B(M_2,\mathcal{F}_2).$ The authors then prove that if some additional conditions involving the geometry of the leaves are satisfied, then the equality of $spec_B(M_1,\mathcal{F}_1)$ and $spec_B(M_2,\mathcal{F}_2)$ is guaranteed. Consequences and applications to orbifold spectral theory, isometric group actions, and their reductions are also explored.
0
0
1
0
0
0
Backward Monte-Carlo applied to muon transport
We discuss a backward Monte-Carlo technique for the muon transport problem, with emphasis on its application in muography. Backward Monte-Carlo allows exclusive sampling of a final state by reversing the simulation flow. In practice it can be made analogous to an adjoint Monte-Carlo, though it is more versatile for muon transport. A backward Monte-Carlo was implemented as a dedicated muon transport library: PUMAS. It is shown for case studies relevant to muography imaging that the implementations of the forward and backward Monte-Carlo schemes agree to better than 1%.
0
1
0
0
0
0
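The idea of reversing the simulation flow can be illustrated on a toy 1-D transport problem (a generic sketch with a symmetric step kernel; this is not the PUMAS implementation, and the source, detector, and step parameters are all assumptions). A forward scheme wastes most samples when the detector is small, while the backward scheme samples the final state exclusively, reverses the steps, and weights by the source density:

```python
import numpy as np

def forward_mc(n_samples, n_steps=4, det=(-0.1, 0.1), seed=0):
    """Forward scheme: sample particles from a N(0, 1) source, apply the
    transport steps, and count arrivals inside a small detector interval."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)                       # source positions
    x += rng.normal(0.0, 0.5, (n_steps, n_samples)).sum(axis=0)
    return np.mean((x > det[0]) & (x < det[1]))

def backward_mc(n_samples, n_steps=4, det=(-0.1, 0.1), seed=1):
    """Backward scheme: start from final states sampled exclusively inside the
    detector, reverse the (symmetric) steps, and weight by the source density."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(det[0], det[1], n_samples)               # final states only
    x += rng.normal(0.0, 0.5, (n_steps, n_samples)).sum(axis=0)
    source_pdf = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
    return np.mean((det[1] - det[0]) * source_pdf)
```

Because the step kernel is symmetric, the reversed walk has the same law as the forward one, and both estimators target the same arrival probability; the backward estimator simply never wastes a sample outside the detector.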
Functional importance of noise in neuronal information processing
Noise is an inherent part of neuronal dynamics, and thus of the brain. It can be observed in neuronal activity at different spatiotemporal scales, including in neuronal membrane potentials, local field potentials, electroencephalography, and magnetoencephalography. A central research topic in contemporary neuroscience is to elucidate the functional role of noise in neuronal information processing. Experimental studies have shown that a suitable level of noise may enhance the detection of weak neuronal signals by means of stochastic resonance. In response, theoretical research, based on the theory of stochastic processes, nonlinear dynamics, and statistical physics, has made great strides in elucidating the mechanism and the many benefits of stochastic resonance in neuronal systems. In this perspective, we review recent research dedicated to neuronal stochastic resonance in biophysical mathematical models. We also explore the regulation of neuronal stochastic resonance, and we outline important open questions and directions for future research. A deeper understanding of neuronal stochastic resonance may afford us new insights into the highly impressive information processing in the brain.
0
0
0
0
1
0
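The stochastic-resonance effect reviewed above can be illustrated in its simplest, level-crossing form (a toy sketch, not one of the biophysical models discussed; the threshold, signal amplitude, and noise levels are assumptions): a subthreshold periodic signal never crosses threshold on its own, but a moderate amount of noise produces threshold crossings that track the signal.

```python
import numpy as np

def threshold_crossings(noise_sd, seed=0, threshold=1.0, amp=0.6, n=5000):
    """Count threshold crossings of a subthreshold sinusoid plus Gaussian noise.
    With amp < threshold, the noise-free signal is never detected; added noise
    enables detections, preferentially near the signal's peaks."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    signal = amp * np.sin(2.0 * np.pi * t / 100.0)   # peak 0.6 < threshold 1.0
    noisy = signal + noise_sd * rng.standard_normal(n)
    return int(np.sum(noisy > threshold))
```

In the full stochastic-resonance picture a suitable intermediate noise level maximizes the signal-to-noise ratio of the response; this sketch only shows the first ingredient, namely that noise enables detection of an otherwise invisible weak signal.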
Stochastic Variance Reduction Methods for Policy Evaluation
Policy evaluation is a crucial step in many reinforcement-learning procedures: it estimates a value function that predicts the long-term value of states under a given policy. In this paper, we focus on policy evaluation with linear function approximation over a fixed dataset. We first transform the empirical policy evaluation problem into a (quadratic) convex-concave saddle-point problem, and then present a primal-dual batch gradient method, as well as two stochastic variance reduction methods, for solving the problem. These algorithms scale linearly in both sample size and feature dimension. Moreover, they achieve linear convergence even when the saddle-point problem has only strong concavity in the dual variables but no strong convexity in the primal variables. Numerical experiments on benchmark problems demonstrate the effectiveness of our methods.
1
0
1
1
0
0
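The saddle-point reformulation described above can be sketched in a few lines (a batch primal-dual gradient toy on a synthetic linear system; this is not the paper's variance-reduced algorithms, and the matrix, step size, and iteration count are assumptions). For linear policy evaluation one seeks $\theta$ with $A\theta = b$, which is the solution of $\min_\theta \max_w\, w^\top(b - A\theta) - \tfrac{1}{2}\|w\|^2$; note the objective is strongly concave in the dual $w$ but not strongly convex in the primal $\theta$, matching the regime mentioned above.

```python
import numpy as np

def primal_dual_policy_eval(A, b, eta=0.1, steps=2000):
    """Batch primal-dual gradient on
        min_theta max_w  w^T (b - A theta) - 0.5 ||w||^2,
    whose saddle point recovers the solution of A theta = b
    (e.g. a linear policy-evaluation fixed point)."""
    d = A.shape[1]
    theta, w = np.zeros(d), np.zeros(d)
    for _ in range(steps):
        g_theta = -A.T @ w               # dL/dtheta (descend)
        g_w = b - A @ theta - w          # dL/dw (ascend)
        theta = theta - eta * g_theta
        w = w + eta * g_w
    return theta
```

Maximizing over $w$ in closed form gives $w^* = b - A\theta$ and the objective $\tfrac{1}{2}\|b - A\theta\|^2$, so the simultaneous primal-dual updates converge to the least-squares solution when $A$ has full rank.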