Dataset schema:
title: string (length 7 to 239)
abstract: string (length 7 to 2.76k)
cs: int64 (0 or 1)
phy: int64 (0 or 1)
math: int64 (0 or 1)
stat: int64 (0 or 1)
quantitative biology: int64 (0 or 1)
quantitative finance: int64 (0 or 1)
Even denominator fractional quantum Hall states at an isospin transition in monolayer graphene
Magnetic fields quench the kinetic energy of two dimensional electrons, confining them to highly degenerate Landau levels. In the absence of disorder, the ground state at partial Landau level filling is determined only by Coulomb interactions, leading to a variety of correlation-driven phenomena. Here, we realize a quantum Hall analog of the Néel-to-valence-bond-solid transition within the spin- and sublattice-degenerate monolayer graphene zero energy Landau level by experimentally controlling substrate-induced sublattice symmetry breaking. The transition is marked by unusual isospin transitions in odd denominator fractional quantum Hall states for filling factors $\nu$ near charge neutrality, and the unexpected appearance of incompressible even denominator fractional quantum Hall states at $\nu=\pm1/2$ and $\pm1/4$ associated with pairing between composite fermions on different carbon sublattices.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Reverse approximation of gradient flows as Minimizing Movements: a conjecture by De Giorgi
We consider the Cauchy problem for the gradient flow \begin{equation} \label{eq:81} \tag{$\star$} u'(t)=-\nabla\phi(u(t)),\quad t\ge 0;\quad u(0)=u_0, \end{equation} generated by a continuously differentiable function $\phi:\mathbb H \to \mathbb R$ in a Hilbert space $\mathbb H$ and study the reverse approximation of solutions to ($\star$) by the De Giorgi Minimizing Movement approach. We prove that if $\mathbb H$ has finite dimension and $\phi$ is quadratically bounded from below (in particular if $\phi$ is Lipschitz) then for every solution $u$ to ($\star$) (a problem which may admit infinitely many solutions) there exist perturbations $\phi_\tau:\mathbb H \to \mathbb R \ (\tau>0)$ converging to $\phi$ in the Lipschitz norm such that $u$ can be approximated by the Minimizing Movement scheme generated by the recursive minimization of $\Phi(\tau,U,V):=\frac 1{2\tau}|V-U|^2+ \phi_\tau(V)$: \begin{equation} \label{eq:abstract} \tag{$\star\star$} U_\tau^n\in \operatorname{argmin}_{V\in \mathbb H} \Phi(\tau,U_\tau^{n-1},V)\quad n\in\mathbb N, \quad U_\tau^0:=u_0. \end{equation} We show that the piecewise constant interpolations with time step $\tau > 0$ of all possible selections of solutions $(U_\tau^n)_{n\in\mathbb N}$ to ($\star\star$) converge to $u$ as $\tau\downarrow 0$. This result solves a question raised by Ennio De Giorgi. We also show that even if $\mathbb H$ has infinite dimension the above approximation holds for the distinguished class of minimal solutions to ($\star$), which generate all the other solutions to ($\star$) by time reparametrization.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
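The Minimizing Movement scheme ($\star\star$) can be sketched numerically. This is a minimal illustration, not the paper's construction: it assumes $\mathbb H = \mathbb R$ and the illustrative choice $\phi(v) = v^2/2$, for which each recursive minimization step has the closed-form minimizer $U/(1+\tau)$, and compares the scheme at time $T$ with the exact gradient-flow solution $u(t) = u_0 e^{-t}$.

```python
import math

def minimizing_movement(argmin_step, u0, tau, n_steps):
    """Minimizing Movement: U^n = argmin_V |V - U^{n-1}|^2/(2 tau) + phi(V)."""
    U = [u0]
    for _ in range(n_steps):
        U.append(argmin_step(U[-1], tau))
    return U

# Illustrative choice (not from the paper): phi(v) = v^2 / 2 on H = R.
# The step minimizes (v - u)^2/(2 tau) + v^2/2, whose unique minimizer is u/(1+tau).
def quadratic_step(u, tau):
    return u / (1.0 + tau)

u0, T = 1.0, 1.0
for tau in (0.1, 0.01, 0.001):
    n = int(T / tau)
    U = minimizing_movement(quadratic_step, u0, tau, n)
    # Piecewise-constant interpolation evaluated at t = T vs exact u(T) = u0 * exp(-T)
    err = abs(U[n] - u0 * math.exp(-T))
    print(f"tau={tau}: |U_tau(T) - u(T)| = {err:.2e}")
```

The error at $T=1$ shrinks roughly linearly in $\tau$, consistent with convergence of the piecewise constant interpolations as $\tau\downarrow 0$.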
Linking de novo assembly results with long DNA reads by dnaasm-link application
Currently, third-generation sequencing techniques, which yield much longer DNA reads than next-generation sequencing technologies, are becoming more and more popular. There are many ways to combine data from next-generation and third-generation sequencing. Herein, we present a new application called dnaasm-link for linking contigs, the result of \textit{de novo} assembly of second-generation sequencing data, with long DNA reads. Our tool includes an integrated module that fills gaps with a suitable fragment of an appropriate long DNA read, which improves the consistency of the resulting DNA sequences. This feature is very important, in particular for complex DNA regions, as shown in the paper. Finally, our implementation outperforms other state-of-the-art tools in terms of speed and memory requirements, which may enable its use for organisms with large genomes, something not possible with existing applications. The presented application has several advantages: (i) significant memory optimization and reduced computation time; (ii) filling of gaps with the appropriate fragment of a specified long DNA read; (iii) a reduced number of spanned and unspanned gaps in existing genome drafts. The application is freely available to all users under the GNU Library or Lesser General Public License version 3.0 (LGPLv3). The demo application, docker image and source code are available at this http URL.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
ReBNet: Residual Binarized Neural Network
This paper proposes ReBNet, an end-to-end framework for training reconfigurable binary neural networks on software and developing efficient accelerators for execution on FPGA. Binary neural networks offer an intriguing opportunity for deploying large-scale deep learning models on resource-constrained devices. Binarization reduces the memory footprint and replaces the power-hungry matrix multiplication with lightweight XnorPopcount operations. However, binary networks suffer from degraded accuracy compared to their fixed-point counterparts. We show that state-of-the-art methods for optimizing binary network accuracy significantly increase implementation cost and complexity. To compensate for the degraded accuracy while adhering to the simplicity of binary networks, we devise the first reconfigurable scheme that can adjust the classification accuracy based on the application. Our proposition improves the classification accuracy by representing features with multiple levels of residual binarization. Unlike previous methods, our approach does not exacerbate the area cost of the hardware accelerator. Instead, it provides a tradeoff between throughput and accuracy while the area overhead of multi-level binarization is negligible.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Prime geodesic theorem for the modular surface
Under the generalized Lindelöf hypothesis, the exponent in the error term of the prime geodesic theorem for the modular surface is reduced to $\frac{5}{8}+\varepsilon $ outside a set of finite logarithmic measure.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Multiple regimes and coalescence timescales for massive black hole pairs: the critical role of galaxy formation physics
We discuss the latest results of numerical simulations following the orbital decay of massive black hole pairs in galaxy mergers. We highlight important differences between gas-poor and gas-rich hosts, and between orbital evolution taking place at high redshift as opposed to low redshift. Two effects have a huge impact and are rather novel in the context of massive black hole binaries. The first is the increase in characteristic density of galactic nuclei of merger remnants, as galaxies are more compact at high redshift due to the way dark halo collapse depends on redshift. This leads naturally to hardening timescales due to 3-body encounters that should decrease by two orders of magnitude up to $z=4$. It explains naturally the short binary coalescence timescale, $\sim 10$ Myr, found in novel cosmological simulations that follow binary evolution from galactic to milliparsec scales. The second one is the inhomogeneity of the interstellar medium in massive gas-rich disks at high redshift. In the latter, star-forming clumps 1-2 orders of magnitude more massive than local Giant Molecular Clouds (GMCs) can scatter massive black holes out of the disk plane via gravitational perturbations and direct encounters. This renders the character of orbital decay inherently stochastic, often increasing orbital decay timescales by as much as a Gyr. At low redshift a similar regime is present at scales of $1-10$ pc inside Circumnuclear Gas Disks (CNDs). In CNDs only massive black holes with masses below $10^7 M_{\odot}$ can be significantly perturbed. They decay to sub-pc separations in up to $\sim 10^8$ yr, rather than in just a few million years as in a smooth CND. Finally, implications for building robust forecasts of LISA event rates are discussed.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Molecular Gas Environment in the 20 km s$^{-1}$ Cloud in the Central Molecular Zone
We recently reported a population of protostellar candidates in the 20 km s$^{-1}$ cloud in the Central Molecular Zone of the Milky Way, traced by H$_2$O masers in gravitationally bound dense cores. In this paper, we report high-angular-resolution ($\sim$3'') molecular line studies of the environment of star formation in this cloud. Maps of various molecular line transitions as well as the continuum at 1.3 mm are obtained using the Submillimeter Array. Five NH$_3$ inversion lines and the 1.3 cm continuum are observed with the Karl G. Jansky Very Large Array. The interferometric observations are complemented with single-dish data. We find that, among the detected lines, the CH$_3$OH, SO, and HNCO lines, which are usually shock tracers, are the best spatially correlated with the compact dust emission from dense cores. These lines also show enhancement in intensities with respect to SiO intensities toward the compact dust emission, suggesting the presence of slow shocks or hot cores in these regions. We find gas temperatures of $\gtrsim$100 K at 0.1-pc scales based on RADEX modelling of the H$_2$CO and NH$_3$ lines. Although no strong correlations between temperatures and linewidths/H$_2$O maser luminosities are found, in high-angular-resolution maps we notice several candidate shock heated regions offset from any dense cores, as well as signatures of localized heating by protostars in several dense cores. Our findings suggest that at 0.1-pc scales in this cloud, star formation and strong turbulence may together affect the chemistry and temperature of the molecular gas.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Spatio-Temporal Backpropagation for Training High-performance Spiking Neural Networks
Compared with artificial neural networks (ANNs), spiking neural networks (SNNs) are promising for exploring brain-like behaviors, since spikes can encode more spatio-temporal information. Although pre-training from an ANN or direct training based on backpropagation (BP) makes supervised training of SNNs possible, these methods only exploit the networks' spatial-domain information, which leads to a performance bottleneck and requires many complicated training skills. Another fundamental issue is that spike activity is naturally non-differentiable, which causes great difficulty in training SNNs. To this end, we build an iterative LIF model that is more amenable to gradient-descent training. By simultaneously considering the layer-by-layer spatial domain (SD) and the timing-dependent temporal domain (TD) in the training phase, together with an approximated derivative for the spike activity, we propose a spatio-temporal backpropagation (STBP) training framework that requires no complicated additional techniques. We achieve the best multi-layer perceptron (MLP) performance among existing state-of-the-art algorithms on the static MNIST and dynamic N-MNIST datasets, as well as on a custom object detection dataset. This work provides a new perspective on exploring high-performance SNNs for future brain-like computing paradigms with rich spatio-temporal dynamics.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
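The two ingredients of such training — an iterative LIF neuron and an approximated derivative for the non-differentiable spike function — can be sketched as follows. This is a hedged illustration, not the paper's exact model: the decay constant, threshold, rectangular surrogate-window width, and input sequence are all invented for the example.

```python
# Illustrative constants (assumptions, not from the paper).
V_TH, DECAY, WINDOW = 1.0, 0.9, 0.5

def spike(v):
    # Forward pass: Heaviside step around the firing threshold.
    return 1.0 if v >= V_TH else 0.0

def surrogate_grad(v):
    # Backward pass: approximate d spike / d v by a rectangular window
    # centred at the threshold, so gradients can flow through spikes.
    return 1.0 / WINDOW if abs(v - V_TH) < WINDOW / 2 else 0.0

def lif_step(v_prev, s_prev, x):
    # Iterative LIF update: leak the membrane, reset after a spike, add input.
    v = DECAY * v_prev * (1.0 - s_prev) + x
    s = spike(v)
    return v, s

v, s = 0.0, 0.0
for t, x in enumerate([0.4, 0.4, 0.4, 0.4]):
    v, s = lif_step(v, s, x)
    print(f"t={t}: v={v:.3f} spike={int(s)} surrogate_grad={surrogate_grad(v):.2f}")
```

A constant sub-threshold input accumulates over time steps until the neuron fires once and resets, which is the temporal-domain behavior that the surrogate derivative makes trainable.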
Inverse of a Special Matrix and Application
Matrix inversion is an interesting topic in algebra. However, determining the inverse of a given matrix requires many computational tools and much time when the matrix is large. In this paper, we derive a closed form for the inverse of an interesting matrix that has many applications in communication systems. Based on this closed form, the channel capacity of a communication system can be determined in closed form via the error rate parameter $\alpha$.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Incompressible fluid problems on embedded surfaces: Modeling and variational formulations
Governing equations of motion for a viscous incompressible material surface are derived from the balance laws of continuum mechanics. The surface is treated as a time-dependent smooth orientable manifold of codimension one in an ambient Euclidean space. We use elementary tangential calculus to derive the governing equations in terms of exterior differential operators in Cartesian coordinates. The resulting equations can be seen as the Navier-Stokes equations posed on an evolving manifold. We consider a splitting of the surface Navier-Stokes system into coupled equations for the tangential and normal motions of the material surface. We then restrict ourselves to the case of a geometrically stationary manifold of codimension one embedded in $\Bbb{R}^n$. For this case, we present new well-posedness results for the simplified surface fluid model consisting of the surface Stokes equations. Finally, we propose and analyze several alternative variational formulations for this surface Stokes problem, including constrained and penalized formulations, which are convenient for Galerkin discretization methods.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Gaiotto's Lagrangian subvarieties via loop groups
The purpose of this note is to give a simple proof of the fact that a certain substack, defined in [2], of the moduli stack $T^{\ast}Bun_G(\Sigma)$ of Higgs bundles over a curve $\Sigma$, for a connected, simply connected semisimple group $G$, possesses a Lagrangian structure. The substack, roughly speaking, consists of images under the moment map of global sections of principal $G$-bundles over $\Sigma$ twisted by a smooth symplectic variety with a Hamiltonian $G$-action.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Geohyperbolic Routing and Addressing Schemes
The key requirement to routing in any telecommunication network, and especially in Internet-of-Things (IoT) networks, is scalability. Routing must route packets between any source and destination in the network without incurring unmanageable routing overhead that grows quickly with increasing network size and dynamics. Here we present an addressing scheme and a coupled network topology design scheme that guarantee essentially optimal routing scalability. The FIB sizes are as small as they can be, equal to the number of adjacencies a node has, while the routing control overhead is minimized as nearly zero routing control messages are exchanged even upon catastrophic failures in the network. The key new ingredient is the addressing scheme, which is purely local, based only on geographic coordinates of nodes and a centrality measure, and does not require any sophisticated non-local computations or global network topology knowledge for network embedding. The price paid for these benefits is that network topology cannot be arbitrary but should follow a specific design, resulting in Internet-like topologies. The proposed schemes can be most easily deployed in overlay networks, and also in other network deployments, where geolocation information is available, and where network topology can grow following the design specifications.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
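The core routing primitive behind such schemes is greedy geometric forwarding: each node hands the packet to whichever neighbor is closest, in hyperbolic distance, to the destination's coordinates, so the only state a node keeps is its adjacency list. The sketch below is a generic illustration under that assumption — the toy topology, coordinates, and the native-polar-coordinate distance formula are standard hyperbolic-geometry ingredients, not details taken from this paper.

```python
import math

def hyp_dist(a, b):
    # Hyperbolic distance between points given in native polar
    # coordinates (r, theta) on the hyperbolic plane.
    r1, t1 = a
    r2, t2 = b
    dtheta = math.pi - abs(math.pi - abs(t1 - t2) % (2 * math.pi))
    x = math.cosh(r1) * math.cosh(r2) - math.sinh(r1) * math.sinh(r2) * math.cos(dtheta)
    return math.acosh(max(x, 1.0))

def greedy_route(graph, coords, src, dst):
    # Greedy geometric forwarding: each node hands the packet to the
    # neighbor closest (in hyperbolic distance) to the destination;
    # delivery fails if no neighbor improves on the current node.
    path = [src]
    while path[-1] != dst:
        here = path[-1]
        nxt = min(graph[here], key=lambda n: hyp_dist(coords[n], coords[dst]))
        if hyp_dist(coords[nxt], coords[dst]) >= hyp_dist(coords[here], coords[dst]):
            return path, False  # stuck in a local minimum
        path.append(nxt)
    return path, True

# Toy topology (illustrative, not from the paper): a central hub with
# three peripheral nodes; coordinates are (r, theta).
coords = {"hub": (0.0, 0.0), "a": (2.0, 0.0), "b": (2.0, 2.0), "c": (2.0, 4.0)}
graph = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
path, ok = greedy_route(graph, coords, "a", "c")
print(path, ok)
```

Forwarding decisions here use only local coordinates and neighbor lists, which is why the FIB size equals the node's number of adjacencies.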
Conical: an extended module for computing a numerically satisfactory pair of solutions of the differential equation for conical functions
Conical functions appear in a large number of applications in physics and engineering. In this paper we describe an extension of our module CONICAL for the computation of conical functions. Specifically, the module now includes a routine for computing the function ${\rm R}^{m}_{-\frac{1}{2}+i\tau}(x)$, a real-valued numerically satisfactory companion of the function ${\rm P}^m_{-\tfrac12+i\tau}(x)$ for $x>1$. In this way, a natural basis for solving Dirichlet problems bounded by conical domains is provided.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Online to Offline Conversions, Universality and Adaptive Minibatch Sizes
We present an approach towards convex optimization that relies on a novel scheme which converts online adaptive algorithms into offline methods. In the offline optimization setting, our derived methods are shown to obtain favourable adaptive guarantees which depend on the harmonic sum of the queried gradients. We further show that our methods implicitly adapt to the objective's structure: in the smooth case fast convergence rates are ensured without any prior knowledge of the smoothness parameter, while still maintaining guarantees in the non-smooth setting. Our approach has a natural extension to the stochastic setting, resulting in a lazy version of SGD (stochastic GD), where minibatches are chosen \emph{adaptively} depending on the magnitude of the gradients, thus providing a principled approach towards choosing minibatch sizes.
Labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Kondo Length in Bosonic Lattices
Motivated by the fact that the low-energy properties of the Kondo model can be effectively simulated in spin chains, we study the realization of the Kondo effect with bond impurities in ultracold bosonic lattices at half-filling. After presenting a discussion of the effective theory and of the mapping of the bosonic chain onto a lattice spin Hamiltonian, we provide estimates for the Kondo length as a function of the parameters of the bosonic model. We point out that the Kondo length can be extracted from the integrated real space correlation functions, which are experimentally accessible quantities in experiments with cold atoms.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
RVP-FLMS : A Robust Variable Power Fractional LMS Algorithm
In this paper, we propose an adaptive framework for the variable power of the fractional least mean square (FLMS) algorithm. The proposed algorithm, named robust variable power FLMS (RVP-FLMS), dynamically adapts the fractional power of the FLMS to achieve a high convergence rate with low steady-state error. For evaluation purposes, the problems of system identification and channel equalization are considered. The experiments clearly show that the proposed approach achieves a better convergence rate and lower steady-state error compared to the FLMS. The MATLAB code for the related simulation is available online at this https URL.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
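For readers unfamiliar with the baseline being extended: FLMS and RVP-FLMS build on the classical least-mean-squares (LMS) adaptive filter, whose update for system identification is sketched below. This is the plain LMS rule only — the fractional-power term that FLMS adds is not implemented here — and the unknown system coefficients, step size, and signal lengths are illustrative choices.

```python
import random

def lms_identify(x, d, n_taps, mu):
    # Baseline least-mean-squares adaptive filter (the algorithm that
    # FLMS and RVP-FLMS extend); w is adapted to drive the error e to zero.
    w = [0.0] * n_taps
    for i in range(n_taps, len(x)):
        window = x[i - n_taps:i][::-1]           # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[i] - y                              # instantaneous error
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
    return w

random.seed(0)
true_w = [0.5, -0.3, 0.1]                         # unknown system (illustrative)
x = [random.gauss(0, 1) for _ in range(5000)]     # white input signal
d = [0.0] * len(x)
for i in range(3, len(x)):
    d[i] = sum(true_w[k] * x[i - 1 - k] for k in range(3))
w = lms_identify(x, d, n_taps=3, mu=0.05)
print([round(wi, 2) for wi in w])
```

With a noiseless desired signal, the estimated taps converge to the true system coefficients; RVP-FLMS aims at the same task with a faster, power-adapted update.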
Dichotomy for Digraph Homomorphism Problems (two algorithms)
Update: An issue has been found in the correctness of our algorithm, and we are working to resolve the issue. Until a resolution is found, we retract our main claim that our approach gives a combinatorial solution to the CSP conjecture. We remain hopeful that we can resolve the issues. We thank Ross Willard for carefully checking the algorithm and pointing out the mistake in the version of this manuscript. We briefly explain one issue at the beginning of the text, and leave the rest of the manuscript intact for the moment. Ross Willard is posting a more involved description of a counter-example to the algorithm in the present manuscript. We have an updated manuscript that corrects some issues while still not arriving at a full solution; we will keep this private as long as unresolved issues remain. Previous abstract: We consider the problem of finding a homomorphism from an input digraph G to a fixed digraph H. We show that if H admits a weak-near-unanimity polymorphism $\phi$ then deciding whether G admits a homomorphism to H (HOM(H)) is polynomial time solvable. This confirms the conjecture of Maroti and McKenzie, and consequently implies the validity of the celebrated dichotomy conjecture due to Feder and Vardi. We transform the problem into an instance of the list homomorphism problem where initially all the lists are full (contain all the vertices of H). Then we use the polymorphism $\phi$ as a guide to reduce the lists to singleton lists, which yields a homomorphism if one exists.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Macro-molecular data storage with petabyte/cm^3 density, highly parallel read/write operations, and genuine 3D storage capability
Digital information can be encoded in the building-block sequence of macro-molecules, such as RNA and single-stranded DNA. Methods of "writing" and "reading" macromolecular strands are currently available, but they are slow and expensive. In an ideal molecular data storage system, routine operations such as write, read, erase, store, and transfer must be done reliably and at high speed within an integrated chip. As a first step toward demonstrating the feasibility of this concept, we report preliminary results of DNA readout experiments conducted in miniaturized chambers that are scalable to even smaller dimensions. We show that translocation of a single-stranded DNA molecule (consisting of 50 adenosine bases followed by 100 cytosine bases) through an ion-channel yields a characteristic signal that is attributable to the 2-segment structure of the molecule. We also examine the dependence of the rate and speed of molecular translocation on the adjustable parameters of the experiment.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Design and implementation of lighting control system using battery-less wireless human detection sensor networks
Artificial lighting is responsible for a large portion of total energy consumption and has great potential for energy saving. This paper designs an LED light control algorithm based on user localization using multiple battery-less binary human detection sensors. The proposed lighting control system focuses on reducing office lighting energy consumption while satisfying users' illumination requirements. Most current lighting control systems use infrared human detection sensors, but their poor detection probability, especially for a static user, makes it difficult to realize comfortable and effective lighting control. To improve the detection probability of each sensor, we propose locating sensors as close to each user as possible by using a battery-less wireless sensor network, in which all sensors can be placed freely in the space with high energy stability. We also propose a multi-sensor-based user localization algorithm to capture each user's position more accurately and realize fine-grained lighting control that works even with static users. The system is implemented in an indoor office environment in a pilot project. A verification experiment is conducted by measuring the practical illumination and power consumption. The performance agrees with design expectations. It shows that the proposed LED lighting control system significantly reduces energy consumption, by 57% compared to the batch control scheme, and satisfies users' illumination requirements with 100% probability.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
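The pipeline of multi-sensor localization followed by per-luminaire dimming can be sketched in a few lines. This is a hedged simplification under assumptions of our own (the paper's actual algorithm may differ): the user position is taken as the centroid of the binary sensors currently reporting a detection, and luminaires within a fixed radius of that estimate run at full brightness while the rest are dimmed.

```python
import math

def localize(sensors, detections):
    # Multi-sensor user localization, simplified to a centroid of the
    # sensors whose binary detection flag is set (an assumption, not
    # necessarily the paper's algorithm).
    hits = [pos for pos, fired in zip(sensors, detections) if fired]
    if not hits:
        return None
    x = sum(p[0] for p in hits) / len(hits)
    y = sum(p[1] for p in hits) / len(hits)
    return (x, y)

def dim_levels(luminaires, user, full=1.0, low=0.2, radius=2.5):
    # Control rule: lights near the estimated user run at full
    # brightness; the rest are dimmed to save energy.
    if user is None:
        return [low] * len(luminaires)
    return [full if math.dist(l, user) <= radius else low for l in luminaires]

sensors = [(0, 0), (4, 0), (0, 4), (4, 4)]     # sensor positions (metres, illustrative)
luminaires = [(1, 1), (3, 3)]
user = localize(sensors, [1, 1, 0, 0])          # the two south sensors fired
print(user, dim_levels(luminaires, user))
```

Even this crude rule shows the mechanism by which per-user localization lets most luminaires stay dimmed, which is where the reported energy saving comes from.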
Independence in generic incidence structures
We study the theory $T_{m,n}$ of existentially closed incidence structures omitting the complete incidence structure $K_{m,n}$, which can also be viewed as existentially closed $K_{m,n}$-free bipartite graphs. In the case $m = n = 2$, this is the theory of existentially closed projective planes. We give an $\forall\exists$-axiomatization of $T_{m,n}$, show that $T_{m,n}$ does not have a countable saturated model when $m,n\geq 2$, and show that the existence of a prime model for $T_{2,2}$ is equivalent to a longstanding open question about finite projective planes. Finally, we analyze model theoretic notions of complexity for $T_{m,n}$. We show that $T_{m,n}$ is NSOP$_1$, but not simple when $m,n\geq 2$, and we show that $T_{m,n}$ has weak elimination of imaginaries but not full elimination of imaginaries. These results rely on combinatorial characterizations of various notions of independence, including algebraic independence, Kim independence, and forking independence.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Propagation of regularity for the MHD system in optimal Sobolev space
We study the problem of propagation of regularity of solutions to the incompressible viscous non-resistive magneto-hydrodynamics system. According to scaling, the Sobolev space $H^{\frac n2-1}(\mathbb R^n)\times H^{\frac n2}(\mathbb R^n)$ is critical for the system. We show that if a weak solution $(u(t),b(t))$ is in $H^{s}(\mathbb R^n)\times H^{s+1}(\mathbb R^n)$ with $s>\frac n2-1$ at a certain time $t_0$, then it will stay in the space for a short time, provided the initial velocity $u(0)\in H^s(\mathbb R^n)$. In the case that the uniqueness of weak solution in $H^{s}(\mathbb R^n)\times H^{s+1}(\mathbb R^n)$ is known, the assumption of $u(0)\in H^s(\mathbb R^n)$ is not necessary.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
On the link between atmospheric cloud parameters and cosmic rays
We investigate the behavior of cosmic rays through the scaling features of their time series. Our analysis is based on cosmic ray observations made at four neutron monitor stations in Athens (Greece), Jung (Switzerland) and Oulu (Finland), for the period 2000 to early 2017. Each of these datasets was analyzed using Detrended Fluctuation Analysis (DFA) and Multifractal Detrended Fluctuation Analysis (MF-DFA) in order to investigate intrinsic properties, such as self-similarity and the spectrum of singularities. The main result is that the cosmic ray time series at all the neutron monitor stations exhibit positive long-range correlations (of 1/f type) with multifractal behavior. We also investigate the possible existence of similar scaling features in the time series of other meteorological parameters that are closely associated with cosmic rays, such as parameters describing physical properties of clouds.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
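DFA, the first of the two methods used above, is compact enough to sketch end to end: integrate the mean-subtracted series into a profile, linearly detrend it inside windows of each scale, and read the scaling exponent off the log-log slope of the RMS fluctuation versus window size. The implementation below is a generic first-order DFA on synthetic white noise (where the exponent should come out near 0.5); the signal and scale choices are illustrative, not the paper's data.

```python
import math
import random

def dfa_alpha(x, scales):
    # First-order Detrended Fluctuation Analysis: returns the scaling
    # exponent alpha (0.5 for uncorrelated noise, >0.5 for positive
    # long-range correlations).
    mean = sum(x) / len(x)
    profile, s = [], 0.0
    for v in x:                       # integrate the mean-subtracted series
        s += v - mean
        profile.append(s)
    logs_n, logs_f = [], []
    for n in scales:
        rss, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            t = list(range(n))
            # Ordinary least-squares linear detrend inside the window.
            tm, sm = (n - 1) / 2.0, sum(seg) / n
            cov = sum((ti - tm) * (si - sm) for ti, si in zip(t, seg))
            var = sum((ti - tm) ** 2 for ti in t)
            slope = cov / var
            rss += sum((si - (sm + slope * (ti - tm))) ** 2 for ti, si in zip(t, seg))
            count += n
        logs_n.append(math.log(n))
        logs_f.append(0.5 * math.log(rss / count))   # log of RMS fluctuation F(n)
    # Slope of log F(n) vs log n is the scaling exponent alpha.
    lm, fm = sum(logs_n) / len(logs_n), sum(logs_f) / len(logs_f)
    num = sum((a - lm) * (b - fm) for a, b in zip(logs_n, logs_f))
    den = sum((a - lm) ** 2 for a in logs_n)
    return num / den

random.seed(1)
white = [random.gauss(0, 1) for _ in range(8192)]
alpha = dfa_alpha(white, [16, 32, 64, 128, 256])
print(f"alpha = {alpha:.2f}")  # uncorrelated noise should give alpha near 0.5
```

An exponent above 0.5 on real neutron-monitor data is what the abstract means by positive long-range (1/f-type) correlations.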
To Compress, or Not to Compress: Characterizing Deep Learning Model Compression for Embedded Inference
The recent advances in deep neural networks (DNNs) make them attractive for embedded systems. However, it can take a long time for DNNs to make an inference on resource-constrained computing devices. Model compression techniques can address the computation issue of deep inference on embedded devices. This technique is highly attractive, as it does not rely on specialized hardware, or computation-offloading that is often infeasible due to privacy concerns or high latency. However, it remains unclear how model compression techniques perform across a wide range of DNNs. To design efficient embedded deep learning solutions, we need to understand their behaviors. This work develops a quantitative approach to characterize model compression techniques on a representative embedded deep learning architecture, the NVIDIA Jetson TX2. We perform extensive experiments by considering 11 influential neural network architectures from the image classification and the natural language processing domains. We experimentally show how two mainstream compression techniques, data quantization and pruning, perform on these network architectures and the implications of compression techniques for model storage size, inference time, energy consumption and performance metrics. We demonstrate that there are opportunities to achieve fast deep inference on embedded systems, but one must carefully choose the compression settings. Our results provide insights on when and how to apply model compression techniques and guidelines for designing efficient embedded deep learning systems.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
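Of the two compression techniques studied, data quantization is easy to illustrate. The sketch below shows the simplest variant — symmetric per-tensor post-training quantization of float weights to int8 — as a generic example; the weight values are invented, and the study itself covers more elaborate schemes.

```python
def quantize_int8(weights):
    # Symmetric post-training quantization: map floats to int8 with a
    # single per-tensor scale (the simplest data-quantization scheme).
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.8, -0.52, 0.031, -0.004, 0.25]      # illustrative values
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q)
print(f"scale={scale:.5f}  max reconstruction error={max_err:.5f}")
# Storage drops 4x (float32 -> int8); per-weight error is bounded by scale/2.
```

The 4x storage reduction against a bounded reconstruction error is exactly the storage-versus-accuracy trade-off the characterization quantifies on real networks.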
Universal Scaling Laws for Correlation Spreading in Quantum Systems with Short- and Long-Range Interactions
We study the spreading of information in a wide class of quantum systems, with variable-range interactions. We show that, after a quench, it generally features a double structure, whose scaling laws are related to a set of universal microscopic exponents that we determine. When the system supports excitations with a finite maximum velocity, the spreading shows a twofold ballistic behavior. While the correlation edge spreads with a velocity equal to twice the maximum group velocity, the dominant correlation maxima propagate with a different velocity that we derive. When the maximum group velocity diverges, as realizable with long-range interactions, the correlation edge features a slower-than-ballistic motion. The motion of the maxima is, instead, either faster-than-ballistic, for gapless systems, or ballistic, for gapped systems. The phenomenology that we unveil here provides a unified framework, which encompasses existing experimental observations with ultracold atoms and ions. It also paves the way to simple extensions of those experiments to observe the structures we describe in their full generality.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Examples of plane rational curves with two Galois points in positive characteristic
We present four new examples of plane rational curves with two Galois points in positive characteristic, and determine the number of Galois points for three of them. Our results are related to a problem on projective linear groups.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Optimal Kullback-Leibler Aggregation in Mixture Density Estimation by Maximum Likelihood
We study the maximum likelihood estimator of density of $n$ independent observations, under the assumption that it is well approximated by a mixture with a large number of components. The main focus is on statistical properties with respect to the Kullback-Leibler loss. We establish risk bounds taking the form of sharp oracle inequalities both in deviation and in expectation. A simple consequence of these bounds is that the maximum likelihood estimator attains the optimal rate $((\log K)/n)^{1/2}$, up to a possible logarithmic correction, in the problem of convex aggregation when the number $K$ of components is larger than $n^{1/2}$. More importantly, under the additional assumption that the Gram matrix of the components satisfies the compatibility condition, the obtained oracle inequalities yield the optimal rate in the sparsity scenario. That is, if the weight vector is (nearly) $D$-sparse, we get the rate $(D\log K)/n$. As a natural complement to our oracle inequalities, we introduce the notion of nearly-$D$-sparse aggregation and establish matching lower bounds for this type of aggregation.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Non-asymptotic theory for nonparametric testing
We consider nonparametric testing in a non-asymptotic framework. Our statistical guarantees are exact in the sense that Type I and II errors are controlled for any finite sample size. Meanwhile, one proposed test is shown to achieve minimax optimality in the asymptotic sense. An important consequence of this non-asymptotic theory is a new and practically useful formula for selecting the optimal smoothing parameter in nonparametric testing. The leading example in this paper is smoothing spline models under Gaussian errors. The results obtained therein can be further generalized to the kernel ridge regression framework under possibly non-Gaussian errors. Simulations demonstrate that our proposed test improves over the conventional asymptotic test when sample size is small to moderate.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Classification of out-of-time-order correlators
The space of n-point correlation functions, for all possible time-orderings of operators, can be computed by a non-trivial path integral contour, which depends on how many time-ordering violations are present in the correlator. These contours, which have come to be known as timefolds, or out-of-time-order (OTO) contours, are a natural generalization of the Schwinger-Keldysh contour (which computes singly out-of-time-ordered correlation functions). We provide a detailed discussion of such higher OTO functional integrals, explaining their general structure, and the myriad ways in which a particular correlation function may be encoded in such contours. Our discussion may be seen as a natural generalization of the Schwinger-Keldysh formalism to higher OTO correlation functions. We provide explicit illustration for low point correlators (n=2,3,4) to exemplify the general statements.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Hierarchical Reinforcement Learning: Approximating Optimal Discounted TSP Using Local Policies
In this work, we provide theoretical guarantees for reward decomposition in deterministic MDPs. Reward decomposition is a special case of Hierarchical Reinforcement Learning, that allows one to learn many policies in parallel and combine them into a composite solution. Our approach builds on mapping this problem into a Reward Discounted Traveling Salesman Problem, and then deriving approximate solutions for it. In particular, we focus on approximate solutions that are local, i.e., solutions that only observe information about the current state. Local policies are easy to implement and do not require substantial computational resources as they do not perform planning. While local deterministic policies, like Nearest Neighbor, are being used in practice for hierarchical reinforcement learning, we propose three stochastic policies that guarantee better performance than any deterministic policy.
0
0
0
1
0
0
End-to-End Musical Key Estimation Using a Convolutional Neural Network
We present an end-to-end system for musical key estimation, based on a convolutional neural network. The proposed system not only outperforms existing key estimation methods proposed in the academic literature; it is also capable of learning a unified model for diverse musical genres that performs comparably to existing systems specialised for specific genres. Our experiments confirm that different genres do differ in their interpretation of tonality, and thus a system tuned, e.g., for pop music performs poorly on pieces of electronic music. They also reveal that such cross-genre setups evoke specific types of error (predicting the relative or parallel minor). However, using the data-driven approach proposed in this paper, we can train models that deal with multiple musical styles adequately, and without major losses in accuracy.
1
0
0
0
0
0
Approximate Optimal Designs for Multivariate Polynomial Regression
We introduce a new approach aiming at computing approximate optimal designs for multivariate polynomial regressions on compact (semi-algebraic) design spaces. We use the moment-sum-of-squares hierarchy of semidefinite programming problems to solve numerically the approximate optimal design problem. The geometry of the design is recovered via semidefinite programming duality theory. This article shows that the hierarchy converges to the approximate optimal design as the order of the hierarchy increases. Furthermore, we provide a dual certificate ensuring finite convergence of the hierarchy and showing that the approximate optimal design can be computed numerically with our method. As a byproduct, we revisit the equivalence theorem of the experimental design theory: it is linked to the Christoffel polynomial and it characterizes finite convergence of the moment-sum-of-squares hierarchy.
0
0
1
1
0
0
Determinants of public cooperation in multiplex networks
Synergies between evolutionary game theory and statistical physics have significantly improved our understanding of public cooperation in structured populations. Multiplex networks, in particular, provide the theoretical framework within network science that allows us to mathematically describe the rich structure of interactions characterizing human societies. While research has shown that multiplex networks may enhance the resilience of cooperation, the interplay between the overlap in the structure of the layers and the control parameters of the corresponding games has not yet been investigated. With this aim, we consider here the public goods game on a multiplex network, and we unveil the role of the number of layers and the overlap of links, as well as the impact of different synergy factors in different layers, on the onset of cooperation. We show that enhanced public cooperation emerges only when a significant edge overlap is combined with at least one layer being able to sustain some cooperation by means of a sufficiently high synergy factor. In the absence of either of these conditions, the evolution of cooperation in multiplex networks is determined by the bounds of traditional network reciprocity with no enhanced resilience. These results caution against overly optimistic predictions that the presence of multiple social domains may in itself promote cooperation, and they help us better understand the complexity behind prosocial behavior in layered social systems.
1
1
0
0
0
0
Variational Encoding of Complex Dynamics
Often the analysis of time-dependent chemical and biophysical systems produces high-dimensional time-series data for which it can be difficult to interpret which individual features are most salient. While recent work from our group and others has demonstrated the utility of time-lagged co-variate models to study such systems, linearity assumptions can limit the compression of inherently nonlinear dynamics into just a few characteristic components. Recent work in the field of deep learning has led to the development of variational autoencoders (VAE), which are able to compress complex datasets into simpler manifolds. We present the use of a time-lagged VAE, or variational dynamics encoder (VDE), to reduce complex, nonlinear processes to a single embedding with high fidelity to the underlying dynamics. We demonstrate how the VDE is able to capture nontrivial dynamics in a variety of examples, including Brownian dynamics and atomistic protein folding. Additionally, we demonstrate a method for analyzing the VDE model, inspired by saliency mapping, to determine what features are selected by the VDE model to describe dynamics. The VDE presents an important step in applying techniques from deep learning to more accurately model and interpret complex biophysics.
0
1
0
1
0
0
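As context for the MoBIL record above: the core trick in prediction-based methods of this kind is to take an update using a *predicted* gradient before the true one is observed. Below is a minimal, self-contained sketch of this optimistic/extragradient idea; the quadratic objective, step size, and iteration count are invented for illustration and this is not the paper's algorithm:

```python
# Optimistic (predictive) gradient descent on a toy quadratic:
# take a hallucinated half-step using the last gradient as a prediction
# of the next one, then update with the gradient observed there.

def grad(x):
    # stand-in objective f(x) = (x - 3)^2; its gradient is 2(x - 3)
    return 2.0 * (x - 3.0)

x, g_prev, eta = 0.0, 0.0, 0.1
for _ in range(200):
    x_half = x - eta * g_prev   # look-ahead step using the predicted gradient
    g = grad(x_half)            # observe the true gradient at the look-ahead point
    x = x - eta * g             # extragradient-style update from the base point
    g_prev = g                  # the freshest gradient becomes the next prediction

print(x)  # converges to the minimizer 3.0
```

When the prediction `g_prev` is accurate, the look-ahead step cancels the error of acting on a stale gradient, which is the intuition behind the accelerated rates claimed in the abstract.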
Accelerating Imitation Learning with Predictive Models
Sample efficiency is critical in solving real-world reinforcement learning problems, where agent-environment interactions can be costly. Imitation learning from expert advice has proved to be an effective strategy for reducing the number of interactions required to train a policy. Online imitation learning, which interleaves policy evaluation and policy optimization, is a particularly effective technique with provable performance guarantees. In this work, we seek to further accelerate the convergence rate of online imitation learning, thereby making it more sample efficient. We propose two model-based algorithms inspired by Follow-the-Leader (FTL) with prediction: MoBIL-VI based on solving variational inequalities and MoBIL-Prox based on stochastic first-order updates. These two methods leverage a model to predict future gradients to speed up policy learning. When the model oracle is learned online, these algorithms can provably accelerate the best known convergence rate up to an order. Our algorithms can be viewed as a generalization of stochastic Mirror-Prox (Juditsky et al., 2011), and admit a simple constructive FTL-style analysis of performance.
0
0
0
1
0
0
Neural Networks for Predicting Algorithm Runtime Distributions
Many state-of-the-art algorithms for solving hard combinatorial problems in artificial intelligence (AI) include elements of stochasticity that lead to high variations in runtime, even for a fixed problem instance. Knowledge about the resulting runtime distributions (RTDs) of algorithms on given problem instances can be exploited in various meta-algorithmic procedures, such as algorithm selection, portfolios, and randomized restarts. Previous work has shown that machine learning can be used to individually predict mean, median and variance of RTDs. To establish a new state-of-the-art in predicting RTDs, we demonstrate that the parameters of an RTD should be learned jointly and that neural networks can do this well by directly optimizing the likelihood of an RTD given runtime observations. In an empirical study involving five algorithms for SAT solving and AI planning, we show that neural networks predict the true RTDs of unseen instances better than previous methods, and can even do so when only a few runtime observations are available per training instance.
1
0
0
0
0
0
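To illustrate the central point of the RTD record above, that distribution parameters should be fit jointly by maximizing likelihood: here is a minimal sketch for a single instance, with a lognormal RTD assumed purely for illustration (the paper's networks would predict such parameters from instance features; the constants here are invented):

```python
import math, random

random.seed(1)

# synthetic runtimes of a randomized solver on one instance,
# drawn from an assumed lognormal RTD with parameters (1.0, 0.5)
mu_true, sigma_true = 1.0, 0.5
runs = [math.exp(random.gauss(mu_true, sigma_true)) for _ in range(5000)]

# joint maximum-likelihood estimate of both lognormal parameters:
# for a lognormal, the MLE is the sample mean / std of the log-runtimes
logs = [math.log(t) for t in runs]
mu_hat = sum(logs) / len(logs)
sigma_hat = math.sqrt(sum((l - mu_hat) ** 2 for l in logs) / len(logs))

print(mu_hat, sigma_hat)  # close to the true parameters (1.0, 0.5)
```

The same log-likelihood, written as a differentiable function of network outputs, is what a neural predictor would optimize.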
Large-scale Datasets: Faces with Partial Occlusions and Pose Variations in the Wild
Face detection methods have relied on face datasets for training. However, existing face datasets tend to be small in scale for face learning in both constrained and unconstrained environments. In this paper, we first introduce our large-scale image datasets, Large-scale Labeled Face (LSLF) and noisy Large-scale Labeled Non-face (LSLNF). Our LSLF dataset consists of a large number of unconstrained multi-view and partially occluded faces. The faces, both human and cartoon, exhibit many variations in color and grayscale, image quality, resolution, illumination, background, image illusion, facial expression, degree of partial occlusion (light to severe), make-up, gender, age, and race. Many of these faces are partially occluded by accessories such as tattoos, hats, glasses, sunglasses, hands, hair, beards, scarves, microphones, or other objects or persons. The LSLF dataset is currently the largest labeled face image dataset in the literature in terms of the number of labeled images and the number of individuals, compared to other existing labeled face image datasets. Second, we introduce our CrowedFaces and CrowedNonFaces image datasets, which include face and non-face images from crowded scenes. These datasets aim to provide researchers with a large number of training examples with many variations for large-scale face learning and face recognition tasks.
1
0
0
0
0
0
Small Hankel operators on generalized Fock spaces
We consider Fock spaces $F^{p,\ell}_{\alpha}$ of entire functions on ${\mathbb C}$ associated to the weights $e^{-\alpha |z|^{2\ell}}$, where $\alpha>0$ and $\ell$ is a positive integer. We compute explicitly the corresponding Bergman kernel associated to $F^{2,\ell}_{\alpha}$ and, using an adequate factorization of this kernel, we characterize the boundedness and the compactness of the small Hankel operator $\mathfrak{h}^{\ell}_{b,\alpha}$ on $F^{p,\ell}_{\alpha}$. Moreover, we also determine when $\mathfrak{h}^{\ell}_{b,\alpha}$ is a Hilbert-Schmidt operator on $F^{2,\ell}_{\alpha}$.
0
0
1
0
0
0
Efficient enumeration of solutions produced by closure operations
In this paper we address the problem of generating all elements obtained by the saturation of an initial set by some operations. More precisely, we prove that we can generate the closure of a boolean relation (a set of boolean vectors) by polymorphisms with a polynomial delay. Therefore we can compute with polynomial delay the closure of a family of sets by any set of "set operations" (union, intersection, symmetric difference, subsets, supersets, $\dots$). To do so, we study the $Membership_{\mathcal{F}}$ problem: for a set of operations $\mathcal{F}$, decide whether an element belongs to the closure by $\mathcal{F}$ of a family of elements. In the boolean case, we prove that $Membership_{\mathcal{F}}$ is in P for any set of boolean operations $\mathcal{F}$. When the input vectors are over a domain larger than two elements, we prove that the generic enumeration method fails, since $Membership_{\mathcal{F}}$ is NP-hard for some $\mathcal{F}$. We also study the problem of generating minimal or maximal elements of closures and prove that some of them are related to well-known enumeration problems such as the enumeration of the circuits of a matroid or the enumeration of maximal independent sets of a hypergraph. This article improves on previous works of the same authors.
1
0
0
0
0
0
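The saturation problem in the record above can be made concrete with a naive fixed-point computation. Note that this brute-force version can be exponential in the total output size, unlike the polynomial-delay enumeration the paper develops; the choice of AND/OR as the operations and the starting family are illustrative only:

```python
def closure(vectors, ops):
    """Saturate a set of equal-length 0/1 tuples under binary
    componentwise operations (a brute-force fixed point)."""
    closed = set(vectors)
    frontier = list(closed)
    while frontier:
        v = frontier.pop()
        for w in list(closed):
            for op in ops:
                new = tuple(op(a, b) for a, b in zip(v, w))
                if new not in closed:
                    closed.add(new)
                    frontier.append(new)
    return closed

AND = lambda a, b: a & b
OR = lambda a, b: a | b

# closing the three unit vectors under union and intersection
family = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
closed_set = closure(family, [AND, OR])
print(len(closed_set))  # 8: the full boolean lattice on 3 coordinates
```

Every pair is combined when the later of the two is popped from the frontier, so the loop reaches the true closure; the paper's contribution is producing each new element with only polynomial delay between outputs.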
Compressive Sensing via Convolutional Factor Analysis
We solve the compressive sensing problem via convolutional factor analysis, where the convolutional dictionaries are learned {\em in situ} from the compressed measurements. An alternating direction method of multipliers (ADMM) paradigm for compressive sensing inversion based on convolutional factor analysis is developed. The proposed algorithm provides reconstructed images as well as features, which can be directly used for recognition (e.g., classification) tasks. When a deep (multilayer) model is constructed, a stochastic unpooling process is employed to build a generative model. During reconstruction and testing, we project the upper layer dictionary to the data level and only a single layer deconvolution is required. We demonstrate that using $\sim30\%$ (relative to pixel numbers) compressed measurements, the proposed model achieves the classification accuracy comparable to the original data on MNIST. We also observe that when the compressed measurements are very limited (e.g., $<10\%$), the upper layer dictionary can provide better reconstruction results than the bottom layer.
1
0
0
1
0
0
Exact Dimensionality Selection for Bayesian PCA
We present a Bayesian model selection approach to estimate the intrinsic dimensionality of a high-dimensional dataset. To this end, we introduce a novel formulation of the probabilistic principal component analysis model based on a normal-gamma prior distribution. In this context, we exhibit a closed-form expression of the marginal likelihood which allows us to infer an optimal number of components. We also propose a heuristic based on the expected shape of the marginal likelihood curve in order to choose the hyperparameters. In non-asymptotic frameworks, we show on simulated data that this exact dimensionality selection approach is competitive with both Bayesian and frequentist state-of-the-art methods.
0
0
1
1
0
0
Multi-Dialect Speech Recognition With A Single Sequence-To-Sequence Model
Sequence-to-sequence models provide a simple and elegant solution for building speech recognition systems by folding separate components of a typical system, namely acoustic (AM), pronunciation (PM) and language (LM) models into a single neural network. In this work, we look at one such sequence-to-sequence model, namely listen, attend and spell (LAS), and explore the possibility of training a single model to serve different English dialects, which simplifies the process of training multi-dialect systems without the need for separate AM, PM and LMs for each dialect. We show that simply pooling the data from all dialects into one LAS model falls behind the performance of a model fine-tuned on each dialect. We then look at incorporating dialect-specific information into the model, both by modifying the training targets by inserting the dialect symbol at the end of the original grapheme sequence and also feeding a 1-hot representation of the dialect information into all layers of the model. Experimental results on seven English dialects show that our proposed system is effective in modeling dialect variations within a single LAS model, outperforming a LAS model trained individually on each of the seven dialects by 3.1 ~ 16.5% relative.
1
0
0
0
0
0
Exponential convergence of testing error for stochastic gradient methods
We consider binary classification problems with positive definite kernels and square loss, and study the convergence rates of stochastic gradient methods. We show that while the excess testing loss (squared loss) converges slowly to zero as the number of observations (and thus iterations) goes to infinity, the testing error (classification error) converges exponentially fast if low-noise conditions are assumed.
1
0
0
1
0
0
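The contrast claimed in the abstract above (slow decay of the excess squared loss versus fast decay of the classification error under low-noise conditions) can be reproduced with a tiny kernel SGD experiment. This is a hedged sketch only: the Gaussian kernel, step size, and well-separated synthetic classes stand in for the paper's general setting:

```python
import numpy as np

rng = np.random.default_rng(0)

# two well-separated Gaussian classes: the low-noise regime
n = 200
X = np.vstack([rng.normal(-2.0, 1.0, (n // 2, 2)),
               rng.normal(2.0, 1.0, (n // 2, 2))])
y = np.hstack([-np.ones(n // 2), np.ones(n // 2)])

# Gaussian kernel Gram matrix
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * d2)

alpha = np.zeros(n)               # f(x) = sum_i alpha_i k(x_i, x)
eta = 0.5
for _ in range(2000):
    i = rng.integers(n)
    resid = K[i] @ alpha - y[i]   # f(x_i) - y_i, gradient of the square loss
    alpha[i] -= eta * resid       # stochastic kernel update

f = K @ alpha
sq_loss = float(np.mean((f - y) ** 2))
cls_err = float(np.mean(np.sign(f) != y))
print(sq_loss, cls_err)  # the sign is right long before f fits y exactly
```

The classification error hits zero while the squared residuals are still clearly nonzero, which is the phenomenon the paper quantifies with exponential rates.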
What is a hierarchically hyperbolic space?
The first part of this survey is a heuristic, non-technical discussion of what an HHS is, and the aim is to provide a good mental picture both to those actively doing research on HHSs and to those who only seek a basic understanding out of pure curiosity. It can be read independently of the second part, which is a detailed technical discussion of the axioms and the main tools to deal with HHSs.
0
0
1
0
0
0
What makes a gesture a gesture? Neural signatures involved in gesture recognition
Previous work in the area of gesture production has made the assumption that machines can replicate "human-like" gestures by connecting a bounded set of salient points in the motion trajectory. Those inflection points were hypothesized to also display cognitive saliency. The purpose of this paper is to validate that claim using electroencephalography (EEG). That is, this paper attempts to find neural signatures of gestures (also referred to as placeholders) in human cognition, which facilitate the understanding, learning and repetition of gestures. Further, it is discussed whether there is a direct mapping between the placeholders and kinematic salient points in the gesture trajectories. These are expressed as relationships between inflection points in the gestures' trajectories and oscillatory mu rhythms (8-12 Hz) in the EEG. This is achieved by correlating fluctuations in mu power during gesture observation with salient motion points found for each gesture. Peaks in the EEG signal at central electrodes (motor cortex) and occipital electrodes (visual cortex) were used to isolate the salient events within each gesture. We found that a linear model predicting mu peaks from motion inflections fits the data well. Increases in EEG power were detected 380 and 500 ms after inflection points at occipital and central electrodes, respectively. These results suggest that coordinated activity in visual and motor cortices is sensitive to motion trajectories during gesture observation, and it is consistent with the proposal that inflection points operate as placeholders in gesture recognition.
1
0
0
0
0
0
Neural networks and rational functions
Neural networks and rational functions efficiently approximate each other. In more detail, it is shown here that for any ReLU network, there exists a rational function of degree $O(\text{polylog}(1/\epsilon))$ which is $\epsilon$-close, and similarly for any rational function there exists a ReLU network of size $O(\text{polylog}(1/\epsilon))$ which is $\epsilon$-close. By contrast, polynomials need degree $\Omega(\text{poly}(1/\epsilon))$ to approximate even a single ReLU. When converting a ReLU network to a rational function as above, the hidden constants depend exponentially on the number of layers, which is shown to be tight; in other words, a compositional representation can be beneficial even for rational functions.
1
0
0
1
0
0
Local Structure Theorems for Erdos Renyi Graphs and their Algorithmic Application
We analyze some local properties of sparse Erdos-Renyi graphs, where $d(n)/n$ is the edge probability. In particular we study the behavior of very short paths. For $d(n)=n^{o(1)}$ we show that $G(n,d(n)/n)$ has asymptotically almost surely (a.a.s.~) bounded local treewidth and therefore is a.a.s.~nowhere dense. We also discover a new and simpler proof that $G(n,d/n)$ has a.a.s.~bounded expansion for constant~$d$. The local structure of sparse Erdos-Renyi graphs is very special: The $r$-neighborhood of a vertex is a tree with some additional edges, where the probability that there are $m$ additional edges decreases with~$m$. This implies efficient algorithms for subgraph isomorphism, in particular for finding subgraphs with small diameter. Finally we note that experiments suggest that preferential attachment graphs might have similar properties after deleting a small number of vertices.
1
0
0
0
0
0
DeepCloak: Masking Deep Neural Network Models for Robustness Against Adversarial Samples
Recent studies have shown that deep neural networks (DNN) are vulnerable to adversarial samples: maliciously-perturbed samples crafted to yield incorrect model outputs. Such attacks can severely undermine DNN systems, particularly in security-sensitive settings. It was observed that an adversary could easily generate adversarial samples by making a small perturbation on irrelevant feature dimensions that are unnecessary for the current classification task. To overcome this problem, we introduce a defensive mechanism called DeepCloak. By identifying and removing unnecessary features in a DNN model, DeepCloak limits the capacity an attacker can use to generate adversarial samples and therefore increases the robustness against such inputs. Compared with other defensive approaches, DeepCloak is easy to implement and computationally efficient. Experimental results show that DeepCloak can increase the performance of state-of-the-art DNN models against adversarial samples.
1
0
0
0
0
0
Automatic Pill Reminder for Easy Supervision
In this paper we present a working model of an automatic pill reminder and dispenser. The setup helps ensure that the prescribed dosage of medicines is taken at the times dictated by the medical practitioner, replacing approaches predominantly dependent on human memory with automation that needs negligible supervision, and thus relieves caregivers of the error-prone task of giving the wrong medicine at the wrong time in the wrong amount.
1
0
0
0
0
0
Chaotic dynamics of movements stochastic instability and the hypothesis of N.A. Bernstein about "repetition without repetition"
Tremor was recorded in two groups of subjects (15 people in each group) with different levels of physical fitness, both at rest and under a static load of 3 N. Each subject was tested in 15 series (number of series N=15) in both states (with and without physical load), and each series contained 15 samples (n=15) of tremorogram measurements (500 elements in each sample, registered as coordinates x1(t) of the finger position relative to an eddy-current sensor). Using the non-parametric Wilcoxon test, a pairwise comparison was made for each series of the experiment, yielding 15 tables in which the results of the pairwise comparison of tremorograms are presented as a 15x15 matrix. The average number of matching pairs of samples (<k>) and the standard deviation {\sigma} were calculated for all 15 matrices, both without load and under the static load of 3 N; the number k of matching pairs of tremorogram samples almost doubled under static load. For all these samples a special quasi-attractor area was calculated, which distinguishes the state under physical load from the state without it. All samples present a stochastically unstable state.
0
0
0
0
1
0
Deep Learning for Real Time Crime Forecasting
Accurate real time crime prediction is a fundamental issue for public safety, but remains a challenging problem for the scientific community. Crime occurrences depend on many complex factors. Compared to many predictable events, crime is sparse. At different spatio-temporal scales, crime distributions display dramatically different patterns. These distributions are of very low regularity in both space and time. In this work, we adapt the state-of-the-art deep learning spatio-temporal predictor, ST-ResNet [Zhang et al, AAAI, 2017], to collectively predict crime distribution over the Los Angeles area. Our models are two staged. First, we preprocess the raw crime data. This includes regularization in both space and time to enhance predictable signals. Second, we adapt hierarchical structures of residual convolutional units to train multi-factor crime prediction models. Experiments over a half year period in Los Angeles reveal highly accurate predictive power of our models.
1
0
0
1
0
0
Unpredictable sequences and Poincaré chaos
To make the study of chaos more amenable to discrete equations, we introduce the concept of an unpredictable sequence as a specific unpredictable function on the set of integers. Such a sequence can conveniently be verified as a solution of a discrete equation. This is rigorously proved in this paper for quasilinear systems, and we demonstrate the result numerically for linear systems in the critical case with respect to the stability of the origin. The completed research contributes to the theory of chaos as well as to the theory of discrete equations, considering unpredictable solutions.
0
1
0
0
0
0
Scaling of the Detonation Product State with Reactant Kinetic Energy
This submission has been withdrawn by arXiv administrators because the submitter did not have the right to agree to our license.
0
1
0
0
0
0
Lumping of Degree-Based Mean Field and Pair Approximation Equations for Multi-State Contact Processes
Contact processes form a large and highly interesting class of dynamic processes on networks, including epidemic and information spreading. While devising stochastic models of such processes is relatively easy, analyzing them is very challenging from a computational point of view, particularly for large networks appearing in real applications. One strategy to reduce the complexity of their analysis is to rely on approximations, often in terms of a set of differential equations capturing the evolution of a random node, distinguishing nodes with different topological contexts (i.e., different degrees or different neighborhoods), like degree-based mean field (DBMF), approximate master equation (AME), or pair approximation (PA). The number of differential equations so obtained is typically proportional to the maximum degree kmax of the network, which is much smaller than the size of the master equation of the underlying stochastic model, yet numerically solving these equations can still be problematic for large kmax. In this paper, we extend AME and PA, which have been proposed only for the binary-state case, to a multi-state setting and provide an aggregation procedure that clusters together nodes having similar degrees, treating those in the same cluster as indistinguishable, thus reducing the number of equations while preserving an accurate description of global observables of interest. We also provide an automatic way to build such equations and to identify a small number of degree clusters that give accurate results. The method is tested on several case studies, where it shows a high level of compression and a reduction of computational time of several orders of magnitude for large networks, with minimal loss in accuracy.
1
1
0
0
0
0
Almost automorphic functions on the quantum time scale and applications
In this paper, we first propose two types of concepts of almost automorphic functions on the quantum time scale. Secondly, we study some basic properties of almost automorphic functions on the quantum time scale. Then, we introduce a transformation between functions defined on the quantum time scale and functions defined on the set of generalized integer numbers, by using this transformation we give equivalent definitions of almost automorphic functions on the quantum time scale. Finally, as an application of our results, we establish the existence of almost automorphic solutions of linear and semilinear dynamic equations on the quantum time scale.
0
0
1
0
0
0
Stochastic Game in Remote Estimation under DoS Attacks
This paper studies remote state estimation under denial-of-service (DoS) attacks. A sensor transmits its local estimate of an underlying physical process to a remote estimator via a wireless communication channel. A DoS attacker is capable of interfering with the channel and degrading the remote estimation accuracy. Considering the tactical jamming strategies played by the attacker, the sensor adjusts its transmission power. This interactive process between the sensor and the attacker is studied in the framework of a zero-sum stochastic game. To derive their optimal power schemes, we first discuss the existence of a stationary Nash equilibrium (SNE) for this game. We then present the monotone structure of the optimal strategies, which helps reduce the computational complexity of the stochastic game algorithm. Numerical examples are provided to illustrate the obtained results.
1
0
0
0
0
0
VLocNet++: Deep Multitask Learning for Semantic Visual Localization and Odometry
Semantic understanding and localization are fundamental enablers of robot autonomy that have for the most part been tackled as disjoint problems. While deep learning has enabled recent breakthroughs across a wide spectrum of scene understanding tasks, its applicability to state estimation tasks has been limited due to the direct formulation that renders it incapable of encoding scene-specific constraints. In this work, we propose the VLocNet++ architecture that employs a multitask learning approach to exploit the inter-task relationship between learning semantics, regressing 6-DoF global pose and odometry, for the mutual benefit of each of these tasks. Our network overcomes the aforementioned limitation by simultaneously embedding geometric and semantic knowledge of the world into the pose regression network. We propose a novel adaptive weighted fusion layer to aggregate motion-specific temporal information and to fuse semantic features into the localization stream based on region activations. Furthermore, we propose a self-supervised warping technique that uses the relative motion to warp intermediate network representations in the segmentation stream for learning consistent semantics. Finally, we introduce a first-of-its-kind urban outdoor localization dataset with pixel-level semantic labels and multiple loops for training deep networks. Extensive experiments on the challenging Microsoft 7-Scenes benchmark and our DeepLoc dataset demonstrate that our approach exceeds the state of the art, outperforming local feature-based methods while simultaneously performing multiple tasks and exhibiting substantial robustness in challenging scenarios.
1
0
0
0
0
0
On Decidability of the Ordered Structures of Numbers
The ordered structures of natural, integer, rational and real numbers are studied here. It is known that the theories of these numbers in the language of order are decidable and finitely axiomatizable. Also, their theories in the language of order and addition are decidable and infinitely axiomatizable. For the language of order and multiplication, it is known that the theories of $\mathbb{N}$ and $\mathbb{Z}$ are not decidable (and so not axiomatizable by any computably enumerable set of sentences). By Tarski's theorem, the multiplicative ordered structure of $\mathbb{R}$ is decidable also; here we prove this result directly and present an axiomatization. The structure of $\mathbb{Q}$ in the language of order and multiplication seems to be missing in the literature; here we show the decidability of its theory by the technique of quantifier elimination and after presenting an infinite axiomatization for this structure we prove that it is not finitely axiomatizable.
1
0
1
0
0
0
Finite $p$-groups of conjugate type $\{ 1, p^3 \}$
We classify finite $p$-groups, up to isoclinism, which have only two conjugacy class sizes $1$ and $p^3$. It turns out that the nilpotency class of such groups is $2$.
0
0
1
0
0
0
Betting on Quantum Objects
Dutch book arguments have been applied to beliefs about the outcomes of measurements of quantum systems, but not to beliefs about quantum objects prior to measurement. In this paper, we prove a quantum version of the probabilists' Dutch book theorem that applies to both sorts of beliefs: roughly, if ideal beliefs are given by vector states, all and only Born-rule probabilities avoid Dutch books. This theorem and associated results have implications for operational and realist interpretations of the logic of a Hilbert lattice. In the latter case, we show that the defenders of the eigenstate-value orthodoxy face a trilemma. Those who favor vague properties avoid the trilemma, admitting all and only those beliefs about quantum objects that avoid Dutch books.
0
1
0
0
0
0
Casimir-Polder size consistency -- a constraint violated by some dispersion theories
A key goal in quantum chemistry methods, whether ab initio or otherwise, is to achieve size consistency. In this manuscript we formulate the related idea of "Casimir-Polder size consistency" that manifests in long-range dispersion energetics. We show that local approximations in time-dependent density functional theory dispersion energy calculations violate the consistency condition because of incorrect treatment of highly non-local "xc kernel" physics, by up to 10% in our tests on closed-shell atoms.
0
1
0
0
0
0
A combinatorial proof of Bass's determinant formula for the zeta function of regular graphs
We give an elementary combinatorial proof of Bass's determinant formula for the zeta function of a finite regular graph. This is done by expressing the number of non-backtracking cycles of a given length in terms of Chebychev polynomials in the eigenvalues of the adjacency operator of the graph.
1
0
0
0
0
0
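The determinant formula in the record above can be checked numerically on a small example. Bass's identity states that det(I - uB), for B the non-backtracking (Hashimoto) matrix on the 2m directed edges, equals (1 - u^2)^{m-n} det(I - uA + (d-1)u^2 I) for a d-regular graph with n vertices and m edges. A sketch for the complete graph K4 (the graph, the evaluation point u, and the dense-matrix approach are illustrative choices):

```python
import numpy as np

# complete graph K4: n = 4 vertices, m = 6 edges, 3-regular
n, d = 4, 3
m = n * d // 2
A = np.ones((n, n)) - np.eye(n)

# non-backtracking (Hashimoto) matrix on the 2m directed edges
darts = [(i, j) for i in range(n) for j in range(n) if i != j]
B = np.zeros((2 * m, 2 * m))
for p, (a, b) in enumerate(darts):
    for q, (c, e) in enumerate(darts):
        if b == c and e != a:   # consecutive, and not an immediate reversal
            B[p, q] = 1.0

u = 0.3
lhs = np.linalg.det(np.eye(2 * m) - u * B)                      # 1/zeta(u)
rhs = (1 - u ** 2) ** (m - n) * np.linalg.det(
    np.eye(n) - u * A + (d - 1) * u ** 2 * np.eye(n))           # Bass's formula
print(lhs, rhs)  # the two sides agree
```

The combinatorial proof in the paper expresses the same non-backtracking cycle counts through Chebychev polynomials in the eigenvalues of A, which is exactly what the right-hand determinant encodes.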
Electron-Muon Ranger: hardware characterization
The Electron-Muon Ranger (EMR) is a fully-active tracking-calorimeter in charge of the electron background rejection downstream of the cooling channel at the international Muon Ionization Cooling Experiment. It consists of 2832 plastic scintillator bars segmented in 48 planes in an X-Y arrangement and uses particle range as its main variable to tag muons and discriminate electrons. An array of analyses were conducted to characterize the hardware of the EMR and determine whether the detector performs to specifications. The clear fibres coming from the bars were shown to transmit the desired amount of light, and only four dead channels were identified in the electronics. Two channels had indubitably been mismatched during assembly and the DAQ channel map was subsequently corrected. The level of crosstalk is within acceptable values for the type of multi-anode photomultiplier used with an average of $0.20\pm0.03\,\%$ probability of occurrence in adjacent channels and a mean amplitude equivalent to $4.5\pm0.1\,\%$ of the primary signal intensity. The efficiency of the signal acquisition, defined as the probability of recording a signal in a plane when a particle goes through it in beam conditions, reached $99.73\pm0.02\,\%$.
0
1
0
0
0
0
Photometric Redshifts for Hyper Suprime-Cam Subaru Strategic Program Data Release 1
Photometric redshifts are a key component of many science objectives in the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP). In this paper, we describe and compare the codes used to compute photometric redshifts for HSC-SSP, how we calibrate them, and the typical accuracy we achieve with the HSC five-band photometry (grizy). We introduce a new point estimator based on an improved loss function and demonstrate that it works better than other commonly used estimators. We find that our photo-z's are most accurate at 0.2<~zphot<~1.5, where we can straddle the 4000A break. We achieve sigma(d_zphot/(1+zphot))~0.05 and an outlier rate of about 15% for galaxies down to i=25 within this redshift range. If we limit to a brighter sample of i<24, we achieve sigma~0.04 and ~8% outliers. Our photo-z's should thus enable many science cases for HSC-SSP. We also characterize the accuracy of our redshift probability distribution function (PDF) and discover that some codes over/under-estimate the redshift uncertainties, which have implications for N(z) reconstruction. Our photo-z products for the entire area in the Public Data Release 1 are publicly available, and both our catalog products (such as point estimates) and full PDFs can be retrieved from the data release site, this https URL.
0
1
0
0
0
0
Shallow Updates for Deep Reinforcement Learning
Deep reinforcement learning (DRL) methods such as the Deep Q-Network (DQN) have achieved state-of-the-art results in a variety of challenging, high-dimensional domains. This success is mainly attributed to the power of deep neural networks to learn rich domain representations for approximating the value function or policy. Batch reinforcement learning methods with linear representations, on the other hand, are more stable and require less hyperparameter tuning. Yet, substantial feature engineering is necessary to achieve good results. In this work we propose a hybrid approach -- the Least Squares Deep Q-Network (LS-DQN), which combines rich feature representations learned by a DRL algorithm with the stability of a linear least squares method. We do this by periodically re-training the last hidden layer of a DRL network with a batch least squares update. Key to our approach is a Bayesian regularization term for the least squares update, which prevents over-fitting to the more recent data. We tested LS-DQN on five Atari games and demonstrate significant improvement over vanilla DQN and Double-DQN. We also investigated the reasons for the superior performance of our method. Interestingly, we found that the performance improvement can be attributed to the large batch size used by the LS method when optimizing the last layer.
1
0
0
1
0
0
Survey on Additive Manufacturing, Cloud 3D Printing and Services
Cloud Manufacturing (CM) is the concept of using manufacturing resources in a service oriented way over the Internet. Recent developments in Additive Manufacturing (AM) are making it possible to utilise resources ad-hoc as replacement for traditional manufacturing resources in case of spontaneous problems in the established manufacturing processes. In order to be of use in these scenarios the AM resources must adhere to a strict principle of transparency and service composition in adherence to the Cloud Computing (CC) paradigm. With this review we provide an overview over CM, AM and relevant domains as well as present the historical development of scientific research in these fields, starting from 2002. Part of this work is also a meta-review on the domain to further detail its development and structure.
1
0
0
0
0
0
Barnacles and Gravity
Theories with more than one vacuum allow quantum transitions between them, which may proceed via bubble nucleation; theories with more than two vacua possess additional decay modes in which the wall of a bubble may further decay. The instantons which mediate such a process have $O(3)$ symmetry (in four dimensions, rather than the usual $O(4)$ symmetry of homogeneous vacuum decay), and have been called `barnacles'; previously they have been studied in flat space, in the thin wall limit, and this paper extends the analysis to include gravity. It is found that there are regions of parameter space in which, given an initial bubble, barnacles are the favoured subsequent decay process, and that the inclusion of gravity can enlarge this region. The relation to other heterogeneous vacuum decay scenarios, as well as some of the phenomenological implications of barnacles are briefly discussed.
0
1
0
0
0
0
A note on some constants related to the zeta-function and their relationship with the Gregory coefficients
In this paper new series for the first and second Stieltjes constants (also known as generalized Euler's constants), as well as for some closely related constants are obtained. These series contain rational terms only and involve the so-called Gregory coefficients, which are also known as (reciprocal) logarithmic numbers, Cauchy numbers of the first kind and Bernoulli numbers of the second kind. In addition, two interesting series with rational terms are given for Euler's constant and the constant ln(2*pi), and yet another generalization of Euler's constant is proposed and various formulas for the calculation of these constants are obtained. Finally, in the paper, we mention that almost all the constants considered in this work admit simple representations via the Ramanujan summation.
0
0
1
0
0
0
Thresholding Bandit for Dose-ranging: The Impact of Monotonicity
We analyze the sample complexity of the thresholding bandit problem, with and without the assumption that the mean values of the arms are increasing. In each case, we provide a lower bound valid for any risk $\delta$ and any $\delta$-correct algorithm; in addition, we propose an algorithm whose sample complexity is of the same order of magnitude for small risks. This work is motivated by phase 1 clinical trials, a practically important setting where the arm means are increasing by nature, and where no satisfactory solution is available so far.
0
0
1
1
0
0
Spin - Phonon Coupling in Nickel Oxide Determined from Ultraviolet Raman Spectroscopy
Nickel oxide (NiO) has been studied extensively for various applications ranging from electrochemistry to solar cells [1,2]. In recent years, NiO attracted much attention as an antiferromagnetic (AF) insulator material for spintronic devices [3-10]. Understanding the spin - phonon coupling in NiO is a key to its functionalization, and enabling AF spintronics' promise of ultra-high-speed and low-power dissipation [11,12]. However, despite its status as an exemplary AF insulator and a benchmark material for the study of correlated electron systems, little is known about the spin - phonon interaction, and the associated energy dissipation channel, in NiO. In addition, there is a long-standing controversy over the large discrepancies between the experimental and theoretical values for the electron, phonon, and magnon energies in NiO [13-23]. This gap in knowledge is explained by NiO optical selection rules, high Neel temperature and dominance of the magnon band in the visible Raman spectrum, which precludes a conventional approach for investigating such interaction. Here we show that by using ultraviolet (UV) Raman spectroscopy one can extract the spin - phonon coupling coefficients in NiO. We established that unlike in other materials, the spins of Ni atoms interact more strongly with the longitudinal optical (LO) phonons than with the transverse optical (TO) phonons, and produce opposite effects on the phonon energies. The peculiarities of the spin - phonon coupling are consistent with the trends given by density functional theory calculations. The obtained results shed light on the nature of the spin - phonon coupling in AF insulators and may help in developing innovative spintronic devices.
0
1
0
0
0
0
A Univariate Bound of Area Under ROC
Area under ROC (AUC) is an important metric for binary classification and bipartite ranking problems. However, it is difficult to directly optimize AUC as a learning objective, so most existing algorithms are based on optimizing a surrogate loss to AUC. One significant drawback of these surrogate losses is that they require pairwise comparisons among training data, which leads to slow running time and increasing local storage for online learning. In this work, we describe a new surrogate loss based on a reformulation of the AUC risk, which does not require pairwise comparison but rankings of the predictions. We further show that the ranking operation can be avoided, and the learning objective obtained based on this surrogate enjoys linear complexity in time and storage. We perform experiments to demonstrate the effectiveness of the online and batch algorithms for AUC optimization based on the proposed surrogate loss.
0
0
0
1
0
0
On the Role of Text Preprocessing in Neural Network Architectures: An Evaluation Study on Text Categorization and Sentiment Analysis
Text preprocessing is often the first step in the pipeline of a Natural Language Processing (NLP) system, with potential impact in its final performance. Despite its importance, text preprocessing has not received much attention in the deep learning literature. In this paper we investigate the impact of simple text preprocessing decisions (particularly tokenizing, lemmatizing, lowercasing and multiword grouping) on the performance of a standard neural text classifier. We perform an extensive evaluation on standard benchmarks from text categorization and sentiment analysis. While our experiments show that a simple tokenization of input text is generally adequate, they also highlight significant degrees of variability across preprocessing techniques. This reveals the importance of paying attention to this usually-overlooked step in the pipeline, particularly when comparing different models. Finally, our evaluation provides insights into the best preprocessing practices for training word embeddings.
1
0
0
0
0
0
Evidences against cuspy dark matter halos in large galaxies
We develop and apply new techniques in order to uncover galaxy rotation curves (RC) systematics. Considering that an ideal dark matter (DM) profile should yield RCs that have no bias towards any particular radius, we find that the Burkert DM profile satisfies the test, while the Navarro-Frenk-White (NFW) profile has a tendency of better fitting the region between one and two disc scale lengths than the inner disc scale length region. Our sample indicates that this behaviour happens to more than 75% of the galaxies fitted with an NFW halo. Also, this tendency does not weaken by considering "large" galaxies, for instance those with $M_*\gtrsim 10^{10} M_\odot$. Besides the tests on the homogeneity of the fits, we also use a sample of 62 galaxies of diverse types to perform tests on the quality of the overall fit of each galaxy, and to search for correlations with stellar mass, gas mass and the disc scale length. In particular, we find that only 13 galaxies are better fitted by the NFW halo; and that even for the galaxies with $M_* \gtrsim 10^{10} M_\odot$ the Burkert profile either fits as good as, or better than, the NFW profile. This result is relevant since different baryonic effects important for the smaller galaxies, like supernova feedback and dynamical friction from baryonic clumps, indicate that at such large stellar masses the NFW profile should be preferred over the Burkert profile. Hence, our results either suggest a new baryonic effect or a change of the dark matter physics.
0
1
0
0
0
0
A Hybrid Framework for Multi-Vehicle Collision Avoidance
With the recent surge of interest in UAVs for civilian services, the importance of developing tractable multi-agent analysis techniques that provide safety and performance guarantees has drastically increased. Hamilton-Jacobi (HJ) reachability has successfully provided these guarantees to small-scale systems and is flexible in terms of system dynamics. However, the exponential complexity scaling of HJ reachability with respect to system dimension prevents its direct application to larger-scale problems where the number of vehicles is greater than two. In this paper, we propose a collision avoidance algorithm using a hybrid framework for N+1 vehicles through higher-level control logic given any N-vehicle collision avoidance algorithm. Our algorithm conservatively approximates a guaranteed-safe region in the joint state space of the N+1 vehicles and produces a safety-preserving controller. In addition, our algorithm does not incur significant additional computation cost. We demonstrate our proposed method in simulation.
0
0
1
0
0
0
On the zeros of Riemann $Ξ(z)$ function
The Riemann $\Xi(z)$ function (even in $z$) admits a Fourier transform of an even kernel $\Phi(t)=4e^{9t/2}\theta''(e^{2t})+6e^{5t/2}\theta'(e^{2t})$. Here $\theta(x):=\theta_3(0,ix)$ and $\theta_3(0,z)$ is a Jacobi theta function, a modular form of weight $\frac{1}{2}$. (A) We discover a family of functions $\{\Phi_n(t)\}_{n\geqslant 2}$ whose Fourier transform on compact support $(-\frac{1}{2}\log n, \frac{1}{2}\log n)$, $\{F(n,z)\}_{n\geqslant2}$, converges to $\Xi(z)$ uniformly in the critical strip $S_{1/2}:=\{|\Im(z)|< \frac{1}{2}\}$. (B) Based on this we then construct another family of functions $\{H(14,n,z)\}_{n\geqslant 2}$ and show that it uniformly converges to $\Xi(z)$ in the critical strip $S_{1/2}$. (C) Based on this we construct another family of functions $\{W(n,z)\}_{n\geqslant 8}:=\{H(14,n,2z/\log n)\}_{n\geqslant 8}$ and show that if all the zeros of $\{W(n,z)\}_{n\geqslant 8}$ in the critical strip $S_{1/2}$ are real, then all the zeros of $\{H(14,n,z)\}_{n\geqslant 8}$ in the critical strip $S_{1/2}$ are real. (D) We then show that $W(n,z)=U(n,z)-V(n,z)$ and $U(n,z^{1/2})$ and $V(n,z^{1/2})$ have only real, positive and simple zeros. And there exists a positive integer $N\geqslant 8$ such that for all $n\geqslant N$, the zeros of $U(n,x^{1/2})$ are strictly left-interlacing with those of $V(n,x^{1/2})$. Using an entire function equivalent to Hermite-Kakeya Theorem for polynomials we show that $W(n\geqslant N,z^{1/2})$ has only real, positive and simple zeros. Thus $W(n\geqslant N,z)$ has only real and simple zeros. (E) Using a corollary of Hurwitz's theorem in complex analysis we prove that $\Xi(z)$ has no zeros in $S_{1/2}\setminus\mathbb{R}$, i.e., $S_{1/2}\setminus \mathbb{R}$ is a zero-free region for $\Xi(z)$. Since all the zeros of $\Xi(z)$ are in $S_{1/2}$, all the zeros of $\Xi(z)$ are in $\mathbb{R}$, i.e., all the zeros of $\Xi(z)$ are real.
0
0
1
0
0
0
On Topologized Fundamental Groups with Small Loop Transfer Viewpoints
In this paper, by introducing some kind of small loop transfer spaces at a point, we study the behavior of topologized fundamental groups with the compact-open topology and the whisker topology, $\pi_{1}^{qtop}(X,x_{0})$ and $\pi_{1}^{wh}(X,x_{0})$, respectively. In particular, we give necessary or sufficient conditions for the coincidence of these two topologized fundamental groups and for them to be topological groups. Finally, we give some examples to show that the reverse of some of these implications does not hold, in general.
0
0
1
0
0
0
The boundary value problem for Yang--Mills--Higgs fields
We show the existence of Yang--Mills--Higgs (YMH) fields over a Riemann surface with boundary where a free boundary condition is imposed on the section and a Neumann boundary condition on the connection. In technical terms, we study the convergence and blow-up behavior of a sequence of Sacks-Uhlenbeck type $\alpha$-YMH fields as $\alpha\to 1$. For $\alpha>1$, each $\alpha$-YMH field is shown to be smooth up to the boundary under some gauge transformation. This is achieved by showing a regularity theorem for more general coupled systems, which extends the classical results of Ladyzhenskaya-Ural'ceva and Morrey.
0
0
1
0
0
0
Polish Topologies for Graph Products of Groups
We give strong necessary conditions on the admissibility of a Polish group topology for an arbitrary graph product of groups $G(\Gamma, G_a)$, and use them to give a characterization modulo a finite set of nodes. As a corollary, we give a complete characterization in case all the factor groups $G_a$ are countable.
0
0
1
0
0
0
Iterated failure rate monotonicity and ordering relations within Gamma and Weibull distributions
Stochastic ordering of distributions of random variables may be defined by the relative convexity of the tail functions. This has been extended to higher order stochastic orderings, by iteratively reassigning tail-weights. The actual verification of those stochastic orderings is not simple, as this depends on inverting distribution functions for which there may be no explicit expression. The iterative definition of distributions, of course, contributes to make that verification even harder. We take a closer look at the stochastic ordering, introducing a method that allows for explicit verification, applying it to the Gamma and Weibull distributions, and giving a complete description of the order relations within each of those families.
0
0
1
1
0
0
Quantum depletion of a homogeneous Bose-Einstein condensate
We have measured the quantum depletion of an interacting homogeneous Bose-Einstein condensate, and confirmed the 70-year old theory of N.N. Bogoliubov. The observed condensate depletion is reversibly tuneable by changing the strength of the interparticle interactions. Our atomic homogeneous condensate is produced in an optical-box trap, the interactions are tuned via a magnetic Feshbach resonance, and the condensed fraction probed by coherent two-photon Bragg scattering.
0
1
0
0
0
0
A Deep Generative Model for Graphs: Supervised Subset Selection to Create Diverse Realistic Graphs with Applications to Power Networks Synthesis
Creating and modeling real-world graphs is a crucial problem in various applications of engineering, biology, and social sciences; however, learning the distributions of nodes/edges and sampling from them to generate realistic graphs is still challenging. Moreover, generating a diverse set of synthetic graphs that all imitate a real network is not addressed. In this paper, the novel problem of creating diverse synthetic graphs is solved. First, we devise the deep supervised subset selection (DeepS3) algorithm; given a ground-truth set of data points, DeepS3 selects a diverse subset of all items (i.e. data points) that best represent the items in the ground-truth set. Furthermore, we propose the deep graph representation recurrent network (GRRN) as a novel generative model that learns a probabilistic representation of a real weighted graph. Training the GRRN, we generate a large set of synthetic graphs that are likely to follow the same features and adjacency patterns as the original one. Incorporating GRRN with DeepS3, we select a diverse subset of generated graphs that best represent the behaviors of the real graph (i.e. our ground-truth). We apply our model to the novel problem of power grid synthesis, where a synthetic power network is created with the same physical/geometric properties as a real power system without revealing the real locations of the substations (nodes) and the lines (edges), since such data is confidential. Experiments on the Synthetic Power Grid Data Set show accurate synthetic networks that follow similar structural and spatial properties as the real power grid.
1
0
0
1
0
0
Analytic Combinatorics in Several Variables: Effective Asymptotics and Lattice Path Enumeration
The field of analytic combinatorics, which studies the asymptotic behaviour of sequences through analytic properties of their generating functions, has led to the development of deep and powerful tools with applications across mathematics and the natural sciences. In addition to the now classical univariate theory, recent work in the study of analytic combinatorics in several variables (ACSV) has shown how to derive asymptotics for the coefficients of certain D-finite functions represented by diagonals of multivariate rational functions. We give a pedagogical introduction to the methods of ACSV from a computer algebra viewpoint, developing rigorous algorithms and giving the first complexity results in this area under conditions which are broadly satisfied. Furthermore, we give several new applications of ACSV to the enumeration of lattice walks restricted to certain regions. In addition to proving several open conjectures on the asymptotics of such walks, a detailed study of lattice walk models with weighted steps is undertaken.
1
0
0
0
0
0
Speaking the Same Language: Matching Machine to Human Captions by Adversarial Training
While strong progress has been made in image captioning over the last years, machine and human captions are still quite distinct. A closer look reveals that this is due to the deficiencies in the generated word distribution, vocabulary size, and strong bias in the generators towards frequent captions. Furthermore, humans -- rightfully so -- generate multiple, diverse captions, due to the inherent ambiguity in the captioning task which is not considered in today's systems. To address these challenges, we change the training objective of the caption generator from reproducing groundtruth captions to generating a set of captions that is indistinguishable from human generated captions. Instead of handcrafting such a learning target, we employ adversarial training in combination with an approximate Gumbel sampler to implicitly match the generated distribution to the human one. While our method achieves comparable performance to the state-of-the-art in terms of the correctness of the captions, we generate a set of diverse captions, that are significantly less biased and match the word statistics better in several aspects.
1
0
0
0
0
0
A Transferable Pedestrian Motion Prediction Model for Intersections with Different Geometries
This paper presents a novel framework for accurate pedestrian intent prediction at intersections. Given some prior knowledge of the curbside geometry, the presented framework can accurately predict pedestrian trajectories, even in new intersections that it has not been trained on. This is achieved by making use of the contravariant components of trajectories in the curbside coordinate system, which ensures that the transformation of trajectories across intersections is affine, regardless of the curbside geometry. Our method is based on the Augmented Semi Nonnegative Sparse Coding (ASNSC) formulation and we use that as a baseline to show improvement in prediction performance on real pedestrian datasets collected at two intersections in Cambridge, with distinctly different curbside and crosswalk geometries. We demonstrate a 7.2% improvement in prediction accuracy in the case of the same train and test intersections. Furthermore, we show that TASNSC, when trained and tested on different intersections, achieves prediction performance comparable to the baseline trained and tested on the same intersection.
1
0
0
1
0
0
Generating GraphQL-Wrappers for REST(-like) APIs
GraphQL is a query language and thereupon-based paradigm for implementing web Application Programming Interfaces (APIs) for client-server interactions. Using GraphQL, clients define precise, nested data-requirements in typed queries, which are resolved by servers against (possibly multiple) backend systems, like databases, object storages, or other APIs. Clients receive only the data they care about, in a single request. However, providers of existing REST(-like) APIs need to implement additional GraphQL interfaces to enable these advantages. We here assess the feasibility of automatically generating GraphQL wrappers for existing REST(-like) APIs. A wrapper, upon receiving GraphQL queries, translates them to requests against the target API. We discuss the challenges for creating such wrappers, including dealing with data sanitation, authentication, or handling nested queries. We furthermore present a prototypical implementation of OASGraph. OASGraph takes as input an OpenAPI Specification (OAS) describing an existing REST(-like) web API and generates a GraphQL wrapper for it. We evaluate OASGraph by running it, as well as an existing open source alternative, against 959 publicly available OAS. This experiment shows that OASGraph outperforms the existing alternative and is able to create a GraphQL wrapper for 89.5% of the APIs -- however, with limitations in many cases. A subsequent analysis of errors and warnings produced by OASGraph shows that missing or ambiguous information in the assessed OAS hinders creating complete wrappers. Finally, we present a use case of the IBM Watson Language Translator API that shows that small changes to an OAS allow OASGraph to generate more idiomatic and more expressive GraphQL wrappers.
1
0
0
0
0
0
GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework
There is a pressing need to build an architecture that could subsume these networks under a unified framework that achieves both higher performance and less overhead. To this end, two fundamental issues are yet to be addressed. The first one is how to implement the back propagation when neuronal activations are discrete. The second one is how to remove the full-precision hidden weights in the training phase to break the bottlenecks of memory/computation consumption. To address the first issue, we present a multi-step neuronal activation discretization method and a derivative approximation technique that enable implementing the back propagation algorithm on discrete DNNs. While for the second issue, we propose a discrete state transition (DST) methodology to constrain the weights in a discrete space without saving the hidden weights. In this way, we build a unified framework that subsumes the binary or ternary networks as its special cases, and under which a heuristic algorithm is provided at the website this https URL. More particularly, we find that when both the weights and activations become ternary values, the DNNs can be reduced to sparse binary networks, termed as gated XNOR networks (GXNOR-Nets) since only the event of non-zero weight and non-zero activation enables the control gate to start the XNOR logic operations in the original binary networks. This promises the event-driven hardware design for efficient mobile intelligence. We achieve advanced performance compared with state-of-the-art algorithms. Furthermore, the computational sparsity and the number of states in the discrete space can be flexibly modified to make it suitable for various hardware platforms.
1
0
0
1
0
0
The growth of carbon chains in IRC+10216 mapped with ALMA
Linear carbon chains are common in various types of astronomical molecular sources. Possible formation mechanisms involve both bottom-up and top-down routes. We have carried out a combined observational and modeling study of the formation of carbon chains in the C-star envelope IRC+10216, where the polymerization of acetylene and hydrogen cyanide induced by ultraviolet photons can drive the formation of linear carbon chains of increasing length. We have used ALMA to map the emission of 3 mm rotational lines of the hydrocarbon radicals C2H, C4H, and C6H, and the CN-containing species CN, C3N, HC3N, and HC5N with an angular resolution of 1". The spatial distribution of all these species is a hollow, 5-10" wide, spherical shell located at a radius of 10-20" from the star, with no appreciable emission close to the star. Our observations resolve the broad shell of carbon chains into thinner sub-shells which are 1-2" wide and not fully concentric, indicating that the mass loss process has been discontinuous and not fully isotropic. The radial distributions of the species mapped reveal subtle differences: while the hydrocarbon radicals have very similar radial distributions, the CN-containing species show more diverse distributions, with HC3N appearing earlier in the expansion and the radical CN extending later than the rest of the species. The observed morphology can be rationalized by a chemical model in which the growth of polyynes is mainly produced by rapid gas-phase chemical reactions of C2H and C4H radicals with unsaturated hydrocarbons, while cyanopolyynes are mainly formed from polyynes in gas-phase reactions with CN and C3N radicals.
0
1
0
0
0
0
Thermoelectric Transport Coefficients from Charged Solv and Nil Black Holes
In the present work we study charged black hole solutions of the Einstein-Maxwell action that have Thurston geometries on its near horizon region. In particular we find solutions with charged Solv and Nil geometry horizons. We also find Nil black holes with hyperscaling violation. For all our solutions we compute the thermoelectric DC transport coefficients of the corresponding dual field theory. We find that the Solv and Nil black holes without hyperscaling violation are dual to metals while those with hyperscaling violation are dual to insulators.
0
1
0
0
0
0
A unified method for maximal truncated Calderón-Zygmund operators in general function spaces by sparse domination
In this note we give simple proofs of several results involving maximal truncated Calderón-Zygmund operators in the general setting of rearrangement invariant quasi-Banach function spaces by sparse domination. Our techniques allow us to track the dependence of the constants in weighted norm inequalities; additionally, our results hold in $\mathbb{R}^n$ as well as in many spaces of homogeneous type.
0
0
1
0
0
0
Evaluating Quality of Chatbots and Intelligent Conversational Agents
Chatbots are one class of intelligent, conversational software agents activated by natural language input (which can be in the form of text, voice, or both). They provide conversational output in response, and if commanded, can sometimes also execute tasks. Although chatbot technologies have existed since the 1960s and have influenced user interface development in games since the early 1980s, chatbots are now easier to train and implement. This is due to plentiful open source code, widely available development platforms, and implementation options via Software as a Service (SaaS). In addition to enhancing customer experiences and supporting learning, chatbots can also be used to engineer social harm - that is, to spread rumors and misinformation, or attack people for posting their thoughts and opinions online. This paper presents a literature review of quality issues and attributes as they relate to the contemporary issue of chatbot development and implementation. Finally, quality assessment approaches are reviewed, and a quality assessment method based on these attributes and the Analytic Hierarchy Process (AHP) is proposed and examined.
1
0
0
0
0
0
Fast and Lightweight Rate Control for Onboard Predictive Coding of Hyperspectral Images
Predictive coding is attractive for compression of hyperspectral images onboard spacecraft in light of the excellent rate-distortion performance and low complexity of recent schemes. In this letter we propose a rate control algorithm and integrate it in a lossy extension to the CCSDS-123 lossless compression recommendation. The proposed rate control algorithm overhauls our previous scheme by being orders of magnitude faster and simpler to implement, while still providing the same accuracy in terms of output rate and comparable or better image quality.
1
0
0
0
0
0
Quantum-continuum calculation of the surface states and electrical response of silicon in solution
A wide range of electrochemical reactions of practical importance occur at the interface between a semiconductor and an electrolyte. We present an embedded density-functional theory method using the recently released self-consistent continuum solvation (SCCS) approach to study these interfaces. In this model, a quantum description of the surface is incorporated into a continuum representation of the bending of the bands within the electrode. The model is applied to understand the electrical response of silicon electrodes in solution, providing microscopic insights into the low-voltage region, where surface states determine the electrification of the semiconductor electrode.
0
1
0
0
0
0
Simulation study of signal formation in position sensitive planar p-on-n silicon detectors after short range charge injection
Segmented silicon detectors (micropixel and microstrip) are the main type of detectors used in the inner trackers of Large Hadron Collider (LHC) experiments at CERN. Due to the high luminosity and eventual high fluence, detectors with fast response to fit the short shaping time of 20 ns and sufficient radiation hardness are required. Measurements carried out at the Ioffe Institute have shown a reversal of the pulse polarity in the detector response to short-range charge injection. Since the measured negative signal is about 30-60% of the peak positive signal, the effect strongly reduces the charge collection efficiency (CCE) even in non-irradiated detectors. For further investigation of the phenomenon the measurements have been reproduced by TCAD simulations. As for the measurements, the simulation study was applied to the p-on-n strip detectors similar in geometry to those developed for the ATLAS experiment and to the Ioffe Institute designed p-on-n strip detectors with each strip having a window in the metallization covering the p$^+$ implant, allowing the generation of electron-hole pairs under the strip implant. Red laser scans across the strips and the interstrip gap with varying laser diameters and Si-SiO$_2$ interface charge densities were carried out. The results verify the experimentally observed negative response along the scan in the interstrip gap. When the laser spot is positioned on the strip p$^+$ implant the negative response vanishes and the collected charge at the active strip proportionally increases. The simulation results offer further insight into and understanding of the influence of the oxide charge density on the signal formation. The observed effects and details of the detector response for different charge injection positions are discussed in the context of Ramo's theorem.
0
1
0
0
0
0
Re-DPoctor: Real-time health data releasing with w-day differential privacy
Wearable devices enable users to collect health data and share them with healthcare providers for improved health services. Since health data contain privacy-sensitive information, an unprotected data release system may result in privacy leakage. Most existing work uses differential privacy for private data release. However, such approaches have limitations in healthcare scenarios because they do not consider the unique features of health data collected from wearables, such as continuous real-time collection and pattern preservation. In this paper, we propose Re-DPoctor, a real-time health data releasing scheme with $w$-day differential privacy, where the privacy of health data collected from any consecutive $w$ days is preserved. We improve utility by using a specially designed partition algorithm to protect the health data patterns. Meanwhile, we improve privacy preservation by applying a newly proposed adaptive sampling technique and budget allocation method. We prove that Re-DPoctor satisfies $w$-day differential privacy. Experiments on real health data demonstrate that our method achieves better utility with a strong privacy guarantee than existing state-of-the-art methods.
1
0
0
0
0
0
Efficient Computation of the Stochastic Behavior of Partial Sum Processes
In this paper the computational aspects of probability calculations for dynamical partial sum expressions are discussed. Such dynamical partial sum expressions have many important applications, and examples are provided in the fields of reliability, product quality assessment, and stochastic control. While these probability calculations are ostensibly of a high dimension, and consequently intractable in general, it is shown how a recursive integration methodology can be implemented to obtain exact calculations as a series of two-dimensional calculations. The computational aspects of the implementation of this methodology, with the adoption of Fast Fourier Transforms, are discussed.
0
0
0
1
0
0
Laser electron acceleration on curved surfaces
Electron acceleration by a relativistically intense laser beam propagating along a curved surface allows the accelerated electron bunch and the laser beam to be split softly. The presence of a curved surface allows the adiabatic invariant of the electrons in the wave to be switched instantly, leaving the gained energy to the particles. Efficient acceleration is provided by the presence of strong transient quasistationary fields in the interaction region and a long effective acceleration length. The curvature of the surface makes it possible to select the accelerated particles and provides their narrow angular distribution. The mechanism at work is explicitly demonstrated in theoretical models and experiments.
0
1
0
0
0
0
A Physicist's view on Chopin's Études
We propose the use of specific dynamical processes and, more generally, of ideas from Physics to model the evolution in time of musical structures. We apply this approach to two Études by F. Chopin, namely op.10 n.3 and op.25 n.1, proposing an original description based on concepts of symmetry breaking/restoration and quantum coherence, which could be useful for interpretation. In this analysis, we take advantage of colored musical scores, obtained by applying Scriabin's color code for sounds to the musical notation.
0
1
0
0
0
0
Thermalization after holographic bilocal quench
We study thermalization in the holographic (1+1)-dimensional CFT after simultaneous generation of two high-energy excitations at antipodal points on the circle. The holographic picture of such a quantum quench is the creation of a BTZ black hole from a collision of two massless particles. We perform a holographic computation of entanglement entropy and mutual information in the boundary theory and analyze their evolution with time. We show that equilibration of the entanglement in the regions which contained one of the initial excitations is generally similar to that in other holographic quench models, but with some important distinctions. We observe that entanglement propagates along a sharp effective light cone from the points of initial excitations on the boundary. The characteristics of entanglement propagation in global quench models, such as the entanglement velocity and the light cone velocity, also have a meaning in the bilocal quench scenario. We also observe the loss of memory about the initial state during the equilibration process. We find that the memory loss is reflected in the time behavior of the entanglement similarly to the global quench case, and it is related to the universal linear growth of entanglement, which comes from the interior of the forming black hole. We also analyze general two-point correlation functions in the framework of the geodesic approximation, focusing on the study of the late time behavior.
0
1
0
0
0
0
A mean-field approach to Kondo-attractive-Hubbard model
With the purpose of investigating coexistence between magnetic order and superconductivity, we consider a model in which conduction electrons interact with each other, via an attractive Hubbard on-site coupling $U$, and with local moments on every site, via a Kondo-like coupling, $J$. The model is solved on a simple cubic lattice through a Hartree-Fock approximation, within a `semi-classical' framework which allows spiral magnetic modes to be stabilized. For a fixed electronic density, $n_c$, the small $J$ region of the ground state ($T=0$) phase diagram displays spiral antiferromagnetic (SAFM) states for small $U$. Upon increasing $U$, a state with coexistence between superconductivity (SC) and SAFM sets in; further increase in $U$ turns the spiral mode into a Néel antiferromagnet. The large $J$ region is a (singlet) Kondo phase. At finite temperatures, and in the region of coexistence, thermal fluctuations suppress the different ordered phases in succession: the SAFM phase at lower temperatures and SC at higher temperatures; also, reentrant behaviour is found to be induced by temperature. Our results provide a qualitative description of the competition between local moment magnetism and superconductivity in the borocarbides family.
0
1
0
0
0
0
Endpoint Sobolev and BV continuity for maximal operators
In this paper we investigate some questions related to the continuity of maximal operators in $W^{1,1}$ and $BV$ spaces, complementing some well-known boundedness results. Letting $\widetilde M$ be the one-dimensional uncentered Hardy-Littlewood maximal operator, we prove that the map $f \mapsto \big(\widetilde Mf\big)'$ is continuous from $W^{1,1}(\mathbb{R})$ to $L^1(\mathbb{R})$. In the discrete setting, we prove that $\widetilde M: BV(\mathbb{Z}) \to BV(\mathbb{Z})$ is also continuous. For the one-dimensional fractional Hardy-Littlewood maximal operator, we prove by means of counterexamples that the corresponding continuity statements do not hold, both in the continuous and discrete settings, and for the centered and uncentered versions.
0
0
1
0
0
0
Quantum models as classical cellular automata
A synopsis is offered of the properties of discrete and integer-valued, hence "natural", cellular automata (CA). A particular class comprises the "Hamiltonian CA" with discrete updating rules that resemble Hamilton's equations. The resulting dynamics is linear like the unitary evolution described by the Schrödinger equation. Employing Shannon's Sampling Theorem, we construct an invertible map between such CA and continuous quantum mechanical models which incorporate a fundamental discreteness scale $l$. Consequently, there is a one-to-one correspondence of quantum mechanical and CA conservation laws. We discuss the important issue of linearity, recalling that nonlinearities imply nonlocal effects in the continuous quantum mechanical description of intrinsically local discrete CA - requiring locality entails linearity. The admissible CA observables and the existence of solutions of the $l$-dependent dispersion relation for stationary states are mentioned, besides the construction of multipartite CA obeying the Superposition Principle. We point out problems when trying to match the deterministic CA here to those envisioned in 't Hooft's CA Interpretation of Quantum Mechanics.
0
1
1
0
0
0