Dataset schema (reconstructed from the flattened column header):
  title                 string  (length 7 to 239)
  abstract              string  (length 7 to 2.76k)
  cs                    int64   (0 or 1)
  phy                   int64   (0 or 1)
  math                  int64   (0 or 1)
  stat                  int64   (0 or 1)
  quantitative biology  int64   (0 or 1)
  quantitative finance  int64   (0 or 1)
Each record below is a paper title, its abstract, and one 0/1 subject label per category.
Spectral Method and Regularized MLE Are Both Optimal for Top-$K$ Ranking
This paper is concerned with the problem of top-$K$ ranking from pairwise comparisons. Given a collection of $n$ items and a few pairwise comparisons across them, one wishes to identify the set of $K$ items that receive the highest ranks. To tackle this problem, we adopt the logistic parametric model --- the Bradley-Terry-Luce model, where each item is assigned a latent preference score, and where the outcome of each pairwise comparison depends solely on the relative scores of the two items involved. Recent works have made significant progress towards characterizing the performance (e.g. the mean square error for estimating the scores) of several classical methods, including the spectral method and the maximum likelihood estimator (MLE). However, where they stand regarding top-$K$ ranking remains unsettled. We demonstrate that under a natural random sampling model, the spectral method alone, or the regularized MLE alone, is minimax optimal in terms of the sample complexity --- the number of paired comparisons needed to ensure exact top-$K$ identification, for the fixed dynamic range regime. This is accomplished via optimal control of the entrywise error of the score estimates. We complement our theoretical studies by numerical experiments, confirming that both methods yield low entrywise errors for estimating the underlying scores. Our theory is established via a novel leave-one-out trick, which proves effective for analyzing both iterative and non-iterative procedures. Along the way, we derive an elementary eigenvector perturbation bound for probability transition matrices, which parallels the Davis-Kahan $\sin\Theta$ theorem for symmetric matrices. This also allows us to close the gap between the $\ell_2$ error upper bound for the spectral method and the minimax lower limit.
labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
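To make the spectral step concrete, here is a minimal sketch (not the paper's implementation) of a Rank Centrality-style estimator: pairwise win rates define a Markov chain whose stationary distribution orders the items, and the top-$K$ set is read off from it. The matrix layout and the normalization constant are illustrative assumptions.

```python
import numpy as np

def spectral_topk(wins, K, d_max=None):
    """Rank items from pairwise comparisons via a Markov chain.

    wins[i, j] = fraction of comparisons between i and j won by j
                 (NaN where the pair was never compared).
    Returns indices of the K items with largest stationary mass.
    """
    n = wins.shape[0]
    observed = ~np.isnan(wins)
    np.fill_diagonal(observed, False)
    if d_max is None:
        d_max = observed.sum(axis=1).max()   # keeps rows substochastic
    P = np.zeros((n, n))
    P[observed] = wins[observed] / d_max     # mass moves i -> j when j beats i
    P[np.arange(n), np.arange(n)] = 1.0 - P.sum(axis=1)  # lazy self-loops
    pi = np.full(n, 1.0 / n)                 # power iteration for pi = pi P
    for _ in range(10_000):
        new = pi @ P
        if np.abs(new - pi).max() < 1e-12:
            pi = new
            break
        pi = new
    return np.argsort(pi)[::-1][:K]
```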
One-Shot Reinforcement Learning for Robot Navigation with Interactive Replay
Recently, model-free reinforcement learning algorithms have been shown to solve challenging problems by learning from extensive interaction with the environment. A significant obstacle to transferring this success to the robotics domain is that interaction with the real world is costly, while training on limited experience is prone to overfitting. We present a method for teaching a mobile robot to navigate to a fixed goal in a known environment. The robot leverages an interactive world model built from a single traversal of the environment, a pre-trained visual feature encoder, and stochastic environmental augmentation, and demonstrates successful zero-shot transfer under real-world environmental variations without fine-tuning.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Data Race Detection on Compressed Traces
We consider the problem of detecting data races in program traces that have been compressed using straight-line programs (SLPs), which are special context-free grammars that generate exactly one string, namely the trace they represent. We consider two classical approaches to race detection: the happens-before relation and the lockset discipline. We present algorithms for both methods that run in time linear in the size of the compressed SLP representation. Typical program executions almost always exhibit patterns that lead to significant compression. Thus, our algorithms are expected to yield large speedups compared with analyzing the uncompressed trace. Our experimental evaluation of these new algorithms on standard benchmarks confirms this observation.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
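The paper's contribution is running detection directly on SLP-compressed traces; as background, here is a sketch of the classical vector-clock happens-before detector on an uncompressed trace, i.e., the baseline the compressed algorithms speed up. The event encoding is a made-up convention for illustration.

```python
from collections import defaultdict

def hb_races(trace, n_threads):
    """Vector-clock happens-before race detection on a plain trace.
    Events: (tid, op, target) with op in {'rd', 'wr', 'acq', 'rel'};
    targets are variable or lock names. Returns racy event indices."""
    C = [[0] * n_threads for _ in range(n_threads)]   # per-thread clocks
    for t in range(n_threads):
        C[t][t] = 1
    L = defaultdict(lambda: [0] * n_threads)          # per-lock clocks
    W = defaultdict(lambda: [0] * n_threads)          # joined write clocks
    R = defaultdict(lambda: [0] * n_threads)          # joined read clocks
    join = lambda a, b: [max(x, y) for x, y in zip(a, b)]
    races = []
    for i, (t, op, x) in enumerate(trace):
        if op == 'acq':
            C[t] = join(C[t], L[x])                   # acquire: learn lock clock
        elif op == 'rel':
            L[x] = list(C[t])                         # release: publish clock
        elif op == 'rd':
            if any(W[x][u] > C[t][u] for u in range(n_threads)):
                races.append(i)                       # prior write not ordered
            R[x] = join(R[x], C[t])
        elif op == 'wr':
            prior = join(W[x], R[x])
            if any(prior[u] > C[t][u] for u in range(n_threads)):
                races.append(i)                       # prior access not ordered
            W[x] = list(C[t])
        C[t][t] += 1                                  # advance local clock
    return races
```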
Model and Integrate Medical Resource Availability into Verifiably Correct Executable Medical Guidelines - Technical Report
Improving the effectiveness and safety of patient care is an ultimate objective for medical cyber-physical systems. A recent study shows that patient death rates can be reduced by computerizing medical guidelines. Most existing medical guideline models are validated and/or verified under the assumption that all medical resources needed for patient care are always available. In reality, however, some medical resources, such as special medical equipment or medical specialists, can be temporarily unavailable for an individual patient. In such cases, safety properties validated and/or verified in existing medical guideline models without considering medical resource availability may no longer hold. This paper argues that considering medical resource availability is essential in building verifiably correct executable medical guidelines. We present an approach that explicitly and separately models medical resource availability and automatically integrates the resource availability models into an existing statechart-based computerized medical guideline model. The approach requires minimal change to existing medical guideline models in order to take medical resource availability into consideration when validating and verifying them. A simplified stroke scenario is used as a case study to investigate the effectiveness and validity of our approach.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Two-Party Function Computation on the Reconciled Data
In this paper, we initiate the study of a new problem termed function computation on reconciled data, which generalizes the set reconciliation problem from the literature. Assume a distributed data storage system with two users $A$ and $B$, who possess collections of binary vectors $S_{A}$ and $S_{B}$, respectively. They are interested in computing a function $\phi$ of the reconciled data $S_{A} \cup S_{B}$. It is shown that any deterministic protocol computing the sum or the product of reconciled sets of binary vectors, represented as nonnegative integers, has to communicate at least $2^n + n - 1$ or $2^n + n - 2$ bits in the worst case, respectively, where $n$ is the length of the binary vectors. Connections to other problems in computer science, such as set disjointness and finding the intersection, are established, yielding a variety of additional upper and lower bounds on the communication complexity. A protocol for computing the sum function, based on the use of a family of hash functions, is presented and its characteristics are analyzed.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
On Calabi-Yau compactifications of toric Landau-Ginzburg models for Fano complete intersections
Toric Landau--Ginzburg models of Givental's type for Fano complete intersections are known to admit Calabi--Yau compactifications. We give an alternative proof of this fact. As a byproduct of the proof, we obtain a description of the fibers over infinity of the compactified toric Landau--Ginzburg models.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Halo assembly bias and the tidal anisotropy of the local halo environment
We study the role of the local tidal environment in determining the assembly bias of dark matter haloes. Previous results suggest that the anisotropy of a halo's environment (i.e., whether it lies in a filament or in a more isotropic region) can play a significant role in determining the eventual mass and age of the halo. We statistically isolate this effect using correlations between the large-scale and small-scale environments of simulated haloes at $z=0$ with masses in the range $10^{11.6}\lesssim (m/h^{-1}M_{\odot})\lesssim10^{14.9}$. We probe the large-scale environment using a novel halo-by-halo estimator of linear bias. For the small-scale environment, we identify a variable $\alpha_R$ that captures the $\textit{tidal anisotropy}$ in a region of radius $R=4R_{\textrm{200b}}$ around the halo and correlates strongly with halo bias at fixed mass. Segregating haloes by $\alpha_R$ reveals two distinct populations. Haloes in highly isotropic local environments ($\alpha_R\lesssim0.2$) behave as expected from the simplest, spherically averaged analytical models of structure formation, showing a $\textit{negative}$ correlation between their concentration and large-scale bias at $\textit{all}$ masses. In contrast, haloes in anisotropic, filament-like environments ($\alpha_R\gtrsim0.5$) tend to show a $\textit{positive}$ correlation between bias and concentration at any mass. Our multi-scale analysis cleanly demonstrates how the overall assembly bias trend across halo mass emerges as an average over these different halo populations, and provides valuable insights towards building analytical models that correctly incorporate assembly bias. We also discuss potential implications for the nature and detectability of galaxy assembly bias.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Towards a general theory for non-linear locally stationary processes
This paper develops some general theory for locally stationary processes based on the stationary approximation and the stationary derivative. Laws of large numbers, central limit theorems, and deterministic and stochastic bias expansions are proved for processes obeying an expansion in terms of the stationary approximation and derivative. It is further shown that this framework covers some general nonlinear non-stationary Markov models. The results are then applied to derive the asymptotic properties of maximum likelihood estimates of parameter curves in such models.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Self-Learning Monte Carlo Method: Continuous-Time Algorithm
The recently introduced self-learning Monte Carlo method is a general-purpose numerical method that speeds up Monte Carlo simulations by training an effective model to propose uncorrelated configurations in the Markov chain. We implement this method in the framework of the continuous-time Monte Carlo method with auxiliary field for quantum impurity models. We introduce and train a diagram generating function (DGF) to model the probability distribution of auxiliary field configurations in continuous imaginary time, at all orders of the diagrammatic expansion. By using the DGF to propose global moves in configuration space, we show that the self-learning continuous-time Monte Carlo method can significantly reduce the computational complexity of the simulation.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Single-cell diffraction tomography with optofluidic rotation about a tilted axis
Optical diffraction tomography (ODT) is a tomographic technique that can be used to measure the three-dimensional (3D) refractive index distribution within living cells without requiring any marker. In principle, ODT can be regarded as a generalization of optical projection tomography, which is equivalent to computerized tomography (CT). Both optical tomographic techniques require projection-phase images of cells measured at multiple angles. However, the post-measurement reconstruction of the 3D refractive index distribution differs for the two techniques. ODT is known to yield better results than projection tomography, because it takes into account the diffraction of the imaging light by the refractive index structure of the sample. Here, we apply ODT to biological cells in a microfluidic chip that combines optical trapping and microfluidic flow to achieve optofluidic single-cell rotation. In particular, we address the problem that arises when the trapped cell rotates not about an axis perpendicular to the imaging plane, but about an arbitrarily tilted axis. We show that the 3D reconstruction can be improved by taking such a tilted rotational axis into account in the reconstruction process.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Non-Linear Least-Squares Optimization of Rational Filters for the Solution of Interior Eigenvalue Problems
Rational filter functions can be used to improve the convergence of contour-based eigensolvers, a popular family of algorithms for interior eigenvalue problems. We present a framework for the optimization of rational filters based on a non-convex weighted least-squares scheme. When used in combination with the FEAST library, our filters outperform existing ones on a large and representative set of benchmark problems. This work provides a detailed description of: (1) a setup of the optimization process that exploits symmetries of the filter function for Hermitian eigenproblems, (2) formulations of the gradient descent and Levenberg-Marquardt algorithms that exploit these symmetries, (3) a method for selecting the starting position of the optimization that reliably produces effective filters, and (4) a constrained optimization scheme that produces filter functions with specific properties that may benefit the performance of the eigensolver employing them.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Maps on statistical manifolds exactly reduced from the Perron-Frobenius equations for solvable chaotic maps
Maps on a parameter space expressing distribution functions are derived exactly from the Perron-Frobenius equations for a generalized Boole transform family. Here the generalized Boole transform family is a one-parameter family of maps, each defined on a subset of the real line, whose invariant probability distribution function is a Cauchy distribution with certain parameters. With this reduction, some relations between the statistical picture and the orbital one are shown. From the viewpoint of information geometry, the parameter space can be identified with a statistical manifold, and it is shown that the derived maps can be characterized in these terms. In addition, with the symplectic structure induced from the statistical structure, symplectic and information-geometric aspects of the derived maps are discussed.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Discontinuous classical ground state magnetic response as an even-odd effect in higher order rotationally invariant exchange interactions
The classical ground state magnetic response of the Heisenberg model when rotationally invariant exchange interactions of integer order q>1 are added is found to be discontinuous, even though the interactions lack magnetic anisotropy. This holds even in the case of bipartite lattices which are not frustrated, as well as for the frustrated triangular lattice. The total number of discontinuities is associated with even-odd effects as it depends on the parity of q via the relative strength of the bilinear and higher order exchange interactions, and increases with q. These results demonstrate that the precise form of the microscopic interactions is important for the ground state magnetization response.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Sparse Matrix Multiplication On An Associative Processor
Sparse matrix multiplication is an important component of linear algebra computations. Implementing sparse matrix multiplication on an associative processor (AP) enables a high level of parallelism, in which a row of one matrix is multiplied in parallel with the entire second matrix, and in which the execution time of a vector dot product does not depend on the vector size. Four sparse matrix multiplication algorithms, combining AP and baseline CPU processing to various degrees, are explored in this paper. They are evaluated by simulation on a large set of sparse matrices. The computational complexity of sparse matrix multiplication on an AP is shown to be O(nnz), where nnz is the number of nonzero elements. The AP is found to be especially efficient for binary sparse matrix multiplication, and it outperforms conventional solutions in power efficiency.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
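For contrast with the AP implementation (which the paper evaluates by simulation), here is a row-wise CPU baseline whose work scales with the number of nonzero partial products; the dict-of-dicts layout is an illustrative choice, not the paper's data structure.

```python
def sparse_matmul(A, B):
    """Row-wise sparse matrix product (Gustavson-style).

    A and B are dicts mapping row -> {col: value}; the work is
    proportional to the number of nonzero partial products, the same
    nnz-dependent scaling the abstract contrasts with dense O(n^3).
    """
    C = {}
    for i, row in A.items():
        acc = {}
        for k, a_ik in row.items():
            for j, b_kj in B.get(k, {}).items():
                acc[j] = acc.get(j, 0) + a_ik * b_kj  # accumulate a_ik * b_kj
        if acc:
            C[i] = acc
    return C

# Example: C = sparse_matmul({0: {1: 2.0}}, {1: {0: 3.0}})  ->  {0: {0: 6.0}}
```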
On the intersection of tame subgroups in groups acting on trees
Let $G$ be a group acting on a tree $T$ with finite edge stabilizers of bounded order. We provide, in some very interesting cases, upper bounds for the complexity of the intersection $H\cap K$ of two tame subgroups $H$ and $K$ of $G$ in terms of the complexities of $H$ and $K$. In particular, we obtain bounds for the Kurosh rank $Kr(H\cap K)$ of the intersection in terms of Kurosh ranks $Kr(H)$ and $Kr(K)$, in the case where $H$ and $K$ act freely on the edges of $T$.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A Lagrangian scheme for the solution of nonlinear diffusion equations using moving simplex meshes
A Lagrangian numerical scheme for solving nonlinear degenerate Fokker-Planck equations in space dimensions $d\ge2$ is presented. It applies to a large class of nonlinear diffusion equations whose dynamics are driven by internal energies and given external potentials, e.g. the porous medium equation and the fast diffusion equation. The key ingredient in our approach is the gradient flow structure of the dynamics. For discretization of the Lagrangian map, we use a finite subspace of linear maps in space and a variational form of the implicit Euler method in time. Thanks to this time discretization, the fully discrete solution inherits energy estimates from the original gradient flow, and these lead to weak compactness of the trajectories in the continuous limit. Consistency is analyzed in the planar situation, $d=2$. A variety of numerical experiments for the porous medium equation indicates that the scheme is well-adapted to track the growth of the solution's support.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Optimal Control for Multi-Mode Systems with Discrete Costs
This paper studies optimal time-bounded control in multi-mode systems with discrete costs. Multi-mode systems are an important subclass of linear hybrid systems, in which there are no guards on transitions and all invariants are global. Each state has a continuous cost attached to it, which is linear in the sojourn time, while a discrete cost is attached to each transition taken. We show that an optimal control for this model can be computed in NEXPTIME and approximated in PSPACE. We also show that the one-dimensional case is simpler: although the problem is NP-complete (and in LOGSPACE for an infinite time horizon), we develop an FPTAS for finding an approximate solution.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Digital Advertising Traffic Operation: Flow Management Analysis
In a web advertising traffic operation, the Trafficking Routing Problem (TRP) consists in scheduling the management of web advertising (adv) campaigns across trafficking campaigns in the most efficient way, so as to oversee and manage relationships with partners and internal teams, managing expectations through integration and post-launch in order to ensure success for every stakeholder involved. We carried out this independent research project by validating specific tasks against the average working times declared in the specifications of a leading worldwide advertising agency. We present a Mixed Integer Linear Programming (MILP) formulation for end-to-end management of the campaign workflow along a predetermined path, and generalize it to include alternative paths for overseeing and managing detail-oriented relationships with partners and internal teams toward the goals mentioned above. To meet clients' KPIs, we consider an objective function that includes punctuality indicators (the average waiting and completion times) as well as the main punctuality indicators (the average delay and the on-time performance). We then investigate their analytical relationships in the advertising domain through experiments based on real data from a traffic office. We show that the classic punctuality indicators conflict with the task of reducing waiting times. We propose new indicators, used for a synthesized analysis of projects or process changes for the wider team, that are more sustainable and also more relevant for stakeholders. We also show that the flow of a campaign (adv-ways) is the main bottleneck of a traffic office, and that alternative paths cannot improve the performance indicators.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Addressing Class Imbalance in Classification Problems of Noisy Signals by using Fourier Transform Surrogates
Randomizing the Fourier-transform (FT) phases of temporal-spatial data generates surrogates that approximate examples from the data-generating distribution. We propose such FT surrogates as a novel tool to augment and analyze training of neural networks and explore the approach in the example of sleep-stage classification. By computing FT surrogates of raw EEG, EOG, and EMG signals of under-represented sleep stages, we balanced the CAPSLPDB sleep database. We then trained and tested a convolutional neural network for sleep stage classification, and found that our surrogate-based augmentation improved the mean F1-score by 7%. As another application of FT surrogates, we formulated an approach to compute saliency maps for individual sleep epochs. The visualization is based on the response of inferred class probabilities under replacement of short data segments by partial surrogates. To quantify how well the distributions of the surrogates and the original data match, we evaluated a trained classifier on surrogates of correctly classified examples, and summarized these conditional predictions in a confusion matrix. We show how such conditional confusion matrices can qualitatively explain the performance of surrogates in class balancing. The FT-surrogate augmentation approach may improve classification on noisy signals if carefully adapted to the data distribution under analysis.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=1, quantitative finance=0
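A minimal numpy sketch of the surrogate construction described above, for a single 1-D signal; applying it per channel and per epoch to the EEG/EOG/EMG data is one plausible reading of the pipeline, not a reproduction of the paper's code.

```python
import numpy as np

def ft_surrogate(x, rng=None):
    """Phase-randomized Fourier-transform surrogate of a 1-D signal.

    Keeps the amplitude spectrum (hence the power spectrum and
    autocorrelation) of x, while randomizing all other structure.
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=X.size)
    phases[0] = np.angle(X[0])            # keep the mean (DC bin) intact
    if x.size % 2 == 0:
        phases[-1] = np.angle(X[-1])      # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=x.size)
```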
Generating global network structures by triad types
This paper addresses the question of whether it is possible to generate networks with a given global structure (defined by selected blockmodels, i.e., cohesive, core-periphery, hierarchical, and transitivity), considering only different types of triads. Two methods are used to generate networks: (i) the method of relocating links; and (ii) the Monte Carlo Multi Chain algorithm from the "ergm" package in R. Although all types of triads can generate networks with the selected blockmodel types, selecting only a subset of triads improves the blockmodel structure of the generated networks. However, in the case of a hierarchical blockmodel without complete blocks on the diagonal, additional local structures are needed to achieve the desired global structure of the generated networks. This shows that blockmodels can emerge based only on local processes that do not take attributes into account.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Simultaneous smoothness and simultaneous stability of a $C^\infty$ strictly convex integrand and its dual
In this paper, we investigate simultaneous properties of a convex integrand $\gamma$ and its dual $\delta$. The main results are the following three. (1) For a $C^\infty$ convex integrand $\gamma: S^n\to \mathbb{R}_+$, its dual convex integrand $\delta: S^n\to \mathbb{R}_+$ is of class $C^\infty$ if and only if $\gamma$ is a strictly convex integrand. (2) Let $\gamma: S^n\to \mathbb{R}_+$ be a $C^\infty$ strictly convex integrand. Then, $\gamma$ is stable if and only if its dual convex integrand $\delta: S^n\to \mathbb{R}_+$ is stable. (3) Let $\gamma: S^n\to \mathbb{R}_+$ be a $C^\infty$ strictly convex integrand. Suppose that $\gamma$ is stable. Then, for any $i$ $(0\le i\le n)$, a point $\theta_0\in S^n$ is a non-degenerate critical point of $\gamma$ with Morse index $i$ if and only if its antipodal point $-\theta_0\in S^n$ is a non-degenerate critical point of the dual convex integrand $\delta$ with Morse index $(n-i)$.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Adaptive quadrature by expansion for layer potential evaluation in two dimensions
When solving partial differential equations using boundary integral equation methods, accurate evaluation of singular and nearly singular integrals in layer potentials is crucial. A recent scheme for this is quadrature by expansion (QBX), which solves the problem by locally approximating the potential using a local expansion centered at some distance from the source boundary. In this paper we introduce an extension of the QBX scheme in 2D, denoted AQBX (adaptive quadrature by expansion), which combines QBX with an algorithm for automated selection of parameters based on a target error tolerance. A key component of this algorithm is the ability to accurately estimate the numerical errors in the coefficients of the expansion. Combining previous results for flat panels with a procedure that takes the panel shape into account, we derive such error estimates for arbitrarily shaped boundaries in 2D discretized using panel-based Gauss-Legendre quadrature. Applying our scheme to the numerical solution of Dirichlet problems for the Laplace and Helmholtz equations, we find that it is able to satisfy a given target tolerance to within an order of magnitude, making it useful for practical applications. This represents a significant simplification over the original QBX algorithm, in which choosing a good set of parameters can be hard.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Anomalous transport effects on switching currents of graphene-based Josephson junctions
We explore the effect of noise on ballistic graphene-based small Josephson junctions in the framework of the resistively and capacitively shunted model. We use the non-sinusoidal current-phase relation specific to graphene layers partially covered by superconducting electrodes. As the external bias current is ramped, noise-induced escapes from the metastable states yield the switching current distribution, i.e., the probability distribution of passages to finite voltage from the superconducting state as a function of the bias current, which is the information most readily available in experiments. We consider a noise source that is a mixture of two different processes: a Gaussian contribution simulating an uncorrelated ordinary thermal bath, and a non-Gaussian, $\alpha$-stable (or Lévy) term, generally associated with non-equilibrium transport phenomena. We find that analysis of the switching current distribution makes it possible to efficiently detect a non-Gaussian noise component in a Gaussian background.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Fixed-Parameter Tractable Sampling for RNA Design with Multiple Target Structures
The design of multi-stable RNA molecules has important applications in biology, medicine, and biotechnology. Synthetic design approaches profit strongly from effective in-silico methods, which can tremendously reduce their cost and improve their feasibility. We revisit a central ingredient of most in-silico design methods: the sampling of sequences for the design of multi-target structures, possibly including pseudoknots. For this task, we present an efficient, tree decomposition-based sampling algorithm. Our fixed-parameter tractable approach is underpinned by establishing the #P-hardness of uniform sampling. Modeling the problem as a constraint network, our program supports generic Boltzmann-weighted sampling for arbitrary additive RNA energy models; this enables the generation of RNA sequences meeting specific goals such as expected free energies or GC-content. Finally, we empirically study general properties of the approach and generate biologically relevant multi-target Boltzmann-weighted designs for a common design benchmark. Generating seed sequences with our program, we demonstrate significant improvements over the previously best multi-target sampling strategy (uniform sampling). Our software is freely available at: this https URL .
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
On the relevance of generalized disclinations in defect mechanics
The utility of the notion of generalized disclinations in materials science is discussed within the physical context of modeling interfacial and bulk line defects like defected grain and phase boundaries, dislocations and disclinations. The Burgers vector of a disclination dipole in linear elasticity is derived, clearly demonstrating the equivalence of its stress field to that of an edge dislocation. We also prove that the inverse deformation/displacement jump of a defect line is independent of the cut-surface when its g.disclination strength vanishes. An explicit formula for the displacement jump of a single localized composite defect line in terms of given g.disclination and dislocation strengths is deduced based on the Weingarten theorem for g.disclination theory (Weingarten-gd theorem) at finite deformation. The Burgers vector of a g.disclination dipole at finite deformation is also derived.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
PbTe(111) Sub-Thermionic Photocathode: A Route to High-Quality Electron Pulses
The emission properties of a PbTe(111) single crystal have been extensively investigated to demonstrate that PbTe(111) is a promising low root-mean-square transverse momentum ($\Delta p_T$) and high-brightness photocathode. A photoemission analysis based on density functional theory (DFT) elucidates that the 'hole-like' $\Lambda^+_6$ energy band in the $L$ valley, with its low effective mass $m^*$, results in a low $\Delta p_T$. In particular, as a 300 K solid planar photocathode, a Te-terminated PbTe(111) single crystal is expected to be a potential 50 K electron source.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Envy-free Matchings with Lower Quotas
While every instance of the Hospitals/Residents problem admits a stable matching, the problem with lower quotas (HR-LQ) has instances with no stable matching. For such an instance, we expect the existence of an envy-free matching, which is a relaxation of a stable matching preserving a kind of fairness property. In this paper, we investigate the existence of an envy-free matching in several settings, in which hospitals have lower quotas and not all doctor-hospital pairs are acceptable. We first show that, for an HR-LQ instance, we can efficiently decide the existence of an envy-free matching. Then, we consider envy-freeness in the Classified Stable Matching model due to Huang (2010), i.e., each hospital has lower and upper quotas on subsets of doctors. We show that, for this model, deciding the existence of an envy-free matching is NP-hard in general, but solvable in polynomial time if quotas are paramodular.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Characterizing Feshbach resonances in ultracold scattering calculations
We describe procedures for converging on and characterizing zero-energy Feshbach resonances that appear in scattering lengths as a function of an external field. The elastic procedure is appropriate for purely elastic scattering, where the scattering length is real and displays a true pole. The regularized scattering length (RSL) procedure is appropriate when there is weak background inelasticity, so that the scattering length is complex and displays an oscillation rather than a pole, but the resonant scattering length $a_{\rm res}$ is close to real. The fully complex procedure is appropriate when there is substantial background inelasticity and the real and complex parts of $a_{\rm res}$ are required. We demonstrate these procedures for scattering of ultracold $^{85}$Rb in various initial states. All of them can converge on and provide full characterization of resonances, from initial guesses many thousands of widths away, using scattering calculations at only about 10 values of the external field.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
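The paper's specific procedures are not spelled out in the abstract; as context, here is a sketch of the standard single-pole parametrization that such procedures converge on, fitted with SciPy. The data arrays in the usage comment are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def a_of_B(B, a_bg, Delta, B0):
    """Standard pole parametrization of a magnetic Feshbach resonance:
    a(B) = a_bg * (1 - Delta / (B - B0))."""
    return a_bg * (1.0 - Delta / (B - B0))

# Illustrative usage with hypothetical scattering lengths a_data measured
# at fields B_data (away from the pole itself):
# popt, pcov = curve_fit(a_of_B, B_data, a_data, p0=(a_bg_guess, 0.1, B_guess))
```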
Interacting Multi-particle Classical Szilard Engine
The Szilard engine (SZE) is one of the best examples of how information can be used to extract work from a system. Initially, the working substance of the SZE was considered to be a single particle. Later, researchers extended the study of the SZE to multi-particle systems and even to the quantum regime. Here we present a detailed study of a classical SZE consisting of $N$ particles with inter-particle interactions, i.e., the working substance is a low-density non-ideal gas, and compare the work extraction with that of an SZE whose working substance is a non-interacting multi-particle system. We consider two kinds of interaction: (i) hard-core interactions and (ii) a square-well interaction. Our study reveals that less work is extracted when more particles interact through hard-core interactions, while more work is extracted when the particles interact via the square-well interaction. Another important result is that in the second case, as the particle number increases, the work extraction becomes independent of the initial position of the partition, as opposed to the first case, where the work extraction depends crucially on the initial position of the partition. More work can be extracted with a larger number of particles when the partition is inserted at positions near the boundary walls.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Speeding-up Object Detection Training for Robotics with FALKON
The latest deep learning methods for object detection provide remarkable performance, but have limits when used in robotic applications. One of the most relevant issues is the long training time, which is due to the large size and imbalance of the associated training sets, characterized by few positive and a large number of negative examples (i.e. background). Proposed approaches are based on end-to-end learning by back-propagation [22] or kernel methods trained with Hard Negatives Mining on top of deep features [8]. These solutions are effective, but prohibitively slow for on-line applications. In this paper we propose a novel pipeline for object detection that overcomes this problem and provides comparable performance, with a 60x training speedup. Our pipeline combines (i) the Region Proposal Network and the deep feature extractor from [22], to efficiently select candidate RoIs and encode them into powerful representations, with (ii) the FALKON [23] algorithm, a novel kernel-based method that allows fast training on large-scale problems (millions of points). We address the size and imbalance of the training data by exploiting the stochastic subsampling intrinsic to the method and a novel, fast, bootstrapping approach. We assess the effectiveness of the approach on a standard computer vision dataset (PASCAL VOC 2007 [5]) and demonstrate its applicability to a real robotic scenario with the iCubWorld Transformations [18] dataset.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Graph of Virtual Actors (GOVA): a Big Data Analytics Architecture for IoT
With the emergence of cloud computing and sensor technologies, Big Data analytics for the Internet of Things (IoT) has become the main force behind many innovative solutions for our society's problems. This paper provides practical explanations for the question "why is the number of Big Data applications that succeed and have an effect on our daily life so limited, compared with all of the solutions proposed and tested in the literature?", with examples taken from smart grids. We argue that "noninvariants" are the most challenging issues in IoT applications, which can easily be revealed if we use the term "invariant" to replace more common terms such as "information", "knowledge", or "insight" in any Big Data for IoT research. From our experience with developing smart grid applications, we have produced a list of noninvariants, which we believe to be the main causes of the gaps between Big Data in the laboratory and in practice in IoT applications. This paper also proposes the Graph of Virtual Actors (GOVA) as a Big Data analytics architecture for IoT applications, which not only can solve the noninvariant issues, but can also quickly scale horizontally in terms of computation, data storage, caching requirements, and programmability of the system.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Variation of the $q$-Painlevé System with Affine Weyl Group Symmetry of Type $E_7^{(1)}$
Recently a certain $q$-Painlevé type system has been obtained from a reduction of the $q$-Garnier system. In this paper it is shown that the $q$-Painlevé type system is associated with another realization of the affine Weyl group symmetry of type $E_7^{(1)}$ and is different from the well-known $q$-Painlevé system of type $E_7^{(1)}$ from the point of view of evolution directions. We also study a connection between the $q$-Painlevé type system and the $q$-Painlevé system of type $E_7^{(1)}$. Furthermore determinant formulas of particular solutions for the $q$-Painlevé type system are constructed in terms of the terminating $q$-hypergeometric function.
labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Statics and dynamics of a self-bound dipolar matter-wave droplet
We study the statics and dynamics of a stable, mobile, self-bound three-dimensional dipolar matter-wave droplet created in the presence of a tiny repulsive three-body interaction. In frontal collisions with an impact parameter and in angular collisions at large velocities along all directions, two droplets behave like quantum solitons. Such collisions are found to be quasi-elastic, and the droplets emerge undeformed after the collision without any change of velocity. However, in collisions at small velocities the axisymmetric dipolar interaction plays a significant role and the collision dynamics is sensitive to the direction of motion. For an encounter along the $z$ direction at small velocities, two droplets polarized along the $z$ direction coalesce to form a larger droplet $-$ a droplet molecule. For an encounter along the $x$ direction at small velocities, the same droplets stay apart and never meet each other due to the dipolar repulsion. The present study is based on an analytic variational approximation and a numerical solution of the mean-field Gross-Pitaevskii equation using the parameters of $^{52}$Cr atoms.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Riemannian geometry in infinite dimensional spaces
We lay foundations of the subject in the title, on which we build in another paper devoted to isometries in spaces of Kähler metrics.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Finite Blaschke products with prescribed critical points, Stieltjes polynomials, and moment problems
The determination of a finite Blaschke product from its critical points is a well-known problem with interrelations to other topics. Although the existence and uniqueness of solutions have long been established, we present several new aspects that have not yet been explored to their full extent. In particular, we show that the following three problems are equivalent: (i) determining a finite Blaschke product from its critical points, (ii) finding the equilibrium position of movable point charges interacting with a special configuration of fixed charges, and (iii) solving a moment problem for the canonical representation of power moments on the real axis. These equivalences are not only of theoretical interest, but also open up new perspectives for the design of algorithms. For instance, the second problem is closely linked to the determination of certain Stieltjes and Van Vleck polynomials for a second-order ODE, and allows solutions to be described as global minimizers of an energy functional.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The Stochastic Processes Generation in OpenModelica
Background: The component-based modeling language Modelica (of which OpenModelica is an open-source implementation) is used for the numerical simulation of complex processes of various kinds represented by ODE systems. However, the OpenModelica standard library has no routines for pseudo-random number generation, which makes it unsuitable for modeling stochastic processes. Purpose: The goal of this article is a brief overview of a number of algorithms for generating sequences of uniformly distributed pseudo-random numbers, an assessment of the quality of the sequences they produce, and a discussion of ways to implement some of these algorithms in the OpenModelica system. Methods: All of the algorithms are implemented in the C language, and the results of their work are tested using the open-source package DieHarder. For those algorithms that do not use bit operations, we describe their realization in OpenModelica; the other algorithms can be called from OpenModelica as C functions. Results: We have implemented and tested about nine algorithms. DieHarder testing revealed the pseudo-random number generators of the highest quality. We have also reviewed the libraries Noise and AdvancedNoise, which are candidates for addition to the Modelica Standard Library. Conclusions: Generators of uniformly distributed pseudo-random numbers can be implemented in the OpenModelica system, which is a first step toward making OpenModelica suitable for the simulation of stochastic processes.
labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
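As an illustration of the bit-operation-free class of generators that the abstract says can be written directly in OpenModelica, here is a Park-Miller multiplicative LCG, sketched in Python. Note that DieHarder would flag this classic generator as weak by modern standards; it only illustrates the recurrence structure.

```python
def lehmer_stream(seed, n, m=2147483647, a=16807):
    """Park-Miller 'minimal standard' multiplicative LCG.

    Uses only integer multiplication and modulus, no bit operations,
    which is the constraint mentioned for a pure-Modelica realization.
    Yields n floats uniform on (0, 1).
    """
    x = seed % m
    if x == 0:
        x = 1                      # zero is an absorbing state; avoid it
    out = []
    for _ in range(n):
        x = (a * x) % m            # x_{k+1} = a * x_k mod m
        out.append(x / m)          # this loop maps directly onto a
    return out                     # Modelica algorithm section
```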
MIT SuperCloud Portal Workspace: Enabling HPC Web Application Deployment
The MIT SuperCloud Portal Workspace enables the secure exposure of web services running on high performance computing (HPC) systems. The portal allows users to run any web application as an HPC job and access it from their workstation while providing authentication, encryption, and access control at the system level to prevent unintended access. This capability permits users to seamlessly utilize existing and emerging tools that present their user interface as a website on an HPC system creating a portal workspace. Performance measurements indicate that the MIT SuperCloud Portal Workspace incurs marginal overhead when compared to a direct connection of the same service.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Estimator of Prediction Error Based on Approximate Message Passing for Penalized Linear Regression
We propose an estimator of prediction error using an approximate message passing (AMP) algorithm that can be applied to a broad range of sparse penalties. Following Stein's lemma, the estimator of the generalized degrees of freedom, which is a key quantity for the construction of the estimator of the prediction error, is calculated at the AMP fixed point. The resulting form of the AMP-based estimator does not depend on the penalty function, and its value can be further improved by considering the correlation between predictors. The proposed estimator is asymptotically unbiased when the components of the predictors and response variables are independently generated according to a Gaussian distribution. We examine the behaviour of the estimator for real data under nonconvex sparse penalties, where Akaike's information criterion does not correspond to an unbiased estimator of the prediction error. The model selected by the proposed estimator is close to that which minimizes the true prediction error.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
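The paper's estimator is penalty-agnostic; as a concrete special case, here is a sketch of soft-threshold (LASSO-style) AMP with a Cp-style prediction-error estimate that uses df = number of nonzeros at the fixed point. The fixed threshold and the noise-level proxy are simplifying assumptions, not the paper's construction.

```python
import numpy as np

def soft(u, t):
    """Soft-thresholding denoiser."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def amp_soft(A, y, theta, n_iter=100):
    """AMP with a fixed soft threshold theta; returns the estimate and
    a crude Cp-style prediction-error estimate."""
    n, p = A.shape
    x = np.zeros(p)
    z = y.copy()
    for _ in range(n_iter):
        x = soft(x + A.T @ z, theta)
        onsager = (np.count_nonzero(x) / n) * z    # (1/delta) <eta'> z_prev
        z = y - A @ x + onsager
    df = np.count_nonzero(x)        # Stein degrees of freedom for soft thresholding
    sigma2 = np.mean(z ** 2)        # crude noise proxy; refine in real use
    pred_err = np.mean((y - A @ x) ** 2) + 2.0 * sigma2 * df / n
    return x, pred_err
```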
Faster Multiplication for Long Binary Polynomials
We set new speed records for multiplying long polynomials over finite fields of characteristic two. Our multiplication algorithm is based on the additive FFT (fast Fourier transform) of Lin, Chung, and Huang (2014), whereas the previously best results were based on multiplicative FFTs. Both methods have similar complexity in terms of arithmetic operations on the underlying finite field; however, our implementation shows that the additive FFT has less overhead. For further optimization, we employ a tower field construction, because the multipliers in the additive FFT naturally fall into small subfields, which leads to speed-ups from table-lookup instructions in modern CPUs. Benchmarks show that our method saves about $40\%$ computing time when multiplying polynomials of $2^{28}$ and $2^{29}$ bits compared with previous multiplicative FFT implementations.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Full-angle Negative Reflection with An Ultrathin Acoustic Gradient Metasurface: Floquet-Bloch Modes Perspective and Experimental Verification
Metasurfaces with gradient phase response offer a new alternative for steering the propagation of waves: the conventional Snell's law is revised to take the contribution of the local phase gradient into account. However, the requirement of momentum matching along the metasurface restricts its nontrivial beam manipulation functionality to a limited range of incidence angles. In this work, we theoretically and experimentally demonstrate that an acoustic gradient metasurface supports negative reflection for full-angle incidence. A mode expansion theory is developed to help understand how the gradient metasurface tailors incident beams, showing that full-angle negative reflection occurs when the first negative-order Floquet-Bloch mode dominates. Coiling-up-space structures are utilized to build the desired acoustic gradient metasurface, and the full-angle negative reflection is verified by experimental measurements. Our work offers a Floquet-Bloch-mode perspective for qualitatively understanding the reflection behavior of acoustic gradient metasurfaces and enables a new degree of freedom for manipulating acoustic waves.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
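For reference, the revised (generalized) Snell's law for reflection that the abstract invokes, in its standard textbook form for a metasurface with local phase profile $\phi(x)$ (not specific to this paper):

\[
\sin\theta_r \;-\; \sin\theta_i \;=\; \frac{1}{k_0}\,\frac{\mathrm{d}\phi}{\mathrm{d}x},
\qquad k_0 = \frac{2\pi}{\lambda}.
\]

Whenever $|\sin\theta_i + (1/k_0)\,\mathrm{d}\phi/\mathrm{d}x| > 1$, the anomalous specular order becomes evanescent; this is where the higher-order Floquet-Bloch analysis of the abstract takes over to explain the observed full-angle negative reflection.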
Semigroup C*-algebras and toric varieties
Let S be a finitely generated subsemigroup of Z^2. We derive a general formula for the K-theory of the left regular C*-algebra for S.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Error analysis for small-sample, high-variance data: Cautions for bootstrapping and Bayesian bootstrapping
Recent advances in molecular simulations allow the direct evaluation of kinetic parameters such as rate constants for protein folding or unfolding. However, these calculations are usually computationally expensive and even significant computing resources may result in a small number of independent rate estimates spread over many orders of magnitude. Such small, high-variance samples are not readily amenable to analysis using the standard uncertainty ("standard error of the mean") because unphysical negative limits of confidence intervals result. Bootstrapping, a natural alternative guaranteed to yield a confidence interval within the minimum and maximum values, also exhibits a striking systematic bias of the lower confidence limit. As we show, bootstrapping artifactually assigns high probability to improbably low mean values. A second alternative, the Bayesian bootstrap strategy, does not suffer from the same deficit and is more logically consistent with the type of confidence interval desired, but must be used with caution nevertheless. Neither standard nor Bayesian bootstrapping can overcome the intrinsic challenge of under-estimating the mean from small, high-variance samples. Our report is based on extensive re-analysis of multiple estimates for rate constants obtained from independent atomistic simulations. Although we only analyze rate constants, similar considerations may apply to other types of high-variance calculations, such as may occur in highly non-linear averages like the Jarzynski relation.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
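A self-contained numpy illustration of the two resampling schemes being compared; the data here are synthetic placeholders spread over orders of magnitude, not the paper's rate constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_means(x, n_resamples=10_000):
    """Ordinary bootstrap: resample observations with replacement."""
    idx = rng.integers(0, x.size, size=(n_resamples, x.size))
    return x[idx].mean(axis=1)

def bayesian_bootstrap_means(x, n_resamples=10_000):
    """Bayesian bootstrap: random Dirichlet(1,...,1) weights."""
    w = rng.dirichlet(np.ones(x.size), size=n_resamples)
    return w @ x

# Small, high-variance sample mimicking rate estimates over several decades:
x = 10.0 ** rng.uniform(-3, 0, size=8)
for name, draws in [("bootstrap", bootstrap_means(x)),
                    ("Bayesian bootstrap", bayesian_bootstrap_means(x))]:
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"{name}: 95% CI for the mean = [{lo:.3g}, {hi:.3g}]")
```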
Computing Influence of a Product through Uncertain Reverse Skyline
Understanding the influence of a product is crucially important for making informed business decisions. This paper introduces a new type of skyline query, called the uncertain reverse skyline, for measuring the influence of a probabilistic product in uncertain data settings. More specifically, given a dataset of probabilistic products P and a set of customers C, the uncertain reverse skyline of a probabilistic product q retrieves all customers c in C which include q as one of their preferred products. We present efficient pruning ideas and techniques for processing the uncertain reverse skyline query of a probabilistic product using an R-tree index. We also present an efficient parallel approach for computing the uncertain reverse skyline and the influence score of a probabilistic product. Our approach significantly outperforms the baseline approach derived from the existing literature, as demonstrated by extensive experiments with both real and synthetic datasets.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Effect of Heterogeneity in Models of El-Niño Southern Oscillations
The emergence of oscillations in models of the El Niño effect is of utmost relevance. Here we investigate a coupled nonlinear delay differential system modeling the El Niño/Southern Oscillation (ENSO) phenomenon, which arises through the strong coupling of the ocean-atmosphere system. In particular, we study the temporal patterns of the sea surface temperature anomaly of the two sub-regions. For identical sub-regions we typically observe a co-existence of amplitude and oscillator death behavior for low delays, and heterogeneous oscillations for high delays, when inter-region coupling is weak. For moderate inter-region coupling strengths one obtains homogeneous oscillations for sufficiently large delays and amplitude death for small delays. When the inter-region coupling strength is large, oscillations are suppressed altogether, implying that strongly coupled sub-regions do not exhibit ENSO-like oscillations. Further, we observe that larger strengths of self-delay coupling favour oscillations, while oscillations die out when the delayed coupling is weak. This indicates again that delayed feedback, incorporating oceanic wave transit effects, is the principal cause of oscillatory behaviour; the effect of trapped ocean waves propagating in a basin with closed boundaries is crucial for the emergence of ENSO. Further, we show how non-uniformity in the delays, and differences in the strengths of the self-delay coupling of the sub-regions, affect the rise of oscillations. Interestingly, we find that larger delays and self-delay coupling strengths lead to oscillations, while strong inter-region coupling kills oscillatory behaviour. Thus, coupling the sub-regions has a very significant effect on the emergence of oscillations: strong coupling typically suppresses oscillations, while weak coupling of non-identical sub-regions can induce oscillations, thereby favouring ENSO.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A branch-and-price approach with MILP formulation to modularity density maximization on graphs
For clustering of an undirected graph, this paper presents an exact algorithm for the maximization of modularity density, a more complicated criterion that overcomes drawbacks of the well-known modularity. The problem can be interpreted as a set-partitioning problem, which suggests an integer linear programming (ILP) formulation. We provide a branch-and-price framework for solving this ILP, i.e., column generation combined with branch-and-bound. Above all, we formulate the column generation subproblem, which must be solved repeatedly, as a simpler mixed integer linear programming (MILP) problem. Acceleration techniques, namely a set-packing relaxation and multiple-cutting-planes-at-a-time, combined with the MILP formulation, enable us to optimize the modularity density for famous test instances, including ones with over 100 vertices, in around four minutes on a PC. Our solution method is deterministic, and the computation time is not affected by any stochastic behavior. For one of the instances, column generation at the root node of the branch-and-bound tree provides a fractional upper-bound solution, and our algorithm finds an integral optimal solution after branching.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
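For concreteness, here is a sketch that evaluates the criterion being maximized, using the usual definition of modularity density (Li et al.); it only scores a given partition and is of course not the exact branch-and-price solver.

```python
from collections import Counter

def modularity_density(edges, partition):
    """Modularity density D = sum over communities c of
    (2 * internal_edges(c) - external_edges(c)) / |c|,
    for an undirected graph given as an edge list and a
    node -> community mapping."""
    internal, external = Counter(), Counter()
    size = Counter(partition.values())
    for u, v in edges:
        cu, cv = partition[u], partition[v]
        if cu == cv:
            internal[cu] += 1           # edge inside community cu
        else:
            external[cu] += 1           # edge leaving cu ...
            external[cv] += 1           # ... and leaving cv
    return sum((2 * internal[c] - external[c]) / size[c] for c in size)
```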
Survey of Gravitationally-lensed Objects in HSC Imaging (SuGOHI). I. Automatic search for galaxy-scale strong lenses
The Hyper Suprime-Cam Subaru Strategic Program (HSC SSP) is an excellent survey for the search for strong lenses, thanks to its area, image quality and depth. We use three different methods to look for lenses among 43,000 luminous red galaxies from the Baryon Oscillation Spectroscopic Survey (BOSS) sample with photometry from the S16A internal data release of the HSC SSP. The first method is a newly developed algorithm, named YATTALENS, which looks for arc-like features around massive galaxies and then estimates the likelihood of an object being a lens by performing a lens model fit. The second method, CHITAH, is a modeling-based algorithm originally developed to look for lensed quasars. The third method makes use of spectroscopic data to look for emission lines from objects at a different redshift from that of the main galaxy. We find 15 definite lenses, 36 highly probable lenses and 282 possible lenses. Among the three methods, YATTALENS, which was developed specifically for this problem, performs best in terms of both completeness and purity. Nevertheless five highly probable lenses were missed by YATTALENS but found by the other two methods, indicating that the three methods are highly complementary. Based on these numbers we expect to find $\sim$300 definite or probable lenses by the end of the HSC SSP.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Learning Sparse Neural Networks through $L_0$ Regularization
We propose a practical method for $L_0$ norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of $L_0$ regularization. However, since the $L_0$ norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected $L_0$ norm of the resulting gated weights is differentiable with respect to the distribution parameters. We further propose the \emph{hard concrete} distribution for the gates, which is obtained by "stretching" a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
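The "hard concrete" gate described above can be sketched in a few lines. Below is a numpy illustration using the stretch parameters commonly reported for this approach (gamma = -0.1, zeta = 1.1, beta = 2/3); in practice the sampling sits inside the training graph (e.g. PyTorch) so that gradients reach the log-alpha parameters.

```python
import numpy as np

GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0   # stretch limits and temperature

def sample_hard_concrete(log_alpha, rng):
    """Sample gates z in [0, 1]: a binary concrete variable is stretched
    to (GAMMA, ZETA) and clipped, so exact zeros (and ones) occur with
    finite probability."""
    u = rng.uniform(1e-6, 1.0 - 1e-6, size=log_alpha.shape)
    s = 1.0 / (1.0 + np.exp(-(np.log(u) - np.log1p(-u) + log_alpha) / BETA))
    return np.clip(s * (ZETA - GAMMA) + GAMMA, 0.0, 1.0)

def expected_l0(log_alpha):
    """Differentiable surrogate for the expected number of active gates:
    P(z != 0) = sigmoid(log_alpha - BETA * log(-GAMMA / ZETA))."""
    p_active = 1.0 / (1.0 + np.exp(-(log_alpha - BETA * np.log(-GAMMA / ZETA))))
    return p_active.sum()
```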
The relationships between PM2.5 and meteorological factors in China: Seasonal and regional variations
The interactions between PM2.5 and meteorological factors play a crucial role in air pollution analysis. However, previous studies of the relationships between PM2.5 concentration and meteorological conditions have mainly been confined to a single city or district, and the correlation over the whole of China remains unclear. Whether spatial and seasonal variations exist deserves further research. In this study, the relationships between PM2.5 concentration and meteorological factors were investigated in 74 major cities in China over a continuous period of 22 months, from February 2013 to November 2014, at the season, year, city, and regional scales, and the spatial and seasonal variations were analyzed. The meteorological factors were relative humidity (RH), temperature (TEM), wind speed (WS), and surface pressure (PS). We found that spatial and seasonal variations in their relationships with PM2.5 do exist. Spatially, RH is positively correlated with PM2.5 concentration in North China and Urumqi, but the relationship turns negative in other areas of China. WS is negatively correlated with PM2.5 everywhere except for Hainan Island. PS has a strong positive relationship with PM2.5 concentration in Northeast China and Mid-south China, while in other areas the correlation is weak. Seasonally, the positive correlation between PM2.5 concentration and RH is stronger in winter and spring. TEM has a negative relationship with PM2.5 in autumn and a positive one in winter. PS is more positively correlated with PM2.5 in autumn than in the other seasons. Our study investigated the relationships between PM2.5 and meteorological factors in terms of their spatial and seasonal variations, and the resulting conclusions are more comprehensive and precise than previous ones.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
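A sketch of the kind of season-stratified correlation analysis described above, assuming a hypothetical long-format table; the column names are illustrative, not the study's.

```python
import pandas as pd

def seasonal_correlations(df, factors=("rh", "tem", "ws", "ps")):
    """Pearson correlation of PM2.5 with each meteorological factor,
    computed separately per season. Expects columns 'season', 'pm25'
    and one column per factor (hypothetical names)."""
    return df.groupby("season").apply(
        lambda g: pd.Series({f: g["pm25"].corr(g[f]) for f in factors})
    )

# Usage: seasonal_correlations(df) returns a seasons-by-factors table;
# a per-city or per-region groupby would mirror the study's other scales.
```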
Rank-two Milnor idempotents for the multipullback quantum complex projective plane
The $K_0$-group of the C*-algebra of multipullback quantum complex projective plane is known to be $\mathbb{Z}^3$, with one generator given by the C*-algebra itself, one given by the section module of the noncommutative (dual) tautological line bundle, and one given by the Milnor module associated to a generator of the $K_1$-group of the C*-algebra of Calow-Matthes quantum 3-sphere. Herein we prove that these Milnor modules are isomorphic either to the section module of a noncommutative vector bundle associated to the $SU_q(2)$-prolongation of the Heegaard quantum 5-sphere $S^5_H$ viewed as a $U(1)$-quantum principal bundle, or to a complement of this module in the rank-four free module. Finally, we demonstrate that one of the above Milnor modules always splits into the direct sum of the rank-one free module and a rank-one non-free projective module that is \emph{not} associated with $S^5_H$.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Lost Relatives of the Gumbel Trick
The Gumbel trick is a method to sample from a discrete probability distribution, or to estimate its normalizing partition function. The method relies on repeatedly applying a random perturbation to the distribution in a particular way, each time solving for the most likely configuration. We derive an entire family of related methods, of which the Gumbel trick is one member, and show that the new methods have superior properties in several settings with minimal additional computational cost. In particular, for the Gumbel trick to yield computational benefits for discrete graphical models, Gumbel perturbations on all configurations are typically replaced with so-called low-rank perturbations. We show how a subfamily of our new methods adapts to this setting, proving new upper and lower bounds on the log partition function and deriving a family of sequential samplers for the Gibbs distribution. Finally, we balance the discussion by showing how the simpler analytical form of the Gumbel trick enables additional theoretical results.
1
0
0
1
0
0
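The core mechanism of the Gumbel trick described above is easy to state in code. Below is a minimal sketch of the basic (full-rank) version on a toy distribution: perturbing unnormalized log-potentials with i.i.d. Gumbel noise makes the argmax an exact sample, and the average maximum estimates the log partition function up to the Euler-Mascheroni constant. The potentials are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
log_phi = np.array([1.0, 0.5, -0.3, 2.1])     # toy unnormalized log-potentials
log_Z = np.log(np.exp(log_phi).sum())         # ground truth, for comparison

M = 100_000
perturbed = log_phi + rng.gumbel(size=(M, log_phi.size))  # Gumbel(0,1) noise

samples = perturbed.argmax(axis=1)            # exact samples from softmax(log_phi)
log_Z_hat = perturbed.max(axis=1).mean() - np.euler_gamma  # partition estimate

print(log_Z, log_Z_hat)                       # the two should be close
```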
Sequential Randomized Matrix Factorization for Gaussian Processes: Efficient Predictions and Hyper-parameter Optimization
This paper presents a sequential randomized low-rank matrix factorization approach for incrementally predicting values of an unknown function at test points using the Gaussian process framework. It is well known that in the Gaussian process framework, the computational bottlenecks are the inversion of the (regularized) kernel matrix and the computation of the hyper-parameters defining the kernel. The main contributions of this paper are two-fold. First, we formalize an approach to compute the inverse of the kernel matrix using randomized matrix factorization algorithms in a streaming scenario, i.e., when data is generated incrementally over time. The accuracy and computational efficiency of the proposed method are compared against a batch approach based on randomized matrix factorization and an existing streaming approach based on approximating the Gaussian process by a finite set of basis vectors. Second, we extend the sequential factorization approach to a class of kernel functions for which the hyper-parameters can be efficiently optimized. All results are demonstrated on two publicly available datasets.
1
0
0
1
0
0
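As background for the abstract above, here is a sketch of the batch building block: a randomized range finder yields a low-rank eigendecomposition of the kernel matrix, which makes the regularized inverse cheap to apply. The sequential/streaming update and the hyper-parameter optimization that are the paper's contributions are not reproduced here; the kernel, data, and rank are toy choices.

```python
# Randomized low-rank factorization of a GP kernel matrix (batch sketch).
import numpy as np

def rbf(X, Y, ell=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)

K, sigma2, r = rbf(X, X), 0.01, 30
Q, _ = np.linalg.qr(K @ rng.standard_normal((500, r)))  # randomized range finder
lam, W = np.linalg.eigh(Q.T @ K @ Q)                    # K ~= U diag(lam) U^T
U = Q @ W

# apply (K + sigma2*I)^{-1} to y using the low-rank eigenbasis
Uty = U.T @ y
alpha = U @ (Uty / (lam + sigma2)) + (y - U @ Uty) / sigma2
posterior_mean = K @ alpha   # GP posterior mean at the training inputs
```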
Large-scale dynamos in rapidly rotating plane layer convection
Context: Convectively-driven flows play a crucial role in the dynamo processes that are responsible for producing magnetic activity in stars and planets. It is still not fully understood why many astrophysical magnetic fields have a significant large-scale component. Aims: Our aim is to investigate the dynamo properties of compressible convection in a rapidly rotating Cartesian domain, focusing upon a parameter regime in which the underlying hydrodynamic flow is known to be unstable to a large-scale vortex instability. Methods: The governing equations of three-dimensional nonlinear magnetohydrodynamics (MHD) are solved numerically. Different numerical schemes are compared and we propose a possible benchmark case for other similar codes. Results: In keeping with previous related studies, we find that convection in this parameter regime can drive a large-scale dynamo. The components of the mean horizontal magnetic field oscillate, leading to a continuous overall rotation of the mean field. Whilst the large-scale vortex instability dominates the early evolution of the system, it is suppressed by the magnetic field and makes a negligible contribution to the mean electromotive force that is responsible for driving the large-scale dynamo. The cycle period of the dynamo is comparable to the ohmic decay time, with longer cycles for dynamos in convective systems that are closer to onset. In these particular simulations, large-scale dynamo action is found only when vertical magnetic field boundary conditions are adopted at the upper and lower boundaries. Strongly modulated large-scale dynamos are found at higher Rayleigh numbers, with periods of reduced activity ("grand minima"-like events) occurring during transient phases in which the large-scale vortex temporarily re-establishes itself, before being suppressed again by the magnetic field.
0
1
0
0
0
0
Statistical inference for misspecified ergodic Lévy driven stochastic differential equation models
This paper deals with the estimation problem for misspecified ergodic Lévy driven stochastic differential equation models based on high-frequency samples. We utilize the widely applicable and tractable Gaussian quasi-likelihood approach, which focuses on the (conditional) mean and variance structure. It is shown that the corresponding Gaussian quasi-likelihood estimators of the drift and scale parameters satisfy tail probability estimates and asymptotic normality at the same rate as in the correctly specified case. In this analysis, the extended Poisson equation for time-homogeneous Feller Markov processes plays an important role in handling the effect of misspecification. Our results further confirm the practical usefulness of the Gaussian quasi-likelihood approach for SDE models.
0
0
1
1
0
0
Bar formation in the Milky Way type galaxies
Many barred galaxies, possibly including the Milky Way, have cusps in their centres. There is a widespread belief, however, that the usual bar instability, which takes place in bulgeless galaxy models, is impossible for cuspy models, because of the presence of the inner Lindblad resonance for any pattern speed. At the same time, there is numerical evidence that the bar instability can form a bar. We analyse this discrepancy by means of accurate and diverse N-body simulations and by calculating normal modes. We show that bar formation in cuspy galaxies can be explained by taking into account the disc thickness. The exponential growth time is moderate for typical current disc masses (about 250 Myr), but increases considerably (by a factor of 2 or more) upon substitution of the live halo and bulge with a rigid halo/bulge potential; meanwhile, pattern speeds remain almost the same. Normal mode analysis with different disc masses favours a young bar hypothesis, according to which the bar instability saturated only recently.
0
1
0
0
0
0
TLR: Transfer Latent Representation for Unsupervised Domain Adaptation
Domain adaptation refers to the process of learning prediction models in a target domain by making use of data from a source domain. Many classic methods solve the domain adaptation problem by establishing a common latent space, which may cause the loss of many important properties across both domains. In this manuscript, we develop a novel method, transfer latent representation (TLR), to learn a better latent space. Specifically, we design an objective function based on a simple linear autoencoder to derive the latent representations of both domains. The encoder in the autoencoder aims to project the data of both domains into a robust latent space. In addition, the decoder imposes a constraint to reconstruct the original data, which preserves the common properties of both domains and reduces the noise that causes domain shift. Experiments on cross-domain tasks demonstrate the advantages of TLR over competing methods.
0
0
0
1
0
0
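A minimal, hypothetical sketch of the kind of objective the abstract above describes: a shared linear autoencoder projects source and target data into one latent space, with a reconstruction term preserving the original structure. The alignment term here (matching latent means) is a crude stand-in for illustration, not the authors' exact objective, and the data are random toys.

```python
import torch

d, k = 100, 20
enc = torch.nn.Linear(d, k, bias=False)           # shared encoder
dec = torch.nn.Linear(k, d, bias=False)           # decoder for reconstruction
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

Xs = torch.randn(256, d)                          # toy source data
Xt = torch.randn(256, d) + 0.5                    # toy (shifted) target data

for step in range(200):
    Zs, Zt = enc(Xs), enc(Xt)
    recon = ((dec(Zs) - Xs) ** 2).mean() + ((dec(Zt) - Xt) ** 2).mean()
    align = ((Zs.mean(0) - Zt.mean(0)) ** 2).sum()  # crude domain-alignment term
    loss = recon + 0.1 * align
    opt.zero_grad()
    loss.backward()
    opt.step()
```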
Automated and Robust Quantification of Colocalization in Dual-Color Fluorescence Microscopy: A Nonparametric Statistical Approach
Colocalization is a powerful tool to study the interactions between fluorescently labeled molecules in biological fluorescence microscopy. However, existing techniques for colocalization analysis have not undergone continued development, especially with regard to robust statistical support. In this paper, we examine two of the most popular quantification techniques for colocalization and argue that they can be improved upon using ideas from nonparametric statistics and scan statistics. In particular, we propose a new colocalization metric that is robust, easily implementable, and optimal in a rigorous statistical testing framework. Application to several benchmark datasets, as well as biological examples, further demonstrates the usefulness of the proposed technique.
0
0
0
1
0
0
On the MISO Channel with Feedback: Can Infinitely Massive Antennas Achieve Infinite Capacity?
We consider communication over a multiple-input single-output (MISO) block fading channel in the presence of an independent noiseless feedback link. We assume that the transmitter and receiver have no prior knowledge of the channel state realizations, but the transmitter and receiver can acquire the channel state information (CSIT/CSIR) via downlink training and feedback. For this channel, we show that increasing the number of transmit antennas to infinity will not achieve an infinite capacity, for a finite channel coherence length and a finite input constraint on the second or fourth moment. This insight follows from our new capacity bounds that hold for any linear and nonlinear coding strategies, and any channel training schemes. In addition to the channel capacity bounds, we also provide a characterization on the beamforming gain that is also known as array gain or power gain, at the regime with a large number of antennas.
1
0
0
0
0
0
Structural Analysis and Optimal Design of Distributed System Throttlers
In this paper, we investigate the performance analysis and synthesis of distributed system throttlers (DST). A throttler is a mechanism that limits the flow rate of incoming metrics, e.g., bytes per second, network bandwidth usage, capacity, traffic, etc. This can be used to protect a service's backend/clients from getting overloaded, or to reduce the effects of uncertainties in demand for shared services. We study the performance deterioration of DSTs subject to demand uncertainty. We then consider network synthesis problems that aim to improve the performance of noisy DSTs via communication link modifications as well as server update cycle modifications.
1
0
1
0
0
0
Subadditivity and additivity of the Yang-Mills action functional in Noncommutative Geometry
We formulate notions of subadditivity and additivity of the Yang-Mills action functional in noncommutative geometry. We identify a suitable hypothesis on spectral triples under which the Yang-Mills functional is always subadditive, as expected. The additivity property is much stronger in the sense that it implies the subadditivity property. Under this hypothesis, we obtain a necessary and sufficient condition for the additivity of the Yang-Mills functional. An instance of additivity is shown for the case of noncommutative $n$-tori. We also investigate the behaviour of critical points of the Yang-Mills functional under additivity. At the end, we discuss a few examples involving compact spin manifolds, matrix algebras, the noncommutative $n$-torus, and the quantum Heisenberg manifolds, which validate our hypothesis.
0
0
1
0
0
0
Sensitivity of the Hermite rank
The Hermite rank appears in limit theorems involving long memory. We show that a Hermite rank higher than one is unstable when the data is slightly perturbed by transformations such as shift and scaling. We carry out a "near higher order rank analysis" to illustrate how the limit theorems are affected by a shift perturbation that is decreasing in size. As a byproduct of our analysis, we also prove the coincidence of the Hermite rank and the power rank in the Gaussian context. The paper is a technical companion to \citet{bai:taqqu:2017:instability}, which discusses the instability of the Hermite rank in the statistical context. (Older title: "Some properties of the Hermite rank".)
0
0
1
1
0
0
Specifying a positive threshold function via extremal points
An extremal point of a positive threshold Boolean function $f$ is either a maximal zero or a minimal one. It is known that if $f$ depends on all its variables, then the set of its extremal points completely specifies $f$ within the universe of threshold functions. However, in some cases, $f$ can be specified by a smaller set. The minimum number of points in such a set is the specification number of $f$. It was shown in [S.-T. Hu. Threshold Logic, 1965] that the specification number of a threshold function of $n$ variables is at least $n+1$. In [M. Anthony, G. Brightwell, and J. Shawe-Taylor. On specifying Boolean functions by labelled examples. Discrete Applied Mathematics, 1995] it was proved that this bound is attained for nested functions and conjectured that for all other threshold functions the specification number is strictly greater than $n+1$. In the present paper, we resolve this conjecture negatively by exhibiting threshold Boolean functions of $n$ variables, which are non-nested and for which the specification number is $n+1$. On the other hand, we show that the set of extremal points satisfies the statement of the conjecture, i.e., a positive threshold Boolean function depending on all its $n$ variables has $n+1$ extremal points if and only if it is nested. To prove this, we reveal an underlying structure of the set of extremal points.
1
0
0
0
0
0
Link Mining for Kernel-based Compound-Protein Interaction Predictions Using a Chemogenomics Approach
Virtual screening (VS) is widely used during computational drug discovery to reduce costs. Chemogenomics-based virtual screening (CGBVS) can be used to predict new compound-protein interactions (CPIs) from known CPI network data using several methods, including machine learning and data mining. Although CGBVS facilitates highly efficient and accurate CPI prediction, it performs poorly when predicting for new compounds whose CPIs are unknown. The pairwise kernel method (PKM) is a state-of-the-art CGBVS method and shows high accuracy for the prediction of new compounds. In this study, on the basis of link mining, we improved the PKM by combining a link indicator kernel (LIK) with chemical similarity, and evaluated the accuracy of these methods. The proposed method obtained an average area under the precision-recall curve (AUPR) of 0.562, which was higher than that achieved by the conventional Gaussian interaction profile (GIP) method (0.425), while the calculation time increased by only a few percent.
1
0
0
1
0
0
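A sketch of the pairwise-kernel idea underlying the abstract above: a kernel over (compound, protein) pairs can be formed as the Kronecker product of a compound kernel and a protein kernel, and ranking quality is scored by AUPR. The kernels, labels, and regression model here are toy placeholders, not the paper's LIK or chemical-similarity kernels.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
n_c, n_p = 8, 6                                  # compounds, proteins
A = rng.standard_normal((n_c, 5)); Kc = A @ A.T  # toy compound kernel (PSD)
B = rng.standard_normal((n_p, 5)); Kp = B @ B.T  # toy protein kernel (PSD)

K_pair = np.kron(Kc, Kp)                         # kernel over all (c, p) pairs
y = rng.integers(0, 2, size=n_c * n_p)           # toy interaction labels

idx = rng.permutation(n_c * n_p)
tr, te = idx[:30], idx[30:]
model = KernelRidge(alpha=1.0, kernel="precomputed")
model.fit(K_pair[np.ix_(tr, tr)], y[tr])
scores = model.predict(K_pair[np.ix_(te, tr)])   # ranking scores for test pairs
print("AUPR:", average_precision_score(y[te], scores))
```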
Using Stock Prices as Ground Truth in Sentiment Analysis to Generate Profitable Trading Signals
The increasing availability of "big" (large volume) social media data has motivated a great deal of research in applying sentiment analysis to predict the movement of prices within financial markets. Previous work in this field investigates how the true sentiment of text (i.e. positive or negative opinions) can be used for financial predictions, based on the assumption that sentiments expressed online are representative of the true market sentiment. Here we consider the converse idea, that using the stock price as the ground truth in the system may be a better indication of sentiment. Tweets are labelled as Buy or Sell depending on whether the stock price discussed rose or fell over the following hour, and from this, stock-specific dictionaries are built for individual companies. A Bayesian classifier is used to generate stock predictions, which are input to an automated trading algorithm. Placing 468 trades over a 1 month period yields a return rate of 5.18%, which annualises to approximately 83% per annum. This approach performs significantly better than random chance and outperforms two baseline sentiment analysis methods tested.
0
0
0
0
0
1
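A minimal sketch of the labelling scheme described above: each tweet is labelled Buy or Sell according to the stock's movement over the following hour, and a naive Bayes classifier is trained on the resulting corpus. The tweets and price changes are toy placeholders (the paper builds stock-specific dictionaries from real data).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

tweets = ["earnings beat expectations", "ceo resigns amid scandal",
          "record sales this quarter", "guidance cut for next year"]
price_change_next_hour = [0.8, -1.2, 0.5, -0.6]   # toy %, aligned with tweets
labels = ["Buy" if dp > 0 else "Sell" for dp in price_change_next_hour]

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(tweets), labels)

print(clf.predict(vec.transform(["strong quarter with record sales"])))
```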
A novel approach to fractional calculus: utilizing fractional integrals and derivatives of the Dirac delta function
While the definition of a fractional integral may be codified by Riemann and Liouville, an agreed-upon fractional derivative has eluded discovery for many years. This is likely a result of integral definitions including numerous constants of integration in their results. An elimination of constants of integration opens the door to an operator that reconciles all known fractional derivatives and shows surprising results in areas unobserved before, including the appearance of the Riemann Zeta Function and fractional Laplace and Fourier Transforms. A new class of functions, known as Zero Functions and closely related to the Dirac Delta Function, are necessary for one to perform elementary operations of functions without using constants. The operator also allows for a generalization of the Volterra integral equation, and provides a method of solving for Riemann's "complimentary" function introduced during his research on fractional derivatives.
0
0
1
0
0
0
When does every definable nonempty set have a definable element?
The assertion that every definable set has a definable element is equivalent over ZF to the principle $V=\text{HOD}$, and indeed, we prove, so is the assertion merely that every $\Pi_2$-definable set has an ordinal-definable element. Meanwhile, every model of ZFC has a forcing extension satisfying $V\neq\text{HOD}$ in which every $\Sigma_2$-definable set has an ordinal-definable element. Similar results hold for $\text{HOD}(\mathbb{R})$ and $\text{HOD}(\text{Ord}^\omega)$ and other natural instances of $\text{HOD}(X)$.
0
0
1
0
0
0
An exploration to visualize finite element data with a DSL
The scientific community uses PDEs to model a range of problems. People in this domain are interested in visualizing their results, but existing mechanisms for visualization cannot handle the full richness of computations in the domain. We explored how Diderot, a domain-specific language for scientific visualization and image analysis, could be used to solve this problem. We demonstrate our first, modest approach to visualizing FE data with Diderot and provide examples. Using Diderot, we do a simple sampling and a volume rendering of an FE field. These examples showcase Diderot's ability to provide a visualization result for Firedrake. This paper describes the extension of the Diderot language to include FE data.
1
0
0
0
0
0
Measurement and Analysis of Quality of Service of Mobile Networks in Afghanistan: An End User Perspective
Enhanced Quality of Service (QoS) and the satisfaction of mobile phone users are major concerns of a service provider. In order to manage the network efficiently and to provide enhanced end-to-end Quality of Experience (QoE), an operator is expected to measure and analyze QoS from various perspectives and at different relevant points of the network. The scope of this paper is the measurement and statistical analysis of the QoS of mobile networks from the end user perspective in Afghanistan. The study is based on primary data collected on a random basis from 1,515 mobile phone users of five cellular operators. The paper furthermore proposes adequate technical solutions to mobile operators in order to address existing challenges in the area of QoS and to remain competitive in the market. Based on the results of the processed data, and considering geographical locations, population, and the telecom regulations of the government, the authors recommend the deployment of small cells (SCs), increasing the number of regular performance tests, optimal placement of base stations, increasing the number of carriers, and high-order sectorization as proposed technical solutions.
1
0
0
0
0
0
On the uncertainty of temperature estimation in a rapid compression machine
Rapid compression machines (RCMs) have been widely used in the combustion literature to study the low-to-intermediate temperature ignition of many fuels. In a typical RCM, the pressure during and after the compression stroke is measured. However, measurement of the temperature history in the RCM reaction chamber is challenging. Thus, the temperature is generally calculated by the isentropic relations between pressure and temperature, assuming that the adiabatic core hypothesis holds. To estimate the uncertainty in the calculated temperature, an uncertainty propagation analysis must be carried out. Our previous analyses assumed that the uncertainties of the parameters in the equation to calculate the temperature were normally distributed and independent, but these assumptions do not hold for typical RCM operating procedures. In this work, a Monte Carlo method is developed to estimate the uncertainty in the calculated temperature, while taking into account the correlation between parameters and the possibility of non-normal probability distributions. In addition, the Monte Carlo method is compared to an analysis that assumes normally distributed, independent parameters. Both analysis methods show that the magnitude of the initial pressure and the uncertainty of the initial temperature have strong influences on the magnitude of the uncertainty. Finally, the uncertainty estimation methods studied here provide a reference value for the uncertainty of the reference temperature in an RCM and can be generalized to other similar facilities.
0
1
0
0
0
0
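A sketch of the Monte Carlo propagation described above, using the constant-specific-heat isentropic relation T_C = T_0 (P_C/P_0)^((gamma-1)/gamma) for illustration; the paper integrates temperature-dependent properties and treats further parameters. The means, uncertainties, and the correlation between P_0 and T_0 below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N, gamma = 200_000, 1.4

# correlated initial pressure [Pa] and temperature [K] (hypothetical values)
mean = [101.3e3, 300.0]
sd_P0, sd_T0, rho = 0.5e3, 1.0, 0.3
cov = [[sd_P0**2, rho * sd_P0 * sd_T0],
       [rho * sd_P0 * sd_T0, sd_T0**2]]
P0, T0 = rng.multivariate_normal(mean, cov, size=N).T
PC = rng.normal(30e5, 2e3, size=N)                # measured compressed pressure [Pa]

TC = T0 * (PC / P0) ** ((gamma - 1.0) / gamma)    # isentropic, constant gamma
print(f"T_C = {TC.mean():.1f} K +/- {2 * TC.std():.1f} K (2-sigma)")
```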
Abelian varieties isogenous to a power of an elliptic curve over a Galois extension
Given an elliptic curve $E/k$ and a Galois extension $k'/k$, we construct an exact functor from torsion-free modules over the endomorphism ring ${\rm End}(E_{k'})$ with a semilinear ${\rm Gal}(k'/k)$ action to abelian varieties over $k$ that are $k'$-isogenous to a power of $E$. As an application, we show that every elliptic curve with complex multiplication geometrically is isogenous over the ground field to one with complex multiplication by a maximal order.
0
0
1
0
0
0
Radiative nonrecoil nuclear finite size corrections of order $\alpha(Z\alpha)^5$ to the Lamb shift in light muonic atoms
On the basis of the quasipotential method in quantum electrodynamics, we calculate nuclear finite size radiative corrections of order $\alpha(Z \alpha)^5$ to the Lamb shift in muonic hydrogen and helium. To construct the interaction potential of the particles, which gives the necessary contributions to the energy spectrum, we use the method of projection operators onto states with a definite spin. Separate analytic expressions for the contributions of the muon self-energy, the muon vertex operator, and the amplitude with a spanning photon are obtained. We also present numerical results for these contributions using modern experimental data on the electromagnetic form factors of light nuclei.
0
1
0
0
0
0
Improved Training of Wasserstein GANs
Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.
1
0
0
1
0
0
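The penalty proposed above is a one-liner with automatic differentiation. Below is a minimal PyTorch sketch of a gradient penalty of this kind, evaluated at random interpolates between real and generated samples; `critic` stands for any module mapping a batch to scalar scores, and the coefficient value is an assumption.

```python
import torch

def gradient_penalty(critic, real, fake, lam=10.0):
    # random interpolates between real and generated samples
    eps = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(x_hat)
    grads, = torch.autograd.grad(scores.sum(), x_hat, create_graph=True)
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()

# inside the critic update (sign convention: minimize this loss):
# loss_d = critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)
```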
A cancellation theorem for Milnor-Witt correspondences
We show that finite Milnor-Witt correspondences satisfy a cancellation theorem with respect to the pointed multiplicative group scheme. This has several notable applications in the theory of Milnor-Witt motives and Milnor-Witt motivic cohomology.
0
0
1
0
0
0
Wind Riemannian spaceforms and Randers metrics of constant flag curvature
Recently, wind Riemannian structures (WRS) have been introduced as a generalization of Randers and Kropina metrics. They are constructed from the natural data for Zermelo navigation problem, namely, a Riemannian metric $g_R$ and a vector field $W$ (the wind), where, now, the restriction of mild wind $g_R(W,W)<1$ is dropped. Here, the models of WRS spaceforms of constant flag curvature are determined. Indeed, the celebrated classification of Randers metrics of constant flag curvature by Bao, Robles and Shen, extended to the Kropina case in the works by Yoshikawa, Okubo and Sabau, can be used to obtain the local classification. For the global one, a suitable result on completeness for WRS yields the complete simply connected models. In particular, any of the local models in the Randers classification does admit an extension to a unique model of wind Riemannian structure, even if it cannot be extended as a complete Finslerian manifold. Thus, WRS's emerge as the natural framework for the analysis of Randers spaceforms and, prospectively, wind Finslerian structures would become important for other global problems too. For the sake of completeness, a brief overview about WRS (including a useful link with the conformal geometry of a class of relativistic spacetimes) is also provided.
0
0
1
0
0
0
Developing a Purely Visual Based Obstacle Detection using Inverse Perspective Mapping
Our solution is implemented in and for the framework of Duckietown. The goal of Duckietown is to provide a relatively simple platform to explore, tackle, and solve many problems linked to autonomous driving. Duckietown is simple in the basics, but an infinitely expandable environment: from controlling a single driving Duckiebot to complete fleet management, every scenario is possible and can be put into practice. So far, none of the existing modules was capable of reliably detecting obstacles and reacting to them in real time. We faced the general problem of detecting obstacles, given images from a monocular RGB camera mounted at the front of our Duckiebot, and reacting to them properly without crashing or erroneously stopping the Duckiebot. Both the detection and the reaction have to be implemented and have to run on a Raspberry Pi in real time. Due to the strong hardware limitations, we decided not to use any learning algorithms for the obstacle detection part. As it later transpired, a working "hard coded" solution needs thorough analysis and understanding of the given problem. In layman's terms, we simply seek to make Duckietown a safer place.
1
0
0
0
0
0
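Since the title above names inverse perspective mapping, here is a minimal OpenCV sketch of that step: a ground-plane homography warps the forward camera view into a bird's-eye view in which obstacle positions can be measured. The four point correspondences are placeholders; in practice they come from the Duckiebot's camera calibration.

```python
import cv2
import numpy as np

img = cv2.imread("frame.png")                     # forward-facing camera frame

# four ground-plane points in the image and their bird's-eye targets (pixels)
src = np.float32([[120, 300], [520, 300], [620, 470], [20, 470]])
dst = np.float32([[0, 0], [400, 0], [400, 600], [0, 600]])

H = cv2.getPerspectiveTransform(src, dst)         # ground-plane homography
birdseye = cv2.warpPerspective(img, H, (400, 600))
cv2.imwrite("birdseye.png", birdseye)
```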
Weak Keys and Cryptanalysis of a Cold War Block Cipher
T-310 is a cipher that was used for the encryption of governmental communications in East Germany during the final years of the Cold War. Due to its complexity and its encryption process, there was no published attack for a period of more than 40 years, until the 2018 work of Nicolas T. Courtois et al. [10]. In this thesis we study the so-called 'long term keys' that were used in the cipher, in order to expose weaknesses which will assist the design of various attacks on T-310.
1
0
0
0
0
0
Dynamic Optimization of Neural Network Structures Using Probabilistic Modeling
Deep neural networks (DNNs) are powerful machine learning models and have succeeded in various artificial intelligence tasks. Although various architectures and modules for DNNs have been proposed, selecting and designing the appropriate network structure for a target problem is a challenging task. In this paper, we propose a method to simultaneously optimize the network structure and weight parameters during neural network training. We consider a probability distribution that generates network structures, and optimize the parameters of the distribution instead of directly optimizing the network structure. The proposed method can be applied to various network structure optimization problems under the same framework. We apply the proposed method to several structure optimization problems, such as the selection of layers, the selection of unit types, and the selection of connections, using the MNIST, CIFAR-10, and CIFAR-100 datasets. The experimental results show that the proposed method can find appropriate and competitive network structures.
0
0
0
1
0
0
Spectroscopy of Ultra-diffuse Galaxies in the Coma Cluster
We present spectra of 5 ultra-diffuse galaxies (UDGs) in the vicinity of the Coma Cluster obtained with the Multi-Object Double Spectrograph on the Large Binocular Telescope. We confirm 4 of these as members of the cluster, quintupling the number of spectroscopically confirmed systems. Like the previously confirmed large (projected half light radius $>$ 4.6 kpc) UDG, DF44, the systems we targeted all have projected half light radii $> 2.9$ kpc. As such, we spectroscopically confirm a population of physically large UDGs in the Coma cluster. The remaining UDG is located in the field, about $45$ Mpc behind the cluster. We observe Balmer and Ca II H \& K absorption lines in all of our UDG spectra. By comparing the stacked UDG spectrum against stellar population synthesis models, we conclude that, on average, these UDGs are composed of metal-poor stars ([Fe/H] $\lesssim -1.5$). We also discover the first UDG with [OII] and [OIII] emission lines within a clustered environment, demonstrating that not all cluster UDGs are devoid of gas and sources of ionizing radiation.
0
1
0
0
0
0
On Robust Tie-line Scheduling in Multi-Area Power Systems
The tie-line scheduling problem in a multi-area power system seeks to optimize tie-line power flows across areas that are independently operated by different system operators (SOs). In this paper, we leverage the theory of multi-parametric linear programming to propose algorithms for optimal tie-line scheduling within a deterministic and a robust optimization framework. Through a coordinator, the proposed algorithms are proved to converge to the optimal schedule within a finite number of iterations. A key feature of the proposed algorithms, besides their finite step convergence, is the privacy of the information exchanges; the SO in an area does not need to reveal its dispatch cost structure, network constraints, or the nature of the uncertainty set to the coordinator. The performance of the algorithms is evaluated using several power system examples.
0
0
1
0
0
0
Novel Phases of Semi-Conducting Silicon Nitride Bilayer: A First-Principle Study
In this paper, we predict the stabilities of several two-dimensional phases of silicon nitride, which we name the \alpha-, \beta-, and \gamma-phases, respectively. Both the \alpha- and \beta-phases have the formula Si$_{2}$N$_{2}$ and consist of two similar layers of buckled SiN sheet. Similarly, the \gamma-phase consists of two puckered SiN sheets. In these phases, the two layers are connected by Si-Si covalent bonds. Transformation between the \alpha- and \beta-phases is difficult because of the high energy barrier. Phonon spectra of both the \alpha- and \beta-phases suggest their thermodynamic stability, because no phonon mode with imaginary frequency is present. By contrast, the \gamma-phase is unstable, because phonon modes with imaginary frequencies are found along the \Gamma-Y path in the Brillouin zone. Both the \alpha- and \beta-phases are semiconductors with narrow fundamental indirect band gaps of 1.7 eV and 1.9 eV, respectively. As expected, only the s and p orbitals in the outermost shells contribute to the band structures. The p$_{z}$ orbitals have a greater contribution near the Fermi level. These materials can easily be exfoliated to form 2D structures, and may have potential electronic applications.
0
1
0
0
0
0
Schwarz-Christoffel: a pillar on a river
Schwarz-Christoffel transformations conformally map the complex upper half-plane onto a region bounded by straight segments. Here we describe how to conveniently couple a map of the lower half-plane to a map of the upper half-plane. We emphasize the need for a clear definition of the angle of a complex number for this coupling. We discuss some examples and give an interesting application to fluid motion.
0
0
1
0
0
0
Towards Audio to Scene Image Synthesis using Generative Adversarial Network
Humans can imagine a scene from a sound. We want machines to do so by using conditional generative adversarial networks (GANs). By applying techniques including spectral normalization, a projection discriminator, and an auxiliary classifier, the model can generate images of better quality than a naive conditional GAN in terms of both subjective and objective evaluations. Almost three-fourths of the people surveyed agree that our model has the ability to generate images related to sounds. By inputting different volumes of the same sound, our model outputs changes of different scales based on the volume, showing that our model truly knows the relationship between sounds and images to some extent.
1
0
0
0
0
0
Data-Mining Research in Education
As an interdisciplinary discipline, data mining (DM) is popular in the education area, especially for examining students' learning performance. It focuses on analyzing education-related data to develop models for improving learners' learning experiences and enhancing institutional effectiveness. DM therefore helps education institutions provide high-quality education for their learners. Applying data mining in education is also known as educational data mining (EDM), which enables us to better understand how students learn and to identify how to improve educational outcomes. The present paper is designed to justify the capabilities of data mining approaches in the field of education. The latest trends in EDM research are introduced in this review. Several specific algorithms, methods, applications, gaps in the current literature, and future insights are discussed here.
1
0
0
1
0
0
Stochastic Input Models in Online Computing
In this paper, we study twelve stochastic input models for online problems and reveal the relationships among the competitive ratios for these models. The competitive ratio is defined as the worst-case ratio between the expected optimal value and the expected profit of the solution obtained by the online algorithm, where the input distribution is restricted according to the model. To handle a broad class of online problems, we use a framework called request-answer games, introduced by Ben-David et al. The stochastic input models consist of two types: known distribution and unknown distribution. For each type, we consider six classes of distributions: dependent distributions, deterministic input, independent distributions, identical independent distributions, random order of a deterministic input, and random order of independent distributions. As an application of the models, we consider two basic online problems, variants of the secretary problem and the prophet inequality problem, under the twelve stochastic input models. Through these problems, we observe the differences among the competitive ratios.
1
0
1
1
0
0
Generating Visual Representations for Zero-Shot Classification
This paper addresses the task of learning an image classifier when some categories are defined by semantic descriptions only (e.g. visual attributes) while the others are defined by exemplar images as well. This task is often referred to as the Zero-Shot Classification task (ZSC). Most of the previous methods rely on learning a common embedding space allowing one to compare visual features of unknown categories with semantic descriptions. This paper argues that these approaches are limited because i) efficient discriminative classifiers can't be used, and ii) classification tasks with seen and unseen categories (Generalized Zero-Shot Classification, or GZSC) can't be addressed efficiently. In contrast, this paper suggests addressing ZSC and GZSC by i) learning a conditional generator using seen classes and ii) generating artificial training examples for the categories without exemplars. ZSC is then turned into a standard supervised learning problem. Experiments with 4 generative models and 5 datasets validate the approach, giving state-of-the-art results on both ZSC and GZSC.
1
0
0
0
0
0
Online Learning Rate Adaptation with Hypergradient Descent
We introduce a general method for improving the convergence rate of gradient-based optimizers that is easy to implement and works well in practice. We demonstrate the effectiveness of the method in a range of optimization problems by applying it to stochastic gradient descent, stochastic gradient descent with Nesterov momentum, and Adam, showing that it significantly reduces the need for the manual tuning of the initial learning rate for these commonly used algorithms. Our method works by dynamically updating the learning rate during optimization using the gradient with respect to the learning rate of the update rule itself. Computing this "hypergradient" needs little additional computation, requires only one extra copy of the original gradient to be stored in memory, and relies upon nothing more than what is provided by reverse-mode automatic differentiation.
1
0
0
1
0
0
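For plain SGD, the update rule described above reduces to adjusting the learning rate by the dot product of consecutive gradients. A minimal sketch on a toy quadratic objective, with hypothetical step sizes:

```python
import numpy as np

def grad(theta):                      # gradient of f(theta) = 0.5 * ||theta||^2
    return theta

theta = np.array([5.0, -3.0])
alpha, beta = 0.01, 0.001             # initial learning rate and hyper-learning-rate
g_prev = np.zeros_like(theta)

for _ in range(100):
    g = grad(theta)
    alpha += beta * (g @ g_prev)      # hypergradient update of the learning rate
    theta -= alpha * g                # ordinary SGD step with the adapted rate
    g_prev = g

print(theta, alpha)
```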
Mitigating radiation damage of single photon detectors for space applications
Single-photon detectors in space must retain useful performance characteristics despite being bombarded with sub-atomic particles. Mitigating the effects of this space radiation is vital to enabling new space applications which require high-fidelity single-photon detection. To this end, we conducted proton radiation tests of various models of avalanche photodiodes (APDs) and one model of photomultiplier tube potentially suitable for satellite-based quantum communications. The samples were irradiated with 106 MeV protons at doses approximately equivalent to lifetimes of 0.6, 6, 12 and 24 months in a low-Earth polar orbit. Although most detection properties were preserved, including efficiency, timing jitter and afterpulsing probability, all APD samples demonstrated significant increases in dark count rate (DCR) due to radiation-induced damage, many orders of magnitude higher than the 200 counts per second (cps) required for ground-to-satellite quantum communications. We then successfully demonstrated the mitigation of this DCR degradation through the use of deep cooling, to as low as -86 degrees C. This achieved DCR below the required 200 cps over the 24 month orbit duration. DCR was further reduced by thermal annealing at temperatures of +50 to +100 degrees C.
0
1
0
0
0
0
The Parameterized Complexity of Positional Games
We study the parameterized complexity of several positional games. Our main result is that Short Generalized Hex is W[1]-complete parameterized by the number of moves. This solves an open problem from Downey and Fellows' influential list of open problems from 1999. Previously, the problem was thought of as a natural candidate for AW[*]-completeness. Our main tool is a new fragment of first-order logic where universally quantified variables only occur in inequalities. We show that model-checking on arbitrary relational structures for a formula in this fragment is W[1]-complete when parameterized by formula size. We also consider a general framework where a positional game is represented as a hypergraph and two players alternately pick vertices. In a Maker-Maker game, the first player to have picked all the vertices of some hyperedge wins the game. In a Maker-Breaker game, the first player wins if she picks all the vertices of some hyperedge, and the second player wins otherwise. In an Enforcer-Avoider game, the first player wins if the second player picks all the vertices of some hyperedge, and the second player wins otherwise. Short Maker-Maker is AW[*]-complete, whereas Short Maker-Breaker is W[1]-complete and Short Enforcer-Avoider co-W[1]-complete parameterized by the number of moves. This suggests a rough parameterized complexity categorization into positional games that are complete for the first level of the W-hierarchy when the winning configurations only depend on which vertices one player has been able to pick, but AW[*]-completeness when the winning condition depends on which vertices both players have picked. However, some positional games where the board and the winning configurations are highly structured are fixed-parameter tractable. We give another example of such a game, Short k-Connect, which is fixed-parameter tractable when parameterized by the number of moves.
1
0
0
0
0
0
Lunar laser ranging in infrared at the Grasse laser station
For many years, lunar laser ranging (LLR) observations using a green wavelength have suffered an inhomogeneity problem both temporally and spatially. This paper reports on the implementation of a new infrared detection at the Grasse LLR station and describes how infrared telemetry improves this situation. Our first results show that infrared detection permits us to densify the observations and allows measurements during the new and the full Moon periods. The link budget improvement leads to homogeneous telemetric measurements on each lunar retro-reflector. Finally, a surprising result is obtained on the Lunokhod 2 array which attains the same efficiency as Lunokhod 1 with an infrared laser link, although those two targets exhibit a differential efficiency of six with a green laser link.
0
1
0
0
0
0
An exact solution to a Stefan problem with variable thermal conductivity and a Robin boundary condition
In this article, the existence of similarity solutions for a one-phase Stefan problem with temperature-dependent thermal conductivity and a Robin condition at the fixed face is proved. The temperature distribution is obtained through a generalized modified error function, which is defined as the solution to a nonlinear ordinary differential problem of second order. It is proved that the latter has a unique non-negative bounded analytic solution when the parameter on which it depends assumes small positive values. Moreover, it is shown that the generalized modified error function is concave and increasing, and explicit approximations for it are proposed. The relation between the Stefan problem considered in this article and those with either constant thermal conductivity or a temperature boundary condition is also analysed.
0
0
1
0
0
0
State-of-the-art Speech Recognition With Sequence-to-Sequence Models
Attention-based encoder-decoder architectures such as Listen, Attend, and Spell (LAS) subsume the acoustic, pronunciation and language model components of a traditional automatic speech recognition (ASR) system into a single neural network. In previous work, we have shown that such architectures are comparable to state-of-the-art ASR systems on dictation tasks, but it was not clear if such architectures would be practical for more challenging tasks such as voice search. In this work, we explore a variety of structural and optimization improvements to our LAS model which significantly improve performance. On the structural side, we show that word piece models can be used instead of graphemes. We also introduce a multi-head attention architecture, which offers improvements over the commonly-used single-head attention. On the optimization side, we explore synchronous training, scheduled sampling, label smoothing, and minimum word error rate optimization, which are all shown to improve accuracy. We present results with a unidirectional LSTM encoder for streaming recognition. On a 12,500 hour voice search task, we find that the proposed changes improve the WER from 9.2% to 5.6%, while the best conventional system achieves 6.7%; on a dictation task our model achieves a WER of 4.1% compared to 5% for the conventional system.
1
0
0
1
0
0
Bayesian inference for Stable Levy driven Stochastic Differential Equations with high-frequency data
In this article we consider parametric Bayesian inference for stochastic differential equations (SDE) driven by a pure-jump stable Levy process, which is observed at high frequency. In most cases of practical interest, the likelihood function is not available, so we use a quasi-likelihood and place an associated prior on the unknown parameters. It is shown under regularity conditions that there is a Bernstein-von Mises theorem associated to the posterior. We then develop a Markov chain Monte Carlo (MCMC) algorithm for Bayesian inference and assisted by our theoretical results, we show how to scale Metropolis-Hastings proposals when the frequency of the data grows, in order to prevent the acceptance ratio going to zero in the large data limit. Our algorithm is presented on numerical examples that help to verify our theoretical findings.
0
0
1
1
0
0
The Evolution of Reputation-Based Cooperation in Regular Networks
Despite recent advances in reputation technologies, it is not clear how reputation systems can affect human cooperation in social networks. Although it is known that two of the major mechanisms in the evolution of cooperation are spatial selection and reputation-based reciprocity, theoretical study of the interplay between both mechanisms remains almost uncharted. Here, we present a new individual-based model for the evolution of reciprocal cooperation that couples reputation and networks. We comparatively analyze four of the leading moral assessment rules---shunning, image scoring, stern judging, and simple standing---and base the model on the giving game in regular networks with Cooperators, Defectors, and Discriminators. Discriminators rely on a proper moral assessment rule. By using individual-based models, we show that the four assessment rules are differently characterized in terms of how cooperation evolves, depending on the benefit-to-cost ratio, the network-node degree, and the observation and error conditions. Our findings show that the most tolerant rule---simple standing---is the most robust among the four assessment rules in promoting cooperation in regular networks.
1
1
0
0
0
0
Cocycles of nilpotent quotients of free groups
We focus on the cohomology of the $k$-th nilpotent quotient of the free group, $F/F_k$. This paper describes all the group 2-, 3-cocycles in terms of Massey products, and gives expressions for some of the 3-cocycles. We also give simple proofs of some of the results on Milnor invariants and the Johnson-Morita homomorphisms.
0
0
1
0
0
0
The ABCD of topological recursion
Kontsevich and Soibelman reformulated and slightly generalised the topological recursion of math-ph/0702045, seeing it as a quantization of certain quadratic Lagrangians in $T^*V$ for some vector space $V$. KS topological recursion is a procedure which takes as initial data a quantum Airy structure -- a family of at most quadratic differential operators on $V$ satisfying some axioms -- and gives as outcome a formal series of functions in $V$ (the partition function) simultaneously annihilated by these operators. Finding and classifying quantum Airy structures modulo gauge group action, is by itself an interesting problem which we study here. We provide some elementary, Lie-algebraic tools to address this problem, and give some elements of classification for ${\rm dim}\,V = 2$. We also describe four more interesting classes of quantum Airy structures, coming from respectively Frobenius algebras (here we retrieve the 2d TQFT partition function as a special case), non-commutative Frobenius algebras, loop spaces of Frobenius algebras and a $\mathbb{Z}_{2}$-invariant version of the latter. This $\mathbb{Z}_{2}$-invariant version in the case of a semi-simple Frobenius algebra corresponds to the topological recursion of math-ph/0702045.
0
0
1
0
0
0
Evaluating Compositionality in Sentence Embeddings
An important challenge for human-like AI is compositional semantics. Recent research has attempted to address this by using deep neural networks to learn vector space embeddings of sentences, which then serve as input to other tasks. We present a new dataset for one such task, `natural language inference' (NLI), that cannot be solved using only word-level knowledge and requires some compositionality. We find that the performance of state of the art sentence embeddings (InferSent; Conneau et al., 2017) on our new dataset is poor. We analyze the decision rules learned by InferSent and find that they are consistent with simple heuristics that are ecologically valid in its training dataset. Further, we find that augmenting training with our dataset improves test performance on our dataset without loss of performance on the original training dataset. This highlights the importance of structured datasets in better understanding and improving AI systems.
0
0
0
1
0
0
A rigorous demonstration of the validity of Boltzmann's scenario for the spatial homogenization of a freely expanding gas and the equilibration of the Kac ring
Boltzmann provided a scenario to explain why individual macroscopic systems composed of a large number $N$ of microscopic constituents are inevitably (i.e., with overwhelming probability) observed to approach a unique macroscopic state of thermodynamic equilibrium, and why, after having done so, they are then observed to remain in that state, apparently forever. We provide here rigorous new results that mathematically prove the basic features of Boltzmann's scenario for two classical models: a simple boundary-free model for the spatial homogenization of a non-interacting gas of point particles, and the well-known Kac ring model. Our results, based on concentration inequalities that go back to Hoeffding, and which focus on the typical behavior of individual macroscopic systems, improve upon previous results by providing estimates, exponential in $N$, of the probabilities and time scales involved.
0
1
1
0
0
0
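The Kac ring model treated above is simple enough to simulate in a few lines, which makes the typical-behavior statements easy to visualize: with marker density mu, the mean colour decays like (1 - 2*mu)^t towards equilibrium. A minimal sketch (parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, mu, T = 10_000, 0.1, 60            # sites, marker density, ticks
markers = rng.random(N) < mu          # marked edges (from site i to site i+1)
balls = np.ones(N, dtype=int)         # +1 = white, -1 = black; start all white

for t in range(1, T + 1):
    balls = np.where(markers, -balls, balls)  # flip when crossing a marked edge
    balls = np.roll(balls, 1)                 # every ball advances one site
    if t % 10 == 0:
        # empirical mean colour vs the macroscopic prediction (1 - 2*mu)^t
        print(t, balls.mean(), (1 - 2 * mu) ** t)
```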
Half-range lattice Boltzmann models for the simulation of Couette flow using the Shakhov collision term
The three-dimensional Couette flow between parallel plates is addressed using mixed lattice Boltzmann models which implement the half-range and the full-range Gauss-Hermite quadratures on the Cartesian axes perpendicular and parallel to the walls, respectively. The ability of our models to simulate rarefied flows is validated through comparison against previously reported results obtained using the linearized Boltzmann-BGK equation for values of the Knudsen number (Kn) up to $100$. We find that recovering the non-linear part of the velocity profile (i.e., its deviation from a linear function) at ${\rm Kn} \gtrsim 1$ requires high quadrature orders. We then employ the Shakhov model for the collision term to obtain macroscopic profiles for Maxwell molecules using the standard $\mu \sim T^\omega$ law, as well as for monatomic Helium and Argon gases, modeled through ab-initio potentials, where the viscosity is recovered using the Sutherland model. We validate our implementation by comparison with DSMC results and find an excellent match for all macroscopic quantities for ${\rm Kn} \lesssim 0.1$. At ${\rm Kn} \gtrsim 0.1$, small deviations can be seen in the profiles of the diagonal components of the pressure tensor, the heat flux parallel to the plates, and the velocity profile, as well as in the values of the velocity gradient at the channel center. We attribute these deviations to the limited applicability of the Shakhov collision model for highly out-of-equilibrium flows.
0
1
0
0
0
0
A Nonlinear Dimensionality Reduction Framework Using Smooth Geodesics
Existing dimensionality reduction methods are adept at revealing hidden underlying manifolds arising from high-dimensional data and thereby producing a low-dimensional representation. However, the smoothness of the manifolds produced by classic techniques over sparse and noisy data is not guaranteed. In fact, the embedding generated using such data may distort the geometry of the manifold and thereby produce an unfaithful embedding. Herein, we propose a framework for nonlinear dimensionality reduction that generates a manifold in terms of smooth geodesics and is designed to treat problems in which manifold measurements are either sparse or corrupted by noise. Our method generates a network structure for the given high-dimensional data using a nearest-neighbor search and then produces piecewise linear shortest paths that are defined as geodesics. We then fit the points in each geodesic with a smoothing spline to emphasize smoothness. The robustness of this approach for sparse and noisy datasets is demonstrated by the implementation of the method on synthetic and real-world datasets.
1
0
0
1
0
0
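A rough sketch of the pipeline described above, under the assumption that standard components suffice: a k-nearest-neighbour graph, Dijkstra shortest paths as piecewise linear geodesics, and a smoothing spline along each path. The data and parameters are toy choices; this is not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3 * np.pi, 300))
X = np.c_[t * np.cos(t), t * np.sin(t)] + 0.1 * rng.standard_normal((300, 2))

G = kneighbors_graph(X, n_neighbors=8, mode="distance")
dist, pred = shortest_path(G, method="D", directed=False, return_predecessors=True)

i, j = 0, 299                         # endpoints of one geodesic
path = [j]
while path[-1] != i:                  # walk predecessors back to the source
    path.append(pred[i, path[-1]])
path = np.array(path[::-1])

# arclength parameter along the piecewise linear geodesic
s = np.insert(np.cumsum(np.linalg.norm(np.diff(X[path], axis=0), axis=1)), 0, 0.0)
# smooth each coordinate along the path with a spline
smooth = np.stack([UnivariateSpline(s, X[path, d], s=1.0)(s) for d in range(2)], axis=1)
```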
Analyzing and improving maximal attainable accuracy in the communication hiding pipelined BiCGStab method
Pipelined Krylov subspace methods avoid communication latency by reducing the number of global synchronization bottlenecks and by hiding global communication behind useful computational work. In exact arithmetic pipelined Krylov subspace algorithms are equivalent to classic Krylov subspace methods and generate identical series of iterates. However, as a consequence of the reformulation of the algorithm to improve parallelism, pipelined methods may suffer from severely reduced attainable accuracy in a practical finite precision setting. This work presents a numerical stability analysis that describes and quantifies the impact of local rounding error propagation on the maximal attainable accuracy of the multi-term recurrences in the preconditioned pipelined BiCGStab method. Theoretical expressions for the gaps between the true and computed residual as well as other auxiliary variables used in the algorithm are derived, and the elementary dependencies between the gaps on the various recursively computed vector variables are analyzed. The norms of the corresponding propagation matrices and vectors provide insights in the possible amplification of local rounding errors throughout the algorithm. Stability of the pipelined BiCGStab method is compared numerically to that of pipelined CG on a symmetric benchmark problem. Furthermore, numerical evidence supporting the effectiveness of employing a residual replacement type strategy to improve the maximal attainable accuracy for the pipelined BiCGStab method is provided.
1
0
0
0
0
0
Associated Graded Rings and Connected Sums
In 2012, Ananthnarayan, Avramov and Moore gave a new construction of Gorenstein rings from two Gorenstein local rings, called their connected sum. In this article, we investigate conditions on the associated graded ring of a Gorenstein Artin local ring Q, which force it to be a connected sum over its residue field. In particular, we recover some results regarding short, and stretched, Gorenstein Artin rings. Finally, using these decompositions, we obtain results about the rationality of the Poincare series of Q.
0
0
1
0
0
0