Dataset schema (one record per arXiv paper: a title, an abstract, and six binary subject labels):
- title: string, 7 to 239 characters
- abstract: string, 7 to 2.76k characters
- cs: int64, 0 or 1
- phy: int64, 0 or 1
- math: int64, 0 or 1
- stat: int64, 0 or 1
- quantitative biology: int64, 0 or 1
- quantitative finance: int64, 0 or 1
On the generalized nonlinear Camassa-Holm equation
In this paper, a generalized nonlinear Camassa-Holm equation with time- and space-dependent coefficients is considered. We show that the higher-order dispersive term can be controlled by using an adequate weight function to define the energy. The existence and uniqueness of solutions are obtained via a Picard iterative method.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
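As a worked illustration of the Picard scheme invoked above, in generic form (the paper's specific function space and iteration are not reproduced): writing the equation abstractly as $u_t = F(u)$, $u(0) = u_0$, one iterates on the integral form

```latex
u^{(0)}(t) = u_0, \qquad
u^{(n+1)}(t) = u_0 + \int_0^t F\bigl(u^{(n)}(s)\bigr)\, ds,
```

and existence and uniqueness follow once $F$ is shown to be Lipschitz on a suitable Banach space, so that the iteration is a contraction for small times.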
802.11 Wireless Simulation and Anomaly Detection using HMM and UBM
Despite the growing popularity of 802.11 wireless networks, users often suffer from connectivity problems and performance issues due to unstable radio conditions and dynamic user behavior, among other reasons. Anomaly detection and distinction are among the major challenges that network managers encounter. The complexity of monitoring broad and complex WLANs, which often requires heavy instrumentation of user devices, makes anomaly detection analysis even harder. In this paper we exploit 802.11 access point usage data and propose an anomaly detection technique based on the Hidden Markov Model (HMM) and Universal Background Model (UBM) applied to data that is inexpensive to obtain. We then generate a number of anomalous network scenarios in the OMNeT++/INET network simulator and compare the detection outcomes with those of baseline approaches (RawData and PCA). The experimental results show the superiority of the HMM and HMM-UBM models in detection precision and sensitivity.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
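A minimal sketch of the HMM/UBM scoring idea described above: fit one "background" HMM on pooled normal traffic and flag sequences whose log-likelihood falls below a calibrated threshold. The features and threshold rule are illustrative assumptions, not the paper's pipeline; it assumes hmmlearn is installed.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
normal = [rng.normal(0, 1, size=(50, 3)) for _ in range(20)]   # per-AP usage feature sequences
anomalous = rng.normal(4, 1, size=(50, 3))                     # shifted behaviour

# Universal Background Model: a single HMM fit on the pooled "normal" traffic.
ubm = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
ubm.fit(np.vstack(normal), lengths=[len(s) for s in normal])

threshold = min(ubm.score(s) for s in normal)                  # crude calibration on training data
print("anomaly" if ubm.score(anomalous) < threshold else "normal")
```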
Early MFCC And HPCP Fusion for Robust Cover Song Identification
While most schemes for automatic cover song identification have focused on note-based features such as HPCP and chord profiles, a few recent papers surprisingly showed that local self-similarities of MFCC-based features also have classification power for this task. Since MFCC and HPCP capture complementary information, we design an unsupervised algorithm that combines normalized, beat-synchronous blocks of these features using cross-similarity fusion before attempting to locally align a pair of songs. As an added bonus, our scheme naturally incorporates structural information in each song to fill in alignment gaps where both feature sets fail. We show a striking jump in performance over MFCC and HPCP alone, achieving a state-of-the-art mean reciprocal rank of 0.87 on the Covers80 dataset. We also introduce a new medium-sized, hand-designed benchmark dataset called "Covers 1000," which consists of 395 cliques of cover songs for a total of 1000 songs, and we show that our algorithm achieves an MRR of 0.9 on this dataset for the first correctly identified song in a clique. We provide the precomputed HPCP and MFCC features, as well as beat intervals, for all songs in the Covers 1000 dataset for use in further research.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
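A minimal sketch of combining self-similarity structure from two feature types, as a stand-in for the fusion step described above. The paper's actual cross-similarity fusion is more elaborate; the random matrices here are hypothetical placeholders for beat-synchronous MFCC and HPCP blocks.

```python
import numpy as np

def self_similarity(features):
    """Cosine self-similarity of a (frames x dims) feature matrix."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-9)
    return f @ f.T

rng = np.random.default_rng(1)
mfcc_blocks = rng.normal(size=(100, 20))   # hypothetical MFCC block features
hpcp_blocks = rng.normal(size=(100, 12))   # hypothetical HPCP block features

# Naive late fusion: average the two similarity matrices before local alignment
# (e.g. Smith-Waterman) is attempted on a pair of songs.
fused = 0.5 * (self_similarity(mfcc_blocks) + self_similarity(hpcp_blocks))
print(fused.shape)
```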
Optimally Gathering Two Robots
We present an algorithm that ensures, in finite time, the gathering of two robots in the non-rigid ASYNC model. To circumvent established impossibility results, we assume robots are equipped with 2-color lights and are able to measure distances between one another. Aside from its light, a robot has no memory of its past actions, and its protocol is deterministic. Since, in the same model, gathering is impossible when lights have a single color, our solution is optimal with respect to the number of colors used.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
On Grauert-Riemenschneider type criterions
Let $(X,\omega)$ be a compact Hermitian manifold of complex dimension $n$. In this article, we first survey recent progress towards Grauert-Riemenschneider type criterions. Secondly, we give a simplified proof of Boucksom's conjecture given by the author under the assumption that the Hermitian metric $\omega$ satisfies $\partial\overline{\partial}\omega^l=0$ for all $l$, i.e., if $T$ is a closed positive current on $X$ such that $\int_XT_{ac}^n>0$, then the class $\{T\}$ is big and $X$ is Kähler. Finally, as an easy observation, we point out that Nguyen's result can be generalized as follows: if $\partial\overline{\partial}\omega=0$, and $T$ is a closed positive current with analytic singularities, such that $\int_XT^n_{ac}>0$, then the class $\{T\}$ is big and $X$ is Kähler.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
P-Governance Technology: Using Big Data for Political Party Management
Information and Communication Technology (ICT) has been playing a pivotal role over the last decade in developing countries, bringing citizen services to people's doorsteps and connecting people. With this aspiration, ICT has introduced several citizen-service technologies for all categories of people. The purpose of this study is to examine governance technology perspectives for political parties, emphasizing the basic critical steps through which such technology could be operationalized. We call it P-Governance. P-Governance provides technologies to ensure governance, management, and interactive communication in a political party by improving decision-making processes using big data. P-Governance challenges the competence perspective to apply itself more assiduously to operationalization, including the need to choose and give definition to one or more units of analysis (of which the routine is a promising candidate). This paper focuses on research challenges posed by competence to which P-Governance can and should respond, including the different strategy issues faced by particular sections. Both qualitative and quantitative research approaches were used. The standard of citizen services, choice and consultation, courtesy and consultation, entrance and information, and value for money were found to be positively related to citizens' satisfaction. This study shows how technology can play an important role in political movements in developing countries using big data.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Forgettable-Watcher Model for Video Question Answering
A number of visual question answering approaches have been proposed recently, aiming at understanding visual scenes by answering natural language questions. While image question answering has drawn significant attention, video question answering is largely unexplored. Video-QA differs from Image-QA in that the information and events are scattered among multiple frames. In order to better utilize the temporal structure of the videos and the phrasal structures of the answers, we propose two mechanisms, re-watching and re-reading, and combine them into the forgettable-watcher model. We then propose a TGIF-QA dataset for video question answering with the help of automatic question generation. Finally, we evaluate the models on our dataset. The experimental results show the effectiveness of our proposed models.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Analogues of the $p^n$th Hilbert symbol in characteristic $p$ (updated)
The $p$th degree Hilbert symbol $(\cdot,\cdot )_p:K^\times/K^{\times p}\times K^\times/K^{\times p}\to{}_p{\rm Br}(K)$ from characteristic $\neq p$ has two analogues in characteristic $p$, $$[\cdot,\cdot )_p:K/\wp (K)\times K^\times/K^{\times p}\to{}_p{\rm Br}(K),$$ where $\wp$ is the Artin-Schreier map $x\mapsto x^p-x$, and $$((\cdot,\cdot ))_p:K/K^p\times K/K^p\to{}_p{\rm Br}(K).$$ The symbol $[\cdot,\cdot )_p$ generalizes to an analogue of $(\cdot,\cdot )_{p^n}$ via the Witt vectors, $$[\cdot,\cdot )_{p^n}:W_n(K)/\wp (W_n(K))\times K^\times/K^{\times p^n}\to{}_{p^n}{\rm Br}(K).$$ Here $W_n(K)$ is the truncation of length $n$ of the ring of $p$-typical Witt vectors, i.e. $W_{\{1,p,\ldots,p^{n-1}\}}(K)$. In this paper we construct similar generalizations for $((\cdot,\cdot ))_p$. Our construction involves Witt vectors and Weyl algebras. In the process we obtain a new kind of Weyl algebra in characteristic $p$, with many interesting properties. The symbols we introduce, $((\cdot,\cdot ))_{p^n}$ and, more generally, $((\cdot,\cdot ))_{p^m,p^n}$, which here are defined in terms of central simple algebras, coincide with the homonymous symbols we introduced in [arXiv:1711.00980] in terms of the symbols $[\cdot,\cdot )_{p^n}$. This will be proved in a future paper. In the present paper we only introduce the symbols and prove that they have the same properties as the symbols from [arXiv:1711.00980]. These properties are enough to obtain the representation theorem for ${}_{p^n}{\rm Br}(K)$ from [arXiv:1711.00980], Theorem 4.10.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
GAMER-2: a GPU-accelerated adaptive mesh refinement code -- accuracy, performance, and scalability
We present GAMER-2, a GPU-accelerated adaptive mesh refinement (AMR) code for astrophysics. It provides a rich set of features, including adaptive time-stepping, several hydrodynamic schemes, magnetohydrodynamics, self-gravity, particles, star formation, chemistry and radiative processes with GRACKLE, data analysis with yt, and a memory pool for efficient object allocation. GAMER-2 is fully bitwise reproducible. For performance optimization, it adopts hybrid OpenMP/MPI/GPU parallelization and overlaps CPU computation, GPU computation, and CPU-GPU communication. Load balancing is achieved using a Hilbert space-filling curve on a level-by-level basis without the need to duplicate the entire AMR hierarchy on each MPI process. To provide convincing demonstrations of the accuracy and performance of GAMER-2, we directly compare with Enzo on isolated disk galaxy simulations and with FLASH on galaxy cluster merger simulations. We show that the physical results obtained by the different codes are in very good agreement, and GAMER-2 outperforms Enzo and FLASH by nearly one and two orders of magnitude, respectively, on the Blue Waters supercomputer using $1-256$ nodes. More importantly, GAMER-2 exhibits similar or even better parallel scalability compared to the other two codes. We also demonstrate good weak and strong scaling using up to 4096 GPUs and 65,536 CPU cores, and achieve a uniform resolution as high as $10{,}240^3$ cells. Furthermore, GAMER-2 can be adopted as an AMR+GPUs framework and has been extensively used for wave dark matter ($\psi$DM) simulations. GAMER-2 is open source (available at this https URL) and new contributions are welcome.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Partition algebras $\mathsf{P}_k(n)$ with $2k>n$ and the fundamental theorems of invariant theory for the symmetric group $\mathsf{S}_n$
Assume $\mathsf{M}_n$ is the $n$-dimensional permutation module for the symmetric group $\mathsf{S}_n$, and let $\mathsf{M}_n^{\otimes k}$ be its $k$-fold tensor power. The partition algebra $\mathsf{P}_k(n)$ maps surjectively onto the centralizer algebra $\mathsf{End}_{\mathsf{S}_n}(\mathsf{M}_n^{\otimes k})$ for all $k, n \in \mathbb{Z}_{\ge 1}$ and isomorphically when $n \ge 2k$. We describe the image of the surjection $\Phi_{k,n}:\mathsf{P}_k(n) \to \mathsf{End}_{\mathsf{S}_n}(\mathsf{M}_n^{\otimes k})$ explicitly in terms of the orbit basis of $\mathsf{P}_k(n)$ and show that when $2k > n$ the kernel of $\Phi_{k,n}$ is generated by a single essential idempotent $\mathsf{e}_{k,n}$, which is an orbit basis element. We obtain a presentation for $\mathsf{End}_{\mathsf{S}_n}(\mathsf{M}_n^{\otimes k})$ by imposing one additional relation, $\mathsf{e}_{k,n} = 0$, on the standard presentation of the partition algebra $\mathsf{P}_k(n)$ when $2k > n$. As a consequence, we obtain the fundamental theorems of invariant theory for the symmetric group $\mathsf{S}_n$. We show under the natural embedding of the partition algebra $\mathsf{P}_n(n)$ into $\mathsf{P}_k(n)$ for $k \ge n$ that the essential idempotent $\mathsf{e}_{n,n}$ generates the kernel of $\Phi_{k,n}$. Therefore, the relation $\mathsf{e}_{n,n} = 0$ can replace $\mathsf{e}_{k,n} = 0$ when $k \ge n$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Distributed sub-optimal resource allocation over weight-balanced graph via singular perturbation
In this paper, we consider distributed optimization design for resource allocation problems over weight-balanced graphs. With the help of singular perturbation analysis, we propose a simple sub-optimal continuous-time optimization algorithm. Moreover, we prove the existence and uniqueness of the algorithm's equilibrium, and then show convergence at an exponential rate. Finally, we verify the sub-optimality of the algorithm, which can approach the optimal solution as an adjustable parameter tends to zero.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
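The underlying resource allocation problem typically takes the following form (a standard formulation, assumed here rather than quoted from the paper):

```latex
\min_{x_1,\dots,x_N}\ \sum_{i=1}^{N} f_i(x_i)
\quad \text{subject to} \quad \sum_{i=1}^{N} x_i = \sum_{i=1}^{N} d_i ,
```

where $f_i$ is agent $i$'s local cost and $d_i$ its local demand; each agent exchanges information only with its neighbors on the weight-balanced graph.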
Decentralized P2P Energy Trading under Network Constraints in a Low-Voltage Network
The increasing uptake of distributed energy resources (DERs) in distribution systems and the rapid advance of technology have established new scenarios in the operation of low-voltage networks. In particular, recent trends in cryptocurrencies and blockchain have led to a proliferation of peer-to-peer (P2P) energy trading schemes, which allow the exchange of energy between neighbors without any intervention of a conventional intermediary in the transactions. Nevertheless, far too little attention has been paid to the technical constraints of the network under this scenario. A major challenge to implementing P2P energy trading is that of ensuring that network constraints are not violated during the energy exchange. This paper proposes a methodology based on sensitivity analysis to assess the impact of P2P transactions on the network and to guarantee an exchange of energy that does not violate network constraints. The proposed method is tested on a typical UK low-voltage network. The results show that our method ensures that energy is exchanged between users under the P2P scheme without violating the network constraints, and that users can still capture the economic benefits of the P2P architecture.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
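A sketch of the sensitivity-based feasibility check described above: a proposed P2P trade is accepted only if the linearized voltage change keeps all buses within limits. The sensitivity matrix (dV/dP) would be obtained from a power-flow solution; the values below are illustrative, not from the paper.

```python
import numpy as np

S = np.array([[0.02, 0.01],     # illustrative volts-per-kW sensitivities for 2 buses
              [0.01, 0.03]])
v = np.array([0.99, 1.04])      # current bus voltages (p.u.)
v_min, v_max = 0.94, 1.06

def trade_feasible(delta_p):
    """delta_p: net power injection change (kW) at each bus implied by the trade."""
    v_new = v + S @ delta_p
    return bool(np.all((v_new >= v_min) & (v_new <= v_max)))

print(trade_feasible(np.array([1.0, -1.0])))   # seller at bus 0, buyer at bus 1
```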
Fictitious GAN: Training GANs with Historical Models
Generative adversarial networks (GANs) are powerful tools for learning generative models. In practice, the training may suffer from lack of convergence. GANs are commonly viewed as a two-player zero-sum game between two neural networks. Here, we leverage this game theoretic view to study the convergence behavior of the training process. Inspired by the fictitious play learning process, a novel training method, referred to as Fictitious GAN, is introduced. Fictitious GAN trains the deep neural networks using a mixture of historical models. Specifically, the discriminator (resp. generator) is updated according to the best-response to the mixture outputs from a sequence of previously trained generators (resp. discriminators). It is shown that Fictitious GAN can effectively resolve some convergence issues that cannot be resolved by the standard training approach. It is proved that asymptotically the average of the generator outputs has the same distribution as the data samples.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
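For intuition, here is classical fictitious play in a two-player zero-sum matrix game, the learning process that Fictitious GAN transplants to GAN training: each player best-responds to the empirical mixture of the opponent's past strategies. This is an illustration of the underlying game dynamics, not the paper's training code.

```python
import numpy as np

A = np.array([[0., 1., -1.],      # rock-paper-scissors payoff to the row player
              [-1., 0., 1.],
              [1., -1., 0.]])

counts_row = np.ones(3)           # historical action counts (row player)
counts_col = np.ones(3)
for _ in range(20000):
    # Row best-responds to the column player's empirical mixture, and vice versa.
    counts_row[np.argmax(A @ (counts_col / counts_col.sum()))] += 1
    counts_col[np.argmin((counts_row / counts_row.sum()) @ A)] += 1

print(counts_row / counts_row.sum())  # converges toward the (1/3, 1/3, 1/3) equilibrium
```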
Suppression of Hall number due to charge density wave order in high-$T_c$ cuprates
Understanding the pseudogap phase in hole-doped high temperature cuprate superconductors remains a central challenge in condensed matter physics. From a host of recent experiments there is now compelling evidence of translational symmetry breaking charge density wave (CDW) order in a wide range of doping inside this phase. Two distinct types of incommensurate charge order -- bidirectional at zero or low magnetic fields and unidirectional at high magnetic fields close to the upper critical field $H_{c2}$ -- have been reported so far in approximately the same doping range between $p\simeq 0.08$ and $p\simeq 0.16$. In concurrent developments, recent high field Hall experiments have also revealed two indirect but striking signatures of Fermi surface reconstruction in the pseudogap phase, namely, a sign change of the Hall coefficient to negative values at low temperatures in an intermediate range of hole doping, and a rapid suppression of the positive Hall number without change in sign near optimal doping $p \sim 0.19$. We show that the assumption of a unidirectional incommensurate CDW (with or without a coexisting weak bidirectional order) at high magnetic fields near optimal doping, and a coexistence of both types of orders of approximately equal magnitude at high magnetic fields in the intermediate doping range, may help explain the striking behavior of the low temperature Hall effect in the entire pseudogap phase.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Wavefunction of the Collapsing Bose-Einstein Condensate
Bose-Einstein condensates with tunable interatomic interactions have been studied intensely in recent experiments. The investigation of the collapse of a condensate following a sudden change in the nature of the interaction from repulsive to attractive has led to the observation of a remnant condensate that did not undergo further collapse. We suggest that this high-density remnant is in fact the absolute minimum of the energy, if the attractive atomic interactions are nonlocal, and is therefore inherently stable. We show that a variational trial function consisting of a superposition of two distinct Gaussians is an accurate representation of the wavefunction of the ground state of the conventional local Gross-Pitaevskii field equation for an attractive condensate and gives correctly the points of emergence of instability. We then use such a superposition of two Gaussians as a variational trial function in order to calculate the minima of the energy when it includes a nonlocal interaction term. We use experimental data in order to study the long range of the nonlocal interaction, showing that it agrees very well with a dimensionally derived expression for this range.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
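A trial function of the kind described above can be written as (form assumed for illustration; amplitudes and widths are the variational parameters):

```latex
\psi(r) \;=\; A_1\, e^{-r^2 / 2\sigma_1^2} \;+\; A_2\, e^{-r^2 / 2\sigma_2^2},
```

with the narrow Gaussian capturing the dense remnant and the broad one the dilute background; the energy is minimized over $(A_1, A_2, \sigma_1, \sigma_2)$ subject to normalization.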
Not-So-Random Features
We propose a principled method for kernel learning, which relies on a Fourier-analytic characterization of translation-invariant or rotation-invariant kernels. Our method produces a sequence of feature maps, iteratively refining the SVM margin. We provide rigorous guarantees for optimality and generalization, interpreting our algorithm as online equilibrium-finding dynamics in a certain two-player min-max game. Evaluations on synthetic and real-world datasets demonstrate scalability and consistent improvements over related random features-based methods.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Dual-Primal Graph Convolutional Networks
In recent years, there has been a surge of interest in developing deep learning methods for non-Euclidean structured data such as graphs. In this paper, we propose Dual-Primal Graph CNN, a graph convolutional architecture that alternates convolution-like operations on the graph and its dual. Our approach allows us to learn both vertex and edge features and generalizes the previous graph attention (GAT) model. We provide extensive experimental validation showing state-of-the-art results on a variety of tasks tested on established graph benchmarks, including the CORA and Citeseer citation networks as well as the MovieLens, Flixter, Douban and Yahoo Music graph-guided recommender systems.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
CLEAR: Coverage-based Limiting-cell Experiment Analysis for RNA-seq
Direct cDNA preamplification protocols developed for single-cell RNA-seq (scRNA-seq) have enabled transcriptome profiling of rare cells without having to pool multiple samples or to perform RNA extraction. We term this approach limiting-cell RNA-seq (lcRNA-seq). Unlike scRNA-seq, which focuses on 'cell-atlasing', lcRNA-seq focuses on identifying differentially expressed genes (DEGs) between experimental groups. This requires accounting for systems noise, which can obscure biological differences. We present CLEAR, a workflow that identifies robust transcripts in lcRNA-seq data for between-group comparisons. To develop CLEAR, we compared DEGs from RNA extracted from FACS-derived CD5+ and CD5- cells from a single chronic lymphocytic leukemia patient diluted to input RNA levels of 10, 100 and 1,000 pg. Data quality at ultralow input levels is known to be noisy. When using CLEAR transcripts vs. using all available transcripts, downstream analyses reveal more shared DEGs, improved Principal Component Analysis separation of cell type, and increased similarity between results across different input RNA amounts. CLEAR was applied to two publicly available ultralow-input RNA-seq datasets and an in-house murine neural cell lcRNA-seq dataset. CLEAR provides a novel way to visualize the public datasets while validating cell phenotype markers for astrocytes, neural stem and progenitor cells.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Training Deep AutoEncoders for Collaborative Filtering
This paper proposes a novel model for the rating prediction task in recommender systems which significantly outperforms previous state-of-the-art models on a time-split Netflix data set. Our model is based on a deep autoencoder with 6 layers and is trained end-to-end without any layer-wise pre-training. We empirically demonstrate that: a) deep autoencoder models generalize much better than shallow ones, b) non-linear activation functions with negative parts are crucial for training deep models, and c) heavy use of regularization techniques such as dropout is necessary to prevent over-fitting. We also propose a new training algorithm based on iterative output re-feeding to overcome the natural sparseness of collaborative filtering. The new algorithm significantly speeds up training and improves model performance. Our code is available at this https URL
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
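A minimal sketch of a masked reconstruction loss plus one step of output re-feeding, in the spirit of the abstract above. Dimensions, the two-layer network, and the loss weighting are illustrative assumptions; the paper's actual model is a 6-layer autoencoder.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1000, 128), nn.SELU(), nn.Linear(128, 1000))
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

x = torch.rand(32, 1000) * (torch.rand(32, 1000) < 0.05).float()  # sparse ratings batch
mask = (x != 0).float()

def masked_mse(pred, target, m):
    return ((pred - target) ** 2 * m).sum() / m.sum()

opt.zero_grad()
out = model(x)
loss = masked_mse(out, x, mask)          # loss only on observed ratings
out2 = model(out.detach())               # re-feed the dense output as a new input
loss = loss + masked_mse(out2, out.detach(), torch.ones_like(out))  # dense second pass
loss.backward()
opt.step()
print(float(loss))
```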
Unit circle rectification of the MVDR beamformer
The sample matrix inversion (SMI) beamformer implements Capon's minimum variance distortionless response (MVDR) beamforming using the sample covariance matrix (SCM). In a snapshot-limited environment, the SCM is poorly conditioned, resulting in suboptimal performance from the SMI beamformer. Imposing structural constraints on the SCM estimate to satisfy known theoretical properties of the ensemble MVDR beamformer mitigates the impact of limited snapshots on SMI beamformer performance. Toeplitz rectification and bounding the norm of the weight vector are common approaches for such constraints. This paper proposes the unit circle rectification technique, which constrains the SMI beamformer to satisfy a property of the ensemble MVDR beamformer: for narrowband planewave beamforming on a uniform linear array, the zeros of the MVDR weight array polynomial must fall on the unit circle. Numerical simulations show that the resulting unit circle MVDR (UC MVDR) beamformer frequently improves the suppression of both discrete interferers and white background noise compared to the classic SMI beamformer. Moreover, the UC MVDR beamformer is shown to suppress discrete interferers better than the MVDR beamformer diagonally loaded to maximize the SINR.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
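A sketch of the unit-circle rectification idea as described above: compute SMI-MVDR weights, root the weight polynomial, radially project the roots onto the unit circle, and rebuild the weights. The data, look direction, and polynomial ordering convention are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n, snapshots = 8, 12
v = np.ones(n, dtype=complex)                 # broadside steering vector (illustrative)

X = (rng.normal(size=(n, snapshots)) + 1j * rng.normal(size=(n, snapshots))) / np.sqrt(2)
R = X @ X.conj().T / snapshots                # sample covariance matrix (snapshot-limited)
w = np.linalg.solve(R, v)
w = w / (v.conj() @ w)                        # SMI-MVDR weights, distortionless toward v

roots = np.roots(w)                           # zeros of the weight array polynomial
roots = roots / np.abs(roots)                 # radial projection onto the unit circle
w_uc = np.poly(roots)                         # rebuild polynomial coefficients
w_uc = w_uc / (v.conj() @ w_uc)               # restore the distortionless constraint
print(np.abs(w_uc))
```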
Reducing Storage of Global Wind Ensembles with Stochastic Generators
Wind has the potential to make a significant contribution to future energy resources. Locating the sources of this renewable energy on a global scale is however extremely challenging, given the difficulty of storing the very large data sets generated by modern computer models. We propose a statistical model that aims at reproducing the data-generating mechanism of an ensemble of runs via a Stochastic Generator (SG) of global annual wind data. We introduce an evolutionary spectrum approach with spatially varying parameters based on large-scale geographical descriptors such as altitude to better account for different regimes across the Earth's orography. We consider a multi-step conditional likelihood approach to estimate the parameters that explicitly accounts for nonstationary features while also balancing memory storage and distributed computation. We apply the proposed model to more than 18 million points of yearly global wind speed. The proposed SG requires orders of magnitude less storage for generating surrogate ensemble members from wind than does creating additional wind fields from the climate model, even if an effective lossy data compression algorithm is applied to the simulation output.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Jackknife Empirical Likelihood-based inference for S-Gini indices
The widely used income inequality measure, the Gini index, has been extended to form a family of income inequality measures known as Single-Series Gini (S-Gini) indices. In this study, we develop empirical likelihood (EL) and jackknife empirical likelihood (JEL) based inference for S-Gini indices. We prove that the limiting distribution of both the EL and JEL ratio statistics is the Chi-square distribution with one degree of freedom. Using the asymptotic distribution, we construct EL and JEL based confidence intervals for relative S-Gini indices. We also give bootstrap-t and bootstrap calibrated empirical likelihood confidence intervals for S-Gini indices. A numerical study is carried out to compare the performance of the proposed confidence intervals with the bootstrap methods. A test for S-Gini indices based on the jackknife empirical likelihood ratio is also proposed. Finally, we illustrate the proposed method using income data.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
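A sketch of the jackknife pseudo-value construction that JEL builds on, shown for the standard Gini index (the paper's S-Gini family and the empirical-likelihood ratio itself are omitted; the data are synthetic).

```python
import numpy as np

def gini(x):
    """Plug-in Gini index via the sorted-data formula."""
    x = np.sort(x)
    n = x.size
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1.0) / n

def jackknife_pseudovalues(x, stat):
    n = x.size
    full = stat(x)
    loo = np.array([stat(np.delete(x, i)) for i in range(n)])  # leave-one-out estimates
    return n * full - (n - 1) * loo   # pseudo-values; JEL applies EL to these

income = np.random.default_rng(0).lognormal(mean=0.0, sigma=0.8, size=200)
pv = jackknife_pseudovalues(income, gini)
print(gini(income), pv.mean())   # jackknife estimate is close to the plug-in Gini
```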
Continual One-Shot Learning of Hidden Spike-Patterns with Neural Network Simulation Expansion and STDP Convergence Predictions
This paper presents a constructive algorithm that achieves successful one-shot learning of hidden spike-patterns in a competitive detection task. It has previously been shown (Masquelier et al., 2008) that spike-timing-dependent plasticity (STDP) and lateral inhibition can result in neurons competitively tuned to repeating spike-patterns concealed in high rates of overall presynaptic activity. One-shot construction of neurons with synapse weights calculated as estimates of converged STDP outcomes results in immediate selective detection of hidden spike-patterns. The capability of continual learning is demonstrated through the successful one-shot detection of new sets of spike-patterns introduced after long intervals in the simulation time. Simulation expansion (Lightheart et al., 2013) has been proposed as an approach to the development of constructive algorithms that are compatible with simulations of biological neural networks. A simulation of a biological neural network may have orders of magnitude fewer neurons and connections than the related biological neural systems; therefore, simulated neural networks can be assumed to be a subset of a larger neural system. The constructive algorithm is developed using simulation expansion concepts to perform an operation equivalent to the exchange of neurons between the simulation and the larger hypothetical neural system. The dynamic selection of neurons to simulate within a larger neural system (hypothetical or stored in memory) may be a starting point for a wide range of developments and applications in machine learning and the simulation of biology.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Interpretable High-Dimensional Inference Via Score Projection with an Application in Neuroimaging
In the fields of neuroimaging and genetics, a key goal is testing the association of a single outcome with a very high-dimensional imaging or genetic variable. Often, summary measures of the high-dimensional variable are created to sequentially test and localize the association with the outcome. In some cases, the results for summary measures are significant, but subsequent tests used to localize differences are underpowered and do not identify regions associated with the outcome. Here, we propose a generalization of Rao's score test based on projecting the score statistic onto a linear subspace of a high-dimensional parameter space. In addition, we provide methods to localize signal in the high-dimensional space by projecting the scores to the subspace where the score test was performed. This allows for inference in the high-dimensional space to be performed on the same degrees of freedom as the score test, effectively reducing the number of comparisons. Simulation results demonstrate the test has competitive power relative to others commonly used. We illustrate the method by analyzing a subset of the Alzheimer's Disease Neuroimaging Initiative dataset. Results suggest cortical thinning of the frontal and temporal lobes may be a useful biological marker of Alzheimer's risk.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
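In symbols (a standard construction consistent with the abstract; the paper's exact formulation may differ): for score vector $U(\theta_0)$ and Fisher information $\mathcal{I}(\theta_0)$ in the high-dimensional parameter space, the score statistic projected onto the subspace spanned by the columns of a matrix $P$ is

```latex
T_P \;=\; U(\theta_0)^\top P \,\bigl(P^\top \mathcal{I}(\theta_0)\, P\bigr)^{-1} P^\top U(\theta_0),
```

which under the null is asymptotically $\chi^2$ with degrees of freedom equal to $\mathrm{rank}(P)$; a full-rank $P$ recovers Rao's classical test.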
A maximum principle for free boundary minimal varieties of arbitrary codimension
We establish a boundary maximum principle for free boundary minimal submanifolds in a Riemannian manifold with boundary, in any dimension and codimension. Our result holds more generally in the context of varifolds.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Drawing Big Graphs using Spectral Sparsification
Spectral sparsification is a general technique developed by Spielman et al. to reduce the number of edges in a graph while retaining its structural properties. We investigate the use of spectral sparsification to produce good visual representations of big graphs. We evaluate spectral sparsification approaches on real-world and synthetic graphs. We show that spectral sparsifiers are more effective than random edge sampling. Our results lead to guidelines for using spectral sparsification in big graph visualization.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
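A minimal sketch of spectral sparsification by effective-resistance sampling (Spielman-Srivastava), representative of the family of techniques the paper evaluates; the exact sampling scheme used in the paper may differ, and the pseudoinverse below is only practical for small demo graphs.

```python
import numpy as np

def sparsify(n, edges, n_samples):
    """edges: list of (u, v); returns sampled edges with importance weights."""
    L = np.zeros((n, n))
    for u, v in edges:                                   # graph Laplacian
        L[u, u] += 1; L[v, v] += 1; L[u, v] -= 1; L[v, u] -= 1
    Lp = np.linalg.pinv(L)
    r = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])
    p = r / r.sum()                                      # sampling prob. ~ effective resistance
    idx = np.random.default_rng(0).choice(len(edges), size=n_samples, p=p)
    return [(edges[i], 1.0 / (n_samples * p[i])) for i in idx]

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(sparsify(4, edges, n_samples=3))
```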
Optimal paths on the road network as directed polymers
We analyze the statistics of the shortest and fastest paths on the road network between randomly sampled end points. To a good approximation, these optimal paths are found to be directed in that their lengths (at large scales) are linearly proportional to the absolute distance between them. This motivates comparisons to universal features of directed polymers in random media. There are similarities in scalings of fluctuations in length/time and transverse wanderings, but also important distinctions in the scaling exponents, likely due to long-range correlations in geographic and man-made features. At short scales the optimal paths are not directed due to circuitous excursions governed by a fat-tailed (power-law) probability distribution.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Optimal proportional reinsurance and investment for stochastic factor models
In this work we investigate the optimal proportional reinsurance-investment strategy of an insurance company which wishes to maximize the expected exponential utility of its terminal wealth in a finite time horizon. Our goal is to extend the classical Cramér-Lundberg model by introducing a stochastic factor which affects the intensity of the claims arrival process, described by a Cox process, as well as the insurance and reinsurance premia. Using the classical stochastic control approach based on the Hamilton-Jacobi-Bellman equation, we characterize the optimal strategy and provide a verification result for the value function via classical solutions of two backward partial differential equations. Existence and uniqueness of these solutions are discussed. Results under various premium calculation principles are illustrated, and a new premium calculation rule is proposed in order to get more realistic strategies and to better fit our stochastic factor model. Finally, numerical simulations are performed to obtain sensitivity analyses.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Formal Black-Box Analysis of Routing Protocol Implementations
The Internet infrastructure relies entirely on open standards for its routing protocols. However, the majority of routers on the Internet are closed-source. Hence, there is no straightforward way to analyze them. Specifically, one cannot easily identify deviations of a router's routing functionality from the routing protocol's standard. Such deviations (either deliberate or inadvertent) are particularly important to identify since they may degrade the security or resiliency of the network. A model-based testing procedure is a technique that allows one to systematically generate tests based on a model of the system to be tested, thereby finding deviations in the system compared to the model. However, applying such an approach to a complex multi-party routing protocol requires a prohibitively high number of tests to cover the desired functionality. We propose efficient and practical optimizations to the model-based testing procedure that are tailored to the analysis of routing protocols. These optimizations allow us to devise a formal black-box method to unearth deviations in closed-source routing protocol implementations. The method relies only on the ability to test the targeted protocol implementation and observe its output. Identification of the deviations is fully automatic. We evaluate our method against one of the complex and widely used routing protocols on the Internet -- OSPF. We search for deviations in the OSPF implementation of Cisco. Our evaluation identified numerous significant deviations that can be abused to compromise the security of a network. The deviations were confirmed by Cisco. We further employed our method to analyze the OSPF implementation of the Quagga Routing Suite. The analysis revealed one significant deviation. Subsequent to the disclosure of the deviations, some of them were also identified by IBM, Lenovo and Huawei in their own products.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Finding polynomial loop invariants for probabilistic programs
Quantitative loop invariants are an essential element in the verification of probabilistic programs. Recently, multivariate Lagrange interpolation has been applied to synthesizing polynomial invariants. In this paper, we propose an alternative approach. First, we fix a polynomial template as a candidate for a loop invariant. Using Stengle's Positivstellensatz and a transformation to a sum-of-squares problem, we find sufficient conditions on the coefficients. Then, we solve a semidefinite programming feasibility problem to synthesize the loop invariants. If the semidefinite program is infeasible, we backtrack after increasing the degree of the template. Our approach is semi-complete in the sense that it will always lead us to a feasible solution if one exists and numerical errors are small. Experimental results show the efficiency of our approach.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Active Decision Boundary Annotation with Deep Generative Models
This paper is on active learning, where the goal is to reduce the data annotation burden by interacting with a (human) oracle during training. Standard active learning methods ask the oracle to annotate data samples. Instead, we take a profoundly different approach: we ask for annotations of the decision boundary. We achieve this using a deep generative model to create novel instances along a 1d line. A point on the decision boundary is revealed where the instances change class. Experimentally we show on three data sets that our method can be plugged into other active learning schemes, that human oracles can effectively annotate points on the decision boundary, that our method is robust to annotation noise, and that decision boundary annotations improve over annotating data samples.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Multifrequency Excitation and Detection Scheme in Apertureless Scattering Near Field Scanning Optical Microscopy
We theoretically and experimentally demonstrate a multifrequency excitation and detection scheme in apertureless near field optical microscopy that exceeds current state-of-the-art sensitivity and background suppression. By exciting the AFM tip at its first two flexural modes, and demodulating the detected signal at the harmonics of their sum, we extract a near field signal with a twofold improved sensitivity and deep sub-wavelength resolution, reaching $\lambda/230$. Furthermore, the method offers rich control over experimental degrees of freedom, expanding the parameter space for achieving complete optical background suppression. This approach breaks ground for non-interferometric, complete phase and amplitude retrieval of the near field signal, and is suitable for any multimodal excitation and higher harmonic demodulation.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Extended Kitaev chain with longer-range hopping and pairing
We consider the Kitaev chain model with finite and infinite range in the hopping and pairing parameters, looking in particular at the appearance of Majorana zero energy modes and massive edge modes. We study the system both in the presence and in the absence of time reversal symmetry, by means of topological invariants and exact diagonalization, disclosing very rich phase diagrams. In particular, for extended hopping and pairing terms, we can get as many Majorana modes at each end of the chain as the neighbors involved in the couplings. Finally we generalize the transfer matrix approach useful to calculate the zero-energy Majorana modes at the edges for a generic number of coupled neighbors.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
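A Hamiltonian of the type the abstract describes, written here in a standard form with hopping and pairing ranges up to $r$ (conventions assumed for illustration):

```latex
H \;=\; \sum_{j}\sum_{l=1}^{r} \Bigl( -t_l\, c_j^\dagger c_{j+l} \;+\; \Delta_l\, c_j c_{j+l} \;+\; \mathrm{h.c.} \Bigr) \;-\; \mu \sum_j c_j^\dagger c_j ,
```

with $t_l$ and $\Delta_l$ the range-$l$ hopping and pairing amplitudes; the infinite-range case corresponds to slowly decaying $t_l$, $\Delta_l$.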
Global entropy solutions to the compressible Euler equations in the isentropic nozzle flow for large data: Application of the modified Godunov scheme and the generalized invariant regions
We study the motion of isentropic gas in nozzles. This is a major subject in fluid dynamics; in fact, the nozzle is utilized to increase the thrust of rocket engines, and nozzle flow is closely related to astrophysics. These phenomena are governed by the compressible Euler equations, one of the crucial systems of inhomogeneous conservation laws. In this paper, we consider the unsteady flow and prove the global existence and stability of solutions to the Cauchy problem for a general nozzle. The theorem was proved in (Tsuge in Arch. Ration. Mech. Anal. 209:365-400 (2013)); however, that result is limited to small data. Our aim in the present paper is to remove this restriction, that is, to consider large data. Although the subject is important in mathematics, physics and engineering, it has remained open for a long time. The difficulty lies in obtaining a bounded estimate for approximate solutions, because existing methods investigate the behavior of solutions with respect to the time variable only. To solve this, we first introduce a generalized invariant region; compared with existing ones, its upper and lower bounds are extended from constants to functions of the space variable. However, we cannot apply the new invariant region to the traditional difference method, and we therefore invent the modified Godunov scheme. The approximate solutions consist of functions corresponding to the upper and lower bounds of the invariant regions. These methods enable us to investigate the behavior of approximate solutions with respect to the space variable. The ideas are also applicable to other nonlinear problems involving similar difficulties.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
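The governing system referred to above is the isentropic compressible Euler system with a geometric source term from the nozzle cross-section $A(x)$ (a standard form, assumed here):

```latex
\begin{aligned}
\rho_t + m_x &= -\frac{A'(x)}{A(x)}\, m, \\
m_t + \Bigl( \frac{m^2}{\rho} + p(\rho) \Bigr)_x &= -\frac{A'(x)}{A(x)} \frac{m^2}{\rho},
\qquad p(\rho) = \frac{\rho^{\gamma}}{\gamma},
\end{aligned}
```

where $\rho$ is the density and $m = \rho u$ the momentum.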
Effects of atrial fibrillation on the arterial fluid dynamics: a modelling perspective
Atrial fibrillation (AF) is the most common form of arrhythmia, with accelerated and irregular heart rate (HR), leading to both heart failure and stroke and being responsible for an increase in cardiovascular morbidity and mortality. In spite of its importance, the direct effects of AF on the arterial hemodynamic patterns are not completely known to date. Based on a multiscale modelling approach, the proposed work investigates the effects of AF on the local arterial fluid dynamics. AF and normal sinus rhythm (NSR) conditions are simulated by extracting 2000 $\mathrm{RR}$ heartbeats and comparing the most relevant cardiac and vascular parameters at the same HR (75 bpm). Present outcomes show that the arterial system is not able to completely absorb the AF-induced variability, which can even be amplified towards the peripheral circulation. AF is also able to locally alter the wave dynamics by modifying the interplay between forward and backward signals. The sole heart rhythm variation (i.e., from NSR to AF) promotes an alteration of the regular dynamics at the arterial level which, in terms of pressure and peripheral perfusion, suggests a modification of the physiological phenomena ruled by periodicity (e.g., regular organ perfusion) and a possible vascular dysfunction due to the prolonged exposure to irregular and extreme values. The present study represents a first modeling approach to characterize the variability of arterial hemodynamics in the presence of AF, which surely deserves further clinical investigation.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Controllability and optimal control of the transport equation with a localized vector field
We study controllability of a Partial Differential Equation of transport type that arises in crowd models. We are interested in controlling such a system with a control being a Lipschitz vector field on a fixed control set $\omega$. We prove that, for each initial and final configuration, one can steer one to another with this class of controls only if the uncontrolled dynamics allows crossing the control set $\omega$. We also prove a minimal time result for such systems. We show that the minimal time needed to steer one initial configuration to another is related to the condition of having enough mass in $\omega$ to feed the desired final configuration.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
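The controlled PDE in question can be written as (a standard form for crowd models with a localized control, assumed here):

```latex
\partial_t \mu \;+\; \nabla\cdot\bigl( \bigl( v + \mathbb{1}_{\omega}\, u \bigr)\, \mu \bigr) \;=\; 0,
```

where $\mu$ is the crowd density, $v$ the given (uncontrolled) vector field, and $u$ a Lipschitz control vector field supported on the control set $\omega$.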
Gravitational instabilities in a protosolar-like disc II: continuum emission and mass estimates
Gravitational instabilities (GIs) are most likely a fundamental process during the early stages of protoplanetary disc formation. Recently, there have been detections of spiral features in young, embedded objects that appear consistent with GI-driven structure. It is crucial to perform hydrodynamic and radiative transfer simulations of gravitationally unstable discs in order to assess the validity of GIs in such objects, and constrain optimal targets for future observations. We utilise the radiative transfer code LIME to produce continuum emission maps of a $0.17\,\mathrm{M}_{\odot}$ self-gravitating protosolar-like disc. We note the limitations of using LIME as is and explore methods to improve upon the default gridding. We use CASA to produce synthetic observations of 270 continuum emission maps generated across different frequencies, inclinations and dust opacities. We find that the spiral structure of our protosolar-like disc model is distinguishable across the majority of our parameter space after 1 hour of observation, and is especially prominent at 230$\,$GHz due to the favourable combination of angular resolution and sensitivity. Disc mass derived from the observations is sensitive to the assumed dust opacities and temperatures, and therefore can be underestimated by a factor of at least 30 at 850$\,$GHz and 2.5 at 90$\,$GHz. As a result, this effect could retrospectively validate GIs in discs previously thought not massive enough to be gravitationally unstable, which could have a significant impact on the understanding of the formation and evolution of protoplanetary discs.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Information Processing by Networks of Quantum Decision Makers
We suggest a model of a multi-agent society of decision makers making decisions based on two criteria: one is the utility of the prospects and the other is the attractiveness of the considered prospects. The model generalizes quantum decision theory, developed earlier for single decision makers making one-step decisions, in two principal aspects. First, several decision makers are considered simultaneously, who interact with each other through information exchange. Second, a multistep procedure is treated, in which the agents exchange information many times. Several decision makers exchanging information and forming their judgement using quantum rules form a kind of quantum information network, where collective decisions develop in time as a result of information exchange. In addition to characterizing collective decisions that arise in human societies, such networks can describe dynamical processes occurring in artificial quantum intelligence composed of several parts or in a cluster of quantum computers. The practical usage of the theory is illustrated with the dynamic disjunction effect, for which three quantitative predictions are made: (i) the probabilistic behavior of decision makers at the initial stage of the process is described; (ii) the decrease of the difference between the initial prospect probabilities and the related utility factors is proved; (iii) the existence of a common consensus after multiple exchanges of information is predicted. The predicted numerical values are in very good agreement with empirical data.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Value of Sharing Intermittent Spectrum
Recent initiatives by regulatory agencies to increase spectrum resources available for broadband access include rules for sharing spectrum with high-priority incumbents. We study a model in which wireless Service Providers (SPs) charge for access to their own exclusive-use (licensed) band along with access to an additional shared band. The total, or delivered, price in each band is the announced price plus a congestion cost, which depends on the load, or total users normalized by the bandwidth. The shared band is intermittently available with some probability, due to incumbent activity, and when it is unavailable, any traffic carried on that band must be shifted to licensed bands. The SPs then compete for quantity of users. We show that the value of the shared band depends on the relative sizes of the SPs: large SPs with more bandwidth are better able to absorb the variability caused by intermittency than smaller SPs. However, as the amount of shared spectrum increases, the large SPs may not make use of it. In that scenario, shared spectrum creates more value than splitting it among the SPs for exclusive use. We also show that, fixing the average amount of available shared bandwidth, increasing the reliability of the band is preferable to increasing its bandwidth.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Origin of layer dependence in band structures of two-dimensional materials
We study the origin of layer dependence in band structures of two-dimensional materials. We find that the layer dependence, at the density functional theory (DFT) level, is a result of quantum confinement and the non-linearity of the exchange-correlation functional. We use this to develop an efficient scheme for performing DFT and GW calculations of multilayer systems. We show that the DFT and quasiparticle band structures of a multilayer system can be derived from a single calculation on a monolayer of the material. We test this scheme on multilayers of MoS$_2$, graphene and phosphorene. This new scheme yields results in excellent agreement with the standard methods at a fraction of the computation cost. This helps overcome the challenge of performing fully converged GW calculations on multilayers of 2D materials, particularly in the case of transition metal dichalcogenides which involve very stringent convergence parameters.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
From Strings to Sets
A complete proof is given of relative interpretability of Adjunctive Set Theory with Extensionality in an elementary concatenation theory.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Sharp measure contraction property for generalized H-type Carnot groups
We prove that H-type Carnot groups of rank $k$ and dimension $n$ satisfy the $\mathrm{MCP}(K,N)$ if and only if $K\leq 0$ and $N \geq k+3(n-k)$. The latter integer coincides with the geodesic dimension of the Carnot group. The same result holds true for the larger class of generalized H-type Carnot groups introduced in this paper, and for which we compute explicitly the optimal synthesis. This constitutes the largest class of Carnot groups for which the curvature exponent coincides with the geodesic dimension. We stress that generalized H-type Carnot groups have step 2, include all corank 1 groups and, in general, admit abnormal minimizing curves. As a corollary, we prove the absolute continuity of the Wasserstein geodesics for the quadratic cost on all generalized H-type Carnot groups.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Twisted Quantum Double Model of Topological Orders with Boundaries
We generalize the twisted quantum double model of topological orders in two dimensions to the case with boundaries by systematically constructing the boundary Hamiltonians. Given the bulk Hamiltonian defined by a gauge group $G$ and a three-cocycle in the third cohomology group of $G$ over $U(1)$, a boundary Hamiltonian can be defined by a subgroup $K$ of $G$ and a two-cochain in the second cochain group of $K$ over $U(1)$. The consistency between the bulk and boundary Hamiltonians is dictated by what we call the Frobenius condition that constrains the two-cochain given the three-cocycle. We offer a closed-form formula computing the ground state degeneracy of the model on a cylinder in terms of the input data only, which can be naturally generalized to surfaces with more boundaries. We also explicitly write down the ground-state wavefunction of the model on a disk, also in terms of the input data only.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Evolutionary game of coalition building under external pressure
We study the fragmentation-coagulation (or merging and splitting) evolutionary control model as introduced recently by one of the authors, where $N$ small players can form coalitions to resist the pressure exerted by the principal. It is a Markov chain in continuous time and the players have a common reward to optimize. We study the behavior as $N$ grows and show that the problem converges to a (one player) deterministic optimization problem in continuous time, in the infinite dimensional state space.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Nearly resolution V plans on blocks of small size
In Bagchi (2010), main effect plans "orthogonal through the block factor" (POTB) were constructed. The main advantages of a POTB are that (a) it may exist in a set up where a "usual" orthogonal main effect plan (OMEP) cannot exist, and (b) the data analysis is nearly as simple as for an OMEP. In the present paper we extend this idea and define the concept of orthogonality between a pair of factorial effects (main effects or interactions) "through the block factor" in the context of a symmetrical experiment. We consider plans generated from an initial plan by adding runs. For such a plan we have derived necessary and sufficient conditions, in terms of the generators, for a pair of effects to be orthogonal through the block factor. We have also derived a sufficient condition on the generators so as to turn a pair of effects aliased in the initial plan separated in the final plan. The theory developed is illustrated with plans for experiments with three-level factors in situations where interactions between three or more factors are absent. We have constructed plans with blocks of size four and fewer runs than a resolution $V$ plan, estimating all main effects and all but at most one two-factor interaction.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Graphene-based electron transport layers in perovskite solar cells: a step-up for an efficient carrier collection
The electron transport layer (ETL) plays a fundamental role in perovskite solar cells. Recently, graphene-based ETLs have been proved to be good candidates for scalable fabrication processes and to achieve higher carrier injection with respect to most commonly used ETLs. In this work we experimentally study the effects of different graphene-based ETLs in sensitized MAPI solar cells. By means of time-integrated and picosecond time-resolved photoluminescence techniques, the carrier recombination dynamics in MAPI films embedded in different ETLs is investigated. Using graphene-doped mesoporous TiO2 (G+mTiO2) with the addition of a lithium-neutralized graphene oxide (GO-Li) interlayer as the ETL, we find that the carrier collection efficiency is increased by about a factor of two with respect to standard mTiO2. Taking advantage of the absorption coefficient dispersion, we probe the MAPI layer morphology along its thickness, finding that the MAPI embedded in the ETL composed of G+mTiO2 plus GO-Li leads to a very good crystalline quality of the MAPI layer, with a trap density about one order of magnitude lower than that found with the other ETLs. In addition, this ETL freezes MAPI in the tetragonal phase, regardless of the temperature. Graphene-based ETLs can open the way to significant improvements in perovskite solar cells.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Learning and Transferring IDs Representation in E-commerce
Many machine intelligence techniques are developed in E-commerce, and one of the most essential components is the representation of IDs, including user ID, item ID, product ID, store ID, brand ID, category ID, etc. Classical encoding-based methods (like one-hot encoding) are inefficient in that they suffer from sparsity problems due to their high dimensionality, and they cannot reflect the relationships among IDs, either homogeneous or heterogeneous ones. In this paper, we propose an embedding-based framework to learn and transfer the representation of IDs. As implicit feedback from users, a tremendous number of item ID sequences can be easily collected from interactive sessions. By jointly using these informative sequences and the structural connections among IDs, all types of IDs can be embedded into one low-dimensional semantic space. Subsequently, the learned representations are utilized and transferred in four scenarios: (i) measuring the similarity between items, (ii) transferring from seen items to unseen items, (iii) transferring across different domains, (iv) transferring across different tasks. We deploy and evaluate the proposed approach in Hema App and the results validate its effectiveness.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
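A minimal sketch of learning item-ID embeddings from session sequences with skip-gram, in the spirit of the framework above (the paper's joint embedding of heterogeneous IDs and the transfer steps are not reproduced). Assumes gensim is installed; the session data and item IDs are illustrative.

```python
from gensim.models import Word2Vec

sessions = [
    ["item_1", "item_7", "item_3"],          # item-ID sequences from user sessions
    ["item_7", "item_3", "item_9"],
    ["item_2", "item_1", "item_7"],
]

# Skip-gram (sg=1) treats each session as a "sentence" of item IDs.
model = Word2Vec(sessions, vector_size=32, window=2, min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("item_7", topn=2))   # nearest items in embedding space
```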
Differentiable Supervector Extraction for Encoding Speaker and Phrase Information in Text Dependent Speaker Verification
In this paper, we propose a new differentiable neural network alignment mechanism for text-dependent speaker verification which uses alignment models to produce a supervector representation of an utterance. Unlike previous works with similar approaches, we do not extract the embedding of an utterance from the mean reduction of the temporal dimension. Our system replaces the mean with a phrase alignment model to keep the temporal structure of each phrase, which is relevant in this application since the phonetic information is part of the identity in the verification task. Moreover, we can apply a convolutional neural network as the front-end, and, thanks to the alignment process being differentiable, we can train the whole network to produce a supervector for each utterance that is discriminative with respect to the speaker and the phrase simultaneously. As we show, this choice has the advantage that the supervector encodes the phrase and speaker information, providing good performance in text-dependent speaker verification tasks. In this work, the verification process is performed using a basic similarity metric, for simplicity, compared to other more elaborate models that are commonly used. The new model using alignment to produce supervectors was tested on the RSR2015-Part I database for text-dependent speaker verification, providing competitive results compared to similar-size networks using the mean to extract embeddings.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
In situ accretion of gaseous envelopes on to planetary cores embedded in evolving protoplanetary discs
The core accretion hypothesis posits that planets with significant gaseous envelopes accreted them from their protoplanetary discs after the formation of rocky/icy cores. Observations indicate that such exoplanets exist at a broad range of orbital radii, but it is not known whether they accreted their envelopes in situ, or originated elsewhere and migrated to their current locations. We consider the evolution of solid cores embedded in evolving viscous discs that undergo gaseous envelope accretion in situ with orbital radii in the range $0.1-10\rm au$. Additionally, we determine the long-term evolution of the planets that had no runaway gas accretion phase after disc dispersal. We find: (i) Planets with $5 \rm M_{\oplus}$ cores never undergo runaway accretion. The most massive envelope contained $2.8 \rm M_{\oplus}$ with the planet orbiting at $10 \rm au$. (ii) Accretion is more efficient onto $10 \rm M_{\oplus}$ and $15 \rm M_{\oplus}$ cores. For orbital radii $a_{\rm p} \ge 0.5 \rm au$, $15 \rm M_{\oplus}$ cores always experienced runaway gas accretion. For $a_{\rm p} \ge 5 \rm au$, all but one of the $10 \rm M_{\oplus}$ cores experienced runaway gas accretion. No planets experienced runaway growth at $a_{\rm p} = 0.1 \rm au$. (iii) We find that, after disc dispersal, planets with significant gaseous envelopes cool and contract on Gyr time-scales, the contraction time being sensitive to the opacity assumed. Our results indicate that Hot Jupiters with core masses $\lesssim 15 \rm M_{\oplus}$ at $\lesssim 0.1 \rm au$ likely accreted their gaseous envelopes at larger distances and migrated inwards. Consistently with the known exoplanet population, Super-Earths and mini-Neptunes at small radii during the disc lifetime, accrete only modest gaseous envelopes.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
New Two Step Laplace Adam-Bashforth Method for Integer an Non integer Order Partial Differential Equations
This paper presents a novel method that allows to generalise the use of the Adam-Bashforth to Partial Differential Equations with local and non local operator. The Method derives a two step Adam-Bashforth numerical scheme in Laplace space and the solution is taken back into the real space via inverse Laplace transform. The method yields a powerful numerical algorithm for fractional order derivative where the usually very difficult to manage summation in the numerical scheme disappears. Error Analysis of the method is also presented. Applications of the method and numerical simulations are presented on a wave-equation like, and on a fractional order diffusion equation.
0
0
1
0
0
0
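The classical two-step Adam-Bashforth update on which the scheme above builds is $y_{n+2} = y_{n+1} + h\left(\tfrac{3}{2} f(t_{n+1}, y_{n+1}) - \tfrac{1}{2} f(t_n, y_n)\right)$. The sketch below implements this standard integrator in the time domain only; the paper's Laplace-space formulation and fractional-order operators are not reproduced here.

```python
import numpy as np

def adams_bashforth2(f, y0, t0, t_end, h):
    """Classical two-step Adam-Bashforth integrator for y' = f(t, y).
    The first step is bootstrapped with forward Euler (a common choice)."""
    ts = np.arange(t0, t_end + h, h)
    ys = np.empty((len(ts),) + np.shape(y0))
    ys[0] = y0
    ys[1] = ys[0] + h * f(ts[0], ys[0])            # Euler bootstrap
    for n in range(len(ts) - 2):
        ys[n + 2] = ys[n + 1] + h * (1.5 * f(ts[n + 1], ys[n + 1])
                                     - 0.5 * f(ts[n], ys[n]))
    return ts, ys

# usage: y' = -y with exact solution exp(-t)
ts, ys = adams_bashforth2(lambda t, y: -y, 1.0, 0.0, 5.0, 0.01)
print(abs(ys[-1] - np.exp(-ts[-1])))               # small discretisation error
```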
The Impact of Alternation
Alternating automata have been widely used to model and verify systems that handle data from finite domains, such as communication protocols or hardware. The main advantage of the alternating model of computation is that complementation is possible in linear time, thus allowing one to concisely encode trace inclusion problems that occur often in verification. In this paper we consider alternating automata over infinite alphabets, whose transition rules are formulae in a combined theory of booleans and some infinite data domain that relate past and current values of the data variables. The data theory is not fixed, but rather it is a parameter of the class. We show that union, intersection and complementation are possible in linear time in this model and, though the emptiness problem is undecidable, we provide two efficient semi-algorithms, inspired by two state-of-the-art abstraction refinement model checking methods: lazy predicate abstraction [HJMS02] and the Impact semi-algorithm [McMillan06]. We have implemented both methods and report the results of an experimental comparison.
1
0
0
0
0
0
Partial-wave Coulomb t-matrices for like-charged particles at ground-state energy
We study a special case in which an analytical solution of the Lippmann-Schwinger integral equation for the partial-wave two-body Coulomb transition matrix for like-charged particles at negative energy is possible. With the use of Fock's method of the stereographic projection of the momentum space onto the four-dimensional unit sphere, analytical expressions for the s-, p- and d-wave partial Coulomb transition matrices for repulsively interacting particles at bound-state energy have been derived.
0
1
0
0
0
0
Robust Causal Estimation in the Large-Sample Limit without Strict Faithfulness
Causal effect estimation from observational data is an important and much-studied research topic. The instrumental variable (IV) and local causal discovery (LCD) patterns are canonical examples of settings where a closed-form expression exists for the causal effect of one variable on another, given the presence of a third variable. Both rely on faithfulness to infer that the latter only influences the target effect via the cause variable. In reality, it is likely that this assumption only holds approximately and that there will be at least some form of weak interaction. This brings about the paradoxical situation that, in the large-sample limit, no predictions are made, as detecting the weak edge invalidates the setting. We introduce an alternative approach by replacing strict faithfulness with a prior that reflects the existence of many 'weak' (irrelevant) and 'strong' interactions. We obtain a posterior distribution over the target causal effect estimator which shows that, in many cases, we can still make good estimates. We demonstrate the approach in an application to a simple linear-Gaussian setting, using the MultiNest sampling algorithm, and compare it with established techniques to show that our method is robust even when strict faithfulness is violated.
1
0
0
1
0
0
The Cross-section of a Spherical Double Cone
We show that the poset of $SL(n)$-orbit closures in the product of two partial flag varieties is a lattice if the action of $SL(n)$ is spherical.
0
0
1
0
0
0
Understanding looping kinetics of a long polymer molecule in solution. Exact solution for delocalized sink model
The fundamental understanding of loop formation of long polymer chains in solution has been an important thread of research for several theoretical and experimental studies. Loop formation is an important phenomenological parameter in many biological processes. Here we give a general method for finding an exact analytical solution for the occurrence of looping of a long polymer chain in solution, modeled by using a Smoluchowski-like equation with a delocalized sink. The average rate constant for the delocalized sink is explicitly expressed in terms of the corresponding rate constants for localized sinks with different initial conditions. Simple analytical expressions are provided for the average rate constant.
0
1
0
0
0
0
SESA: Supervised Explicit Semantic Analysis
In recent years supervised representation learning has provided state-of-the-art or near-state-of-the-art results in semantic analysis tasks including ranking and information retrieval. The core idea is to learn how to embed items into a latent space such that they optimize a supervised objective in that latent space. The dimensions of the latent space have no clear semantics, and this reduces the interpretability of the system. For example, in personalization models, it is hard to explain why a particular item is ranked high for a given user profile. We propose a novel model of representation learning called Supervised Explicit Semantic Analysis (SESA) that is trained in a supervised fashion to embed items into a set of dimensions with explicit semantics. The model learns to compare two objects by representing them in this explicit space, where each dimension corresponds to a concept from a knowledge base. This work extends Explicit Semantic Analysis (ESA) with a supervised model for ranking problems. We apply this model to the task of Job-Profile relevance in LinkedIn, in which a set of skills defines our explicit dimensions of the space. Every profile and job is encoded into this set of skills, and their similarity is calculated in this space. We use RNNs to embed text input into this space. In addition to interpretability, our model makes use of the web-scale collaborative skills data that is provided by users for each LinkedIn profile. Our model provides state-of-the-art results while remaining interpretable.
1
0
0
0
0
0
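As an illustration of scoring in an explicit semantic space, the sketch below encodes a profile and a job into a shared skill-dimension space and compares them with cosine similarity. The GRU encoder, vocabulary size, and number of skills are assumptions made for the sketch, not details of the LinkedIn system.

```python
import torch
import torch.nn as nn

NUM_SKILLS = 1000   # explicit dimensions, one per skill (assumed size)

class ExplicitSpaceEncoder(nn.Module):
    """Encode a token sequence into the explicit skill space with a GRU
    (a stand-in for the RNN mentioned in the abstract above)."""
    def __init__(self, vocab=30000, emb=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.rnn = nn.GRU(emb, hidden, batch_first=True)
        self.to_skills = nn.Linear(hidden, NUM_SKILLS)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        _, h = self.rnn(self.emb(tokens))         # h: (1, batch, hidden)
        return self.to_skills(h.squeeze(0))       # (batch, NUM_SKILLS)

profile_enc, job_enc = ExplicitSpaceEncoder(), ExplicitSpaceEncoder()
profile = torch.randint(0, 30000, (4, 50))
job = torch.randint(0, 30000, (4, 40))
# relevance = similarity in the interpretable skill space
scores = torch.cosine_similarity(profile_enc(profile), job_enc(job))
```

Because each coordinate corresponds to a named skill, the largest coordinates of an encoding can be inspected directly, which is the interpretability advantage the abstract emphasises.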
High-Level Concepts for Affective Understanding of Images
This paper aims to bridge the affective gap between image content and the emotional response of the viewer it elicits by using High-Level Concepts (HLCs). In contrast to previous work that relied solely on low-level features or used a convolutional neural network (CNN) as a black box, we use HLCs generated by pretrained CNNs in an explicit way to investigate the relations/associations between these HLCs and a (small) set of Ekman's emotional classes. As a proof of concept, we first propose a linear admixture model for modeling these relations, and the resulting computational framework allows us to determine the associations between each emotion class and certain HLCs (objects and places). This linear model is further extended to a nonlinear model using support vector regression (SVR) that aims to predict the viewer's emotional response using both low-level image features and HLCs extracted from images. These class-specific regressors are then assembled into a regressor ensemble that provides a flexible and effective predictor of the viewer's emotional response to images. Experimental results have demonstrated that our results are comparable to those of existing methods, with a clear view of the association between HLCs and emotional classes that is ostensibly missing in most existing work.
1
0
0
0
0
0
A Network of Networks Approach to Interconnected Power Grids
We present two different approaches to modelling power grids as interconnected networks of networks. Both models are derived from a model for spatially embedded mono-layer networks and are generalised to handle an arbitrary number of network layers. The two approaches are distinguished by their use case. The static glue-stick construction model yields a multi-layer network from a predefined layer-interconnection scheme, i.e. different layers are attached with transformer edges. It is especially suited to constructing multi-layer power grids with a specified number of nodes in, and transformers between, layers. We contrast it with a genuine growth model, which we label the interconnected layer growth model.
0
1
0
0
0
0
A Nonparametric Method for Producing Isolines of Bivariate Exceedance Probabilities
We present a method for drawing isolines indicating regions of equal joint exceedance probability for bivariate data. The method relies on bivariate regular variation, a dependence framework widely used for extremes. This framework enables drawing isolines corresponding to very low exceedance probabilities, and these lines may lie beyond the range of the data. The method we utilize for characterizing dependence in the tail is largely nonparametric. Furthermore, we extend this method to the case of asymptotic independence and propose a procedure which smooths the transition from asymptotic independence in the interior to the first-order behavior on the axes. We propose a diagnostic plot for assessing the isoline estimate and the choice of smoothing, and a bootstrap procedure to visually assess uncertainty.
0
0
0
1
0
0
Finite element error analysis for measure-valued optimal control problems governed by a 1D wave equation with variable coefficients
This work is concerned with optimal control problems governed by a 1D wave equation with variable coefficients and control spaces $\mathcal M_T$ of either measure-valued functions $L_{w^*}^2(I,\mathcal M(\Omega))$ or vector measures $\mathcal M(\Omega,L^2(I))$. The cost functional involves the standard quadratic tracking terms and the regularization term $\alpha\|u\|_{\mathcal M_T}$ with $\alpha>0$. We construct and study three-level in time bilinear finite element discretizations for this class of problems. The main focus lies on the derivation of error estimates for the optimal state variable and for the error measured in the cost functional. The analysis is mainly based on previous results of the authors. Numerical results are included.
0
0
1
0
0
0
Marked points on translation surfaces
We show that all GL(2,R) equivariant point markings over orbit closures of translation surfaces arise from branched covering constructions and periodic points, completely classify such point markings over strata of quadratic differentials, and give applications to the finite blocking problem.
0
0
1
0
0
0
Multitask Learning for Fundamental Frequency Estimation in Music
Fundamental frequency (f0) estimation from polyphonic music includes the tasks of multiple-f0, melody, vocal, and bass line estimation. Historically these problems have been approached separately, and only recently with learning-based approaches. We present a multitask deep learning architecture that jointly estimates outputs for various tasks including multiple-f0, melody, vocal and bass line estimation, and is trained using a large, semi-automatically annotated dataset. We show that the multitask model outperforms its single-task counterparts, explore the effect of various design decisions in our approach, and show that it performs better than, or at least competitively with, strong baseline methods.
1
0
0
1
0
0
Real eigenvalues of a non-self-adjoint perturbation of the self-adjoint Zakharov-Shabat operator
We study the eigenvalues of the self-adjoint Zakharov-Shabat operator corresponding to the defocusing nonlinear Schrodinger equation in the inverse scattering method. Real eigenvalues exist when the square of the potential has a simple well. We derive two types of quantization condition for the eigenvalues by using the exact WKB method, and show that the eigenvalues stay real for a sufficiently small non-self-adjoint perturbation when the potential has some PT-like symmetry.
0
0
1
0
0
0
Planning with Multiple Biases
Recent work has considered theoretical models for the behavior of agents with specific behavioral biases: rather than making decisions that optimize a given payoff function, the agent behaves inefficiently because its decisions suffer from an underlying bias. These approaches have generally considered an agent who experiences a single behavioral bias, studying the effect of this bias on the outcome. In general, however, decision-making can and will be affected by multiple biases operating at the same time. How do multiple biases interact to produce the overall outcome? Here we consider decisions in the presence of a pair of biases exhibiting an intuitively natural interaction: present bias -- the tendency to value costs incurred in the present too highly -- and sunk-cost bias -- the tendency to incorporate costs experienced in the past into one's plans for the future. We propose a theoretical model for planning with this pair of biases, and we show how certain natural behavioral phenomena can arise in our model only when agents exhibit both biases. As part of our model we differentiate between agents that are aware of their biases (sophisticated) and agents that are unaware of them (naive). Interestingly, we show that the interaction between the two biases is quite complex: in some cases, they mitigate each other's effects while in other cases they might amplify each other. We obtain a number of further results as well, including the fact that the planning problem in our model for an agent experiencing and aware of both biases is computationally hard in general, though tractable under more relaxed assumptions.
1
1
0
0
0
0
Zhu reduction for Jacobi $n$-point functions and applications
We establish precise Zhu reduction formulas for Jacobi $n$-point functions which show the absence of any possible poles arising in these formulas. We then exploit this to produce results concerning the structure of strongly regular vertex operator algebras, and also to motivate new differential operators acting on Jacobi forms. Finally, we apply the reduction formulas to the Fermion model in order to create polynomials of quasi-Jacobi forms which are Jacobi forms.
0
0
1
0
0
0
Fourier-like multipliers and applications for integral operators
Timelimited functions and bandlimited functions play a fundamental role in signal and image processing. But by the uncertainty principles, a signal cannot be simultaneously time and bandlimited. A natural assumption is thus that a signal is almost time and almost bandlimited. The aim of this paper is to prove that the set of almost time and almost bandlimited signals is not excluded from the uncertainty principles. The transforms under consideration are integral operators with bounded kernels for which there is a Parseval Theorem. Then we define the wavelet multipliers for this class of operators, and study their boundedness and Schatten class properties. We show that the wavelet multiplier is unitary equivalent to a scalar multiple of the phase space restriction operator. Moreover we prove that a signal which is almost time and almost bandlimited can be approximated by its projection on the span of the first eigenfunctions of the phase space restriction operator, corresponding to the largest eigenvalues which are close to one.
0
0
1
0
0
0
Inferring Properties of the ISM from Supernova Remnant Size Distributions
We model the size distribution of supernova remnants to infer the surrounding ISM density. Using simple, yet standard SNR evolution models, we find that the distribution of ambient densities is remarkably narrow; either the standard assumptions about SNR evolution are wrong, or observable SNRs are biased to a narrow range of ambient densities. We show that the size distributions are consistent with log-normal, which severely limits the number of model parameters in any SNR population synthesis model. Simple Monte Carlo simulations demonstrate that the size distribution is indistinguishable from log-normal when the SNR sample size is less than 600. This implies that these SNR distributions provide only information on the mean and variance, yielding additional information only when the sample size grows larger than $\sim{600}$ SNRs. To infer the parameters of the ambient density, we use Bayesian statistical inference under the assumption that SNR evolution is dominated by the Sedov phase. In particular, we use the SNR sizes and explosion energies to estimate the mean and variance of the ambient medium surrounding SNR progenitors. We find that the mean ISM particle density around our sample of SNRs is $\mu_{\log{n}} = -1.33$, in $\log_{10}$ of particles per cubic centimeter, with variance $\sigma^2_{\log{n}} = 0.49$. If interpreted at face value, this implies that most SNRs result from supernovae propagating in the warm, ionized medium. However, it is also likely that either SNR evolution is not dominated by the simple Sedov evolution or SNR samples are biased to the warm, ionized medium (WIM).
0
1
0
0
0
0
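The claim in the abstract above, that log-normality cannot be rejected for small samples, can be illustrated with a quick Monte Carlo. The generating model below (a two-component mixture in log size) is arbitrary and chosen only for illustration; note also that estimating the mean and standard deviation from the data makes the KS p-values approximate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def fake_snr_sizes(n):
    """Draw sizes from a deliberately non-log-normal mixture (illustrative)."""
    comp = rng.random(n) < 0.5
    logR = np.where(comp, rng.normal(0.0, 0.3, n), rng.normal(0.6, 0.3, n))
    return 10 ** logR

for n in [100, 300, 600, 2000, 10000]:
    logR = np.log10(fake_snr_sizes(n))
    z = (logR - logR.mean()) / logR.std(ddof=1)     # standardised log sizes
    p = stats.kstest(z, 'norm').pvalue
    print(f'n={n:6d}  KS p-value vs normal(log sizes): {p:.3f}')
# small samples are typically consistent with log-normal; large ones reject it
```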
Mining Target Attribute Subspace and Set of Target Communities in Large Attributed Networks
Community detection provides invaluable help for various applications, such as marketing and product recommendation. Traditional community detection methods designed for plain networks may not be able to detect communities with homogeneous attributes on attributed networks that carry attribute information. Most recent attributed community detection methods may fail to capture the requirements of a specific application and are thus unable to mine the set of communities it requires. In this paper, we aim to detect the set of target communities in the target subspace, which has some focus attributes with large importance weights that satisfy the requirements of a specific application. In order to improve the generality of the problem, we address it in an extreme case where only two sample nodes in any potential target community are provided. A Target Subspace and Communities Mining (TSCM) method is proposed. In TSCM, a sample-information extension method is designed to extend the two sample nodes to a set of exemplar nodes, from which the target subspace is inferred. The set of target communities is then located and mined based on the target subspace. Experiments on synthetic datasets demonstrate the effectiveness and efficiency of our method, and applications on real-world datasets show its practical value.
1
1
0
0
0
0
Achieving rental harmony with a secretive roommate
Given the subjective preferences of n roommates in an n-bedroom apartment, one can use Sperner's lemma to find a division of the rent such that each roommate is content with a distinct room. At the given price distribution, no roommate has a strictly stronger preference for a different room. We give a new elementary proof that the subjective preferences of only n-1 of the roommates actually suffice to achieve this envy-free rent division. Our proof, in particular, yields an algorithm to find such a fair division of rent. The techniques also give generalizations of Sperner's lemma including a new proof of a conjecture of the third author.
0
0
1
0
0
0
A Channel-Based Perspective on Conjugate Priors
A desired closure property in Bayesian probability is that an updated posterior distribution be in the same class of distributions --- say Gaussians --- as the prior distribution. When the updating takes place via a statistical model, one calls the class of prior distributions the `conjugate priors' of the model. This paper gives (1) an abstract formulation of this notion of conjugate prior, using channels, in a graphical language, (2) a simple abstract proof that such conjugate priors yield Bayesian inversions, and (3) a logical description of conjugate priors that highlights the required closure of the priors under updating. The theory is illustrated with several standard examples, also covering multiple updating.
1
0
0
0
0
0
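A standard concrete instance of the closure property discussed in the abstract above is the Beta family, which is conjugate to the Bernoulli likelihood: updating a Beta prior on an observed coin flip yields another Beta distribution. A minimal worked example:

```python
from fractions import Fraction

def beta_bernoulli_update(alpha, beta, observation):
    """Posterior of a Beta(alpha, beta) prior after one Bernoulli observation.
    Conjugacy: the posterior is again a Beta distribution."""
    return (alpha + observation, beta + (1 - observation))

prior = (Fraction(1), Fraction(1))          # uniform Beta(1, 1) prior
for obs in [1, 1, 0, 1]:                    # observed coin flips
    prior = beta_bernoulli_update(*prior, obs)

alpha, beta = prior
print(f'posterior: Beta({alpha}, {beta}); mean = {alpha / (alpha + beta)}')
# posterior: Beta(4, 2); mean = 2/3
```

In the paper's channel-based language, this update is exactly the Bayesian inversion that the conjugate-prior structure guarantees to stay inside the prior class.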
Cascaded Coded Distributed Computing on Heterogeneous Networks
Coded distributed computing (CDC) introduced by Li et al. in 2015 offers an efficient approach to trade computing power for a reduction in the communication load in general distributed computing frameworks such as MapReduce. For the more general cascaded CDC, Map computations are repeated at $r$ nodes to significantly reduce the communication load among nodes tasked with computing $Q$ Reduce functions $s$ times. While an achievable cascaded CDC scheme has been proposed, it only operates on homogeneous networks, where the storage, computation load and communication load of each computing node are the same. In this paper, we address this limitation by proposing a novel combinatorial design which operates on heterogeneous networks where nodes have varying storage and computing capabilities. We provide an analytical characterization of the computation-communication trade-off and show that it is optimal within a constant factor and can outperform the state-of-the-art homogeneous schemes.
1
0
0
0
0
0
A FEL Based on a Superlattice
The motion and photon emission of electrons in a superlattice may be described as in an undulator. Therefore, there is a close analogy between ballistic electrons in a superlattice and electrons in a free electron laser (FEL). Building on this analogy, the intensity of photon emission in the IR region and the gain are calculated. It is shown that the amplification can be significant, reaching tens of percent.
0
1
0
0
0
0
Thermal and non-thermal emission from the cocoon of a gamma-ray burst jet
We present hydrodynamic simulations of the hot cocoon produced when a relativistic jet passes through the gamma-ray burst (GRB) progenitor star and its environment, and we compute the lightcurve and spectrum of the radiation emitted by the cocoon. The radiation from the cocoon has a nearly thermal spectrum with a peak in the X-ray band, and it lasts for a few minutes in the observer frame; the cocoon radiation starts at roughly the same time as when $\gamma$-rays from a burst trigger detectors aboard GRB satellites. The isotropic cocoon luminosity ($\sim 10^{47}$ erg s$^{-1}$) is of the same order of magnitude as the X-ray luminosity of a typical long-GRB afterglow during the plateau phase. This radiation should be identifiable in the Swift data because of its nearly thermal spectrum which is distinct from the somewhat brighter power-law component. The detection of this thermal component would provide information regarding the size and density stratification of the GRB progenitor star. Photons from the cocoon are also inverse-Compton (IC) scattered by electrons in the relativistic jet. We present the IC lightcurve and spectrum, by post-processing the results of the numerical simulations. The IC spectrum lies in 10 keV--MeV band for typical GRB parameters. The detection of this IC component would provide an independent measurement of GRB jet Lorentz factor and it would also help to determine the jet magnetisation parameter.
0
1
0
0
0
0
Least Squares Polynomial Chaos Expansion: A Review of Sampling Strategies
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least-squares-based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison of the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for the problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODEs are employed and/or the oversampling ratio is low.
0
0
0
1
0
0
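A minimal example of the least-squares PCE setup reviewed above: for a standard Gaussian input, the probabilists' Hermite polynomials form the orthogonal basis, and the coefficients are obtained from a Monte Carlo design matrix by least squares. The toy model and sample sizes below are illustrative, and the oversampling ratio here is deliberately high.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)

def fit_pce(model, order=6, n_samples=200):
    """Least-squares PCE in 1D for a standard Gaussian input."""
    xi = rng.standard_normal(n_samples)            # Monte Carlo sampling
    V = hermevander(xi, order)                     # design matrix: He_0..He_order
    coeffs, *_ = np.linalg.lstsq(V, model(xi), rcond=None)
    return coeffs

model = lambda x: np.exp(0.3 * x) + 0.1 * x**3     # toy model response
c = fit_pce(model)
# He_0 = 1 and E[He_k] = 0 for k >= 1 under the Gaussian measure,
# so c[0] directly estimates the mean of the model output
print('PCE mean:', c[0], ' MC mean:', model(rng.standard_normal(10**5)).mean())
```

Swapping the Monte Carlo draw of `xi` for Latin hypercube, quadrature, or coherence-optimal samples is precisely the design choice the review compares.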
Efficient Dense Labeling of Human Activity Sequences from Wearables using Fully Convolutional Networks
Recognizing human activities in a sequence is a challenging area of research in ubiquitous computing. Most approaches use a fixed-size sliding window over consecutive samples to extract features---either handcrafted or learned---and predict a single label for all samples in the window. Two key problems emanate from this approach: i) the samples in one window may not always share the same label, so using one label for all samples within a window inevitably leads to a loss of information; ii) the testing phase is constrained by the window size selected during training, while the best window size is difficult to tune in practice. We propose an efficient algorithm that can predict the label of each sample, which we call dense labeling, in a sequence of human activities of arbitrary length using a fully convolutional network. In particular, our approach overcomes the problems posed by the sliding window step. Additionally, our algorithm learns both the features and classifier automatically. We release a new daily activity dataset based on a wearable sensor with hospitalized patients. We conduct extensive experiments and demonstrate that our proposed approach is able to outperform the state of the art in terms of classification and label misalignment measures on three challenging datasets: Opportunity, Hand Gesture, and our new dataset.
1
0
0
0
0
0
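The dense-labeling idea from the abstract above can be sketched as a small 1D fully convolutional network: because there are no dense layers, it accepts sequences of arbitrary length and emits one class-score vector per sample, avoiding the sliding-window problems described there. Channel counts and class numbers below are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DenseLabeler(nn.Module):
    """1D fully convolutional network producing a label per time step,
    so no fixed sliding window is needed (illustrative sketch)."""
    def __init__(self, channels=9, classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, classes, kernel_size=1),   # per-sample class scores
        )

    def forward(self, x):        # x: (batch, channels, time), any time length
        return self.net(x)       # (batch, classes, time)

model = DenseLabeler()
signal = torch.randn(2, 9, 317)               # arbitrary-length sequences work
labels = model(signal).argmax(dim=1)          # one activity label per sample
print(labels.shape)                           # torch.Size([2, 317])
```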
Higher Tetrahedral Algebras
We introduce and study the higher tetrahedral algebras, an exotic family of finite-dimensional tame symmetric algebras over an algebraically closed field. The Gabriel quiver of such an algebra is the triangulation quiver associated to the coherent orientation of the tetrahedron. Surprisingly, these algebras occurred in the classification of all algebras of generalised quaternion type, but are not weighted surface algebras. We prove that a higher tetrahedral algebra is periodic if and only if it is non-singular.
0
0
1
0
0
0
Adaptive multi-penalty regularization based on a generalized Lasso path
For many algorithms, parameter tuning remains a challenging and critical task, which becomes tedious and infeasible in a multi-parameter setting. Multi-penalty regularization, successfully used for solving underdetermined sparse regression problems of unmixing type, where signal and noise are additively mixed, is one such example. In this paper, we propose a novel algorithmic framework for an adaptive parameter choice in multi-penalty regularization with a focus on correct support recovery. Building upon the theory of regularization paths and algorithms for single-penalty functionals, we extend these ideas to a multi-penalty framework by providing an efficient procedure for the construction of regions containing structurally similar solutions, i.e., solutions with the same sparsity and sign pattern, over the whole range of parameters. Combining this with a model selection criterion, we can choose regularization parameters in a data-adaptive manner. Another advantage of our algorithm is that it provides an overview of the solution stability over the whole range of parameters. This can be further exploited to obtain additional insights into the problem of interest. We provide a numerical analysis of our method and compare it to state-of-the-art single-penalty algorithms for compressed sensing problems in order to demonstrate the robustness and power of the proposed algorithm.
1
0
0
1
0
0
Shape Generation using Spatially Partitioned Point Clouds
We propose a method to generate 3D shapes using point clouds. Given a point-cloud representation of a 3D shape, our method builds a kd-tree to spatially partition the points. This orders them consistently across all shapes, resulting in reasonably good correspondences between shapes. We then use PCA to derive a linear shape basis across the spatially partitioned points, and optimize the point ordering by iteratively minimizing the PCA reconstruction error. Even with the spatial sorting, the point clouds are inherently noisy and the resulting distribution over the shape coefficients can be highly multi-modal. We propose to use the expressive power of neural networks to learn a distribution over the shape coefficients in a generative-adversarial framework. Compared to 3D shape generative models trained on voxel representations, our point-based method is considerably more light-weight and scalable, with little loss of quality. It also outperforms simpler linear factor models such as Probabilistic PCA, both qualitatively and quantitatively, on a number of categories from the ShapeNet dataset. Furthermore, our method can easily incorporate other point attributes such as normal and color information, an additional advantage over voxel-based representations.
1
0
0
0
0
0
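A minimal sketch of the spatial partitioning step described above: points are recursively sorted along alternating axes, which induces a consistent kd-tree ordering across shapes, after which a linear shape basis can be read off with PCA. The recursion and the toy random "shapes" below are illustrative simplifications, not the paper's pipeline.

```python
import numpy as np

def kdtree_order(points, depth=0):
    """Recursively sort points by alternating axes (kd-tree traversal order),
    yielding a consistent point ordering across shapes."""
    if len(points) <= 1:
        return points
    axis = depth % points.shape[1]
    points = points[np.argsort(points[:, axis], kind='stable')]
    mid = len(points) // 2
    return np.vstack([kdtree_order(points[:mid], depth + 1),
                      kdtree_order(points[mid:], depth + 1)])

rng = np.random.default_rng(0)
clouds = [kdtree_order(rng.random((256, 3))) for _ in range(100)]  # toy "shapes"
X = np.stack([c.ravel() for c in clouds])        # one row per ordered shape
# linear shape basis via PCA (SVD of centred data), as a first approximation
U, S, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
basis = Vt[:10]                                  # top-10 shape components
```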
Open Vocabulary Scene Parsing
Recognizing arbitrary objects in the wild has been a challenging problem due to the limitations of existing classification models and datasets. In this paper, we propose a new task that aims at parsing scenes with a large and open vocabulary, and several evaluation metrics are explored for this problem. Our proposed approach to this problem is a joint image pixel and word concept embeddings framework, where word concepts are connected by semantic relations. We validate the open vocabulary prediction ability of our framework on ADE20K dataset which covers a wide variety of scenes and objects. We further explore the trained joint embedding space to show its interpretability.
1
0
0
0
0
0
Parameter Estimation in Mean Reversion Processes with Periodic Functional Tendency
This paper describes a procedure to estimate the parameters of mean reversion processes with a functional tendency defined by a periodic continuous deterministic function, expressed as a truncated Fourier series. Two estimation phases are defined. In the first phase, using Gaussian techniques and the Euler-Maruyama discretization, we obtain the maximum likelihood function, which allows us to find estimators of the external parameters and an estimate of the expected value of the process. In the second phase, the periodic functional tendency, with its phase and amplitude parameters, is re-estimated, which improves the initial estimate. Experimental results using simulated data sets are graphically illustrated.
0
0
0
1
0
0
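A sketch of the first estimation phase under simplifying assumptions: for $dX_t = \theta(\mu(t) - X_t)\,dt + \sigma\,dW_t$ with a one-harmonic truncated Fourier tendency $\mu(t) = a_0 + a_1\cos(\omega t) + b_1\sin(\omega t)$, the Euler-Maruyama discretization yields Gaussian transitions whose likelihood can be maximized numerically. Taking the frequency as known and using Nelder-Mead are assumptions made for brevity.

```python
import numpy as np
from scipy.optimize import minimize

w, dt = 2 * np.pi, 0.01
mu = lambda t, a0, a1, b1: a0 + a1 * np.cos(w * t) + b1 * np.sin(w * t)

def neg_log_lik(params, t, x):
    theta, sigma, a0, a1, b1 = params
    drift = x[:-1] + theta * (mu(t[:-1], a0, a1, b1) - x[:-1]) * dt
    resid = x[1:] - drift                  # Euler-Maruyama Gaussian transition
    var = sigma**2 * dt
    return 0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))

rng = np.random.default_rng(2)             # simulate a path for the demo
t = np.arange(0, 20, dt); x = np.empty_like(t); x[0] = 1.0
for k in range(len(t) - 1):
    x[k + 1] = x[k] + 2.0 * (mu(t[k], 1.0, 0.5, 0.3) - x[k]) * dt \
               + 0.2 * np.sqrt(dt) * rng.standard_normal()

fit = minimize(neg_log_lik, x0=[1.0, 0.5, 0.0, 0.0, 0.0], args=(t, x),
               method='Nelder-Mead')
print(fit.x)    # should be near (2.0, 0.2, 1.0, 0.5, 0.3)
```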
User Interface (UI) Design Issues for the Multilingual Users: A Case Study
A multitude of web and desktop applications are now widely available in diverse human languages. This paper explores the design issues that are specifically relevant to multilingual users. It reports on continued studies of Information System (IS) issues and users' behaviour across cross-cultural and transnational boundaries. Taking the BBC website as a model that is internationally recognised, usability tests were conducted to compare different versions of the website. The dependent variables derived from the questionnaire were analysed (via descriptive statistics) to elucidate the multilingual UI design issues. Using Principal Component Analysis (PCA), five de-correlated variables were identified, which were then used for hypothesis tests. A modified version of Herzberg's Hygiene-motivational Theory about the Workplace was applied to assess the components used in the website. Overall, it was concluded that the English versions of the website gave superior usability results, and this implies the need for deeper study of the usability problems of the translated versions.
1
0
0
0
0
0
NetSpam: a Network-based Spam Detection Framework for Reviews in Online Social Media
Nowadays, many people rely on the available content in social media for their decisions (e.g. reviews and feedback on a topic or product). The possibility that anybody can leave a review provides a golden opportunity for spammers to write spam reviews about products and services for different interests. Identifying these spammers and the spam content is a hot topic of research, and although a considerable number of studies have been done recently toward this end, the methodologies put forth so far still barely detect spam reviews, and none of them show the importance of each extracted feature type. In this study, we propose a novel framework, named NetSpam, which utilizes spam features for modeling review datasets as heterogeneous information networks to map the spam detection procedure into a classification problem in such networks. Using the importance of spam features helps us to obtain better results in terms of different metrics on real-world review datasets from the Yelp and Amazon websites. The results show that NetSpam outperforms the existing methods, and that among four categories of features, including review-behavioral, user-behavioral, review-linguistic and user-linguistic, the first type performs better than the other categories.
1
1
0
0
0
0
Proving Non-Deterministic Computations in Agda
We investigate proving properties of Curry programs using Agda. First, we address the functional correctness of Curry functions that, apart from some syntactic and semantic differences, are in the intersection of the two languages. Second, we use Agda to model non-deterministic functions with two distinct and competitive approaches incorporating the non-determinism. The first approach eliminates non-determinism by considering the set of all non-deterministic values produced by an application. The second approach encodes every non-deterministic choice that the application could perform. We consider our initial experiment a success. Although proving properties of programs is a notoriously difficult task, the functional logic paradigm does not seem to add any significant layer of difficulty or complexity to the task.
1
0
0
0
0
0
NeuroRule: A Connectionist Approach to Data Mining
Classification, which involves finding rules that partition a given data set into disjoint groups, is one class of data mining problems. Approaches proposed so far for mining classification rules for large databases are mainly decision tree based symbolic learning methods. The connectionist approach based on neural networks has been thought not well suited for data mining. One of the major reasons cited is that knowledge generated by neural networks is not explicitly represented in the form of rules suitable for verification or interpretation by humans. This paper examines this issue. With our newly developed algorithms, rules which are similar to, or more concise than those generated by the symbolic methods can be extracted from the neural networks. The data mining process using neural networks with the emphasis on rule extraction is described. Experimental results and comparison with previously published works are presented.
1
0
0
0
0
0
Contextual Explanation Networks
Modern learning algorithms excel at producing accurate but complex models of the data. However, deploying such models in the real world requires extra care: we must ensure their reliability, robustness, and absence of undesired biases. This motivates the development of models that are equally accurate but can also be easily inspected and assessed beyond their predictive performance. To this end, we introduce contextual explanation networks (CENs)---a class of architectures that learn to predict by generating and utilizing intermediate, simplified probabilistic models. Specifically, CENs generate parameters for intermediate graphical models which are further used for prediction and play the role of explanations. Contrary to the existing post-hoc model-explanation tools, CENs learn to predict and to explain jointly. Our approach offers two major advantages: (i) for each prediction, valid, instance-specific explanations are generated with no computational overhead and (ii) prediction via explanation acts as a regularizer and boosts performance in low-resource settings. We analyze the proposed framework theoretically and experimentally. Our results on image and text classification and survival analysis tasks demonstrate that CENs are not only competitive with the state-of-the-art methods but also offer additional insights behind each prediction that are valuable for decision support. We also show that while post-hoc methods may produce misleading explanations in certain cases, CENs are always consistent and allow such cases to be detected systematically.
1
0
0
1
0
0
Inference for Differential Equation Models using Relaxation via Dynamical Systems
Statistical regression models whose mean functions are represented by ordinary differential equations (ODEs) can be used to describe phenomena that are dynamical in nature, which are abundant in areas such as biology, climatology and genetics. The estimation of the parameters of ODE-based models is essential for understanding their dynamics, but the lack of an analytical solution of the ODE makes the parameter estimation challenging. The aim of this paper is to propose a general and fast framework of statistical inference for ODE-based models by relaxation of the underlying ODE system. Relaxation is achieved by a properly chosen numerical procedure, such as the Runge-Kutta, and by introducing additive Gaussian noises with small variances. Consequently, filtering methods can be applied to obtain the posterior distribution of the parameters in the Bayesian framework. The main advantage of the proposed method is computation speed. In a simulation study, the proposed method was at least 14 times faster than the other methods. Theoretical results which guarantee the convergence of the posterior of the approximated dynamical system to the posterior of the true model are presented. Explicit expressions are given that relate the order and the mesh size of the Runge-Kutta procedure to the rate of convergence of the approximated posterior as a function of sample size.
0
0
0
1
0
0
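The relaxation device described above can be shown in a few lines: a deterministic Runge-Kutta step becomes the mean of a Gaussian transition with a small variance, turning the ODE model into a state-space model to which standard filtering applies. The toy oscillator and noise level below are assumptions made for illustration.

```python
import numpy as np

def rk4_step(f, x, t, h):
    """One classical fourth-order Runge-Kutta step for x' = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def relaxed_transition(f, x, t, h, tau2, rng):
    """Relaxation: the RK step is the mean of a Gaussian transition with
    small variance tau2, giving a state-space model for filtering."""
    return rk4_step(f, x, t, h) + np.sqrt(tau2) * rng.standard_normal(np.shape(x))

rng = np.random.default_rng(0)
f = lambda t, x: np.array([x[1], -x[0]])       # toy oscillator dynamics
x = np.array([1.0, 0.0])
for k in range(100):
    x = relaxed_transition(f, x, k * 0.01, 0.01, 1e-6, rng)
```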
Joint Trajectory and Communication Design for UAV-Enabled Multiple Access
Unmanned aerial vehicles (UAVs) have attracted significant interest recently in wireless communication due to their high maneuverability, flexible deployment, and low cost. This paper studies a UAV-enabled wireless network where the UAV is employed as an aerial mobile base station (BS) to serve a group of users on the ground. To achieve fair performance among users, we maximize the minimum throughput over all ground users by jointly optimizing the multiuser communication scheduling and UAV trajectory over a finite horizon. The formulated problem is shown to be a mixed integer non-convex optimization problem that is difficult to solve in general. We thus propose an efficient iterative algorithm by applying the block coordinate descent and successive convex optimization techniques, which is guaranteed to converge to at least a locally optimal solution. To achieve fast convergence and stable throughput, we further propose a low-complexity initialization scheme for the UAV trajectory design based on the simple circular trajectory. Extensive simulation results are provided which show significant throughput gains of the proposed design as compared to other benchmark schemes.
1
0
1
0
0
0
Imaging the Schwarzschild-radius-scale Structure of M87 with the Event Horizon Telescope using Sparse Modeling
We propose a new imaging technique for radio and optical/infrared interferometry. The proposed technique reconstructs the image from the visibility amplitude and closure phase, which are standard data products of short-millimeter very long baseline interferometers such as the Event Horizon Telescope (EHT) and optical/infrared interferometers, by utilizing two regularization functions: the $\ell_1$-norm and total variation (TV) of the brightness distribution. In the proposed method, optimal regularization parameters, which represent the sparseness and effective spatial resolution of the image, are derived from data themselves using cross validation (CV). As an application of this technique, we present simulated observations of M87 with the EHT based on four physically motivated models. We confirm that $\ell_1$+TV regularization can achieve an optimal resolution of $\sim 20-30$% of the diffraction limit $\lambda/D_{\rm max}$, which is the nominal spatial resolution of a radio interferometer. With the proposed technique, the EHT can robustly and reasonably achieve super-resolution sufficient to clearly resolve the black hole shadow. These results make it promising for the EHT to provide an unprecedented view of the event-horizon-scale structure in the vicinity of the super-massive black hole in M87 and also the Galactic center Sgr A*.
0
1
0
0
0
0
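To illustrate the role of the $\ell_1$ regularizer in imaging problems of the kind described above, the sketch below runs ISTA (proximal gradient) on a toy compressed-sensing instance, minimizing $\tfrac{1}{2}\|y - Ax\|_2^2 + \lambda\|x\|_1$. The TV term, the closure-phase data products, and the cross-validation of regularization parameters used in the proposed pipeline are omitted; the random matrix A merely stands in for the interferometric sampling.

```python
import numpy as np

def ista(A, y, lam, steps=500):
    """ISTA for 0.5*||y - A x||^2 + lam*||x||_1 (minimal sketch)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - (A.T @ (A @ x - y)) / L      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(3)
x_true = np.zeros(200)
x_true[rng.choice(200, 10, replace=False)] = 1.0   # sparse "image"
A = rng.standard_normal((80, 200)) / np.sqrt(80)   # toy measurement operator
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
print('relative error:', np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```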
The process of purely event-driven programs
Using process algebra, this paper describes the formalisation of the process semantics behind a purely event-driven programming language.
1
0
0
0
0
0
Partial dust obscuration in active galactic nuclei as a cause of broad-line profile and lag variability, and apparent accretion disc inhomogeneities
The profiles of the broad emission lines of active galactic nuclei (AGNs) and the time delays in their response to changes in the ionizing continuum ("lags") give information about the structure and kinematics of the inner regions of AGNs. Line profiles are also our main way of estimating the masses of the supermassive black holes (SMBHs). However, the profiles often show ill-understood, asymmetric structure and velocity-dependent lags vary with time. Here we show that partial obscuration of the broad-line region (BLR) by outflowing, compact, dusty clumps produces asymmetries and velocity-dependent lags similar to those observed. Our model explains previously inexplicable changes in the ratios of the hydrogen lines with time and velocity, the lack of correlation of changes in line profiles with variability of the central engine, the velocity dependence of lags, and the change of lags with time. We propose that changes on timescales longer than the light-crossing time do not come from dynamical changes in the BLR, but are a natural result of the effect of outflowing dusty clumps driven by radiation pressure acting on the dust. The motion of these clumps offers an explanation of long-term changes in polarization. The effects of the dust complicate the study of the structure and kinematics of the BLR and the search for sub-parsec SMBH binaries. Partial obscuration of the accretion disc can also provide the local fluctuations in luminosity that can explain sizes deduced from microlensing.
0
1
0
0
0
0
From Plants to Landmarks: Time-invariant Plant Localization that uses Deep Pose Regression in Agricultural Fields
Agricultural robots are expected to increase yields in a sustainable way and automate precision tasks, such as weeding and plant monitoring. At the same time, they move in a continuously changing, semi-structured field environment, in which features can hardly be found and reproduced at a later time. Challenges for Lidar and visual detection systems stem from the fact that plants can be very small, overlapping and have a steadily changing appearance. Therefore, a popular way to localize vehicles with high accuracy is based on expensive global navigation satellite systems and not on natural landmarks. The contribution of this work is a novel image-based plant localization technique that uses the time-invariant stem emerging point as a reference. Our approach is based on a fully convolutional neural network that learns landmark localization from RGB and NIR image input in an end-to-end manner. The network performs pose regression to generate a plant location likelihood map. Our approach allows us to cope with visual variances of plants both for different species and different growth stages. We achieve high localization accuracies as shown in detailed evaluations of a sugar beet cultivation phase. In experiments with our BoniRob we demonstrate that detections can be robustly reproduced with centimeter accuracy.
1
0
0
0
0
0
There's more to the multimedia effect than meets the eye: is seeing pictures believing?
Textbooks in applied mathematics often use graphs to explain the meaning of formulae, even though their benefit is still not fully explored. To test the processes underlying this assumed multimedia effect we collected performance scores, eye movements, and think-aloud protocols from students solving problems in vector calculus with and without graphs. Results showed no overall multimedia effect, but instead a tendency to confirm statements that were accompanied by graphs, irrespective of whether these statements were true or false. Eye movement and verbal data shed light on this surprising finding. Students looked proportionally less at the text and the problem statement when a graph was present. Moreover, they experienced more mental effort with the graph, as indicated by more silent pauses in thinking aloud. Hence, students actively processed the graphs. This, however, was not sufficient. Further analysis revealed that the more students looked at the statement, the better they performed. Thus, in the multimedia condition the graph drew students' attention and cognitive capacities away from the statement. A good alternative strategy in the multimedia condition was to look frequently between graph and problem statement, and thus to integrate their information. In conclusion, graphs influence where students look and what they process, and may even mislead them into believing accompanying information. Thus, teachers and textbook designers should be very critical about when to use graphs and carefully consider how the graphs are integrated with other parts of the problem.
0
1
1
0
0
0
Methods of Enumerating Two Vertex Maps of Arbitrary Genus
This paper provides an alternate proof to parts of the Goulden-Slofstra formula for enumerating two vertex maps by genus, which is an extension of the famous Harer-Zagier formula that computes the Euler characteristic of the moduli space of curves. This paper also shows a further simplification to the Goulden-Slofstra formula. Portions of this alternate proof will be used in a subsequent paper, where it forms a basis for a more general result that applies for a certain class of maps with an arbitrary number of vertices.
0
0
1
0
0
0
Higher Theory and the Three Problems of Physics
According to the Butterfield--Isham proposal, to understand quantum gravity we must revise the way we view the universe of mathematics. However, this paper demonstrates that the current elaborations of this programme neglect quantum interactions. The paper then introduces the Faddeev--Mickelsson anomaly which obstructs the renormalization of Yang--Mills theory, suggesting that theorising on many-particle systems requires a many-topos view of mathematics itself: higher theory. As our main contribution, the topos-theoretic framework is used to conceptualise the fact that there are principally three different quantisation problems, the differences of which have been ignored not just by topos physicists but by most philosophers of science. We further argue that if higher theory proves to be necessary for understanding quantum gravity, its implications for philosophy will be foundational: higher theory challenges the propositional concept of truth and thus the very meaning of theorising in science.
0
1
1
0
0
0
Light emission by accelerated electric, toroidal and anapole dipolar sources
Emission of electromagnetic radiation by accelerated particles with electric, toroidal and anapole dipole moments is analyzed. It is shown that the ellipticity of the emitted light can be used to differentiate between electric and toroidal dipole sources, and that anapoles, elementary neutral non-radiating configurations which consist of electric and toroidal dipoles, can emit light under uniform acceleration. The existence of non-radiating configurations in electrodynamics implies that it is impossible to fully determine the internal makeup of the emitter given only the distribution of the emitted light. Here we demonstrate that there is a loophole in this 'inverse source problem'. Our results imply that there may be a whole range of new phenomena to be discovered by studying the electromagnetic response of matter under acceleration.
0
1
0
0
0
0
Bayesian Approaches to Distribution Regression
Distribution regression has recently attracted much interest as a generic solution to the problem of supervised learning where labels are available at the group level, rather than at the individual level. Current approaches, however, do not propagate the uncertainty in observations due to sampling variability in the groups. This effectively assumes that small and large groups are estimated equally well, and should have equal weight in the final regression. We account for this uncertainty with a Bayesian distribution regression formalism, improving the robustness and performance of the model when group sizes vary. We frame our models in a neural network style, allowing for simple MAP inference using backpropagation to learn the parameters, as well as MCMC-based inference which can fully propagate uncertainty. We demonstrate our approach on illustrative toy datasets, as well as on a challenging problem of predicting age from images.
1
0
0
1
0
0
Atomic-Scale Structure Relaxation, Chemistry and Charge Distribution of Dislocation Cores in SrTiO3
By using state-of-the-art microscopy and spectroscopy in aberration-corrected scanning transmission electron microscopes, we determine the atomic arrangements, occupancy, elemental distribution, and the electronic structures of dislocation cores in a 10°-tilted SrTiO3 bicrystal. We identify that there are two different types of oxygen-deficient dislocation cores, i.e., the SrO-plane-terminated Sr0.82Ti0.85O3-x (Ti3.67+, 0.48<x<0.91) and the TiO2-plane-terminated Sr0.63Ti0.90O3-y (Ti3.60+, 0.57<y<1). They have the same Burgers vector of a[100] but different atomic arrangements and chemical properties. Besides the oxygen vacancies, Sr vacancies and a rocksalt-like titanium oxide reconstruction are also identified in the dislocation core with TiO2-plane termination. Our atomic-scale study reveals the true atomic structures and chemistry of individual dislocation cores, providing useful insights into understanding the properties of dislocations and grain boundaries.
0
1
0
0
0
0
Transição de fase no sistema de Hénon-Heiles (Phase transition in the Henon-Heiles system)
The Henon-Heiles system was originally proposed to describe the dynamical behavior of galaxies, but it has been widely applied in dynamical systems because it exhibits great detail in phase space. This work presents the formalism to describe the Henon-Heiles system and a qualitative approach to its dynamical behavior. The growth of the chaotic region in phase space is observed in Poincare surfaces of section as the total energy increases. Islands of regularity remain around stable points, and relevant phenomena such as stickiness appear.
0
1
0
0
0
0
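The Poincare surface of section mentioned above is straightforward to reproduce numerically: integrate Hamilton's equations for $H = \tfrac{1}{2}(p_x^2 + p_y^2) + \tfrac{1}{2}(x^2 + y^2) + x^2 y - \tfrac{1}{3}y^3$ and record $(y, p_y)$ whenever the orbit crosses $x = 0$ with $p_x > 0$. The initial slice and the energy below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    x, y, px, py = s                       # Hamilton's equations for Henon-Heiles
    return [px, py, -x - 2 * x * y, -y - x * x + y * y]

def crossing(t, s):                        # event: trajectory crosses x = 0
    return s[0]
crossing.direction = 1                     # only crossings with increasing x (px > 0)

def section(E, y0=0.1, t_max=500.0):
    # px0 from energy conservation on the slice x = 0, py = 0
    px0 = np.sqrt(2 * E - y0**2 + 2 * y0**3 / 3)
    sol = solve_ivp(rhs, (0, t_max), [0.0, y0, px0, 0.0], events=crossing,
                    rtol=1e-8, atol=1e-8)
    return sol.y_events[0][:, 1], sol.y_events[0][:, 3]   # (y, py) at the section

y, py = section(E=1 / 8)                   # below the escape energy 1/6
print(len(y), 'section points')
```

Scattering such points for many initial conditions at increasing energies shows the growth of the chaotic sea around the shrinking islands of regularity described above.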
Self-similar solutions of fragmentation equations revisited
We study the large time behaviour of the mass (size) of particles described by the fragmentation equation with homogeneous breakup kernel. We give necessary and sufficient conditions for the convergence of solutions to the unique self-similar solution.
0
0
1
0
0
0
Refining Source Representations with Relation Networks for Neural Machine Translation
Although neural machine translation (NMT) with the encoder-decoder framework has achieved great success in recent times, it still suffers from some drawbacks: RNNs tend to forget old information which is often useful, and the encoder only operates through words without considering word relationships. To solve these problems, we introduce a relation network (RN) into NMT to refine the encoding representations of the source. In our method, the RN first augments the representation of each source word with its neighbors and reasons about all the possible pairwise relations between them. Then the source representations and all the relations are fed to the attention module and the decoder together, keeping the main encoder-decoder architecture unchanged. Experiments on two Chinese-to-English data sets of different scales both show that our method can outperform the competitive baselines significantly.
1
0
0
0
0
0
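The relation-network refinement described above can be sketched as follows: each source state is paired with every other state, an MLP scores each ordered pair, and the aggregated relations are added back to the original representation before attention and decoding. The layer below is an illustrative reading of the RN idea, not the paper's exact network; note its O(T^2) memory cost in the source length.

```python
import torch
import torch.nn as nn

class RelationLayer(nn.Module):
    """Augment each source state with pairwise relations to all others:
    r_i = sum_j MLP([h_i; h_j]) (illustrative sketch)."""
    def __init__(self, dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, h):                      # h: (batch, src_len, dim)
        B, T, D = h.shape
        hi = h.unsqueeze(2).expand(B, T, T, D) # h_i broadcast over j
        hj = h.unsqueeze(1).expand(B, T, T, D) # h_j broadcast over i
        pairs = torch.cat([hi, hj], dim=-1)    # all ordered pairs (i, j)
        rel = self.mlp(pairs).sum(dim=2)       # aggregate relations per position
        return h + rel                         # refined source representations

enc_states = torch.randn(2, 15, 256)           # stand-in encoder outputs
refined = RelationLayer()(enc_states)          # fed to attention and decoder
```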