Dataset schema:
title: string (length 7 to 239)
abstract: string (length 7 to 2.76k)
cs, phy, math, stat, quantitative biology, quantitative finance: int64 binary labels (0 or 1)

Each record below gives a title, an abstract, and a labels line with the six binary flags.
Tidal disruptions by rotating black holes: relativistic hydrodynamics with Newtonian codes
We propose an approximate approach for studying the relativistic regime of stellar tidal disruptions by rotating massive black holes. It combines an exact relativistic description of the hydrodynamical evolution of a test fluid in a fixed curved spacetime with a Newtonian treatment of the fluid's self-gravity. Explicit expressions for the equations of motion are derived for Kerr spacetime using two different coordinate systems. We implement the new methodology within an existing Newtonian Smoothed Particle Hydrodynamics code and show that including the additional physics involves very little extra computational cost. We carefully explore the validity of the novel approach by first testing its ability to recover geodesic motion, and then by comparing the outcome of tidal disruption simulations against previous relativistic studies. We further compare simulations in Boyer--Lindquist and Kerr--Schild coordinates and conclude that our approach allows accurate simulation even of tidal disruption events where the star penetrates deeply inside the tidal radius of a rotating black hole. Finally, we use the new method to study the effect of the black hole spin on the morphology and fallback rate of the debris streams resulting from tidal disruptions, finding that while the spin has little effect on the fallback rate, it does imprint heavily on the stream morphology, and can even be a determining factor in the survival or disruption of the star itself. Our methodology is discussed in detail as a reference for future astrophysical applications.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
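As an aside on the geodesic-recovery test mentioned in the abstract above, here is a minimal sketch of integrating an equatorial Kerr geodesic in Boyer--Lindquist coordinates (geometric units G = c = M = 1). The spin, energy and angular momentum values are illustrative assumptions; the radial potential is the standard equatorial form, not the paper's full hydrodynamical scheme.

```python
import numpy as np

# Equatorial Kerr geodesic in Boyer-Lindquist coordinates, geometric
# units (G = c = M = 1). Standard radial potential for a massive particle:
#   (dr/dtau)^2 = R(r) = E^2 - 1 + 2/r - (L^2 - a^2 (E^2 - 1))/r^2
#                        + 2 (L - a E)^2 / r^3
a, E, L = 0.9, 0.96, 3.2      # spin, energy, angular momentum (illustrative)

def R(r):
    return E**2 - 1 + 2/r - (L**2 - a**2*(E**2 - 1))/r**2 + 2*(L - a*E)**2/r**3

def dR(r, h=1e-6):            # numerical derivative of R
    return (R(r + h) - R(r - h)) / (2*h)

def phidot(r):                # dphi/dtau in the equatorial plane
    return (2*a*E/r + (1 - 2/r)*L) / (r**2 - 2*r + a**2)

# Differentiating (dr/dtau)^2 = R(r) gives d^2r/dtau^2 = R'(r)/2, which
# avoids tracking the sign of dr/dtau through the turning points.
r, phi = 10.0, 0.0
rdot, dtau = -np.sqrt(max(R(r), 0.0)), 1e-3
for _ in range(200_000):
    rdot += 0.5*dR(r)*dtau    # semi-implicit Euler: kick ...
    r += rdot*dtau            # ... then drift
    phi += phidot(r)*dtau

# Consistency check: rdot^2 should still track R(r) along the orbit.
print(f"r = {r:.4f}, phi = {phi:.4f}, drift |rdot^2 - R(r)| = {abs(rdot**2 - R(r)):.2e}")
```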
A promise checked is a promise kept: Inspection Testing
Occasionally, developers need to ensure that the compiler treats their code in a specific way that is only visible by inspecting intermediate or final compilation artifacts. This is particularly common with carefully crafted compositional libraries, where certain usage patterns are expected to trigger an intricate sequence of compiler optimizations -- stream fusion is a well-known example. The developer of such a library has to manually inspect build artifacts and check for the expected properties. Because this is too tedious to do often, it will likely go unnoticed if the property is broken by a change to the library code, its dependencies or the compiler. The lack of automation has led to released versions of such libraries breaking their documented promises. This indicates that there is an unrecognized need for a new testing paradigm, inspection testing, where the programmer declaratively describes non-functional properties of a compilation artifact and the compiler checks these properties. We define inspection testing abstractly, implement it in the context of Haskell and show that it increases the quality of such libraries.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
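The paper's implementation targets Haskell/GHC artifacts; as a loose, hypothetical Python analogue of the paradigm, one can declaratively assert a property of a compilation artifact, here that CPython's compiler constant-folds an expression in the emitted bytecode. The helper name is our own invention.

```python
import dis

def assert_folded_constant(fn, value):
    """Inspection-style test: the compilation artifact (the code object)
    must contain `value` as a pre-computed constant, i.e. the compiler
    kept its promise to fold the expression at compile time."""
    if value not in fn.__code__.co_consts:
        raise AssertionError(
            f"{fn.__name__} was not compiled with constant {value!r}:\n"
            + dis.Bytecode(fn).dis())

def seconds_per_day():
    return 60 * 60 * 24        # expected to be folded to 86400

assert_folded_constant(seconds_per_day, 86400)   # passes on CPython
print("promise checked: 60*60*24 was folded at compile time")
```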
Linear-Time Sequence Classification using Restricted Boltzmann Machines
Classification of sequence data is a topic of interest for dynamic Bayesian models and Recurrent Neural Networks (RNNs). While the former can explicitly model the temporal dependencies between class variables, the latter have the capability of learning representations. Several attempts have been made to improve performance by combining these two approaches or by increasing the processing capability of the hidden units in RNNs. This often results in complex models with a large number of learning parameters. In this paper, a compact model is proposed which offers both representation learning and temporal inference of class variables by rolling Restricted Boltzmann Machines (RBMs) and class variables over time. We address the key issue of intractability in this variant of RBMs by optimising a conditional distribution instead of a joint distribution. Experiments reported in the paper on melody modelling and optical character recognition show that the proposed model can outperform the state of the art. Also, the experimental results on optical character recognition, part-of-speech tagging and text chunking demonstrate that our model is comparable to recurrent neural networks with complex memory gates while requiring far fewer parameters.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A central $U(1)$-extension of a double Lie groupoid
In this paper, we introduce a notion of a central $U(1)$-extension of a double Lie groupoid and show that it defines a cocycle in a certain triple complex.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Common fixed point theorems under an implicit contractive condition on metric spaces endowed with an arbitrary binary relation and an application
The aim of this paper is to establish some metrical coincidence and common fixed point theorems with an arbitrary binary relation, under an implicit contractive condition which is general enough to cover a multitude of well-known contraction conditions in one go, besides yielding several new ones. We also provide an example to demonstrate the generality of our results over several well-known corresponding results in the existing literature. Finally, we utilize our results to prove an existence theorem ensuring the solution of an integral equation.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The NIEP
The nonnegative inverse eigenvalue problem (NIEP) asks which lists of $n$ complex numbers (counting multiplicity) occur as the eigenvalues of some $n$-by-$n$ entry-wise nonnegative matrix. The NIEP has a long history and is a known hard (perhaps the hardest in matrix analysis?) and much sought-after problem. Thus, there are many subproblems and relevant results in a variety of directions. We survey most work on the problem and its several variants, with an emphasis on recent results, and include 130 references. The survey is divided into: a) the single eigenvalue problems; b) necessary conditions; c) low dimensional results; d) sufficient conditions; e) appending 0's to achieve realizability; f) the graph NIEPs; g) Perron similarities; and h) the relevance of Jordan structure.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Operationalizing Conflict and Cooperation between Automated Software Agents in Wikipedia: A Replication and Expansion of 'Even Good Bots Fight'
This paper replicates, extends, and refutes conclusions made in a study published in PLoS ONE ("Even Good Bots Fight"), which claimed to identify substantial levels of conflict between automated software agents (or bots) in Wikipedia using purely quantitative methods. By applying an integrative mixed-methods approach drawing on trace ethnography, we place these alleged cases of bot-bot conflict into context and arrive at a better understanding of these interactions. We found that overwhelmingly, the interactions previously characterized as problematic instances of conflict are typically better characterized as routine, productive, even collaborative work. These results challenge past work and show the importance of qualitative/quantitative collaboration. In our paper, we present quantitative metrics and qualitative heuristics for operationalizing bot-bot conflict. We give thick descriptions of kinds of events that present as bot-bot reverts, helping distinguish conflict from non-conflict. We computationally classify these kinds of events through patterns in edit summaries. By interpreting found/trace data in the socio-technical contexts in which people give that data meaning, we gain more from quantitative measurements, drawing deeper understandings about the governance of algorithmic systems in Wikipedia. We have also released our data collection, processing, and analysis pipeline, to facilitate computational reproducibility of our findings and to help other researchers interested in conducting similar mixed-method scholarship in other platforms and contexts.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Synthesizing Bijective Lenses
Bidirectional transformations between different data representations occur frequently in modern software systems. They appear as serializers and deserializers, as database views and view updaters, and more. Manually building bidirectional transformations---by writing two separate functions that are intended to be inverses---is tedious and error-prone. A better approach is to use a domain-specific language in which both directions can be written as a single expression. However, these domain-specific languages can be difficult to program in, requiring programmers to manage fiddly details while working in a complex type system. To solve this, we present Optician, a tool for type-directed synthesis of bijective string transformers. The inputs to Optician are two ordinary regular expressions representing two data formats and a few concrete examples for disambiguation. The output is a well-typed program in Boomerang (a bidirectional language based on the theory of lenses). The main technical challenge involves navigating the vast program search space efficiently enough. Unlike most prior work on type-directed synthesis, our system operates in the context of a language with a rich equivalence relation on types (the theory of regular expressions). We synthesize terms of an equivalent language and convert those generated terms into our lens language. We prove the correctness of our synthesis algorithm. We also demonstrate empirically that our new language changes the synthesis problem from one that admits intractable solutions to one that admits highly efficient solutions. We evaluate Optician on a benchmark suite of 39 examples including both microbenchmarks and realistic examples derived from other data management systems including Flash Fill, a tool for synthesizing string transformations in spreadsheets, and Augeas, a tool for bidirectional processing of Linux system configuration files.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Tuning the piezoelectric and mechanical properties of the AlN system via alloying with YN and BN
Recent advances in microelectromechanical systems often require multifunctional materials, which are designed so as to optimize more than one property. Using density functional theory calculations for alloyed nitride systems, we illustrate how co-alloying a piezoelectric material (AlN) with different nitrides helps tune both its piezoelectric and mechanical properties simultaneously. Wurtzite AlN-YN alloys display increased piezoelectric response with YN concentration, accompanied by mechanical softening along the crystallographic c direction. Both effects increase the electromechanical coupling coefficients relevant for transducers and actuators. Resonator applications, however, require superior stiffness, thus leading to the need to decouple the increased piezoelectric response from a softened lattice. We show that co-alloying of AlN with YN and BN results in improved elastic properties while retaining most of the piezoelectric enhancements from YN alloying. This finding may lead to new avenues for tuning the design properties of piezoelectrics through composition-property maps. Keywords: piezoelectricity, electromechanical coupling, density functional theory, co-alloying
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Embedding for bulk systems using localized atomic orbitals
We present an embedding approach for semiconductors and insulators based on orbital rotations in the space of occupied Kohn-Sham orbitals. We have implemented our approach in the popular VASP software package. We demonstrate its power for defect structures in silicon and polaron formation in titania, two challenging cases for conventional Kohn-Sham density functional theory.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Simple Surveys: Response Retrieval Inspired by Recommendation Systems
In the last decade, the use of simple rating and comparison surveys has proliferated on social and digital media platforms to fuel recommendations. These simple surveys and their extrapolation with machine learning algorithms shed light on user preferences over large and growing pools of items, such as movies, songs and ads. Social scientists have a long history of measuring perceptions, preferences and opinions, often over smaller, discrete item sets with exhaustive rating or ranking surveys. This paper introduces simple surveys for social science application. We ran experiments to compare the predictive accuracy of both individual and aggregate comparative assessments using four types of simple surveys: pairwise comparisons and ratings on 2, 5 and continuous point scales in three distinct contexts: perceived Safety of Google Streetview Images, Likeability of Artwork, and Hilarity of Animal GIFs. Across contexts, we find that continuous scale ratings best predict individual assessments but consume the most time and cognitive effort. Binary choice surveys are quick and perform best to predict aggregate assessments, useful for collective decision tasks, but poorly predict personalized preferences, for which they are currently used by Netflix to recommend movies. Pairwise comparisons, by contrast, perform well to predict personal assessments, but poorly predict aggregate assessments despite being widely used to crowdsource ideas and collective preferences. We demonstrate how findings from these surveys can be visualized in a low-dimensional space that reveals distinct respondent interpretations of questions asked in each context. We conclude by reflecting on differences between sparse, incomplete simple surveys and their traditional survey counterparts in terms of efficiency, information elicited and settings in which knowing less about more may be critical for social science.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
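One plausible way (not necessarily the authors' pipeline) to extrapolate per-item scores from pairwise-comparison responses like those above is a Bradley-Terry model; a minimal fit via the classic MM update on made-up win counts:

```python
import numpy as np

# wins[i, j] = number of times item i was preferred over item j
wins = np.array([[0, 8, 6],
                 [2, 0, 5],
                 [4, 5, 0]], dtype=float)

n = wins.shape[0]
p = np.ones(n)                 # Bradley-Terry strength parameters
games = wins + wins.T          # comparisons per pair

for _ in range(200):           # MM updates (Hunter 2004)
    denom = (games / (p[:, None] + p[None, :])).sum(axis=1)
    p = wins.sum(axis=1) / denom
    p /= p.sum()               # fix the arbitrary scale

print("estimated preference scores:", np.round(p, 3))
```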
A Reduction for the Distinct Distances Problem in ${\mathbb R}^d$
We introduce a reduction from the distinct distances problem in ${\mathbb R}^d$ to an incidence problem with $(d-1)$-flats in ${\mathbb R}^{2d-1}$. Deriving the conjectured bound for this incidence problem (the bound predicted by the polynomial partitioning technique) would lead to a tight bound for the distinct distances problem in ${\mathbb R}^d$. The reduction provides a large amount of information about the $(d-1)$-flats, and a framework for deriving more restrictions that these satisfy. Our reduction is based on introducing a Lie group that is a double cover of the special Euclidean group. This group can be seen as a variant of the Spin group, and a large part of our analysis involves studying its properties.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Local electronic properties of the graphene-protected giant Rashba-split BiAg$_2$ surface
We report the preparation of the interface between graphene and the strong Rashba-split BiAg$_2$ surface alloy and an investigation of its structure as well as its electronic properties by means of scanning tunneling microscopy/spectroscopy and density functional theory calculations. Upon evaluation of the quasiparticle interference patterns, the unperturbed linear dispersion of the $\pi$ band of $n$-doped graphene is observed. Our results also reveal the intact nature of the giant Rashba-split surface states of the BiAg$_2$ alloy, which demonstrate only a moderate downward energy shift in the presence of graphene. This effect is explained in the framework of density functional theory by an inward relaxation of the Bi atoms at the interface and a subsequent delocalisation of the wave function of the surface states. Our findings demonstrate a realistic pathway to prepare a graphene-protected giant Rashba-split BiAg$_2$ surface for possible spintronic applications.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Particle trapping and conveying using an optical Archimedes' screw
Trapping and manipulation of particles using laser beams has become an important tool in diverse fields of research. In recent years, particular interest has been given to the problem of conveying optically trapped particles over extended distances, either downstream or upstream of the direction of the photon momentum flow. Here, we propose and demonstrate experimentally an optical analogue of the famous Archimedes' screw, where the rotation of a helical-intensity beam is transferred to the axial motion of optically trapped, micrometer-scale, airborne, carbon-based particles. With this optical screw, particles were easily conveyed with controlled velocity and direction, upstream or downstream of the optical flow, over a distance of half a centimeter. Our results offer a very simple optical conveyor that could be adapted to a wide range of optical trapping scenarios.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Scalable Inference for Nested Chinese Restaurant Process Topic Models
Nested Chinese Restaurant Process (nCRP) topic models are powerful nonparametric Bayesian methods to extract a topic hierarchy from a given text corpus, where the hierarchical structure is automatically determined by the data. Hierarchical Latent Dirichlet Allocation (hLDA) is a popular instance of nCRP topic models. However, hLDA has only been evaluated at small scale, because the existing collapsed Gibbs sampling and instantiated weight variational inference algorithms either are not scalable or sacrifice inference quality with mean-field assumptions. Moreover, an efficient distributed implementation of the data structures, such as dynamically growing count matrices and trees, is challenging. In this paper, we propose a novel partially collapsed Gibbs sampling (PCGS) algorithm, which combines the advantages of collapsed and instantiated weight algorithms to achieve good scalability as well as high model quality. An initialization strategy is presented to further improve the model quality. Finally, we propose an efficient distributed implementation of PCGS through vectorization, pre-processing, and a careful design of the concurrent data structures and communication strategy. Empirical studies show that our algorithm is 111 times more efficient than the previous open-source implementation for hLDA, with comparable or even better model quality. Our distributed implementation can extract 1,722 topics from a 131-million-document corpus with 28 billion tokens, which is 4-5 orders of magnitude larger than the previous largest corpus, with 50 machines in 7 hours.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Equations of $\,\overline{M}_{0,n}$
Following work of Keel and Tevelev, we give explicit polynomials in the Cox ring of $\mathbb{P}^1\times\cdots\times\mathbb{P}^{n-3}$ that, conjecturally, determine $\overline{M}_{0,n}$ as a subscheme. Using Macaulay2, we prove that these equations generate the ideal for $n=5, 6, 7, 8$. For $n \leq 6$ we give a cohomological proof that these polynomials realize $\overline{M}_{0,n}$ as a projective variety, embedded in $\mathbb{P}^{(n-2)!-1}$ by the complete log canonical linear system.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Lattice Boltzmann simulation of viscous fingering of immiscible displacement in a channel using an improved wetting scheme
An improved wetting boundary implementation strategy based on a lattice Boltzmann color-gradient model is proposed in this paper. In this strategy, an extra interface force condition is derived based on the diffuse interface assumption and is employed in the contact line region. It has been validated on three benchmark problems: static droplet wetting on a flat surface and on a curved surface, and dynamic capillary filling. Good performance is shown in all three cases. Building on this strict validation of our scheme, the viscous fingering phenomenon of immiscible fluid displacement in a two-dimensional channel is restudied in this paper. High viscosity ratios, a wide range of contact angles, accurate moving contact lines and mutual independence between surface tension and viscosity are the obvious advantages of our model. We find a linear relationship between the contact angle and the displacement velocity or the variation of the finger length. When the viscosity ratio is smaller than 20, the displacement velocity increases with increasing viscosity ratio and decreasing capillary number; when the viscosity ratio is larger than 20, the displacement velocity tends to a specific constant. A similar conclusion is obtained for the variation of the finger length.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Implementation of Control Strategies for Sterile Insect Techniques
In this paper, we propose a sex-structured entomological model that serves as a basis for the design of control strategies relying on releases of sterile male mosquitoes (Aedes spp) and aiming at the elimination of the wild vector population in some target locality. We consider different types of releases (constant and periodic impulsive), providing necessary conditions to reach elimination. However, the main part of the paper is focused on the study of periodic impulsive control in different situations. When the size of the wild mosquito population cannot be assessed in real time, we propose the so-called open-loop control strategy that relies on periodic impulsive releases of sterile males with constant release size. Under this control mode, global convergence towards the mosquito-free equilibrium is proved on the grounds of a sufficient condition that relates the size and frequency of releases. If periodic assessments (either synchronized with releases or more sparse) of the wild population size are available in real time, we propose the so-called closed-loop control strategy, which is adjustable in accordance with reliable estimations of the wild population sizes. Under this control mode, global convergence to the mosquito-free equilibrium is proved on the grounds of another sufficient condition that relates not only the size and frequency of periodic releases but also the frequency of the sparse measurements taken on the wild populations. Finally, we propose a mixed control strategy that combines the open-loop and closed-loop strategies. This control mode renders the best result in terms of the overall time needed to reach elimination and the number of releases to be effectively carried out during the whole release campaign, while requiring a reasonable amount of released sterile insects.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
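A toy illustration of the open-loop mode described above: periodic impulsive releases of a fixed number of sterile males in a deliberately simplified wild-population model. The dynamics and parameter values are illustrative assumptions, far cruder than the paper's sex-structured system.

```python
# Toy open-loop sterile insect technique: wild males W, sterile males S.
r, mu_w, mu_s = 0.4, 0.1, 0.2          # per-day rates (illustrative)
tau, release = 7.0, 800.0              # release period (days) and size
dt, T = 0.01, 365.0

W, S = 100.0, 0.0
steps_per_release = int(round(tau / dt))
for step in range(int(T / dt)):
    if step % steps_per_release == 0:
        S += release                   # impulsive release of sterile males
    mating = W / (W + S + 1e-12)       # chance a mating is with a fertile male
    W += (r * W * mating - mu_w * W) * dt
    S += -mu_s * S * dt

print(f"wild males after {T:.0f} days: {W:.3f} (elimination if ~0)")
```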
Convergence and submeasures in Boolean algebras
A Boolean algebra carries a strictly positive exhaustive submeasure if and only if it has a sequential topology that is uniformly Fréchet.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Characterizing a CCD detector for astronomical purposes: OAUNI Project
This work verifies the instrumental characteristics of the CCD detector which is part of the UNI astronomical observatory. We measured the linearity of the CCD detector of the SBIG STXL6303E camera, along with its associated gain and readout noise. The detector's response to incident light is highly linear (R^2 = 99.99%), its effective gain is 1.65 +/- 0.01 e-/ADU and its readout noise is 12.2 e-. These values are in agreement with the manufacturer's specifications. We confirm that this detector is sufficiently precise for astronomical measurements.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
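For context, gain and readout-noise figures like those quoted above are commonly obtained with the standard photon-transfer method from two flat-field and two bias frames; a sketch on synthetic frames (frame sizes and light levels are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
true_gain, rn_e = 1.65, 12.2            # e-/ADU and e- (values to recover)

def frame(mean_e):                      # synthetic frame, in ADU
    shot = rng.poisson(mean_e, (512, 512))        # photon shot noise (e-)
    read = rng.normal(0.0, rn_e, (512, 512))      # readout noise (e-)
    return (shot + read) / true_gain

flat1, flat2 = frame(20_000), frame(20_000)
bias1, bias2 = frame(0), frame(0)

# Photon transfer: gain = (signal means) / (shot-noise variance), in ADU
var_df = np.var(flat1 - flat2)          # 2*(shot + read) variance
var_db = np.var(bias1 - bias2)          # 2*read variance
gain = (flat1.mean() + flat2.mean() - bias1.mean() - bias2.mean()) / (var_df - var_db)
read_noise = gain * np.std(bias1 - bias2) / np.sqrt(2)     # back to electrons

print(f"gain ~ {gain:.3f} e-/ADU, readout noise ~ {read_noise:.2f} e-")
```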
Genetic algorithm-based control of birefringent filtering for self-tuning, self-pulsing fiber lasers
Polarization-based filtering in fiber lasers is well-known to enable spectral tunability and a wide range of dynamical operating states. This effect is rarely exploited in practical systems, however, because optimization of cavity parameters is non-trivial and evolves due to environmental sensitivity. Here, we report a genetic algorithm-based approach, utilizing electronic control of the cavity transfer function, to autonomously achieve broad wavelength tuning and the generation of Q-switched pulses with variable repetition rate and duration. The practicalities and limitations of simultaneous spectral and temporal self-tuning from a simple fiber laser are discussed, paving the way to on-demand laser properties through algorithmic control and machine learning schemes.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
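A bare-bones genetic algorithm of the kind described above, maximizing a stand-in fitness over four controller voltages; the fitness function and the 4-parameter encoding are placeholders for a real laser diagnostic and electronic polarization controller.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(v):
    # Placeholder for a measured laser diagnostic (e.g. power at the target
    # wavelength); a real system would query the hardware here instead.
    return -np.sum((v - np.array([0.2, 1.4, 2.7, 0.9]))**2)

pop = rng.uniform(0, 5, size=(30, 4))        # 30 candidate voltage settings
for generation in range(60):
    scores = np.array([fitness(v) for v in pop])
    elite = pop[np.argsort(scores)[-10:]]    # keep the fittest third
    parents = elite[rng.integers(0, 10, size=(20, 2))]
    mask = rng.random((20, 4)) < 0.5         # uniform crossover
    children = np.where(mask, parents[:, 0], parents[:, 1])
    children += rng.normal(0.0, 0.1, children.shape)   # mutation
    pop = np.vstack([elite, np.clip(children, 0, 5)])

best = max(pop, key=fitness)
print("best controller voltages:", np.round(best, 2))
```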
Public discourse and news consumption on online social media: A quantitative, cross-platform analysis of the Italian Referendum
The rising attention to the spreading of fake news and unsubstantiated rumors on online social media and the pivotal role played by confirmation bias have led researchers to investigate different aspects of the phenomenon. Experimental evidence has shown that confirmatory information gets accepted even when it contains deliberately false claims, while dissenting information is mainly ignored or might even increase group polarization. It seems reasonable that, to address the misinformation problem properly, we have to understand the main determinants behind content consumption and the emergence of narratives on online social media. In this paper we address such a challenge by focusing on the discussion around the Italian Constitutional Referendum, conducting a quantitative, cross-platform analysis on both Facebook public pages and Twitter accounts. We observe the spontaneous emergence of well-separated communities on both platforms. Such segregation is completely spontaneous, since no categorization of contents was performed a priori. By exploring the dynamics behind the discussion, we find that users tend to restrict their attention to a specific set of Facebook pages/Twitter accounts. Finally, taking advantage of automatic topic extraction and sentiment analysis techniques, we are able to identify the most controversial topics inside and across both platforms. We measure the distance between how a certain topic is presented in the posts/tweets and the related emotional response of users. Our results provide interesting insights for the understanding of the evolution of the core narratives behind different echo chambers and for the early detection of massive viral phenomena around false claims.
labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Testing small scale gravitational wave detectors with dynamical mass distributions
The recent discovery of gravitational waves by the LIGO-Virgo collaboration has created renewed interest in the investigation of alternative gravitational wave detector designs, such as small scale resonant detectors. In this article, it is shown how proposed small scale detectors can be tested by generating dynamical gravitational fields with appropriate distributions of moving masses. A series of interesting experiments will be possible with this setup. In particular, small scale detectors can be tested very early in the development phase, and these tests can be used to progress quickly in their development. This could contribute to the emerging field of gravitational wave astronomy.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
AIDE: An algorithm for measuring the accuracy of probabilistic inference algorithms
Approximate probabilistic inference algorithms are central to many fields. Examples include sequential Monte Carlo inference in robotics, variational inference in machine learning, and Markov chain Monte Carlo inference in statistics. A key problem faced by practitioners is measuring the accuracy of an approximate inference algorithm on a specific data set. This paper introduces the auxiliary inference divergence estimator (AIDE), an algorithm for measuring the accuracy of approximate inference algorithms. AIDE is based on the observation that inference algorithms can be treated as probabilistic models and the random variables used within the inference algorithm can be viewed as auxiliary variables. This view leads to a new estimator for the symmetric KL divergence between the approximating distributions of two inference algorithms. The paper illustrates application of AIDE to algorithms for inference in regression, hidden Markov, and Dirichlet process mixture models. The experiments show that AIDE captures the qualitative behavior of a broad class of inference algorithms and can detect failure modes of inference algorithms that are missed by standard heuristics.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
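AIDE targets the symmetric KL divergence between the output distributions of two inference algorithms. As a point of reference only, here is a plain Monte Carlo estimate of that quantity when both densities can be evaluated; AIDE's contribution is precisely that it avoids this requirement via auxiliary variables.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Two "inference algorithms" caricatured as Gaussian approximations of
# the same posterior: p = N(0, 1), q = N(0.5, 1.5).
p, q = norm(0.0, 1.0), norm(0.5, 1.5)
xp = p.rvs(size=100_000, random_state=rng)
xq = q.rvs(size=100_000, random_state=rng)

# Symmetric KL = E_p[log p - log q] + E_q[log q - log p]
sym_kl = np.mean(p.logpdf(xp) - q.logpdf(xp)) + np.mean(q.logpdf(xq) - p.logpdf(xq))
print(f"symmetric KL estimate: {sym_kl:.4f}")   # analytic value ~0.53 here
```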
Mining within-trial oscillatory brain dynamics to address the variability of optimized spatial filters
Data-driven spatial filtering algorithms optimize scores such as the contrast between two conditions to extract oscillatory brain signal components. Most machine learning approaches for filter estimation, however, disregard within-trial temporal dynamics and are extremely sensitive to changes in training data and involved hyperparameters. This leads to highly variable solutions and impedes the selection of a suitable candidate for, e.g., neurotechnological applications. Fostering component introspection, we propose to embrace this variability by condensing the functional signatures of a large set of oscillatory components into homogeneous clusters, each representing specific within-trial envelope dynamics. The proposed method is exemplified by and evaluated on a complex hand force task with a rich within-trial structure. Based on electroencephalography data of 18 healthy subjects, we found that the components' distinct temporal envelope dynamics are highly subject-specific. On average, we obtained seven clusters per subject, which were strictly confined regarding their underlying frequency bands. As the analysis method is not limited to a specific spatial filtering algorithm, it could be utilized for a wide range of neurotechnological applications, e.g., to select and monitor functionally relevant features for brain-computer interface protocols in stroke rehabilitation.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=1, quantitative finance=0
Neural Architecture Search with Bayesian Optimisation and Optimal Transport
Bayesian Optimisation (BO) refers to a class of methods for global optimisation of a function $f$ which is only accessible via point evaluations. It is typically used in settings where $f$ is expensive to evaluate. A common use case for BO in machine learning is model selection, where it is not possible to analytically model the generalisation performance of a statistical model, and we resort to noisy and expensive training and validation procedures to choose the best model. Conventional BO methods have focused on Euclidean and categorical domains, which, in the context of model selection, only permits tuning scalar hyper-parameters of machine learning algorithms. However, with the surge of interest in deep learning, there is an increasing demand to tune neural network \emph{architectures}. In this work, we develop NASBOT, a Gaussian process based BO framework for neural architecture search. To accomplish this, we develop a distance metric in the space of neural network architectures which can be computed efficiently via an optimal transport program. This distance might be of independent interest to the deep learning community as it may find applications outside of BO. We demonstrate that NASBOT outperforms other alternatives for architecture search in several cross validation based model selection tasks on multi-layer perceptrons and convolutional neural networks.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
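A toy stand-in for the transport-style architecture distance described above: matching layers of two networks by an optimal assignment over simple layer descriptors. This drastically simplifies the paper's actual OT program; the descriptors are our own assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def descriptors(widths):
    """Describe each layer by (relative depth, log width)."""
    d = len(widths)
    return np.array([[i / max(d - 1, 1), np.log(w)]
                     for i, w in enumerate(widths)])

def arch_distance(widths_a, widths_b):
    A, B = descriptors(widths_a), descriptors(widths_b)
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)   # optimal layer matching
    return cost[rows, cols].sum()

mlp1 = [64, 64, 32]        # hidden-layer widths of two toy MLPs
mlp2 = [128, 64, 32]
print(f"toy architecture distance: {arch_distance(mlp1, mlp2):.3f}")
```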
An improved parametric model for hysteresis loop approximation
A number of improvements have been added to the existing analytical model of a hysteresis loop defined in parametric form. In particular, three phase shifts are included in the model, which permits tilting the hysteresis loop smoothly by the required angle at the split point, as well as smoothly changing the curvature of the loop. As a result, the error of approximation of a hysteresis loop by the improved model does not exceed 1%, which is several times less than the error of the existing model. The improved model is capable of approximating most of the known types of rate-independent symmetrical hysteresis loops encountered in the practice of physical measurements. The model allows building smooth, piecewise-linear, hybrid, minor, mirror-reflected, inverse, reverse, double and triple loops. One of the possible applications of the model developed is the linearization of a probe microscope piezoscanner. The improved model can be found useful for simulating scientific instruments that contain hysteresis elements.
labels: cs=1, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
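A minimal parametric loop in the spirit described above: a phase shift between the x and y harmonics opens the loop, a linear term tilts it, and an odd power adjusts the curvature of the branches. The functional form is illustrative, not the paper's exact model.

```python
import numpy as np

def hysteresis_loop(amp=1.0, tilt=0.6, phase=0.35, curvature=3.0, n=2000):
    """Parametric loop: `phase` opens the loop, `tilt` inclines it at the
    split points, and the odd power `curvature` reshapes the branches."""
    t = np.linspace(0.0, 2*np.pi, n)
    x = amp * np.cos(t)
    s = np.sin(t + phase)
    y = tilt * x + np.sign(s) * np.abs(s)**curvature
    return x, y

x, y = hysteresis_loop()
# Enclosed area (shoelace formula) ~ energy lost per cycle
area = 0.5 * np.abs(np.sum(x[:-1]*np.diff(y) - y[:-1]*np.diff(x)))
print(f"loop area: {area:.3f}")
```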
Kidnapping Model: An Extension of Selten's Game
Selten's game is a kidnapping model where the probability of capturing the kidnapper is independent of whether the hostage has been released or executed. Most often, in view of the elevated sensitivities involved, authorities put greater effort and resources into capturing the kidnapper if the hostage has been executed, in contrast to the case when a ransom is paid to secure the hostage's release. In this paper, we study the asymmetric game when the probability of capturing the kidnapper depends on whether the hostage has been executed or not and find a new uniquely determined perfect equilibrium point in Selten's game.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
To prune, or not to prune: exploring the efficacy of pruning for model compression
Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model. Recent reports (Han et al., 2015; Narang et al., 2017) prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size. This hints at the possibility that the baseline models in these experiments are perhaps severely over-parameterized at the outset and a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model's dense connection structure, exposing a similar trade-off in model size and accuracy. We investigate these two distinct paths for model compression within the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models/datasets with minimal tuning and can be seamlessly incorporated within the training process. We compare the accuracy of large, but pruned models (large-sparse) and their smaller, but dense (small-dense) counterparts with identical memory footprint. Across a broad range of neural network architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find large-sparse models to consistently outperform small-dense models and achieve up to 10x reduction in number of non-zero parameters with minimal loss in accuracy.
labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
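The gradual pruning technique in the paper ramps sparsity from an initial to a final value along a cubic curve during training; a sketch of that schedule together with magnitude-based masking (variable names and the toy weight matrix are ours):

```python
import numpy as np

def target_sparsity(step, s_init=0.0, s_final=0.9, t0=0, n_steps=10_000):
    """Cubic sparsity ramp of the gradual pruning schedule."""
    frac = np.clip((step - t0) / n_steps, 0.0, 1.0)
    return s_final + (s_init - s_final) * (1.0 - frac)**3

def magnitude_mask(weights, sparsity):
    """Keep only the largest-magnitude (1 - sparsity) fraction of weights."""
    k = int(sparsity * weights.size)
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.abs(weights) > threshold

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))
for step in (0, 2_500, 5_000, 10_000):
    mask = magnitude_mask(W, target_sparsity(step))
    print(f"step {step:>6}: target {target_sparsity(step):.3f}, "
          f"actual {1 - mask.mean():.3f}")
```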
Connecting Software Metrics across Versions to Predict Defects
Accurate software defect prediction could help software practitioners allocate test resources to defect-prone modules effectively and efficiently. In the last decades, much effort has been devoted to building accurate defect prediction models, including developing quality defect predictors and modeling techniques. However, current widely used defect predictors such as code metrics and process metrics do not describe well how software modules change over the project's evolution, which we believe is important for defect prediction. To deal with this problem, in this paper we propose to use the Historical Version Sequence of Metrics (HVSM) of continuous software versions as defect predictors. Furthermore, we leverage Recurrent Neural Networks (RNNs), a popular modeling technique, to take HVSM as input to build software prediction models. The experimental results show that, in most cases, the proposed HVSM-based RNN model has a significantly better effort-aware ranking effectiveness than the commonly used baseline models.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Semiparametric panel data models using neural networks
This paper presents an estimator for semiparametric models that uses a feed-forward neural network to fit the nonparametric component. Unlike many methodologies from the machine learning literature, this approach is suitable for longitudinal/panel data. It provides unbiased estimation of the parametric component of the model, with associated confidence intervals that have near-nominal coverage rates. Simulations demonstrate (1) efficiency, (2) that parametric estimates are unbiased, and (3) coverage properties of estimated intervals. An application section demonstrates the method by predicting county-level corn yield using daily weather data from the period 1981-2015, along with parametric time trends representing technological change. The method is shown to out-perform linear methods such as OLS and ridge/lasso, as well as random forest. The procedures described in this paper are implemented in the R package panelNNET.
labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Split and Rephrase
We propose a new sentence simplification task (Split-and-Rephrase) where the aim is to split a complex sentence into a meaning-preserving sequence of shorter sentences. Like sentence simplification, splitting-and-rephrasing has the potential to benefit both natural language processing and societal applications. Because shorter sentences are generally better processed by NLP systems, it could be used as a preprocessing step which facilitates and improves the performance of parsers, semantic role labellers and machine translation systems. It should also be of use for people with reading disabilities because it allows the conversion of longer sentences into shorter ones. This paper makes two contributions towards this new task. First, we create and make available a benchmark consisting of 1,066,115 tuples mapping a single complex sentence to a sequence of sentences expressing the same meaning. Second, we propose five models (from vanilla sequence-to-sequence to semantically motivated models) to understand the difficulty of the proposed task.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Constructive Euler hydrodynamics for one-dimensional attractive particle systems
We review a (constructive) approach first introduced in [6] and further developed in [7, 8, 38, 9] for hydrodynamic limits of asymmetric attractive particle systems, in a weak or in a strong (that is, almost sure) sense, in a homogeneous or in a quenched disordered setting.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Cyber Insurance for Heterogeneous Wireless Networks
Heterogeneous wireless networks (HWNs) composed of densely deployed base stations of different types with various radio access technologies have become a prevailing trend to accommodate ever-increasing traffic demand in enormous volume. Nowadays, users rely heavily on HWNs for ubiquitous network access that contains valuable and critical information such as financial transactions, e-health, and public safety. Cyber risks, representing one of the most significant threats to network security and reliability, are increasing in severity. To address this problem, this article introduces the concept of cyber insurance to transfer the cyber risk (i.e., service outage, as a consequence of cyber risks in HWNs) to a third party insurer. Firstly, a review of the enabling technologies for HWNs and their vulnerabilities to cyber risks is presented. Then, the fundamentals of cyber insurance are introduced, and subsequently, a cyber insurance framework for HWNs is presented. Finally, open issues are discussed and the challenges are highlighted for integrating cyber insurance as a service of next generation HWNs.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Combining Contrast Invariant L1 Data Fidelities with Nonlinear Spectral Image Decomposition
This paper focuses on multi-scale approaches for variational methods and corresponding gradient flows. Recently, for convex regularization functionals such as total variation, new theory and algorithms for nonlinear eigenvalue problems via nonlinear spectral decompositions have been developed. Those methods open new directions for advanced image filtering. However, for an effective use in image segmentation and shape decomposition, a clear interpretation of the spectral response regarding size and intensity scales is needed but lacking in current approaches. In this context, $L^1$ data fidelities are particularly helpful due to their interesting multi-scale properties such as contrast invariance. Hence, the novelty of this work is the combination of $L^1$-based multi-scale methods with nonlinear spectral decompositions. We compare $L^1$ with $L^2$ scale-space methods in view of spectral image representation and decomposition. We show that the contrast-invariant multi-scale behavior of $L^1$-TV promotes sparsity in the spectral response, providing more informative decompositions. We provide a numerical method and analyze synthetic and biomedical images for which the decomposition leads to improved segmentation.
labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
PaccMann: Prediction of anticancer compound sensitivity with multi-modal attention-based neural networks
We present a novel approach for the prediction of anticancer compound sensitivity by means of multi-modal attention-based neural networks (PaccMann). In our approach, we integrate three key pillars of drug sensitivity, namely, the molecular structure of compounds, transcriptomic profiles of cancer cells as well as prior knowledge about interactions among proteins within cells. Our models ingest a drug-cell pair, consisting of a SMILES encoding of a compound and the gene expression profile of a cancer cell, and predict an IC50 sensitivity value. Gene expression profiles are encoded using an attention-based encoding mechanism that assigns high weights to the most informative genes. We present and study three encoders for the SMILES string of compounds: 1) bidirectional recurrent, 2) convolutional and 3) attention-based encoders. We compare our devised models against a baseline model that ingests engineered fingerprints to represent the molecular structure. We demonstrate that using our attention-based encoders, we can surpass the baseline model. The use of attention-based encoders enhances interpretability and enables us to identify genes, bonds and atoms that were used by the network to make a prediction.
labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
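The gene-attention step reduces to a softmax over per-gene relevance scores that re-weights the expression profile before downstream layers; a shape-level sketch with toy dimensions and untrained weights, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes = 2048

x = rng.normal(size=n_genes)          # expression profile of one cell line
w = rng.normal(size=n_genes) * 0.01   # per-gene relevance scores (learnable)

scores = w * x                        # unnormalized attention per gene
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                  # softmax attention weights

encoded = alpha * x                   # re-weighted profile for later layers
top = np.argsort(alpha)[-5:][::-1]
print("most-attended gene indices:", top)
print("their attention weights:", np.round(alpha[top], 5))
```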
Evaporation and scattering of momentum- and velocity-dependent dark matter in the Sun
Dark matter with momentum- or velocity-dependent interactions with nuclei has shown significant promise for explaining the so-called Solar Abundance Problem, a longstanding discrepancy between solar spectroscopy and helioseismology. The best-fit models are all rather light, typically with masses in the range of 3-5 GeV. This is exactly the mass range where dark matter evaporation from the Sun can be important, but to date no detailed calculation of the evaporation of such models has been performed. Here we carry out this calculation, for the first time including arbitrary velocity- and momentum-dependent interactions, thermal effects, and a completely general treatment valid from the optically thin limit all the way through to the optically thick regime. We find that depending on the dark matter mass, interaction strength and type, the mass below which evaporation is relevant can vary from 1 to 4 GeV. This has the effect of weakening some of the better-fitting solutions to the Solar Abundance Problem, but also improving a number of others. As a by-product, we also provide an improved derivation of the capture rate that takes into account thermal and optical depth effects, allowing the standard result to be smoothly matched to the well-known saturation limit.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
ORSIm Detector: A Novel Object Detection Framework in Optical Remote Sensing Imagery Using Spatial-Frequency Channel Features
With the rapid development of spaceborne imaging techniques, object detection in optical remote sensing imagery has drawn much attention in recent decades. While many advanced works have been developed with powerful learning algorithms, incomplete feature representation still cannot meet the demand for effectively and efficiently handling image deformations, particularly object scaling and rotation. To this end, we propose a novel object detection framework, called the optical remote sensing imagery detector (ORSIm detector), integrating diverse channel feature extraction, feature learning, fast image pyramid matching, and a boosting strategy. The ORSIm detector adopts a novel spatial-frequency channel feature (SFCF) by jointly considering the rotation-invariant channel features constructed in the frequency domain and the original spatial channel features (e.g., color channel, gradient magnitude). Subsequently, we refine SFCF using a learning-based strategy in order to obtain high-level or semantically meaningful features. In the test phase, we achieve fast and coarsely-scaled channel computation by mathematically estimating a scaling factor in the image domain. Extensive experiments conducted on two different airborne datasets demonstrate the superiority and effectiveness of our framework in comparison with previous state-of-the-art methods.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Multi-Path Region-Based Convolutional Neural Network for Accurate Detection of Unconstrained "Hard Faces"
Large-scale variations still pose a challenge in unconstrained face detection. To the best of our knowledge, no current face detection algorithm can detect a face as large as 800 x 800 pixels while simultaneously detecting another one as small as 8 x 8 pixels within a single image with equally high accuracy. We propose a two-stage cascaded face detection framework, Multi-Path Region-based Convolutional Neural Network (MP-RCNN), that seamlessly combines a deep neural network with a classic learning strategy, to tackle this challenge. The first stage is a Multi-Path Region Proposal Network (MP-RPN) that proposes faces at three different scales. It simultaneously utilizes three parallel outputs of the convolutional feature maps to predict multi-scale candidate face regions. The "atrous" convolution trick (convolution with up-sampled filters) and a newly proposed sampling layer for "hard" examples are embedded in MP-RPN to further boost its performance. The second stage is a Boosted Forests classifier, which utilizes deep facial features pooled from inside the candidate face regions as well as deep contextual features pooled from a larger region surrounding the candidate face regions. This step is included to further remove hard negative samples. Experiments show that this approach achieves state-of-the-art face detection performance on the WIDER FACE dataset "hard" partition, outperforming the former best result by 9.6% for the Average Precision.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Naturally occurring $^{32}$Si and low-background silicon dark matter detectors
The naturally occurring radioisotope $^{32}$Si represents a potentially limiting background in future dark matter direct-detection experiments. We investigate sources of $^{32}$Si and the vectors by which it comes to reside in silicon crystals used for fabrication of radiation detectors. We infer that the $^{32}$Si concentration in commercial single-crystal silicon is likely variable, dependent upon the specific geologic and hydrologic history of the source (or sources) of silicon "ore" and the details of the silicon-refinement process. The silicon production industry is large, highly segmented by refining step, and multifaceted in terms of final product type, from which we conclude that production of $^{32}$Si-mitigated crystals requires both targeted silicon material selection and a dedicated refinement-through-crystal-production process. We review options for source material selection, including quartz from an underground source and silicon isotopically reduced in $^{32}$Si. To quantitatively evaluate the $^{32}$Si content in silicon metal and precursor materials, we propose analytic methods employing chemical processing and radiometric measurements. Ultimately, it appears feasible to produce silicon detectors with low levels of $^{32}$Si, though significant assay method development is required to validate this claim and thereby enable a quality assurance program during an actual controlled silicon-detector production cycle.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Synergies between Exoplanet Surveys and Variable Star Research
With the discovery of the first transiting extrasolar planetary system back in 1999, a great number of projects started to hunt for other similar systems. Because the incidence rate of such systems was unknown and the length of the shallow transit events is only a few percent of the orbital period, the goal was to continuously monitor as many stars as possible for at least a period of a few months. Small-aperture, large field-of-view automated telescope systems have been installed, with a parallel development of new data reduction and analysis methods, leading to better than 1% per-data-point precision for thousands of stars. With the successful launch of the photometric satellites CoRoT and Kepler, the precision increased further by one to two orders of magnitude. Millions of stars have been analyzed and searched for transits. In the history of variable star astronomy this is the biggest undertaking so far, resulting in photometric time series inventories immensely valuable for the whole field. In this review we briefly discuss the methods of data analysis that were inspired by the main science driver of these surveys and highlight some of the most interesting variable star results that impact the field of variable star astronomy.
labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Improved Set-based Symbolic Algorithms for Parity Games
Graph games with {\omega}-regular winning conditions provide a mathematical framework to analyze a wide range of problems in the analysis of reactive systems and programs (such as the synthesis of reactive systems, program repair, and the verification of branching time properties). Parity conditions are canonical forms to specify {\omega}-regular winning conditions. Graph games with parity conditions are equivalent to {\mu}-calculus model checking, and thus a very important algorithmic problem. Symbolic algorithms are of great significance because they provide scalable algorithms for the analysis of large finite-state systems, as well as algorithms for the analysis of infinite-state systems with finite quotient. A set-based symbolic algorithm uses the basic set operations and the one-step predecessor operators. We consider graph games with $n$ vertices and parity conditions with $c$ priorities. While many explicit algorithms exist for graph games with parity conditions, for set-based symbolic algorithms there are only two algorithms (notice that we use space to refer to the number of sets stored by a symbolic algorithm): (a) the basic algorithm that requires $O(n^c)$ symbolic operations and linear space; and (b) an improved algorithm that requires $O(n^{c/2+1})$ symbolic operations but also $O(n^{c/2+1})$ space (i.e., exponential space). In this work we present two set-based symbolic algorithms for parity games: (a) our first algorithm requires $O(n^{c/2+1})$ symbolic operations and only requires linear space; and (b) developing on our first algorithm, we present an algorithm that requires $O(n^{c/3+1})$ symbolic operations and only linear space. We also present the first linear space set-based symbolic algorithm for parity games that requires at most a sub-exponential number of symbolic operations.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
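The basic building block of such set-based algorithms is the one-step predecessor and attractor computation; an explicit-set Python version (a symbolic implementation would represent these sets with BDDs rather than enumerate vertices):

```python
# Player-0 attractor, the basic set operation behind set-based symbolic
# parity-game algorithms: player-0 vertices need one edge into the set,
# player-1 vertices need all of their edges into it.
def attractor(vertices, edges, owner, target):
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices - attr:
            succs = {w for (u, w) in edges if u == v}
            if (owner[v] == 0 and succs & attr) or \
               (owner[v] == 1 and succs and succs <= attr):
                attr.add(v)
                changed = True
    return attr

V = {0, 1, 2, 3}
E = {(0, 1), (1, 0), (1, 2), (2, 3), (3, 3)}
owner = {0: 0, 1: 1, 2: 0, 3: 0}
print("Attr_0({3}) =", attractor(V, E, owner, {3}))   # {2, 3}
```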
A Syllable-based Technique for Word Embeddings of Korean Words
Word embedding has become a fundamental component of many NLP tasks such as named entity recognition and machine translation. However, popular models that learn such embeddings are unaware of the morphology of words, so they are not directly applicable to highly agglutinative languages such as Korean. We propose a syllable-based learning model for Korean using a convolutional neural network, in which word representation is composed of trained syllable vectors. Our model successfully produces morphologically meaningful representations of Korean words compared to the original Skip-gram embeddings. The results also show that it is quite robust to the Out-of-Vocabulary problem.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
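Because each Hangul syllable block is a single Unicode code point (U+AC00 to U+D7A3), syllable segmentation is plain character iteration; a sketch composing word vectors from toy syllable vectors by mean pooling (the paper trains a CNN over syllable vectors instead):

```python
import numpy as np

DIM = 8
rng = np.random.default_rng(0)
syllable_vec = {}          # trained syllable embeddings in the real model

def vec(syllable):
    """Look up a syllable vector (lazily created here as a training stand-in)."""
    if syllable not in syllable_vec:
        syllable_vec[syllable] = rng.normal(size=DIM)
    return syllable_vec[syllable]

def word_vec(word):
    # Each Hangul syllable block (U+AC00..U+D7A3) is one code point,
    # so syllable segmentation is plain character iteration.
    syllables = [ch for ch in word if 0xAC00 <= ord(ch) <= 0xD7A3]
    return np.mean([vec(s) for s in syllables], axis=0)

for w in ["학교", "학생", "학교에서"]:   # any word decomposes: no OOV failure
    print(w, np.round(word_vec(w)[:3], 3))
```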
Existence of Evolutionarily Stable Strategies Remains Hard to Decide for a Wide Range of Payoff Values
The concept of an evolutionarily stable strategy (ESS), introduced by Smith and Price, is a refinement of Nash equilibrium in 2-player symmetric games in order to explain counter-intuitive natural phenomena, whose existence is not guaranteed in every game. The problem of deciding whether a game possesses an ESS has been shown to be $\Sigma_{2}^{P}$-complete by Conitzer using the preceding important work by Etessami and Lochbihler. The latter, among other results, proved that deciding the existence of ESS is both NP-hard and coNP-hard. In this paper we introduce a "reduction robustness" notion and we show that deciding the existence of an ESS remains coNP-hard for a wide range of games even if we arbitrarily perturb within some intervals the payoff values of the game under consideration. In contrast, ESS exist almost surely for large games with random and independent payoffs chosen from the same distribution.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Compound Poisson approximation to estimate the Lévy density
We construct an estimator of the Lévy density of a pure jump Lévy process, possibly of infinite variation, from the discrete observation of one trajectory at high frequency. The novelty of our procedure is that we directly estimate the Lévy density relying on a pathwise strategy, whereas existing procedures rely on spectral techniques. By taking advantage of a compound Poisson approximation of the Lévy density, we circumvent the use of spectral techniques and in particular of the Lévy-Khintchine formula. A linear wavelet estimator is built and its performance is studied in terms of $L_p$ loss functions, $p\geq 1$, over Besov balls. The resulting rates are minimax-optimal for a large class of Lévy processes. We discuss the robustness of the procedure to the presence of a Brownian part and to the estimation set getting close to the critical value 0.
labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
On the non-vanishing of certain Dirichlet series
Given $k\in\mathbb N$, we study the vanishing of the Dirichlet series $$D_k(s,f):=\sum_{n\geq1} d_k(n)f(n)n^{-s}$$ at the point $s=1$, where $f$ is a periodic function modulo a prime $p$. We show that if $(k,p-1)=1$ or $(k,p-1)=2$ and $p\equiv 3\mod 4$, then there are no odd rational-valued functions $f\not\equiv 0$ such that $D_k(1,f)=0$, whereas in all other cases there are examples of odd functions $f$ such that $D_k(1,f)=0$. As a consequence, we obtain, for example, that the set of values $L(1,\chi)^2$, where $\chi$ ranges over odd characters mod $p$, are linearly independent over $\mathbb Q$.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On discrete homology of a free pro-$p$-group
For a prime $p$, let $\hat F_p$ be a finitely generated free pro-$p$-group of rank $\geq 2$. We show that the second discrete homology group $H_2(\hat F_p,\mathbb Z/p)$ is an uncountable $\mathbb Z/p$-vector space. This answers a problem of A.K. Bousfield.
labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Conceptual Modeling of Inventory Management Processes as a Thinging Machine
A control model is typically classified into three forms: conceptual, mathematical and simulation (computer). This paper analyzes a conceptual modeling application with respect to an inventory management system. Today, most organizations utilize computer systems for inventory control that provide protection when interruptions or breakdowns occur within work processes. Modeling inventory processes is an active area of research that utilizes many diagrammatic techniques, including data flow diagrams, Unified Modeling Language (UML) diagrams and Integration DEFinition (IDEF). We claim that current conceptual modeling frameworks lack uniform notions and fail to appeal to designers and analysts. We propose modeling an inventory system as an abstract machine, called a Thinging Machine (TM), with five operations: creation, processing, receiving, releasing and transferring. The paper provides side-by-side contrasts of some existing examples of conceptual modeling methodologies applied to TM. Additionally, TM is applied in a case study of an actual inventory system that uses IBM Maximo. The resulting conceptual depictions point to the viability of TM as a valuable tool for developing a high-level representation of inventory processes.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Points2Pix: 3D Point-Cloud to Image Translation using conditional Generative Adversarial Networks
We present the first approach for 3D point-cloud to image translation based on conditional Generative Adversarial Networks (cGAN). The model handles multi-modal information sources from different domains, i.e. raw point-sets and images. The generator is capable of processing three conditions, where the point-cloud is encoded as a raw point-set and a camera projection. An image background patch is used as a constraint to bias environmental texturing. A global approximation function within the generator is directly applied on the point-cloud (PointNet). Hence, the representative learning model incorporates global 3D characteristics directly at the latent feature space. Conditions are used to bias the background and the viewpoint of the generated image. This opens up new ways of augmenting or texturing 3D data with the aim of generating fully individual images. We successfully evaluated our method on the KITTI and SunRGBD datasets with an outstanding object detection inception score.
labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
11 T Dipole for the Dispersion Suppressor Collimators
Chapter 11 in High-Luminosity Large Hadron Collider (HL-LHC) : Preliminary Design Report. The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor ten. The LHC is already a highly complex and exquisitely optimised machine so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconducting cavities for beam rotation with ultra-precise phase control, new technology and physical processes for beam collimation and 300 metre-long high-power superconducting links with negligible energy dissipation. The present document describes the technologies and components that will be used to realise the project and is intended to serve as the basis for the detailed engineering design of HL-LHC.
0
1
0
0
0
0
Spectral Analysis of Jet Substructure with Neural Networks: Boosted Higgs Case
Jets from boosted heavy particles have a typical angular scale which can be used to distinguish them from QCD jets. We introduce a machine learning strategy for jet substructure analysis using a spectral function on the angular scale. The angular spectrum allows us to scan energy deposits over the angle between a pair of particles in a highly visual way. We set up an artificial neural network (ANN) to identify characteristic shapes of the spectra of jets from heavy particle decays. Taking Higgs jets and QCD jets as examples, we show that the ANN with angular spectrum input has performance similar to existing taggers. In addition, some improvement is seen when additional radiation is present. Notably, the new algorithm automatically combines the information of the multi-point correlations in the jet.
0
0
0
1
0
0
UTD-CRSS Submission for MGB-3 Arabic Dialect Identification: Front-end and Back-end Advancements on Broadcast Speech
This study presents systems submitted by the University of Texas at Dallas, Center for Robust Speech Systems (UTD-CRSS) to the MGB-3 Arabic Dialect Identification (ADI) subtask. This task is defined to discriminate between five dialects of Arabic: Egyptian, Gulf, Levantine, North African, and Modern Standard Arabic. We develop multiple single systems with different front-end representations and back-end classifiers. At the front-end level, feature extraction methods such as Mel-frequency cepstral coefficients (MFCCs) and two types of bottleneck features (BNF) are studied for an i-Vector framework. At the back-end level, Gaussian back-end (GB) and Generative Adversarial Network (GAN) classifiers are applied as alternatives. The best submission (contrastive) achieves an accuracy of 76.94% on the ADI subtask by augmenting a randomly chosen part of the development dataset. Further, with a post-evaluation correction to the submitted system, the final accuracy increases to 79.76%, which represents the best performance achieved so far for the challenge on the test dataset.
1
0
0
0
0
0
Parametrization and Generation of Geological Models with Generative Adversarial Networks
One of the main challenges in the parametrization of geological models is the ability to capture complex geological structures often observed in subsurface fields. In recent years, Generative Adversarial Networks (GAN) were proposed as an efficient method for the generation and parametrization of complex data, showing state-of-the-art performance in challenging computer vision tasks such as reproducing natural images (handwritten digits, human faces, etc.). In this work, we study the application of Wasserstein GAN for the parametrization of geological models. The effectiveness of the method is assessed for uncertainty propagation tasks using several test cases involving different permeability patterns and subsurface flow problems. Results show that GANs are able to generate samples that preserve the multipoint statistical features of the geological models both visually and quantitatively. The generated samples reproduce both the geological structures and the flow properties of the reference data.
0
1
0
1
0
0
Finite element procedures for computing normals and mean curvature on triangulated surfaces and their use for mesh refinement
In this paper we consider finite element approaches to computing the mean curvature vector and normal at the vertices of piecewise linear triangulated surfaces. In particular, we adopt a stabilization technique which allows for first-order $L^2$-convergence of the mean curvature vector and apply this stabilization technique also to the computation of continuous, recovered normals using $L^2$-projections of the piecewise constant face normals. Finally, we use our projected normals to define an adaptive mesh refinement approach to geometry resolution, where we also employ spline techniques to reconstruct the surface before refinement. We compare our results to previously proposed approaches.
0
0
1
0
0
0
Basis Adaptive Sample Efficient Polynomial Chaos (BASE-PC)
For a large class of orthogonal basis functions, there has been a recent identification of expansion methods for computing accurate, stable approximations of a quantity of interest. This paper presents, within the context of uncertainty quantification, a practical implementation using basis adaptation and coherence-motivated sampling, which under certain assumptions has satisfying guarantees. This implementation is referred to as Basis Adaptive Sample Efficient Polynomial Chaos (BASE-PC). A key component is the use of anisotropic polynomial order, which admits evolving global bases for approximation in an efficient manner, leading to consistently stable approximation for a practical class of smooth functionals. This fully adaptive, non-intrusive method requires no a priori information about the solution, and has satisfying theoretical guarantees of recovery. A key contribution to stability is a correction sampling for coherence-optimal sampling that improves stability and accuracy within the adaptive basis scheme. Theoretically, the method may dramatically reduce the impact of dimensionality in function approximation, and numerically the method is demonstrated to perform well on problems with dimension up to 1000.
0
0
1
1
0
0
Maximum a posteriori estimation through simulated annealing for binary asteroid orbit determination
This paper considers a new method for the binary asteroid orbit determination problem. The method is based on the Bayesian approach with a global optimisation algorithm. The orbital parameters to be determined are modelled through an a posteriori distribution made of a priori and likelihood terms. The first term constrains the parameter space and allows the introduction of available knowledge about the orbit. The second term is based on the given observations and allows us to use and compare different observational error models. Once the a posteriori model is built, the estimator of the orbital parameters is computed using a global optimisation procedure: the simulated annealing algorithm. The maximum a posteriori (MAP) technique is verified using simulated and real data. The obtained results validate the proposed method. The new approach guarantees independence from the initial parameter estimate and theoretical convergence towards the globally optimal solution. It is particularly useful in situations where a good initial orbit estimate is difficult to obtain, where observations are not well-sampled, and where the statistical behaviour of the observational errors cannot be assumed to be Gaussian.
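A minimal sketch of the optimisation step described above: simulated annealing applied to a user-supplied log-posterior (log-prior plus log-likelihood). The function name, step size, and cooling schedule below are illustrative assumptions, not the paper's settings.

import numpy as np

def simulated_annealing(log_posterior, theta0, step=0.1, T0=1.0,
                        cooling=0.995, n_iter=20000, rng=None):
    # Maximise a log-posterior over orbital parameters (minimal sketch).
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    lp = log_posterior(theta)
    best, best_lp, T = theta.copy(), lp, T0
    for _ in range(n_iter):
        cand = theta + step * rng.standard_normal(theta.shape)
        lp_cand = log_posterior(cand)
        # Always accept uphill moves; accept downhill moves with
        # Boltzmann probability exp((lp_cand - lp) / T).
        if lp_cand > lp or rng.random() < np.exp((lp_cand - lp) / T):
            theta, lp = cand, lp_cand
            if lp > best_lp:
                best, best_lp = theta.copy(), lp
        T *= cooling  # geometric cooling towards greedy search
    return best, best_lp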
0
1
0
1
0
0
Extrapolating Expected Accuracies for Large Multi-Class Problems
The difficulty of multi-class classification generally increases with the number of classes. Using data from a subset of the classes, can we predict how well a classifier will scale with an increased number of classes? Under the assumptions that the classes are sampled identically and independently from a population, and that the classifier is based on independently learned scoring functions, we show that the expected accuracy when the classifier is trained on k classes is the (k-1)st moment of a certain distribution that can be estimated from data. We present an unbiased estimation method based on the theory, and demonstrate its application on a facial recognition example.
1
0
0
1
0
0
Deep Learning Methods for Efficient Large Scale Video Labeling
We present a solution to the "Google Cloud and YouTube-8M Video Understanding Challenge" that ranked 5th. The proposed model is an ensemble of three model families: two frame-level and one video-level. The training was performed on an augmented dataset, with cross-validation.
1
0
0
1
0
0
RSI-CB: A Large Scale Remote Sensing Image Classification Benchmark via Crowdsource Data
Remote sensing image classification is a fundamental task in remote sensing image processing. The remote sensing field still lacks a large-scale benchmark comparable to ImageNet and Places2. We propose a remote sensing image classification benchmark (RSI-CB) based on crowdsourced data which is massive, scalable, and diverse. Using crowdsourced data, we can efficiently annotate ground objects in remote sensing images via points of interest, vector data from OSM, and other crowdsourced sources. Based on this method, we construct a worldwide large-scale benchmark for remote sensing image classification. The benchmark contains two sub-datasets with image sizes of 256 * 256 and 128 * 128, respectively, since different convolutional neural networks require different image sizes. The former contains 6 categories with 35 subclasses and a total of more than 24,000 images; the latter contains 6 categories with 45 subclasses and a total of more than 36,000 images. The six categories are agricultural land, construction land and facilities, transportation and facilities, water and water conservancy facilities, woodland, and other land, and each category has several subclasses. This classification system is defined according to the national standard of land use classification in China, and is inspired by the hierarchy mechanism of ImageNet. Finally, we conducted a large number of experiments comparing RSI-CB with the SAT-4 and UC-Merced datasets on handcrafted features, such as SIFT, and classical CNN models, such as AlexNet, VGG, GoogLeNet, and ResNet. We also show that CNN models trained on RSI-CB have good performance when transferred to other datasets, i.e. UC-Merced, and good generalization ability. The experiments show that RSI-CB is more suitable as a benchmark for remote sensing image classification tasks than existing alternatives in the big data era, and can potentially be used in practical applications.
1
0
0
0
0
0
Quantum groups, Yang-Baxter maps and quasi-determinants
For any quasi-triangular Hopf algebra, there exists the universal R-matrix, which satisfies the Yang-Baxter equation. It is known that the adjoint action of the universal R-matrix on the elements of the tensor square of the algebra constitutes a quantum Yang-Baxter map, which satisfies the set-theoretic Yang-Baxter equation. The map has a zero curvature representation among L-operators defined as images of the universal R-matrix. We find that the zero curvature representation can be solved by the Gauss decomposition of a product of L-operators. We thereby obtain a quasi-determinant expression of the quantum Yang-Baxter map associated with the quantum algebra $U_{q}(gl(n))$. Moreover, the map is identified with products of quasi-Plücker coordinates over a matrix composed of the L-operators. We also consider the quasi-classical limit, where the underlying quantum algebra reduces to a Poisson algebra. The quasi-determinant expression of the quantum Yang-Baxter map reduces to ratios of determinants, which give a new expression of a classical Yang-Baxter map.
0
1
1
0
0
0
Low-level Active Visual Navigation: Increasing robustness of vision-based localization using potential fields
This paper proposes a low-level visual navigation algorithm to improve visual localization of a mobile robot. The algorithm, based on artificial potential fields, associates each feature in the current image frame with an attractive or neutral potential energy, with the objective of generating a control action that drives the vehicle towards the goal, while still favoring feature rich areas within a local scope, thus improving the localization performance. One key property of the proposed method is that it does not rely on mapping, and therefore it is a lightweight solution that can be deployed on miniaturized aerial robots, in which memory and computational power are major constraints. Simulations and real experimental results using a mini quadrotor equipped with a downward looking camera demonstrate that the proposed method can effectively drive the vehicle to a designated goal through a path that prevents localization failure.
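The control law lends itself to a compact sketch: the commanded velocity combines an attractive goal potential with locally scoped attractive terms, one per tracked image feature. The gains and the Gaussian scope below are illustrative assumptions, not the paper's tuned values.

import numpy as np

def control_action(pos, goal, features, k_goal=1.0, k_feat=0.2, sigma=2.0):
    # Velocity command from artificial potentials (illustrative gains).
    v = k_goal * (goal - pos)                  # attraction towards the goal
    for f in features:                         # tracked image features
        d = f - pos
        # A feature only pulls within a local scope (Gaussian window),
        # biasing the path towards feature-rich, well-localisable areas.
        v += k_feat * d * np.exp(-(d @ d) / (2.0 * sigma**2))
    return v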
1
0
0
0
0
0
A Brownian Motion Model and Extreme Belief Machine for Modeling Sensor Data Measurements
As the title suggests, we will describe (and justify through the presentation of some of the relevant mathematics) prediction methodologies for sensor measurements. This exposition will mainly be concerned with the mathematics related to modeling the sensor measurements.
1
0
0
0
0
0
On the nature of the magnetic phase transition in a Weyl semimetal
We investigate the nature of the magnetic phase transition induced by the short-ranged electron-electron interactions in a Weyl semimetal by using the perturbative renormalization-group method. We find that the critical point associated with the quantum phase transition is characterized by a Gaussian fixed point perturbed by a dangerously irrelevant operator. Although the low-energy and long-distance physics is governed by a free theory, the velocities of the fermionic quasiparticles and the magnetic excitations suffer from nontrivial renormalization effects. In particular, their ratio approaches one, which indicates an emergent Lorentz symmetry at low energies. We further investigate the stability of the fixed point in the presence of weak disorder. We show that while the fixed point is generally stable against weak disorder, among those disorders that are consistent with the emergent chiral symmetry of the clean system, a moderately strong random chemical potential and/or random vector potential may induce a quantum phase transition towards a disorder-dominated phase. We propose a global phase diagram of the Weyl semimetal in the presence of both electron-electron interactions and disorder based on our results.
0
1
0
0
0
0
Detecting Cyber-Physical Attacks in Additive Manufacturing using Digital Audio Signing
Additive Manufacturing (AM, or 3D printing) is a novel manufacturing technology that is being adopted in industrial and consumer settings. However, the reliance of this technology on computerization has raised various security concerns. In this paper we address sabotage via tampering with the 3D printing process. We present an object verification system using side-channel emanations: the sound generated by onboard stepper motors. The contributions of this paper are the following. We present two algorithms: one which generates a master audio fingerprint for the unmodified printing process, and one which computes the similarity between other print recordings and the master audio fingerprint. We then evaluate the deviation due to tampering, focusing on the detection of minimal tampering primitives. By detecting the deviation at the time of its occurrence, we can stop the printing process for compromised objects, thus saving time and preventing material waste. We discuss the impact on the method of aspects such as background noise and different audio recorder positions. We further outline our vision with use cases incorporating our approach.
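One plausible reading of the two algorithms, sketched below: the master fingerprint is an averaged log-magnitude spectrum of a verified print run, and similarity is the cosine between that profile and a new recording's profile. The actual fingerprint features and matching rule in the paper may differ.

import numpy as np
from scipy.signal import spectrogram

def master_fingerprint(audio, fs, nperseg=1024):
    # Average log-magnitude spectrum of a known-good print recording.
    _, _, S = spectrogram(audio, fs=fs, nperseg=nperseg)
    return np.log1p(S).mean(axis=1)

def similarity(audio, fs, master, nperseg=1024):
    # Cosine similarity between a new recording and the master fingerprint;
    # a low score flags possible tampering with the printing process.
    fp = master_fingerprint(audio, fs, nperseg)
    return float(fp @ master / (np.linalg.norm(fp) * np.linalg.norm(master)))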
1
0
0
0
0
0
Short-wavelength out-of-band EUV emission from Sn laser-produced plasma
We present the results of spectroscopic measurements in the extreme ultraviolet (EUV) regime (7-17 nm) of molten tin microdroplets illuminated by a high-intensity 3-J, 60-ns Nd:YAG laser pulse. The strong 13.5 nm emission from this laser-produced plasma is of relevance for next-generation nanolithography machines. Here, we focus on the shorter wavelength features between 7 and 12 nm which have so far remained poorly investigated despite their diagnostic relevance. Using flexible atomic code calculations and local thermodynamic equilibrium arguments, we show that the line features in this region of the spectrum can be explained by transitions from high-lying configurations within the Sn$^{8+}$-Sn$^{15+}$ ions. The dominant transitions for all ions but Sn$^{8+}$ are found to be electric-dipole transitions towards the $n$=4 ground state from the core-excited configuration in which a 4$p$ electron is promoted to the 5$s$ sub-shell. Our results resolve some long-standing spectroscopic issues and provide reliable charge state identification for Sn laser-produced plasma, which could be employed as a useful tool for diagnostic purposes.
0
1
0
0
0
0
Compact Cardinals and Eight Values in Cichoń's Diagram
Assuming three strongly compact cardinals, it is consistent that \[ \aleph_1 < \mathrm{add}(\mathrm{null}) < \mathrm{cov}(\mathrm{null}) < \mathfrak{b} < \mathfrak{d} < \mathrm{non}(\mathrm{null}) < \mathrm{cof}(\mathrm{null}) < 2^{\aleph_0}.\] Under the same assumption, it is consistent that \[ \aleph_1 < \mathrm{add}(\mathrm{null}) < \mathrm{cov}(\mathrm{null}) < \mathrm{non}(\mathrm{meager}) < \mathrm{cov}(\mathrm{meager}) < \mathrm{non}(\mathrm{null}) < \mathrm{cof}(\mathrm{null}) < 2^{\aleph_0}.\]
0
0
1
0
0
0
Analysis of Peer Review Effectiveness for Academic Journals Based on Distributed Parallel System
A simulation model based on parallel systems is established, aiming to explore the relation between the number of submissions and the overall quality of academic journals within a similar discipline under peer review. The model can effectively simulate the submission, review and acceptance behaviors of academic journals in a distributed manner. According to the simulation experiments, the overall standard of academic journals may deteriorate as a result of excessive submissions.
1
0
0
0
0
0
Observational signatures of linear warps in circumbinary discs
In recent years an increasing number of observational studies have hinted at the presence of warps in protoplanetary discs; however, a general, comprehensive description of observational diagnostics of warped discs has been missing. We performed a series of 3D SPH hydrodynamic simulations and combined them with 3D radiative transfer calculations to study the observability of warps in circumbinary discs, whose plane is misaligned with respect to the orbital plane of the central binary. Our numerical hydrodynamic simulations confirm previous analytical results on the dependence of the warp structure on the viscosity and the initial misalignment between the binary and the disc. To study the observational signatures of warps we calculate images in the continuum at near-infrared and sub-millimetre wavelengths and in the pure rotational transition of CO in the sub-millimetre. Warped circumbinary discs show surface brightness asymmetry in near-infrared scattered light images as well as in optically thick gas lines at sub-millimetre wavelengths. The asymmetry is caused by self-shadowing of the disc by the inner warped regions, thus the strength of the asymmetry depends on the strength of the warp. The projected velocity field, derived from line observations, shows characteristic deviations, twists and a change in the slope of the rotation curve, from that of an unperturbed disc. In extreme cases even the direction of rotation appears to change in the disc inwards of a characteristic radius. The strength of the kinematical signatures of warps decreases with increasing inclination. The strength of all warp signatures decreases with decreasing viscosity.
0
1
0
0
0
0
Bootstrapping single-channel source separation via unsupervised spatial clustering on stereo mixtures
Separating an audio scene into isolated sources is a fundamental problem in computer audition, analogous to image segmentation in visual scene analysis. Source separation systems based on deep learning are currently the most successful approaches for solving the underdetermined separation problem, where there are more sources than channels. Traditionally, such systems are trained on sound mixtures where the ground truth decomposition is already known. Since most real-world recordings do not have such a decomposition available, this limits the range of mixtures one can train on, and the range of mixtures the learned models may successfully separate. In this work, we use a simple blind spatial source separation algorithm to generate estimated decompositions of stereo mixtures. These estimates, together with a weighting scheme in the time-frequency domain, based on confidence in the separation quality, are used to train a deep learning model that can be used for single-channel separation, where no source direction information is available. This demonstrates how a simple cue such as the direction of origin of source can be used to bootstrap a model for source separation that can be used in situations where that cue is not available.
1
0
0
0
0
0
Multi-parameter One-Sided Monitoring Test
Multi-parameter one-sided hypothesis test problems arise naturally in many applications. We are particularly interested in effective tests for monitoring multiple quality indices in forestry products. Our search reveals that there are many effective statistical methods in the literature for normal data, and that they can easily be adapted for non-normal data. We find that the beautiful likelihood ratio test is unsatisfactory, because in order to control the size, it must cope with the least favorable distributions at the cost of power. In this paper, we find a novel way to slightly ease the size control, obtaining a much more powerful test. Simulation confirms that the new test retains good control of the type I error and is markedly more powerful than the likelihood ratio test as well as many competitors based on normal data. The new method performs well in the context of monitoring multiple quality indices.
0
0
1
1
0
0
Statistical mechanics of low-rank tensor decomposition
Often, large, high dimensional datasets collected across multiple modalities can be organized as a higher order tensor. Low-rank tensor decomposition then arises as a powerful and widely used tool to discover simple low dimensional structures underlying such data. However, we currently lack a theoretical understanding of the algorithmic behavior of low-rank tensor decompositions. We derive Bayesian approximate message passing (AMP) algorithms for recovering arbitrarily shaped low-rank tensors buried within noise, and we employ dynamic mean field theory to precisely characterize their performance. Our theory reveals the existence of phase transitions between easy, hard and impossible inference regimes, and displays an excellent match with simulations. Moreover, it reveals several qualitative surprises compared to the behavior of symmetric, cubic tensor decomposition. Finally, we compare our AMP algorithm to the most commonly used algorithm, alternating least squares (ALS), and demonstrate that AMP significantly outperforms ALS in the presence of noise.
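For reference, the ALS baseline that the AMP algorithm is compared against can be sketched in a few lines for a 3-way tensor. This is a textbook implementation under C-ordered unfoldings, not the paper's code, and it omits normalisation and convergence checks.

import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product of two factor matrices.
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=100, seed=0):
    # Rank-`rank` CP decomposition by alternating least squares.
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, rank)) for n in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C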
0
0
0
0
1
0
A path integral based model for stocks and order dynamics
We introduce a model for the short-term dynamics of financial assets based on an application to finance of quantum gauge theory, developing ideas of Ilinski. We present a numerical algorithm for the computation of the probability distribution of prices and compare the results with Apple stock prices and the S&P500 index.
0
0
0
0
0
1
A New Algorithm to Automate Inductive Learning of Default Theories
In inductive learning of a broad concept, an algorithm should be able to distinguish concept examples from exceptions and noisy data. An approach through recursively finding patterns in exceptions turns out to correspond to the problem of learning default theories. Default logic is what humans employ in common-sense reasoning. Therefore, learned default theories are better understood by humans. In this paper, we present new algorithms to learn default theories in the form of non-monotonic logic programs. Experiments reported in this paper show that our algorithms are a significant improvement over traditional approaches based on inductive logic programming.
1
0
0
0
0
0
Origin of the pressure-dependent T$_c$ valley in superconducting simple cubic phosphorus
Motivated by recent experiments, we investigate the pressure-dependent electronic structure and electron-phonon (\emph{e-ph}) coupling for simple cubic phosphorus by performing first-principles calculations within the full potential linearized augmented plane wave method. As a function of increasing pressure, our calculations show a valley feature in T$_c$, followed by an eventual decrease for higher pressures. We demonstrate that this T$_c$ valley at low pressures is due to two nearby Lifshitz transitions, as we analyze the band-resolved contributions to the \emph{e-ph} coupling. Below the first Lifshitz transition, the phonon hardening and shrinking of the $\gamma$ Fermi surface with $s$ orbital character results in a decreased T$_c$ with increasing pressure. After the second Lifshitz transition, the appearance of $\delta$ Fermi surfaces with $3d$ orbital character generate strong \emph{e-ph} inter-band couplings in $\alpha\delta$ and $\beta\delta$ channels, and hence lead to an increase of T$_c$. For higher pressures, the phonon hardening finally dominates, and T$_c$ decreases again. Our study reveals that the intriguing T$_c$ valley discovered in experiment can be attributed to Lifshitz transitions, while the plateau of T$_c$ detected at intermediate pressures appears to be beyond the scope of our analysis. This strongly suggests that besides \emph{e-ph} coupling, electronic correlations along with plasmonic contributions may be relevant for simple cubic phosphorous. Our findings hint at the notion that increasing pressure can shift the low-energy orbital weight towards $d$ character, and as such even trigger an enhanced importance of orbital-selective electronic correlations despite an increase of the overall bandwidth.
0
1
0
0
0
0
V-cycle multigrid algorithms for discontinuous Galerkin methods on non-nested polytopic meshes
In this paper we analyse the convergence properties of V-cycle multigrid algorithms for the numerical solution of the linear system of equations arising from discontinuous Galerkin discretization of second-order elliptic partial differential equations on polytopal meshes. Here, the sequence of spaces that forms the basis of the multigrid scheme is possibly non-nested and is obtained by employing agglomeration with possible edge/face coarsening. We prove that the method converges uniformly with respect to the granularity of the grid and the polynomial approximation degree p, provided that the number of smoothing steps, which depends on p, is chosen sufficiently large.
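The recursive structure of a V-cycle (smooth, restrict, recurse, correct, smooth) can be sketched as follows. This minimal version uses weighted Jacobi smoothing and Galerkin coarse operators on dense matrices; it does not implement the paper's agglomeration-based non-nested spaces.

import numpy as np

def v_cycle(A, b, x, P, level, nu=3, omega=0.6):
    # One V-cycle for A x = b; P[l] is the prolongation from level l.
    if level == 0:                      # coarsest level: solve exactly
        return np.linalg.solve(A, b)
    D = np.diag(A)
    for _ in range(nu):                 # pre-smoothing (weighted Jacobi)
        x = x + omega * (b - A @ x) / D
    Pl = P[level - 1]
    Ac = Pl.T @ A @ Pl                  # Galerkin coarse-level operator
    ec = v_cycle(Ac, Pl.T @ (b - A @ x), np.zeros(Ac.shape[0]),
                 P, level - 1, nu, omega)
    x = x + Pl @ ec                     # coarse-grid correction
    for _ in range(nu):                 # post-smoothing
        x = x + omega * (b - A @ x) / D
    return x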
1
0
0
0
0
0
On a result of Fel'dman on linear forms in the values of some E-functions
We shall consider a result of Fel'dman, where a sharp Baker-type lower bound is obtained for linear forms in the values of some E-functions. Fel'dman's proof is based on an explicit construction of Padé approximations of the first kind for these functions. In the present paper we introduce Padé approximations of the second kind for the same functions and use these to obtain a slightly improved version of Fel'dman's result.
0
0
1
0
0
0
Learning Low-shot facial representations via 2D warping
In this work, we mainly study the influence of the 2D warping module for one-shot face recognition.
1
0
0
0
0
0
Catalyzed bimolecular reactions in responsive nanoreactors
We describe a general theory for surface-catalyzed bimolecular reactions in responsive nanoreactors, catalytically active nanoparticles coated by a stimuli-responsive 'gating' shell, whose permeability controls the activity of the process. We address two archetypal scenarios encountered in this system: The first, where two species diffusing from a bulk solution react at the catalyst's surface; the second where only one of the reactants diffuses from the bulk while the other one is produced at the nanoparticle surface, e.g., by light conversion. We find that in both scenarios the total catalytic rate has the same mathematical structure, once diffusion rates are properly redefined. Moreover, the diffusional fluxes of the different reactants are strongly coupled, providing a richer behavior than that arising in unimolecular reactions. We also show that in stark contrast to bulk reactions, the identification of a limiting reactant is not simply determined by the relative bulk concentrations but controlled by the nanoreactor shell permeability. Finally, we describe an application of our theory by analyzing experimental data on the reaction between hexacyanoferrate (III) and borohydride ions in responsive hydrogel-based core-shell nanoreactors.
0
1
0
0
0
0
Average Case Constant Factor Time and Distance Optimal Multi-Robot Path Planning in Well-Connected Environments
Fast algorithms for optimal multi-robot path planning are sought after in many real-world applications. Known methods, however, generally do not simultaneously guarantee good solution optimality and fast run time (e.g., polynomial). In this work, we develop a low-polynomial running time algorithm, called SplitAndGroup (SAG), that solves the multi-robot path planning problem on grids and grid-like environments and produces constant-factor makespan-optimal solutions in the average case. That is, SAG is an average-case O(1)-approximation algorithm. SAG computes solutions with sub-linear makespan and is capable of handling cases where the density of robots is extremely high - in a graph-theoretic setting, the algorithm supports cases where all vertices of the underlying graph are occupied by robots. SAG attains its desirable properties through a careful combination of the divide-and-conquer technique and network-flow-based methods for routing the robots. Solutions from SAG are also, in a weaker sense, a constant-factor approximation with respect to total distance optimality.
1
0
0
0
0
0
Hierarchical Learning for Modular Robots
We argue that hierarchical methods can be the key to achieving reconfigurability in modular robots. We present a hierarchical approach for modular robots that allows a robot to simultaneously learn multiple tasks. Our evaluation environment is composed of two different modular robot configurations, namely 3 degrees-of-freedom (DoF) and 4 DoF, with two corresponding targets. During training, we switch between configurations and targets, aiming to evaluate the possibility of training a neural network that is able to select appropriate motor primitives and robot configuration to achieve the target. The trained neural network is then transferred to and executed on a real robot with 3 DoF and 4 DoF configurations. We demonstrate how this technique generalizes to robots with different configurations and tasks.
1
0
0
0
0
0
Online Learning with Diverse User Preferences
In this paper, we investigate the impact of diverse user preferences on learning under the stochastic multi-armed bandit (MAB) framework. We aim to show that when the user preferences are sufficiently diverse and each arm can be optimal for certain users, the O(log T) regret incurred by exploring the sub-optimal arms under the standard stochastic MAB setting can be reduced to a constant. Our intuition is that to achieve sub-linear regret, the number of times an optimal arm is pulled should scale linearly in time; when all arms are optimal for certain users and pulled frequently, the estimated arm statistics can quickly converge to their true values, thus reducing the need for exploration dramatically. We cast the problem into a stochastic linear bandits model, where both the user preferences and the states of the arms are modeled as independent and identically distributed (i.i.d.) d-dimensional random vectors. After receiving the user preference vector at the beginning of each time slot, the learner pulls an arm and receives a reward equal to the inner product of the preference vector and the arm state vector. We also assume that the state of the pulled arm is revealed to the learner once it is pulled. We propose a Weighted Upper Confidence Bound (W-UCB) algorithm and show that it can achieve constant regret when the user preferences are sufficiently diverse. The performance of W-UCB under general setups is also completely characterized and validated with synthetic data.
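The reward model admits a compact optimistic-index sketch: each arm's mean state vector is estimated by per-arm ridge regression on (preference, reward) pairs, and the learner pulls the arm with the highest upper-confidence score. This is a generic LinUCB-style variant for the stated model, not the authors' exact W-UCB weighting; observe_reward is a hypothetical environment callback.

import numpy as np

def preference_ucb(K, prefs, observe_reward, alpha=1.0, lam=1.0):
    # prefs[t] is the user preference vector revealed at time t;
    # observe_reward(t, a) returns prefs[t] . s_a + noise for arm a.
    d = prefs.shape[1]
    G = [lam * np.eye(d) for _ in range(K)]   # per-arm Gram matrices
    z = [np.zeros(d) for _ in range(K)]
    total = 0.0
    for t in range(len(prefs)):
        p = prefs[t]
        scores = np.empty(K)
        for a in range(K):
            Ginv = np.linalg.inv(G[a])
            s_hat = Ginv @ z[a]               # ridge estimate of arm state
            scores[a] = p @ s_hat + alpha * np.sqrt(p @ Ginv @ p)
        a = int(np.argmax(scores))
        r = observe_reward(t, a)
        G[a] += np.outer(p, p)                # update only the pulled arm
        z[a] += r * p
        total += r
    return total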
1
0
0
1
0
0
Gaussian Process based Passivation of a Class of Nonlinear Systems with Unknown Dynamics
The paper addresses the problem of passivation of a class of nonlinear systems with unknown dynamics. For this purpose, we use highly flexible, data-driven Gaussian process regression to identify the unknown dynamics for feed-forward compensation. The closed-loop system consisting of the nonlinear system, the Gaussian process model and a feedback control law is guaranteed to be semi-passive with a specific probability. The predicted variance of the Gaussian process regression is used to bound the model error, which additionally allows us to specify the state-space region where the closed-loop system behaves passively. Finally, the theoretical results are illustrated by a simulation.
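The mechanism hinges on the GP posterior: its mean supplies the feed-forward compensation of the unknown dynamics, while its variance bounds the model error and hence delimits the region where the passivity guarantee holds. A minimal GP regression sketch with an RBF kernel; the hyperparameters ell, sf, sn are illustrative, not fitted.

import numpy as np

def gp_posterior(X, y, Xs, ell=1.0, sf=1.0, sn=0.1):
    # Posterior mean (feed-forward estimate) and variance (error bound)
    # at test inputs Xs, given training pairs (X, y).
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf**2 * np.exp(-0.5 * d2 / ell**2)
    K = k(X, X) + sn**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(Xs, X)
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = sf**2 - (v**2).sum(axis=0)
    return mean, var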
1
0
0
0
0
0
A Bayesian Method for Joint Clustering of Vectorial Data and Network Data
We present a new model-based integrative method for clustering objects given both vectorial data, which describes the feature of each object, and network data, which indicates the similarity of connected objects. The proposed general model is able to cluster the two types of data simultaneously within one integrative probabilistic model, while traditional methods can only handle one data type or depend on transforming one data type to another. Bayesian inference of the clustering is conducted based on a Markov chain Monte Carlo algorithm. A special case of the general model combining the Gaussian mixture model and the stochastic block model is extensively studied. We used both synthetic data and real data to evaluate this new method and compare it with alternative methods. The results show that our simultaneous clustering method performs much better. This improvement is due to the power of the model-based probabilistic approach for efficiently integrating information.
1
0
0
1
0
0
Pointwise-generalized-inverses of linear maps between C$^*$-algebras and JB$^*$-triples
We study pointwise-generalized-inverses of linear maps between C$^*$-algebras. Let $\Phi$ and $\Psi$ be linear maps between complex Banach algebras $A$ and $B$. We say that $\Psi$ is a pointwise-generalized-inverse of $\Phi$ if $\Phi(aba)=\Phi(a)\Psi(b)\Phi(a),$ for every $a,b\in A$. The pair $(\Phi,\Psi)$ is Jordan-triple multiplicative if $\Phi$ is a pointwise-generalized-inverse of $\Psi$ and the latter is a pointwise-generalized-inverse of $\Phi$. We study the basic properties of these maps in connection with Jordan homomorphisms, triple homomorphisms and strong preservers. We also determine conditions that guarantee the automatic continuity of the pointwise-generalized-inverse of a continuous operator between C$^*$-algebras. An appropriate generalization is introduced in the setting of JB$^*$-triples.
0
0
1
0
0
0
The cohomology of rank two stable bundle moduli: mod two nilpotency & skew Schur polynomials
We compute cup product pairings in the integral cohomology ring of the moduli space of rank two stable bundles with odd determinant over a Riemann surface using methods of Zagier. The resulting formula is related to a generating function for certain skew Schur polynomials. As an application, we compute the nilpotency degree of a distinguished degree two generator in the mod two cohomology ring. We then give descriptions of the mod two cohomology rings in low genus, and describe the subrings invariant under the mapping class group action.
0
0
1
0
0
0
Automated Formal Synthesis of Digital Controllers for State-Space Physical Plants
We present a sound and automated approach to synthesize safe digital feedback controllers for physical plants represented as linear, time invariant models. Models are given as dynamical equations with inputs, evolving over a continuous state space and accounting for errors due to the digitalization of signals by the controller. Our approach has two stages, leveraging counterexample guided inductive synthesis (CEGIS) and reachability analysis. CEGIS synthesizes a static feedback controller that stabilizes the system under restrictions given by the safety of the reach space. Safety is verified either via BMC or abstract acceleration; if the verification step fails, we refine the controller by generalizing the counterexample. We synthesize stable and safe controllers for intricate physical plant models from the digital control literature.
1
0
0
0
0
0
A constrained control-planning strategy for redundant manipulators
This paper presents an interconnected control-planning strategy for redundant manipulators, subject to system and environmental constraints. The method incorporates low-level control characteristics and high-level planning components into a robust strategy for manipulators acting in complex environments, subject to joint limits. This strategy is formulated using an adaptive control rule, the estimated dynamic model of the robotic system and the nullspace of the linearized constraints. A path is generated that takes into account the capabilities of the platform. The proposed method is computationally efficient, enabling its implementation on a real multi-body robotic system. Through experimental results with a 7 DOF manipulator, we demonstrate the performance of the method in real-world scenarios.
1
0
0
0
0
0
Lagrangian for RLC circuits using analogy with the classical mechanics concepts
We study and formulate the Lagrangian for the LC, RC, RL, and RLC circuits by analogy with mechanical problems in the classical mechanics formulation. We find that the Lagrangians for the LC and RLC circuits are governed by two terms, i.e. a kinetic-energy-like and a potential-energy-like term. The Lagrangian for the RC circuit consists only of the potential-energy-like term, while the Lagrangian for the RL circuit consists only of the kinetic-energy-like term.
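In symbols, with the charge $q$ as the generalised coordinate (the standard correspondence $q \leftrightarrow x$, $L \leftrightarrow m$, $1/C \leftrightarrow k$; offered as context consistent with the abstract, not reproduced from the paper), the LC Lagrangian and its Euler-Lagrange equation read

\[
\mathcal{L}_{LC}(q,\dot q) \;=\; \underbrace{\tfrac{1}{2}\,L\,\dot q^{2}}_{\text{kinetic-energy-like}} \;-\; \underbrace{\frac{q^{2}}{2C}}_{\text{potential-energy-like}},
\qquad
\frac{d}{dt}\frac{\partial \mathcal{L}}{\partial \dot q}-\frac{\partial \mathcal{L}}{\partial q}
\;=\; L\ddot q+\frac{q}{C}\;=\;0,
\]

which is the harmonic-oscillator equation for the charge; dissipation through $R$ is conventionally treated alongside the Lagrangian via a Rayleigh function $\mathcal{F}=\tfrac{1}{2}R\dot q^{2}$.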
0
1
0
0
0
0
A note on recent criticisms to Birnbaum's theorem
In this note, we provide critical commentary on two articles that cast doubt on the validity and implications of Birnbaum's theorem: Evans (2013) and Mayo (2014). In our view, the proof is correct and the consequences of the theorem are alive and well.
0
0
1
1
0
0
Highly efficient numerical simulation of the TDGL equation with reticular free energy in hydrogel
In this paper, we focus on the numerical simulation of phase separation in macromolecule microsphere composite (MMC) hydrogel. The model equation is based on the Time-Dependent Ginzburg-Landau (TDGL) equation with reticular free energy. We put forward two $L^2$-stable schemes to simulate the simplified TDGL equation. In numerical experiments, we observe that simulating the whole process of phase separation requires a considerably long time. We also notice that the total free energy changes significantly in the initial stage and varies only slightly afterwards. Based on these properties, we introduce an adaptive strategy based on one of the stable schemes mentioned above. It is found that the introduction of time adaptivity not only resolves the dynamical changes of the solution accurately but also significantly saves CPU time for long-time simulation.
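Adaptive strategies of this kind are often realised by tying the step size to the rate of change of the free energy; a minimal sketch of such a rule (a common recipe for gradient flows, with illustrative constants, not the paper's scheme) is:

import numpy as np

def adaptive_dt(dE_dt, dt_min=1e-4, dt_max=1e-1, alpha=1e5):
    # Small steps while the free energy changes rapidly (early phase
    # separation), approaching dt_max once the energy varies slowly.
    return max(dt_min, dt_max / np.sqrt(1.0 + alpha * dE_dt**2))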
0
1
0
0
0
0
Entanglement verification protocols for distributed systems based on the Quantum Recursive Network Architecture
In distributed systems based on the Quantum Recursive Network Architecture, quantum channels and quantum memories are used to establish entangled quantum states between node pairs. Such systems are robust against attackers that interact with the quantum channels. Conversely, weaknesses emerge when an attacker takes full control of a node and alters the configuration of the local quantum memory, either to make a denial-of-service attack or to reprogram the node. In such a scenario, entanglement verification over quantum memories is a means for detecting the intruder. Usually, entanglement verification approaches focus either on untrusted sources of entangled qubits (photons, in most cases) or on eavesdroppers that interfere with the quantum channel while entangled qubits are transmitted. Instead, in this work we assume that the source of entanglement is trusted, but parties may be dishonest. Looking for efficient entanglement verification protocols that only require classical channels and local quantum operations to work, we thoroughly analyze the one proposed by Nagy and Akl, that we denote as NA2010 for simplicity, and we define and analyze two entanglement verification protocols based on teleportation (denoted as AC1 and AC2), characterized by increasing efficiency in terms of intrusion detection probability versus sacrificed quantum resources.
1
0
0
0
0
0
On the radius of spatial analyticity for the quartic generalized KdV equation
A lower bound on the rate of decrease in time of the uniform radius of spatial analyticity of solutions to the quartic generalized KdV equation is derived, which improves an earlier result by Bona, Grujić and Kalisch.
0
0
1
0
0
0
Calibrating Noise to Variance in Adaptive Data Analysis
Datasets are often used multiple times and each successive analysis may depend on the outcome of previous analyses. Standard techniques for ensuring generalization and statistical validity do not account for this adaptive dependence. A recent line of work studies the challenges that arise from such adaptive data reuse by considering the problem of answering a sequence of "queries" about the data distribution where each query may depend arbitrarily on answers to previous queries. The strongest results obtained for this problem rely on differential privacy -- a strong notion of algorithmic stability with the important property that it "composes" well when data is reused. However the notion is rather strict, as it requires stability under replacement of an arbitrary data element. The simplest algorithm is to add Gaussian (or Laplace) noise to distort the empirical answers. However, analysing this technique using differential privacy yields suboptimal accuracy guarantees when the queries have low variance. Here we propose a relaxed notion of stability that also composes adaptively. We demonstrate that a simple and natural algorithm based on adding noise scaled to the standard deviation of the query provides our notion of stability. This implies an algorithm that can answer statistical queries about the dataset with substantially improved accuracy guarantees for low-variance queries. The only previous approach that provides such accuracy guarantees is based on a more involved differentially private median-of-means algorithm and its analysis exploits stronger "group" stability of the algorithm.
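The core mechanism is simple to sketch: answer each statistical query with its empirical mean plus Gaussian noise whose scale is proportional to the query's empirical standard deviation. The proportionality constant t_scale below is a placeholder, not the calibrated value from the paper's analysis.

import numpy as np

def answer_adaptive_queries(data, queries, t_scale=0.5, rng=None):
    # Each query q maps a data point to a bounded real value.
    rng = rng or np.random.default_rng(0)
    answers = []
    for q in queries:
        vals = np.array([q(x) for x in data])
        sd = vals.std() + 1e-12          # noise scaled to the query's std
        answers.append(vals.mean() + rng.normal(0.0, t_scale * sd))
    return answers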
1
0
0
0
0
0
Testing Equality of Autocovariance Operators for Functional Time Series
We consider strictly stationary stochastic processes of Hilbert space-valued random variables and focus on tests of the equality of the lag-zero autocovariance operators of several independent functional time series. A moving block bootstrap-based testing procedure is proposed which generates pseudo random elements that satisfy the null hypothesis of interest. It is based on directly bootstrapping the time series of tensor products which overcomes some common difficulties associated with applications of the bootstrap to related testing problems. The suggested methodology can be potentially applied to a broad range of test statistics of the hypotheses of interest. As an example, we establish validity for approximating the distribution under the null of a fully functional test statistic based on the Hilbert-Schmidt distance of the corresponding sample lag-zero autocovariance operators, and show consistency under the alternative. As a prerequisite, we prove a central limit theorem for the moving block bootstrap procedure applied to the sample autocovariance operator which is of interest on its own. The finite sample size and power performance of the suggested moving block bootstrap-based testing procedure is illustrated through simulations and an application to a real-life dataset is discussed.
0
0
1
1
0
0
The Lyman-alpha forest power spectrum from the XQ-100 Legacy Survey
We present the Lyman-$\alpha$ flux power spectrum measurements of the XQ-100 sample of quasar spectra obtained in the context of the European Southern Observatory Large Programme "Quasars and their absorption lines: a legacy survey of the high redshift universe with VLT/XSHOOTER". Using $100$ quasar spectra with medium resolution and signal-to-noise ratio we measure the power spectrum over a range of redshifts $z = 3 - 4.2$ and over a range of scales $k = 0.003 - 0.06\,\mathrm{s\,km^{-1}}$. The results agree well with the measurements of the one-dimensional power spectrum found in the literature. The data analysis used in this paper is based on the Fourier transform and has been tested on synthetic data. Systematic and statistical uncertainties of our measurements are estimated, with a total error (statistical and systematic) comparable to the one of the BOSS data in the overlapping range of scales, and smaller by more than $50\%$ for higher redshift bins ($z>3.6$) and small scales ($k > 0.01\,\mathrm{s\,km^{-1}}$). The XQ-100 data set has the unique feature of having signal-to-noise ratios and resolution intermediate between the two data sets that are typically used to perform cosmological studies, i.e. BOSS and high-resolution spectra (e.g. UVES/VLT or HIRES). More importantly, the measured flux power spectra span the high redshift regime which is usually more constraining for structure formation models.
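A bare-bones version of the Fourier-transform estimator used here: given the flux contrast delta_F = F/<F> - 1 sampled on a uniform velocity grid, the 1D power is the squared modulus of its transform divided by the chain length. Window, noise and resolution corrections from the full analysis are omitted.

import numpy as np

def flux_power_spectrum(delta_flux, dv):
    # delta_flux: flux contrast on a grid with spacing dv (km/s).
    n = len(delta_flux)
    dft = np.fft.rfft(delta_flux) * dv           # discrete -> continuous FT
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dv)   # wavenumber in s/km
    power = np.abs(dft) ** 2 / (n * dv)          # P(k) in km/s
    return k, power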
0
1
0
0
0
0
AdS4 backgrounds with N>16 supersymmetries in 10 and 11 dimensions
We explore all warped $AdS_4\times_w M^{D-4}$ backgrounds with the most general allowed fluxes that preserve more than 16 supersymmetries in $D=10$- and $11$-dimensional supergravities. After imposing the assumption that either the internal space $M^{D-4}$ is compact without boundary or the isometry algebra of the background decomposes into that of AdS$_4$ and that of $M^{D-4}$, we find that there are no such backgrounds in IIB supergravity. Similarly in IIA supergravity, there is a unique such background with 24 supersymmetries locally isometric to $AdS_4\times \mathbb{CP}^3$, and in $D=11$ supergravity all such backgrounds are locally isometric to the maximally supersymmetric $AdS_4\times S^7$ solution.
0
0
1
0
0
0
Room Temperature Polariton Lasing in All-Inorganic Perovskites
Polariton lasing is the coherent emission arising from a macroscopic polariton condensate first proposed in 1996. Over the past two decades, polariton lasing has been demonstrated in a few inorganic and organic semiconductors at both low and room temperature. Polariton lasing in inorganic materials relies heavily on sophisticated epitaxial growth of crystalline gain medium layers sandwiched by two distributed Bragg reflectors, in which combating the built-in strain and mismatched thermal properties is nontrivial. On the other hand, organic active media usually suffer from large threshold density and weak nonlinearity due to the Frenkel exciton nature. Further development of polariton lasing towards technologically significant applications demands more accessible materials, ease of device fabrication and broadly tunable emission at room temperature. Herein, we report the experimental realization of room-temperature polariton lasing based on an epitaxy-free all-inorganic cesium lead chloride perovskite microcavity. Polariton lasing is unambiguously evidenced by a superlinear power dependence, macroscopic ground state occupation, blueshift of ground state emission, narrowing of the linewidth and the build-up of long-range spatial coherence. Our work suggests considerable promise of lead halide perovskites towards large-area, low-cost, high performance room temperature polariton devices and coherent light sources extending from the ultraviolet to near infrared range.
0
1
0
0
0
0
Conditional Independence, Conditional Mean Independence, and Zero Conditional Covariance
We investigate whether the directional hierarchy in the interdependency among the notions of conditional independence, conditional mean independence, and zero conditional covariance is reversible, for two random variables X and Y given a conditioning element Z that is not constrained by any topological restriction on its range. We find that if the first moments of X, Y, and XY exist, then conditional independence implies conditional mean independence, and conditional mean independence implies zero conditional covariance, but the direction of the hierarchy is not reversible in general. If the conditional expectation of Y given X and Z is "affine in X," which happens when X is Bernoulli, then the "intercept" and "slope" of the conditional expectation (that is, the nonparametric regression function) equal the "intercept" and "slope" of the "least-squares linear regression function," as a result of which zero conditional covariance implies conditional mean independence.
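The one-way hierarchy described above can be stated compactly: provided the first moments of X, Y, and XY exist,

\[
X \perp\!\!\!\perp Y \mid Z \;\Longrightarrow\; \mathbb{E}[\,Y \mid X, Z\,] = \mathbb{E}[\,Y \mid Z\,] \;\Longrightarrow\; \operatorname{Cov}(X, Y \mid Z) = 0,
\]

and neither implication is reversible in general.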
0
0
1
1
0
0
Probabilistic Sensor Fusion for Ambient Assisted Living
There is a widely-accepted need to revise current forms of health-care provision, with particular interest in sensing systems in the home. Given a multiple-modality sensor platform with heterogeneous network connectivity, as is under development in the Sensor Platform for HEalthcare in Residential Environment (SPHERE) Interdisciplinary Research Collaboration (IRC), we face specific challenges relating to the fusion of the heterogeneous sensor modalities. We introduce Bayesian models for sensor fusion, which aim to address the challenges of fusing heterogeneous sensor modalities. Using this approach we are able to identify the modalities that have most utility for each particular activity, and simultaneously identify which features within that activity are most relevant. We further show how the two separate tasks of location prediction and activity recognition can be fused into a single model, which allows for simultaneous learning and prediction for both tasks. We analyse the performance of this model on data collected in the SPHERE house, and show its utility. We also compare against some benchmark models which do not have the full structure, and show how the proposed model compares favourably to these methods.
1
0
0
1
0
0
Bounded time computation on metric spaces and Banach spaces
We extend the framework by Kawamura and Cook for investigating computational complexity for operators occurring in analysis. This model is based on second-order complexity theory for functions on the Baire space, which is lifted to metric spaces by means of representations. Time is measured in terms of the length of the input encodings and the required output precision. We propose the notions of a complete representation and of a regular representation. We show that complete representations ensure that any computable function has a time bound. Regular representations generalize Kawamura and Cook's more restrictive notion of a second-order representation, while still guaranteeing fast computability of the length of the encodings. Applying these notions, we investigate the relationship between purely metric properties of a metric space and the existence of a representation such that the metric is computable within bounded time. We show that a bound on the running time of the metric can be straightforwardly translated into size bounds of compact subsets of the metric space. Conversely, for compact spaces and for Banach spaces we construct a family of admissible, complete, regular representations that allow for fast computation of the metric and provide short encodings. Here it is necessary to trade the time bound off against the length of encodings.
1
0
1
0
0
0