Dataset schema (each record below consists of a title, an abstract, and six 0/1 subject labels):

title: string (length 7 to 239)
abstract: string (length 7 to 2.76k)
cs: int64 (0 or 1)
phy: int64 (0 or 1)
math: int64 (0 or 1)
stat: int64 (0 or 1)
quantitative biology: int64 (0 or 1)
quantitative finance: int64 (0 or 1)
Excitonic effects in third harmonic generation: the case of carbon nanotubes and nanoribbons
Linear and nonlinear optical properties of low-dimensional nanostructures have attracted considerable interest in the scientific community, both as tools to probe the strong confinement of electrons and for possible applications in optoelectronic devices. In particular, it has been shown that the linear optical response of carbon nanotubes [Science 308, 838 (2005)] and graphene nanoribbons [Nat. Comm. 5, 4253 (2014)] is dominated by bound electron-hole pairs, the excitons. The role of excitons in the linear response has been widely studied, but little is still known about their effect on nonlinear susceptibilities. Using a recently developed methodology [Phys. Rev. B 88, 235113 (2013)] based on well-established ab-initio many-body perturbation theory approaches, we find that quasiparticle shifts and excitonic effects significantly modify the third-harmonic generation in carbon nanotubes and graphene nanoribbons. For both systems the net effect of the many-body corrections is to reduce the intensity of the main peak in the independent-particle spectrum and redistribute the spectral weight among several excitonic resonances.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A general family of congruences for Bernoulli numbers
We prove a general family of congruences for Bernoulli numbers whose index is a polynomial function of a prime, modulo a power of that prime. Our family generalizes many known results, including the von Staudt--Clausen theorem and Kummer's congruence.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
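As a concrete illustration of the two classical results the abstract above says are generalized, here is a small numerical check of the von Staudt--Clausen theorem and Kummer's congruence using SymPy's exact Bernoulli numbers; this only verifies the textbook statements, not the paper's new family of congruences.

```python
# Numerical check of the two classical congruences named in the abstract above
# (not the paper's new general family), using SymPy's exact Bernoulli numbers.
from sympy import bernoulli, isprime, Rational

def von_staudt_clausen(n):
    """B_n + sum of 1/p over primes p with (p-1) | n should be an integer (n even)."""
    total = bernoulli(n) + sum(Rational(1, p) for p in range(2, n + 2)
                               if isprime(p) and n % (p - 1) == 0)
    return total.is_integer

def kummer(m, n, p):
    """B_m/m == B_n/n (mod p) when m == n (mod p-1) and p-1 divides neither m nor n."""
    diff = bernoulli(m) / m - bernoulli(n) / n
    return diff.p % p == 0 and diff.q % p != 0   # p divides the numerator only

print(all(von_staudt_clausen(n) for n in range(2, 40, 2)))   # True
print(kummer(2, 6, 5), kummer(4, 14, 11))                    # True True
```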
The Fourier algebra of a rigid $C^{\ast}$-tensor category
Completely positive and completely bounded multipliers on rigid $C^{\ast}$-tensor categories were introduced by Popa and Vaes. Using these notions, we define and study the Fourier-Stieltjes algebra, the Fourier algebra and the algebra of completely bounded multipliers of a rigid $C^{\ast}$-tensor category. The rich structure that these algebras have in the setting of locally compact groups is still present in the setting of rigid $C^{\ast}$-tensor categories. We also prove that Leptin's characterization of amenability still holds in this setting, and we collect some natural observations on property (T).
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On lattice path matroid polytopes: integer points and Ehrhart polynomial
In this paper we investigate the number of integer points lying in dilations of lattice path matroid polytopes. We give a characterization of such points as polygonal paths in the diagram of the lattice path matroid. Furthermore, we prove that lattice path matroid polytopes are affinely equivalent to a family of distributive polytopes. As applications we obtain two new infinite families of matroids verifying a conjecture of De Loera et al. and present an explicit formula for the Ehrhart polynomial of one of them.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Quantum effects and magnetism in the spatially distributed DNA molecules
Electronic and magnetic properties of DNA structures doped by simple and transition d- and f-metal ions (Gd, La, Cu, Zn, Au) are reviewed. Both one- and two-dimensional systems are considered. Particular attention is paid to gadolinium- and copper-doped DNA systems and the treatment of their unusual magnetism. The problem of classical and quantum transport (including transfer of genetic information during replication and transcription) and electron localization in biological systems is discussed.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Stochastic Model for Short-Term Probabilistic Forecast of Solar Photo-Voltaic Power
In this paper, a stochastic model with regime switching is developed for solar photo-voltaic (PV) power in order to provide short-term probabilistic forecasts. The proposed model for solar PV power is physics inspired and explicitly incorporates the stochasticity due to clouds using different parameters addressing the attenuation in power. Based on the statistical behavior of the parameters, a simple regime-switching process between the three classes of sunny, overcast and partly cloudy is proposed. Then, probabilistic forecasts of solar PV power are obtained by identifying the present regime using PV power measurements and assuming persistence in this regime. To illustrate the technique developed, a set of solar PV power data from a single rooftop installation in California is analyzed, and the effectiveness of the model in fitting the data and in providing short-term point and probabilistic forecasts is verified. The proposed forecast method outperforms a variety of reference models that produce point and probabilistic forecasts, demonstrating the merits of the proposed approach.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
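To make the persistence-in-regime idea from the abstract above concrete, here is a toy sketch of a regime-switching persistence forecast. The regime names follow the abstract, but the clear-sky profile, thresholds, window length, and data are illustrative assumptions, not the paper's fitted stochastic model.

```python
import numpy as np

# Toy regime-switching persistence forecast in the spirit of the abstract above.
# The clear-sky profile, regime thresholds, window length and data are
# illustrative assumptions only, not the paper's fitted model.

def clear_sky_power(hour, p_rated=5.0):
    """Crude bell-shaped clear-sky PV profile (kW) peaking at noon."""
    return p_rated * np.maximum(0.0, np.sin(np.pi * (hour - 6) / 12))

def classify_regime(clear_sky_index):
    """Map the recent mean clear-sky index to one of the three regimes."""
    k = np.mean(clear_sky_index)
    if k > 0.8:
        return "sunny"
    if k < 0.3:
        return "overcast"
    return "partly cloudy"

def persistence_forecast(hours, measured, horizon_hours):
    """Identify the current regime from recent measurements and persist its
    mean clear-sky index over the forecast horizon."""
    k_recent = measured[-3:] / np.maximum(clear_sky_power(hours[-3:]), 1e-6)
    future_hours = hours[-1] + np.arange(1, horizon_hours + 1)
    return classify_regime(k_recent), np.mean(k_recent) * clear_sky_power(future_hours)

hours = np.arange(8, 13, dtype=float)            # 08:00 .. 12:00
measured = np.array([1.1, 1.6, 2.1, 2.4, 2.6])   # kW, a partly cloudy morning
regime, forecast = persistence_forecast(hours, measured, horizon_hours=3)
print(regime, np.round(forecast, 2))
```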
Stability and elasticity of metastable solid solutions and superlattices in the MoN-TaN system: a first-principles study
Employing ab initio calculations, we discuss chemical, mechanical, and dynamical stability of MoN-TaN solid solutions together with cubic-like MoN/TaN superlattices, as another materials design concept. Hexagonal-type structures based on low-energy modifications of MoN and TaN are the most stable ones over the whole composition range. Despite being metastable, disordered cubic polymorphs are energetically significantly preferred over their ordered counterparts. An in-depth analysis of atomic environments in terms of bond lengths and angles reveals that the chemical disorder results in (partially) broken symmetry, i.e., the disordered cubic structure relaxes towards a hexagonal NiAs-type phase, the ground state of MoN. Surprisingly, also the superlattice architecture is clearly favored over the ordered cubic solid solution. We show that the bi-axial coherency stresses in superlattices break the cubic symmetry beyond simple tetragonal distortions and lead to a new tetragonal $\zeta$-phase (space group P4/nmm), which exhibits a more negative formation energy than the symmetry-stabilized cubic structures of MoN and TaN. Unlike cubic TaN, the $\zeta\text{-TaN}$ is elastically and vibrationally stable, while $\zeta$-MoN is stabilized only by the superlattice structure. To map compositional trends in elasticity, we establish mechanical stability of various Mo$_{1-x}$Ta$_x$N systems and find the closest high-symmetry approximants of the corresponding elastic tensors. According to the estimated polycrystalline moduli, the hexagonal polymorphs are predicted to be extremely hard, however, less ductile than the cubic phases and superlattices. The trends in stability based on energetics and elasticity are corroborated by density of electronic states.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
High Accuracy Classification of Parkinson's Disease through Shape Analysis and Surface Fitting in $^{123}$I-Ioflupane SPECT Imaging
Early and accurate identification of parkinsonian syndromes (PS) involving presynaptic degeneration from non-degenerative variants such as Scans Without Evidence of Dopaminergic Deficit (SWEDD) and tremor disorders is important for effective patient management, as the course, therapy and prognosis differ substantially between the two groups. In this study, we use Single Photon Emission Computed Tomography (SPECT) images from healthy normal, early PD and SWEDD subjects, as obtained from the Parkinson's Progression Markers Initiative (PPMI) database, and process them to compute shape- and surface fitting-based features for the three groups. We use these features to develop and compare various classification models that can discriminate between scans showing dopaminergic deficit, as in PD, and scans without the deficit, as in healthy normal or SWEDD. In addition, we compare these features with Striatal Binding Ratio (SBR)-based features, which are well established and clinically used, by computing a feature importance score using the Random Forests technique. We observe that the Support Vector Machine (SVM) classifier gave the best performance, with an accuracy of 97.29%. These features also showed higher importance than the SBR-based features. We infer from the study that shape analysis and surface fitting are useful and promising methods for extracting discriminatory features that can be used to develop diagnostic models that might have the potential to help clinicians in the diagnostic process.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
End-to-End Learning for Structured Prediction Energy Networks
Structured Prediction Energy Networks (SPENs) are a simple, yet expressive family of structured prediction models (Belanger and McCallum, 2016). An energy function over candidate structured outputs is given by a deep network, and predictions are formed by gradient-based optimization. This paper presents end-to-end learning for SPENs, where the energy function is discriminatively trained by back-propagating through gradient-based prediction. In our experience, the approach is substantially more accurate than the structured SVM method of Belanger and McCallum (2016), as it allows us to use more sophisticated non-convex energies. We provide a collection of techniques for improving the speed, accuracy, and memory requirements of end-to-end SPENs, and demonstrate the power of our method on 7-Scenes image denoising and CoNLL-2005 semantic role labeling tasks. In both, inexact minimization of non-convex SPEN energies is superior to baseline methods that use simplistic energy functions that can be minimized exactly.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Self-compression of spatially limited laser pulses in a system of coupled light-guides
The self-action features of wave packets propagating in a two-dimensional system of equidistantly arranged fibers are studied analytically and numerically on the basis of the discrete nonlinear Schrödinger equation. Self-consistent equations for the characteristic scales of a Gaussian wave packet are derived on the basis of the variational approach and are verified numerically for powers $\mathcal{P} < 10 \mathcal{P}_\text{cr}$ slightly exceeding the critical power for self-focusing. At higher powers, the wave beams become filamented, and their amplitude is limited due to nonlinear breaking of the interaction between neighboring light-guides. This makes it impossible to collect a powerful wave beam into a single light-guide. The variational analysis shows the possibility of adiabatic self-compression of soliton-like laser pulses in the process of their three-dimensional self-focusing towards the central light-guide. However, the further increase of the field amplitude during self-compression leads to the development of longitudinal modulation instability and the formation of a set of light bullets in the central fiber. In the regime of hollow wave beams, filamentation instability becomes predominant. As a result, it becomes possible to form a set of light bullets in optical fibers located on the ring.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
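The abstract above is based on the discrete nonlinear Schrödinger (DNLS) equation for a 2D array of coupled light-guides. Below is a minimal propagation sketch for a commonly used form of that equation; the coupling constant, grid size, input beam, and step size are arbitrary illustrative choices and are not taken from the paper.

```python
import numpy as np

# Minimal propagation sketch for the 2D discrete nonlinear Schrödinger (DNLS)
# equation on a square array of coupled waveguides, in a commonly used form:
#   i dA/dz + C * (sum of the 4 nearest-neighbour amplitudes) + |A|^2 A = 0.
# Grid size, coupling C, input beam and step size are illustrative choices only.

C = 1.0          # linear coupling between neighbouring light-guides
N = 21           # waveguides per side
dz = 1e-3        # propagation step
steps = 2000

def rhs(A):
    """Right-hand side dA/dz of the DNLS with zero boundary conditions."""
    nb = np.zeros_like(A)
    nb[1:, :] += A[:-1, :]
    nb[:-1, :] += A[1:, :]
    nb[:, 1:] += A[:, :-1]
    nb[:, :-1] += A[:, 1:]
    return 1j * (C * nb + np.abs(A) ** 2 * A)

# Gaussian input beam centred on the array
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x, indexing="ij")
A = 2.0 * np.exp(-(X ** 2 + Y ** 2) / 8.0).astype(complex)

for _ in range(steps):                       # classical RK4 stepping in z
    k1 = rhs(A)
    k2 = rhs(A + 0.5 * dz * k1)
    k3 = rhs(A + 0.5 * dz * k2)
    k4 = rhs(A + dz * k3)
    A += dz / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

power = np.abs(A) ** 2
print("total power:", round(float(power.sum()), 3),
      "fraction in central guide:", round(float(power[N // 2, N // 2] / power.sum()), 3))
```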
A Hand-Held Multimedia Translation and Interpretation System with Application to Diet Management
We propose a network-independent, hand-held system to translate and disambiguate foreign restaurant menu items in real-time. The system is based on the use of a portable multimedia device, such as a smartphone or a PDA. An accurate and fast translation is obtained using a Machine Translation engine and a context-specific corpus to which we apply two pre-processing steps, called translation standardization and $n$-gram consolidation. The phrase-table generated is orders of magnitude lighter than the ones commonly used in market applications, thus making translations computationally less expensive and decreasing the battery usage. Translation ambiguities are mitigated using multimedia information including images of dishes and ingredients, along with ingredient lists. We implemented a prototype of our system on an iPod Touch Second Generation for English speakers traveling in Spain. Our tests indicate that our translation method yields higher accuracy than translation engines such as Google Translate, and does so almost instantaneously. The memory requirements of the application, including the database of images, are also well within the limits of the device. By combining it with a database of nutritional information, our proposed system can be used to help individuals who follow a medical diet maintain this diet while traveling.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Adding Neural Network Controllers to Behavior Trees without Destroying Performance Guarantees
In this paper, we show how controllers created using data driven designs, such as neural networks, can be used together with model based controllers in a way that combines the performance guarantees of the model based controllers with the efficiency of the data driven controllers. The considered performance guarantees include both safety, in terms of avoiding designated unsafe parts of the state space, and convergence, in terms of reaching a given beneficial part of the state space. Using the framework Behavior Trees, we are able to show how this can be done on the top level, concerning just two controllers, as described above, but also note that the same approach can be used in arbitrary sub-trees. The price for introducing the new controller is that the upper bound on the time needed to reach the desired part of the state space increases. The approach is illustrated with an inverted pendulum example.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Drop pattern resulting from the breakup of a bidimensional grid of liquid filaments
A rectangular grid formed by liquid filaments on a partially wetting substrate evolves through a series of breakups leading to arrays of drops with different shapes distributed in a rather regular bidimensional pattern. Our study is focused on the configuration produced when two long parallel filaments of silicone oil, which are placed upon a glass substrate previously coated with a fluorinated solution, are crossed perpendicularly by another pair of long parallel filaments. A remarkable feature of this kind of grid is that there are two qualitatively different types of drops. While one set is formed at the crossing points, the rest are a consequence of the breakup of the shorter filaments formed between the crossings. Here, we analyze the main geometric features of all types of drops, such as the shape of the footprint and the contact angle distribution along the drop periphery. The formation of a series of short filaments with similar geometric and physical properties allows us to carry out simultaneously quasi-identical experiments to study the subsequent breakups. We develop a simple hydrodynamic model to predict the number of drops that results from a filament of given initial length and width. This model is able to yield the length intervals corresponding to a small number of drops, and its predictions are successfully compared with the experimental data as well as with numerical simulations of the full Navier--Stokes equation that provide a detailed time evolution of the dewetting motion of the filament until the breakup into drops. Finally, the prediction for finite filaments is contrasted with the existing theories for infinite ones.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
FIRED: Frequent Inertial Resets with Diversification for Emerging Commodity Cyber-Physical Systems
A Cyber-Physical System (CPS) is defined by its unique characteristics involving both the cyber and physical domains. Their hybrid nature introduces new attack vectors, but also provides an opportunity to design new security defenses. In this paper, we present a new domain-specific security mechanism, FIRED, that leverages physical properties such as inertia of the CPS to improve security. FIRED is simple to describe and implement. It goes through two operations: Reset and Diversify, as frequently as possible -- typically in the order of seconds or milliseconds. The combined effect of these operations is that attackers are unable to gain persistent control of the system. The CPS stays safe and stable even under frequent resets because of the inertia present. Further, resets simplify certain diversification mechanisms and makes them feasible to implement in CPSs with limited computing resources. We evaluate our idea on two real-world systems: an engine management unit of a car and a flight controller of a quadcopter. Roughly speaking, these two systems provide typical and extreme operational requirements for evaluating FIRED in terms of stability, algorithmic complexity, and safety requirements. We show that FIRED provides robust security guarantees against hijacking attacks and persistent CPS threats. We find that our defense is suitable for emerging CPS such as commodity unmanned vehicles that are currently unregulated and cost sensitive.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Modelling of Dictyostelium Discoideum Movement in Linear Gradient of Chemoattractant
Chemotaxis is a ubiquitous biological phenomenon in which cells detect a spatial gradient of chemoattractant, and then move towards the source. Here we present a position-dependent advection-diffusion model that quantitatively describes the statistical features of the chemotactic motion of the social amoeba {\it Dictyostelium discoideum} in a linear gradient of cAMP (cyclic adenosine monophosphate). We fit the model to experimental trajectories that are recorded in a microfluidic setup with stationary cAMP gradients and extract the diffusion and drift coefficients in the gradient direction. Our analysis shows that for the majority of gradients, both coefficients decrease in time and become negative as the cells crawl up the gradient. The extracted model parameters also show that besides the expected drift in the direction of chemoattractant gradient, we observe a nonlinear dependency of the corresponding variance in time, which can be explained by the model. Furthermore, the results of the model show that the non-linear term in the mean squared displacement of the cell trajectories can dominate the linear term on large time scales.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Cycles of Activity in the Jovian Atmosphere
Jupiter's banded appearance may appear unchanging to the casual observer, but closer inspection reveals a dynamic, ever-changing system of belts and zones with distinct cycles of activity. Identification of these long-term cycles requires access to datasets spanning multiple jovian years, but explaining them requires multi-spectral characterization of the thermal, chemical, and aerosol changes associated with visible color variations. The Earth-based support campaign for Juno's exploration of Jupiter has already characterized two upheaval events in the equatorial and temperate belts that are part of long-term jovian cycles, whose underlying sources could be revealed by Juno's exploration of Jupiter's deep atmosphere.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Predicting Tactical Solutions to Operational Planning Problems under Imperfect Information
This paper offers a methodological contribution at the intersection of machine learning and operations research. Namely, we propose a methodology to quickly predict tactical solutions to a given operational problem. In this context, the tactical solution is less detailed than the operational one but it has to be computed in very short time and under imperfect information. The problem is of importance in various applications where tactical and operational planning problems are interrelated and information about the operational problem is revealed over time. This is for instance the case in certain capacity planning and demand management systems. We formulate the problem as a two-stage optimal prediction stochastic program whose solution we predict with a supervised machine learning algorithm. The training data set consists of a large number of deterministic (second stage) problems generated by controlled probabilistic sampling. The labels are computed based on solutions to the deterministic problems (solved independently and offline) employing appropriate aggregation and subselection methods to address uncertainty. Results on our motivating application in load planning for rail transportation show that deep learning algorithms produce highly accurate predictions in very short computing time (milliseconds or less). The prediction accuracy is comparable to solutions computed by sample average approximation of the stochastic program.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Consistent Approval-Based Multi-Winner Rules
This paper is an axiomatic study of consistent approval-based multi-winner rules, i.e., voting rules that select a fixed-size group of candidates based on approval ballots. We introduce the class of counting rules, provide an axiomatic characterization of this class and, in particular, show that counting rules are consistent. Building upon this result, we axiomatically characterize three important consistent multi-winner rules: Proportional Approval Voting, Multi-Winner Approval Voting and the Approval Chamberlin-Courant rule. Our results demonstrate the variety of multi-winner rules and illustrate three different, orthogonal principles that multi-winner voting rules may represent: individual excellence, diversity, and proportionality.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
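To make two of the rules named in the abstract above concrete, here is a brute-force scorer for Multi-Winner Approval Voting (AV) and Proportional Approval Voting (PAV) on a made-up approval profile, using their standard definitions; the tie-breaking and the example are illustrative only.

```python
from itertools import combinations

# Brute-force scoring of two rules named in the abstract, using their standard
# definitions: Multi-Winner Approval Voting (AV) scores a committee W by the
# total number of approvals, Proportional Approval Voting (PAV) by
# sum over voters of (1 + 1/2 + ... + 1/|ballot ∩ W|).
# The approval profile below is made up for illustration.

ballots = [{"a", "b"}] * 4 + [{"c"}] * 3
candidates = {"a", "b", "c"}
k = 2  # committee size

def av_score(committee):
    return sum(len(ballot & committee) for ballot in ballots)

def pav_score(committee):
    return sum(sum(1.0 / j for j in range(1, len(ballot & committee) + 1))
               for ballot in ballots)

def winner(score):
    return max(combinations(sorted(candidates), k), key=lambda c: score(set(c)))

print("AV  winner:", winner(av_score))    # ('a', 'b'): 8 approvals (individual excellence)
print("PAV winner:", winner(pav_score))   # ('a', 'c'): 4 + 3 = 7 > 4 * 1.5 = 6 (proportionality)
```

On this profile the two rules disagree, which mirrors the abstract's point that different consistent rules embody different principles such as individual excellence versus proportionality.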
Bridge functional for the molecular density functional theory with consistent pressure and surface tension and its importance for solvation in water
We address the problem of predicting the solvation free energy and equilibrium solvent density profile in a few minutes from the molecular density functional theory beyond the usual hypernetted-chain approximation. We introduce a bridge functional of a coarse-grained, weighted solvent density. In a few minutes at most, for solutes of sizes ranging from small compounds to large proteins, we produce (i) an estimation of the free energy of solvation within 1 kcal/mol of the experimental data for the hydrophobic solutes presented here, and (ii) the solvent distribution around the solute. Contrary to previous proposals, this bridge functional is thermodynamically consistent in that it produces the correct liquid-vapor coexistence and the experimental surface tension. We show this consistency to be of crucial importance for water at room temperature and pressure. This bridge functional is designed to be simple, local, and thus numerically efficient. Finally, we illustrate this new level of molecular theory of solutions with the study of the hydration shell of a protein.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Fourier multiplier theorems for Triebel-Lizorkin spaces
In this paper we study sharp generalizations of $\dot{F}_p^{0,q}$ multiplier theorem of Mikhlin-Hörmander type. The class of multipliers that we consider involves Herz spaces $K_u^{s,t}$. Plancherel's theorem proves $\widehat{L_s^2}=K_2^{s,2}$ and we study the optimal triple $(u,t,s)$ for which $\sup_{k\in\mathbb{Z}}{\big\Vert \big( m(2^k\cdot)\varphi\big)^{\vee}\big\Vert_{K_u^{s,t}}}<\infty$ implies $\dot{F}_p^{0,q}$ boundedness of multiplier operator $T_m$ where $\varphi$ is a cutoff function. Our result also covers the $BMO$-type space $\dot{F}_{\infty}^{0,q}$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Individual dynamic predictions using landmarking and joint modelling: validation of estimators and robustness assessment
After the diagnosis of a disease, one major objective is to predict cumulative probabilities of events such as clinical relapse or death from the individual information collected up to a prediction time, including usually biomarker repeated measurements. Several competing estimators have been proposed to calculate these individual dynamic predictions, mainly from two approaches: joint modelling and landmarking. These approaches differ by the information used, the model assumptions and the complexity of the computational procedures. It is essential to properly validate the estimators derived from joint models and landmark models, quantify their variability and compare them in order to provide key elements for the development and use of individual dynamic predictions in clinical follow-up of patients. Motivated by the prediction of two competing causes of progression of prostate cancer from the history of prostate-specific antigen, we conducted an in-depth simulation study to validate and compare the dynamic predictions derived from these two methods. Specifically, we formally defined the quantity to estimate and its estimators, proposed techniques to assess the uncertainty around predictions and validated them. We also compared the individual dynamic predictions derived from joint models and landmark models in terms of prediction error, discriminatory power, efficiency and robustness to model assumptions. We show that these prediction tools should be handled with care, in particular by properly specifying models and estimators.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A partial converse to the Andreotti-Grauert theorem
Let $X$ be a smooth projective manifold with $\dim_\mathbb{C} X=n$. We show that if a line bundle $L$ is $(n-1)$-ample, then it is $(n-1)$-positive. This is a partial converse to the Andreotti-Grauert theorem. As an application, we show that a projective manifold $X$ is uniruled if and only if there exists a Hermitian metric $\omega$ on $X$ such that its Ricci curvature $\mathrm{Ric}(\omega)$ has at least one positive eigenvalue everywhere.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Ab initio study of magnetocrystalline anisotropy, magnetostriction, and Fermi surface of L10 FeNi (tetrataenite)
The ordered L1$_0$ FeNi phase (tetrataenite) has recently been considered as a promising candidate for rare-earth-free permanent magnet applications. In this work we calculate several characteristics of L1$_0$ FeNi, where most of the results come from the fully relativistic full-potential FPLO method with the generalized gradient approximation (GGA). Special attention is given to the summary of the magnetocrystalline anisotropy energies (MAEs), the full-potential calculations of the anisotropy constant $K_3$, and the combined analysis of the Fermi surface and the three-dimensional $\mathbf{k}$-resolved MAE. Other calculated parameters presented in this article are the magnetic moments $m_{s}$ and $m_{l}$, the magnetostrictive coefficient $\lambda_{001}$, the bulk modulus B$_0$, and the lattice parameters. The summary of MAEs shows rather large discrepancies between the experimental MAEs from the literature and also between the calculated MAEs.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Modular groups, Hurwitz classes and dynamic portraits of NET maps
An orientation-preserving branched covering $f: S^2 \to S^2$ is a nearly Euclidean Thurston (NET) map if each critical point is simple and its postcritical set has exactly four points. Inspired by classical, non-dynamical notions such as Hurwitz equivalence of branched covers of surfaces, we develop invariants for such maps. We then apply these notions to the classification and enumeration of NET maps. As an application, we obtain a complete classification of the dynamic critical orbit portraits of NET maps.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Proximal Planar Shape Signatures. Homology Nerves and Descriptive Proximity
This article introduces planar shape signatures derived from homology nerves, which are intersecting 1-cycles in a collection of homology groups endowed with a proximal relator (a set of nearness relations) that includes a descriptive proximity. A 1-cycle is a closed, connected path with a zero boundary in a simplicial complex covering a finite, bounded planar shape. The signature of a shape sh A (denoted by sig(sh A)) is a feature vector that describes sh A. A signature sig(sh A) is derived from the geometry, homology nerves, Betti number, and descriptive CW topology on the shape sh A. Several main results are given, namely, (a) every finite, bounded planar shape has a signature derived from the homology group on the shape, (b) a homology group equipped with a proximal relator defines a descriptive Leader uniform topology, and (c) the description of a homology nerve and the union of the descriptions of the 1-cycles in the nerve have the same homotopy type.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Asymptotic analysis of a 2D overhead crane with input delays in the boundary control
The paper investigates the asymptotic behavior of a 2D overhead crane with input delays in the boundary control. A linear boundary control is proposed. The main feature of such a control lies in the fact that it depends solely on the velocity, but in the presence of time delays. We end up with a closed-loop system where no displacement term is involved. It is shown that the problem is well-posed in the sense of semigroup theory. LaSalle's invariance principle is invoked in order to establish the asymptotic convergence of the solutions of the system to a stationary position which depends on the initial data. Using a resolvent method, it is proved that the convergence is indeed polynomial.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Abrupt disappearance and reemergence of the SU(2) and SU(4) Kondo effects due to population inversion
The interplay of almost degenerate levels in quantum dots and molecular junctions with possibly different couplings to the reservoirs has led to many observable phenomena, such as the Fano effect, transmission phase slips and the SU(4) Kondo effect. Here we predict a dramatic repeated disappearance and reemergence of the SU(4) and anomalous SU(2) Kondo effects with increasing gate voltage. This phenomenon is attributed to the level occupation switching which has been previously invoked to explain the universal transmission phase slips in the conductance through a quantum dot. We use analytical arguments and numerical renormalization group calculations to explain the observations and discuss their experimental relevance and dependence on the physical parameters.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Stochastic Global Optimization Algorithms: A Systematic Formal Approach
As is well known, some global optimization problems cannot be solved using analytic methods, so numeric/algorithmic approaches are used to find near-optimal solutions for them. A stochastic global optimization algorithm (SGoal) is an iterative algorithm that generates a new population (a set of candidate solutions) from a previous population using stochastic operations. Although some research works have formalized SGoals using Markov kernels, such formalization is not general and is sometimes unclear. In this paper, we propose a comprehensive and systematic formal approach for studying SGoals. First, we present the required probability theory ($\sigma$-algebras, measurable functions, kernels, Markov chains, products, convergence and so on) and prove that some algorithmic functions like swapping and projection can be represented by kernels. Then, we introduce the notion of a join-kernel as a way of characterizing the combination of stochastic methods. Next, we define the optimization space, a formal structure (a set with a $\sigma$-algebra that contains strict $\epsilon$-optimal states) for studying SGoals, and we develop kernels, like sort and permutation, on such a structure. Finally, we present some popular SGoals in terms of the developed theory, we introduce sufficient conditions for convergence of an SGoal, and we prove convergence of some popular SGoals.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A complete characterization of optimal dictionaries for least squares representation
Dictionaries are collections of vectors used for representations of elements in Euclidean spaces. While recent research on optimal dictionaries is focussed on providing sparse (i.e., $\ell_0$-optimal) representations, here we consider the problem of finding optimal dictionaries such that representations of samples of a random vector are optimal in an $\ell_2$-sense. For us, optimality of representation is equivalent to minimization of the average $\ell_2$-norm of the coefficients used to represent the random vector, with the lengths of the dictionary vectors being specified a priori. With the help of recent results on rank-$1$ decompositions of symmetric positive semidefinite matrices and the theory of majorization, we provide a complete characterization of $\ell_2$-optimal dictionaries. Our results are accompanied by polynomial time algorithms that construct $\ell_2$-optimal dictionaries from given data.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
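A small numerical illustration of the quantity optimized in the abstract above: for a dictionary $D$ with full row rank, the minimum-$\ell_2$-norm coefficients of a sample $x$ are given by the Moore-Penrose pseudoinverse, and dictionaries can be compared by the average squared norm of these coefficients. The two dictionaries and the covariance below are arbitrary examples; this is not the paper's characterization or construction algorithm.

```python
import numpy as np

# Illustration of the cost the abstract optimises: for a dictionary D (atoms as
# columns) the minimum-l2-norm coefficients representing x are c = pinv(D) @ x,
# and a dictionary is judged by the average ||c||^2 over samples of the random
# vector. The two dictionaries and the covariance below are arbitrary examples.

rng = np.random.default_rng(0)
cov = np.diag([4.0, 1.0])                     # covariance of the random vector
samples = rng.multivariate_normal(np.zeros(2), cov, size=20000)

def avg_rep_cost(D, X):
    """Mean squared l2-norm of the minimum-norm representations pinv(D) @ x."""
    C = X @ np.linalg.pinv(D).T               # coefficients, one row per sample
    return float(np.mean(np.sum(C ** 2, axis=1)))

# Two unit-norm dictionaries of 3 atoms in R^2
D_aligned = np.array([[1.0, 1.0, 0.0],        # two atoms along the high-variance axis
                      [0.0, 0.0, 1.0]])
D_spread = np.column_stack([[np.cos(t), np.sin(t)]
                            for t in (0.0, np.pi / 3, 2 * np.pi / 3)])

print("aligned dictionary, avg cost:", round(avg_rep_cost(D_aligned, samples), 3))
print("spread  dictionary, avg cost:", round(avg_rep_cost(D_spread, samples), 3))
```

In this toy setup the dictionary that places more atoms along the high-variance direction attains a lower average representation cost, which is the kind of adaptation to the data distribution the abstract formalizes.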
Least Square Variational Bayesian Autoencoder with Regularization
In recent years, Variational Autoencoders have become one of the most popular approaches to unsupervised learning of complicated distributions. A Variational Autoencoder (VAE) provides more efficient reconstructive performance than a traditional autoencoder, and variational autoencoders make better approximations than MCMC. The VAE defines a generative process in terms of ancestral sampling through a cascade of hidden stochastic layers; it is a directed graphical model. A variational autoencoder is trained to maximise the variational lower bound: we try to maximise the likelihood while at the same time making a good approximation of the data, essentially trading off the data log-likelihood against the KL divergence from the true posterior. This paper describes the scenario in which we wish to find a point estimate of the parameters $\theta$ of some parametric model in which we generate each observation by first sampling a local latent variable and then sampling the associated observation. Here we use a least-squares loss function with regularization for the reconstruction of the image; this loss function was found to give better reconstructed images and a faster training time.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Rational points of rationally simply connected varieties over global function fields
A complex projective manifold is rationally connected, resp. rationally simply connected, if finite subsets are connected by a rational curve, resp. the spaces parameterizing these connecting rational curves are themselves rationally connected. We prove that a projective scheme over a global function field with vanishing "elementary obstruction" has a rational point if it deforms to a rationally simply connected variety in characteristic 0. This gives new, uniform proofs over these fields of the Period-Index Theorem, the quasi-split case of Serre's "Conjecture II", and Lang's $C_2$ property.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A Koszul sign map
We define a Koszul sign map encoding the Koszul sign convention. A cohomological interpretation is given.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Combining low- to high-resolution transit spectroscopy of HD 189733b. Linking the troposphere and the thermosphere of a hot gas giant
Space-borne low- to medium-resolution (R~10^2-10^3) transmission spectroscopy of atmospheres detects the broadest spectral features (alkali doublets, molecular bands, scattering), while high-resolution (R~10^5), ground-based observations probe the sharpest features (cores of the alkali lines, molecular lines). The two techniques differ in that: (1) the LSF of ground-based observations is 10^3 times narrower than for space-borne observations; (2) space-borne transmission spectra probe up to the base of the thermosphere, while ground-based observations can reach pressures down to 10^(-11); (3) space-borne observations directly yield the transit depth of the planet, while ground-based observations measure differences in the radius of the planet at different wavelengths. It is challenging to combine both techniques. We develop a method to compare theoretical models with observations at different resolutions. We introduce PyETA, a line-by-line 1D radiative transfer code to compute transmission spectra at R~10^6 (0.01 A) over a broad wavelength range. A hybrid forward modeling/retrieval optimization scheme is devised to deal with the large computational resources required by modeling a broad wavelength range (0.3-2 $\mu$m) at high resolution. We apply our technique to HD189733b. Here, HST observations reveal a flattened spectrum due to scattering by aerosols, while high-resolution ground-based HARPS observations reveal the sharp cores of the sodium lines. We reconcile these results by building models that reproduce both data sets simultaneously, from the troposphere to the thermosphere. We confirm: (1) the presence of scattering by tropospheric aerosols; (2) that the sodium core feature is of thermospheric origin. Accounting for aerosols, the sodium cores indicate temperatures up to 10000 K in the thermosphere. The precise value of the thermospheric temperature is degenerate with the abundance of sodium and the altitude of the aerosol deck.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Factorizations in Modules and Splitting Multiplicatively Closed Subsets
We introduce the concept of multiplicatively closed subsets of a commutative ring $R$ which split an $R$-module $M$ and study factorization properties of elements of $M$ with respect to such a set. Also we demonstrate how one can utilize this concept to investigate factorization properties of $R$ and deduce some Nagata type theorems relating factorization properties of $R$ to those of its localizations, when $R$ is an integral domain.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
ROPPERI - A TPC readout with GEMs, pads and Timepix
The concept of a hybrid readout of a time projection chamber is presented. It combines GEM-based amplification and a pad-based anode plane with a pixel chip as readout electronics. In this way, a high granularity enabling the identification of electron clusters from the primary ionisation is achieved, as well as flexibility and large anode coverage. The benefits of this high granularity, in particular for dE/dx measurements, are outlined, and the current software and hardware development status towards a proof-of-principle is given.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Shared Task on Bandit Learning for Machine Translation
We introduce and describe the results of a novel shared task on bandit learning for machine translation. The task was organized jointly by Amazon and Heidelberg University for the first time at the Second Conference on Machine Translation (WMT 2017). The goal of the task is to encourage research on learning machine translation from weak user feedback instead of human references or post-edits. On each of a sequence of rounds, a machine translation system is required to propose a translation for an input, and receives a real-valued estimate of the quality of the proposed translation for learning. This paper describes the shared task's learning and evaluation setup, using services hosted on Amazon Web Services (AWS), the data and evaluation metrics, and the results of various machine translation architectures and learning protocols.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Double Sparsity Kernel Learning with Automatic Variable Selection and Data Extraction
Learning with Reproducing Kernel Hilbert Spaces (RKHS) has been widely used in many scientific disciplines. Because a RKHS can be very flexible, it is common to impose a regularization term in the optimization to prevent overfitting. Standard RKHS learning employs the squared norm penalty of the learning function. Despite its success, many challenges remain. In particular, one cannot directly use the squared norm penalty for variable selection or data extraction. Therefore, when there exist noise predictors, or the underlying function has a sparse representation in the dual space, the performance of standard RKHS learning can be suboptimal. In the literature, work has been proposed on how to perform variable selection in RKHS learning, and a data sparsity constraint was considered for data extraction. However, how to learn in a RKHS with both variable selection and data extraction simultaneously remains unclear. In this paper, we propose a unified RKHS learning method, namely, DOuble Sparsity Kernel (DOSK) learning, to overcome this challenge. An efficient algorithm is provided to solve the corresponding optimization problem. We prove that under certain conditions, our new method can asymptotically achieve variable selection consistency. Simulated and real data results demonstrate that DOSK is highly competitive among existing approaches for RKHS learning.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Controlling of blow-up responses by a nonlinear $\cal{PT}$ symmetric coupling
We investigate the dynamics of a coupled waveguide system with competing linear and nonlinear loss-gain profiles which can facilitate power saturation. We show the usefulness of the model in achieving unidirectional beam propagation. In this regard, the considered type of coupled waveguide system has two drawbacks: (i) difficulty in achieving perfect isolation of light in a waveguide and (ii) existence of blow-up type behavior for certain input power situations. We here show that a nonlinear $\cal{PT}$ symmetric coupling helps to overcome these two drawbacks. Such a nonlinear coupling has a close connection with the phenomenon of stimulated Raman scattering. In particular, we have elucidated the role of this nonlinear coupling using an integrable $\cal{PT}$ symmetric situation. Specifically, using the integrals of motion, we have reduced this coupled waveguide problem to the problem of the dynamics of a particle in a potential. With the latter picture, we have clearly illustrated the role of the considered nonlinear coupling. The above $\cal{PT}$ symmetric case corresponds to a limiting form of a general equation describing the phenomenon of stimulated Raman scattering. We also point out the ability to transport light unidirectionally even in this general case.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Text Extraction From Texture Images Using Masked Signal Decomposition
Text extraction is an important problem in image processing with applications from optical character recognition to autonomous driving. Most traditional text segmentation algorithms consider separating text from a simple background (which usually has a different color from the text). In this work we consider separating text from a textured background that has a similar color to the text. We look at this problem from a signal decomposition perspective and consider a more realistic scenario where signal components are overlaid on top of each other (instead of being added together). When the signals are overlaid, to separate the signal components we need to find a binary mask which shows the support of each component. Because directly solving for the binary mask is intractable, we relax this problem to an approximate continuous problem and solve it by an alternating optimization method. We show that the proposed algorithm achieves significantly better results than other recent works on several challenging images.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Deep Echo State Networks with Uncertainty Quantification for Spatio-Temporal Forecasting
Long-lead forecasting for spatio-temporal systems can often entail complex nonlinear dynamics that are difficult to specify a priori. Current statistical methodologies for modeling these processes are often highly parameterized and thus challenging to implement from a computational perspective. One potential parsimonious solution to this problem is a method from the dynamical systems and engineering literature referred to as an echo state network (ESN). ESN models use so-called {\it reservoir computing} to efficiently compute recurrent neural network (RNN) forecasts. Moreover, so-called "deep" models have recently been shown to be successful at predicting high-dimensional complex nonlinear processes, particularly those with multiple spatial and temporal scales of variability (such as we often find in spatio-temporal environmental data). Here we introduce a deep ensemble ESN (D-EESN) model. We present two versions of this model for spatio-temporal processes that both produce forecasts and associated measures of uncertainty. The first approach utilizes a bootstrap ensemble framework and the second is developed within a hierarchical Bayesian framework (BD-EESN). This more general hierarchical Bayesian framework naturally accommodates non-Gaussian data types and multiple levels of uncertainties. The methodology is first applied to a data set simulated from a novel non-Gaussian multiscale Lorenz-96 dynamical system simulation model and then to a long-lead United States (U.S.) soil moisture forecasting application.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
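As background for the abstract above, here is a minimal single-reservoir echo state network with a ridge-regression readout on a toy one-step-ahead prediction task. The reservoir size, spectral radius, regularization, and task are illustrative choices; the deep, ensemble, and Bayesian extensions described in the abstract are not reproduced.

```python
import numpy as np

# Minimal single-reservoir echo state network (ESN) with a ridge readout, to
# make concrete the reservoir-computing idea the abstract builds on. Sizes,
# spectral radius, regularisation and the toy task (one-step-ahead prediction
# of a noisy sine) are illustrative choices only.

rng = np.random.default_rng(1)
n_res, rho, leak, ridge = 200, 0.9, 1.0, 1e-6

# Toy series: predict y_{t+1} from y_t
t = np.arange(3000)
y = np.sin(0.07 * t) + 0.05 * rng.standard_normal(t.size)
u, target = y[:-1], y[1:]

# Random input weights; recurrent weights rescaled to spectral radius rho
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))

# Drive the reservoir and collect its states
states = np.zeros((u.size, n_res))
x = np.zeros(n_res)
for k, uk in enumerate(u):
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in[:, 0] * uk)
    states[k] = x

# Ridge-regression readout fitted on the first 2000 steps, tested on the rest
washout, split = 100, 2000
S, d = states[washout:split], target[washout:split]
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ d)
pred = states[split:] @ W_out
rmse = np.sqrt(np.mean((pred - target[split:]) ** 2))
print("test RMSE:", round(float(rmse), 4))
```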
Using Convex Optimization of Autocorrelation with Constrained Support and Windowing for Improved Phase Retrieval Accuracy
In imaging modalities recording diffraction data, the original image can be reconstructed assuming known phases. When phases are unknown, oversampling and a constraint on the support region in the original object can be used to solve a non-convex optimization problem. Such schemes are ill-suited to find the optimum solution for sparse data, since the recorded image does not correspond exactly to the original wave function. We construct a convex optimization problem using a relaxed support constraint and a maximum-likelihood treatment of the recorded data as a sample from the underlying wave function. We also stress the need to use relevant windowing techniques to account for the sampled pattern being finite. On simulated data, we demonstrate the benefits of our approach in terms of visual quality and an improvement in the crystallographic R-factor from .4 to .1 for highly noisy data.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Parametrizations, weights, and optimal prediction: Part 1
We consider the problem of the annual mean temperature prediction. The years taken into account and the corresponding annual mean temperatures are denoted by $0,\ldots, n$ and $t_0$, $\ldots$, $t_n$, respectively. We propose to predict the temperature $t_{n+1}$ using the data $t_0$, $\ldots$, $t_n$. For each $0\leq l\leq n$ and each parametrization $\Theta^{(l)}$ of the Euclidean space $\mathbb{R}^{l+1}$ we construct a list of weights for the data $\{t_0,\ldots, t_l\}$ based on the rows of $\Theta^{(l)}$ which are correlated with the constant trend. Using these weights we define a list of predictors of $t_{l+1}$ from the data $t_0$, $\ldots$, $t_l$. We analyse how the parametrization affects the prediction, and provide three optimality criteria for the selection of weights and parametrization. We illustrate our results for the annual mean temperature of France and Morocco.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Time irreversibility from symplectic non-squeezing
The issue of how time-reversible microscopic dynamics gives rise to macroscopic irreversible processes has been a recurrent one in Physics since the time of Boltzmann, whose ideas shaped, and essentially resolved, this apparent contradiction. Following Boltzmann's spirit and ideas, but employing Gibbs's approach, we advance the view that the macroscopic irreversibility of Hamiltonian systems of many degrees of freedom can also be seen as a result of the symplectic non-squeezing theorem.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
On Optimal Weighted-Delay Scheduling in Input-Queued Switches
Motivated by relatively few delay-optimal scheduling results, in comparison to results on throughput optimality, we investigate an input-queued switch scheduling problem in which the objective is to minimize a linear function of the queue-length vector. Theoretical properties of variants of the well-known MaxWeight scheduling algorithm are established within this context, which includes showing that these algorithms exhibit optimal heavy-traffic queue-length scaling. For the case of $2 \times 2$ input-queued switches, we derive an optimal scheduling policy and establish its theoretical properties, demonstrating fundamental differences with the variants of MaxWeight scheduling. Our theoretical results are expected to be of interest more broadly than input-queued switches. Computational experiments demonstrate and quantify the benefits of our optimal scheduling policy.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
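The abstract above studies variants of MaxWeight scheduling for a linear queue-length cost. The toy simulation below contrasts plain MaxWeight with a cost-weighted MaxWeight variant on a 2x2 input-queued switch; the arrival rates, costs, and horizon are arbitrary, and the optimal policy derived in the paper is not reproduced here.

```python
import numpy as np

# Toy simulation of a 2x2 input-queued switch under Bernoulli arrivals,
# comparing plain MaxWeight with a cost-weighted MaxWeight variant when the
# objective is a linear function c . Q of the queue-length vector (as in the
# abstract). Rates, costs and horizon are arbitrary illustrative values; this
# is not the optimal policy derived in the paper.

rng = np.random.default_rng(2)
rates = np.array([[0.4, 0.3],     # arrival probability to queue (input i, output j)
                  [0.3, 0.4]])
costs = np.array([[4.0, 1.0],     # per-slot holding cost of each queue
                  [1.0, 1.0]])
matchings = [np.array([[1, 0], [0, 1]]),   # the two perfect matchings of a 2x2 switch
             np.array([[0, 1], [1, 0]])]
T = 100_000

def simulate(weight):
    """Each slot, serve the matching M maximising sum(weight * Q * M)."""
    Q = np.zeros((2, 2), dtype=int)
    total_cost = 0.0
    for _ in range(T):
        Q += (rng.random((2, 2)) < rates).astype(int)          # arrivals
        best = max(matchings, key=lambda M: np.sum(weight * Q * M))
        Q = np.maximum(Q - best, 0)                            # departures
        total_cost += np.sum(costs * Q)
    return total_cost / T

print("MaxWeight         avg weighted cost:", round(simulate(np.ones((2, 2))), 3))
print("cost-weighted MW  avg weighted cost:", round(simulate(costs), 3))
```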
Hausdorff dimension, projections, intersections, and Besicovitch sets
This is a survey on recent developments on the Hausdorff dimension of projections and intersections for general subsets of Euclidean spaces, with an emphasis on estimates of the Hausdorff dimension of exceptional sets and on restricted projection families. We shall also discuss relations between projections and Hausdorff dimension of Besicovitch sets.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Interpolating between matching and hedonic pricing models
We consider the theoretical properties of a model which encompasses bi-partite matching under transferable utility on the one hand, and hedonic pricing on the other. This framework is intimately connected to tripartite matching problems (known as multi-marginal optimal transport problems in the mathematical literature). We exploit this relationship in two ways; first, we show that a known structural result from multi-marginal optimal transport can be used to establish an upper bound on the dimension of the support of stable matchings. Next, assuming the distribution of agents on one side of the market is continuous, we identify a condition on their preferences that ensures purity and uniqueness of the stable matching; this condition is a variant of a known condition in the mathematical literature, which guarantees analogous properties in the multi-marginal optimal transport problem. We exhibit several examples of surplus functions for which our condition is satisfied, as well as some for which it fails.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Modeling and predicting the short term evolution of the Geomagnetic field
The coupled evolution of the magnetic field and the flow at the Earth's core-mantle boundary is modeled within the 1900.0-2014.0 time period. To constrain the dynamical behavior of the system with a core field model derived from direct measurements of the Earth's magnetic field, we use an Ensemble Kalman filter algorithm. By simulating an ensemble of possible states, access to the complete statistical properties of the considered fields is available. Furthermore, the method enables us to provide predictions and to assess their reliability. In this study, we highlight the coexistence of two distinct flow regimes: one associated with the large-scale part of the eccentric gyre, which evolves slowly in time and possesses a very long memory of its past, and a faster one associated with the small-scale velocity field. We show that the latter can exhibit rapid variations in localized areas. The combination of the two regimes can predict quite well the decadal variations in length of day, but it can also explain the discrepancies between the physically predicted and the observed trend in these variations. Hindcast tests demonstrate that the model is well balanced and that it can provide accurate short-term predictions of a mean state and its associated uncertainties. However, magnetic field predictions are limited by the high randomization rate of the different velocity field scales, and after approximately 2000 years of forecast, no reliable information on the core field can be recovered.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
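The abstract above relies on an Ensemble Kalman filter. The sketch below shows a generic stochastic-EnKF analysis step (sample covariance, Kalman gain, perturbed observations) on a toy linear-Gaussian problem; the dimensions, observation operator, and noise levels are arbitrary, and the geomagnetic core-flow model itself is not included.

```python
import numpy as np

# Generic stochastic Ensemble Kalman filter (EnKF) analysis step on a toy
# linear-Gaussian problem, to illustrate the data-assimilation machinery the
# abstract relies on. The state dimension, observation operator and noise
# levels are arbitrary; the geomagnetic core-flow model is not included.

rng = np.random.default_rng(3)
n_state, n_obs, n_ens = 10, 4, 100

H = rng.standard_normal((n_obs, n_state))      # observation operator
R = 0.1 * np.eye(n_obs)                        # observation-error covariance
x_true = rng.standard_normal(n_state)
y = H @ x_true + rng.multivariate_normal(np.zeros(n_obs), R)

# Forecast ensemble: zero prior mean plus unit spread
ens = rng.standard_normal((n_ens, n_state))

def enkf_update(ens, y, H, R, rng):
    """Stochastic EnKF: perturb the observations, use the sample covariance for the gain."""
    X = ens - ens.mean(axis=0)                          # ensemble anomalies
    P = X.T @ X / (n_ens - 1)                           # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=len(ens))
    return ens + (y_pert - ens @ H.T) @ K.T

analysis = enkf_update(ens, y, H, R, rng)
print("prior    mean error:", round(float(np.linalg.norm(ens.mean(0) - x_true)), 3))
print("analysis mean error:", round(float(np.linalg.norm(analysis.mean(0) - x_true)), 3))
```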
Analysing the Potential of BLE to Support Dynamic Broadcasting Scenarios
In this paper, we present a novel approach for broadcasting information based on Bluetooth Low Energy (BLE) iBeacon technology. We propose a dynamic method that uses a combination of Wi-Fi and BLE technology, where each technology plays a part in the user discovery and broadcasting process. In such a system, a specific iBeacon device broadcasts the information when a user is in proximity. In our experiments, we consider a scenario where the system discovers users and disseminates information, and we later use the collected data to examine the system's performance and capability. The results show that our proposed approach has promising potential to become a powerful tool in the discovery and broadcasting concept that can be easily implemented and used in business environments.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
$\texttt{PyTranSpot}$ - A tool for multiband light curve modeling of planetary transits and stellar spots
Several studies have shown that stellar activity features, such as occulted and non-occulted starspots, can affect the measurement of transit parameters biasing studies of transit timing variations and transmission spectra. We present $\texttt{PyTranSpot}$, which we designed to model multiband transit light curves showing starspot anomalies, inferring both transit and spot parameters. The code follows a pixellation approach to model the star with its corresponding limb darkening, spots, and transiting planet on a two dimensional Cartesian coordinate grid. We combine $\texttt{PyTranSpot}$ with an MCMC framework to study and derive exoplanet transmission spectra, which provides statistically robust values for the physical properties and uncertainties of a transiting star-planet system. We validate $\texttt{PyTranSpot}$'s performance by analyzing eleven synthetic light curves of four different star-planet systems and 20 transit light curves of the well-studied WASP-41b system. We also investigate the impact of starspots on transit parameters and derive wavelength dependent transit depth values for WASP-41b covering a range of 6200-9200 $\AA$, indicating a flat transmission spectrum.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Interplay of spatial dynamics and local adaptation shapes species lifetime distributions and species-area relationships
The distributions of species lifetimes and species in space are related, since species with good local survival chances have more time to colonize new habitats and species inhabiting large areas have higher chances to survive local disturbances. Yet, both distributions have been discussed in mostly separate communities. Here, we study both patterns simultaneously using a spatially explicit, evolutionary community assembly approach. We present and investigate a metacommunity model, consisting of a grid of patches, where each patch contains a local food web. Species survival depends on predation and competition interactions, which in turn depend on species body masses as the key traits. The system evolves due to the migration of species to neighboring patches, the addition of new species as modifications of existing species, and local extinction events. The structure of each local food web thus emerges in a self-organized manner as the highly non-trivial outcome of the relative time scales of these processes. Our model generates a large variety of complex, multi-trophic networks and therefore serves as a powerful tool to investigate ecosystems on long temporal and large spatial scales. We find that the observed lifetime distributions and species-area relations resemble power laws over appropriately chosen parameter ranges and thus agree qualitatively with empirical findings. Moreover, we observe strong finite-size effects, and a dependence of the relationships on the trophic level of the species. By comparing our results to simple neutral models found in the literature, we identify the features that are responsible for the values of the exponents.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Routing Symmetric Demands in Directed Minor-Free Graphs with Constant Congestion
The problem of routing in graphs using node-disjoint paths has received a lot of attention and a polylogarithmic approximation algorithm with constant congestion is known for undirected graphs [Chuzhoy and Li 2016] and [Chekuri and Ene 2013]. However, the problem is hard to approximate within polynomial factors on directed graphs, for any constant congestion [Chuzhoy, Kim and Li 2016]. Recently, [Chekuri, Ene and Pilipczuk 2016] have obtained a polylogarithmic approximation with constant congestion on directed planar graphs, for the special case of symmetric demands. We extend their result by obtaining a polylogarithmic approximation with constant congestion on arbitrary directed minor-free graphs, for the case of symmetric demands.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Fast sampling of parameterised Gaussian random fields
Gaussian random fields are popular models for spatially varying uncertainties, arising for instance in geotechnical engineering, hydrology or image processing. A Gaussian random field is fully characterised by its mean function and covariance operator. In more complex models these can also be partially unknown. In this case we need to handle a family of Gaussian random fields indexed with hyperparameters. Sampling for a fixed configuration of hyperparameters is already very expensive due to the nonlocal nature of many classical covariance operators. Sampling from multiple configurations increases the total computational cost severely. In this report we employ parameterised Karhunen-Loève expansions for sampling. To reduce the cost we construct a reduced basis surrogate built from snapshots of Karhunen-Loève eigenvectors. In particular, we consider Matérn-type covariance operators with unknown correlation length and standard deviation. We suggest a linearisation of the covariance function and describe the associated online-offline decomposition. In numerical experiments we investigate the approximation error of the reduced eigenpairs. As an application we consider forward uncertainty propagation and Bayesian inversion with an elliptic partial differential equation where the logarithm of the diffusion coefficient is a parameterised Gaussian random field. In the Bayesian inverse problem we employ Markov chain Monte Carlo on the reduced space to generate samples from the posterior measure. All numerical experiments are conducted in 2D physical space, with non-separable covariance operators, and finite element grids with $\sim 10^4$ degrees of freedom.
1
0
0
1
0
0
Universal Scalable Robust Solvers from Computational Information Games and fast eigenspace adapted Multiresolution Analysis
We show how the discovery of robust scalable numerical solvers for arbitrary bounded linear operators can be automated as a Game Theory problem by reformulating the process of computing with partial information and limited resources as that of playing underlying hierarchies of adversarial information games. When the solution space is a Banach space $B$ endowed with a quadratic norm $\|\cdot\|$, the optimal measure (mixed strategy) for such games (e.g. the adversarial recovery of $u\in B$, given partial measurements $[\phi_i, u]$ with $\phi_i\in B^*$, using relative error in $\|\cdot\|$-norm as a loss) is a centered Gaussian field $\xi$ solely determined by the norm $\|\cdot\|$, whose conditioning (on measurements) produces optimal bets. When measurements are hierarchical, the process of conditioning this Gaussian field produces a hierarchy of elementary bets (gamblets). These gamblets generalize the notion of Wavelets and Wannier functions in the sense that they are adapted to the norm $\|\cdot\|$ and induce a multi-resolution decomposition of $B$ that is adapted to the eigensubspaces of the operator defining the norm $\|\cdot\|$. When the operator is localized, we show that the resulting gamblets are localized both in space and frequency and introduce the Fast Gamblet Transform (FGT) with rigorous accuracy and (near-linear) complexity estimates. As the FFT can be used to solve and diagonalize arbitrary PDEs with constant coefficients, the FGT can be used to decompose a wide range of continuous linear operators (including arbitrary continuous linear bijections from $H^s_0$ to $H^{-s}$ or to $L^2$) into a sequence of independent linear systems with uniformly bounded condition numbers and leads to $\mathcal{O}(N \operatorname{polylog} N)$ solvers and eigenspace adapted Multiresolution Analysis (resulting in near linear complexity approximation of all eigensubspaces).
0
0
1
1
0
0
Preliminary Experiments using Subjective Logic for the Polyrepresentation of Information Needs
According to the principle of polyrepresentation, retrieval accuracy may improve through the combination of multiple and diverse information object representations about e.g. the context of the user, the information sought, or the retrieval system. Recently, the principle of polyrepresentation was mathematically expressed using subjective logic, where the potential suitability of each representation for improving retrieval performance was formalised through degrees of belief and uncertainty. No experimental evidence or practical application has so far validated this model. We extend the work of Lioma et al. (2010) by providing a practical application and analysis of the model. We show how to map the abstract notions of belief and uncertainty to real-life evidence drawn from a retrieval dataset. We also show how to estimate two different types of polyrepresentation assuming either (a) independence or (b) dependence between the information objects that are combined. We focus on the polyrepresentation of different types of context relating to user information needs (i.e. work task, user background knowledge, ideal answer) and show that the subjective logic model can predict their optimal combination prior to, and independently of, the retrieval process.
1
0
0
0
0
0
General Robust Bayes Pseudo-Posterior: Exponential Convergence results with Applications
Although Bayesian inference is an immensely popular paradigm among a large segment of scientists including statisticians, most applications consider objective priors and need critical investigation (Efron, 2013, Science). Although it has several optimal properties, one major drawback of Bayesian inference is the lack of robustness against data contamination and model misspecification, which becomes pernicious in the use of objective priors. This paper presents the general formulation of a Bayes pseudo-posterior distribution yielding robust inference. Exponential convergence results related to the new pseudo-posterior and the corresponding Bayes estimators are established under the general parametric set-up, and illustrations are provided for independent stationary models and independent non-homogeneous models. For the first case, discrete priors and the corresponding maximum posterior estimators are discussed in additional detail. We further apply this new pseudo-posterior to propose robust versions of the Bayes predictive density estimators and the expected Bayes estimator for fixed-design (normal) linear regression models; their properties are illustrated both theoretically and empirically.
0
0
1
1
0
0
The $H_0$ tension in light of vacuum dynamics in the Universe
Despite the outstanding achievements of modern cosmology, the classical dispute on the precise value of $H_0$, which is the first ever parameter of modern cosmology and one of the prime parameters in the field, still goes on after over half a century of measurements. Recently the dispute came to the spotlight with renewed strength owing to the significant tension (at $>3\sigma$ c.l.) between the latest Planck determination obtained from the CMB anisotropies and the local (distance ladder) measurement from the Hubble Space Telescope (HST), based on Cepheids. In this work, we investigate the impact of the running vacuum model (RVM) and related models on such a controversy. For the RVM, the vacuum energy density $\rho_{\Lambda}$ carries a mild dependence on the cosmic expansion rate, i.e. $\rho_{\Lambda}(H)$, which makes it possible to improve the quality of the fit to the overall $SNIa+BAO+H(z)+LSS+CMB$ cosmological observations as compared to the concordance $\Lambda$CDM model. By letting the RVM deviate from the vacuum option, the equation of state $w=-1$ continues to be favored by the overall fit. Vacuum dynamics also predicts the following: i) the CMB range of values for $H_0$ is more favored than the local ones, and ii) smaller values for $\sigma_8(0)$. As a result, a better account of the LSS structure formation data is achieved as compared to the $\Lambda$CDM, which is based on a rigid (i.e. non-dynamical) $\Lambda$ term.
0
1
0
0
0
0
On certain type of difference polynomials of meromorphic functions
In this paper, we investigate zeros of difference polynomials of the form $f(z)^nH(z, f)-s(z)$, where $f(z)$ is a meromorphic function, $H(z, f)$ is a difference polynomial of $f(z)$ and $s(z)$ is a small function. We first obtain some inequalities for the relationship of the zero counting function of $f(z)^nH(z, f)-s(z)$ and the characteristic function and pole counting function of $f(z)$. Based on these inequalities, we establish some difference analogues of a classical result of Hayman for meromorphic functions. Some special cases are also investigated. These results improve previous findings.
0
0
1
0
0
0
Teaching methods are erroneous: approaches which lead to erroneous end-user computing
If spreadsheets are not erroneous then who, or what, is? Research has found that end-users are. If end-users are erroneous then why are they? Research has found that responsibility lies with human beings' fast and slow thinking modes and the inappropriate way they use them. If we are aware of this peculiarity of human thinking, then why do we not teach students how to train their brains? This is the main problem, this is the weakest link in the process: teaching. We have to make teachers realize that end-users are erroneous because of the erroneous teaching approaches to end-user computing. The proportion of fast and slow thinking modes is not constant, and teachers are mistaken when they apply the same proportion in both the teaching and end-user roles. Teachers should believe in the incremental nature of science and have high self-efficacy to make students understand and appreciate science. This is not currently the case in ICT and CS, and it is high time fundamental changes were introduced.
1
0
0
0
0
0
Modulational Instability in Linearly Coupled Asymmetric Dual-Core Fibers
We investigate modulational instability (MI) in asymmetric dual-core nonlinear directional couplers incorporating the effects of the differences in effective mode areas and group velocity dispersions, as well as phase- and group-velocity mismatches. Using coupled-mode equations for this system, we identify MI conditions from the linearization with respect to small perturbations. First, we compare the MI spectra of the asymmetric system and its symmetric counterpart in the case of the anomalous group-velocity dispersion (GVD). In particular, it is demonstrated that the increase of the inter-core linear-coupling coefficient leads to a reduction of the MI gain spectrum in the asymmetric coupler. The analysis is extended for the asymmetric system in the normal-GVD regime, where the coupling induces and controls the MI, as well as for the system with opposite GVD signs in the two cores. Following the analytical consideration of the MI, numerical simulations are carried out to explore nonlinear development of the MI, revealing the generation of periodic chains of localized peaks with growing amplitudes, which may transform into arrays of solitons.
0
1
0
0
0
0
Discovering Visual Concept Structure with Sparse and Incomplete Tags
Discovering automatically the semantic structure of tagged visual data (e.g. web videos and images) is important for visual data analysis and interpretation, enabling machine intelligence to effectively process the fast-growing amount of multi-media data. However, this is non-trivial due to the need for jointly learning underlying correlations between heterogeneous visual and tag data. The task is made more challenging by inherently sparse and incomplete tags. In this work, we develop a method for modelling the inherent visual data concept structures based on a novel Hierarchical-Multi-Label Random Forest model capable of correlating structured visual and tag information so as to more accurately interpret the visual semantics, e.g. disclosing meaningful visual groups with similar high-level concepts, and recovering missing tags for individual visual data samples. Specifically, our model exploits hierarchically structured tags of different semantic abstractness and multiple tag statistical correlations in addition to modelling visual and tag interactions. As a result, our model is able to discover more accurate semantic correlations between textual tags and visual features, and ultimately provides a favourable interpretation of visual semantics even with highly sparse and incomplete tags. We demonstrate the advantages of our proposed approach in two fundamental applications, visual data clustering and missing tag completion, on benchmarking video (i.e. TRECVID MED 2011) and image (i.e. NUS-WIDE) datasets.
1
0
0
0
0
0
GALILEO: A Generalized Low-Entropy Mixture Model
We present a new method of generating mixture models for data with categorical attributes. The keys to this approach are an entropy-based density metric in categorical space and annealing of high-entropy/low-density components from an initial state with many components. Pruning of low-density components using the entropy-based density allows GALILEO to consistently find high-quality clusters and the same optimal number of clusters. GALILEO has shown promising results on a range of test datasets commonly used for categorical clustering benchmarks. We demonstrate that the scaling of GALILEO is linear in the number of records in the dataset, making this method suitable for very large categorical datasets.
1
0
0
1
0
0
Approximation by mappings with singular Hessian minors
Let $\Omega\subset\mathbb R^n$ be a Lipschitz domain. Given $1\leq p<k\leq n$ and any $u\in W^{2,p}(\Omega)$ belonging to the little Hölder class $c^{1,\alpha}$, we construct a sequence $u_j$ in the same space with $\operatorname{rank}D^2u_j<k$ almost everywhere such that $u_j\to u$ in $C^{1,\alpha}$ and weakly in $W^{2,p}$. This result is in strong contrast with known regularity behavior of functions in $W^{2,p}$, $p\geq k$, satisfying the same rank inequality.
0
0
1
0
0
0
Adaptive Cardinality Estimation
In this paper we address the cardinality estimation problem, which is an important subproblem in query optimization. Query optimization is the part of every relational DBMS responsible for finding the best way to execute a given query; these alternatives are called plans. The execution times of different plans may differ by several orders of magnitude, so the query optimizer has a great influence on overall DBMS performance. We consider the cost-based query optimization approach, which is the most popular one. It has been observed that the quality of cost-based optimization depends heavily on the quality of cardinality estimation. The cardinality of a plan node is the number of tuples it returns. In this paper we propose a novel cardinality estimation approach that uses machine learning methods. The key idea of the approach is to use execution statistics of previously executed queries to improve cardinality estimates; we call this approach adaptive cardinality estimation to reflect this point. The approach is general, flexible, and easy to implement. The experimental evaluation shows that this approach significantly increases the quality of cardinality estimation, and therefore improves DBMS performance for some queries by several times or even by several dozen times.
1
0
0
1
0
0
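As a rough illustration of the idea in the adaptive cardinality estimation abstract above, the sketch below trains a regression model on hypothetical per-node features of previously executed plans and predicts the cardinality of a new node. The feature encoding, the gradient-boosting model and all numbers are illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch: learn cardinality estimates from statistics of
# previously executed queries. Features, model choice and data are assumed
# for demonstration only and do not reproduce the paper's method.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 3))        # per-predicate selectivities of past plan nodes
true_card = 1e6 * X.prod(axis=1) * rng.lognormal(0.0, 0.3, 500)
y = np.log1p(true_card)                          # regress on log-cardinality for stability

model = GradientBoostingRegressor().fit(X, y)    # learn from execution feedback

new_node = np.array([[0.1, 0.5, 0.2]])           # features of a node in a new plan
print("predicted cardinality:", float(np.expm1(model.predict(new_node))[0]))
```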
Non-stationary Stochastic Optimization under $L_{p,q}$-Variation Measures
We consider a non-stationary sequential stochastic optimization problem, in which the underlying cost functions change over time under a variation budget constraint. We propose an $L_{p,q}$-variation functional to quantify the change, which yields less variation for dynamic function sequences whose changes are constrained to short time periods or small subsets of input domain. Under the $L_{p,q}$-variation constraint, we derive both upper and matching lower regret bounds for smooth and strongly convex function sequences, which generalize previous results in Besbes et al. (2015). Furthermore, we provide an upper bound for general convex function sequences with noisy gradient feedback, which matches the optimal rate as $p\to\infty$. Our results reveal some surprising phenomena under this general variation functional, such as the curse of dimensionality of the function domain. The key technical novelties in our analysis include affinity lemmas that characterize the distance of the minimizers of two convex functions with bounded Lp difference, and a cubic spline based construction that attains matching lower bounds.
1
0
0
1
0
0
Spin conductance of YIG thin films driven from thermal to subthermal magnons regime by large spin-orbit torque
We report a study on spin conductance in ultra-thin films of Yttrium Iron Garnet (YIG), where spin transport is provided by propagating spin waves that are generated and detected by direct and inverse spin Hall effects in two Pt wires deposited on top. While at low current the spin conductance is dominated by transport of thermal magnons, at high current the spin conductance is dominated by low-damping non-equilibrium magnons thermalized near the spectral bottom by magnon-magnon interaction, with a consequent sensitivity to the applied magnetic field and a longer decay length. This picture is supported by microfocus Brillouin Light Scattering spectroscopy.
0
1
0
0
0
0
Mathematical renormalization in quantum electrodynamics via noncommutative generating series
In this work, we focus on the approach by noncommutative formal power series to study the combinatorial aspects of the renormalization at the singularities in $\{0,1,+\infty\}$ of the solutions of nonlinear differential equations involved in quantum electrodynamics.
0
0
1
0
0
0
Inference in Deep Networks in High Dimensions
Deep generative networks provide a powerful tool for modeling complex data in a wide range of applications. In inverse problems that use these networks as generative priors on data, one must often perform inference of the inputs of the networks from the outputs. Inference is also required for sampling during stochastic training on these generative models. This paper considers inference in a deep stochastic neural network where the parameters (e.g., weights, biases and activation functions) are known and the problem is to estimate the values of the input and hidden units from the output. While several approximate algorithms have been proposed for this task, there are few analytic tools that can provide rigorous guarantees on the reconstruction error. This work presents a novel and computationally tractable output-to-input inference method called Multi-Layer Vector Approximate Message Passing (ML-VAMP). The proposed algorithm, derived from expectation propagation, extends earlier AMP methods that are known to achieve the replica predictions for optimality in simple linear inverse problems. Our main contribution shows that the mean-squared error (MSE) of ML-VAMP can be exactly predicted in a certain large system limit (LSL) where the number of layers is fixed and weight matrices are random and orthogonally-invariant with dimensions that grow to infinity. ML-VAMP is thus a principled method for output-to-input inference in deep networks with a rigorous and precise performance achievability result in high dimensions.
1
0
0
1
0
0
Differentially Private Variational Dropout
Deep neural networks with their large number of parameters are highly flexible learning systems. This high flexibility brings with it some serious problems, such as overfitting, and regularization is used to address this problem. A currently popular and effective regularization technique for controlling overfitting is dropout. Often, the large data collections required for neural networks contain sensitive information such as the medical histories of patients, and the privacy of the training data should be protected. In this paper, we modify the recently proposed variational dropout technique which provided an elegant Bayesian interpretation to dropout, and show that the intrinsic noise in the variational dropout can be exploited to obtain a degree of differential privacy. The iterative nature of training neural networks presents a challenge for privacy-preserving estimation since multiple iterations increase the amount of noise added. We overcome this by using a relaxed notion of differential privacy, called concentrated differential privacy, which provides tighter estimates of the overall privacy loss. We demonstrate the accuracy of our privacy-preserving variational dropout algorithm on benchmark datasets.
1
0
0
1
0
0
Persistent Currents in Ferromagnetic Condensates
Persistent currents in Bose condensates with a scalar order parameter are stabilized by the topology of the order parameter manifold. In condensates with multicomponent order parameters it is topologically possible for supercurrents to `unwind' without leaving the manifold. We study the energetics of this process in the case of ferromagnetic condensates using a long wavelength energy functional that includes both the superfluid and spin stiffnesses. Exploiting analogies to an elastic rod and rigid body motion, we show that the current carrying state in a 1D ring geometry transitions between a spin helix in the energy minima and a soliton-like configuration at the maxima. The relevance to recent experiments in ultracold atoms is briefly discussed.
0
1
0
0
0
0
Parameter Adaptation and Criticality in Particle Swarm Optimization
Generality is one of the main advantages of heuristic algorithms; as such, multiple parameters are exposed to the user with the objective of allowing them to shape the algorithms to their specific needs. Parameter selection, therefore, becomes an intrinsic problem of every heuristic algorithm. Selecting good parameter values relies not only on knowledge related to the problem at hand, but also on knowledge of the algorithms themselves. This research explores the use of self-organized criticality to reduce user interaction in the process of selecting suitable parameters for particle swarm optimization (PSO) heuristics. A particle swarm variant (named Adaptive PSO) with self-organized criticality is developed and benchmarked against the standard PSO. Criticality is observed in the dynamic behaviour of this swarm, and excellent results are observed in the long run. In contrast with the standard PSO, the Adaptive PSO does not stagnate at any point in time, balancing the concepts of exploration and exploitation better. A software platform for experimenting with particle swarms, called PSO Laboratory, is also developed. This software is used to test the standard PSO as well as all other PSO variants developed in the process of creating the Adaptive PSO. As the software is intended to be of aid to future and related research, special attention has been put into the development of a friendly graphical user interface. Particle swarms are executed in real time, allowing users to experiment by changing parameters on-the-fly.
1
0
0
0
0
0
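For background to the Adaptive PSO abstract above, the snippet below is a minimal standard (global-best) PSO loop on a toy objective; the self-organized criticality adaptation described in the abstract is not reproduced, and all parameter values are generic defaults.

```python
# Minimal global-best PSO on a toy objective (sphere function).
import numpy as np

def sphere(x):
    return np.sum(x * x, axis=-1)          # objective to minimize

rng = np.random.default_rng(1)
n, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration coefficients

pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), sphere(pos)
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best value found:", pbest_val.min())
```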
Model Predictive Control meets robust Kalman filtering
Model Predictive Control (MPC) is the principal control technique used in industrial applications. Although it offers distinctive qualities that make it ideal for industrial applications, its robustness to model uncertainties and external noise can be questioned. In this paper we propose a robust MPC controller that merges the simplicity of MPC design with added robustness. In particular, our control system stems from the idea of adding robustness in the prediction phase of the algorithm through a specific, recently introduced robust Kalman filter. Notably, the overall result is an algorithm very similar to classic MPC that also provides the user with the possibility of tuning the robustness of the control. To test the ability of the controller to deal with errors in modeling, we consider a servomechanism system characterized by nonlinear dynamics.
0
0
1
0
0
0
Election forensic analysis of the Turkish Constitutional Referendum 2017
With a majority of 'Yes' votes in the Constitutional Referendum of 2017, Turkey continues its transition from democracy to autocracy. By the will of the Turkish people, this referendum transferred practically all executive power to president Erdogan. However, the referendum was confronted with a substantial number of allegations of electoral misconduct and irregularities, ranging from state coercion of 'No' supporters to the controversial validity of unstamped ballots. In this note we report the results of an election forensic analysis of the 2017 referendum to clarify to what extent these voting irregularities were present and whether they were able to influence the outcome of the referendum. We apply novel statistical forensics tests to identify the specific nature of the electoral malpractices. In particular, we test whether the data contain fingerprints of ballot-stuffing (submission of multiple ballots per person during the vote) and voter rigging (coercion and intimidation of voters). Additionally, we perform tests to identify numerical anomalies in the election results. We find systematic and highly significant support for the presence of both ballot-stuffing and voter rigging. In 6% of stations we find signs of ballot-stuffing with an error (probability of ballot-stuffing not happening) of 0.15% (a 3 sigma event). The influence of these vote distortions was large enough to tip the overall balance from 'No' to a majority of 'Yes' votes.
0
1
0
1
0
0
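To give a flavour of the kind of evidence used in the election forensics abstract above, the sketch below builds a synthetic "election fingerprint" (the joint distribution of turnout and 'Yes' share across stations); ballot stuffing pushes mass toward the high-turnout/high-share corner. The data, the stuffing mechanism and the 6% rate are invented for illustration, and none of the paper's statistical tests are reproduced.

```python
# Synthetic election fingerprint: turnout vs. 'Yes' share per polling station.
import numpy as np

rng = np.random.default_rng(2)
n_stations = 50_000
turnout = np.clip(rng.normal(0.65, 0.10, n_stations), 0, 1)
share = np.clip(rng.normal(0.51, 0.12, n_stations), 0, 1)

# Inject stuffing into a small fraction of stations: extra ballots, all 'Yes'.
stuffed = rng.random(n_stations) < 0.06
extra = rng.uniform(0.1, 0.4, stuffed.sum())
share[stuffed] = (share[stuffed] * turnout[stuffed] + extra) / (turnout[stuffed] + extra)
turnout[stuffed] = np.clip(turnout[stuffed] + extra, 0, 1)

hist, _, _ = np.histogram2d(turnout, share, bins=100, range=[[0, 1], [0, 1]])
corner = hist[80:, 80:].sum() / hist.sum()    # mass with turnout > 0.8 and share > 0.8
print(f"stations in the high-turnout/high-share corner: {corner:.3%}")
```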
Efficient Bayesian inference for multivariate factor stochastic volatility models with leverage
This paper discusses the efficient Bayesian estimation of a multivariate factor stochastic volatility (Factor MSV) model with leverage. We propose a novel approach to construct sampling schemes that converge to the posterior distribution of the latent volatilities and the parameters of interest of the Factor MSV model, based on recent advances in Particle Markov chain Monte Carlo (PMCMC). As opposed to the approach of Chib et al. (2006) and Omori et al. (2007), our approach does not require approximating the joint distribution of outcome and volatility innovations by a mixture of bivariate normal distributions. To sample the free elements of the loading matrix we employ the interweaving method used in Kastner et al. (2017) in the Particle Metropolis within Gibbs (PMwG) step. The proposed method is illustrated empirically using a simulated dataset and a sample of daily US stock returns.
0
0
0
1
0
0
Next Steps for the Colorado Risk-Limiting Audit (CORLA) Program
Colorado conducted risk-limiting tabulation audits (RLAs) across the state in 2017, including both ballot-level comparison audits and ballot-polling audits. Those audits only covered contests restricted to a single county; methods to efficiently audit contests that cross county boundaries and combine ballot polling and ballot-level comparisons have not been available. Colorado's current audit software (RLATool) needs to be improved to audit these contests that cross county lines and to audit small contests efficiently. This paper addresses these needs. It presents extremely simple but inefficient methods, more efficient methods that combine ballot polling and ballot-level comparisons using stratified samples, and methods that combine ballot-level comparison and variable-size batch comparison audits in a way that does not require stratified sampling. We conclude with some recommendations, and illustrate our recommended method using examples that compare them to existing approaches. Exemplar open-source code and interactive Jupyter notebooks are provided that implement the methods and allow further exploration.
0
0
0
1
0
0
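For readers unfamiliar with risk-limiting audits, the snippet below sketches a generic single-pair ballot-polling (BRAVO-style) sequential test; it is background illustration only, not the stratified or hybrid comparison methods developed in the paper.

```python
# BRAVO-style ballot-polling test for one (winner, loser) pair.
import numpy as np

def ballot_polling_audit(sample, reported_share, risk_limit=0.05):
    """sample: 1 = ballot for reported winner, 0 = ballot for loser.
    reported_share: reported winner share among winner+loser ballots (> 0.5)."""
    T = 1.0
    for i, b in enumerate(sample, start=1):
        T *= (reported_share / 0.5) if b else ((1 - reported_share) / 0.5)
        if T >= 1 / risk_limit:
            return True, i              # outcome confirmed, audit can stop
    return False, len(sample)           # escalate, e.g. to a full hand count

rng = np.random.default_rng(3)
sample = rng.random(5000) < 0.55        # simulated ballots, true winner share 55%
print(ballot_polling_audit(sample, reported_share=0.55))
```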
HD 202206: A Circumbinary Brown Dwarf System
With Hubble Space Telescope Fine Guidance Sensor astrometry and previously published radial velocity measures we explore the exoplanetary system HD 202206. Our modeling results in a parallax, $\pi_{abs} = 21.96\pm0.12$ milliseconds of arc, a mass for HD 202206 B of $M_B = 0.089^{+0.007}_{-0.006}\,M_\odot$, and a mass for HD 202206 c of $M_c = 17.9^{+2.9}_{-1.8}\,M_{\rm Jup}$. HD 202206 is a nearly face-on G+M binary orbited by a brown dwarf. The system architecture we determine supports past assertions that stability requires a 5:1 mean motion resonance (we find a period ratio, $P_c/P_B = 4.92\pm0.04$) and coplanarity (we find a mutual inclination, $\Phi = 6^\circ \pm 2^\circ$).
0
1
0
0
0
0
On Optimal Group Claims at Voting in a Stochastic Environment
There is a paradox in the model of social dynamics determined by voting in a stochastic environment (the ViSE model) called the "pit of losses." It consists in the fact that a series of democratic decisions may systematically lead society to states that are unacceptable to all voters. The paper examines how this paradox can be neutralized by the presence in society of a group that votes for its benefit and can regulate the threshold of its claims. We obtain and analyze analytical results characterizing the welfare of the whole society, the group, and the other participants as functions of this claims threshold.
1
0
1
0
0
0
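A stripped-down simulation in the spirit of the ViSE model described above is sketched below: egoistic voters accept a random proposal whenever it benefits a simple majority, and the mean capital of society is tracked. The parameters and decision rule are simplified assumptions and do not reproduce the paper's group-claims analysis; they merely illustrate how majority decisions can erode collective welfare in an unfavourable environment.

```python
# Toy voting-in-a-stochastic-environment simulation (ViSE-flavoured sketch).
import numpy as np

rng = np.random.default_rng(4)
n_voters, n_steps = 101, 10_000
mu, sigma = -0.3, 1.0                            # unfavourable environment (negative mean)

capital = np.zeros(n_voters)
for _ in range(n_steps):
    proposal = rng.normal(mu, sigma, n_voters)   # proposed per-voter capital increments
    if (proposal > 0).sum() > n_voters / 2:      # accepted by a simple egoistic majority
        capital += proposal

print("mean capital after voting:", capital.mean())
```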
Exploring many body localization and thermalization using semiclassical method
The Discrete Truncated Wigner Approximation (DTWA) is a semi-classical phase space method useful for the exploration of Many-body quantum dynamics. In this work we investigate Many-Body Localization (MBL) and thermalization using DTWA and compare its performance to exact numerical solutions. By taking as a benchmark case a 1D random field Heisenberg spin chain with short range interactions, and by comparing to numerically exact techniques, we show that DTWA is able to reproduce dynamical signatures that characterize both the thermal and the MBL phases. It exhibits the best quantitative agreement at short times deep in each of the phases and larger mismatches close to the phase transition. The DTWA captures the logarithmic growth of entanglement in the MBL phase, even though a pure classical mean-field analysis would lead to no dynamics at all. Our results suggest the DTWA can become a useful method to investigate MBL and thermalization in experimentally relevant settings intractable with exact numerical techniques, such as systems with long range interactions and/or systems in higher dimensions.
0
1
0
0
0
0
An FPTAS for the Knapsack Problem with Parametric Weights
In this paper, we investigate the parametric weight knapsack problem, in which the item weights are affine functions of the form $w_i(\lambda) = a_i + \lambda \cdot b_i$ for $i \in \{1,\ldots,n\}$ depending on a real-valued parameter $\lambda$. The aim is to provide a solution for all values of the parameter. It is well-known that any exact algorithm for the problem may need to output an exponential number of knapsack solutions. We present the first fully polynomial-time approximation scheme (FPTAS) for the problem that, for any desired precision $\varepsilon \in (0,1)$, computes $(1-\varepsilon)$-approximate solutions for all values of the parameter. Our FPTAS is based on two different approaches and achieves a running time of $\mathcal{O}(n^3/\varepsilon^2 \cdot \min\{ \log^2 P, n^2 \} \cdot \min\{\log M, n \log (n/\varepsilon) / \log(n \log (n/\varepsilon) )\})$ where $P$ is an upper bound on the optimal profit and $M := \max\{W, n \cdot \max\{a_i,b_i: i \in \{1,\ldots,n\}\}\}$ for a knapsack with capacity $W$.
1
0
1
0
0
0
Mellin and Wiener-Hopf operators in a non-classical boundary value problem describing a Lévy process
Markov processes are well understood in the case when they take place in the whole Euclidean space. However, the situation becomes much more complicated if a Markov process is restricted to a domain with a boundary, and then a satisfactory theory only exists for processes with continuous trajectories. This research, into non-classical boundary value problems, is motivated by the study of stochastic processes, restricted to a domain, that can have discontinuous trajectories. To make this general problem more tractable, we consider a particular operator, $\mathcal{A}$, which is chosen to be the generator of a certain stable Lévy process restricted to the positive half-line. We are able to represent $\mathcal{A}$ as a (hyper-) singular integral and, using this representation, deduce simple conditions for its boundedness, between Bessel potential spaces. Moreover, from energy estimates, we prove that, under certain conditions, $\mathcal{A}$ has a trivial kernel. A central feature of this research is our use of Mellin operators to deal with the leading singular terms that combine, and cancel, at the boundary. Indeed, after considerable analysis, the problem is reformulated in the context of an algebra of multiplication, Wiener-Hopf and Mellin operators, acting on a Lebesgue space. The resulting generalised symbol is examined and, it turns out, that a certain transcendental equation, involving gamma and trigonometric functions with complex arguments, plays a pivotal role. Following detailed consideration of this transcendental equation, we are able to determine when our operator is Fredholm and, in that case, calculate its index. Finally, combining information on the kernel with the Fredholm index, we establish precise conditions for the invertibility of $\mathcal{A}$.
0
0
1
0
0
0
Long-term photometric behavior of the eclipsing cataclysmic variable V729 Sgr
We present the results of an analysis of the eclipsing cataclysmic variable (CV) V729 Sgr, based on our observations and AAVSO data. Outburst parameters such as the outburst amplitude ($A_{n}$) and recurrence time ($T_{n}$) were determined, and the relationship between $A_{n}$ and $T_{n}$ is discussed. A cursory examination of the long-term light curves reveals that small-amplitude outbursts and dips are present, similar to the behavior seen in some nova-like CVs (NLs). More detailed inspection suggests that the outbursts in V729 Sgr may be Type A (outside-in) with a rise time $\sim1.76$ d. Further analysis also shows that V729 Sgr is intermediate between dwarf novae and NLs, and we constrain its mass transfer rate to $1.59\times10^{-9} < \dot{M}_{2} < 5.8\times10^{-9}M_{\odot}yr^{-1}$ by combining the theory for Z Cam type stars with observations. Moreover, the rapid oscillations in V729 Sgr were detected and analyzed for the first time. Our results indicate that the oscillation at $\sim 25.5$ s is a true DNO, being associated with the accretion events. The classification of the oscillations at $\sim 136$ and $154$ s as lpDNOs is based on the relation between $P_{lpDNOs}$ and $P_{DNOs}$. Meanwhile, QPOs with periods of hundreds of seconds are also detected.
0
1
0
0
0
0
SPUX: Scalable Particle Markov Chain Monte Carlo for uncertainty quantification in stochastic ecological models
Calibration of individual based models (IBMs), successful in modeling complex ecological dynamical systems, is often performed only ad-hoc. Bayesian inference can be used for both parameter estimation and uncertainty quantification, but its successful application to realistic scenarios has been hindered by the complex stochastic nature of IBMs. Computationally expensive techniques such as Particle Filter (PF) provide marginal likelihood estimates, where multiple model simulations (particles) are required to get a sample from the state distribution conditional on the observed data. Particle ensembles are re-sampled at each data observation time, requiring particle destruction and replication, which lead to an increase in algorithmic complexity. We present SPUX, a Python implementation of parallel Particle Markov Chain Monte Carlo (PMCMC) algorithm, which mitigates high computational costs by distributing particles over multiple computational units. Adaptive load re-balancing techniques are used to mitigate computational work imbalances introduced by re-sampling. Framework performance is investigated and significant speed-ups are observed for a simple predator-prey IBM model.
1
0
0
1
0
0
A Social Network Analysis Framework for Modeling Health Insurance Claims Data
Health insurance companies in Brazil organize their claims data from the perspective of providers only. In this way, they lose the physician view and the information about how physicians share patients. Partnerships between physicians can be viewed as fruitful in most cases, but sometimes they can be a problem for health insurance companies and patients, for example a recommendation to visit another physician only because they work in the same clinic. The focus of this work is to better understand physicians' activities and how these activities are represented in the data. Our approach considers three aspects: the relationships among physicians, the relationships between physicians and patients, and the relationships between physicians and health providers. We present the results of an analysis of a claims database (detailing 18 months of activity) from a large health insurance company in Brazil. The main contribution presented in this paper is a set of models to represent: mutual referral between physicians, patient retention, and physician centrality in the health insurance network. Our results show that the proposed models, based on social network frameworks, extract surprising insights about physicians from real health insurance claims data.
1
0
0
0
0
0
Congruences for Restricted Plane Overpartitions Modulo 4 and 8
In 2009, Corteel, Savelief and Vuletić generalized the concept of overpartitions to a new object called plane overpartitions. In recent work, the author considered a restricted form of plane overpartitions called $k$-rowed plane overpartitions and proved a method to obtain congruences for these and other types of combinatorial generating functions. In this paper, we prove several restricted and unrestricted plane overpartition congruences modulo $4$ and $8$ using other techniques.
0
0
1
0
0
0
AndroVault: Constructing Knowledge Graph from Millions of Android Apps for Automated Analysis
Data-driven research on Android has gained great momentum in recent years. The abundance of data facilitates knowledge learning; however, it also increases the difficulty of data preprocessing. Therefore, it is non-trivial to prepare a demanding and accurate set of data for research. In this work, we put forward AndroVault, a framework for Android research comprising data collection, knowledge representation and knowledge extraction. It is built on a long-running web crawler that has been collecting data (both apps and descriptions) since 2013, which guarantees the timeliness of the data. With static and dynamic analysis of the collected data, we compute a variety of attributes to characterize Android apps. After that, we employ a knowledge graph to connect all these apps by computing their correlations in terms of attributes. Last, we leverage multiple techniques, such as logical inference, machine learning, and correlation analysis, to extract facts (more accurate and demanding data, whether high level or not) that are beneficial for a specific research problem. With the high-quality data produced, we have successfully conducted several research projects, including malware detection, code generation, and Android testing. We would like to release our data to the research community in an authenticated manner, and encourage them to conduct productive research.
1
0
0
0
0
0
Universal and generalizable restoration strategies for degraded ecological networks
Humans are increasingly stressing ecosystems via habitat destruction, climate change and global population movements, leading to the widespread loss of biodiversity and the disruption of key ecological services. Ecosystems characterized primarily by mutualistic relationships between species, such as plant-pollinator interactions, may be particularly vulnerable to such perturbations because the loss of biodiversity can cause extinction cascades that can compromise the entire network. Here, we develop a general restoration strategy for degraded ecosystems based on network science. Specifically, we show that network topology can be used to identify the optimal sequence of species reintroductions needed to maximize biodiversity gains following partial and full ecosystem collapse. This restoration strategy generalizes across topologically-disparate and geographically-distributed ecosystems. Additionally, we find that although higher connectance and diversity promote persistence in pristine ecosystems, these attributes reduce the effectiveness of restoration efforts in degraded networks. Hence, focusing on restoring the factors that promote persistence in pristine ecosystems may yield suboptimal recovery strategies for degraded ecosystems. Overall, our results provide important insights for designing effective ecosystem restoration strategies to preserve biodiversity and ensure the delivery of critical natural services that fuel economic development, food security and human health around the globe.
0
0
0
0
1
0
Information transmission and signal permutation in active flow networks
Recent experiments show that both natural and artificial microswimmers in narrow channel-like geometries will self-organise to form steady, directed flows. This suggests that networks of flowing active matter could function as novel autonomous microfluidic devices. However, little is known about how information propagates through these far-from-equilibrium systems. Through a mathematical analogy with spin-ice vertex models, we investigate here the input-output characteristics of generic incompressible active flow networks (AFNs). Our analysis shows that information transport through an AFN is inherently different from conventional pressure or voltage driven networks. Active flows on hexagonal arrays preserve input information over longer distances than their passive counterparts and are highly sensitive to bulk topological defects, whose presence can be inferred from marginal input-output distributions alone. This sensitivity further allows controlled permutations on parallel inputs, revealing an unexpected link between active matter and group theory that can guide new microfluidic mixing strategies facilitated by active matter and aid the design of generic autonomous information transport networks.
1
1
0
0
0
0
Heuristic Framework for Multi-Scale Testing of the Multi-Manifold Hypothesis
When analyzing empirical data, we often find that global linear models overestimate the number of parameters required. In such cases, we may ask whether the data lies on or near a manifold or a set of manifolds (a so-called multi-manifold) of lower dimension than the ambient space. This question can be phrased as a (multi-) manifold hypothesis. The identification of such intrinsic multiscale features is a cornerstone of data analysis and representation and has given rise to a large body of work on manifold learning. In this work, we review key results on multi-scale data analysis and intrinsic dimension followed by the introduction of a heuristic, multiscale framework for testing the multi-manifold hypothesis. Our method implements a hypothesis test on a set of spline-interpolated manifolds constructed from variance-based intrinsic dimensions. The workflow is suitable for empirical data analysis as we demonstrate on two use cases.
0
0
0
1
0
0
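As a small illustration of the variance-based intrinsic dimensions mentioned in the abstract above, the sketch below estimates a local dimension for each point by running PCA on its nearest neighbours and counting the components needed to explain most of the local variance. The neighbourhood size, the variance threshold and the toy data are assumptions; the paper's spline-interpolated manifolds and hypothesis test are not reproduced.

```python
# Local, variance-based intrinsic dimension estimate via neighbourhood PCA.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def local_intrinsic_dims(X, k=20, var_threshold=0.95):
    _, idx = NearestNeighbors(n_neighbors=k).fit(X).kneighbors(X)
    dims = []
    for neighbourhood in idx:
        ratios = PCA().fit(X[neighbourhood]).explained_variance_ratio_
        dims.append(int(np.searchsorted(np.cumsum(ratios), var_threshold) + 1))
    return np.array(dims)

# Toy data: a noisy 2D plane embedded in 5 dimensions.
rng = np.random.default_rng(7)
X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 5)) + 0.01 * rng.normal(size=(500, 5))
print("median estimated local dimension:", int(np.median(local_intrinsic_dims(X))))
```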
Kitting in the Wild through Online Domain Adaptation
Technological developments call for increasing perception and action capabilities of robots. Among other skills, vision systems that can adapt to any possible change in the working conditions are needed. Since these conditions are unpredictable, we need benchmarks which allow us to assess the generalization and robustness capabilities of our visual recognition algorithms. In this work we focus on robotic kitting in unconstrained scenarios. As a first contribution, we present a new visual dataset for the kitting task. Differently from standard object recognition datasets, we provide images of the same objects acquired under various conditions where camera, illumination and background are changed. This novel dataset allows for testing the robustness of robot visual recognition algorithms to a series of different domain shifts, both in isolation and combined. Our second contribution is a novel online adaptation algorithm for deep models, based on batch-normalization layers, which allows a model to be continuously adapted to the current working conditions. Differently from standard domain adaptation algorithms, it does not require any image from the target domain at training time. We benchmark the performance of the algorithm on the proposed dataset, showing its ability to close the gap between the performance of a standard architecture and that of its counterpart adapted offline to the given target domain.
1
0
0
0
0
0
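The general idea behind the batch-normalization-based online adaptation mentioned in the kitting abstract above can be sketched as follows: keep all weights frozen but let the BN layers update their running statistics on unlabeled batches from the new working conditions. The tiny network and the update schedule are placeholders; this is not the paper's algorithm, only the underlying mechanism.

```python
# Test-time adaptation of batch-norm statistics only (weights stay frozen).
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
net.eval()                                   # freeze everything by default
for m in net.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.train()                            # BN layers keep updating running statistics

with torch.no_grad():                        # no gradient steps, only statistics updates
    for _ in range(5):                       # a few unlabeled batches from the new domain
        net(torch.randn(16, 3, 64, 64))

print(net[1].running_mean[:3])               # statistics have drifted toward the new data
```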
Event Analysis of Pulse-reclosers in Distribution Systems Through Sparse Representation
The pulse-recloser uses pulse testing technology to verify that the line is clear of faults before initiating a reclose operation, which significantly reduces stress on the system components (e.g. substation transformers) and voltage sags on adjacent feeders. Online event analysis of pulse-reclosers is essential to increase the overall utility of the devices, especially when there are numerous devices installed throughout the distribution system. In this paper, field data recorded from several devices were analyzed to identify specific activity and fault locations. An algorithm is developed to screen the data to identify the status of each pole and to tag time windows with a possible pulse event. In the next step, selected time windows are further analyzed and classified using a sparse representation technique by solving an $\ell_1$-regularized least-squares problem. This classification is obtained by comparing the pulse signature with the reference dictionary to find a set that most closely matches the pulse features. This work also sheds additional light on the possibility of fault classification based on the pulse signature. Field data collected from a distribution system are used to verify the effectiveness and reliability of the proposed method.
1
0
0
0
0
0
Projected Power Iteration for Network Alignment
The network alignment problem asks for the best correspondence between two given graphs, so that the largest possible number of edges are matched. This problem arises in many scientific settings (such as the study of protein-protein interactions) and is very closely related to the quadratic assignment problem, which has the graph isomorphism, traveling salesman and minimum bisection problems as particular cases. The graph matching problem is NP-hard in general. However, under some restrictive models for the graphs, algorithms can approximate the alignment efficiently. In that spirit, recent work by Feizi and collaborators introduces EigenAlign, a fast spectral method with convergence guarantees for Erdős-Rényi graphs. In this work we propose the algorithm Projected Power Alignment, which is a projected power iteration version of EigenAlign. We numerically show that it improves the recovery rates of EigenAlign, and we describe the theory that may be used to provide performance guarantees for Projected Power Alignment.
1
0
1
1
0
0
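A schematic projected power iteration for graph matching, in the spirit of the abstract above, is sketched below: alternate a power step on the alignment objective with a projection toward permutation matrices via the Hungarian algorithm. The similarity construction, the soft projection and the toy benchmark are simplifications, not the paper's exact algorithm.

```python
# Schematic projected power iteration for aligning two graphs.
import numpy as np
from scipy.optimize import linear_sum_assignment

def projected_power_alignment(A, B, iters=30):
    n = A.shape[0]
    X = np.ones((n, n)) / n                   # uninformative initial alignment
    for _ in range(iters):
        X = A @ X @ B.T                       # power step on the matching objective
        X /= np.linalg.norm(X)                # keep the iterate bounded
        row, col = linear_sum_assignment(-X)  # best permutation for the current scores
        P = np.zeros((n, n))
        P[row, col] = 1.0
        X = 0.5 * (X + P)                     # soft projection toward a permutation
    row, col = linear_sum_assignment(-X)
    return col                                # col[i] = vertex of B matched to vertex i of A

# Toy check: B is A under a known relabeling of its vertices.
rng = np.random.default_rng(5)
A = (rng.random((20, 20)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T
perm = rng.permutation(20)
B = A[np.ix_(perm, perm)]
est = projected_power_alignment(A, B)
print("correctly matched vertices:", int((perm[est] == np.arange(20)).sum()), "/ 20")
```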
3D mean Projective Shape Difference for Face Differentiation from Multiple Digital Camera Images
We give a nonparametric methodology for hypothesis testing for equality of extrinsic mean objects on a manifold embedded in a numerical space. The results obtained in the general setting are detailed further in the case of 3D projective shapes represented in a space of symmetric matrices via the quadratic Veronese-Whitney (VW) embedding. Large sample and nonparametric bootstrap confidence regions are derived for the common VW-mean of random projective shapes for finite 3D configurations. As an example, the VW MANOVA testing methodology is applied to the multi-sample mean problem for independent projective shapes of $3D$ facial configurations retrieved from digital images, via Agisoft PhotoScan technology.
0
0
0
1
0
0
Video Highlight Prediction Using Audience Chat Reactions
Sports channel video portals offer an exciting domain for research on multimodal, multilingual analysis. We present methods addressing the problem of automatic video highlight prediction based on joint visual features and textual analysis of the real-world audience discourse with complex slang, in both English and traditional Chinese. We present a novel dataset based on League of Legends championships recorded from North American and Taiwanese Twitch.tv channels (will be released for further research), and demonstrate strong results on these using multimodal, character-level CNN-RNN model architectures.
1
0
0
0
0
0
Effects of Interactions on Dynamic Correlations of Hard-Core Bosons at Finite Temperatures
We investigate how dynamic correlations of hard-core bosonic excitation at finite temperature are affected by additional interactions besides the hard-core repulsion which prevents them from occupying the same site. We focus especially on dimerized spin systems, where these additional interactions between the elementary excitations, triplons, lead to the formation of bound states, relevant for the correct description of scattering processes. In order to include these effects quantitatively we extend the previously developed Brückner approach to include also nearest-neighbor (NN) and next-nearest neighbor (NNN) interactions correctly in a low-temperature expansion. This leads to the extension of the scalar Bethe-Salpeter equation to a matrix-valued equation. Exemplarily, we consider the Heisenberg spin ladder to illustrate the significance of the additional interactions on the spectral functions at finite temperature which are proportional to inelastic neutron scattering rates.
0
1
0
0
0
0
The Fréchet distribution: Estimation and Application, an Overview
In this article, we consider the problem of estimating the parameters of the Fréchet distribution from both frequentist and Bayesian points of view. First we briefly describe different frequentist approaches, namely, maximum likelihood, method of moments, percentile estimators, L-moments, ordinary and weighted least squares, maximum product of spacings, maximum goodness-of-fit estimators and compare them with respect to mean relative estimates, mean squared errors and the 95\% coverage probability of the asymptotic confidence intervals using extensive numerical simulations. Next, we consider the Bayesian inference approach using reference priors. The Metropolis-Hasting algorithm is used to draw Markov Chain Monte Carlo samples, and they have in turn been used to compute the Bayes estimates and also to construct the corresponding credible intervals. Five real data sets related to the minimum flow of water on Piracicaba river in Brazil are used to illustrate the applicability of the discussed procedures.
0
0
0
1
0
0
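One of the frequentist estimators discussed in the abstract above, maximum likelihood, can be illustrated directly with SciPy, where the Fréchet law is available as the inverse Weibull distribution. The parameter values and the choice to fix the location at zero are illustrative assumptions.

```python
# Maximum-likelihood fit of a Fréchet distribution (SciPy's invweibull).
from scipy import stats

data = stats.invweibull.rvs(c=2.5, scale=3.0, size=500, random_state=6)  # synthetic sample
shape, loc, scale = stats.invweibull.fit(data, floc=0)                   # MLE, location fixed at 0
print(f"MLE estimates: shape={shape:.3f}, scale={scale:.3f}")
```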
Prediction of Sea Surface Temperature using Long Short-Term Memory
This letter adopts long short-term memory (LSTM) to predict sea surface temperature (SST); to our knowledge, this is the first attempt to use a recurrent neural network to solve the problem of SST prediction and to make daily predictions one week and one month ahead. We formulate the SST prediction problem as a time series regression problem. LSTM is a special kind of recurrent neural network, which introduces a gate mechanism into the vanilla RNN to prevent the vanishing or exploding gradient problem. It has a strong ability to model the temporal relationships in time series data and can handle the long-term dependency problem well. The proposed network architecture is composed of two kinds of layers: an LSTM layer and a fully-connected dense layer. The LSTM layer is utilized to model the time series relationship, while the fully-connected layer maps the output of the LSTM layer to a final prediction. We explore the optimal setting of this architecture through experiments and report the accuracy for the coastal seas of China to confirm the effectiveness of the proposed method. In addition, we also demonstrate its online-updating capability.
1
0
0
0
0
0
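The two-layer architecture described in the abstract above (an LSTM layer followed by a fully-connected layer) can be written down in a few lines; the sketch below uses PyTorch, and the layer sizes, window length and input format are illustrative assumptions rather than the letter's exact configuration.

```python
# Minimal LSTM + fully-connected regressor for one-step-ahead SST prediction.
import torch
import torch.nn as nn

class SSTPredictor(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, 1)

    def forward(self, x):                  # x: (batch, window, 1) past SST values
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])      # map the last hidden state to the prediction

model = SSTPredictor()
window = torch.randn(8, 30, 1)             # a batch of 8 sequences of 30 past daily values
print(model(window).shape)                 # -> torch.Size([8, 1])
```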
An Operational Framework for Specifying Memory Models using Instantaneous Instruction Execution
There has been great progress recently in formally specifying the memory model of microprocessors like ARM and POWER. These specifications are, however, too complicated for reasoning about program behaviors, verifying compilers etc., because they involve microarchitectural details like the reorder buffer (ROB), partial and speculative execution, instruction replay on speculation failure, etc. In this paper we present a new Instantaneous Instruction Execution (I2E) framework which allows us to specify weak memory models in the same style as SC and TSO. Each instruction in I2E is executed instantaneously and in-order such that the state of the processor is always correct. The effect of instruction reordering is captured by the way data is moved between the processors and the memory non-deterministically, using three conceptual devices: invalidation buffers, timestamps and dynamic store buffers. We prove that I2E models capture the behaviors of modern microarchitectures and cache-coherent memory systems accurately, thus eliminating the need to think about microarchitectural details.
1
0
0
0
0
0
Efficient Algorithms for Moral Lineage Tracing
Lineage tracing, the joint segmentation and tracking of living cells as they move and divide in a sequence of light microscopy images, is a challenging task. Jug et al. have proposed a mathematical abstraction of this task, the moral lineage tracing problem (MLTP), whose feasible solutions define both a segmentation of every image and a lineage forest of cells. Their branch-and-cut algorithm, however, is prone to many cuts and slow convergence for large instances. To address this problem, we make three contributions: (i) we devise the first efficient primal feasible local search algorithms for the MLTP, (ii) we improve the branch-and-cut algorithm by separating tighter cutting planes and by incorporating our primal algorithms, (iii) we show in experiments that our algorithms find accurate solutions on the problem instances of Jug et al. and scale to larger instances, leveraging moral lineage tracing to practical significance.
1
0
0
0
0
0
Interpolating between $k$-Median and $k$-Center: Approximation Algorithms for Ordered $k$-Median
We consider a generalization of $k$-median and $k$-center, called the {\em ordered $k$-median} problem. In this problem, we are given a metric space $(\mathcal{D},\{c_{ij}\})$ with $n=|\mathcal{D}|$ points, and a non-increasing weight vector $w\in\mathbb{R}_+^n$, and the goal is to open $k$ centers and assign each point $j\in\mathcal{D}$ to a center so as to minimize $w_1\cdot\text{(largest assignment cost)}+w_2\cdot\text{(second-largest assignment cost)}+\ldots+w_n\cdot\text{($n$-th largest assignment cost)}$. We give an $(18+\epsilon)$-approximation algorithm for this problem. Our algorithms utilize Lagrangian relaxation and the primal-dual schema, combined with an enumeration procedure of Aouad and Segev. For the special case of $\{0,1\}$-weights, which models the problem of minimizing the $\ell$ largest assignment costs and is interesting in and of itself, we provide a novel reduction to the (standard) $k$-median problem showing that LP-relative guarantees for $k$-median translate to guarantees for the ordered $k$-median problem; this yields a nice and clean $(8.5+\epsilon)$-approximation algorithm for $\{0,1\}$ weights.
1
0
0
0
0
0
Statistical Challenges in Modeling Big Brain Signals
Brain signal data are inherently big: massive in amount, complex in structure, and high in dimensions. These characteristics impose great challenges for statistical inference and learning. Here we review several key challenges, discuss possible solutions, and highlight future research directions.
0
0
0
1
0
0
Injective and Automorphism-Invariant Non-Singular Modules
Every automorphism-invariant right non-singular $A$-module is injective if and only if the factor ring of the ring $A$ with respect to its right Goldie radical is a right strongly semiprime ring.
0
0
1
0
0
0