Dataset schema (per record):
title: string, length 7 to 239 characters
abstract: string, length 7 to 2.76k characters
cs: int64, values 0 to 1
phy: int64, values 0 to 1
math: int64, values 0 to 1
stat: int64, values 0 to 1
quantitative biology: int64, values 0 to 1
quantitative finance: int64, values 0 to 1
End-to-end semantic face segmentation with conditional random fields as convolutional, recurrent and adversarial networks
Recent years have seen a sharp increase in the number of related yet distinct advances in semantic segmentation. Here, we tackle this problem by leveraging the respective strengths of these advances. That is, we formulate a conditional random field over a four-connected graph as end-to-end trainable convolutional and recurrent networks, and estimate them via an adversarial process. Importantly, our model learns not only unary potentials but also pairwise potentials, while aggregating multi-scale contexts and controlling higher-order inconsistencies. We evaluate our model on two standard benchmark datasets for semantic face segmentation, achieving state-of-the-art results on both of them.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Bayesian Coreset Construction via Greedy Iterative Geodesic Ascent
Coherent uncertainty quantification is a key strength of Bayesian methods. But modern algorithms for approximate Bayesian posterior inference often sacrifice accurate posterior uncertainty estimation in the pursuit of scalability. This work shows that previous Bayesian coreset construction algorithms---which build a small, weighted subset of the data that approximates the full dataset---are no exception. We demonstrate that these algorithms scale the coreset log-likelihood suboptimally, resulting in underestimated posterior uncertainty. To address this shortcoming, we develop greedy iterative geodesic ascent (GIGA), a novel algorithm for Bayesian coreset construction that scales the coreset log-likelihood optimally. GIGA provides geometric decay in posterior approximation error as a function of coreset size, and maintains the fast running time of its predecessors. The paper concludes with validation of GIGA on both synthetic and real datasets, demonstrating that it reduces posterior approximation error by orders of magnitude compared with previous coreset constructions.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Practical Bayesian optimization in the presence of outliers
Inference in the presence of outliers is an important field of research as outliers are ubiquitous and may arise across a variety of problems and domains. Bayesian optimization is a method that heavily relies on probabilistic inference. This allows outstanding sample efficiency because the probabilistic machinery provides a memory of the whole optimization process. However, that virtue becomes a disadvantage when the memory is populated with outliers, inducing bias in the estimation. In this paper, we present an empirical evaluation of Bayesian optimization methods in the presence of outliers. The empirical evidence shows that Bayesian optimization with robust regression often produces suboptimal results. We then propose a new algorithm which combines robust regression (a Gaussian process with Student-t likelihood) with outlier diagnostics to classify data points as outliers or inliers. By using a scheduler for the classification of outliers, our method is more efficient and has better convergence than the standard robust regression. Furthermore, we show that even in controlled situations with no expected outliers, our method is able to produce better results.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Is there any polynomial upper bound for the universal labeling of graphs?
A {\it universal labeling} of a graph $G$ is a labeling of the edge set of $G$ such that in every orientation $\ell$ of $G$, for every two adjacent vertices $v$ and $u$, the sums of the incoming edges of $v$ and $u$ in the oriented graph are different from each other. The {\it universal labeling number} of a graph $G$, denoted by $\overrightarrow{\chi_{u}}(G)$, is the minimum number $k$ such that $G$ has a {\it universal labeling} from $\{1,2,\ldots, k\}$. We have $2\Delta(G)-2 \leq \overrightarrow{\chi_{u}} (G)\leq 2^{\Delta(G)}$, where $\Delta(G)$ denotes the maximum degree of $G$. In this work, we pose a provocative question: "Is there any polynomial function $f$ such that for every graph $G$, $\overrightarrow{\chi_{u}} (G)\leq f(\Delta(G))$?". Towards this question, we introduce some lower and upper bounds on this parameter. Also, we prove that for every tree $T$, $\overrightarrow{\chi_{u}}(T)=\mathcal{O}(\Delta^3)$. Next, we show that for a given 3-regular graph $G$, the universal labeling number of $G$ is 4 if and only if $G$ belongs to Class 1. Therefore, for a given 3-regular graph $G$, it is $\mathbf{NP}$-complete to determine whether the universal labeling number of $G$ is 4. Finally, using probabilistic methods, we almost confirm a weaker version of the problem.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
An Empirical Study on Team Formation in Online Games
Online games provide a rich recording of interactions that can contribute to our understanding of human behavior. One potential lesson is to understand what motivates people to choose their teammates and how their choices lead to performance. We examine several hypotheses about team formation using a large, longitudinal dataset from a team-based online gaming environment. Specifically, we test how positive familiarity, homophily, and competence determine team formation in Battlefield 4, a popular team-based game in which players choose one of two competing teams to play on. Our dataset covers over two months of in-game interactions between over 380,000 players. We show that familiarity is an important factor in team formation, while homophily is not. Competence affects team formation in more nuanced ways: players with similarly high competence team-up repeatedly, but large variations in competence discourage repeated interactions.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Nematic superconductivity in Cu$_{x}$Bi$_{2}$Se$_{3}$: The surface Andreev bound states
We study theoretically the topological surface states (TSSs) and the possible surface Andreev bound states (SABSs) of Cu$_{x}$Bi$_{2}$Se$_{3}$ which is known to be a topological insulator at $x=0$. The superconductivity (SC) pairing of this compound is assumed to have the broken spin-rotation symmetry, similar to that of the A-phase of $^{3}$He as suggested by recent nuclear-magnetic resonance experiments. For both spheroidal and corrugated cylindrical Fermi surfaces with the hexagonal warping terms, we show that the bulk SC gap is rather anisotropic; the minimum of the gap is negligibly small as comparing to the maximum of the gap. This would make the fully-gapped pairing effectively nodal. For a clean system, our results indicate the bulk of this compound to be a topological superconductor with the SABSs appearing inside the bulk SC gap. The zero-energy SABSs which are Majorana fermions, together with the TSSs not gapped by the pairing, produce a zero-energy peak in the surface density of states (SDOS). The SABSs are expected to be stable against short-range nonmagnetic impurities, and the local SDOS is calculated around a nonmagnetic impurity. The relevance of our results to experiments is discussed.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
The structure of a minimal $n$-chart with two crossings II: Neighbourhoods of $\Gamma_1\cup\Gamma_{n-1}$
Given a 2-crossing minimal chart $\Gamma$, a minimal chart with two crossings, set $\alpha=\min\{~i~|~$there exists an edge of label $i$ containing a white vertex$\}$, and $\beta=\max\{~i~|~$there exists an edge of label $i$ containing a white vertex$\}$. In this paper we study the structure of a neighbourhood of $\Gamma_\alpha\cup\Gamma_\beta$, and propose a normal form for 2-crossing minimal $n$-charts, here $\Gamma_\alpha$ and $\Gamma_\beta$ mean the union of all the edges of label $\alpha$ and $\beta$ respectively.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Generating Shared Latent Variables for Robots to Imitate Human Movements and Understand their Physical Limitations
Assistive robotics and particularly robot coaches may be very helpful for rehabilitation healthcare. In this context, we propose a method based on the Gaussian Process Latent Variable Model (GP-LVM) to transfer knowledge between a physiotherapist, a robot coach and a patient. Our model is able to map visual human body features to robot data in order to facilitate the robot learning and imitation. In addition, we propose to extend the model to adapt robots' understanding to the patient's physical limitations during the assessment of rehabilitation exercises. Experimental evaluation demonstrates promising results for both robot imitation and model adaptation according to the patients' limitations.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Superconvergence analysis of linear FEM based on the polynomial preserving recovery and Richardson extrapolation for Helmholtz equation with high wave number
We study superconvergence property of the linear finite element method with the polynomial preserving recovery (PPR) and Richardson extrapolation for the two dimensional Helmholtz equation. The $H^1$-error estimate with explicit dependence on the wave number $k$ {is} derived. First, we prove that under the assumption $k(kh)^2\leq C_0$ ($h$ is the mesh size) and certain mesh condition, the estimate between the finite element solution and the linear interpolation of the exact solution is superconvergent under the $H^1$-seminorm, although the pollution error still exists. Second, we prove a similar result for the recovered gradient by PPR and found that the PPR can only improve the interpolation error and has no effect on the pollution error. Furthermore, we estimate the error between the finite element gradient and recovered gradient and discovered that the pollution error is canceled between these two quantities. Finally, we apply the Richardson extrapolation to recovered gradient and demonstrate numerically that PPR combined with the Richardson extrapolation can reduce the interpolation and pollution errors simultaneously, and therefore, leads to an asymptotically exact {\it a posteriori} error estimator. All theoretical findings are verified by numerical tests.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Semi-blind source separation with multichannel variational autoencoder
This paper proposes a multichannel source separation technique called the multichannel variational autoencoder (MVAE) method, which uses a conditional VAE (CVAE) to model and estimate the power spectrograms of the sources in a mixture. By training the CVAE using the spectrograms of training examples with source-class labels, we can use the trained decoder distribution as a universal generative model capable of generating spectrograms conditioned on a specified class label. By treating the latent space variables and the class label as the unknown parameters of this generative model, we can develop a convergence-guaranteed semi-blind source separation algorithm that consists of iteratively estimating the power spectrograms of the underlying sources as well as the separation matrices. In experimental evaluations, our MVAE produced better separation performance than a baseline method.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Quantitative characterization of pore structure of several biochars with 3D imaging
Pore space characteristics of biochars may vary depending on the raw material used and the processing technology. Pore structure has significant effects on the water retention properties of biochar amended soils. In this work, several biochars were characterized with three-dimensional imaging and image analysis. X-ray computed microtomography was used to image biochars at a resolution of 1.14 $\mu$m and the obtained images were analysed for porosity, pore-size distribution, specific surface area and structural anisotropy. In addition, random walk simulations were used to relate structural anisotropy to diffusive transport. Image analysis showed that a considerable part of the biochar volume consists of pores in the size range relevant to hydrological processes and storage of plant available water. Porosity and pore-size distribution were found to depend on the biochar type, and the structural anisotropy analysis showed that the raw material used considerably affects the pore characteristics at the micrometre scale. Therefore, attention should be paid to raw material selection and quality in applications requiring optimized pore structure.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
First-principles-based method for electron localization: Application to monolayer hexagonal boron nitride
We present a first-principles-based many-body typical medium dynamical cluster approximation method for characterizing electron localization in disordered structures. This method, applied to monolayer hexagonal boron nitride, shows that the presence of boron vacancies could turn this wide-gap insulator into a correlated metal. Depending on the strength of the electron interactions, these calculations suggest that conduction could be obtained at a boron vacancy concentration as low as $1.0\%$. We also explore the distribution of the local density of states, a fingerprint of spatial variations, which allows localized and delocalized states to be distinguished. The presented method enables the study of disorder-driven insulator-metal transitions not only in $h$-BN but also in other physical materials.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Compressed H$_3$S: inter-sublattice Coulomb coupling in a high-$\textit{T}$$_C$ superconductor
Upon thermal annealing at or above room temperature (RT) and high pressure $\it P$ $\sim$ 155 GPa, H$_3$S exhibits superconductivity at $\it T_C$ $\sim$ 200 K. Various theoretical frameworks with strong electron-phonon coupling and Coulomb repulsion have reproduced this record-level $\it T_C$. Of particular relevance is that observed H-D isotopic correlations among $\it T_C$, $\it P$, and annealed order indicate limitations on the H-D isotope effect, leaving open for consideration unconventional high-$\it T_C$ superconductivity with electronic-based enhancements. The present work examines Coulombic pairing arising from interactions between neighboring S and H species on separate interlaced sublattices constituting H$_3$S in the Im$\overline{3}$m structure. The optimal transition temperature is calculated from $\it{T}$$_{C0}$ = $\it{k}$$_B$$^{-1}$$\Lambda$$\it{e}$$^2$/$\ell$$\zeta$, with $\Lambda$ = 0.007465 $\AA$, inter-sublattice S-H separation spacing $\zeta$ = $\it{a}$$_0$/$\sqrt{2}$, interaction charge linear spacing $\ell$ = $\it{a}$$_0$(3/$\sigma$)$^{1/2}$, average participating charge fraction $\sigma$ = 3.43 $\pm$ 0.10 estimated from theory, and lattice parameter $\it{a}$$_0$ = 3.0823 \AA. The result $\it{T}$$_{C0}$ = 198.5 $\pm$ 3.0 K is in excellent agreement with transition temperatures determined from resistivity and susceptibility data. Analysis of mid-infrared reflectivity confirms correlation between boson energy and $\zeta$$^{-1}$. Suppression of $\it T_C$ with increasing residual resistance for $<$ RT annealing is treated by scattering-induced pair breaking. Correspondence with layered high-$\it T_C$ superconductor structures are discussed. A model considering Compton scattering of virtual photons of energies $\leq$ $\it e$$^2$/$\zeta$ by inter-sublattice electrons is introduced, illustrating $\Lambda$ is proportional to the reduced electron Compton wavelength.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A physiology-based parametric imaging method for FDG-PET data
Parametric imaging is a compartmental approach that processes nuclear imaging data to estimate the spatial distribution of the kinetic parameters governing tracer flow. The present paper proposes a novel and efficient computational method for parametric imaging which is potentially applicable to several compartmental models of diverse complexity and which is effective in the determination of the parametric maps of all kinetic coefficients. We consider applications to [{18}F]-fluorodeoxyglucose Positron Emission Tomography (FDG-PET) data and analyze the two-compartment catenary model describing the standard FDG metabolization by an homogeneous tissue and the three-compartment non-catenary model representing the renal physiology. We show uniqueness theorems for both models. The proposed imaging method starts from the reconstructed FDG-PET images of tracer concentration and preliminarily applies image processing algorithms for noise reduction and image segmentation. The optimization procedure solves pixelwise the non-linear inverse problem of determining the kinetic parameters from dynamic concentration data through a regularized Gauss-Newton iterative algorithm. The reliability of the method is validated against synthetic data, for the two-compartment system, and experimental real data of murine models, for the renal three-compartment system.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Item Recommendation with Evolving User Preferences and Experience
Current recommender systems exploit user and item similarities by collaborative filtering. Some advanced methods also consider the temporal evolution of item ratings as a global background process. However, all prior methods disregard the individual evolution of a user's experience level and how this is expressed in the user's writing in a review community. In this paper, we model the joint evolution of user experience, interest in specific item facets, writing style, and rating behavior. This way we can generate individual recommendations that take into account the user's maturity level (e.g., recommending art movies rather than blockbusters for a cinematography expert). As only item ratings and review texts are observables, we capture the user's experience and interests in a latent model learned from her reviews, vocabulary and writing style. We develop a generative HMM-LDA model to trace user evolution, where the Hidden Markov Model (HMM) traces her latent experience progressing over time -- with solely user reviews and ratings as observables over time. The facets of a user's interest are drawn from a Latent Dirichlet Allocation (LDA) model derived from her reviews, as a function of her (again latent) experience level. In experiments with five real-world datasets, we show that our model improves the rating prediction over state-of-the-art baselines, by a substantial margin. We also show, in a use-case study, that our model performs well in the assessment of user experience levels.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
On some integrals of Hardy
We consider some properties of integrals considered by Hardy and Koshliakov, and which have also been further extended recently by Dixit. We establish a new general integral formula from some observations about the digamma function. We also obtain lower and upper bounds for Hardy's integral through properties of the digamma function.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Complexity of short generating functions
We give complexity analysis of the class of short generating functions (GF). Assuming $\#P \not\subseteq FP/poly$, we show that this class is not closed under taking many intersections, unions or projections of GFs, in the sense that these operations can increase the bitlength of coefficients of GFs by a super-polynomial factor. We also prove that truncated theta functions are hard in this class.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Finite-Size Effects in Non-Neutral Two-Dimensional Coulomb Fluids
Thermodynamic potential of a neutral two-dimensional (2D) Coulomb fluid, confined to a large domain with a smooth boundary, exhibits at any (inverse) temperature $\beta$ a logarithmic finite-size correction term whose universal prefactor depends only on the Euler number of the domain and the conformal anomaly number $c=-1$. A minimal free boson conformal field theory, which is equivalent to the 2D symmetric two-component plasma of elementary $\pm e$ charges at coupling constant $\Gamma=\beta e^2$, was studied in the past. It was shown that creating a non-neutrality by spreading out a charge $Q e$ at infinity modifies the anomaly number to $c(Q,\Gamma) = - 1 + 3\Gamma Q^2$. Here, we study the effect of non-neutrality on the finite-size expansion of the free energy for another Coulomb fluid, namely the 2D one-component plasma (jellium) composed of identical pointlike $e$-charges in a homogeneous background surface charge density. For the disk geometry of the confining domain we find that the non-neutrality induces the same change of the anomaly number in the finite-size expansion. We derive this result first at the free-fermion coupling $\Gamma\equiv\beta e^2=2$ and then, by using a mapping of the 2D one-component plasma onto an anticommuting field theory formulated on a chain, for an arbitrary coupling constant.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Type Safe Redis Queries: A Case Study of Type-Level Programming in Haskell
Redis is an in-memory data structure store, often used as a database, with a Haskell interface Hedis. Redis is dynamically typed --- a key can be discarded and re-associated to a value of a different type, and a command, when fetching a value of a type it does not expect, signals a runtime error. We develop a domain-specific language that, by exploiting Haskell type-level programming techniques including indexed monad, type-level literals and closed type families, keeps track of types of values in the database and statically guarantees that type errors cannot happen for a class of Redis programs.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Ground state solutions for a nonlinear Choquard equation
We discuss the existence of ground state solutions for the Choquard equation $$-\Delta u=(I_\alpha*F(u))F'(u)\quad\quad\quad\text{in }\mathbb R^N.$$ We prove the existence of solutions under general hypotheses, investigating in particular the case of a homogeneous nonlinearity $F(u)=\frac{|u|^p}p$. The cases $N=2$ and $N\ge3$ are treated differently in some steps. The solutions are found through a variational mountain pass strategy. The results presented are contained in the papers with arXiv IDs 1212.2027 and 1604.03294.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Ultrashort dark solitons interactions and nonlinear tunneling in the modified nonlinear Schrödinger equation with variable coefficients
We present a study of the dark soliton dynamics in an inhomogeneous fiber by means of a variable coefficient modified nonlinear Schrödinger equation (Vc-MNLSE) with distributed dispersion, self-phase modulation, self-steepening and linear gain/loss. The ultrashort dark soliton pulse evolution and interaction is studied by using the Hirota bilinear (HB) method. In particular, we give much insight into the effect of self-steepening (SS) on the dark soliton dynamics. The study reveals a shock wave formation, as a major effect of SS. Numerically, we study the dark soliton propagation in the continuous wave background, and the stability of the soliton solution is tested in the presence of photon noise. The elastic collision behaviors of the dark solitons are discussed by the asymptotic analysis. On the other hand, considering the nonlinear tunneling of dark soliton through barrier/well, we find that the tunneling of the dark soliton depends on the height of the barrier and the amplitude of the soliton. The intensity of the tunneling soliton either forms a peak or valley and retains its shape after the tunneling. For the case of exponential background, the soliton tends to compress after tunneling through the barrier/well.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Long-range p-d exchange interaction in a ferromagnet-semiconductor Co/CdMgTe/CdTe quantum well hybrid structure
The exchange interaction between magnetic ions and charge carriers in semiconductors is considered as prime tool for spin control. Here, we solve a long-standing problem by uniquely determining the magnitude of the long-range $p-d$ exchange interaction in a ferromagnet-semiconductor (FM-SC) hybrid structure where a 10~nm thick CdTe quantum well is separated from the FM Co layer by a CdMgTe barrier with a thickness on the order of 10~nm. The exchange interaction is manifested by the spin splitting of acceptor bound holes in the effective magnetic field induced by the FM. The exchange splitting is directly evaluated using spin-flip Raman scattering by analyzing the dependence of the Stokes shift $\Delta_S$ on the external magnetic field $B$. We show that in strong magnetic field $\Delta_S$ is a linear function of $B$ with an offset of $\Delta_{pd} = 50-100~\mu$eV at zero field from the FM induced effective exchange field. On the other hand, the $s-d$ exchange interaction between conduction band electrons and FM, as well as the $p-d$ contribution for free valence band holes, are negligible. The results are well described by the model of indirect exchange interaction between acceptor bound holes in the CdTe quantum well and the FM layer mediated by elliptically polarized phonons in the hybrid structure.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Parsec-scale Faraday rotation and polarization of 20 active galactic nuclei jets
We perform polarimetry analysis of 20 active galactic nuclei (AGN) jets using the Very Long Baseline Array (VLBA) at 1.4, 1.6, 2.2, 2.4, 4.6, 5.0, 8.1, 8.4, and 15.4 GHz. The study allowed us to investigate linearly polarized properties of the jets at parsec-scales: distribution of the Faraday rotation measure (RM) and fractional polarization along the jets, Faraday effects and structure of Faraday-corrected polarization images. Wavelength-dependence of the fractional polarization and polarization angle is consistent with external Faraday rotation, while some sources show internal rotation. The RM changes along the jets, systematically increasing its value towards synchrotron self-absorbed cores at shorter wavelengths. The highest core RM reaches 16,900 rad/m^2 in the source rest frame for the quasar 0952+179, suggesting the presence of highly magnetized, dense media in these regions. The typical RM of transparent jet regions has values of an order of a hundred rad/m^2. Significant transverse rotation measure gradients are observed in seven sources. The magnetic field in the Faraday screen has no preferred orientation, and is observed to be random or regular from source to source. Half of the sources show evidence for the helical magnetic fields in their rotating magnetoionic media. At the same time jets themselves contain large-scale, ordered magnetic fields and tend to align its direction with the jet flow. The observed variety of polarized signatures can be explained by a model of spine-sheath jet structure.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Positive scalar curvature and connected sums
Let $N$ be a closed enlargeable manifold in the sense of Gromov-Lawson and $M$ a closed spin manifold of equal dimension. A famous theorem of Gromov-Lawson states that the connected sum $M\# N$ admits no metric of positive scalar curvature. We present a potential generalization of this result to the case where $M$ is nonspin. We use index theory for Dirac operators to prove our result.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Optical computing by injection-locked lasers
A programmable optical computer has remained an elusive concept. To construct a practical computing primitive equivalent to an electronic Boolean logic, one should find a nonlinear phenomenon that overcomes weaknesses present in many optical processing schemes. Ideally, the nonlinearity should provide a functionally complete set of logic operations, enable ultrafast all-optical programmability, and allow cascaded operations without a change in the operating wavelength or in the signal encoding format. Here we demonstrate a programmable logic gate using an injection-locked Vertical-Cavity Surface-Emitting Laser (VCSEL). The gate program is switched between the AND and the OR operations at the rate of 1 GHz with Bit Error Ratio (BER) of 10e-6 without changes in the wavelength or in the signal encoding format. The scheme is based on nonlinearity of normalization operations, which can be used to construct any continuous complex function or operation, Boolean or otherwise.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
On the joint asymptotic distribution of the restricted estimators in multivariate regression model
The main Theorem of Jain et al. [Jain, K., Singh, S., and Sharma, S. (2011), Restricted estimation in multivariate measurement error regression model; JMVA, 102, 2, 264-280] is established in its full generality. Namely, we derive the joint asymptotic normality of the unrestricted estimator (UE) and the restricted estimators of the matrix of the regression coefficients. The derived result holds under the hypothesized restriction as well as under the sequence of alternative restrictions. In addition, we establish Asymptotic Distributional Risk for the estimators and compare their relative performance. It is established that near the restriction, the restricted estimators (REs) perform better than the UE. But the REs perform worse than the unrestricted estimator when one moves far away from the restriction.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Rotational spectroscopy, tentative interstellar detection, and chemical modelling of N-methylformamide
N-methylformamide, CH3NHCHO, may be an important molecule for interstellar pre-biotic chemistry because it contains a peptide bond. The rotational spectrum of the most stable trans conformer of CH3NHCHO is complicated by strong torsion-rotation interaction due to the low barrier of the methyl torsion. We use two absorption spectrometers in Kharkiv and Lille to measure the rotational spectra over 45--630 GHz. The analysis is carried out using the Rho-axis method and the RAM36 code. We search for N-methylformamide toward the hot molecular core Sgr B2(N2) using a spectral line survey carried out with ALMA. The astronomical results are put into a broader astrochemical context with the help of a gas-grain chemical kinetics model. The laboratory data set for the trans conformer of CH3NHCHO consists of 9469 line frequencies with J <= 62, including the first assignment of the rotational spectra of the first and second excited torsional states. All these lines are fitted within experimental accuracy. We report the tentative detection of CH3NHCHO towards Sgr B2(N2). We find CH3NHCHO to be more than one order of magnitude less abundant than NH2CHO, a factor of two less abundant than CH3NCO, but only slightly less abundant than CH3CONH2. The chemical models indicate that the efficient formation of HNCO via NH + CO on grains is a necessary step in the achievement of the observed gas-phase abundance of CH3NCO. Production of CH3NHCHO may plausibly occur on grains either through the direct addition of functional-group radicals or through the hydrogenation of CH3NCO. Provided the detection of CH3NHCHO is confirmed, the only slight underabundance of this molecule compared to its more stable structural isomer acetamide and the sensitivity of the model abundances to the chemical kinetics parameters suggest that the formation of these two molecules is controlled by kinetics rather than thermal equilibrium.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Dynamic anisotropy in MHD turbulence induced by mean magnetic field
In this paper, we study the development of anisotropy in strong MHD turbulence in the presence of a large scale magnetic field B0 by analyzing the results of direct numerical simulations. Our results show that the developed anisotropy among the different components of the velocity and magnetic field is a direct outcome of the inverse cascade of energy of the perpendicular velocity components u⊥ and a forward cascade of the energy of the parallel component u∥. The inverse cascade develops for a strong B0, where the flow exhibits a strong vortical structure by the suppression of fluctuations along the magnetic field. Both the inverse and the forward cascade are examined in detail by investigating the anisotropic energy spectra, the energy fluxes, and the shell-to-shell energy transfers among different scales.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Dealing with the exponential wall in electronic structure calculations
An alternative to Density Functional Theory are wavefunction based electronic structure calculations for solids. In order to perform them the Exponential Wall (EW) problem has to be resolved. It is caused by an exponential increase of the number of configurations with increasing electron number N. There are different routes one may follow. One is to characterize a many-electron wavefunction by a vector in Liouville space with a cumulant metric rather than in Hilbert space. This removes the EW problem. Another is to model the solid by an {\it impurity} or {\it fragment} embedded in a {\it bath} which is treated at a much lower level than the former. This is the case in Density Matrix Embedding Theory (DMET) or Density Embedding Theory (DET). The latter are closely related to a Schmidt decomposition of a system and to the determination of the associated entanglement. We show here the connection between the two approaches. It turns out that the DMET (or DET) has an identical active space as a previously used Local Ansatz, based on a projection and partitioning approach. Yet, the EW problem is resolved differently in the two cases. By studying a $H_{10}$ ring these differences are analyzed with the help of the method of increments.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Influence maximization on correlated networks through community identification
The identification of the minimal set of nodes that maximizes the propagation of information is one of the most important problems in network science. In this paper, we introduce a new method to find the set of initial spreaders to maximize the information propagation in complex networks. We evaluate this method in assortative networks and verify that degree-degree correlation plays a fundamental role on the spreading dynamics. Simulation results show that our algorithm is statistically similar, in terms of the average size of outbreaks, to the greedy approach. However, our method is much less time consuming than the greedy algorithm.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Conformal metrics with prescribed fractional scalar curvature on conformal infinities with positive fractional Yamabe constants
Let $(X, g^+)$ be an asymptotically hyperbolic manifold and $(M, [\hat{h}])$ its conformal infinity. Our primary aim in this paper is to introduce the prescribed fractional scalar curvature problem on $M$ and provide solutions under various geometric conditions on $X$ and $M$. We also obtain the existence results for the fractional Yamabe problem in the endpoint case, e.g., $n = 3$, $\gamma = 1/2$ and $M$ is non-umbilic, etc. Every solution we find turns out to be smooth on $M$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A Short Review of Ethical Challenges in Clinical Natural Language Processing
Clinical NLP has an immense potential in contributing to how clinical practice will be revolutionized by the advent of large scale processing of clinical records. However, this potential has remained largely untapped due to slow progress primarily caused by strict data access policies for researchers. In this paper, we discuss the concern for privacy and the measures it entails. We also suggest sources of less sensitive data. Finally, we draw attention to biases that can compromise the validity of empirical research and lead to socially harmful applications.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
kd-switch: A Universal Online Predictor with an application to Sequential Two-Sample Testing
We propose a novel online predictor for discrete labels conditioned on multivariate features in $\mathbb{R}^d$. The predictor is pointwise universal: it achieves a normalized log loss performance asymptotically as good as the true conditional entropy of the labels given the features. The predictor is based on a feature space discretization induced by a full-fledged k-d tree with randomly picked directions and a switch distribution, requiring no hyperparameter setting and automatically selecting the most relevant scales in the feature space. Using recent results, a consistent sequential two-sample test is built from this predictor. In terms of discrimination power, on selected challenging datasets, it is comparable to or better than state-of-the-art non-sequential two-sample tests based on the train-test paradigm and, a recent sequential test requiring hyperparameters. The time complexity to process the $n$-th sample point is $O(\log n)$ in probability (with respect to the distribution generating the data points), in contrast to the linear complexity of the previous sequential approach.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A Heat Equation on some Adic Completions of Q and Ultrametric Analysis
This article deals with a Markov process related to the fundamental solution of a heat equation on the direct product ring Q_S, where Q_S is a finite direct product of p-adic fields. The techniques developed here are different from the well known ones: they are geometrical and very simple. As a result, the techniques developed here provide a general framework for these problems on other related ultrametric groups.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
A simple alteration of the peridynamics correspondence principle to eliminate zero-energy deformation
We look for an enhancement of the correspondence model of peridynamics with a view to eliminating the zero-energy deformation modes. Since the non-local integral definition of the deformation gradient underlies the problem, we initially look for a remedy by introducing a class of localizing corrections to the integral. Since the strategy is found to afford only a reduction, and not complete elimination, of the oscillatory zero-energy deformation, we propose in the sequel an alternative approach based on the notion of sub-horizons. A most useful feature of the last proposal is that the setup, whilst providing the solution with the necessary stability, deviates only marginally from the original correspondence formulation. We also undertake a set of numerical simulations that attest to the remarkable efficacy of the sub-horizon based methodology.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Interpretable Active Learning
Active learning has long been a topic of study in machine learning. However, as increasingly complex and opaque models have become standard practice, the process of active learning, too, has become more opaque. There has been little investigation into interpreting what specific trends and patterns an active learning strategy may be exploring. This work expands on the Local Interpretable Model-agnostic Explanations framework (LIME) to provide explanations for active learning recommendations. We demonstrate how LIME can be used to generate locally faithful explanations for an active learning strategy, and how these explanations can be used to understand how different models and datasets explore a problem space over time. In order to quantify the per-subgroup differences in how an active learning strategy queries spatial regions, we introduce a notion of uncertainty bias (based on disparate impact) to measure the discrepancy in the confidence for a model's predictions between one subgroup and another. Using the uncertainty bias measure, we show that our query explanations accurately reflect the subgroup focus of the active learning queries, allowing for an interpretable explanation of what is being learned as points with similar sources of uncertainty have their uncertainty bias resolved. We demonstrate that this technique can be applied to track uncertainty bias over user-defined clusters or automatically generated clusters based on the source of uncertainty.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Orbital-Free Density-Functional Theory Simulations of Displacement Cascade in Aluminum
Here, we report orbital-free density-functional theory (OF DFT) molecular dynamics simulations of the displacement cascade in aluminum. The electronic effect is our main concern. The displacement threshold energies are calculated using OF DFT and classical molecular dynamics (MD) and the comparison reveals the role of charge bridge. Compared to MD simulation, the displacement spike from OF DFT has a lower peak and shorter duration time, which is attributed to the effect of electronic damping. The charge density profiles clearly display the existence of depleted zones, vacancy and interstitial clusters. And it is found that the energy exchanges between ions and electrons are mainly contributed by the kinetic energies.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Binary systems with an RR Lyrae component - progress in 2016
In this contribution, we summarize the progress made in the investigation of binary candidates with an RR Lyrae component in 2016. We also discuss the actual status of the RRLyrBinCan database.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Covering and tiling hypergraphs with tight cycles
Given $3 \leq k \leq s$, we say that a $k$-uniform hypergraph $C^k_s$ is a tight cycle on $s$ vertices if there is a cyclic ordering of the vertices of $C^k_s$ such that every $k$ consecutive vertices under this ordering form an edge. We prove that if $k \ge 3$ and $s \ge 2k^2$, then every $k$-uniform hypergraph on $n$ vertices with minimum codegree at least $(1/2 + o(1))n$ has the property that every vertex is covered by a copy of $C^k_s$. Our result is asymptotically best possible for infinitely many pairs of $s$ and $k$, e.g. when $s$ and $k$ are coprime. A perfect $C^k_s$-tiling is a spanning collection of vertex-disjoint copies of $C^k_s$. When $s$ is divisible by $k$, the problem of determining the minimum codegree that guarantees a perfect $C^k_s$-tiling was solved by a result of Mycroft. We prove that if $k \ge 3$ and $s \ge 5k^2$ is not divisible by $k$ and $s$ divides $n$, then every $k$-uniform hypergraph on $n$ vertices with minimum codegree at least $(1/2 + 1/(2s) + o(1))n$ has a perfect $C^k_s$-tiling. Again our result is asymptotically best possible for infinitely many pairs of $s$ and $k$, e.g. when $s$ and $k$ are coprime with $k$ even.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
DNA methylation markers to assess biological age
Among the different biomarkers of aging based on omics and clinical data, DNA methylation clocks stand apart providing unmatched accuracy in assessing the biological age of both humans and animal models of aging. Here, we discuss robustness of DNA methylation clocks and bounds on their out-of-sample performance and review computational strategies for development of the clocks.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Towards Building an Intelligent Anti-Malware System: A Deep Learning Approach using Support Vector Machine (SVM) for Malware Classification
Effective and efficient mitigation of malware is a long-time endeavor in the information security community. The development of an anti-malware system that can counteract an unknown malware is a prolific activity that may benefit several sectors. We envision an intelligent anti-malware system that utilizes the power of deep learning (DL) models. Using such models would enable the detection of newly-released malware through mathematical generalization. That is, finding the relationship between a given malware $x$ and its corresponding malware family $y$, $f: x \mapsto y$. To accomplish this feat, we used the Malimg dataset (Nataraj et al., 2011), which consists of malware images that were processed from malware binaries, and then we trained the following DL models to classify each malware family: CNN-SVM (Tang, 2013), GRU-SVM (Agarap, 2017), and MLP-SVM. Empirical evidence has shown that the GRU-SVM stands out among the DL models with a predictive accuracy of ~84.92%. This stands to reason, as the mentioned model had the most sophisticated architecture design among the presented models. The exploration of an even more optimal DL-SVM model is the next stage towards the engineering of an intelligent anti-malware system.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Exponential profiles from stellar scattering off interstellar clumps and holes in dwarf galaxy discs
Holes and clumps in the interstellar gas of dwarf irregular galaxies are gravitational scattering centers that heat field stars and change their radial and vertical distributions. Because the gas structures are extended and each stellar scattering is relatively weak, the stellar orbits remain nearly circular and the net effect accumulates slowly over time. We calculate the radial profile of scattered stars with an idealized model and find that it approaches an equilibrium shape that is exponential, similar to the observed shapes of galaxy discs. Our models treat only scattering and have no bars or spiral arms, so the results apply mostly to dwarf irregular galaxies where there are no other obvious scattering processes. Stellar scattering by gaseous perturbations slows down when the stellar population gets thicker than the gas layer. An accreting galaxy with a growing thin gas layer can form multiple stellar exponential profiles from the inside-out, preserving the remnants of each Gyr interval in a sequence of ever-lengthening and thinning stellar subdiscs.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Visualizations for an Explainable Planning Agent
In this paper, we report on the visualization capabilities of an Explainable AI Planning (XAIP) agent that can support human in the loop decision making. Imposing transparency and explainability requirements on such agents is especially important in order to establish trust and common ground with the end-to-end automated planning system. Visualizing the agent's internal decision-making processes is a crucial step towards achieving this. This may include externalizing the "brain" of the agent -- starting from its sensory inputs, to progressively higher order decisions made by it in order to drive its planning components. We also show how the planner can bootstrap on the latest techniques in explainable planning to cast plan visualization as a plan explanation problem, and thus provide concise model-based visualization of its plans. We demonstrate these functionalities in the context of the automated planning components of a smart assistant in an instrumented meeting space.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Generative adversarial interpolative autoencoding: adversarial training on latent space interpolations encourage convex latent distributions
We present a neural network architecture based upon the Autoencoder (AE) and Generative Adversarial Network (GAN) that promotes a convex latent distribution by training adversarially on latent space interpolations. By using an AE as both the generator and discriminator of a GAN, we pass a pixel-wise error function across the discriminator, yielding an AE which produces non-blurry samples that match both high- and low-level features of the original images. Interpolations between images in this space remain within the latent-space distribution of real images as trained by the discriminator, and therefore preserve realistic resemblances to the network inputs.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Attribute-Guided Face Generation Using Conditional CycleGAN
We are interested in attribute-guided face generation: given a low-res face input image, an attribute vector that can be extracted from a high-res image (attribute image), our new method generates a high-res face image for the low-res input that satisfies the given attributes. To address this problem, we condition the CycleGAN and propose conditional CycleGAN, which is designed to 1) handle unpaired training data because the training low/high-res and high-res attribute images may not necessarily align with each other, and to 2) allow easy control of the appearance of the generated face via the input attributes. We demonstrate impressive results on the attribute-guided conditional CycleGAN, which can synthesize realistic face images with appearance easily controlled by user-supplied attributes (e.g., gender, makeup, hair color, eyeglasses). Using the attribute image as identity to produce the corresponding conditional vector and by incorporating a face verification network, the attribute-guided network becomes the identity-guided conditional CycleGAN which produces impressive and interesting results on identity transfer. We demonstrate three applications on identity-guided conditional CycleGAN: identity-preserving face superresolution, face swapping, and frontal face generation, which consistently show the advantage of our new method.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Objective-Reinforced Generative Adversarial Networks (ORGAN) for Sequence Generation Models
In unsupervised data generation tasks, besides the generation of a sample based on previous observations, one would often like to give hints to the model in order to bias the generation towards desirable metrics. We propose a method that combines Generative Adversarial Networks (GANs) and reinforcement learning (RL) in order to accomplish exactly that. While RL biases the data generation process towards arbitrary metrics, the GAN component of the reward function ensures that the model still remembers information learned from data. We build upon previous results that incorporated GANs and RL in order to generate sequence data and test this model in several settings for the generation of molecules encoded as text sequences (SMILES) and in the context of music generation, showing for each case that we can effectively bias the generation process towards desired metrics.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Pluripotential theory on the support of closed positive currents and applications to dynamics in $\mathbb{C}^n$
We extend certain classical theorems in pluripotential theory to a class of functions defined on the support of a $(1,1)$-closed positive current $T$, analogous to plurisubharmonic functions, called $T$-plurisubharmonic functions. These functions are defined as limits, on the support of $T$, of sequences of plurisubharmonic functions decreasing on this support. In particular, we show that the poles of such functions are pluripolar sets. We also show that the maximum principle and the Hartogs's theorem remain valid in a weak sense. We study these functions by means of a class of measures, so-called "pluri-Jensen measures", about which we prove that they are numerous on the support of $(1,1)$-closed positive currents. We also obtain, for any fat compact set, an expression of its relative Green's function in terms of an infimum of an integral over a set of pluri-Jensen measures. We then deduce, by means of these measures, a characterization of the polynomially convex fat compact sets, as well as a characterization of pluripolar sets, and the fact that the support of a closed positive $(1,1)$-current is nowhere pluri-thin. In the second part of this article, these tools are used to study dynamics of a certain class of automorphisms of $\mathbb{C}^n$ which naturally generalize Hénon's automorphisms of $\mathbb{C}^2$. First we study the geometry of the support of canonical invariant currents. Then we obtain an equidistribution result for the convergence of pull-back of certain measures towards an ergodic invariant measure, with compact support.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Inconsistency of Measure-Theoretic Probability and Random Behavior of Microscopic Systems
We report an inconsistency found in probability theory (also referred to as measure-theoretic probability). For probability measures induced by real-valued random variables, we deduce an "equality" such that one side of the "equality" is a probability, but the other side is not. For probability measures induced by extended random variables, we deduce an "equality" such that its two sides are unequal probabilities. The deduced expressions are erroneous only when it can be proved that measure-theoretic probability is a theory free from contradiction. However, such a proof does not exist. The inconsistency appears only in the theory rather than in the physical world, and will not affect practical applications as long as ideal events in the theory (which will not occur physically) are not mistaken for observable events in the real world. Nevertheless, unlike known paradoxes in mathematics, the inconsistency cannot be explained away and hence must be resolved. The assumption of infinite additivity in the theory is relevant to the inconsistency, and may cause confusion of ideal events and real events. As illustrated by an example in this article, since abstract properties of mathematical entities in theoretical thinking are not necessarily properties of physical quantities observed in the real world, mistaking the former for the latter may lead to misinterpreting random phenomena observed in experiments with microscopic systems. Actually the inconsistency is due to the notion of "numbers" adopted in conventional mathematics. A possible way to resolve the inconsistency is to treat "numbers" from the viewpoint of constructive mathematics.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Robustness of Quasiparticle Interference Test for Sign-changing Gaps in Multiband Superconductors
Recently, a test for a sign-changing gap function in a candidate multiband unconventional superconductor involving quasiparticle interference data was proposed. The test was based on the antisymmetric, Fourier transformed conductance maps integrated over a range of momenta $\bf q$ corresponding to interband processes, which was argued to display a particular resonant form, provided the gaps changed sign between the Fermi surface sheets connected by $\bf q$. The calculation was performed for a single impurity, however, raising the question of how robust this measure is as a test of sign-changing pairing in a realistic system with many impurities. Here we reproduce the results of the previous work within a model with two distinct Fermi surface sheets, and show explicitly that the previous result, while exact for a single nonmagnetic scatterer and also in the limit of a dense set of random impurities, can be difficult to implement for a few dilute impurities. In this case, however, appropriate isolation of a single impurity is sufficient to recover the expected result, allowing a robust statement about the gap signs to be made.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Adaptive User Perspective Rendering for Handheld Augmented Reality
Handheld Augmented Reality commonly implements some variant of magic lens rendering, which turns only a fraction of the user's real environment into AR while the rest of the environment remains unaffected. Since handheld AR devices are commonly equipped with video see-through capabilities, AR magic lens applications often suffer from spatial distortions, because the AR environment is presented from the perspective of the camera of the mobile device. Recent approaches counteract this distortion based on estimations of the user's head position, rendering the scene from the user's perspective. To this end, approaches usually apply face-tracking algorithms on the front camera of the mobile device. However, this demands high computational resources and therefore commonly affects the performance of the application beyond the already high computational load of AR applications. In this paper, we present a method to reduce the computational demands for user perspective rendering by applying lightweight optical flow tracking and an estimation of the user's motion before head tracking is started. We demonstrate the suitability of our approach for computationally limited mobile devices and we compare it to device perspective rendering, to head tracked user perspective rendering, as well as to fixed point of view user perspective rendering.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A distributed primal-dual algorithm for computation of generalized Nash equilibria with shared affine coupling constraints via operator splitting methods
In this paper, we propose a distributed primal-dual algorithm for computation of a generalized Nash equilibrium (GNE) in noncooperative games over network systems. In the considered game, not only each player's local objective function depends on other players' decisions, but also the feasible decision sets of all the players are coupled together with a globally shared affine inequality constraint. Adopting the variational GNE, that is the solution of a variational inequality, as a refinement of GNE, we introduce a primal-dual algorithm that players can use to seek it in a distributed manner. Each player only needs to know its local objective function, local feasible set, and a local block of the affine constraint. Meanwhile, each player only needs to observe the decisions on which its local objective function explicitly depends through the interference graph and share information related to multipliers with its neighbors through a multiplier graph. Through a primal-dual analysis and an augmentation of variables, we reformulate the problem as finding the zeros of a sum of monotone operators. Our distributed primal-dual algorithm is based on forward-backward operator splitting methods. We prove its convergence to the variational GNE for fixed step-sizes under some mild assumptions. Then a distributed algorithm with inertia is also introduced and analyzed for variational GNE seeking. Finally, numerical simulations for network Cournot competition are given to illustrate the algorithm efficiency and performance.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Inverse dynamic and spectral problems for the one-dimensional Dirac system on a finite tree
We consider inverse dynamic and spectral problems for the one dimensional Dirac system on a finite tree. Our aim will be to recover the topology of a tree (lengths and connectivity of edges) as well as matrix potentials on each edge. As inverse data we use the Weyl-Titchmarsh matrix function or the dynamic response operator.
0
0
1
0
0
0
Extensions of isomorphisms of subvarieties in flexible varieties
Let $X$ be a quasi-affine algebraic variety isomorphic to the complement of a closed subvariety of dimension at most $n-3$ in $\mathbb{C}^n$. We find some conditions under which an isomorphism of two closed subvarieties of $X$ can be extended to an automorphism of $X$.
0
0
1
0
0
0
Frequentist coverage and sup-norm convergence rate in Gaussian process regression
Gaussian process (GP) regression is a powerful interpolation technique due to its flexibility in capturing non-linearity. In this paper, we provide a general framework for understanding the frequentist coverage of point-wise and simultaneous Bayesian credible sets in GP regression. As an intermediate result, we develop a Bernstein-von Mises type result under the supremum norm in random design GP regression. Identifying both the mean and covariance function of the posterior distribution of the Gaussian process as regularized $M$-estimators, we show that the sampling distribution of the posterior mean function and the centered posterior distribution can be respectively approximated by two population-level GPs. By developing a comparison inequality between two GPs, we provide an exact characterization of the frequentist coverage probabilities of Bayesian point-wise credible intervals and simultaneous credible bands of the regression function. Our results show that inference based on GP regression tends to be conservative; when the prior is under-smoothed, the resulting credible intervals and bands have minimax-optimal sizes, with their frequentist coverage converging to a non-degenerate value between their nominal level and one. As a byproduct of our theory, we show that GP regression also yields a minimax-optimal posterior contraction rate relative to the supremum norm, which provides positive evidence for the long-standing problem of the optimal supremum norm contraction rate in GP regression.
0
0
1
1
0
0
Most Probable Evolution Trajectories in a Genetic Regulatory System Excited by Stable Lévy Noise
We study the most probable trajectories of the concentration evolution for the transcription factor activator in a genetic regulation system, with non-Gaussian stable Lévy noise in the synthesis reaction rate taken into account. We calculate the most probable trajectory by spatially maximizing the probability density of the system path, i.e., the solution of the associated nonlocal Fokker-Planck equation. We especially examine those most probable trajectories from the low concentration state to the high concentration state (i.e., the likely transcription regime) for certain parameters, in order to gain insights into the transcription processes and the tipping time for the transcription likely to occur. This enables us: (i) to visualize the progress of concentration evolution (i.e., observe whether the system enters the transcription regime within a given time period); (ii) to predict or avoid certain transcriptions via selecting specific noise parameters in particular regions in the parameter space. Moreover, we have found some peculiar or counter-intuitive phenomena in this gene model system, including (a) a smaller noise intensity may trigger the transcription process, while a larger noise intensity cannot, under the same asymmetric Lévy noise. This phenomenon does not occur in the case of symmetric Lévy noise; (b) the symmetric Lévy motion always induces transition to high concentration, but certain asymmetric Lévy motions do not trigger the switch to transcription. These findings provide insights for further experimental research, in order to achieve or to avoid specific gene transcriptions, with possible relevance for medical advances.
0
0
0
0
1
0
Simplified derivation of the collision probability of two objects in independent Keplerian orbits
Many topics in planetary studies demand an estimate of the collision probability of two objects moving on nearly Keplerian orbits. In the classic works of Öpik (1951) and Wetherill (1967), the collision probability was derived by linearizing the motion near the collision points, and there is now a vast literature using their method. We present here a simpler and more physically motivated derivation for non-tangential collisions in Keplerian orbits, as well as for tangential collisions that were not previously considered. Our formulas have the added advantage of being manifestly symmetric in the parameters of the two colliding bodies. In common with the Öpik-Wetherill treatments, we linearize the motion of the bodies in the vicinity of the point of orbit intersection (or near the points of minimum distance between the two orbits) and assume a uniform distribution of impact parameter within the collision radius. We point out that the linear approximation leads to singular results for the case of tangential encounters. We regularize this singularity by use of a parabolic approximation of the motion in the vicinity of a tangential encounter.
0
1
0
0
0
0
Optically Coupled Methods for Microwave Impedance Microscopy
Scanning Microwave Impedance Microscopy (MIM) measurement of photoconductivity with 50 nm resolution is demonstrated using a modulated optical source. The use of a modulated source allows for measurement of photoconductivity in a single scan without a reference region on the sample, as well as removing most topographical artifacts and enhancing the signal-to-noise ratio as compared with unmodulated measurement. A broadband light source with a tunable monochromator is then used to measure energy-resolved photoconductivity with the same methodology. Finally, a pulsed optical source is used to measure local photo-carrier lifetimes via MIM, using the same 50 nm resolution tip.
0
1
0
0
0
0
Generalized feedback vertex set problems on bounded-treewidth graphs: chordality is the key to single-exponential parameterized algorithms
It has long been known that Feedback Vertex Set can be solved in time $2^{\mathcal{O}(w\log w)}n^{\mathcal{O}(1)}$ on $n$-vertex graphs of treewidth $w$, but it was only recently that this running time was improved to $2^{\mathcal{O}(w)}n^{\mathcal{O}(1)}$, that is, to single-exponential parameterized by treewidth. We investigate which generalizations of Feedback Vertex Set can be solved in a similar running time. Formally, for a class $\mathcal{P}$ of graphs, the Bounded $\mathcal{P}$-Block Vertex Deletion problem asks, given a graph~$G$ on $n$ vertices and positive integers~$k$ and~$d$, whether $G$ contains a set~$S$ of at most $k$ vertices such that each block of $G-S$ has at most $d$ vertices and is in $\mathcal{P}$. Assuming that $\mathcal{P}$ is recognizable in polynomial time and satisfies a certain natural hereditary condition, we give a sharp characterization of when single-exponential parameterized algorithms are possible for fixed values of $d$: if $\mathcal{P}$ consists only of chordal graphs, then the problem can be solved in time $2^{\mathcal{O}(wd^2)} n^{\mathcal{O}(1)}$, and if $\mathcal{P}$ contains a graph with an induced cycle of length $\ell\ge 4$, then the problem is not solvable in time $2^{o(w\log w)} n^{\mathcal{O}(1)}$ even for fixed $d=\ell$, unless the ETH fails. We also study a similar problem, called Bounded $\mathcal{P}$-Component Vertex Deletion, where the target graphs have connected components of small size rather than blocks of small size, and we present analogous results. For this problem, we also show that if $d$ is part of the input and $\mathcal{P}$ contains all chordal graphs, then it cannot be solved in time $f(w)n^{o(w)}$ for some function $f$, unless the ETH fails.
1
0
0
0
0
0
Electron-phonon coupling mechanisms for hydrogen-rich metals at high pressure
The mechanisms for strong electron-phonon coupling predicted for hydrogen-rich alloys with high superconducting critical temperature ($T_c$) are examined within the Migdal-Eliashberg theory. Analysis of the functional derivative of $T_c$ with respect to the electron-phonon spectral function shows that at low pressures, when the alloys often adopt layered structures, bending vibrations have the dominant effect. At very high pressures, the H-H interactions in two-dimensional (2D) and three-dimensional (3D) extended structures are weakened, resulting in mixed bending (libration) and stretching vibrations, and the electron-phonon coupling process is distributed over a broad frequency range, leading to very high $T_c$.
0
1
0
0
0
0
Competitive Equilibrium For almost All Incomes
Competitive equilibrium from equal incomes (CEEI) is a well-known rule for fair allocation of resources among agents with different preferences. It has many advantages, among them the fact that a CEEI allocation is both Pareto efficient and envy-free. However, when the resources are indivisible, a CEEI allocation might not exist even when there are two agents and a single item. In contrast to this discouraging non-existence result, Babaioff, Nisan and Talgam-Cohen (2017) recently suggested a new and more encouraging approach to allocation of indivisible items: instead of insisting that the incomes be equal, they suggest looking at the entire space of possible incomes, and checking whether there exists a competitive equilibrium for almost all income-vectors (CEFAI) --- all of income-space except a subset of measure zero. They show that a CEFAI exists when there are at most 3 items, or when there are 4 items and two agents. They also show that when there are 5 items and two agents there might not exist a CEFAI. They leave open the cases of 4 items with three or four agents. This paper presents a new way to implement a CEFAI, as a subgame-perfect equilibrium of a sequential game. This new implementation allows us both to offer much simpler solutions to the known cases (at most 3 items, and 4 items with two agents), and to prove that a CEFAI exists even in the much more difficult case of 4 items and three agents. Moreover, we prove that a CEFAI might not exist with 4 items and four agents. When the items to be divided are bads (chores), a CEFAI exists for two agents with at most 4 chores, but does not exist for two agents with 5 chores or for three agents with 3 or more chores. Thus, this paper completes the characterization of CEFAI existence for monotone preferences.
1
0
0
0
0
0
Converting algebraic Diophantine equations to a diagonal form with the help of an integer non-orthogonal transformation, maintaining the asymptotic behavior of the number of their integer solutions
The author showed that any homogeneous algebraic Diophantine equation of the second order can be converted to a diagonal form using an integer non-orthogonal transformation maintaining the asymptotic behavior of the number of its integer solutions. In this paper, we consider the transformation to the diagonal form of a wider class of algebraic second-order Diophantine equations, and we also consider the conditions for converting higher-order algebraic Diophantine equations to this form. The author found an asymptotic estimate for the number of integer solutions of the diagonal Thue equation of odd degree with more than two variables, and also obtained asymptotic estimates of the number of integer solutions of other types of diagonal algebraic Diophantine equations.
0
0
1
0
0
0
A Deep Learning Approach with an Attention Mechanism for Automatic Sleep Stage Classification
Automatic sleep staging is a challenging problem and state-of-the-art algorithms have not yet reached satisfactory performance to be used instead of manual scoring by a sleep technician. Much research has been done to find good feature representations that extract the information needed to classify each epoch into the correct sleep stage. While many useful features have been discovered, the number of features has grown to the extent that a feature reduction step is necessary in order to avoid the curse of dimensionality. One reason for the need of such a large feature set is that many features are good for discriminating only one of the sleep stages and are less informative during other stages. This paper explores how a second feature representation over a large set of pre-defined features can be learned using an auto-encoder with selective attention for the current sleep stage in the training batch. This selective attention allows the model to learn feature representations that focus on the more relevant inputs without having to perform any dimensionality reduction of the input data. The performance of the proposed algorithm is evaluated on a large data set of polysomnography (PSG) night recordings of patients with sleep-disordered breathing. The performance of the auto-encoder with selective attention is compared with a regular auto-encoder and previous works using a deep belief network (DBN).
0
0
0
0
1
0
Laser Wakefield Accelerators
The one-dimensional wakefield generation equations are solved for increasing levels of non-linearity, to demonstrate how these non-linearities contribute to the overall behaviour of a non-linear wakefield in a plasma. The effect of laser guiding is also studied as a way to increase the interaction length of a laser wakefield accelerator.
0
1
0
0
0
0
Permutation complexity of images of Sturmian words by marked morphisms
We show that the permutation complexity of the image of a Sturmian word by a binary marked morphism is $n+k$ for some constant $k$ and all lengths $n$ sufficiently large.
1
0
0
0
0
0
Three principles of data science: predictability, computability, and stability (PCS)
We propose the predictability, computability, and stability (PCS) framework to extract reproducible knowledge from data that can guide scientific hypothesis generation and experimental design. The PCS framework builds on key ideas in machine learning, using predictability as a reality check and evaluating computational considerations in data collection, data storage, and algorithm design. It augments PC with an overarching stability principle, which largely expands traditional statistical uncertainty considerations. In particular, stability assesses how results vary with respect to choices (or perturbations) made across the data science life cycle, including problem formulation, pre-processing, modeling (data and algorithm perturbations), and exploratory data analysis (EDA) before and after modeling. Furthermore, we develop PCS inference to investigate the stability of data results and identify when models are consistent with relatively simple phenomena. We compare PCS inference with existing methods, such as selective inference, in high-dimensional sparse linear model simulations to demonstrate that our methods consistently outperform others in terms of ROC curves over a wide range of simulation settings. Finally, we propose a PCS documentation based on Rmarkdown, iPython, or Jupyter Notebook, with publicly available, reproducible codes and narratives to back up human choices made throughout an analysis. The PCS workflow and documentation are demonstrated in a genomics case study available on Zenodo.
1
0
0
1
0
0
The Elasticity of Puiseux Monoids
Let $M$ be an atomic monoid and let $x$ be a non-unit element of $M$. The elasticity of $x$, denoted by $\rho(x)$, is the ratio of its largest factorization length to its shortest factorization length, and it measures how far $x$ is from having a unique factorization. The elasticity $\rho(M)$ of $M$ is the supremum of the elasticities of all non-unit elements of $M$. The monoid $M$ has accepted elasticity if $\rho(M) = \rho(m)$ for some $m \in M$. In this paper, we study the elasticity of Puiseux monoids (i.e., additive submonoids of $\mathbb{Q}_{\ge 0}$). First, we characterize the Puiseux monoids $M$ having finite elasticity and find a formula for $\rho(M)$. Then we classify the Puiseux monoids having accepted elasticity in terms of their sets of atoms. When $M$ is a primary Puiseux monoid, we describe the topology of the set of elasticities of $M$, including a characterization of when $M$ is a bounded factorization monoid. Lastly, we give an example of a Puiseux monoid that is bifurcus (that is, every nonzero element has a factorization of length at most $2$).
0
0
1
0
0
0
Barcode Embeddings for Metric Graphs
Stable topological invariants are a cornerstone of persistence theory and applied topology, but their discriminative properties are often poorly understood. In this paper we study a rich homology-based invariant first defined by Dey, Shi, and Wang, which we think of as embedding a metric graph in the barcode space. We prove that this invariant is locally injective on the space of metric graphs and globally injective on a GH-dense subset. Moreover, we define a new topology on the space of metric graphs, which we call the fibered topology, for which the barcode transform is injective on a generic (open and dense) subset.
0
0
1
0
0
0
On the extremal extensions of a non-negative Jacobi operator
We consider the minimal non-negative Jacobi operator with $p\times p$ matrix entries. Using the technique of boundary triplets and the corresponding Weyl functions, we describe the Friedrichs and Krein extensions of the minimal Jacobi operator. Moreover, we parameterize the set of all non-negative self-adjoint extensions in terms of boundary conditions.
0
0
1
0
0
0
Note on the geodesic Monte Carlo
Geodesic Monte Carlo (gMC) is a powerful algorithm for Bayesian inference on non-Euclidean manifolds. The original gMC algorithm was cleverly derived in terms of its progenitor, the Riemannian manifold Hamiltonian Monte Carlo (RMHMC). Here, it is shown that alternative and theoretically simpler derivations are available in which the original algorithm is a special case of two general classes of algorithms characterized by non-trivial mass matrices. The proposed derivations work entirely in embedding coordinates and thus clarify the original algorithm as applied to manifolds embedded in Euclidean space.
0
0
0
1
0
0
Google Scholar and the gray literature: A reply to Bonato's review
Recently, a review concluded that Google Scholar (GS) is not a suitable source of information "for identifying recent conference papers or other gray literature publications". The goal of this letter is to demonstrate that GS can be an effective tool to search and find gray literature, as long as appropriate search strategies are used. To do this, we took as examples the same two case studies used by the original review, first describing how GS processes the original search strategies, then proposing alternative search strategies, and finally generalizing each case study to compose a general search procedure aimed at finding gray literature in Google Scholar for two broad case studies: a) all contributions belonging to a congress (the ASCO Annual Meeting); and b) indexed guidelines as well as gray literature within medical institutions (National Institutes of Health) and governmental agencies (U.S. Department of Health & Human Services). The results confirm that the original search strategies were undertrained, offering misleading results and erroneous conclusions. Google Scholar lacks many of the advanced search features available in other bibliographic databases (such as PubMed); however, it is one thing to have a friendly search experience, and quite another to find gray literature. We finally conclude that Google Scholar is a powerful tool for searching gray literature, as long as the users are familiar with all the possibilities it offers as a search engine. Poorly formulated searches will undoubtedly return misleading results.
1
0
0
0
0
0
A Next-Best-Smell Approach for Remote Gas Detection with a Mobile Robot
The problem of gas detection is relevant to many real-world applications, such as leak detection in industrial settings and landfill monitoring. Using mobile robots for gas detection has several advantages and can reduce danger for humans. In our work, we address the problem of planning a path for a mobile robotic platform equipped with a remote gas sensor, which minimizes the time to detect all gas sources in a given environment. We cast this problem as a coverage planning problem by defining a basic sensing operation -- a scan with the remote gas sensor -- as the field of "view" of the sensor. Given the computing effort required by previously proposed offline approaches, in this paper we suggest an online coverage algorithm, called Next-Best-Smell, adapted from the Next-Best-View class of exploration algorithms. Our algorithm evaluates candidate locations with a global utility function, which combines utility values for travel distance, information gain, and sensing time, using Multi-Criteria Decision Making. In our experiments, conducted both in simulation and with a real robot, we found the performance of the Next-Best-Smell approach to be comparable with that of the state-of-the-art offline algorithm, at much lower computational cost.
1
0
0
0
0
0
KZ-Calogero correspondence revisited
We discuss the correspondence between the Knizhnik-Zamolodchikov equations associated with $GL(N)$ and the $n$-particle quantum Calogero model in the case when $n$ is not necessarily equal to $N$. This can be viewed as a natural "quantization" of the quantum-classical correspondence between quantum Gaudin and classical Calogero models.
0
1
1
0
0
0
Spectral up- and downshifting of Akhmediev breathers under wind forcing
We experimentally and numerically investigate the effect of wind forcing on the spectral dynamics of Akhmediev breathers, a wave type known to model the modulation instability. We develop the wind model to the same order in steepness as the higher-order modification of the nonlinear Schroedinger equation, also referred to as the Dysthe equation. This results in an asymmetric wind term in the higher order, in addition to the leading-order wind forcing term. The derived model is in good agreement with laboratory experiments within the range of the facility's length. We show that the leading-order forcing term amplifies all frequencies equally and therefore induces only a broadening of the spectrum, while the asymmetric higher-order term in the model enhances higher frequencies more than lower ones. Thus, the latter term induces a permanent upshift of the spectral mean. On the other hand, in contrast to the direct effect of wind forcing, wind can indirectly lead to frequency downshifts, due to dissipative effects such as wave breaking, or through amplification of the intrinsic spectral asymmetry of the Dysthe equation. Furthermore, the definitions of the up- and downshift in terms of peak and mean frequencies, that are critical to relate our work to previous results, are highlighted and discussed.
0
1
0
0
0
0
High Rate LDPC Codes from Difference Covering Arrays
This paper presents a combinatorial construction of low-density parity-check (LDPC) codes from difference covering arrays. While the original construction by Gallager was by randomly allocating bits in a sparse parity-check matrix, over the past 20 years researchers have used a variety of more structured approaches to construct these codes, with the more recent constructions of well-structured LDPC codes coming from balanced incomplete block designs (BIBDs) and from Latin squares over finite fields. However, these constructions have suffered from the limited orders for which these designs exist. Here we present a construction of LDPC codes of length $4n^2 - 2n$ for all $n$ using the cyclic group of order $2n$. These codes achieve a high information rate (greater than 0.8) for $n \geq 8$, have girth at least 6 and have minimum distance 6 for $n$ odd.
1
0
1
0
0
0
Bootstrapping spectral statistics in high dimensions
Statistics derived from the eigenvalues of sample covariance matrices are called spectral statistics, and they play a central role in multivariate testing. Although bootstrap methods are an established approach to approximating the laws of spectral statistics in low-dimensional problems, these methods are relatively unexplored in the high-dimensional setting. The aim of this paper is to focus on linear spectral statistics as a class of prototypes for developing a new bootstrap in high-dimensions --- and we refer to this method as the Spectral Bootstrap. In essence, the method originates from the parametric bootstrap, and is motivated by the notion that, in high dimensions, it is difficult to obtain a non-parametric approximation to the full data-generating distribution. From a practical standpoint, the method is easy to use, and allows the user to circumvent the difficulties of complex asymptotic formulas for linear spectral statistics. In addition to proving the consistency of the proposed method, we provide encouraging empirical results in a variety of settings. Lastly, and perhaps most interestingly, we show through simulations that the method can be applied successfully to statistics outside the class of linear spectral statistics, such as the largest sample eigenvalue and others.
0
0
0
1
0
0
On parabolic subgroups of Artin-Tits groups of spherical type
We show that, in an Artin-Tits group of spherical type, the intersection of two parabolic subgroups is a parabolic subgroup. Moreover, we show that the set of parabolic subgroups forms a lattice with respect to inclusion. This extends to all Artin-Tits groups of spherical type a result that was previously known for braid groups. To obtain the above results, we show that every element in an Artin-Tits group of spherical type admits a unique minimal parabolic subgroup containing it. Also, the subgroup associated to an element coincides with the subgroup associated to any of its powers or roots. As a consequence, if an element belongs to a parabolic subgroup, all its roots belong to the same parabolic subgroup. We define the simplicial complex of irreducible parabolic subgroups, and we propose it as the analogue, in Artin-Tits groups of spherical type, of the celebrated complex of curves which is an important tool in braid groups, and more generally in mapping class groups. We conjecture that the complex of irreducible parabolic subgroups is $\delta$-hyperbolic.
0
0
1
0
0
0
A statistical test for the Zipf's law by deviations from the Heaps' law
We explore a probabilistic model of an artistic text: words of the text are chosen independently of each other in accordance with a discrete probability distribution on an infinite dictionary. The words are enumerated 1, 2, $\ldots$, and the probability that the $i$'th word appears is asymptotically a power function. Bahadur proved that in this case the number of different words, as a function of the length of the text, is asymptotically a power function, too. On the other hand, in the applied statistics community, there exist statements supported by empirical observations: Zipf's and Heaps' laws. We highlight the links between Bahadur's results and Zipf's/Heaps' laws, and introduce and analyse a corresponding statistical test.
0
0
1
1
0
0
Weakly Supervised Classification in High Energy Physics
As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics - quark versus gluon tagging - we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
0
1
0
1
0
0
Profit Driven Decision Trees for Churn Prediction
Customer retention campaigns increasingly rely on predictive models to detect potential churners in a vast customer base. From the perspective of machine learning, the task of predicting customer churn can be presented as a binary classification problem. Using data on historic behavior, classification algorithms are built with the purpose of accurately predicting the probability of a customer defecting. The predictive churn models are then commonly selected based on accuracy-related performance measures such as the area under the ROC curve (AUC). However, these models are often not well aligned with the core business requirement of profit maximization, in the sense that the models fail to take into account not only misclassification costs, but also the benefits originating from a correct classification. Therefore, the aim is to construct churn prediction models that are profitable and preferably interpretable too. The recently developed expected maximum profit measure for customer churn (EMPC) has been proposed in order to select the most profitable churn model. We present a new classifier that integrates the EMPC metric directly into the model construction. Our technique, called ProfTree, uses an evolutionary algorithm for learning profit driven decision trees. In a benchmark study with real-life data sets from various telecommunication service providers, we show that ProfTree achieves significant profit improvements compared to classic accuracy driven tree-based methods.
1
0
0
1
0
0
Carrier loss mechanisms in textured crystalline Si-based solar cells
A quite general device analysis method that allows the direct evaluation of optical and recombination losses in crystalline silicon (c-Si)-based solar cells has been developed. By applying this technique, the optical and physical limiting factors of the state-of-the-art solar cells with ~20% efficiencies have been revealed. In the established method, the carrier loss mechanisms are characterized from the external quantum efficiency (EQE) analysis with very low computational cost. In particular, the EQE analyses of textured c-Si solar cells are implemented by employing the experimental reflectance spectra obtained directly from the actual devices while using flat optical models without any fitting parameters. We find that the developed method provides almost perfect fitting to EQE spectra reported for various textured c-Si solar cells, including c-Si heterojunction solar cells, a dopant-free c-Si solar cell with a MoOx layer, and an n-type passivated emitter with rear locally diffused (PERL) solar cell. The modeling of the recombination loss further allows the extraction of the minority carrier diffusion length and surface recombination velocity from the EQE analysis. Based on the EQE analysis results, the carrier loss mechanisms in different types of c-Si solar cells are discussed.
0
1
0
0
0
0
The generating function for the Airy point process and a system of coupled Painlevé II equations
For a wide class of Hermitian random matrices, the limit distribution of the eigenvalues close to the largest one is governed by the Airy point process. In such ensembles, the limit distribution of the $k$-th largest eigenvalue is given in terms of the Airy kernel Fredholm determinant or in terms of Tracy-Widom formulas involving solutions of the Painlevé II equation. Limit distributions for quantities involving two or more near-extreme eigenvalues, such as the gap between the $k$-th and the $\ell$-th largest eigenvalue or the sum of the $k$ largest eigenvalues, can be expressed in terms of Fredholm determinants of an Airy kernel with several discontinuities. We establish simple Tracy-Widom type expressions for these Fredholm determinants, which involve solutions to systems of coupled Painlevé II equations, and we investigate the asymptotic behavior of these solutions.
0
0
1
0
0
0
Gradient-based Training of Slow Feature Analysis by Differentiable Approximate Whitening
This paper proposes Power Slow Feature Analysis (PowerSFA), a gradient-based variant of Slow Feature Analysis (SFA) that extracts temporally slow features from a high-dimensional input stream varying on a faster time-scale. While displaying performance comparable to hierarchical extensions to the SFA algorithm, such as Hierarchical Slow Feature Analysis, for a small number of output features, our algorithm allows end-to-end training of arbitrary differentiable approximators (e.g., deep neural networks). We provide experimental evidence that PowerSFA is able to extract meaningful and informative low-dimensional features in the case of a) synthetic low-dimensional data, b) visual data, and also for c) a general dataset for which symmetric non-temporal relations between points can be defined.
0
0
0
1
0
0
Quantum Critical Behavior in the Asymptotic Limit of High Disorder: Entropy Stabilized NiCoCr0.8 Alloys
The behavior of matter near a quantum critical point (QCP) is one of the most exciting and challenging areas of physics research. Emergent phenomena such as high-temperature superconductivity are linked to the proximity to a QCP. Although significant progress has been made in understanding quantum critical behavior in some low-dimensional magnetic insulators, the situation in metallic systems is much less clear. Here we demonstrate that NiCoCr$_x$ single crystal alloys are remarkable model systems for investigating QCP physics in a metallic environment. For NiCoCr$_x$ alloys with $x = 0.8$, the critical exponents associated with a ferromagnetic quantum critical point (FQCP) are experimentally determined from low-temperature magnetization and heat capacity measurements. For the first time, all five critical exponents ($\gamma_T = 1/2$, $\beta_T = 1$, $\delta = 3/2$, $\nu z_m = 2$, $\bar{\alpha}_T = 0$) are in remarkable agreement with predictions of Belitz-Kirkpatrick-Vojta (BKV) theory in the asymptotic limit of high disorder. Using these critical exponents, excellent scaling of the magnetization data is demonstrated with no adjustable parameters. We also find a divergence of the magnetic Grüneisen parameter, consistent with an FQCP. This work therefore demonstrates that entropy-stabilized concentrated solid solutions represent a unique platform to study quantum critical behavior in a highly tunable class of materials.
0
1
0
0
0
0
Multilayer Network Modeling of Integrated Biological Systems
Biological systems, from a cell to the human brain, are inherently complex. A powerful representation of such systems, described by an intricate web of relationships across multiple scales, is provided by complex networks. Recently, several studies have highlighted how simple networks -- obtained by aggregating or neglecting the temporal or categorical description of biological data -- are not able to account for the richness of information characterizing biological systems. More complex models, namely multilayer networks, are needed to account for interdependencies, often varying across time, of biological interacting units within a cell, a tissue or parts of an organism.
0
0
0
0
1
0
Fast, Autonomous Flight in GPS-Denied and Cluttered Environments
One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, and with little to no a priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution, and showcase how all the distinct components can be integrated to enable smooth robot operation. We provide critical insight on hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. Experimental testing reveals that our proposed solution can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments.
1
0
0
0
0
0
Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks
Efforts to reduce the numerical precision of computations in deep learning training have yielded systems that aggressively quantize weights and activations, yet employ wide high-precision accumulators for partial sums in inner-product operations to preserve the quality of convergence. The absence of any framework to analyze the precision requirements of partial sum accumulations results in conservative design choices. This imposes an upper-bound on the reduction of complexity of multiply-accumulate units. We present a statistical approach to analyze the impact of reduced accumulation precision on deep learning training. Observing that a bad choice for accumulation precision results in loss of information that manifests itself as a reduction in variance in an ensemble of partial sums, we derive a set of equations that relate this variance to the length of accumulation and the minimum number of bits needed for accumulation. We apply our analysis to three benchmark networks: CIFAR-10 ResNet 32, ImageNet ResNet 18 and ImageNet AlexNet. In each case, with accumulation precision set in accordance with our proposed equations, the networks successfully converge to the single precision floating-point baseline. We also show that reducing accumulation precision further degrades the quality of the trained network, proving that our equations produce tight bounds. Overall this analysis enables precise tailoring of computation hardware to the application, yielding area- and power-optimal systems.
1
0
0
1
0
0
The Berkovich realization for rigid analytic motives
We prove that the functor associating to a rigid analytic variety the singular complex of the underlying Berkovich topological space is motivic, and defines the maximal Artin quotient of a motive. We use this to generalize Berkovich's results on the weight-zero part of the étale cohomology of a variety defined over a non-archimedean valued field.
0
0
1
0
0
0
Rates of estimation for determinantal point processes
Determinantal point processes (DPPs) have wide-ranging applications in machine learning, where they are used to enforce the notion of diversity in subset selection problems. Many estimators have been proposed, but surprisingly the basic properties of the maximum likelihood estimator (MLE) have received little attention. In this paper, we study the local geometry of the expected log-likelihood function to prove several rates of convergence for the MLE. We also give a complete characterization of the case where the MLE converges at a parametric rate. Even in the latter case, we also exhibit a potential curse of dimensionality where the asymptotic variance of the MLE is exponentially large in the dimension of the problem.
0
0
1
1
0
0
Negative refraction in Weyl semimetals
We theoretically propose that Weyl semimetals may exhibit negative refraction at some frequencies close to the plasmon frequency, allowing transverse magnetic (TM) electromagnetic waves with frequencies smaller than the plasmon frequency to propagate in the Weyl semimetals. The idea is justified by the calculation of reflection spectra, in which \textit{negative} refractive index at such frequencies gives physically correct spectra. In this case, a TM electromagnetic wave incident to the surface of the Weyl semimetal will be bent with a negative angle of refraction. We argue that the negative refractive index at the specified frequencies of the electromagnetic wave is required to conserve the energy of the wave, in which the incident energy should propagate away from the point of incidence.
0
1
0
0
0
0
Towards Object Life Cycle-Based Variant Generation of Business Process Models
Variability management of process models is a major challenge for Process-Aware Information Systems. Process model variants can be attributed to any of the following reasons: new technologies, governmental rules, organizational context or adoption of new standards. Current approaches to manage variants of process models address issues such as reducing the huge effort of modeling from scratch, preventing redundancy, and controlling inconsistency in process models. Although effort has been devoted to managing process model variants, there are still limitations. Furthermore, existing approaches do not focus on variants that come from changes in the organizational perspective of process models. Organization-driven variant management is an important area that still needs more study, and it is the focus of this paper. The Object Life Cycle (OLC) is an important aspect that may change from one organization to another. This paper introduces an approach, inspired by a real-life scenario, to generate consistent process model variants that come from adaptations in the OLC.
1
0
0
0
0
0
Detection of sub-MeV Dark Matter with Three-Dimensional Dirac Materials
We propose the use of three-dimensional Dirac materials as targets for direct detection of sub-MeV dark matter. Dirac materials are characterized by a linear dispersion for low-energy electronic excitations, with a small band gap of O(meV) if lattice symmetries are broken. Dark matter at the keV scale carrying kinetic energy as small as a few meV can scatter and excite an electron across the gap. Alternatively, bosonic dark matter as light as a few meV can be absorbed by the electrons in the target. We develop the formalism for dark matter scattering and absorption in Dirac materials and calculate the experimental reach of these target materials. We find that Dirac materials can play a crucial role in detecting dark matter in the keV to MeV mass range that scatters with electrons via a kinetically mixed dark photon, as the dark photon does not develop an in-medium effective mass. The same target materials provide excellent sensitivity to absorption of light bosonic dark matter in the meV to hundreds of meV mass range, superior to all other existing proposals when the dark matter is a kinetically mixed dark photon.
0
1
0
0
0
0
Formation of Local Resonance Band Gaps in Finite Acoustic Metamaterials: A Closed-form Transfer Function Model
The objective of this paper is to use transfer functions to comprehend the formation of band gaps in locally resonant acoustic metamaterials. Identifying a recursive approach for any number of serially arranged locally resonant mass-in-mass cells, a closed-form expression for the transfer function is derived. Analysis of the end-to-end transfer function helps identify the fundamental mechanism for the band gap formation in a finite metamaterial. This mechanism includes (a) repeated complex conjugate zeros located at the natural frequency of the individual local resonators, (b) the presence of two poles which flank the band gap, and (c) the absence of poles in the band gap. Analysis of the finite-cell dynamics is compared to the Bloch-wave analysis of infinitely long metamaterials to confirm the theoretical limits of the band gap estimated by the transfer function modeling. The analysis also explains how the band gap evolves as the number of cells in the metamaterial chain increases and highlights how the response varies depending on the chosen sensing location along the length of the metamaterial. The proposed transfer function approach to compute and evaluate band gaps in locally resonant structures provides a framework for the exploitation of control techniques to modify and tune band gaps in finite metamaterial realizations.
0
1
1
0
0
0
Online Multi-Label Classification: A Label Compression Method
Many modern applications deal with multi-label data, such as functional categorizations of genes, image labeling and text categorization. Classification of such data with a large number of labels and latent dependencies among them is a challenging task, and it becomes even more challenging when the data is received online and in chunks. Many of the current multi-label classification methods require a lot of time and memory, which make them infeasible for practical real-world applications. In this paper, we propose a fast linear label space dimension reduction method that transforms the labels into a reduced encoded space and trains models on the obtained pseudo labels. Additionally, it provides an analytical method to update the decoding matrix which maps the labels into the original space and is used during the test phase. Experimental results show the effectiveness of this approach in terms of running times and the prediction performance over different measures.
0
0
0
1
0
0
Mutual Alignment Transfer Learning
Training robots for operation in the real world is a complex, time consuming and potentially expensive task. Despite significant success of reinforcement learning in games and simulations, research in real robot applications has not been able to match similar progress. While sample complexity can be reduced by training policies in simulation, such policies can perform sub-optimally on the real platform given imperfect calibration of model dynamics. We present an approach -- supplemental to fine tuning on the real robot -- to further benefit from parallel access to a simulator during training and reduce sample requirements on the real robot. The developed approach harnesses auxiliary rewards to guide the exploration for the real world agent based on the proficiency of the agent in simulation and vice versa. In this context, we demonstrate empirically that the reciprocal alignment for both agents provides further benefit as the agent in simulation can adjust to optimize its behaviour for states commonly visited by the real-world agent.
1
0
0
0
0
0
Tangent: Automatic differentiation using source-code transformation for dynamically typed array programming
The need to efficiently calculate first- and higher-order derivatives of increasingly complex models expressed in Python has stressed or exceeded the capabilities of available tools. In this work, we explore techniques from the field of automatic differentiation (AD) that can give researchers expressive power, performance and strong usability. These include source-code transformation (SCT), flexible gradient surgery, efficient in-place array operations, higher-order derivatives as well as mixing of forward and reverse mode AD. We implement and demonstrate these ideas in the Tangent software library for Python, the first AD framework for a dynamic language that uses SCT.
1
0
0
0
0
0
Scalar and Tensor Parameters for Importing Tensor Index Notation including Einstein Summation Notation
In this paper, we propose a method for importing tensor index notation, including Einstein summation notation, into functional programming. This method involves introducing two types of parameters, i.e., scalar and tensor parameters, and simplified tensor index rules that do not handle expressions that are valid only for the Cartesian coordinate system, in which the index can move up and down freely. An example of such an expression is "c = A_i B_i". As an ordinary function, when a tensor parameter obtains a tensor as an argument, the function treats the tensor argument as a whole. In contrast, when a scalar parameter obtains a tensor as an argument, the function is applied to each component of the tensor. In this paper, we show that introducing these two types of parameters and our simplified index rules enables us to apply arbitrary user-defined functions to tensor arguments using index notation, including Einstein summation notation, without requiring an additional description to enable each function to handle tensors.
1
0
0
0
0
0
Evidence for coherent spicule oscillations by correcting Hinode/SOT Ca II H in the southeast limb of the Sun
Wave theories of heating the chromosphere, corona, and solar wind due to photospheric fluctuations are strengthened by the existence of observed wave coherency up to the transition region (TR). The coherency of solar spicules' intensity oscillations was explored using the Solar Optical Telescope (SOT) on the Hinode spacecraft with a height increase above the solar limb in an active region (AR). We used time sequences near the southeast region from the Hinode/SOT in the Ca II H line obtained on April 3, 2015, and applied the de-convolution procedure to the spicules in order to illustrate how effectively our restoration method works on fine structures such as spicules. Moreover, the intensity oscillations at different heights above the solar limb were analysed through wavelet transforms. Afterwards, the phase difference was measured among oscillations at two certain heights in search of evidence for coherent oscillations. The results of wavelet transformations revealed dominant period peaks at 2, 4, 5.5, and 6.5 min at four separate heights. The dominant frequencies for coherency levels higher than 75 percent were found to be around 5.5 and 8.5 mHz. Mean phase speeds of 155-360 km s$^{-1}$ were measured. We found that the mean phase speeds increased with height. The results suggest that the energy flux carried by coherent waves into the corona and heliosphere may be several times larger than previous estimates that were based solely on constant velocities. We provide compelling evidence for the existence of upwardly propagating coherent waves.
0
1
0
0
0
0
Dynamics and transverse relaxation of an unconventional spin-rotation mode in a two-dimensional strongly magnetized electron gas
An unconventional spin-rotation mode emerging in a quantum Hall ferromagnet due to excitation by a laser pulse is studied. This state, macroscopically representing a rotation of the entire electron spin system to a certain angle, is microscopically not equivalent to a coherent turn of all spins as a single whole, and is presented in the form of a combination of quantum eigenstates corresponding to all possible $S_z$ spin numbers. Motion of the macroscopic quantum state is studied microscopically by solving a non-stationary Schroedinger equation and by means of a kinetic approach where damping of the spin-rotation mode is related to an elementary process - transformation of a 'Goldstone spin exciton' into a 'spin-wave exciton'. The system exhibits a spin stochastization mechanism (determined by spatial fluctuations of the g-factor) that provides the damping - the transverse spin relaxation - but is irrelevant to the decay of spin-wave excitons and thus does not provide the longitudinal relaxation, i.e., the recovery of the $S_z$ number to its equilibrium value.
0
1
0
0
0
0
Opinion diversity and community formation in adaptive networks
It is interesting and of significant importance to investigate how network structures co-evolve with opinions. The existing models of such co-evolution typically lead to final states where network nodes either reach a global consensus or break into separated communities, each of which holds its own community consensus. Such results, however, can hardly explain the richness of real-life observations that opinions are always diversified with no global or even community consensus, and that people seldom, if ever, totally cut themselves off from dissenters. In this article, we show that a simple model integrating consensus formation, link rewiring and opinion change allows complex system dynamics to emerge, driving the system into a dynamic equilibrium with the co-existence of diversified opinions. Specifically, similar opinion holders may form communities yet with no strict community consensus; and rather than being separated into disconnected communities, different communities remain interconnected by a non-trivial proportion of inter-community links. More importantly, we show that the complex dynamics may lead to different numbers of communities at steady state for a given tolerance between different opinion holders. We construct a framework for theoretically analyzing the co-evolution process. Theoretical analysis and extensive simulation results reveal some useful insights into the complex co-evolution process, including the formation of the dynamic equilibrium, the phase transition between different steady states with different numbers of communities, and the dynamics between opinion distribution and network modularity, etc.
1
1
0
0
0
0
On the local and global comparison of generalized Bajraktarević means
Given two continuous functions $f,g:I\to\mathbb{R}$ such that $g$ is positive and $f/g$ is strictly monotone, a measurable space $(T,A)$, a measurable family of $d$-variable means $m: I^d\times T\to I$, and a probability measure $\mu$ on the measurable sets $A$, the $d$-variable mean $M_{f,g,m;\mu}:I^d\to I$ is defined by $$ M_{f,g,m;\mu}(\pmb{x}) :=\left(\frac{f}{g}\right)^{-1}\left( \frac{\int_T f\big(m(x_1,\dots,x_d,t)\big) d\mu(t)} {\int_T g\big(m(x_1,\dots,x_d,t)\big) d\mu(t)}\right) \qquad(\pmb{x}=(x_1,\dots,x_d)\in I^d). $$ The aim of this paper is to study the local and global comparison problem of these means, i.e., to find conditions for the generating functions $(f,g)$ and $(h,k)$, for the families of means $m$ and $n$, and for the measures $\mu,\nu$ such that the comparison inequality $$ M_{f,g,m;\mu}(\pmb{x})\leq M_{h,k,n;\nu}(\pmb{x}) \qquad(\pmb{x}\in I^d) $$ be satisfied.
0
0
1
0
0
0