Dataset schema (from the preview header; each record below is one row: title, abstract, then the six label values in this order):
- title: string (length 7 to 239 characters)
- abstract: string (length 7 to 2.76k characters)
- cs: int64 (0 or 1)
- phy: int64 (0 or 1)
- math: int64 (0 or 1)
- stat: int64 (0 or 1)
- quantitative biology: int64 (0 or 1)
- quantitative finance: int64 (0 or 1)
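The columns above describe a multi-label text classification layout: two free-text fields (title, abstract) followed by six binary subject indicators. Below is a minimal sketch of how such records could be loaded and turned into text/label pairs; the file name "arxiv_abstracts.csv", the CSV storage format, and the pandas-based workflow are illustrative assumptions, not something stated in the preview.

# Minimal sketch (assumption: the records are stored locally as "arxiv_abstracts.csv"
# with exactly the columns listed in the schema above).
import pandas as pd

LABEL_COLUMNS = [
    "cs", "phy", "math", "stat",
    "quantitative biology", "quantitative finance",
]

df = pd.read_csv("arxiv_abstracts.csv")

# Combine the two text fields into one input string per record.
texts = (df["title"] + ". " + df["abstract"]).tolist()

# Each row of `labels` is a 0/1 vector over the six subject columns
# (multi-label: a record may belong to several subjects at once).
labels = df[LABEL_COLUMNS].to_numpy()

print(texts[0][:80])
print(labels[0])

The resulting text/label pairs can then be fed to any multi-label classifier; the records shown below follow exactly this layout.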
A two-dimensional hexagonal sheet of TiO$_2$
We report on the ab initio discovery of a novel putative ground state for quasi two-dimensional TiO$_2$ through a structural search using the minima hopping method with an artificial neural network potential. The structure is based on a honeycomb lattice and is energetically lower than the experimentally reported lepidocrocite sheet by 7~meV/atom, and merely 13~meV/atom higher in energy than the ground state rutile bulk structure. According to our calculations, the hexagonal sheet is stable against mechanical stress, it is chemically inert and can be deposited on various substrates without disrupting the structure. Its properties differ significantly from all known TiO$_2$ bulk phases with a large gap of 5.05~eV that can be tuned through strain engineering.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Fast Markov Chain Monte Carlo Algorithms via Lie Groups
From basic considerations of the Lie group that preserves a target probability measure, we derive the Barker, Metropolis, and ensemble Markov chain Monte Carlo (MCMC) algorithms, as well as two new MCMC algorithms. The convergence properties of these new algorithms successively improve on the state of the art. We illustrate the new algorithms with explicit numerical computations, and we empirically demonstrate the improved convergence on a spin glass.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
On the (parameterized) complexity of recognizing well-covered (r,l)-graphs
An $(r, \ell)$-partition of a graph $G$ is a partition of its vertex set into $r$ independent sets and $\ell$ cliques. A graph is an $(r, \ell)$-graph if it admits an $(r, \ell)$-partition. A graph is well-covered if every maximal independent set is also maximum. A graph is $(r,\ell)$-well-covered if it is both an $(r,\ell)$-graph and well-covered. In this paper we consider two different decision problems. In the $(r,\ell)$-Well-Covered Graph problem ($(r,\ell)$WCG for short), we are given a graph $G$, and the question is whether $G$ is an $(r,\ell)$-well-covered graph. In the Well-Covered $(r,\ell)$-Graph problem (WC$(r,\ell)$G for short), we are given an $(r,\ell)$-graph $G$ together with an $(r,\ell)$-partition of $V(G)$ into $r$ independent sets and $\ell$ cliques, and the question is whether $G$ is well-covered. We classify most of these problems into P, coNP-complete, NP-complete, NP-hard, or coNP-hard. Only the cases WC$(r,0)$G for $r\geq 3$ remain open. In addition, we consider the parameterized complexity of these problems for several choices of parameters, such as the size $\alpha$ of a maximum independent set of the input graph, its neighborhood diversity, its clique-width, or the number $\ell$ of cliques in an $(r, \ell)$-partition. In particular, we show that the parameterized problem of deciding whether a general graph is well-covered parameterized by $\alpha$ can be reduced to the WC$(0,\ell)$G problem parameterized by $\ell$. In addition, we prove that both problems are coW[2]-hard but can be solved in XP-time.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Comments on avalanche flow models based on the concept of random kinetic energy
In a series of papers, Bartelt and co-workers developed novel snow-avalanche models in which \emph{random kinetic energy} $R_K$ (a.k.a.\ granular temperature) is a key concept. The earliest models were for a single, constant-density layer, using a Voellmy model but with $R_K$-dependent friction parameters. This was then extended to variable density, and finally a suspension layer (powder-snow cloud) was added. The physical basis and mathematical formulation of these models are critically reviewed here, with the following main findings: (i) Key assumptions in the original RKE model differ substantially from established results on dense granular flows; in particular, the effective friction coefficient decreases to zero with velocity in the RKE model. (ii) In the variable-density model, non-canonical interpretation of the energy balance leads to a third-order evolution equation for the flow depth or density, whereas the stated assumptions imply a first-order equation. (iii) The model for the suspension layer neglects gravity and disregards well-established theoretical and experimental results on particulate gravity currents. Some options for improving these aspects are discussed.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Continuous Implicit Authentication for Mobile Devices based on Adaptive Neuro-Fuzzy Inference System
As mobile devices have become indispensable in modern life, mobile security is becoming much more important. Traditional password or PIN-like point-of-entry security measures score low on usability and are vulnerable to brute force and other types of attacks. In order to improve mobile security, an adaptive neuro-fuzzy inference system (ANFIS)-based implicit authentication system is proposed in this paper to provide authentication in a continuous and transparent manner. To illustrate the applicability and capability of ANFIS in our implicit authentication system, experiments were conducted on behavioural data collected for up to 12 weeks from different Android users. The ability of the ANFIS-based system to detect an adversary is also tested with scenarios involving an attacker with varying levels of knowledge. The results demonstrate that ANFIS is a feasible and efficient approach for implicit authentication, with an average user recognition rate of 95%. Moreover, the use of an ANFIS-based system for implicit authentication significantly reduces manual tuning and configuration tasks due to its self-learning capability.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Ultracold heteronuclear three-body systems: How diabaticity limits the universality of recombination into shallow dimers
The mass-imbalanced three-body recombination process that forms a shallow dimer is shown to possess a rich Efimov-Stückelberg landscape, with corresponding spectra that differ fundamentally from the homonuclear case. A semi-analytical treatment of the three-body recombination predicts an unusual spectrum with intertwined resonance peaks and minima, and yields in-depth insight into the behavior of the corresponding Efimov spectra. In particular, the patterns of the Efimov-Stückelberg landscape are shown to depend inherently on the degree of diabaticity of the three-body collisions, which strongly affects the universality of the heteronuclear Efimov states.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Variational Inference for Data-Efficient Model Learning in POMDPs
Partially observable Markov decision processes (POMDPs) are a powerful abstraction for tasks that require decision making under uncertainty, and capture a wide range of real world tasks. Today, effective planning approaches exist that generate effective strategies given black-box models of a POMDP task. Yet, an open question is how to acquire accurate models for complex domains. In this paper we propose DELIP, an approach to model learning for POMDPs that utilizes amortized structured variational inference. We empirically show that our model leads to effective control strategies when coupled with state-of-the-art planners. Intuitively, model-based approaches should be particularly beneficial in environments with changing reward structures, or where rewards are initially unknown. Our experiments confirm that DELIP is particularly effective in this setting.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
The ANAIS-112 experiment at the Canfranc Underground Laboratory
The ANAIS experiment aims at the confirmation of the DAMA/LIBRA signal at the Canfranc Underground Laboratory (LSC). Several 12.5 kg NaI(Tl) modules produced by Alpha Spectra Inc. have been operated there over the last few years in various set-ups; an outstanding light collection at the level of 15 photoelectrons per keV, which allows triggering at 1 keV of visible energy, has been measured for all of them, and a complete characterization of their background has been achieved. In the first months of 2017, the full ANAIS-112 set-up consisting of nine Alpha Spectra detectors with a total mass of 112.5 kg was commissioned at LSC, and the first dark matter run started in August 2017. Here, the latest results on the detectors' performance and measured background from the commissioning run will be presented and the sensitivity prospects of the ANAIS-112 experiment will be discussed.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A non-ellipticity result, or the impossible taming of the logarithmic strain measure
The logarithmic strain measure $\lVert\log U\rVert^2$, where $\log U$ is the principal matrix logarithm of the stretch tensor $U=\sqrt{F^TF}$ corresponding to the deformation gradient $F$ and $\lVert\,.\,\rVert$ denotes the Frobenius matrix norm, arises naturally via the geodesic distance of $F$ to the special orthogonal group $\operatorname{SO}(n)$. This purely geometric characterization of the strain measure suggests that a viable constitutive law of nonlinear elasticity may be derived from an elastic energy potential which depends solely on this intrinsic property of the deformation, i.e. that an energy function $W\colon\operatorname{GL^+}(n)\to\mathbb{R}$ of the form \begin{equation} W(F)=\Psi(\lVert\log U\rVert^2) \tag{1} \end{equation} with a suitable function $\Psi\colon[0,\infty)\to\mathbb{R}$ should be used to describe finite elastic deformations. However, while such energy functions enjoy a number of favorable properties, we show that it is not possible to find a strictly monotone function $\Psi$ such that $W$ of the form (1) is Legendre-Hadamard elliptic. Similarly, we consider the related isochoric strain measure $\lVert\operatorname{dev}_n\log U\rVert^2$, where $\operatorname{dev}_n \log U$ is the deviatoric part of $\log U$. Although a polyconvex energy function in terms of this strain measure has recently been constructed in the planar case $n=2$, we show that for $n\geq3$, no strictly monotone function $\Psi\colon[0,\infty)\to\mathbb{R}$ exists such that $F\mapsto \Psi(\lVert\operatorname{dev}_n\log U\rVert^2)$ is polyconvex or even rank-one convex. Moreover, a volumetric-isochorically decoupled energy of the form $F\mapsto \Psi(\lVert\operatorname{dev}_n\log U\rVert^2) + W_{\mathrm{vol}}(\det F)$ cannot be rank-one convex for any function $W_{\mathrm{vol}}\colon(0,\infty)\to\mathbb{R}$ if $\Psi$ is strictly monotone.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Interpretable Neural Networks for Predicting Mortality Risk using Multi-modal Electronic Health Records
We present an interpretable neural network for predicting an important clinical outcome (1-year mortality) from multi-modal Electronic Health Record (EHR) data. Our approach builds on prior multi-modal machine learning models by now enabling visualization of how individual factors contribute to the overall outcome risk, assuming other factors remain constant, which was previously impossible. We demonstrate the value of this approach using a large multi-modal clinical dataset including both EHR data and 31,278 echocardiographic videos of the heart from 26,793 patients. We generated separate models for (i) clinical data only (CD) (e.g. age, sex, diagnoses and laboratory values), (ii) numeric variables derived from the videos, which we call echocardiography-derived measures (EDM), and (iii) CD+EDM+raw videos (pixel data). The interpretable multi-modal model maintained performance compared to non-interpretable models (Random Forest, XGBoost), and also performed significantly better than a model using a single modality (average AUC=0.82). The new model also facilitated clinically relevant insights and multi-modal variable importance rankings, which were previously unavailable.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Automatic Temperature Setpoint Tuning of a Thermoforming Machine using Fuzzy Terminal Iterative Learning Control
This paper presents a new way to design a Fuzzy Terminal Iterative Learning Control (TILC) to control the heater temperature setpoints of a thermoforming machine. This fuzzy TILC is based on the inverse of a fuzzy model of the machine, and is built from experimental (or simulation) data with kriging interpolation. The Fuzzy Inference System usually used for a fuzzy model is the zero-order Takagi-Sugeno-Kang (TSK) system (constant consequents). In this paper, the first-order TSK system is used, with the fuzzy model rules expressed using matrices. This makes the inversion of the fuzzy model much easier than for a fuzzy model based on the zero-order TSK system. Based on simulation results, the proposed fuzzy TILC seems able to give a very good initial guess for the heater temperature setpoints, making it possible to have almost no wastage of plastic sheets. Simulation results show the effectiveness of the fuzzy TILC compared to a crisp TILC, even though the fuzzy controller is based on a fuzzy model built from noisy data.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A New Class of Integrals Involving Extended Hypergeometric Function
Our purpose in this paper is to investigate generalized integration formulas containing the extended generalized hypergeometric function; the results obtained are expressed in terms of the extended hypergeometric function. Certain special cases of the main results presented here are also pointed out for the extended Gauss hypergeometric and confluent hypergeometric functions.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The Special Theory of Relativity as Applied to the Born-Oppenheimer-Huang Approach
In two recent publications (Int. J. Quant. Chem. 114, 1645 (2014) and Molec. Phys. 114, 227 (2016)) it was shown that the Born-Huang (BH) treatment of a molecular system perturbed by an external field yields a set of decoupled vectorial wave equations, just as in electromagnetism. This finding led us to assert the existence of a new type of field, which we termed Molecular Fields. The existence of such fields implies that in the vicinity of conical intersections there is a mechanism that transforms a passing electric beam into a field which differs from the original electric field. This situation is reminiscent of what is encountered in astronomy, where black holes formed by massive stars may affect the nature of a nearby beam of light. Thus, if the non-adiabatic coupling terms (NACTs) with their singular points may affect the nature of such a beam (see the above two publications), it is of interest to know to what extent NACTs (and consequently also the BH equation) are affected by the special theory of relativity as introduced by Dirac. Indeed, applying the Dirac approach, we derive the relativistically affected NACTs as well as the corresponding BH equation.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
An efficient algorithm for finding all possible input nodes for controlling complex networks
Understanding the structural controllability of a complex network requires identifying a Minimum Input node Set (MIS) of the network. It has been suggested that finding an MIS is equivalent to computing a maximum matching of the network, where the unmatched nodes constitute an MIS. However, the maximum matching of a network is often not unique, and finding all MISs may provide deep insights into the controllability of the network. Finding all possible input nodes, which form the union of all MISs, is computationally challenging for large networks. Here we present an efficient enumerative algorithm for the problem. The main idea is to modify a maximum matching algorithm to make it efficient for finding all possible input nodes by computing only one MIS. We rigorously proved the correctness of the new algorithm and evaluated its performance on synthetic and large real networks. The experimental results showed that the new algorithm ran several orders of magnitude faster than the existing method on large real networks.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Transformation thermal convection: Cloaking, concentrating, and camouflage
Heat can generally transfer via thermal conduction, thermal radiation, and thermal convection. All the existing theories of transformation thermotics and optics can treat thermal conduction and thermal radiation, respectively. Unfortunately, thermal convection has never been addressed by transformation theories, due to the lack of a suitable theory, thus limiting applications associated with heat transfer through fluids (liquid or gas). Here, we develop, for the first time, a general theory of transformation thermal convection by considering the convection-diffusion equation, the Navier-Stokes equation, and the Darcy law. By introducing porous media, we obtain a set of coupled equations that keep their forms under coordinate transformation. As model applications, the theory helps to show the effects of cloaking, concentrating, and camouflage. Our finite element simulations confirm the theoretical findings. This work offers a general transformation theory for thermal convection, thus revealing some novel behaviors of thermal convection; it not only provides new hints on how to control heat transfer by combining thermal conduction, thermal radiation, and thermal convection, but also benefits the study of mass diffusion and other related fields that contain a set of equations and need to transform velocities at the same time.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Tunable terahertz reflection of graphene via ionic liquid gating
We report a highly efficient tunable THz reflector in graphene. By applying a small gate voltage (up to 3 V), the reflectance of graphene is modulated from a minimum of 0.79% to a maximum of 33.4% using graphene/ionic liquid structures at room temperature, and the reflection tuning is uniform within a wide spectral range (0.1 - 1.5 THz). Our observation is explained by the Drude model, which describes the THz wave-induced intraband transition in graphene. This tunable reflectance of graphene may contribute to broadband THz mirrors, deformable THz mirrors, variable THz beam splitters and other optical components.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Second-Order Kernel Online Convex Optimization with Adaptive Sketching
Kernel online convex optimization (KOCO) is a framework combining the expressiveness of non-parametric kernel models with the regret guarantees of online learning. First-order KOCO methods such as functional gradient descent require only $\mathcal{O}(t)$ time and space per iteration, and, when the only information on the losses is their convexity, achieve a minimax optimal $\mathcal{O}(\sqrt{T})$ regret. Nonetheless, many common losses in kernel problems, such as the squared loss, logistic loss, and squared hinge loss, possess stronger curvature that can be exploited. In this case, second-order KOCO methods achieve $\mathcal{O}(\log(\text{Det}(\boldsymbol{K})))$ regret, which we show scales as $\mathcal{O}(d_{\text{eff}}\log T)$, where $d_{\text{eff}}$ is the effective dimension of the problem and is usually much smaller than $\mathcal{O}(\sqrt{T})$. The main drawback of second-order methods is their much higher $\mathcal{O}(t^2)$ space and time complexity. In this paper, we introduce kernel online Newton step (KONS), a new second-order KOCO method that also achieves $\mathcal{O}(d_{\text{eff}}\log T)$ regret. To address the computational complexity of second-order methods, we introduce a new matrix sketching algorithm for the kernel matrix $\boldsymbol{K}_t$, and show that for a chosen parameter $\gamma \leq 1$ our Sketched-KONS reduces the space and time complexity by a factor of $\gamma^2$ to $\mathcal{O}(t^2\gamma^2)$ space and time per iteration, while incurring only $1/\gamma$ times more regret.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
On Corecursive Algebras for Functors Preserving Coproducts
For an endofunctor $H$ on a hyper-extensive category preserving countable coproducts we describe the free corecursive algebra on $Y$ as the coproduct of the final coalgebra for $H$ and the free $H$-algebra on $Y$. As a consequence, we derive that $H$ is a cia functor, i.e., its corecursive algebras are precisely the cias (completely iterative algebras). Also all functors $H(-) + Y$ are then cia functors. For finitary set functors we prove that, conversely, if $H$ is a cia functor, then it has the form $H = W \times (-) + Y$ for some sets $W$ and $Y$.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Semisimple characters for inner forms I: GL_n(D)
The article is about the representation theory of an inner form~$G$ of a general linear group over a non-archimedean local field. We introduce semisimple characters for~$G$ whose intertwining classes conjecturally describe, via the Local Langlands correspondence, the behavior on wild inertia. These characters also play a potential role in understanding the classification of irreducible smooth representations of inner forms of classical groups. We prove the intertwining formula for semisimple characters and an intertwining-implies-conjugacy theorem. Further, we show that endo-parameters for~$G$, i.e. invariants consisting of simple endo-classes and a numerical part, classify the intertwining classes of semisimple characters for~$G$. They should be the counterpart of restrictions of Langlands parameters to wild inertia under the Local Langlands correspondence.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Quantum Memristors in Quantum Photonics
We propose a method to build quantum memristors in quantum photonic platforms. We first design an effective beam splitter, which is tunable in real time, by means of a Mach-Zehnder-type array with two equal 50:50 beam splitters and a tunable retarder, which allows us to control its reflectivity. Then, we show that this tunable beam splitter, when equipped with weak measurements and classical feedback, behaves as a quantum memristor. Indeed, in order to prove its quantumness, we show how to encode quantum information in the coherent beams. Moreover, we estimate the memory capability of the quantum memristor. Finally, we show the feasibility of the proposed setup in integrated quantum photonics.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Spin dynamics and magnetic-field-induced polarization of excitons in ultrathin GaAs/AlAs quantum wells with indirect band gap and type-II band alignment
The exciton spin dynamics are investigated both experimentally and theoretically in two-monolayer-thick GaAs/AlAs quantum wells with an indirect band gap and a type-II band alignment. The magnetic-field-induced circular polarization of photoluminescence, $P_c$, is studied as a function of the magnetic field strength and direction as well as the sample temperature. The observed nonmonotonic behaviour of these functions results from the interplay of bright and dark exciton states contributing to the emission. To interpret the experiment, we have developed a kinetic master equation model which accounts for the dynamics of the spin states in this exciton quartet, radiative and nonradiative recombination processes, and the redistribution of excitons between these states as a result of spin relaxation. The model offers quantitative agreement with experiment and allows us to evaluate, for the studied structure, the heavy-hole $g$ factor, $g_{hh}=+3.5$, and the spin relaxation times of the electron, $\tau_{se} = 33~\mu$s, and hole, $\tau_{sh} = 3~\mu$s, bound in the exciton.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Resonant inelastic x-ray scattering probes the electron-phonon coupling in the spin-liquid kappa-(BEDT-TTF)2Cu2(CN)3
Resonant inelastic x-ray scattering at the N K edge reveals clearly resolved harmonics of the anion plane vibrations in the kappa-(BEDT-TTF)2Cu2(CN)3 spin-liquid insulator. Tuning the incoming light energy at the K edge of two distinct N sites permits different sets of phonon modes to be excited. The cyanide (CN) stretching mode is selected at the edge of the ordered N sites, which are more strongly connected to the BEDT-TTF molecules, while positionally disordered N sites show multi-mode excitation. Combining the measurements with calculations on an anion plane cluster permits an estimate of the site-dependent electron-phonon coupling of the modes related to nitrogen excitation.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Thermodynamic and kinetic fragility of Freon113: the most fragile plastic crystal
We present a dynamic and thermodynamic study of the orientational glass former Freon113 (CCl2F-CClF2) in order to analyze its kinetic and thermodynamic fragilities. Freon113 displays internal molecular degrees of freedom which promote a complex energy landscape. The experimental specific heat and its microscopic origin, the vibrational density of states from inelastic neutron scattering, together with the orientational dynamics obtained by means of dielectric spectroscopy, have revealed the highest fragility value, both thermodynamic and kinetic, found for this orientational glass former. The excess in both the Debye-reduced specific heat and the density of states (boson peak) evidences the existence of glassy low-energy excitations. We demonstrate that previously proposed correlations between the boson peak and the Debye specific heat value are elusive, as revealed by the clear counterexample of the case studied here.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Semi-supervised Feature Learning For Improving Writer Identification
Data augmentation is usually used by supervised learning approaches for offline writer identification, but such approaches require extra training data and potentially lead to overfitting errors. In this study, a semi-supervised feature learning pipeline was proposed to improve the performance of writer identification by training with extra unlabeled data and the original labeled data simultaneously. Specifically, we proposed a weighted label smoothing regularization (WLSR) method for data augmentation, which assigned the weighted uniform label distribution to the extra unlabeled data. The WLSR method could regularize the convolutional neural network (CNN) baseline to allow more discriminative features to be learned to represent the properties of different writing styles. The experimental results on well-known benchmark datasets (ICDAR2013 and CVL) showed that our proposed semi-supervised feature learning approach could significantly improve the baseline measurement and perform competitively with existing writer identification approaches. Our findings provide new insights into offline writer identification.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Error Bounds for Piecewise Smooth and Switching Regression
The paper deals with regression problems, in which the nonsmooth target is assumed to switch between different operating modes. Specifically, piecewise smooth (PWS) regression considers target functions switching deterministically via a partition of the input space, while switching regression considers arbitrary switching laws. The paper derives generalization error bounds in these two settings by following the approach based on Rademacher complexities. For PWS regression, our derivation involves a chaining argument and a decomposition of the covering numbers of PWS classes in terms of the ones of their component functions and the capacity of the classifier partitioning the input space. This yields error bounds with a radical dependency on the number of modes. For switching regression, the decomposition can be performed directly at the level of the Rademacher complexities, which yields bounds with a linear dependency on the number of modes. By using once more chaining and a decomposition at the level of covering numbers, we show how to recover a radical dependency. Examples of applications are given in particular for PWS and switching regression with linear and kernel-based component functions.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Civil Asset Forfeiture: A Judicial Perspective
Civil Asset Forfeiture (CAF) is a longstanding and controversial legal process viewed on the one hand as a powerful tool for combating drug crimes and on the other hand as a violation of the rights of US citizens. Data used to support both sides of the controversy to date has come from government sources representing records of the events at the time of occurrence. Court dockets represent litigation events initiated following the forfeiture, however, and can thus provide a new perspective on the CAF legal process. This paper will show new evidence supporting existing claims about the growth of the practice and bias in its application based on the quantitative analysis of data derived from these court cases.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Covariance-Insured Screening
Modern bio-technologies have produced a vast amount of high-throughput data with the number of predictors far greater than the sample size. In order to identify more novel biomarkers and understand biological mechanisms, it is vital to detect signals weakly associated with outcomes among ultrahigh-dimensional predictors. However, existing screening methods, which typically ignore correlation information, are likely to miss these weak signals. By incorporating the inter-feature dependence, we propose a covariance-insured screening methodology to identify predictors that are jointly informative but only marginally weakly associated with outcomes. The validity of the method is examined via extensive simulations and real data studies for selecting potential genetic factors related to the onset of cancer.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Amari Functors and Dynamics in Gauge Structures
We deal with finite-dimensional differentiable manifolds. All items we are concerned with are differentiable as well; the class of differentiability is $C^\infty$. A metric structure in a vector bundle $E$ is a constant-rank symmetric bilinear vector bundle homomorphism of $E\times E$ into the trivial line bundle. We address the question of whether a given gauge structure in $E$ is metric; that is our main concern. We use generalized Amari functors from information geometry to introduce two index functions defined on the moduli space of gauge structures in $E$. Besides, we introduce a differential equation whose analysis allows us to link the new index functions just mentioned with the main concern. We sketch applications in the differential geometry theory of statistics. Readers interested in a former forum on the question of whether a given connection is metric are referred to the appendix.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
An Experimental Analysis of the Power Consumption of Convolutional Neural Networks for Keyword Spotting
Nearly all previous work on small-footprint keyword spotting with neural networks quantifies model footprint in terms of the number of parameters and multiply operations for a feedforward inference pass. These values are, however, proxy measures, since empirical performance in actual deployments is determined by many factors. In this paper, we study the power consumption of a family of convolutional neural networks for keyword spotting on a Raspberry Pi. We find that both proxies are good predictors of energy usage, although the number of multiplies is more predictive than the number of model parameters. We also confirm that models with the highest accuracies are, unsurprisingly, the most power hungry.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Formal Geometric Quantization III, Functoriality in the spin-c setting
In this paper, we prove a functorial aspect of the formal geometric quantization procedure of non-compact spin-c manifolds.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Infinitesimal perturbation analysis for risk measures based on the Smith max-stable random field
When using risk or dependence measures based on a given underlying model, it is essential to be able to quantify the sensitivity or robustness of these measures with respect to the model parameters. In this paper, we consider an underlying model which is very popular in spatial extremes, the Smith max-stable random field. We study the sensitivity properties of risk or dependence measures based on the values of this field at a finite number of locations. Max-stable fields play a key role, e.g., in the modelling of natural disasters. As their multivariate density is generally not available for more than three locations, the Likelihood Ratio Method cannot be used to estimate the derivatives of the risk measures with respect to the model parameters. Thus, we focus on a pathwise method, the Infinitesimal Perturbation Analysis (IPA). We provide a convenient and tractable sufficient condition for performing IPA, which is intricate to obtain because of the very structure of max-stable fields involving pointwise maxima over an infinite number of random functions. IPA enables the consistent estimation of the considered measures' derivatives with respect to the parameters characterizing the spatial dependence. We carry out a simulation study which shows that the approach performs well in various configurations.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Modeling and Analysis of Two-Way Relay Non-Orthogonal Multiple Access Systems
A two-way relay non-orthogonal multiple access (TWR-NOMA) system is investigated, where two groups of NOMA users exchange messages with the aid of one half-duplex (HD) decode-and-forward (DF) relay. Since the signal-plus-interference-to-noise ratios (SINRs) of NOMA signals mainly depend on effective successive interference cancellation (SIC) schemes, imperfect SIC (ipSIC) and perfect SIC (pSIC) are taken into account. In order to characterize the performance of TWR-NOMA systems, we first derive closed-form expressions for both exact and asymptotic outage probabilities of NOMA users' signals with ipSIC/pSIC. Based on the derived results, the diversity order and throughput of the system are examined. Then we study the ergodic rates of users' signals by providing the asymptotic analysis in high SNR regimes. Lastly, numerical simulations are provided to verify the analytical results and show that: 1) TWR-NOMA is superior to TWR-OMA in terms of outage probability in low SNR regimes; 2) Due to the impact of interference signal (IS) at the relay, error floors and throughput ceilings exist in outage probabilities and ergodic rates for TWR-NOMA, respectively; and 3) In delay-limited transmission mode, TWR-NOMA with ipSIC and pSIC have almost the same energy efficiency. However, in delay-tolerant transmission mode, TWR-NOMA with pSIC is capable of achieving larger energy efficiency compared to TWR-NOMA with ipSIC.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Totally positive matrices and dilogarithm identities
We show that two involutions on the variety $N_n^+$ of upper triangular totally positive matrices are related, on the one hand, to the tetrahedron equation and, on the other hand, to the action of the symmetric group $S_3$ on some subvariety of $N_n^+$ and on the set of certain functions on $N_n^+$. Using these involutions, we obtain a family of dilogarithm identities involving minors of totally positive matrices. These identities admit a form manifestly invariant under the action of the symmetric group $S_3$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On the zeros of random harmonic polynomials: the Weyl model
Li and Wei (2009) studied the density of zeros of Gaussian harmonic polynomials with independent Gaussian coefficients. They derived a formula for the expected number of zeros of random harmonic polynomials as well as asymptotics for the case that the polynomials are drawn from the Kostlan ensemble. In this paper we extend their work to cover the case that the polynomials are drawn from the Weyl ensemble by deriving asymptotics for this class of harmonic polynomials.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Online deforestation detection
Deforestation detection using satellite images can make an important contribution to forest management. Current approaches can be broadly divided into those that compare two images taken at similar periods of the year and those that monitor changes by using multiple images taken during the growing season. The CMFDA algorithm described in Zhu et al. (2012) builds on the latter category by implementing a year-long, continuous, time-series based approach to monitoring images. This algorithm was developed for 30m resolution, 16-day frequency reflectance data from the Landsat satellite. In this work we adapt the algorithm to 1km, 16-day frequency reflectance data from the MODIS sensor aboard the Terra satellite. The CMFDA algorithm is composed of two submodels which are fitted on a pixel-by-pixel basis. The first estimates the amount of surface reflectance as a function of the day of the year. The second estimates the occurrence of a deforestation event by comparing the last few predicted and real reflectance values. For this comparison, the reflectance observations for six different bands are first combined into a forest index. Real and predicted values of the forest index are then compared, and high absolute differences for consecutive observation dates are flagged as deforestation events. Our adapted algorithm also uses the two-model framework. However, since the MODIS 13A2 dataset we use includes reflectance data for different spectral bands than those included in the Landsat dataset, we cannot construct the forest index. Instead we propose two contrasting approaches: a multivariate approach and an index approach similar to that of CMFDA.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Analysis of Different Approaches of Parallel Block Processing for K-Means Clustering Algorithm
Distributed computation has been a recent trend in engineering research. Parallel computation is widely used in different areas such as data mining, image processing, simulation models, and aerodynamics. One major use of parallel processing is clustering satellite images whose dimensions exceed 1000x1000 pixels on a legacy system. This paper focuses on different approaches to parallel block processing, namely row-shaped, column-shaped and square-shaped blocks, applied to a classification problem. These approaches are applied to the K-Means clustering algorithm, which is widely used for feature detection in high-resolution orthoimagery satellite images. The different approaches are analyzed and shown to reduce execution time and improve performance compared to the sequential K-Means clustering algorithm.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Preserving Order of Data When Validating Defect Prediction Models
[Context] The use of defect prediction models, such as classifiers, can support testing resource allocation by using data from previous releases of the same project to predict which software components are likely to be defective. A validation technique (hereinafter, technique) defines a specific way to split the available data into training and test sets to measure a classifier's accuracy. Time-series techniques have the unique ability to preserve the temporal order of data, i.e., they prevent the test set from containing data antecedent to the training set. [Aim] The aim of this paper is twofold: first we check if there is a difference in the classifiers' accuracy measured by time-series versus non-time-series techniques. Afterward, we check for a possible reason for this difference, i.e., if defect rates change across releases of a project. [Method] Our method consists of measuring the accuracy, i.e., AUC, of 10 classifiers on 13 open and two closed projects by using three validation techniques, namely cross validation, bootstrap, and walk-forward, where only the latter is a time-series technique. [Results] We find that the AUC of the same classifier used on the same project and measured by 10-fold varies compared to when measured by walk-forward in the range [-0.20, 0.22], and it is statistically different in 45% of the cases. Similarly, the AUC measured by bootstrap varies compared to when measured by walk-forward in the range [-0.17, 0.43], and it is statistically different in 56% of the cases. [Conclusions] We recommend choosing the technique to be used by carefully considering the conclusions to draw, the properties of the available datasets, and the level of realism with respect to the classifier usage scenario.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Graphettes: Constant-time determination of graphlet and orbit identity including (possibly disconnected) graphlets up to size 8
Graphlets are small connected induced subgraphs of a larger graph $G$. Graphlets are now commonly used to quantify local and global topology of networks in the field. Methods exist to exhaustively enumerate all graphlets (and their orbits) in large networks as efficiently as possible using orbit counting equations. However, the number of graphlets in $G$ is exponential in both the number of nodes and edges in $G$. Enumerating them all is already unacceptably expensive on existing large networks, and the problem will only get worse as networks continue to grow in size and density. Here we introduce an efficient method designed to aid statistical sampling of graphlets up to size $k=8$ from a large network. We define graphettes as the generalization of graphlets allowing for disconnected graphlets. Given a particular (undirected) graphette $g$, we introduce the idea of the canonical graphette $\mathcal K(g)$ as a representative member of the isomorphism group $Iso(g)$ of $g$. We compute the mapping $\mathcal K$, in the form of a lookup table, from all $2^{k(k-1)/2}$ undirected graphettes $g$ of size $k\le 8$ to their canonical representatives $\mathcal K(g)$, as well as the permutation that transforms $g$ to $\mathcal K(g)$. We also compute all automorphism orbits for each canonical graphette. Thus, given any $k\le 8$ nodes in a graph $G$, we can in constant time infer which graphette it is, as well as which orbit each of the $k$ nodes belongs to. Sampling a large number $N$ of such $k$-sets of nodes provides an approximation of both the distribution of graphlets and orbits across $G$, and the orbit degree vector at each node.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The toric sections: a simple introduction
We review, from a didactic point of view, the definition of a toric section and the different shapes it can take. We'll then discuss some properties of this curve, investigate its analogies and differences with the most renowned conic section and show how to build its general quartic equation. A curious and unexpected result was to find that, with some algebraic manipulation, a toric section can also be obtained as the intersection of a cylinder with a cone. Finally we'll show how it is possible to construct and represent toric sections in the 3D Graphics view of Geogebra. In the article only elementary algebra is used, and the requirements to follow it are just some notion of goniometry and of tridimensional analytic geometry.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Quantum Field Theory, Quantum Geometry, and Quantum Algebras
We demonstrate how one can see quantization of geometry, and quantum algebraic structure in supersymmetric gauge theory.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning
Embedded, continual learning for autonomous and adaptive behavior is a key application of neuromorphic hardware. However, neuromorphic implementations of embedded learning at large scales that are both flexible and efficient have been hindered by a lack of a suitable algorithmic framework. As a result, most neuromorphic hardware is trained off-line on large clusters of dedicated processors or GPUs and transferred post hoc to the device. We address this by introducing the neural and synaptic array transceiver (NSAT), a neuromorphic computational framework facilitating flexible and efficient embedded learning by matching algorithmic requirements and neural and synaptic dynamics. NSAT supports event-driven supervised, unsupervised and reinforcement learning algorithms including deep learning. We demonstrate the NSAT in a wide range of tasks, including the simulation of the Mihalas-Niebur neuron, dynamic neural fields, event-driven random back-propagation for event-based deep learning, event-based contrastive divergence for unsupervised learning, and voltage-based learning rules for sequence learning. We anticipate that this contribution will establish the foundation for a new generation of devices enabling adaptive mobile systems, wearable devices, and robots with data-driven autonomy.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Discrete Wavelet Transform Based Algorithm for Recognition of QRS Complexes
This paper proposes the application of the Discrete Wavelet Transform (DWT) to detect the QRS complex of an electrocardiogram (ECG) signal (an ECG is characterized by a recurrent wave sequence of P, QRS and T waves). The Wavelet Transform provides localization in both time and frequency. In the preprocessing stage, the DWT is used to remove the baseline wander in the ECG signal. The performance of the QRS detection algorithm is evaluated against the standard MIT-BIH (Massachusetts Institute of Technology, Beth Israel Hospital) Arrhythmia database. An average QRS complex detection rate of 98.1% is achieved.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Multipolar moments of weak lensing signal around clusters. Weighing filaments in harmonic space
Context. Upcoming weak lensing surveys such as Euclid will provide an unprecedented opportunity to quantify the geometry and topology of the cosmic web, in particular in the vicinity of lensing clusters. Aims. Understanding the connectivity of the cosmic web with unbiased mass tracers, such as weak lensing, is of prime importance to probe the underlying cosmology, seek dynamical signatures of dark matter, and quantify environmental effects on galaxy formation. Methods. Mock catalogues of galaxy clusters are extracted from the N-body PLUS simulation. For each cluster, the aperture multipolar moments of the convergence are calculated in two annuli (inside and outside the virial radius). By stacking their modulus, a statistical estimator is built to characterise the angular mass distribution around clusters. The moments are compared to predictions from perturbation theory and spherical collapse. Results. The main weakly chromatic excess of multipolar power on large scales is understood as arising from the contraction of the primordial cosmic web driven by the growing potential well of the cluster. Besides this boost, the quadrupole prevails in the cluster (ellipsoidal) core, while at the outskirts, harmonic distortions are spread on small angular modes, and trace the non-linear sharpening of the filamentary structures. Predictions for the signal amplitude as a function of the cluster-centric distance, mass, and redshift are presented. The prospects of measuring this signal are estimated for current and future lensing data sets. Conclusions. The Euclid mission should provide all the necessary information for studying the cosmic evolution of the connectivity of the cosmic web around lensing clusters using multipolar moments and probing unique signatures of, for example, baryons and warm dark matter.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Charge and spin transport on graphene grain boundaries in a quantizing magnetic field
We study charge and spin transport along grain boundaries in single layer graphene in the presence of a quantizing magnetic field. Transport states in a grain boundary are produced by hybridization of Landau zero modes with interfacial states. In selected energy regimes quantum Hall edge states can be deflected either fully or partially into grain boundary states. The degree of edge state deflection is studied in the nonlocal conductance and in the shot noise. We also consider the possibility of grain boundaries as gate-switchable spin filters, a functionality enabled by counterpropagating transport channels laterally confined in the grain boundary.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Point Cloud Movement For Fully Lagrangian Meshfree Methods
In Lagrangian meshfree methods, the underlying spatial discretization, referred to as a point cloud or a particle cloud, moves with the flow velocity. In this paper, we consider different numerical methods of performing this movement of points or particles. The movement is most commonly done by a first-order method, which assumes the velocity to be constant within a time step. We show that this method is very inaccurate and that it introduces volume and mass conservation errors. We further propose new methods which prescribe an additional ODE system that describes the characteristic velocity. Movement is then performed along this characteristic velocity. The first new way of moving points is an extension of mesh-based streamline tracing ideas to meshfree methods. In the second way, the movement is done based on the difference in approximated streamlines between two time levels, which approximates the pathlines in unsteady flow. Numerical comparisons show these methods to be vastly superior to the conventionally used first-order method.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
A fast ILP-based Heuristic for the robust design of Body Wireless Sensor Networks
We consider the problem of optimally designing a body wireless sensor network, while taking into account the uncertainty of data generation of biosensors. Since the related min-max robustness Integer Linear Programming (ILP) problem can be difficult to solve even for state-of-the-art commercial optimization solvers, we propose an original heuristic for its solution. The heuristic combines deterministic and probabilistic variable fixing strategies, guided by the information coming from strengthened linear relaxations of the ILP robust model, and includes a very large neighborhood search for reparation and improvement of generated solutions, formulated as an ILP problem solved exactly. Computational tests on realistic instances show that our heuristic finds solutions of much higher quality than a state-of-the-art solver and than an effective benchmark heuristic.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Strain broadening of the 1042-nm zero-phonon line of the NV- center in diamond: a promising spectroscopic tool for defect tomography
The negatively charged nitrogen-vacancy (NV-) center in diamond is a promising candidate for many quantum applications. Here, we examine the splitting and broadening of the center's infrared (IR) zero-phonon line (ZPL). We develop a model for these effects that accounts for the strain induced by photo-dependent microscopic distributions of defects. We apply this model to interpret observed variations of the IR ZPL shape with temperature and photoexcitation conditions. We identify an anomalous temperature-dependent broadening mechanism and find that defects other than the substitutional nitrogen center contribute significantly to strain broadening. The former conclusion suggests the presence of a strong Jahn-Teller effect in the center's singlet levels, and the latter indicates that major sources of broadening are yet to be identified. These conclusions have important implications for the understanding of the center and the engineering of diamond quantum devices. Finally, we propose that the IR ZPL can be used as a sensitive spectroscopic tool for probing microscopic strain fields and performing defect tomography.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Learning Depthwise Separable Graph Convolution from Data Manifold
The Convolutional Neural Network (CNN) has gained tremendous success in computer vision tasks with its outstanding ability to capture local latent features. Recently, there has been an increasing interest in extending convolution operations to non-Euclidean geometry. Although various types of convolution operations have been proposed for graphs or manifolds, their connections with traditional convolution over grid-structured data are not well understood. In this paper, we show that depthwise separable convolution can be successfully generalized for the unification of both graph-based and grid-based convolution methods. Based on this insight we propose a novel Depthwise Separable Graph Convolution (DSGC) approach which is compatible with traditional convolution networks and subsumes existing convolution methods as special cases. It is equipped with the combined strengths in model expressiveness, compatibility (relatively small number of parameters), modularity and computational efficiency in training. Extensive experiments show the outstanding performance of DSGC in comparison with strong baselines on multi-domain benchmark datasets.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Candidate Hα emission and absorption line sources in the Galactic Bulge Survey
We present a catalogue of candidate H{\alpha} emission and absorption line sources and blue objects in the Galactic Bulge Survey (GBS) region. We use a point source catalogue of the GBS fields (two strips of (l x b) = (6 x 1) degrees centred at b = 1.5 above and below the Galactic centre), covering the magnitude range 16 < r' < 22.5. We utilize (r'-i', r'-H{\alpha}) colour-colour diagrams to select H{\alpha} emission and absorption line candidates, and also identify blue objects (compared to field stars) using the r'-i' colour index. We identify 1337 H{\alpha} emission line candidates and 336 H{\alpha} absorption line candidates. These catalogues likely contain a plethora of sources, ranging from active (binary) stars, early-type emission line objects, cataclysmic variables (CVs) and low-mass X-ray binaries (LMXBs) to background active galactic nuclei (AGN). The 389 blue objects we identify are likely systems containing a compact object, such as CVs, planetary nebulae and LMXBs. Hot subluminous dwarfs (sdO/B stars) are also expected to be found as blue outliers. Crossmatching our outliers with the GBS X-ray catalogue yields sixteen sources, including seven (magnetic) CVs and one qLMXB candidate among the emission line candidates, and one background AGN for the absorption line candidates. One of the blue outliers is a high state AM CVn system. Spectroscopic observations combined with the multi-wavelength coverage of this area, including X-ray, ultraviolet and (time-resolved) optical and infrared observations, can be used to further constrain the nature of individual sources.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Timely Updates over an Erasure Channel
Using an age of information (AoI) metric, we examine the transmission of coded updates through a binary erasure channel to a monitor/receiver. We start by deriving the average status update age of an infinite incremental redundancy (IIR) system in which the transmission of a k-symbol update continues until k symbols are received. This system is then compared to a fixed redundancy (FR) system in which each update is transmitted as an n-symbol packet and the packet is successfully received if and only if at least k symbols are received. If fewer than k symbols are received, the update is discarded. Unlike the IIR system, the FR system requires no feedback from the receiver. For a single monitor system, we show that tuning the redundancy to the symbol erasure rate enables the FR system to perform as well as the IIR system. As the number of monitors is increased, the FR system outperforms the IIR system that guarantees delivery of all updates to all monitors.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Stellar Absorption Line Analysis of Local Star-Forming Galaxies: The Relation Between Stellar Mass, Metallicity, Dust Attenuation and Star Formation Rate
We analyze the optical continuum of star-forming galaxies in SDSS by fitting stacked spectra with stellar population synthesis models to investigate the relation between stellar mass, stellar metallicity, dust attenuation and star formation rate. We fit models calculated with star formation and chemical evolution histories that are derived empirically from multi-epoch observations of the stellar mass---star formation rate and the stellar mass---gas-phase metallicity relations, respectively. We also fit linear combinations of single burst models with a range of metallicities and ages. Star formation and chemical evolution histories are unconstrained for these models. The stellar mass---stellar metallicity relations obtained from the two methods agree with the relation measured from individual supergiant stars in nearby galaxies. These relations are also consistent with the relation obtained from emission line analysis of gas-phase metallicity after accounting for systematic offsets in the gas-phase-metallicity. We measure dust attenuation of the stellar continuum and show that its dependence on stellar mass and star formation rate is consistent with previously reported results derived from nebular emission lines. However, stellar continuum attenuation is smaller than nebular emission line attenuation. The continuum-to-nebular attenuation ratio depends on stellar mass and is smaller in more massive galaxies. Our consistent analysis of stellar continuum and nebular emission lines paves the way for a comprehensive investigation of stellar metallicities of star-forming and quiescent galaxies.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Stratification as a general variance reduction method for Markov chain Monte Carlo
The Eigenvector Method for Umbrella Sampling (EMUS) belongs to a popular class of methods in statistical mechanics which adapt the principle of stratified survey sampling to the computation of free energies. By theoretical analysis and numerical experiments, we demonstrate that EMUS is an efficient general method for computing averages with respect to arbitrary target distributions. We show that EMUS can be dramatically more efficient than direct MCMC when the target distribution is multimodal or when the goal is to compute tail probabilities.
Labels: cs=0, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0
Adaptive MCMC via Combining Local Samplers
Markov chain Monte Carlo (MCMC) methods are widely used in machine learning. One of the major problems with MCMC is the question of how to design chains that mix fast over the whole space; in particular, how to select the parameters of an MCMC algorithm. Here we take a different approach and, similarly to parallel MCMC methods, instead of trying to find a single chain to sample from the whole distribution, we combine samples from several chains run in parallel, each exploring only a few modes. The chains are prioritized based on the kernel Stein discrepancy, which provides a good measure of performance locally. The samples from the independent chains are combined using a novel technique for estimating the probability of different regions of the sample space. Experimental results demonstrate that the resulting algorithm may provide significant speedups in different sampling problems. Most importantly, when combined with the state-of-the-art NUTS algorithm as the base MCMC sampler, our algorithm remains competitive with the basic version of NUTS on sampling from unimodal distributions, while significantly outperforming state-of-the-art competitors on synthetic multimodal problems as well as on a challenging sensor localization task.
1
0
0
1
0
0
GAMBIT: The Global and Modular Beyond-the-Standard-Model Inference Tool
We describe the open-source global fitting package GAMBIT: the Global And Modular Beyond-the-Standard-Model Inference Tool. GAMBIT combines extensive calculations of observables and likelihoods in particle and astroparticle physics with a hierarchical model database, advanced tools for automatically building analyses of essentially any model, a flexible and powerful system for interfacing to external codes, a suite of different statistical methods and parameter scanning algorithms, and a host of other utilities designed to make scans faster, safer and more easily-extendible than in the past. Here we give a detailed description of the framework, its design and motivation, and the current models and other specific components presently implemented in GAMBIT. Accompanying papers deal with individual modules and present first GAMBIT results. GAMBIT can be downloaded from gambit.hepforge.org.
0
1
0
0
0
0
Odd holes in bull-free graphs
The complexity of testing whether a graph contains an induced odd cycle of length at least five is currently unknown. In this paper we show that this can be done in polynomial time if the input graph has no induced subgraph isomorphic to the bull (a triangle with two disjoint pendant edges).
1
0
0
0
0
0
Magnetic-Field-Induced Superconductivity in Ultrathin Pb Films with Magnetic Impurities
It is well known that external magnetic fields and magnetic moments of impurities both suppress superconductivity. Here, we demonstrate that their combined effect enhances the superconductivity of a few atomic layer thick Pb films grown on a cleaved GaAs(110) surface. A Ce-doped film, where superconductivity is totally suppressed at zero-field, actually turns superconducting when an external magnetic field is applied parallel to the conducting plane. For films with Mn adatoms, the screening of the magnetic moment by conduction electrons, i.e., the Kondo singlet formation, becomes important. We found that the degree of screening can be reduced by capping the Pb film with a Au layer, and observed the positive magnetic field dependence of the superconducting transition temperature.
0
1
0
0
0
0
Unimodal Category and the Monotonicity Conjecture
We completely characterize the unimodal category for functions $f:\mathbb R\to[0,\infty)$ using a decomposition theorem obtained by generalizing the sweeping algorithm of Baryshnikov and Ghrist. We also give a characterization of the unimodal category for functions $f:S^1\to[0,\infty)$ and provide an algorithm to compute the unimodal category of such a function in the case of finitely many critical points. We then turn to the monotonicity conjecture of Baryshnikov and Ghrist. We show that this conjecture is true for functions on $\mathbb R$ and $S^1$ using the above characterizations and that it is false on certain graphs and on the Euclidean plane by providing explicit counterexamples. We also show that it holds for functions on the Euclidean plane whose Morse-Smale graph is a tree using a result of Hickok, Villatoro and Wang.
0
0
1
0
0
0
Holomorphic primary fields in free CFT4 and Calabi-Yau orbifolds
Counting formulae for general primary fields in free four dimensional conformal field theories of scalars, vectors and matrices are derived. These are specialised to count primaries which obey extremality conditions defined in terms of the dimensions and left or right spins (i.e. in terms of relations between the charges under the Cartan subgroup of $SO(4,2)$). The construction of primary fields for scalar field theory is mapped to a problem of determining multi-variable polynomials subject to a system of symmetry and differential constraints. For the extremal primaries, we give a construction in terms of holomorphic polynomial functions on permutation orbifolds, which are shown to be Calabi-Yau spaces.
0
0
1
0
0
0
A Deep Generative Framework for Paraphrase Generation
Paraphrase generation is an important problem in NLP, with applications in question answering, information retrieval, information extraction, and conversation systems, to name a few. In this paper, we address the problem of generating paraphrases automatically. Our proposed method is based on a combination of deep generative models (VAE) with sequence-to-sequence models (LSTM) to generate paraphrases, given an input sentence. Traditional VAEs, when combined with recurrent neural networks, can generate free text but are not suitable for paraphrase generation for a given sentence. We address this problem by conditioning both the encoder and decoder sides of the VAE on the original sentence, so that the model can generate the given sentence's paraphrases. Unlike most existing models, our model is simple, modular and can generate multiple paraphrases for a given sentence. Quantitative evaluation of the proposed method on a benchmark paraphrase dataset demonstrates its efficacy and its performance improvement over state-of-the-art methods by a significant margin, whereas qualitative human evaluation indicates that the generated paraphrases are well-formed, grammatically correct, and relevant to the input sentence. Furthermore, we evaluate our method on a newly released question paraphrase dataset, and establish a new baseline for future research.
1
0
0
0
0
0
Stiff-response-induced instability for chemotactic bacteria and flux-limited Keller-Segel equation
Collective motion of chemotactic bacteria such as E. coli relies, at the individual level, on a continuous reorientation by runs and tumbles. It has been established that the length of a run is decided by a stiff response to a temporal sensing of chemical cues along the pathway. We describe a novel mechanism for pattern formation stemming from the stiffness of the chemotactic response, relying on a kinetic chemotaxis model which includes a recently discovered formalism for bacterial chemotaxis. We prove instability both for a microscopic description in the space-velocity space and for the macroscopic equation, a flux-limited Keller-Segel equation, which has attracted much attention recently. A remarkable property is that the unstable frequencies remain bounded, as is the case in Turing instability. Numerical illustrations based on a powerful Monte Carlo method show that the stationary homogeneous state of population density is destabilized and periodic patterns are generated in realistic ranges of parameters. These theoretical developments are in accordance with several biological observations.
0
1
1
0
0
0
The Pfaffian state in an electron gas with small Landau level gaps
Landau level mixing plays an important role in the Pfaffian (or anti-Pfaffian) states. In ZnO the Landau level gap is essentially an order of magnitude smaller than that in a GaAs quantum well. We introduce the screened Coulomb interaction in a single Landau level to tackle that situation. Here we study the overlap of the ground state and the Pfaffian (or anti-Pfaffian) state at even-denominator fractional quantum Hall (FQH) states present in ZnO. The overlap is strongly system size-dependent, which suggests a newly proposed particle-hole symmetric Pfaffian ground state in the extreme Landau level mixing limit. When the ratio of the Coulomb interaction to the Landau level gap $\kappa$ varies, we find a possible topological phase transition in the range $2 < \kappa < 3$, which was actually observed in an experiment. We then study how the width of the quantum well combined with screening influences the overlap.
0
1
0
0
0
0
Three-Dimensional Numerical Modeling of Shear Stimulation of Naturally Fractured Reservoirs
Shear dilation based hydraulic stimulations enable exploitation of geothermal energy from reservoirs with inadequate initial permeability. While contributing to enhancing the reservoir's permeability, hydraulic stimulation processes may lead to undesired seismic activity. Here, we present a three dimensional numerical model aiming to increase understanding of this mechanism and its consequences. The fractured reservoir is modeled as a network of explicitly represented large scale fractures immersed in a permeable rock matrix. The numerical formulation is constructed by coupling three physical processes: fluid flow, fracture deformation, and rock matrix deformation. For flow simulations, the discrete fracture matrix model is used, which allows the fluid transport from high permeable conductive fractures to the rock matrix and vice versa. The mechanical behavior of the fractures is modeled using a hyperbolic model with reversible and irreversible deformations. Linear elasticity is assumed for the mechanical deformation and stress alteration of the rock matrix. Fractures are modeled as lower dimensional surfaces embodied in the domain, subjected to specific governing equations for their deformation along the tangential and normal directions. Both the fluid flow and momentum balance equations are approximated by finite volume discretizations. The new numerical model is demonstrated considering a three dimensional fractured formation with a network of 20 explicitly represented fractures. The effects of fluid exchange between fractures and rock matrix on the permeability evolution and the generated seismicity are examined for test cases resembling realistic reservoir conditions.
0
1
0
0
0
0
Decentralized DC MicroGrid Monitoring and Optimization via Primary Control Perturbations
We treat the emerging power systems with direct current (DC) MicroGrids, characterized by high penetration of power electronic converters. We rely on the power electronics to propose a decentralized solution for autonomous learning of and adaptation to the operating conditions of the DC MicroGrids; the goal is to eliminate the need to rely on an external communication system for such a purpose. The solution works within the primary droop control loops and uses only local bus voltage measurements. Each controller is able to estimate (i) the generation capacities of power sources, (ii) the load demands, and (iii) the conductances of the distribution lines. To define a well-conditioned estimation problem, we employ a decentralized strategy where the primary droop controllers temporarily switch between operating points in a coordinated manner, following amplitude-modulated training sequences. We study the use of the estimator in a decentralized solution of the Optimal Economic Dispatch problem. The evaluations confirm the usefulness of the proposed solution for autonomous MicroGrid operation.
1
0
0
0
0
0
AI4AI: Quantitative Methods for Classifying Host Species from Avian Influenza DNA Sequence
Avian Influenza outbreaks cause millions of dollars in damage each year globally, especially in Asian countries such as China and South Korea. The impact magnitude of an outbreak directly correlates with the time required to fully understand the influenza virus, particularly its interspecies pathogenicity. The procedure involves laboratory tests that demand resources typically lacking in an outbreak emergency. In this study, we propose new quantitative methods utilizing machine learning and deep learning to correctly classify host species given raw DNA sequence data of the influenza virus, and to provide probabilities for each classification. The best deep learning models achieve top-1 classification accuracy of 47%, and top-3 classification accuracy of 82%, on a dataset of 11 host species classes.
0
0
0
1
1
0
Constructing tame supercuspidal representations
A new approach to Jiu-Kang Yu's construction of tame supercuspidal representations of $p$-adic reductive groups is presented. Connections with the theory of cuspidal Deligne-Lusztig representations of finite groups of Lie type are also discussed.
0
0
1
0
0
0
Cohomologies on hypercomplex manifolds
We review some cohomological aspects of complex and hypercomplex manifolds and underline the differences between both realms. Furthermore, we try to highlight the similarities between compact complex surfaces on one hand and compact hypercomplex manifolds of real dimension 8 with holonomy of the Obata connection in SL(2,H) on the other hand.
0
0
1
0
0
0
Automorphism groups of quandles and related groups
In this paper we study different questions concerning automorphisms of quandles. For a conjugation quandle $Q={\rm Conj}(G)$ of a group $G$ we determine several subgroups of ${\rm Aut}(Q)$ and find necessary and sufficient conditions when these subgroups coincide with the whole group ${\rm Aut}(Q)$. In particular, we prove that ${\rm Aut}({\rm Conj}(G))={\rm Z}(G)\rtimes {\rm Aut}(G)$ if and only if either ${\rm Z}(G)=1$ or $G$ is one of the groups $\mathbb{Z}_2$, $\mathbb{Z}_2^2$ or $\mathbb{Z}_3$. For a big list of Takasaki quandles $T(G)$ of an abelian group $G$ with $2$-torsion we prove that the group of inner automorphisms ${\rm Inn}(T(G))$ is a Coxeter group. We study automorphisms of certain extensions of quandles and determine some interesting subgroups of the automorphism groups of these quandles. Also we classify finite quandles $Q$ with $3\leq k$-transitive action of ${\rm Aut}(Q)$.
0
0
1
0
0
0
Forward Amortized Inference for Likelihood-Free Variational Marginalization
In this paper, we introduce a new form of amortized variational inference by using the forward KL divergence in a joint-contrastive variational loss. The resulting forward amortized variational inference is a likelihood-free method as its gradient can be sampled without bias and without requiring any evaluation of either the model joint distribution or its derivatives. We prove that our new variational loss is optimized by the exact posterior marginals in the fully factorized mean-field approximation, a property that is not shared with the more conventional reverse KL inference. Furthermore, we show that forward amortized inference can be easily marginalized over large families of latent variables in order to obtain a marginalized variational posterior. We consider two examples of variational marginalization. In our first example we train a Bayesian forecaster for predicting a simplified chaotic model of atmospheric convection. In the second example we train an amortized variational approximation of a Bayesian optimal classifier by marginalizing over the model space. The result is a powerful meta-classification network that can solve arbitrary classification problems without further training.
0
0
0
1
0
0
Beyond-CMOS Device Benchmarking for Boolean and Non-Boolean Logic Applications
The latest results of benchmarking research are presented for a variety of beyond-CMOS charge- and spin-based devices. In addition to improving the device-level models, several new device proposals and a few majorly modified devices are investigated. Deep pipelining circuits are employed to boost the throughput of low-power devices. Furthermore, the benchmarking methodology is extended to interconnect-centric analyses and non-Boolean logic applications. In contrast to Boolean circuits, non-Boolean circuits based on the cellular neural network demonstrate that spintronic devices can potentially outperform conventional CMOS devices.
1
0
0
0
0
0
Cosmology and Convention
I argue that some important elements of the current cosmological model are "conventionalist" in the sense defined by Karl Popper. These elements include dark matter and dark energy; both are auxiliary hypotheses that were invoked in response to observations that falsified the standard model as it existed at the time. The use of conventionalist stratagems in response to unexpected observations implies that the field of cosmology is in a state of "degenerating problemshift" in the language of Imre Lakatos. I show that the "concordance" argument, often put forward by cosmologists in support of the current paradigm, is weaker than the convergence arguments that were made in the past in support of the atomic theory of matter or the quantization of energy.
0
1
0
0
0
0
Kernel-estimated Nonparametric Overlap-Based Syncytial Clustering
Standard clustering algorithms usually find regular-structured clusters such as ellipsoidally- or spherically-dispersed groups, but are more challenged with groups lacking formal structure or definition. Syncytial clustering is the name that we introduce for methods that merge groups obtained from standard clustering algorithms in order to reveal complex group structure in the data. Here, we develop a distribution-free fully-automated syncytial clustering algorithm that can be used with $k$-means and other algorithms. Our approach computes the cumulative distribution function of the normed residuals from an appropriately fit $k$-groups model and calculates the nonparametric overlap between each pair of groups. Groups with high pairwise overlaps are merged as long as the generalized overlap decreases. Our methodology is always a top performer in identifying groups with regular and irregular structures in several datasets. The approach is also used to identify the distinct kinds of gamma ray bursts in the Burst and Transient Source Experiment 4Br catalog and also the distinct kinds of activation in a functional Magnetic Resonance Imaging study.
0
0
0
1
0
0
Goal-oriented Trajectories for Efficient Exploration
Exploration is a difficult challenge in reinforcement learning, and even recent state-of-the-art curiosity-based methods rely on the simple epsilon-greedy strategy to generate novelty. We argue that pure random walks do not properly expand the exploration area in most environments, and propose to replace single random action choices by random goal selection followed by several steps in their direction. This approach is compatible with any curiosity-based exploration and off-policy reinforcement learning agents and generates longer and safer trajectories than individual random actions. To illustrate this, we present a task-independent agent that learns to reach coordinates in screen frames and demonstrate its ability to explore with the game Super Mario Bros., significantly improving the score of a baseline DQN agent.
0
0
0
1
0
0
Adaptive Sampling Strategies for Stochastic Optimization
In this paper, we propose a stochastic optimization method that adaptively controls the sample size used in the computation of gradient approximations. Unlike other variance reduction techniques that either require additional storage or the regular computation of full gradients, the proposed method reduces variance by increasing the sample size as needed. The decision to increase the sample size is governed by an inner product test that ensures that search directions are descent directions with high probability. We show that the inner product test improves upon the well known norm test, and can be used as a basis for an algorithm that is globally convergent on nonconvex functions and enjoys a global linear rate of convergence on strongly convex functions. Numerical experiments on logistic regression problems illustrate the performance of the algorithm.
0
0
0
1
0
0
Explicit estimates for the distribution of numbers free of large prime factors
There is a large literature on the asymptotic distribution of numbers free of large prime factors, so-called $\textit{smooth}$ or $\textit{friable}$ numbers. But there is very little known about this distribution that is numerically explicit. In this paper we follow the general plan for the saddle point argument of Hildebrand and Tenenbaum, giving explicit and fairly tight intervals in which the true count lies. We give two numerical examples of our method, and with the larger one, our interval is so tight we can exclude the famous Dickman-de Bruijn asymptotic estimate as too small and the Hildebrand-Tenenbaum main term as too large.
0
0
1
0
0
0
Completely integrally closed Prufer $v$-multiplication domains
We study the effects on $D$ of assuming that the power series ring $D[[X]]$ is a $v$-domain or a PVMD. We show that a PVMD $D$ is completely integrally closed if and only if $\bigcap_{n=1}^{\infty }(I^{n})_{v}=(0)$ for every proper $t$-invertible $t$-ideal $I$ of $D$. Using this, we show that if $D$ is an AGCD domain, then $D[[X]]$ is integrally closed if and only if $D$ is a completely integrally closed PVMD with torsion $t$-class group. We also determine several classes of PVMDs for which being Archimedean is equivalent to being completely integrally closed and give some new characterizations of integral domains related to Krull domains.
0
0
1
0
0
0
Masked Autoregressive Flow for Density Estimation
Autoregressive models are among the best performing neural density estimators. We describe an approach for increasing the flexibility of an autoregressive model, based on modelling the random numbers that the model uses internally when generating data. By constructing a stack of autoregressive models, each modelling the random numbers of the next model in the stack, we obtain a type of normalizing flow suitable for density estimation, which we call Masked Autoregressive Flow. This type of flow is closely related to Inverse Autoregressive Flow and is a generalization of Real NVP. Masked Autoregressive Flow achieves state-of-the-art performance in a range of general-purpose density estimation tasks.
1
0
0
1
0
0
Temperature-dependent optical properties of plasmonic titanium nitride thin films
Due to their exceptional plasmonic properties, noble metals such as gold and silver have been the materials of choice for the demonstration of various plasmonic and nanophotonic phenomena. However, noble metals' softness, lack of tailorability and low melting point, along with challenges in thin film fabrication and device integration, have prevented the realization of real-life plasmonic devices. In recent years, titanium nitride (TiN) has emerged as a promising plasmonic material with good metallic and refractory (high temperature stable) properties. The refractory nature of TiN could enable practical plasmonic devices operating at elevated temperatures for energy conversion and harsh-environment industries such as gas and oil. Here we report on the temperature-dependent dielectric functions of TiN thin films of varying thicknesses in the technologically relevant visible and near-infrared wavelength range from 330 nm to 2000 nm for temperatures up to 900 °C using in-situ high temperature ellipsometry. Our findings show that the complex dielectric function of TiN at elevated temperatures deviates from the optical parameters at room temperature, indicating degradation in plasmonic properties both in the real and imaginary parts of the dielectric constant. However, quite strikingly, the relative changes of the optical properties of TiN are significantly smaller compared to its noble metal counterparts. Using simulations, we demonstrate that incorporating the temperature-induced deviations into the numerical models leads to significant differences in the optical responses of high temperature nanophotonic systems. These studies hold the key for accurate modeling of high temperature TiN-based optical elements and nanophotonic systems for energy conversion, harsh-environment sensors and heat-assisted applications.
0
1
0
0
0
0
Succinct Partial Sums and Fenwick Trees
We consider the well-studied partial sums problem in succinct space, where one is to maintain an array of n k-bit integers subject to updates such that partial sums queries can be efficiently answered. We present two succinct versions of the Fenwick Tree, which is known for its simplicity and practicality. Our results hold in the encoding model where one is allowed to reuse the space from the input data. Our main result is the first that only requires nk + o(n) bits of space while still supporting sum/update in O(log_b n) / O(b log_b n) time where 2 <= b <= log^O(1) n. The second result shows how optimal time for sum/update can be achieved while only slightly increasing the space usage to nk + o(nk) bits. Beyond Fenwick Trees, the results are primarily based on bit-packing and sampling - making them very practical - and they also allow for simple optimal parallelization.
1
0
0
0
0
0
Deep Morphing: Detecting bone structures in fluoroscopic X-ray images with prior knowledge
We propose approaches based on deep learning to localize objects in images when only a small training dataset is available and the images have low quality. That applies to many problems in medical image processing, and in particular to the analysis of fluoroscopic (low-dose) X-ray images, where the images have low contrast. We solve the problem by incorporating high-level information about the objects, which could be a simple geometrical model, like a circular outline, or a more complex statistical model. A simple geometrical representation can sufficiently describe some objects and only requires minimal labeling. Statistical shape models can be used to represent more complex objects. We propose computationally efficient two-stage approaches, which we call deep morphing, for both representations by fitting the representation to the output of a deep segmentation network.
0
0
0
1
0
0
Persistence Flamelets: multiscale Persistent Homology for kernel density exploration
In recent years there has been noticeable interest in the study of the "shape of data". Among the many ways a "shape" could be defined, topology is the most general one, as it describes an object in terms of its connectivity structure: connected components (topological features of dimension 0), cycles (features of dimension 1) and so on. There is a growing number of techniques, generally denoted as Topological Data Analysis, aimed at estimating topological invariants of a fixed object; when we allow this object to change, however, little has been done to investigate the evolution of its topology. In this work we define the Persistence Flamelets, a multiscale version of one of the most popular tools in TDA, the Persistence Landscape. We examine its theoretical properties and we show how it could be used to gain insights on the KDE bandwidth parameter.
0
0
1
1
0
0
Effective modeling of ground penetrating radar in fractured media using analytic solutions for propagation, thin-bed interaction and dipolar scattering
We propose a new approach to model ground penetrating radar signals that propagate through a homogeneous and isotropic medium, and are scattered at thin planar fractures of arbitrary dip, azimuth, thickness and material filling. We use analytical expressions for the Maxwell equations in a homogeneous space to describe the propagation of the signal in the rock matrix, and account for frequency-dependent dispersion and attenuation through the empirical Jonscher formulation. We discretize fractures into elements that are linearly polarized by the incoming electric field that arrives from the source to each element, locally, as a plane wave. To model the effective source wavelet we use a generalized Gamma distribution to define the antenna dipole moment. We combine microscopic and macroscopic Maxwell's equations to derive an analytic expression for the response of each element, which describes the full electric dipole radiation patterns along with effective reflection coefficients of thin layers. Our results compare favorably with finite-difference time-domain modeling in the case of constant electrical parameters of the rock-matrix and fracture filling. Compared with traditional finite-difference time-domain modeling, the proposed approach is faster and more flexible in terms of fracture orientations. A comparison with published laboratory results suggests that the modeling approach can reproduce the main characteristics of the reflected wavelet.
0
1
0
0
0
0
Crystallites in Color Glass Beads of the 19th Century and Their Influence on Fatal Deterioration of Glass
Glass corrosion is a crucial problem in the keeping and conservation of beadworks in museums. All kinds of glass beads undergo deterioration, but blue-green lead-potassium glass beads of the 19th century are subject to destruction to the greatest extent. Blue-green lead-potassium glass beads of the 19th century obtained from exhibits kept in Russian museums were studied in order to determine the causes of the observed phenomenon. For comparison, yellow lead beads of the 19th century were also explored. Both kinds of beads contain Sb, but the yellow ones are stable. Using scanning electron microscopy, energy dispersive X-ray microspectrometry, electron backscatter diffraction, transmission electron microscopy and X-ray powder analysis, we have registered the presence of crystallites of orthorhombic KSbOSiO$_4$ and cubic Pb$_2$Sb$_{1.5}$Fe$_{0.5}$O$_{6.5}$ in the glass matrix of blue-green and yellow beads, respectively. Both compounds form at rather high temperatures, obviously during glass melting and/or melt cooling. We suppose that the crystallites generate internal tensile strain in the glass during its cooling, which causes the formation of multiple microcracks in inner domains of blue-green beads. We suggest that the deterioration degree depends on the quantity of the precipitates, their sizes and their temperature coefficients of linear expansion. In blue-green beads, the crystallites are distributed in size from $\sim\,$200 nm to several tens of $\mu$m and tend to gather in large colonies. The sizes of crystallites in yellow beads are several hundreds of nm and their clusters contain few crystallites. This explains the difference in corrosion of these two kinds of beads containing crystals of Sb compounds.
0
1
0
0
0
0
CO2 infrared emission as a diagnostic of planet-forming regions of disks
[Abridged] The infrared ro-vibrational emission lines from organic molecules in the inner regions of protoplanetary disks are unique probes of the physical and chemical structure of planet forming regions and the processes that shape them. The non-LTE excitation effects of carbon dioxide (CO2) are studied in a full disk model to evaluate: (i) what the emitting regions of the different CO2 ro-vibrational bands are; (ii) how the CO2 abundance can best be traced using CO2 ro-vibrational lines with future JWST data; and (iii) what the excitation and abundances tell us about the inner disk physics and chemistry. CO2 is a major ice component and its abundance can potentially test models with migrating icy pebbles across the iceline. A full non-LTE CO2 excitation model has been built. The characteristics of the model are tested using non-LTE slab models. Subsequently the CO2 line formation has been modelled using a two-dimensional disk model representative of T-Tauri disks. The CO2 gas that emits in the 15 $\mu$m and 4.5 $\mu$m regions of the spectrum is not in LTE and arises in the upper layers of disks, pumped by infrared radiation. The v$_2$ 15 $\mu$m feature is dominated by optically thick emission for most of the models that fit the observations and increases linearly with source luminosity. Its narrowness compared with that of other molecules stems from a combination of the low rotational excitation temperature (~250 K) and the inherently narrower feature for CO2. The inferred CO2 abundances derived for observed disks are more than two orders of magnitude lower than those in interstellar ices (~10$^5$), similar to earlier LTE disk estimates. Line-to-continuum ratios are low, of order a few %, thus high signal-to-noise (S/N > 300) observations are needed for individual line detections. Prospects of accurate abundance retrieval with JWST-MIRI and JWST-NIRSpec are discussed.
0
1
0
0
0
0
Public Evidence from Secret Ballots
Elections seem simple---aren't they just counting? But they have a unique, challenging combination of security and privacy requirements. The stakes are high; the context is adversarial; the electorate needs to be convinced that the results are correct; and the secrecy of the ballot must be ensured. And they have practical constraints: time is of the essence, and voting systems need to be affordable and maintainable, and usable by voters, election officials, and pollworkers. It is thus not surprising that voting is a rich research area spanning theory, applied cryptography, practical systems analysis, usable security, and statistics. Election integrity involves two key concepts: convincing evidence that outcomes are correct and privacy, which amounts to convincing assurance that there is no evidence about how any given person voted. These are obviously in tension. We examine how current systems walk this tightrope.
1
0
0
0
0
0
Imprints of Zero-Age Velocity Dispersions and Dynamical Heating on the Age-Velocity dispersion Relation
Observations of stars in the solar vicinity show a clear tendency for old stars to have larger velocity dispersions. This relation is called the age-velocity dispersion relation (AVR) and it is believed to provide insight into the heating history of the Milky Way galaxy. Here, in order to investigate the origin of the AVR, we performed smoothed particle hydrodynamic simulations of self-gravitating multiphase gas disks in static disk-halo potentials. Star formation from cold and dense gas is taken into account, and we analyze the evolution of these star particles. We find that the exponents of the simulated AVR and the ratio of the radial to vertical velocity dispersion are close to the observed values. We also find that the simulated AVR is not a simple consequence of dynamical heating. The evolution tracks of stars formed at different epochs evolve gradually in the age-velocity dispersion plane as a result of: (1) the decrease in velocity dispersion in star forming regions, and (2) the decrease in the amount of cold, dense gas acting as scattering sources. These results suggest that the AVR involves not only the heating history of a stellar disk, but also the historical evolution of the ISM in a galaxy.
0
1
0
0
0
0
Spatial Random Sampling: A Structure-Preserving Data Sketching Tool
Random column sampling is not guaranteed to yield data sketches that preserve the underlying structures of the data and may not sample sufficiently from less-populated data clusters. Also, adaptive sampling can often provide accurate low rank approximations, yet may fall short of producing descriptive data sketches, especially when the cluster centers are linearly dependent. Motivated by that, this paper introduces a novel randomized column sampling tool dubbed Spatial Random Sampling (SRS), in which data points are sampled based on their proximity to randomly sampled points on the unit sphere. The most compelling feature of SRS is that the corresponding probability of sampling from a given data cluster is proportional to the surface area the cluster occupies on the unit sphere, independently from the size of the cluster population. Although it is fully randomized, SRS is shown to provide descriptive and balanced data representations. The proposed idea addresses a pressing need in data science and holds potential to inspire many novel approaches for analysis of big data.
1
0
0
1
0
0
Adiponitrile-LiTFSI solution as alkylcarbonate free electrolyte for LTO/NMC Li-ion batteries
Recently, dinitriles (NC(CH2)nCN) and especially adiponitrile (ADN, n=4) have attracted attention as safe electrolyte solvents due to their chemical stability, high boiling points, high flash points and low vapor pressure. The good solvating properties of ADN toward lithium salts and its high electrochemical stability (~ 6 V vs. Li/Li+) make it suitable for safer Li-ion cells without performance loss. In this study, ADN is used as a single electrolyte solvent with lithium bis(trifluoromethanesulfonyl)imide (LiTFSI). This electrolyte allows the use of aluminum collectors, as almost no corrosion occurs at voltages up to 4.2 V. Physico-chemical properties of the ADN-LiTFSI electrolyte such as salt dissolution, conductivity and viscosity were determined. The cycling performances of batteries using Li4Ti5O12 (LTO) as anode and LiNi1/3Co1/3Mn1/3O2 (NMC) as cathode were determined. The results indicate that LTO/NMC batteries exhibit excellent rate capabilities with a coulombic efficiency close to 100%. As an example, cells were able to reach a capacity of 165 mAh.g-1 at 0.1C and a capacity retention of more than 98% after 200 cycles at 0.5C. In addition, analyses of the electrodes by SEM, XPS and electrochemical impedance spectroscopy after cycling confirm minimal surface changes of the electrodes in the studied battery system.
0
1
0
0
0
0
Sea of Lights: Practical Device-to-Device Security Bootstrapping in the Dark
Practical solutions to bootstrap security in today's information and communication systems critically depend on centralized services for authentication as well as key and trust management. This is particularly true for mobile users. Identity providers such as Google or Facebook have active user bases of two billion each, and the subscriber number of mobile operators exceeds five billion unique users as of early 2018. If these centralized services go completely `dark' due to natural or man-made disasters, large-scale blackouts, or country-wide censorship, the users are left without practical solutions to bootstrap security on their mobile devices. Existing distributed solutions, for instance, the so-called web-of-trust, are not sufficiently lightweight. Furthermore, they support neither cross-application use on mobile devices nor strong protection of key material using hardware security modules. We propose Sea of Lights (SoL), a practical lightweight scheme for bootstrapping device-to-device security wirelessly, thus enabling secure distributed self-organized networks. It is tailored to operate `in the dark' and provides strong protection of key material as well as an intuitive means to build a lightweight web-of-trust. SoL is particularly well suited for local or urban operation in scenarios such as the coordination of emergency response, where it helps contain and limit the spreading of misinformation. As a proof of concept, we implement SoL on the Android platform and hence test its feasibility on real mobile devices. We further evaluate its key performance aspects using simulation.
1
0
0
0
0
0
On the correlation between a level of structure order and properties of composites. In Memory of Yu.L. Klimontovich
We propose a computerized method for calculating the relative level of order in composites. A correlation between the level of structural order and the properties of solids is shown. We also discuss the possibility of clarifying the terminology used in describing such structures.
1
0
0
0
0
0
Schmidt's subspace theorem for moving hypersurface targets
It was discovered that there is a formal analogy between Nevanlinna theory and Diophantine approximation. Via Vojta's dictionary, the Second Main Theorem in Nevanlinna theory corresponds to Schmidt's Subspace Theorem in Diophantine approximation. Recently, Cherry, Dethloff, and Tan (arXiv:1503.08801v2 [math.CV]) obtained a Second Main Theorem for moving hypersurfaces intersecting projective varieties. In this paper, we shall give the counterpart of their Second Main Theorem in Diophantine approximation.
0
0
1
0
0
0
HoNVis: Visualizing and Exploring Higher-Order Networks
Unlike the conventional first-order network (FoN), the higher-order network (HoN) provides a more accurate description of transitions by creating additional nodes to encode higher-order dependencies. However, there exists no visualization and exploration tool for the HoN. For applications such as the development of strategies to control species invasion through global shipping which is known to exhibit higher-order dependencies, the existing FoN visualization tools are limited. In this paper, we present HoNVis, a novel visual analytics framework for exploring higher-order dependencies of the global ocean shipping network. Our framework leverages coordinated multiple views to reveal the network structure at three levels of detail (i.e., the global, local, and individual port levels). Users can quickly identify ports of interest at the global level and specify a port to investigate its higher-order nodes at the individual port level. Investigating a larger-scale impact is enabled through the exploration of HoN at the local level. Using the global ocean shipping network data, we demonstrate the effectiveness of our approach with a real-world use case conducted by domain experts specializing in species invasion. Finally, we discuss the generalizability of this framework to other real-world applications such as information diffusion in social networks and epidemic spreading through air transportation.
1
1
0
0
0
0
Direct observation of coupled geochemical and geomechanical impacts on chalk microstructural evolution under elevated CO2 pressure. Part I
The dissolution of porous media in a geologic formation induced by the injection of massive amounts of CO2 can undermine the mechanical stability of the formation structure before carbon mineralization takes place. The geomechanical impact of geologic carbon storage is therefore closely related to the structural sustainability of the chosen reservoir as well as the probability of buoyancy driven CO2 leakage through caprocks. Here we show, with a combination of ex situ nanotomography and in situ microtomography, that the presence of dissolved CO2 in water produces a homogeneous dissolution pattern in natural chalk microstructure. This pattern stems from a greater apparent solubility of chalk and therefore a greater reactive subvolume in a sample. When a porous medium dissolves homogeneously in an imposed flow field, three geomechanical effects were observed: material compaction, fracturing and grain relocation. These phenomena demonstrated distinct feedbacks to the migration of the dissolution front and severely complicated the infiltration instability problem. We conclude that the presence of dissolved CO2 makes the dissolution front less susceptible to spatial and temporal perturbations in the strongly coupled geochemical and geomechanical processes.
0
1
0
0
0
0
Symplectic resolutions for Higgs moduli spaces
In this paper, we study the algebraic symplectic geometry of the singular moduli spaces of Higgs bundles of degree $0$ and rank $n$ on a compact Riemann surface $X$ of genus $g$. In particular, we prove that such moduli spaces are symplectic singularities, in the sense of Beauville [Bea00], and admit a projective symplectic resolution if and only if $g=1$ or $(g, n)=(2,2)$. These results are an application of a recent paper by Bellamy and Schedler [BS16] via the so-called Isosingularity Theorem.
0
0
1
0
0
0
Investigation of Defect Modes of Chiral Photonic Crystals
Some properties of defect modes of cholesteric liquid crystals (CLC) are presented. It is shown that when the CLC layer is thin the density of states and emission intensity are maximum for the defect mode, whereas when the CLC layer is thick, these peaks are observed at the edges of the photonic band gap. Similarly, when the gain is low, the density of states and emission intensity are maximum for the defect mode, whereas at high gains these peaks are also observed at the edges of the photonic band gap. The possibilities of low-threshold lasing and obtaining high-Q microcavities have been investigated.
0
1
0
0
0
0
Crystal Growth of Cu6(Ge,Si)6O18.6H2O and Assignment of UV-VIS Spectra in Comparison to Dehydrated Dioptase and Selected Cu(II) Oxo-Compounds Including Cuprates
We report on the growth of mm-sized single crystals of the low-dimensional S = 1/2 spin compound Cu6(Ge,Si)6O18.6H2O by a diffusion technique in aqueous solution. A route to form Si-rich crystals down to possibly dioptase, the pure silicate, is discussed. Further, the assignment of dd excitations from UV-VIS spectra of the hexahydrate and the fully dehydrated compound is proposed in comparison to dioptase and selected Cu(II) oxo-compounds using bond strength considerations. Non-doped cuprates as layer compounds show higher excitation energies than the title compound. However, when the antiferromagnetic interaction energy as Jzln(2) is taken into account for cuprates, a single linear relationship between the Dqe excitation energy and equatorial Cu(II)-O bond strength is confirmed for all compounds. A linear representation is also confirmed between 2A1g energies and a function of axial and equatorial Cu-O bond distances, when auxiliary axial bonds are used for four-coordinated compounds. The quotient Dt/Ds of experimental orbital energies deviating from the general trend to smaller values indicates the existence of H2O or Cl1- axial ligands in comparison to oxo-ligands, whereas larger Dt/Dqe values indicate missing axial bonds. The quotient of the excitation energy 2A1g by 2x2Eg-2B2g allows one to check the correctness of the assignment and to distinguish between axial oxo-ligands and others such as H2O or Cl1-. Some assignments previously reported were corrected.
0
1
0
0
0
0
Optical reconfiguration and polarization control in semi-continuous gold films close to the percolation threshold
Controlling and confining light by exciting plasmons in resonant metallic nanostructures is an essential aspect of many new emerging optical technologies. Here we explore the possibility of controllably reconfiguring the intrinsic optical properties of semi-continuous gold films, by inducing permanent morphological changes with a femtosecond (fs)-pulsed laser above a critical power. Optical transmission spectroscopy measurements show a correlation between the spectra of the morphologically modified films and the wavelength, polarization, and the intensity of the laser used for alteration. In order to understand the modifications induced by the laser writing, we explore the near-field properties of these films with electron energy-loss spectroscopy (EELS). A comparison between our experimental data and full-wave simulations on the exact film morphologies hints toward a restructuring of the intrinsic plasmonic eigenmodes of the metallic film by photothermal effects. We explain these optical changes with a simple model and demonstrate experimentally that laser writing can be used to controllably modify the optical properties of these semi-continuous films. These metal films offer an easy-to-fabricate and scalable platform for technological applications such as molecular sensing and ultra-dense data storage.
0
1
0
0
0
0
Some Repeated-Root Constacyclic Codes over Galois Rings
Codes over Galois rings have been studied extensively during the last three decades. Negacyclic codes over $GR(2^a,m)$ of length $2^s$ have been characterized: the ring $\mathcal{R}_2(a,m,-1)= \frac{GR(2^a,m)[x]}{\langle x^{2^s}+1\rangle}$ is a chain ring. Furthermore, these results have been generalized to $\lambda$-constacyclic codes for any unit $\lambda$ of the form $4z-1$, $z\in GR(2^a, m)$. In this paper, we study more general cases and investigate all cases where $\mathcal{R}_p(a,m,\gamma)= \frac{GR(p^a,m)[x]}{\langle x^{p^s}-\gamma \rangle}$ is a chain ring. In particular, necessary and sufficient conditions for the ring $\mathcal{R}_p(a,m,\gamma)$ to be a chain ring are obtained. In addition, by using this structure we investigate all $\gamma$-constacyclic codes over $GR(p^a,m)$ when $\mathcal{R}_p(a,m,\gamma)$ is a chain ring. Necessary and sufficient conditions for the existence of self-orthogonal and self-dual $\gamma$-constacyclic codes are also provided. Among others, for any prime $p$, the structure of $\mathcal{R}_p(a,m,\gamma)=\frac{GR(p^a,m)[x]}{\langle x^{p^s}-\gamma\rangle}$ is used to establish the Hamming and homogeneous distances of $\gamma$-constacyclic codes.
1
0
1
0
0
0
Statistical solutions and Onsager's conjecture
We prove a version of Onsager's conjecture on the conservation of energy for the incompressible Euler equations in the context of statistical solutions, as introduced recently by Fjordholm et al. As a byproduct, we also obtain a new proof for the conservative direction of Onsager's conjecture for weak solutions. Dedicated to Edriss S. Titi on the occasion of his 60th birthday.
0
1
1
0
0
0
The sum of log-normal variates in geometric Brownian motion
Geometric Brownian motion (GBM) is a key model for representing self-reproducing entities. Self-reproduction may be considered the definition of life [5], and the dynamics it induces are of interest to those concerned with living systems from biology to economics. Trajectories of GBM are distributed according to the well-known log-normal density, broadening with time. However, in many applications, what is of interest is not a single trajectory but the sum, or average, of several trajectories. The distribution of these objects is more complicated. Here we show two different ways of finding their typical trajectories. We make use of an intriguing connection to spin glasses: the expected free energy of the random energy model is an average of log-normal variates. We make the mapping to GBM explicit and find that the free energy result gives qualitatively correct behavior for GBM trajectories. We then also compute the typical sum of log-normal variates using Ito calculus. This alternative route is in close quantitative agreement with numerical work.
0
0
0
0
0
1
Slow Spin Dynamics and Self-Sustained Clusters in Sparsely Connected Systems
To identify emerging microscopic structures in low temperature spin glasses, we study self-sustained clusters (SSC) in spin models defined on sparse random graphs. A message-passing algorithm is developed to determine the probability of individual spins to belong to SSC. Results for specific instances, which compare the predicted SSC associations with the dynamical properties of spins obtained from numerical simulations, show that SSC association identifies individual slow-evolving spins. This insight gives rise to a powerful approach for predicting individual spin dynamics from a single snapshot of an equilibrium spin configuration, namely from limited static information, which can be used to devise generic prediction tools applicable to a wide range of areas.
0
1
0
0
0
0