title: string (lengths 7-239)
abstract: string (lengths 7-2.76k)
cs: int64 (0-1)
phy: int64 (0-1)
math: int64 (0-1)
stat: int64 (0-1)
quantitative biology: int64 (0-1)
quantitative finance: int64 (0-1)
A Conic Integer Programming Approach to Constrained Assortment Optimization under the Mixed Multinomial Logit Model
We consider the constrained assortment optimization problem under the mixed multinomial logit model. Even moderately sized instances of this problem are challenging to solve directly using standard mixed-integer linear optimization formulations. This has motivated recent research exploring customized optimization strategies and approximation techniques. In contrast, we develop a novel conic quadratic mixed-integer formulation. This new formulation, together with McCormick inequalities exploiting the capacity constraints, enables the solution of large instances using commercial optimization software.
cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Voyager 1 Measurements Beyond the Heliopause of Galactic Cosmic Ray Helium, Boron, Carbon, Oxygen, Magnesium, Silicon and Iron Nuclei with Energies 0.5 to >1.5 GeV/nuc
We have obtained the energy spectra of cosmic ray He, B, C, O, Mg, Si and Fe nuclei in the range 0.5-1.5 GeV/nuc and above using the penetrating particle mode of the High Energy Telescope, part of the Cosmic Ray Subsystem (CRS) experiment on Voyagers 1 and 2. The data analysis procedures are the same as those used to obtain similar spectra from the identical V2 HET telescope while it was in the heliosphere between about 23 and 54 AU. The time period of analysis includes 4 years of data beyond the heliopause (HP). These new interstellar spectra are compared with various earlier experiments at the same energies at the Earth to determine the solar modulation parameter, phi. These new spectra are also compared with recent measurements of the spectra of the same nuclei made by the same telescope at low energies. It is found that the ratios of intensities at 100 MeV/nuc to those at 1.0 GeV/nuc are significantly Z dependent. Some of this Z dependence can be explained by the Z^2 dependence of energy loss by ionization in the 7-10 g/cm^2 of interstellar H and He traversed by cosmic rays of these energies in the galaxy; some by the Z dependent loss due to nuclear interactions in this same material; some by possible differences in the source spectra of these nuclei; and some by the non-uniformity of the source distribution and propagation conditions. The observed features of the spectra, including a Z dependence of the peak intensities of the various nuclei, pose interesting problems related to the propagation and source distribution of these cosmic rays.
cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Block Mean Approximation for Efficient Second Order Optimization
Advanced optimization algorithms such as Newton's method and AdaGrad benefit from second order derivatives or second order statistics to achieve better descent directions and faster convergence rates. At their heart, such algorithms need to compute the inverse or inverse square root of a matrix whose size is quadratic in the dimensionality of the search space. For high dimensional search spaces, the matrix inversion or inversion of the square root becomes prohibitively expensive, which in turn demands approximate methods. In this work, we propose a new matrix approximation method that divides a matrix into blocks and represents each block by one or two numbers. The method allows efficient computation of the matrix inverse and inverse square root. We apply our method to AdaGrad in training deep neural networks. Experiments show encouraging results compared to the diagonal approximation.
cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A Minimum Discounted Reward Hamilton-Jacobi Formulation for Computing Reachable Sets
We propose a novel formulation for approximating reachable sets through a minimum discounted reward optimal control problem. The formulation yields a continuous solution that can be obtained by solving a Hamilton-Jacobi equation. Furthermore, the numerical approximation to this solution can be obtained as the unique fixed point of a contraction mapping. This allows for more efficient solution methods that could not be applied under traditional formulations for computing reachable sets. In addition, this formulation provides a link between reinforcement learning and learning reachable sets for systems with unknown dynamics, allowing algorithms from the former to be applied to the latter. We use two benchmark examples, the double integrator and pursuit-evasion games, to show the correctness of the formulation as well as its strengths in comparison to previous work.
cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Imbedding results in Musielak-Orlicz spaces with an application to anisotropic nonlinear Neumann problems
We prove a continuous embedding that allows us to obtain a boundary trace embedding result for anisotropic Musielak-Orlicz spaces. We then apply this result to obtain an existence result for Neumann problems with nonlinearities on the boundary associated with some anisotropic nonlinear elliptic equations in Musielak-Orlicz spaces constructed from Musielak-Orlicz functions on which, and on whose conjugates, we do not assume the $\Delta_2$-condition. Uniqueness is also studied.
cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Defend against advanced persistent threats: An optimal control approach
The new cyber attack pattern of the advanced persistent threat (APT) poses a serious threat to modern society. This paper addresses the APT defense problem, i.e., the problem of how to effectively defend against an APT campaign. Based on a novel APT attack-defense model, the effectiveness of an APT defense strategy is quantified. Thereby, the APT defense problem is modeled as an optimal control problem, in which an optimal control represents a most effective APT defense strategy. The existence of an optimal control is proved, and an optimality system is derived. Consequently, an optimal control can be obtained by solving the optimality system. Some examples of the optimal control are given. Finally, the influence of some factors on the effectiveness of an optimal control is examined through computer experiments. These findings help organizations work out policies for defending against APTs.
cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Challenges in Designing Datasets and Validation for Autonomous Driving
Autonomous driving has received a great deal of attention over the last decade and will remain a hot topic at least until the first successful certification of a car with Level 5 autonomy. There are many public datasets in the academic community. However, they are far from what a robust industrial production system needs, and bridging the substantial gap from a research prototype built on public datasets to a deployable solution is a challenging task. In this paper, we focus on bad practices that often occur in autonomous driving from an industrial deployment perspective. Data design deserves at least the same amount of attention as model design. Very little attention is paid to these issues in the scientific community, and we hope this paper encourages better formalization of dataset design. More specifically, we focus on dataset design and validation schemes for autonomous driving, where we highlight common problems, wrong assumptions, and steps toward avoiding them, as well as some open problems.
cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Improving Adversarial Robustness via Promoting Ensemble Diversity
Though deep neural networks have achieved significant progress on various tasks, often enhanced by model ensemble, existing high-performance models can be vulnerable to adversarial attacks. Many efforts have been devoted to enhancing the robustness of individual networks and then constructing a straightforward ensemble, e.g., by directly averaging the outputs, which ignores the interaction among networks. This paper presents a new method that explores the interaction among individual networks to improve robustness for ensemble models. Technically, we define a new notion of ensemble diversity in the adversarial setting as the diversity among non-maximal predictions of individual members, and present an adaptive diversity promoting (ADP) regularizer to encourage the diversity, which leads to globally better robustness for the ensemble by making adversarial examples difficult to transfer among individual members. Our method is computationally efficient and compatible with the defense methods acting on individual networks. Empirical results on various datasets verify that our method can improve adversarial robustness while maintaining state-of-the-art accuracy on normal examples.
cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Majorana stripe order on the surface of a three-dimensional topological insulator
The effect of interactions in topological states concerns not only interacting topological phases but also novel symmetry-breaking phases and phase transitions. Here we study the interaction effect on Majorana zero modes (MZMs) bound to a square vortex lattice in two-dimensional (2D) topological superconductors. Under the neutrality condition, where single-body hybridization between MZMs is prohibited by an emergent symmetry, a minimal square-lattice model for MZMs can be faithfully mapped to a quantum spin model, which has no sign problem in world-line quantum Monte Carlo simulations. Guided by an insight from a further duality mapping, we demonstrate that the interaction induces a Majorana stripe state, a gapped state spontaneously breaking lattice translational and rotational symmetries, as opposed to the previously conjectured topological quantum criticality. Away from neutrality, a mean-field theory suggests a quantum critical point induced by hybridization.
cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Academic Engagement and Commercialization in an Institutional Transition Environment: Evidence from Shanghai Maritime University
Does academic engagement accelerate or crowd out the commercialization of university knowledge? Research on this topic seldom considers the impact of the institutional environment, especially when a formal institution for encouraging the commercial activities of scholars has not yet been established. This study investigates this question in the context of China, which is in the institutional transition stage. Based on a survey of scholars from Shanghai Maritime University, we demonstrate that academic engagement has a positive impact on commercialization and that this impact is greater for risk-averse scholars than for risk-seeking scholars. Our results suggest that in an institutional transition environment, the government should consider encouraging academic engagement to stimulate the commercialization activities of conservative scholars.
cs=0, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=1
Affine-Gradient Based Local Binary Pattern Descriptor for Texture Classification
We present a novel Affine-Gradient based Local Binary Pattern (AGLBP) descriptor for texture classification. It is very hard to describe complicated textures using a single type of information, such as the Local Binary Pattern (LBP), which utilizes only the sign of the difference between a pixel and its local neighbors. Our descriptor has three characteristics: 1) To make full use of the information contained in the texture, the Affine-Gradient, which differs from the Euclidean-Gradient and is invariant to affine transformations, is incorporated into AGLBP. 2) An improved method is proposed for rotation invariance, which depends on a reference direction calculated with respect to the local neighbors. 3) A feature selection method, considering both the statistical frequency and the intraclass variance of the training dataset, is applied to reduce the dimensionality of the descriptors. Experiments on three standard texture datasets, Outex12, Outex10 and KTH-TIPS2, are conducted to evaluate the performance of AGLBP. The results show that our proposed descriptor achieves better performance than some state-of-the-art rotation-invariant texture descriptors in texture classification.
cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Decomposition theorems for asymptotic property C and property A
We combine aspects of the notions of finite decomposition complexity and asymptotic property C into a notion that we call finite APC-decomposition complexity. Any space with finite decomposition complexity has finite APC-decomposition complexity and any space with asymptotic property C has finite APC-decomposition complexity. Moreover, finite APC-decomposition complexity implies property A for metric spaces. We also show that finite APC-decomposition complexity is preserved by direct products of groups and spaces, amalgamated products of groups, and group extensions, among other constructions.
cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Three dimensional free-surface flow over arbitrary bottom topography
We consider steady nonlinear free surface flow past an arbitrary bottom topography in three dimensions, concentrating on the shape of the wave pattern that forms on the surface of the fluid. Assuming ideal fluid flow, the problem is formulated using a boundary integral method and discretised to produce a nonlinear system of algebraic equations. The Jacobian of this system is dense due to integrals being evaluated over the entire free surface. To overcome the computational difficulty and large memory requirements, a Jacobian-free Newton Krylov (JFNK) method is utilised. Using a block-banded approximation of the Jacobian from the linearised system as a preconditioner for the JFNK scheme, we find significant reductions in computational time and memory required for generating numerical solutions. These improvements also allow for a larger number of mesh points over the free surface and the bottom topography. We present a range of numerical solutions for both subcritical and supercritical regimes, and for a variety of bottom configurations. We discuss nonlinear features of the wave patterns as well as their relationship to ship wakes.
cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Spin-polaron formation and magnetic state diagram in La doped $CaMnO_3$
$La_xCa_{1-x}MnO_3$ (LCMO) has been studied in the framework of density functional theory (DFT) using a Hubbard-U correction. We show that the formation of spin-polarons of different configurations is possible in the G-type antiferromagnetic phase. We also show that the spin-polaron (SP) solutions are stabilized due to an interplay of magnetic and lattice effects at lower La concentrations, and mostly due to the lattice contribution at larger concentrations. Our results indicate that the development of SPs is unfavorable in the C- and A-type antiferromagnetic phases. The theoretically obtained magnetic state diagram is in good agreement with previously reported experimental results.
cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Power-law citation distributions are not scale-free
We analyze the time evolution of statistical distributions of citations to scientific papers published in one year. While these distributions can be fitted by a power-law dependence, we find that they are nonstationary: the exponent of the power-law fit decreases with time and does not come to saturation. We attribute the nonstationarity of citation distributions to the different longevity of low-cited and highly-cited papers. By measuring citation trajectories of papers, we found that citation careers of low-cited papers come to saturation after 10-15 years, while those of highly-cited papers continue to increase indefinitely: papers that exceed some citation threshold become runaways. Thus, we show that although a citation distribution can look like a power law, it is not scale-free, and there is a hidden dynamic scale associated with the onset of runaways. We compare our measurements to our recently developed model of citation dynamics based on copying/redirection/triadic closure and find explanations for our empirical observations.
cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Atmospheric stellar parameters for large surveys using FASMA, a new spectral synthesis package
In the era of vast spectroscopic surveys focusing on Galactic stellar populations, astronomers want to exploit the large quantity and good quality of data to derive atmospheric parameters without losing precision to automatic procedures. In this work, we developed a new spectral package, FASMA, to estimate the stellar atmospheric parameters (namely effective temperature, surface gravity, and metallicity) in a fast and robust way. This method is suitable for spectra of FGK-type stars at medium and high resolution. The spectroscopic analysis is based on the spectral synthesis technique using the radiative transfer code MOOG. The line list is comprised mainly of iron lines in the optical spectrum. The atomic data are calibrated against the Sun and Arcturus. We use two comparison samples to test our method: i) a sample of 451 FGK-type dwarfs from the high resolution HARPS spectrograph, and ii) the Gaia-ESO benchmark stars, using both high and medium resolution spectra. We explore biases in our method from the analysis of synthetic spectra covering the parameter space of interest. We show that our spectral package is able to provide reliable results for a wide range of stellar parameters, different rotational velocities, different instrumental resolutions, and different spectral regions of the VLT-GIRAFFE spectrographs, used among others for the Gaia-ESO survey. FASMA estimates stellar parameters in less than 15 min for high resolution and 3 min for medium resolution spectra. The complete package is publicly available to the community.
cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Bingham flow in porous media with obstacles of different size
By using the unfolding operators for periodic homogenization, we give a general compactness result for a class of functions defined on bounded domains presenting perforations of two different sizes. We then apply this result to the homogenization of the flow of a Bingham fluid in a porous medium with solid obstacles of different sizes. Finally, we give the interpretation of the limit problem in terms of a nonlinear Darcy law.
cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Eigenstate entanglement in the Sachdev-Ye-Kitaev model
In the Sachdev-Ye-Kitaev model, we argue that the entanglement entropy of any eigenstate (including the ground state) obeys a volume law, whose coefficient can be calculated analytically from the energy and subsystem size. We expect that the argument applies to a broader class of chaotic models with all-to-all interactions.
cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Finite presheaves and $A$-finite generation of unstable algebras mod nilpotents
Inspired by the work of Henn, Lannes and Schwartz on unstable algebras over the Steenrod algebra modulo nilpotents, a characterization of unstable algebras that are $A$-finitely generated up to nilpotents is given in terms of the associated presheaf, by introducing the notion of a finite presheaf. In particular, this gives the natural characterization of the (co)analytic presheaves that are important in the theory of Henn, Lannes and Schwartz. However, finite presheaves remain imperfectly understood, as illustrated by examples. One important class of examples is shown to be provided by unstable algebras of finite transcendence degree (under a necessary weak finiteness condition). For unstable Hopf algebras, it is shown that the situation is much better: the associated presheaf is finite if and only if its growth function is polynomial. This leads to a description of unstable Hopf algebras modulo nilpotents in the spirit of Henn, Lannes and Schwartz.
cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Ejection of rocky and icy material from binary star systems: Implications for the origin and composition of 1I/`Oumuamua
In single star systems like our own Solar system, comets dominate the mass budget of bodies that are ejected into interstellar space, since they form further away and are less tightly bound. However 1I/`Oumuamua, the first interstellar object detected, appears asteroidal in its spectra and in its lack of detectable activity. We argue that the galactic budget of interstellar objects like 1I/`Oumuamua should be dominated by planetesimal material ejected during planet formation in circumbinary systems, rather than in single star systems or widely separated binaries. We further show that in circumbinary systems, rocky bodies should be ejected in comparable numbers to icy ones. This suggests that a substantial fraction of additional interstellar objects discovered in the future should display an active coma. We find that the rocky population, of which 1I/`Oumuamua seems to be a member, should be predominantly sourced from A-type and late B-star binaries.
cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Two-photon excitation of rubidium atoms inside porous glass
We study two-photon laser excitation to the 5D$_{5/2}$ energy level of $^{85}$Rb atoms contained in the interstices of a porous material made from sintered ground glass with typical pore dimensions in the 10 - 100 $\mu$m range. The excitation spectra show unusual flat-top lineshapes, which are shown to be a consequence of wave-vector randomization of the laser light in the porous material. For large atomic densities, the spectra are affected by radiation trapping around the D2 transitions. The transient atomic response, limited by the time of flight between pore walls, appears to have a minor influence on the excitation spectra. It is, however, revealed by the shortening of the temporal evolution of the emitted blue light following a sudden switch-off of the laser excitation.
cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Readout of the atomtronic quantum interference device
A Bose-Einstein condensate confined in a ring-shaped lattice interrupted by a weak link and pierced by an effective magnetic flux defines the atomic counterpart of the superconducting quantum interference device: the atomtronic quantum interference device (AQUID). In this paper, we report on the detection of current states in the system through a self-heterodyne protocol. Following the original proposal of the NIST and Paris groups, the ring-condensate many-body wave function interferes with a reference condensate expanding from the center of the ring. We focus on the rf-AQUID, which realizes effective qubit dynamics. Both the Bose-Hubbard and Gross-Pitaevskii dynamics are studied. For the Bose-Hubbard dynamics, we demonstrate that the self-heterodyne protocol can be applied, but higher-order correlations in the evolution of the interfering condensates must be measured to read out the current states of the system. We study how states with macroscopic quantum coherence can be told apart by analyzing the noise in the time of flight of the ring condensate.
cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy
Deep learning networks have achieved state-of-the-art accuracies on computer vision workloads like image classification and object detection. The performant systems, however, typically involve big models with numerous parameters. Once trained, a challenging aspect for such top performing models is deployment on resource constrained inference systems - the models (often deep networks or wide networks or both) are compute and memory intensive. Low-precision numerics and model compression using knowledge distillation are popular techniques to lower both the compute requirements and memory footprint of these deployed models. In this paper, we study the combination of these two techniques and show that the performance of low-precision networks can be significantly improved by using knowledge distillation techniques. Our approach, Apprentice, achieves state-of-the-art accuracies using ternary precision and 4-bit precision for variants of ResNet architecture on ImageNet dataset. We present three schemes using which one can apply knowledge distillation techniques to various stages of the train-and-deploy pipeline.
cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
An IDE-Based Context-Aware Meta Search Engine
Traditional web search forces developers to leave their working environments and look for solutions in web browsers. It often does not consider the context of their programming problems. The context-switching between the web browser and the working environment is time-consuming and distracting, and keyword-based traditional search often does not help much in problem solving. In this paper, we propose an Eclipse IDE-based web search solution that collects data from three web search APIs (Google, Yahoo, and Bing) and a programming Q&A site, Stack Overflow. It then provides search results within the IDE, taking into account not only the content of the selected error but also the problem context, popularity, and search engine recommendation of the result links. Experiments with 25 runtime errors and exceptions show that the proposed approach outperforms keyword-based search approaches with a recommendation accuracy of 96%. We also validate the results with a user study involving five prospective participants, where we obtain a result agreement of 64.28%. While the preliminary results are promising, the approach needs to be further validated with more errors and exceptions, followed by a user study with more participants, to establish itself as a complete IDE-based web search solution.
cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Sharing Data Homomorphically Encrypted with Different Encryption Keys
In this paper, we propose the first homomorphic proxy re-encryption (HPRE) solution that allows different users to share data they have outsourced homomorphically encrypted under their respective public keys, with the possibility to process such data remotely. More precisely, this scheme makes it possible to switch the public encryption key to another one without the help of a trusted third party. Its originality lies in a method we propose for computing the difference between two encrypted data items without decrypting them and with no extra communication. Basically, in our HPRE scheme, the two users, the delegator and the delegate, ask the cloud server to generate an encrypted noise based on a secret key both users previously agreed on. Based on our solution for comparing encrypted data, the cloud computes in the clear the differences between the encrypted noise and the encrypted data of the delegator, thus obtaining blinded data. Next, the cloud encrypts these differences with the public key of the delegate. As the noise is also known to the delegate, he just has to remove it to get access to the data encrypted under his public key. This solution has been experimentally evaluated for the sharing of images outsourced to a semi-honest cloud server.
cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Low temperature features in the heat capacity of unary metals and intermetallics for the example of bulk aluminum and Al$_3$Sc
We explore the competition and coupling of vibrational and electronic contributions to the heat capacity of Al and Al$_3$Sc at temperatures below 50 K combining experimental calorimetry with highly converged finite temperature density functional theory calculations. We find that semilocal exchange correlation functionals accurately describe the rich feature set observed for these temperatures, including electron-phonon coupling. Using different representations of the heat capacity, we are therefore able to identify and explain deviations from the Debye behaviour in the low-temperature limit and in the temperature regime 30 - 50 K as well as the reduction of these features due to the addition of Sc.
cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Designing spin and orbital exchange Hamiltonians with ultrashort electric field transients
We demonstrate how electric fields with arbitrary time profile can be used to control the time-dependent parameters of spin and orbital exchange Hamiltonians. Analytic expressions for the exchange constants are derived from a time-dependent Schrieffer-Wolff transformation, and the validity of the resulting effective Hamiltonian is verified for the case of a quarter-filled two-orbital Hubbard model, by comparing to the results of a full nonequilibrium dynamical mean-field theory simulation. The ability to manipulate Hamiltonians with arbitrary time-dependent fields, beyond the paradigm of Floquet engineering, opens the possibility to control intertwined spin and orbital order using laser or THz pulses which are tailored to minimize electronic excitations.
cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Novel Comprehensive Approach for Estimating Concept Semantic Similarity in WordNet
Computation of semantic similarity between concepts is an important foundation for many research works. This paper focuses on IC computing methods and IC measures, which estimate the semantic similarity between concepts by exploiting the topological parameters of the taxonomy. Based on an analysis of representative IC computing methods and typical semantic similarity measures, we propose a new hybrid IC computing method. By adopting the parameters dhyp and lch, we utilize the new IC computing method and propose a novel comprehensive measure of semantic similarity between concepts. An experiment based on the WordNet "is a" taxonomy has been designed to test representative measures and our measure on the benchmark dataset R&G, and the results show that our measure clearly improves the similarity accuracy. We evaluate the proposed approach by comparing the correlation coefficients between five measures and the artificial data. The results show that our proposal outperforms the previous measures.
cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Tannakian duality for affine homogeneous spaces
Associated to any closed quantum subgroup $G\subset U_N^+$ and any index set $I\subset\{1,\ldots,N\}$ is a certain homogeneous space $X_{G,I}\subset S^{N-1}_{\mathbb C,+}$, called affine homogeneous space. We discuss here the abstract axiomatization of the algebraic manifolds $X\subset S^{N-1}_{\mathbb C,+}$ which can appear in this way, by using Tannakian duality methods.
cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Strong comparison principle for the fractional $p$-Laplacian and applications to starshaped rings
In the following we show the strong comparison principle for the fractional $p$-Laplacian, i.e. we analyze functions $v,w$ which satisfy $v\geq w$ in $\mathbb{R}^N$ and \[ (-\Delta)^s_pv+q(x)|v|^{p-2}v\geq (-\Delta)^s_pw+q(x)|w|^{p-2}w \quad \text{in $D$,} \] where $s\in(0,1)$, $p>1$, $D\subset \mathbb{R}^N$ is an open set, and $q\in L^{\infty}(\mathbb{R}^N)$ is a nonnegative function. Under suitable conditions on $s,p$ and some regularity assumptions on $v,w$ we show that either $v\equiv w$ in $\mathbb{R}^N$ or $v>w$ in $D$. Moreover, we apply this result to analyze the geometry of nonnegative solutions in starshaped rings and in the half space.
cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The uniformity and time-invariance of the intra-cluster metal distribution in galaxy clusters from the IllustrisTNG simulations
The distribution of metals in the intra-cluster medium encodes important information about the enrichment history and formation of galaxy clusters. Here we explore the metal content of clusters in IllustrisTNG - a new suite of galaxy formation simulations building on the Illustris project. Our cluster sample contains 20 objects in TNG100 - a ~(100 Mpc)^3 volume simulation with 2x1820^3 resolution elements, and 370 objects in TNG300 - a ~(300 Mpc)^3 volume simulation with 2x2500^3 resolution elements. The z=0 metallicity profiles agree with observations, and the enrichment history is consistent with observational data going beyond z~1, showing nearly no metallicity evolution. The abundance profiles vary only minimally within the cluster samples, especially in the outskirts with a relative scatter of ~15%. The average metallicity profile flattens towards the center, where we find a logarithmic slope of -0.1 compared to -0.5 in the outskirts. Cool core clusters have more centrally peaked metallicity profiles (~0.8 solar) compared to non-cool core systems (~0.5 solar), similar to observational trends. Si/Fe and O/Fe radial profiles follow positive gradients. The outer abundance profiles do not evolve below z~2, whereas the inner profiles flatten towards z=0. More than ~80% of the metals in the intra-cluster medium have been accreted from the proto-cluster environment, which has been enriched to ~0.1 solar already at z~2. We conclude that the intra-cluster metal distribution is uniform among our cluster sample, nearly time-invariant in the outskirts for more than 10 Gyr, and forms through a universal enrichment history.
0
1
0
0
0
0
Improving Native Ads CTR Prediction by Large Scale Event Embedding and Recurrent Networks
Click-through rate (CTR) prediction is very important for native advertising but also hard, as there is no direct query intent. In this paper we propose a large-scale event embedding scheme to encode each user browsing event by training a Siamese network with weak supervision on the users' consecutive events. The CTR prediction problem is modeled as a supervised recurrent neural network, which naturally models the user history as a sequence of events. Our proposed recurrent models utilize pretrained event embedding vectors and an attention layer to model the user history. Our experiments demonstrate that our model significantly outperforms the baseline and some variants.
0
0
0
1
0
0
On the Communication Cost of Determining an Approximate Nearest Lattice Point
We consider the closest lattice point problem in a distributed network setting and study the communication cost and the error probability for computing an approximate nearest lattice point, using the nearest-plane algorithm, due to Babai. Two distinct communication models, centralized and interactive, are considered. The importance of proper basis selection is addressed. Assuming a reduced basis for a two-dimensional lattice, we determine the approximation error of the nearest plane algorithm. The communication cost for determining the Babai point, or equivalently, for constructing the rectangular nearest-plane partition, is calculated in the interactive setting. For the centralized model, an algorithm is presented for reducing the communication cost of the nearest plane algorithm in an arbitrary number of dimensions.
1
0
0
0
0
0
One shot entanglement assisted classical and quantum communication over noisy quantum channels: A hypothesis testing and convex split approach
Capacity of a quantum channel characterizes the limits of reliable communication through a noisy quantum channel. This fundamental information theoretic question is very well studied, especially in the setting of many independent uses of the channel. An important scenario, from both a practical and a conceptual point of view, is when the channel can be used only once. This is known as the one-shot channel coding problem. We provide a tight characterization of the one-shot entanglement assisted classical capacity of a quantum channel. We arrive at our result by introducing a simple decoding technique which we refer to as position-based decoding. We also consider two other important quantum network scenarios: quantum channel with a jammer and quantum broadcast channel. For these problems, we use the recently introduced convex split technique [Anshu, Devabathini and Jain 2014] in addition to position based decoding. Our approach shows that the simultaneous use of these two techniques provides a uniform and conceptually simple framework for designing communication protocols for quantum networks.
1
0
0
0
0
0
Conformally variational Riemannian invariants
Conformally variational Riemannian invariants (CVIs), such as the scalar curvature, are homogeneous scalar invariants which arise as the gradient of a Riemannian functional. We establish a wide range of stability and rigidity results involving CVIs, generalizing many such results for the scalar curvature.
0
0
1
0
0
0
Caching Meets Millimeter Wave Communications for Enhanced Mobility Management in 5G Networks
One of the most promising approaches to overcome the uncertainty and dynamic channel variations of millimeter wave (mmW) communications is to deploy dual-mode base stations that integrate both mmW and microwave ($\mu$W) frequencies. If properly designed, such dual-mode base stations can enhance mobility and handover in highly mobile wireless environments. In this paper, a novel approach for analyzing and managing mobility in joint $\mu$W-mmW networks is proposed. The proposed approach leverages device-level caching along with the capabilities of dual-mode base stations to minimize handover failures (HOFs), reduce inter-frequency measurement energy consumption, and provide seamless mobility in emerging dense heterogeneous networks. First, fundamental results on the caching capabilities, including caching probability and cache duration, are derived for the proposed dual-mode network scenario. Second, the average achievable rate of caching is derived for mobile users. Third, the proposed cache-enabled mobility management problem is formulated as a dynamic matching game between mobile user equipments (MUEs) and small base stations (SBSs). The goal of this game is to find a distributed handover mechanism that, subject to the network constraints on HOFs and limited cache sizes, allows each MUE to choose between executing a handover (HO) to a target SBS, being connected to the macrocell base station (MBS), or performing a transparent HO by using the cached content. The formulated matching game allows capturing the dynamics of the mobility management problem caused by HOFs. To solve this dynamic matching problem, a novel algorithm is proposed and its convergence to a two-sided dynamically stable HO policy is proved. Numerical results corroborate the analytical derivations and show that the proposed solution provides significant reductions in both HOFs and energy consumption by MUEs.
1
0
0
0
0
0
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Data poisoning is an attack on machine learning models wherein the attacker adds examples to the training set to manipulate the behavior of the model at test time. This paper explores poisoning attacks on neural nets. The proposed attacks use "clean-labels"; they don't require the attacker to have any control over the labeling of training data. They are also targeted; they control the behavior of the classifier on a $\textit{specific}$ test instance without degrading overall classifier performance. For example, an attacker could add a seemingly innocuous image (that is properly labeled) to a training set for a face recognition engine, and control the identity of a chosen person at test time. Because the attacker does not need to control the labeling function, poisons could be entered into the training set simply by leaving them on the web and waiting for them to be scraped by a data collection bot. We present an optimization-based method for crafting poisons, and show that just one single poison image can control classifier behavior when transfer learning is used. For full end-to-end training, we present a "watermarking" strategy that makes poisoning reliable using multiple ($\approx$50) poisoned training instances. We demonstrate our method by generating poisoned frog images from the CIFAR dataset and using them to manipulate image classifiers.
0
0
0
1
0
0
Sorting Phenomena in a Mathematical Model For Two Mutually Attracting/Repelling Species
Macroscopic models for systems involving diffusion, short-range repulsion, and long-range attraction have been studied extensively in the last decades. In this paper we extend the analysis to a system for two species interacting with each other according to different inter- and intra-species attractions. Under suitable conditions on this self- and crosswise attraction an interesting effect can be observed, namely phase separation into neighbouring regions, each of which contains only one of the species. We prove that the intersection of the supports of the stationary solutions of the continuum model for the two species has zero Lebesgue measure, while the support of the sum of the two densities is simply connected. Preliminary results indicate the existence of phase separation, i.e. spatial sorting of the different species. A detailed analysis in one spatial dimension follows. The existence and shape of segregated stationary solutions is shown via the Krein-Rutman theorem. Moreover, for small repulsion/nonlinear diffusion, uniqueness of these stationary states is also proved.
0
0
1
0
0
0
Ab initio design of drug carriers for zoledronate guest molecule using phosphonated and sulfonated calix[4]arene and calix[4]resorcinarene host molecules
Monomolecular drug carriers based on calix[n]-arenes and -resorcinarenes containing an interior cavity can enhance the affinity and specificity of the osteoporosis inhibitor drug zoledronate (ZOD). In this work we investigate the suitability of nine different calix[4]-arene and -resorcinarene based macrocycles as hosts for the ZOD guest molecule by conducting {\it ab initio} density functional theory calculations for the structures and energetics of eighteen different host-guest complexes. For the optimized molecular structures of the free, phosphonated, and sulfonated calix[4]-arenes and -resorcinarenes, the geometric sizes of their interior cavities are measured and compared with those of the host-guest complexes in order to check the appropriateness for host-guest complex formation. Our calculations of binding energies indicate that in the gaseous state some of the complexes might be unstable, but in the aqueous state almost all of the complexes can form spontaneously. Of the two different docking modes, insertion of ZOD with its \ce{P-C-P} branch into the cavity of the host is easier than insertion with its nitrogen-containing heterocycle. This work opens a way for developing effective drug delivery systems for the ZOD drug and should encourage experimentalists to synthesize them.
0
1
0
0
0
0
Skeleton-based Action Recognition of People Handling Objects
In visual surveillance systems, it is necessary to recognize the behavior of people handling objects such as a phone, a cup, or a plastic bag. In this paper, to address this problem, we propose a new framework for recognizing object-related human actions by graph convolutional networks using human and object poses. In this framework, we construct skeletal graphs of reliable human poses by selectively sampling the informative frames in a video, which include human joints with high confidence scores obtained in pose estimation. The skeletal graphs generated from the sampled frames represent human poses related to the object position in both the spatial and temporal domains, and these graphs are used as inputs to the graph convolutional networks. Through experiments over an open benchmark and our own data sets, we verify the validity of our framework in that our method outperforms the state-of-the-art method for skeleton-based action recognition.
1
0
0
0
0
0
Active classification with comparison queries
We study an extension of active learning in which the learning algorithm may ask the annotator to compare the distances of two examples from the boundary of their label-class. For example, in a recommendation system application (say for restaurants), the annotator may be asked whether she liked or disliked a specific restaurant (a label query); or which one of two restaurants did she like more (a comparison query). We focus on the class of half spaces, and show that under natural assumptions, such as large margin or bounded bit-description of the input examples, it is possible to reveal all the labels of a sample of size $n$ using approximately $O(\log n)$ queries. This implies an exponential improvement over classical active learning, where only label queries are allowed. We complement these results by showing that if any of these assumptions is removed then, in the worst case, $\Omega(n)$ queries are required. Our results follow from a new general framework of active learning with additional queries. We identify a combinatorial dimension, called the \emph{inference dimension}, that captures the query complexity when each additional query is determined by $O(1)$ examples (such as comparison queries, each of which is determined by the two compared examples). Our results for half spaces follow by bounding the inference dimension in the cases discussed above.
1
0
0
0
0
0
Behavior of digital sequences through exotic numeration systems
Many digital functions studied in the literature, e.g., the summatory function of the base-$k$ sum-of-digits function, have a behavior showing some periodic fluctuation. Such functions are usually studied using techniques from analytic number theory or linear algebra. In this paper we develop a method based on exotic numeration systems and we apply it on two examples motivated by the study of generalized Pascal triangles and binomial coefficients of words.
1
0
0
0
0
0
Quantum cognition goes beyond-quantum: modeling the collective participant in psychological measurements
In psychological measurements, two levels should be distinguished: the 'individual level', relative to the different participants in a given cognitive situation, and the 'collective level', relative to the overall statistics of their outcomes, which we propose to associate with a notion of 'collective participant'. When the distinction between these two levels is properly formalized, it reveals why the modeling of the collective participant generally requires beyond-quantum - non-Bornian - probabilistic models when sequential measurements at the individual level are considered, even though a pure quantum description remains valid for single-measurement situations.
0
0
0
0
1
0
Geometry of Projective Perfectoid and Integer Partitions
Line bundles of rational degree are defined using perfectoid spaces, and their cohomology is computed via the standard Čech complex along with the Künneth formula. A new concept of 'braided dimension' is introduced, which helps convert the curse of infinite dimensionality into a boon; it is then used to perform Bézout-type computations, define Euler characteristics, describe ampleness, and link integer partitions with geometry. This new concept of 'braided dimension' gives a space within a space within a space, an infinite tower of spaces, all intricately braided into each other. Finally, the concept of a blow-up over a perfectoid space is introduced.
0
0
1
0
0
0
Determinantal Generalizations of Instrumental Variables
Linear structural equation models relate the components of a random vector using linear interdependencies and Gaussian noise. Each such model can be naturally associated with a mixed graph whose vertices correspond to the components of the random vector. The graph contains directed edges that represent the linear relationships between components, and bidirected edges that encode unobserved confounding. We study the problem of generic identifiability, that is, whether a generic choice of linear and confounding effects can be uniquely recovered from the joint covariance matrix of the observed random vector. An existing combinatorial criterion for establishing generic identifiability is the half-trek criterion (HTC), which uses the existence of trek systems in the mixed graph to iteratively discover generically invertible linear equation systems in polynomial time. By focusing on edges one at a time, we establish new sufficient and necessary conditions for generic identifiability of edge effects extending those of the HTC. In particular, we show how edge coefficients can be recovered as quotients of subdeterminants of the covariance matrix, which constitutes a determinantal generalization of formulas obtained when using instrumental variables for identification.
0
0
1
1
0
0
Computational insights and the observation of SiC nanograin assembly: towards 2D silicon carbide
While an increasing number of two-dimensional (2D) materials, including graphene and silicene, have already been realized, others have only been predicted. An interesting example is the two-dimensional form of silicon carbide (2D-SiC). Here, we present an observation of atomically thin and hexagonally bonded nanosized grains of SiC assembling temporarily in graphene oxide pores during an atomic resolution scanning transmission electron microscopy experiment. Even though these small grains do not fully represent the bulk crystal, simulations indicate that their electronic structure already approaches that of 2D-SiC. This is predicted to be flat, but some doubts have remained regarding the preference of Si for sp$^{3}$ hybridization. Exploring a number of corrugated morphologies, we find completely flat 2D-SiC to have the lowest energy. We further compute its phonon dispersion, with a Raman-active transverse optical mode, and estimate the core level binding energies. Finally, we study the chemical reactivity of 2D-SiC, suggesting that, like silicene, it is unstable against molecular absorption or interlayer linking. Nonetheless, it can form stable van der Waals-bonded bilayers with either graphene or hexagonal boron nitride, promising to further enrich the family of two-dimensional materials once bulk synthesis is achieved.
0
1
0
0
0
0
Non-Homogeneous Hydrodynamic Systems and Quasi-Stäckel Hamiltonians
In this paper we present a novel construction of non-homogeneous hydrodynamic equations from what we call quasi-Stäckel systems, that is non-commutatively integrable systems constructed from appropriate maximally superintegrable Stäckel systems. We describe the relations between Poisson algebras generated by quasi-Stäckel Hamiltonians and the corresponding Lie algebras of vector fields of non-homogeneous hydrodynamic systems. We also apply Stäckel transform to obtain new non-homogeneous equations of considered type.
0
1
0
0
0
0
Latest results of the Tunka Radio Extension (ISVHECRI2016)
The Tunka Radio Extension (Tunka-Rex) is an antenna array consisting of 63 antennas at the location of the TAIGA facility (Tunka Advanced Instrument for cosmic ray physics and Gamma Astronomy) in Eastern Siberia, near Lake Baikal. Tunka-Rex is triggered by the air-Cherenkov array Tunka-133 during clear and moonless winter nights and by the scintillator array Tunka-Grande during the remaining time. Tunka-Rex measures the radio emission from the same air showers as Tunka-133 and Tunka-Grande, but with a higher threshold of about 100 PeV. During the first stages of its operation, Tunka-Rex has proven that sparse radio arrays can measure air showers with an energy resolution of better than 15\% and the depth of the shower maximum with a resolution of better than 40 g/cm\textsuperscript{2}. To improve and interpret our measurements as well as to study systematic uncertainties due to interaction models, we perform radio simulations with CORSIKA and CoREAS. In this overview we present the setup of Tunka-Rex, discuss the achieved results and the prospects of mass-composition studies with radio arrays.
0
1
0
0
0
0
The Rabi frequency on the $H^3Δ_1$ to $C^1Π$ transition in ThO: influence of interaction with electric and magnetic fields
Calculations of the correlations between the Rabi frequency on the $H^3\Delta_1$ to $C^1\Pi$ transition in the ThO molecule and experimental setup parameters in the electron electric dipole moment (eEDM) search experiment are performed. These calculations are required for estimating systematic errors in the experiment due to imperfections in the laser beams used to prepare the molecule and read out the eEDM signal.
0
1
0
0
0
0
Deriving mesoscopic models of collective behaviour for finite populations
Animal groups exhibit emergent properties that are a consequence of local interactions. Linking individual-level behaviour to coarse-grained descriptions of animal groups has been a question of fundamental interest. Here, we present two complementary approaches to deriving coarse-grained descriptions of collective behaviour at so-called mesoscopic scales, which account for the stochasticity arising from the finite sizes of animal groups. We construct stochastic differential equations (SDEs) for a coarse-grained variable that describes the order/consensus within a group. The first method of construction is based on van Kampen's system-size expansion of transition rates. The second method employs Gillespie's chemical Langevin equations. We apply these two methods to two microscopic models from the literature, in which organisms stochastically interact and choose between two directions/choices of foraging. These `binary-choice' models differ only in the types of interactions between individuals, with one assuming simple pair-wise interactions, and the other incorporating higher-order effects. In both cases, the derived mesoscopic SDEs have multiplicative, or state-dependent, noise. However, the different models demonstrate the contrasting effects of noise: increasing order in the pair-wise interaction model, whilst reducing order in the higher-order interaction model. Although both methods yield identical SDEs for such binary-choice, or one-dimensional, systems, the relative tractability of the chemical Langevin approach is beneficial in generalizations to higher-dimensions. In summary, this book chapter provides a pedagogical review of two complementary methods to construct mesoscopic descriptions from microscopic rules and demonstrates how resultant multiplicative noise can have counter-intuitive effects on shaping collective behaviour.
0
0
0
0
1
0
First Indirect X-Ray Imaging Tests With An 88-mm Diameter Single Crystal
Using the 1-BM-C beamline at the Advanced Photon Source (APS), we have performed the initial indirect x-ray imaging point-spread-function (PSF) test of a unique 88-mm diameter YAG:Ce single crystal of only 100-micron thickness. The crystal was bonded to a fiber optic plate (FOP) for mechanical support and to allow the option of FO coupling to a large format camera. The resolution of this configuration was compared to that of self-supported 25-mm diameter crystals, with and without an Al reflective coating. An upstream monochromator was used to select 17-keV x-rays from the broadband APS bending magnet source of synchrotron radiation. The upstream, adjustable Mo collimators were then used to provide a series of x-ray source transverse sizes from 200 microns down to about 15-20 microns (FWHM) at the crystal surface. The emitted scintillator radiation was in this case lens coupled to the ANDOR Neo sCMOS camera, and the indirect x-ray images were processed offline by a MATLAB-based image processing program. Based on single Gaussian peak fits to the projected profiles of the x-ray images, we observed a 10.5-micron PSF. This sample thus exhibited spatial resolution superior to that of standard P43 polycrystalline phosphors of the same thickness, which would have about a 100-micron PSF. This single crystal resolution combined with the 88-mm diameter makes it a candidate to support future x-ray diffraction or wafer topography experiments.
0
1
0
0
0
0
A randomized Halton algorithm in R
Randomized quasi-Monte Carlo (RQMC) sampling can bring orders of magnitude reduction in variance compared to plain Monte Carlo (MC) sampling. The extent of the efficiency gain varies from problem to problem and can be hard to predict. This article presents an R function rhalton that produces scrambled versions of Halton sequences. On some problems it brings efficiency gains of several thousand fold. On other problems, the efficiency gain is minor. The code is designed to make it easy to determine whether a given integrand will benefit from RQMC sampling. An RQMC sample of $n$ points in $[0,1]^d$ can be extended later to a larger $n$ and/or $d$.
1
0
0
1
0
0
Leveraging Node Attributes for Incomplete Relational Data
Relational data are usually highly incomplete in practice, which inspires us to leverage side information to improve the performance of community detection and link prediction. This paper presents a Bayesian probabilistic approach that incorporates various kinds of node attributes encoded in binary form in relational models with Poisson likelihood. Our method works flexibly with both directed and undirected relational networks. The inference can be done by efficient Gibbs sampling which leverages sparsity of both networks and node attributes. Extensive experiments show that our models achieve the state-of-the-art link prediction results, especially with highly incomplete relational data.
1
0
0
1
0
0
pMR: A high-performance communication library
On many parallel machines, the time LQCD applications spend in communication is a significant contribution to the total wall-clock time, especially in the strong-scaling limit. We present a novel high-performance communication library that can be used as a de facto drop-in replacement for MPI in existing software. Its lightweight nature, which avoids some of the unnecessary overhead introduced by MPI, allows us to improve the communication performance of applications without any algorithmic or complicated implementation changes. As a first real-world benchmark, we make use of the pMR library in the coarse-grid solve of the Regensburg implementation of the DD-$\alpha$AMG algorithm. On realistic lattices, we see an improvement of a factor of 2 in pure communication time and total execution time savings of up to 20%.
1
1
0
0
0
0
Extending a Function Just by Multiplying and Dividing Function Values: Smoothness and Prime Identities
We describe a purely multiplicative method for extending an analytic function. It calculates the value of an analytic function at a point merely by multiplying together function values and reciprocals of function values at other points closer to the origin. The function values are taken at the points of geometric sequences, independent of the function, whose geometric ratios are arbitrary. The method exposes an "elastic invariance" property of all analytic functions. We show how to simplify and truncate multiplicative function extensions for practical calculations. If we choose each geometric ratio to be the reciprocal of a power of a prime number, we obtain a prime functional identity, which contains a generalization of the Möbius function (with the same denominator as the Riemann zeta function) and generates prime number identities.
0
0
1
0
0
0
Towards Modeling the Interaction of Spatial-Associative Neural Network Representations for Multisensory Perception
Our daily perceptual experience is driven by different neural mechanisms that yield multisensory interaction as the interplay between exogenous stimuli and endogenous expectations. While the interaction of multisensory cues according to their spatiotemporal properties and the formation of multisensory feature-based representations have been widely studied, the interaction of spatial-associative neural representations has received considerably less attention. In this paper, we propose a neural network architecture that models the interaction of spatial-associative representations to perform causal inference of audiovisual stimuli. We investigate the spatial alignment of exogenous audiovisual stimuli modulated by associative congruence. In the spatial layer, topographically arranged networks account for the interaction of audiovisual input in terms of population codes. In the associative layer, congruent audiovisual representations are obtained via the experience-driven development of feature-based associations. Levels of congruency are obtained as a by-product of the neurodynamics of self-organizing networks, where the amount of neural activation triggered by the input can be expressed via a nonlinear distance function. Our novel proposal is that activity-driven levels of congruency can be used as top-down modulatory projections to spatially distributed representations of sensory input, e.g. semantically related audiovisual pairs will yield a higher level of integration than unrelated pairs. Furthermore, levels of neural response in unimodal layers may be seen as sensory reliability for the dynamic weighting of crossmodal cues. We describe a series of planned experiments to validate our model in the tasks of multisensory interaction on the basis of semantic congruence and unimodal cue reliability.
0
0
0
0
1
0
Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics
To reduce data collection time for deep learning of robust robotic grasp plans, we explore training from a synthetic dataset of 6.7 million point clouds, grasps, and analytic grasp metrics generated from thousands of 3D models from Dex-Net 1.0 in randomized poses on a table. We use the resulting dataset, Dex-Net 2.0, to train a Grasp Quality Convolutional Neural Network (GQ-CNN) model that rapidly predicts the probability of success of grasps from depth images, where grasps are specified as the planar position, angle, and depth of a gripper relative to an RGB-D sensor. Experiments with over 1,000 trials on an ABB YuMi comparing grasp planning methods on singulated objects suggest that a GQ-CNN trained with only synthetic data from Dex-Net 2.0 can be used to plan grasps in 0.8sec with a success rate of 93% on eight known objects with adversarial geometry and is 3x faster than registering point clouds to a precomputed dataset of objects and indexing grasps. The Dex-Net 2.0 grasp planner also has the highest success rate on a dataset of 10 novel rigid objects and achieves 99% precision (one false positive out of 69 grasps classified as robust) on a dataset of 40 novel household objects, some of which are articulated or deformable. Code, datasets, videos, and supplementary material are available at this http URL .
1
0
0
0
0
0
Effects of temperature and strain rate on mechanical behaviors of Stone-Wales defective monolayer black phosphorene
The mechanical behaviors of monolayer black phosphorene (MBP) are explored by molecular dynamics (MD) simulations using a reactive force field. It is revealed that temperature and strain rate have a significant influence on the mechanical behaviors of MBP, which are further weakened by SW (Stone-Wales) defects. In general, the tensile strength of both pristine and SW defective MBP decreases with increasing temperature or decreasing strain rate. Surprisingly, at relatively high temperature and low strain rate, a phase transition from black phosphorene to a mixture of {\beta}-phase ({\beta}-P) and {\gamma}-phase ({\gamma}-P) is observed for SW-2 defective MBP under armchair tension, while self-healing of the SW-2 defect is observed under zigzag tension. A deformation map of SW-2 defective MBP under armchair tension at different temperatures and strain rates is established, which is useful for the design of phosphorene allotropes by strain. The results presented herein yield useful insights for designing and tuning the structure, and the mechanical and physical properties, of phosphorene.
0
1
0
0
0
0
Ward identities for charge and heat currents of particle-particle and particle-hole pairs
The Ward identities for the charge and heat currents are derived for particle-particle and particle-hole pairs. They are the exact constraints on the current-vertex functions imposed by conservation laws and should be satisfied by consistent theories. While the Ward identity for the charge current of electrons is well established, that for the heat current is not understood correctly. Thus the correct interpretation is presented. On this firm basis the Ward identities for pairs are discussed. As the application of the identity we criticize some inconsistent results in the studies of the superconducting fluctuation transport and the transport anomaly in the normal state of high-Tc superconductors.
0
1
0
0
0
0
Ultrafast imprinting of topologically protected magnetic textures via pulsed electrons
Short electron pulses are demonstrated to trigger and control magnetic excitations, even at low electron current densities. We show that the tangential magnetic field surrounding a picosecond electron pulse can imprint topologically protected magnetic textures such as skyrmions in a sample with a residual Dzyaloshinskii-Moriya spin-orbital coupling. Characteristics of the created excitations such as the topological charge can be steered via the duration and the strength of the electron pulses. The study points to a possible way for a spatio-temporally controlled generation of skyrmionic excitations.
0
1
0
0
0
0
Recurrent Environment Simulators
Models that can simulate how environments change in response to actions can be used by agents to plan and act efficiently. We improve on previous environment simulators from high-dimensional pixel observations by introducing recurrent neural networks that are able to make temporally and spatially coherent predictions for hundreds of time-steps into the future. We present an in-depth analysis of the factors affecting performance, providing the most extensive attempt to advance the understanding of the properties of these models. We address the issue of computational inefficiency with a model that does not need to generate a high-dimensional image at each time-step. We show that our approach can be used to improve exploration and is adaptable to many diverse environments, namely 10 Atari games, a 3D car racing environment, and complex 3D mazes.
1
0
0
1
0
0
Exceeding the Shockley-Queisser limit within the detailed balance framework
The Shockley-Queisser limit is one of the most fundamental results in the field of photovoltaics. Based on the principle of detailed balance, it defines an upper limit for a single junction solar cell that uses an absorber material with a specific band gap. Although methods exist that allow a solar cell to exceed the Shockley-Queisser limit, here we show that it is possible to exceed the Shockley-Queisser limit without considering any of these additions. Merely by introducing an absorptivity that does not assume that every photon with an energy above the band gap is absorbed, efficiencies above the Shockley-Queisser limit are obtained. This is related to the fact that assuming optimal absorption properties also maximizes the recombination current within the detailed balance approach. We conclude that considering a finite thickness for the absorber layer allows the efficiency to exceed the Shockley-Queisser limit, and that this is more likely to occur for materials with small band gaps.
0
1
0
0
0
0
Effect of disorder on the optical response of NiPt and Ni$_3$Pt alloys
In this communication we present a detailed study of the effect of chemical disorder on the optical response of Ni$_{1-x}$Pt$_x$ (0.1$\leq$ x $\leq$0.75) and Ni$_{3(1-x)/3}$Pt$_x$ (0.1$\leq$ x $\leq$0.3). We propose a formalism that combines a Kubo-Greenwood approach with a DFT-based tight-binding linear muffin-tin orbitals (TB-LMTO) basis and the augmented space recursion (ASR) technique to explicitly incorporate the effect of disorder. We show that chemical disorder has a large impact on the optical response of Ni-Pt systems. In ordered Ni-Pt alloys, the optical conductivity peaks are sharp. As we switch on chemical disorder, the UV peak broadens, and its position as a function of composition and disorder carries the signature of a phase transition from NiPt to Ni$_3$Pt with decreasing Pt concentration. Quantitatively this agrees well with Massalski's Ni-Pt phase diagram \cite{massal}. Both ordered NiPt and Ni$_3$Pt have an optical conductivity transition at 4.12 eV, whereas disordered NiPt has one at 3.93 eV. Decreasing the Pt content results in a chemical phase transition from NiPt to Ni$_3$Pt and shifts the peak position by 1.67 eV into the ultraviolet range, to 5.6 eV. There is a significant broadening of the UV peak with increasing Pt content due to enhancement of 3d(Ni)-5d(Pt) bonding. Chemical disorder enhances the optical response of NiPt alloys by nearly one order of magnitude. Our study also shows a fragile magnetic effect on the optical response of disordered Ni$_{1-x}$Pt$_x$ (0.4$<$ x $<$0.6) binary alloys. Our theoretical predictions agree well with both earlier experimental and theoretical investigations.
0
1
0
0
0
0
Train on Validation: Squeezing the Data Lemon
Model selection on validation data is an essential step in machine learning. While mixing data between training and validation is considered taboo, practitioners often violate it to increase performance. Here, we offer a simple, practical method for using the validation set for training, which allows for a continuous, controlled trade-off between performance and overfitting of model selection. We define the notion of on-average-validation-stable algorithms, for which using small portions of validation data for training does not overfit the model selection process. We then prove that stable algorithms are also validation stable. Finally, we demonstrate our method on the MNIST and CIFAR-10 datasets using stable algorithms as well as state-of-the-art neural networks. Our results show a significant increase in test performance with a minor trade-off in the bias admitted to the model selection process.
0
0
0
1
0
0
Local migration quantification method for scratch assays
Motivation: The scratch assay is a standard experimental protocol used to characterize cell migration. It can be used to identify genes that regulate migration and to evaluate the efficacy of potential drugs that inhibit cancer invasion. In these experiments, a scratch is made on a cell monolayer and recolonisation of the scratched region is imaged to quantify cell migration rates. A drawback of this methodology is its lack of reproducibility, resulting in irregular cell-free areas with crooked leading edges. Existing quantification methods deal poorly with such irregularities present in the data. Results: We introduce a new quantification method that can analyse low-quality experimental data. By considering in-silico and in-vitro data, we show that the method provides a more accurate statistical classification of migration rates than two established quantification methods. Application of this method will enable the quantification of migration rates of scratch assay data previously unsuitable for analysis. Availability and Implementation: The source code and the implementation of the algorithm as a GUI, along with an example dataset and user instructions, are available at this https URL. The datasets are available at this https URL.
0
0
0
0
1
0
Assessing the level of merging errors for coauthorship data: a Bayesian model
Robust analysis of coauthorship networks is based on high-quality data. However, ground-truth data are usually unavailable. Empirical data suffer from several types of errors; a typical one, called a merging error, identifies different persons as one entity. Specific features of authors have been used to reduce these errors. We propose a Bayesian model to calculate the information carried by any given features of authors. Based on the features, the model can be utilized to calculate the rate of merging errors for entities. Therefore, the model helps to find informative features for detecting heavily compromised entities. It has potential contributions to improving the quality of empirical data.
1
0
0
0
0
0
Blood-based metabolic signatures in Alzheimer's disease
Introduction: Identification of blood-based metabolic changes might provide early and easy-to-obtain biomarkers. Methods: We included 127 AD patients and 121 controls with CSF-biomarker-confirmed diagnosis (cut-off tau/A$\beta_{42}$: 0.52). Mass spectrometry platforms determined the concentrations of 53 amine, 22 organic acid, 120 lipid, and 40 oxidative stress compounds. Multiple signatures were assessed: differential expression (nested linear models), classification (logistic regression), and regulatory (network extraction). Results: Twenty-six metabolites were differentially expressed. Metabolites improved the classification performance of clinical variables from 74% to 79%. Network models identified 5 hubs of metabolic dysregulation: Tyrosine, glycylglycine, glutamine, lysophosphatic acid C18:2 and platelet activating factor C16:0. The metabolite network for APOE $\epsilon$4 negative AD patients was less cohesive compared to the network for APOE $\epsilon$4 positive AD patients. Discussion: Multiple signatures point to various promising peripheral markers for further validation. The network differences in AD patients according to APOE genotype may reflect different pathways to AD.
0
0
0
1
0
0
The Indecomposable Solutions of Linear Congruences
This article considers the minimal non-zero (= indecomposable) solutions of the linear congruence $1\cdot x_1 + \cdots + (m-1)\cdot x_{m-1} \equiv 0 \pmod m$ for unknown non-negative integers $x_1, \ldots, x_{m-1}$, and characterizes the solutions that attain the Eggleton-Erdős bound. Furthermore it discusses the asymptotic behaviour of the number of indecomposable solutions. The results have direct interpretations in terms of zero-sum sequences and invariant theory.
0
0
1
0
0
0
Weak multiplier Hopf algebras III. Integrals and duality
Let $(A,\Delta)$ be a weak multiplier Hopf algebra. It is a pair of a non-degenerate algebra $A$, with or without identity, and a coproduct $\Delta$ on $A$, satisfying certain properties. The main difference with multiplier Hopf algebras is that now, the canonical maps $T_1$ and $T_2$ on $A\otimes A$, defined by $$T_1(a\otimes b)=\Delta(a)(1\otimes b) \qquad\quad\text{and}\qquad\quad T_2(c\otimes a)=(c\otimes 1)\Delta(a),$$ are no longer assumed to be bijective. Also recall that a weak multiplier Hopf algebra is called regular if its antipode is a bijective map from $A$ to itself. In this paper, we introduce and study the notion of integrals on such regular weak multiplier Hopf algebras. A left integral is a non-zero linear functional on $A$ that is left invariant (in an appropriate sense). Similarly for a right integral. For a regular weak multiplier Hopf algebra $(A,\Delta)$ with (sufficiently many) integrals, we construct the dual $(\widehat A,\widehat\Delta)$. It is again a regular weak multiplier Hopf algebra with (sufficiently many) integrals. This duality extends the known duality of finite-dimensional weak Hopf algebras to this more general case. It also extends the duality of multiplier Hopf algebras with integrals, the so-called algebraic quantum groups. For this reason, we will sometimes call a regular weak multiplier Hopf algebra with enough integrals an algebraic quantum groupoid. We discuss the relation of our work with the work on duality for algebraic quantum groupoids by Timmermann. We also illustrate this duality with a particular example in a separate paper. In this paper, we only mention the main definitions and results for this example. However, we do consider the two natural weak multiplier Hopf algebras associated with a groupoid in detail and show that they are dual to each other in the sense of the above duality.
0
0
1
0
0
0
Monte-Carlo Tree Search by Best Arm Identification
Recent advances in bandit tools and techniques for sequential learning are steadily enabling new applications and are promising the resolution of a range of challenging related problems. We study the game tree search problem, where the goal is to quickly identify the optimal move in a given game tree by sequentially sampling its stochastic payoffs. We develop new algorithms for trees of arbitrary depth, that operate by summarizing all deeper levels of the tree into confidence intervals at depth one, and applying a best arm identification procedure at the root. We prove new sample complexity guarantees with a refined dependence on the problem instance. We show experimentally that our algorithms outperform existing elimination-based algorithms and match previous special-purpose methods for depth-two trees.
1
0
0
1
0
0
Efficient Model-Based Deep Reinforcement Learning with Variational State Tabulation
Modern reinforcement learning algorithms reach super-human performance on many board and video games, but they are sample inefficient, i.e. they typically require significantly more playing experience than humans to reach an equal performance level. To improve sample efficiency, an agent may build a model of the environment and use planning methods to update its policy. In this article we introduce Variational State Tabulation (VaST), which maps an environment with a high-dimensional state space (e.g. the space of visual inputs) to an abstract tabular model. Prioritized sweeping with small backups, a highly efficient planning method, can then be used to update state-action values. We show how VaST can rapidly learn to maximize reward in tasks like 3D navigation and efficiently adapt to sudden changes in rewards or transition probabilities.
0
0
0
1
0
0
Griffiths Singularities in the Random Quantum Ising Antiferromagnet: A Tree Tensor Network Renormalization Group Study
The antiferromagnetic Ising chain in both transverse and longitudinal magnetic fields is one of the paradigmatic models of a quantum phase transition. The antiferromagnetic system exhibits a zero-temperature critical line separating an antiferromagnetic phase and a paramagnetic phase; the critical line connects an integrable quantum critical point at zero longitudinal field and a classical first-order transition point at zero transverse field. Using a strong-disorder renormalization group method formulated as a tree tensor network, we study the zero-temperature phase of the quantum Ising chain with bond randomness. We introduce a new matrix product operator representation of high-order moments, which provides an efficient and accurate tool for determining quantum phase transitions via the Binder cumulant of the order parameter. Our results demonstrate an infinite-randomness quantum critical point in zero longitudinal field accompanied by pronounced quantum Griffiths singularities, arising from rare ordered regions with anomalously slow fluctuations inside the paramagnetic phase. The strong Griffiths effects are signaled by a large dynamical exponent $z>1$, which characterizes a power-law density of low-energy states of the localized rare regions and becomes infinite at the quantum critical point. Upon application of a longitudinal field, the quantum phase transition between the paramagnetic phase and the antiferromagnetic phase is completely destroyed. Furthermore, quantum Griffiths effects are suppressed, showing $z<1$, when the dynamics of the rare regions is hampered by the longitudinal field.
0
1
0
0
0
0
Which Neural Net Architectures Give Rise To Exploding and Vanishing Gradients?
We give a rigorous analysis of the statistical behavior of gradients in a randomly initialized fully connected network N with ReLU activations. Our results show that the empirical variance of the squares of the entries in the input-output Jacobian of N is exponential in a simple architecture-dependent constant beta, given by the sum of the reciprocals of the hidden layer widths. When beta is large, the gradients computed by N at initialization vary wildly. Our approach complements the mean field theory analysis of random networks. From this point of view, we rigorously compute finite width corrections to the statistics of gradients at the edge of chaos.
0
0
0
1
0
0
Curriculum-Based Neighborhood Sampling For Sequence Prediction
The task of multi-step ahead prediction in language models is challenging considering the discrepancy between training and testing. At test time, a language model is required to make predictions given past predictions as input, instead of the past targets that are provided during training. This difference, known as exposure bias, can lead to the compounding of errors along a generated sequence at test time. In order to improve generalization in neural language models and address compounding errors, we propose a curriculum learning based method that gradually changes an initially deterministic teacher policy to a gradually more stochastic policy, which we refer to as \textit{Nearest-Neighbor Replacement Sampling}. A chosen input at a given timestep is replaced with a sampled nearest neighbor of the past target with a truncated probability proportional to the cosine similarity between the original word and its top $k$ most similar words. This allows the teacher to explore alternatives when the teacher provides a sub-optimal policy or when the initial policy is difficult for the learner to model. The proposed strategy is straightforward, online, and requires little additional memory. We report our main findings on two language modelling benchmarks and find that the proposed approach performs particularly well when used in conjunction with scheduled sampling, which also attempts to mitigate compounding errors in language models.
0
0
0
1
0
0
Randomized Composable Coresets for Matching and Vertex Cover
A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say $k$, machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, then this results in a communication-efficient and round-optimal protocol. While many fundamental graph problems admit efficient solutions in this model, two prominent problems are notably absent from the list of successes, namely, the maximum matching problem and the minimum vertex cover problem. Indeed, it was shown recently that for both these problems, even achieving a polylog$(n)$ approximation requires essentially sending the entire input graph from each machine. The main insight of our work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines. We show that when the underlying graph is randomly partitioned across machines, both these problems admit randomized composable coresets of size $\widetilde{O}(n)$ that yield an $\widetilde{O}(1)$-approximate solution. This results in an $\widetilde{O}(1)$-approximation simultaneous protocol for these problems with $\widetilde{O}(nk)$ total communication when the input is randomly partitioned across $k$ machines. We further prove the optimality of our results. Finally, by a standard application of composable coresets, our results also imply MapReduce algorithms with the same approximation guarantee in one or two rounds of communication.
1
0
0
0
0
0
simode: R Package for statistical inference of ordinary differential equations using separable integral-matching
In this paper we describe simode: Separable Integral Matching for Ordinary Differential Equations. The statistical methodologies applied in the package focus on several minimization procedures of an integral-matching criterion function, taking advantage of the mathematical structure of the differential equations, such as separability of parameters from equations. Application of integral-based methods to parameter estimation of ordinary differential equations has been shown to yield more accurate and stable results compared to derivative-based ones. Linear features such as separability were shown to ease optimization and inference. We demonstrate the functionalities of the package using various systems of ordinary differential equations.
0
0
0
1
0
0
Positive scalar curvature and the Euler class
We prove the following generalization of the classical Lichnerowicz vanishing theorem: if $F$ is an oriented flat vector bundle over a closed spin manifold $M$ such that $TM$ carries a metric of positive scalar curvature, then $<\widehat A(TM)e(F),[M]>=0$, where $e(F)$ is the Euler class of $F$.
0
0
1
0
0
0
Online Service with Delay
In this paper, we introduce the online service with delay problem. In this problem, there are $n$ points in a metric space that issue service requests over time, and a server that serves these requests. The goal is to minimize the sum of distance traveled by the server and the total delay in serving the requests. This problem models the fundamental tradeoff between batching requests to improve locality and reducing delay to improve response time, that has many applications in operations management, operating systems, logistics, supply chain management, and scheduling. Our main result is to show a poly-logarithmic competitive ratio for the online service with delay problem. This result is obtained by an algorithm that we call the preemptive service algorithm. The salient feature of this algorithm is a process called preemptive service, which uses a novel combination of (recursive) time forwarding and spatial exploration on a metric space. We hope this technique will be useful for related problems such as reordering buffer management, online TSP, vehicle routing, etc. We also generalize our results to $k > 1$ servers.
1
0
0
0
0
0
Comparison of dynamic mechanical properties of non-superheated and superheated A357 alloys
The influence of superheat treatment on the microstructure and dynamic mechanical properties of A357 alloys has been investigated. The microstructure was studied with an optical microscope. Dynamic mechanical properties (storage modulus, loss modulus, and damping capacity) were measured by a dynamic mechanical analyzer (DMA). The microstructure showed coarser and angular eutectic Si particles with larger {\alpha}-Al dendrites in the non-superheated A357 alloy. In contrast, finer and rounded eutectic Si particles together with smaller and preferentially oriented {\alpha}-Al dendrites have been observed in the superheated A357 alloy. Dynamic mechanical properties showed an increasing trend of loss modulus and damping capacity and a decreasing trend of storage modulus at elevated temperatures for both superheated and non-superheated A357 alloys. The high damping capacity of the superheated A357 alloy has been ascribed to grain boundary damping at elevated temperatures.
0
1
0
0
0
0
On the accuracy and usefulness of analytic energy models for contemporary multicore processors
This paper presents refinements to the execution-cache-memory performance model and a previously published power model for multicore processors. The combination of both enables a very accurate prediction of performance and energy consumption of contemporary multicore processors as a function of relevant parameters such as number of active cores as well as core and Uncore frequencies. Model validation is performed on the Sandy Bridge-EP and Broadwell-EP microarchitectures. Production-related variations in chip quality are demonstrated through a statistical analysis of the fit parameters obtained on one hundred Broadwell-EP CPUs of the same model. Insights from the models are used to explain the performance- and energy-related behavior of the processors for scalable as well as saturating (i.e., memory-bound) codes. In the process we demonstrate the models' capability to identify optimal operating points with respect to highest performance, lowest energy-to-solution, and lowest energy-delay product and identify a set of best practices for energy-efficient execution.
1
0
0
0
0
0
A Simple Solution for Maximum Range Flight
Within the standard framework of quasi-steady flight, this paper derives a speed that realizes the maximal obtainable range per unit of fuel. If this speed is chosen at each instant of a flight plan $h(x)$ giving altitude $h$ as a function of distance $x$, a variational problem for finding an optimal $h(x)$ can be formulated and solved. It yields flight plans with maximal range, and these turn out to consist of mainly three phases using the optimal speed: starting with a climb at maximal continuous admissible thrust, ending with a continuous descent at idle thrust, and in between a transition based on a solution of the Euler-Lagrange equation for the variational problem. A similar variational problem is derived and solved for speed-restricted flights, e.g. at 250 KIAS below 10000 ft. In contrast to the literature, the approach of this paper requires no more than solving standard ordinary differential equations arising from the variational problems to derive range-optimal trajectories. Various numerical examples based on a Standard Business Jet are added for illustration.
0
0
1
0
0
0
Multiscale Residual Mixture of PCA: Dynamic Dictionaries for Optimal Basis Learning
In this paper we are interested in the problem of learning an over-complete basis and a methodology such that the reconstruction or inverse problem does not need optimization. We analyze the optimality of the presented approaches and their link to popular known techniques such as Artificial Neural Networks, k-means, and Oja's learning rule. We then see that one approach to reaching the optimal dictionary is a factorial and hierarchical approach. The derived approach leads to the formulation of a Deep Oja Network. We present results on different tasks and present the resulting very efficient learning algorithm, which brings a new vision of the training of deep nets. Finally, the theoretical work shows that deep frameworks are one way to efficiently obtain an over-complete (combinatorially large) dictionary while still allowing easy reconstruction. We thus present the Deep Residual Oja Network (DRON). We demonstrate that a recursive deep approach working on the residuals allows an exponential decrease of the error with respect to the depth.
1
0
0
1
0
0
On factorizations of graphical maps
We study the categories governing infinity (wheeled) properads. The graphical category, which was already known to be generalized Reedy, is in fact an Eilenberg-Zilber category. A minor alteration to the definition of the wheeled graphical category allows us to show that it is a generalized Reedy category. Finally, we present model structures for Segal properads and Segal wheeled properads.
0
0
1
0
0
0
Raking-ratio empirical process with auxiliary information learning
The raking-ratio method is a statistical and computational method which adjusts the empirical measure to match the true probability of sets in a finite partition. We study the asymptotic behavior of the raking-ratio empirical process indexed by a class of functions when the auxiliary information is given by the learning of the probability of sets in partitions from another sample larger than the sample of the statistician. Under some metric entropy hypothesis and conditions on the size of the independent samples, we establish the strong approximation of this process with estimated auxiliary information and show in particular that weak convergence is the same as the classical raking-ratio empirical process. We also give possible statistical applications of these results like strengthening the $Z$-test and the chi-square goodness of fit test.
0
0
1
1
0
0
Improving drug sensitivity predictions in precision medicine through active expert knowledge elicitation
Predicting the efficacy of a drug for a given individual, using high-dimensional genomic measurements, is at the core of precision medicine. However, identifying features on which to base the predictions remains a challenge, especially when the sample size is small. Incorporating expert knowledge offers a promising alternative to improve a prediction model, but collecting such knowledge is laborious for the expert if the number of candidate features is very large. We introduce a probabilistic model that can incorporate expert feedback about the impact of genomic measurements on the sensitivity of a cancer cell for a given drug. We also present two methods to intelligently collect this feedback from the expert, using experimental design and multi-armed bandit models. In a multiple myeloma blood cancer data set (n=51), expert knowledge decreased the prediction error by 8%. Furthermore, the intelligent approaches can be used to reduce the workload of feedback collection to less than 30% on average compared to a naive approach.
1
0
0
1
0
0
On Vague Computers
Vagueness is something everyone is familiar with. In fact, most people think that vagueness is closely related to language and exists only there. However, vagueness is a property of the physical world. Quantum computers harness superposition and entanglement to perform their computational tasks. Both superposition and entanglement are vague processes. Thus quantum computers, which process exact data without "exploiting" vagueness, are actually vague computers.
1
0
0
0
0
0
Exploiting OxRAM Resistive Switching for Dynamic Range Improvement of CMOS Image Sensors
We present a unique application of OxRAM devices in CMOS Image Sensors (CIS) for dynamic range (DR) improvement. We propose a modified 3T-APS (Active Pixel Sensor) circuit that incorporates OxRAM in 1T-1R configuration. DR improvement is achieved by resistive compression of the pixel output signal through autonomous programming of OxRAM device resistance during exposure. We show that by carefully preconditioning the OxRAM resistance, pixel DR can be enhanced. Detailed impact of OxRAM SET-to-RESET and RESET-to-SET transitions on pixel DR is discussed. For experimental validation with specific OxRAM preprogrammed states, a 4 Kb 10 nm thick HfOx (1T-1R) matrix was fabricated and characterized. Best case, relative pixel DR improvement of ~ 50 dB was obtained for our design.
1
0
0
0
0
0
Magnetic control of Goos-Hanchen shifts in a yttrium-iron-garnet film
We investigate the Goos-Hanchen (G-H) shifts reflected and transmitted by a yttrium-iron-garnet (YIG) film for both normal and oblique incidence. It is found that the nonreciprocity effect of the MO material does not only result in a nonvanishing reflected shift at normal incidence, but also leads to a slab-thickness-independent term which breaks the symmetry between the reflected and transmitted shifts at oblique incidence. The asymptotic behaviors of the normal-incidence reflected shift are obtained in the vicinity of two characteristic frequencies corresponding to a minimum reflectivity and a total reflection, respectively. Moreover, the coexistence of two types of negative-reflected-shift (NRS) at oblique incidence is discussed. We show that the reversal of the shifts from positive to negative values can be realized by tuning the magnitude of applied magnetic field, the frequency of incident wave and the slab thickness as well as the incident angle. In addition, we further investigate two special cases for practical purposes: the reflected shift with a total reflection and the transmitted shift with a total transmission. Numerical simulations are also performed to verify our analytical results.
0
1
0
0
0
0
Effective mass of quasiparticles from thermodynamics
We discuss the potential advantages of calculating the effective mass of quasiparticles in the interacting electron liquid from the low-temperature free energy vis-a-vis the conventional approach, in which the effective mass is obtained from approximate calculations of the self-energy, or from a quantum Monte Carlo evaluation of the energy of a variational "quasiparticle wave function". While raw quantum Monte Carlo data are presently too sparse to allow for an accurate determination of the effective mass, the values estimated by this method are numerically close to the ones obtained in previous calculations using diagrammatic many-body theory. In contrast to this, a recently published parametrization of quantum Monte Carlo data for the free energy of the homogeneous electron liquid yields effective masses that considerably deviate from previous calculations and even change sign for low densities, reflecting an unphysical negative entropy. We suggest that this anomaly is related to the treatment of the exchange energy at finite temperature.
0
1
0
0
0
0
Dynamics over Signed Networks
A signed network is a network with each link associated with a positive or negative sign. Models for nodes interacting over such signed networks, where two different types of interactions take place along the positive and negative links, respectively, arise from various biological, social, political, and economic systems. As modifications to the conventional DeGroot dynamics for positive links, two basic types of negative interactions along negative links, namely the opposing rule and the repelling rule, have been proposed and studied in the literature. This paper reviews a few fundamental convergence results for such dynamics over deterministic or random signed networks under a unified algebraic-graphical method. We show that a systematic tool of studying node state evolution over signed networks can be obtained utilizing generalized Perron-Frobenius theory, graph theory, and elementary algebraic recursions.
1
0
0
0
0
0
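The opposing rule mentioned in the abstract can be illustrated in a few lines. The sketch below uses a standard Altafini-type signed update on a small, structurally balanced example graph (the matrix and values are illustrative choices, not taken from the paper): entries of the signed matrix sum to one in absolute value per row, and negative weights make a node adopt the opposite of its neighbour's state. On a structurally balanced graph this converges to bipolar consensus.

```python
import numpy as np

# Hedged sketch of the Altafini-type "opposing rule" on a signed graph.
# Rows of A are stochastic in absolute value; a negative entry flips the
# sign of the neighbour's state. This example graph is structurally
# balanced ({1,2} vs {3}), so bipolar consensus is expected.
A = np.array([
    [0.5,  0.5,  0.0],
    [0.25, 0.5, -0.25],
    [0.0, -0.5,  0.5],
])

x = np.array([1.0, 0.5, -0.2])   # illustrative initial node states
for _ in range(200):
    x = A @ x                    # opposing rule: signed weighted averaging

print(x)                         # nodes 1,2 agree; node 3 holds the opposite value
```

A gauge transformation D = diag(1, 1, -1) turns |A| into an ordinary stochastic matrix, which is exactly the generalized Perron-Frobenius argument the abstract alludes to.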
On the Estimation of Entropy in the FastICA Algorithm
The fastICA algorithm is a popular dimension reduction technique used to reveal patterns in data. Here we show that the approximations used in fastICA can result in patterns not being successfully recognised. We demonstrate this problem using a two-dimensional example where a clear structure is immediately visible to the naked eye, but where the projection chosen by fastICA fails to reveal this structure. This implies that care is needed when applying fastICA. We discuss how the problem arises and how it is intrinsically connected to the approximations that form the basis of the computational efficiency of fastICA.
0
0
0
1
0
0
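The entropy approximation at issue can be reproduced directly. The sketch below implements the standard fastICA negentropy proxy J(w) ≈ (E[G(wᵀx)] − E[G(ν)])² with G(u) = log cosh(u) and ν standard normal, and scans projection angles on a toy two-dimensional dataset (the data and the grid search are illustrative, not the paper's counterexample): one direction is uniform (non-Gaussian), the other Gaussian, and the proxy correctly peaks at the non-Gaussian direction here.

```python
import numpy as np

# Hedged sketch of the fastICA negentropy approximation with the
# log-cosh contrast. Data are an illustrative 2D toy set: one uniform
# (sub-Gaussian) coordinate mixed with a Gaussian one.
rng = np.random.default_rng(0)
s = np.column_stack([rng.uniform(-np.sqrt(3), np.sqrt(3), 5000),  # unit-variance uniform
                     rng.standard_normal(5000)])                  # Gaussian

G = lambda u: np.log(np.cosh(u))
EG_gauss = np.mean(G(rng.standard_normal(100000)))  # Monte-Carlo E[G(nu)]

def approx_negentropy(theta):
    w = np.array([np.cos(theta), np.sin(theta)])    # unit projection vector
    return (np.mean(G(s @ w)) - EG_gauss) ** 2

thetas = np.linspace(0.0, np.pi, 181)
best = thetas[np.argmax([approx_negentropy(t) for t in thetas])]
print(best)   # the maximiser should sit near 0 or pi (the uniform direction)
```

The paper's point is that this proxy is only a crude stand-in for entropy, so on less favourable data the maximiser can miss structure that is obvious to the eye.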
Spatial-Temporal Imaging of Anisotropic Photocarrier Dynamics in Black Phosphorus
As an emerging single elemental layered material with a low symmetry in-plane crystal lattice, black phosphorus (BP) has attracted significant research interest owing to its unique electronic and optoelectronic properties, including its widely tunable bandgap, polarization dependent photoresponse and highly anisotropic in-plane charge transport. Despite extensive study of the steady-state charge transport in BP, there has not been direct characterization and visualization of the hot-carrier dynamics in BP immediately after photoexcitation, which is crucial to understanding the performance of BP-based optoelectronic devices. Here we use the newly developed scanning ultrafast electron microscopy (SUEM) to directly visualize the motion of photo-excited hot carriers on the surface of BP in both space and time. We observe highly anisotropic in-plane diffusion of hot holes, with a 15-times higher diffusivity along the armchair (x-) direction than that along the zigzag (y-) direction. Our results provide direct evidence of anisotropic hot carrier transport in BP and demonstrate the capability of SUEM to resolve ultrafast hot carrier dynamics in layered two-dimensional materials.
0
1
0
0
0
0
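The anisotropy the abstract reports has a simple signature: for two-dimensional Gaussian spreading, the variance of the carrier cloud along each axis grows as σᵢ²(t) = 2Dᵢt, so a 15-fold diffusivity ratio appears directly as the aspect ratio of the spot. The simulation below is an illustrative sanity check of that relation (the D values and time are arbitrary stand-ins, not the measured BP numbers).

```python
import numpy as np

# Hedged illustration: anisotropic 2D diffusion gives axis variances
# sigma_i^2 = 2 * D_i * t, so var(x)/var(y) estimates Dx/Dy.
# Diffusivities and time are illustrative, not the BP measurements.
rng = np.random.default_rng(3)
Dx, Dy, t = 15.0, 1.0, 2.0                    # armchair vs zigzag diffusivity
n = 200000
x = rng.normal(0.0, np.sqrt(2 * Dx * t), n)   # carrier displacements at time t
y = rng.normal(0.0, np.sqrt(2 * Dy * t), n)

ratio = x.var() / y.var()                     # estimator for Dx / Dy
print(ratio)
```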
The maximal order of iterated multiplicative functions
Following Wigert, a great number of authors including Ramanujan, Gronwall, Erdős, Ivić, Heppner, J. Knopfmacher, Nicolas, Schwarz, Wirsing, Freiman, Shiu et al. determined the maximal order of several multiplicative functions, generalizing Wigert's result \[\max_{n\leq x} \log d(n)= (\log 2+o(1))\frac{\log x}{\log \log x}.\] By contrast, for many multiplicative functions, the maximal order of iterations of the functions remains wide open. The case of the iterated divisor function was only recently solved, answering a question of Ramanujan (1915). Here, we determine the maximal order of $\log f(f(n))$ for a class of multiplicative functions $f$ which are related to the divisor function. As a corollary, we apply this to the function counting representations as sums of two squares of non-negative integers, also known as $r_2(n)/4$, and obtain an asymptotic formula: \[\max_{n\leq x} \log f(f(n))= (c+o(1))\frac{\sqrt{\log x}}{\log \log x},\] with some explicitly given positive constant $c$.
0
0
1
0
0
0
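Wigert's bound can be explored numerically. The sketch below sieves divisor counts d(n) up to an arbitrary cutoff and reports the extremal n; the extremal values are attained at highly composite numbers (here 83160 = 2³·3³·5·7·11 with 128 divisors). Note that the o(1) term in Wigert's formula decays very slowly, so the printed ratio only loosely tracks log 2 at this scale.

```python
import math

# Hedged numerical illustration of Wigert's bound
#   max_{n<=x} log d(n) = (log 2 + o(1)) * log x / log log x.
# The cutoff N is an arbitrary choice; convergence is notoriously slow.
N = 100000
d = [0] * (N + 1)                     # divisor-count sieve
for i in range(1, N + 1):
    for m in range(i, N + 1, i):
        d[m] += 1

n_star = max(range(1, N + 1), key=lambda n: d[n])   # extremal (highly composite) n
ratio = math.log(d[n_star]) / (math.log(N) / math.log(math.log(N)))
print(n_star, d[n_star], ratio)
```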
Extending applicability of bimetric theory: chameleon bigravity
This article extends bimetric formulations of massive gravity to make the mass of the graviton depend on its environment. This minimal extension offers a novel way to reconcile massive gravity with local tests of general relativity without invoking the Vainshtein mechanism. On cosmological scales, it is argued that the model is stable and that it circumvents the Higuchi bound, hence relaxing the constraints on the parameter space. Moreover, with this extension the strong coupling scale is also environmentally dependent in such a way that it is kept sufficiently higher than the expansion rate all the way up to the very early universe, while the present graviton mass is low enough to be phenomenologically interesting. In this sense the extended bigravity theory serves as a partial UV completion of the standard bigravity theory. This extension is very generic and robust and a simple specific example is described.
0
1
0
0
0
0
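For readers unfamiliar with the mechanism the title alludes to, the chameleon idea in its simplest scalar form (a standard construction, not the bigravity realization of the article) makes a field's mass density-dependent through a matter coupling in the effective potential:

```latex
V_{\rm eff}(\phi;\rho) = V(\phi) + \frac{\beta\rho}{M_{\rm Pl}}\,\phi,
\qquad
V'(\phi_{\min}) = -\frac{\beta\rho}{M_{\rm Pl}},
\qquad
m^{2}(\rho) = V_{\rm eff}''\big(\phi_{\min}(\rho)\big).
```

With a runaway potential such as $V(\phi)=\Lambda^{5}/\phi$, the minimum $\phi_{\min}$ and hence the mass shift with the ambient density $\rho$: the field is heavy in dense environments (hiding from local tests) and light on cosmological backgrounds, which is the behaviour the article engineers for the graviton mass.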
The Scaling Limit of High-Dimensional Online Independent Component Analysis
We analyze the dynamics of an online algorithm for independent component analysis in the high-dimensional scaling limit. As the ambient dimension tends to infinity, and with proper time scaling, we show that the time-varying joint empirical measure of the target feature vector and the estimates provided by the algorithm will converge weakly to a deterministic measure-valued process that can be characterized as the unique solution of a nonlinear PDE. Numerical solutions of this PDE, which involves two spatial variables and one time variable, can be efficiently obtained. These solutions provide detailed information about the performance of the ICA algorithm, as many practical performance metrics are functionals of the joint empirical measures. Numerical simulations show that our asymptotic analysis is accurate even for moderate dimensions. In addition to providing a tool for understanding the performance of the algorithm, our PDE analysis also provides useful insight. In particular, in the high-dimensional limit, the original coupled dynamics associated with the algorithm will be asymptotically "decoupled", with each coordinate independently solving a 1-D effective minimization problem via stochastic gradient descent. Exploiting this insight to design new algorithms for achieving optimal trade-offs between computational and statistical efficiency may prove an interesting line of future research.
1
1
0
1
0
0
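An online ICA iteration of the kind analysed above processes one sample per step with a non-Gaussianity contrast and renormalisation. The sketch below is a hedged stand-in, not the paper's exact algorithm: a kurtosis-type (fourth-moment) update on a single-feature spiked model, with an illustrative step size, noise level, and a warm start to keep the run short.

```python
import numpy as np

# Hedged sketch of a streaming ICA update: one sample per step,
# fourth-moment contrast (suitable for the super-Gaussian Laplace
# source), renormalisation after each step. All parameters are
# illustrative choices, not taken from the paper.
rng = np.random.default_rng(1)
n = 16                                              # ambient dimension
c = rng.standard_normal(n); c /= np.linalg.norm(c)  # hidden feature vector

w = c + 0.3 * rng.standard_normal(n)                # warm start (illustrative)
w /= np.linalg.norm(w)

tau = 0.005                                         # step size
for _ in range(50000):
    s = rng.laplace(scale=1 / np.sqrt(2))           # unit-variance Laplace source
    x = s * c + 0.3 * rng.standard_normal(n)        # one streaming sample
    w += tau * (w @ x) ** 3 * x                     # ascend the 4th-moment contrast
    w /= np.linalg.norm(w)

overlap = abs(w @ c)
print(overlap)
```

In the paper's scaling limit, the empirical measure of (cᵢ, wᵢ) pairs in such an iteration is what converges to the solution of the limiting PDE.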
Direct characterization of a nonlinear photonic circuit's wave function with laser light
Integrated photonics is a leading platform for quantum technologies including nonclassical state generation \cite{Vergyris:2016-35975:SRP, Solntsev:2014-31007:PRX, Silverstone:2014-104:NPHOT, Solntsev:2016:RPH}, demonstration of quantum computational complexity \cite{Lamitral_NJP2016} and secure quantum communications \cite{Zhang:2014-130501:PRL}. As photonic circuits grow in complexity, full quantum tomography becomes impractical, and therefore an efficient method for their characterization \cite{Lobino:2008-563:SCI, Rahimi-Keshari:2011-13006:NJP} is essential. Here we propose and demonstrate a fast, reliable method for reconstructing the two-photon state produced by an arbitrary quadratically nonlinear optical circuit. By establishing a rigorous correspondence between the generated quantum state and classical sum-frequency generation measurements from laser light, we overcome the limitations of previous approaches for lossy multimode devices \cite{Liscidini:2013-193602:PRL, Helt:2015-1460:OL}. We applied this protocol to a multi-channel nonlinear waveguide network, and measured a 99.28$\pm$0.31\% fidelity between classical and quantum characterization. This technique enables fast and precise evaluation of nonlinear quantum photonic networks, a crucial step towards complex, large-scale device production.
0
1
0
0
0
0
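The fidelity figure quoted above compares two reconstructions of the same two-photon state. A hedged sketch of that comparison step: represent the state of an M-mode circuit as a normalised amplitude matrix A (A[m, n] for a photon pair in modes m, n) and compute the overlap fidelity F = |⟨ψ_cl|ψ_q⟩|². The matrices here are random stand-ins for the quantum and classical reconstructions, not measured data.

```python
import numpy as np

# Hedged sketch: overlap fidelity between a "quantum" two-photon
# amplitude matrix and a slightly perturbed "classical" reconstruction.
# Both matrices are random illustrative stand-ins.
rng = np.random.default_rng(7)
M = 4
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
A /= np.linalg.norm(A)                  # normalised "quantum" amplitudes
B = A + 0.03 * (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M)))
B /= np.linalg.norm(B)                  # "classical" reconstruction

fidelity = abs(np.vdot(A, B)) ** 2      # F = |<psi_cl|psi_q>|^2
print(fidelity)
```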
The Accuracy of Confidence Intervals for Field Normalised Indicators
When comparing the average citation impact of research groups, universities and countries, field normalisation reduces the influence of discipline and time. Confidence intervals for these indicators can help with attempts to infer whether differences between sets of publications are due to chance factors. Although both bootstrapping and formulae have been proposed for these, their accuracy is unknown. In response, this article uses simulated data to systematically compare the accuracy of confidence limits in the simplest possible case, a single field and year. The results suggest that the MNLCS (Mean Normalised Log-transformed Citation Score) confidence interval formula is conservative for large groups but almost always safe, whereas bootstrap MNLCS confidence intervals tend to be accurate but can be unsafe for smaller world or group sample sizes. In contrast, bootstrap MNCS (Mean Normalised Citation Score) confidence intervals can be very unsafe, although their accuracy increases with sample sizes.
1
0
0
0
0
0
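The bootstrap MNLCS interval studied above can be sketched concretely. The snippet below is a hedged illustration, not the paper's simulation design: citation counts are drawn from an arbitrary lognormal model, log-transformed as ln(1 + c), normalised by the world average, and the group mean is given a 95% percentile-bootstrap interval.

```python
import numpy as np

# Hedged sketch of a bootstrap CI for an MNLCS-style indicator.
# The citation model, sizes, and 95% level are illustrative choices.
rng = np.random.default_rng(42)

world = rng.lognormal(mean=1.0, sigma=1.1, size=5000).astype(int)  # "world" set
group = rng.lognormal(mean=1.2, sigma=1.1, size=200).astype(int)   # one group

world_avg = np.mean(np.log1p(world))
scores = np.log1p(group) / world_avg        # normalised log citation scores
mnlcs = scores.mean()                       # the MNLCS point estimate

boot = np.array([
    rng.choice(scores, size=scores.size, replace=True).mean()
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])   # percentile bootstrap interval
print(mnlcs, (lo, hi))
```

The paper's question is precisely how often such intervals achieve their nominal coverage as the group and world sample sizes vary.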
Generalized Theta Functions. I
Generalizations of classical theta functions are proposed that include any even number of analytic parameters, satisfy quasi-periodicity conditions, and furnish representations of the extended Heisenberg group. Differential equations for generalized theta functions and finite non-unitary representations of the extended Heisenberg group are presented; other properties and possible applications, such as a projective embedding of tori by means of generalized theta functions, are also pointed out.
0
1
0
0
0
0
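As a reminder of the classical case being generalized (standard material, not from the article): the Jacobi theta function and its quasi-periodicity read

```latex
\vartheta(z;\tau) = \sum_{n\in\mathbb{Z}} e^{\pi i n^{2}\tau + 2\pi i n z},
\qquad
\vartheta(z+1;\tau) = \vartheta(z;\tau),
\qquad
\vartheta(z+\tau;\tau) = e^{-\pi i\tau - 2\pi i z}\,\vartheta(z;\tau).
```

It is these quasi-periodicity factors, and the associated Heisenberg-group action, that the article extends to functions carrying an arbitrary even number of analytic parameters.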
Constructing grids for molecular quantum dynamics using an autoencoder
A challenge for molecular quantum dynamics (QD) calculations is the curse of dimensionality with respect to the nuclear degrees of freedom. A common approach that works especially well for fast reactive processes is to reduce the dimensionality of the system to a few most relevant coordinates. Identifying these can be very difficult, since they are often highly unintuitive. We present a machine learning approach that utilizes an autoencoder trained to find a low-dimensional representation of a set of molecular configurations. These configurations are generated by trajectory calculations performed on the reactive molecular systems of interest. The resulting low-dimensional representation can be used to generate a potential energy surface grid in the desired subspace. Using the G-matrix formalism to calculate the kinetic energy operator, QD calculations can be carried out on this grid. In addition to step-by-step instructions for the grid construction, we present the application to a test system.
0
1
0
0
0
0
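The core idea, compressing a set of configurations to a few latent coordinates that then serve as grid axes, can be sketched with the simplest possible autoencoder. The toy below is purely linear with a one-dimensional bottleneck (a linear autoencoder recovers the principal subspace); the paper's network and molecular data are of course more general, and all sizes here are illustrative.

```python
import numpy as np

# Hedged sketch: a linear, single-bottleneck autoencoder trained by
# gradient descent on toy "configurations" that mostly vary along one
# direction. The learned latent coordinate plays the role of the
# reduced grid axis. All dimensions and hyperparameters are illustrative.
rng = np.random.default_rng(0)

direction = np.array([0.6, 0.5, 0.4, 0.3, 0.38])
direction /= np.linalg.norm(direction)
X = rng.standard_normal((500, 1)) * direction + 0.05 * rng.standard_normal((500, 5))

W = 0.1 * rng.standard_normal((5, 1))   # encoder
V = 0.1 * rng.standard_normal((1, 5))   # decoder
lr = 0.05
for _ in range(3000):
    Z = X @ W                 # latent coordinate (candidate grid axis)
    R = Z @ V - X             # reconstruction residual
    gV = Z.T @ R / len(X)     # mean-squared-error gradients
    gW = X.T @ R @ V.T / len(X)
    V -= lr * gV
    W -= lr * gW

enc = W[:, 0] / np.linalg.norm(W[:, 0])
print(abs(enc @ direction))   # alignment with the dominant variation
```

With nonlinear activations the same training loop finds curved low-dimensional manifolds, which is what makes the approach useful for unintuitive reaction coordinates.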
3D Pursuit-Evasion for AUVs
In this paper, we consider the problem of pursuit-evasion using multiple Autonomous Underwater Vehicles (AUVs) in a 3D water volume, with and without simple obstacles. Pursuit-evasion is a well-studied topic in robotics, but the results are mostly set in 2D environments, using unlimited line of sight sensing. We propose an algorithm for range limited sensing in 3D environments that captures a finite-speed evader based on a single previous observation of its location. The pursuers are first moved to form a maximal cage formation, based on their number and sensor ranges, containing all of the possible evader locations. The cage is then shrunk until every part of that volume is sensed, thereby capturing the evader. The pursuers need only limited sensing range and low bandwidth communication, making the algorithm well suited for an underwater environment.
1
0
0
0
0
0
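A back-of-the-envelope version of the "maximal cage" sizing can be derived from a surface-coverage argument. If N pursuers sit on a sphere of radius R and each senses a ball of radius r, then each covers a spherical cap of area 2πR²(1 − cos φ) with φ = 2 arcsin(r/2R); requiring N caps to cover the 4πR² surface gives R ≤ r√N/2. This is a necessary-condition bound only (it ignores cap overlap and pursuer motion) and is not the paper's exact construction.

```python
import math

# Hedged area-based bound for the cage radius: N * capArea >= sphereArea
# simplifies, via 1 - cos(2*arcsin(r/2R)) = r^2 / (2R^2), to R <= r*sqrt(N)/2.
# Necessary condition only; overlap and vehicle motion are ignored.
def max_cage_radius(n_pursuers: int, sensor_range: float) -> float:
    """Upper bound on the radius of a fully sensed spherical cage."""
    return sensor_range * math.sqrt(n_pursuers) / 2.0

print(max_cage_radius(16, 10.0))   # -> 20.0
```

The bound makes the trade-off in the abstract concrete: the containable volume grows with the number of pursuers and their sensor range.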