We use the distribution of accreted stars in SDSS-Gaia DR2 to demonstrate that a non-trivial fraction of the dark matter halo within Galactocentric radii of 7.5-10 kpc and $|z| > 2.5$ kpc is in substructure, and thus may not be in equilibrium. Using a mixture likelihood analysis, we separate the contributions of an old, isotropic stellar halo and a younger anisotropic population. The latter dominates and is uniform within the region studied. It can be explained as the tidal debris of a disrupted massive satellite on a highly radial orbit, and is consistent with mounting evidence from recent studies. Simulations that track the tidal debris from such mergers find that the dark matter traces the kinematics of its stellar counterpart. If so, our results indicate that a component of the nearby dark matter halo that is sourced by luminous satellites is in kinematic substructure referred to as debris flow. These results challenge the Standard Halo Model, which is discrepant with the distribution recovered from the stellar data, and have important ramifications for the interpretation of direct detection experiments.
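As a schematic illustration of the mixture likelihood analysis mentioned above (a minimal sketch; the component densities, their parameters, and the mixing fraction $f$ are generic placeholders rather than the specific model of the abstract), a two-component fit to stellar velocities $v_i$ can be written as
\[
\mathcal{L}(f,\theta_{\rm iso},\theta_{\rm aniso}) \;=\; \prod_i \Big[\, f\, p_{\rm iso}(v_i \mid \theta_{\rm iso}) \;+\; (1-f)\, p_{\rm aniso}(v_i \mid \theta_{\rm aniso}) \,\Big],
\]
which is maximized (or sampled) to separate the isotropic halo component from the anisotropic debris component and to estimate their relative fraction.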
Surveillance of drug overdose deaths relies on death certificates for identification of the substances that caused death. Drugs and drug classes can be identified through the International Classification of Diseases, 10th Revision (ICD-10) codes present on death certificates. However, ICD-10 codes do not always provide high levels of specificity in drug identification. To achieve more fine-grained identification of substances on a death certificate, the free-text cause of death section, completed by the medical certifier, must be analyzed. Current methods for analyzing free-text death certificates rely solely on look-up tables for identifying specific substances, which must be frequently updated and maintained. To improve identification of drugs on death certificates, a deep learning named-entity recognition model was developed, which achieved an F1-score of 99.13%. This model can identify new drug misspellings and novel substances that are not present on current surveillance look-up tables, enhancing the surveillance of drug overdose deaths.
We present a novel homogeneous and geometrically flat exact solution of Einstein's General Relativity equations for an ideal fluid. The solution, which describes an expanding/contracting hypercylinder, fits well with the observational pillars upon which the standard FLRW cosmology relies and, furthermore, it can naturally solve some of its most outstanding problems.
We analyze two reduction methods for nonholonomic systems that are invariant under the action of a Lie group on the configuration space. Our approach for obtaining the reduced equations is entirely based on the observation that the dynamics can be represented by a second-order differential equation vector field, and that in both cases the reduced dynamics can be described by expressing that vector field in terms of an appropriately chosen anholonomic frame.
For $C^*$-algebras $\mathfrak{A}, A$ and $B$, where $A$ and $B$ are $\mathfrak{A}$-bimodules with compatible actions, we consider the amalgamated $\mathfrak{A}$-module tensor product of $A$ and $B$ and study its relation with the C*-tensor product of $A$ and $B$ for the min and max norms. We introduce and study the notions of module tensorizing maps, module exactness, and module nuclear pairs of $C^*$-algebras in this setting. We give concrete examples involving $C^*$-algebras of inverse semigroups.
Polynomial invariants are fundamental objects in analysis on Lie groups and symmetric spaces. Invariant differential operators on symmetric spaces are described by Weyl group invariant polynomials. In this article we give a simple criterion that ensures that the restriction of invariant polynomials to subspaces is surjective. In another paper we will apply our criterion to problems in Fourier analysis on projective/injective limits, specifically to theorems of Paley--Wiener type.
OPM is a small collection of CUTEst unconstrained and bound-constrained nonlinear optimization problems, which can be used in Matlab for testing optimization algorithms directly (i.e. without installing additional software).
We introduce the notion of \emph{joint spectrum} of a compact set of matrices $S \subset GL_d(\mathbb{C})$, which is a multi-dimensional generalization of the joint spectral radius. We begin with a thorough study of its properties (under various assumptions: irreducibility, Zariski density, domination). Several classical properties of the joint spectral radius are shown to hold in this generalized setting, and an analogue of the Lagarias-Wang finiteness conjecture is discussed. Then we relate the joint spectrum to matrix-valued random processes and study which of its points can be realized as Lyapunov vectors. We also show how the joint spectrum encodes all word metrics on reductive groups. Several examples are worked out in detail.
It has been known for a long time that the equivariant 2+1 wave map into the 2-sphere blows up if the initial data are chosen appropriately. Here, we present numerical evidence for the stability of the blow-up phenomenon under explicit violations of equivariance.
This paper proposes a new method for joint design of radiofrequency (RF) and gradient waveforms in Magnetic Resonance Imaging (MRI), and applies it to the design of 3D spatially tailored saturation and inversion pulses. The joint design of both waveforms is characterized by the ODE Bloch equations, to which there is no known direct solution. Existing approaches therefore typically rely on simplified problem formulations based on, e.g., the small-tip approximation or constraining the gradient waveforms to particular shapes, and often apply only to specific objective functions for a narrow set of design goals (e.g., ignoring hardware constraints). This paper develops and exploits an auto-differentiable Bloch simulator to directly compute Jacobians of the (Bloch-simulated) excitation pattern with respect to RF and gradient waveforms. This approach is compatible with \emph{arbitrary} sub-differentiable loss functions, and optimizes the RF and gradients directly without restricting the waveform shapes. For computational efficiency, we derive and implement explicit Bloch simulator Jacobians (approximately halving computation time and memory usage). To enforce hardware limits (peak RF, gradient, and slew rate), we use a change of variables that makes the 3D pulse design problem effectively unconstrained; we then optimize the resulting problem directly using the proposed auto-differentiation framework. We demonstrate our approach with two kinds of 3D excitation pulses that cannot be easily designed with conventional approaches: Outer-volume saturation (90{\deg} flip angle), and inner-volume inversion.
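The change of variables that removes the hardware constraints can be illustrated with a minimal sketch (an assumption-laden toy example, not the authors' code: the tanh reparameterization, the variable names, and the peak value are placeholders):

import numpy as np

def bounded(u, limit):
    # Map an unconstrained variable u to (-limit, limit); any u is feasible by construction.
    return limit * np.tanh(u)

u = np.linspace(-3.0, 3.0, 7)        # unconstrained optimization variables
rf = bounded(u, limit=0.25)          # reparameterized RF samples (arbitrary units)
print(np.abs(rf).max() <= 0.25)      # True for every u, so no explicit peak-RF constraint is needed

Optimizing over u instead of rf turns the constrained design into an unconstrained one, so standard gradient-based optimizers (driven by an auto-differentiated Bloch simulator) can be applied directly.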
Optimal power flow (OPF) is a key problem in power system operations. OPF problems that use the nonlinear AC power flow equations to accurately model the network physics have inherent challenges associated with non-convexity. To address these challenges, recent research has applied various convex relaxation approaches to OPF problems. The QC relaxation is a promising approach that convexifies the trigonometric and product terms in the OPF problem by enclosing these terms in convex envelopes. The accuracy of the QC relaxation strongly depends on the tightness of these envelopes. This paper presents two improvements to these envelopes. The first improvement leverages a polar representation of the branch admittances in addition to the rectangular representation used previously. The second improvement is based on a coordinate transformation via a complex per unit base power normalization that rotates the power flow equations. The trigonometric envelopes resulting from this rotation can be tighter than the corresponding envelopes in previous QC relaxation formulations. Using an empirical analysis with a variety of test cases, this paper suggests an appropriate value for the angle of the complex base power. Comparing the results with a state-of-the-art QC formulation reveals the advantages of the proposed improvements.
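For orientation, the product terms in QC-type relaxations are commonly enclosed in McCormick envelopes; the standard envelope for $w = xy$ with $x \in [\underline{x},\overline{x}]$, $y \in [\underline{y},\overline{y}]$ reads (this is the textbook construction, not the specific tightened envelopes proposed in this paper):
\[
\begin{aligned}
w &\geq \underline{x}\,y + \underline{y}\,x - \underline{x}\,\underline{y}, &\qquad w &\geq \overline{x}\,y + \overline{y}\,x - \overline{x}\,\overline{y},\\
w &\leq \underline{x}\,y + \overline{y}\,x - \underline{x}\,\overline{y}, &\qquad w &\leq \overline{x}\,y + \underline{y}\,x - \overline{x}\,\underline{y}.
\end{aligned}
\]
Tightening the bounds on $x$ and $y$ (for example via the rotation described above) directly tightens these envelopes.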
In this paper, we investigate the problem of semi-global minimal time robust stabilization of analytic control systems with controls entering linearly, by means of a hybrid state feedback law. It is shown that, in the absence of minimal time singular trajectories, the solutions of the closed-loop system converge to the origin in quasi minimal time (for a given bound on the controller) with a robustness property with respect to small measurement noise, external disturbances and actuator noise.
Spectra and spin structures of Andreev interface states and the Josephson current are investigated theoretically in junctions between clean superconductors (SC) with ordered interlayers. The Josephson current through the ferromagnet-insulator-ferromagnet interlayer can exhibit a nonmonotonic dependence on the misorientation angle. The characteristic behavior takes place if the pi state is the equilibrium state of the junction in the particular case of parallel magnetizations. We find a novel channel of quasiparticle reflection (Q reflection) from the simplest two-sublattice antiferromagnet (AF) on a bipartite lattice. As a combined effect of Andreev and Q reflections, Andreev states arise at the AF/SC interface. When the Q reflection dominates the specular one, Andreev bound states have almost zero energy on AF/ s-wave SC interfaces, whereas they lie near the edge of the continuous spectrum for AF/d-wave SC boundaries. For an s-wave SC/AF/s-wave SC junction, the bound states are found to split and carry the supercurrent. Our analytical results are based on a novel quasiclassical approach, which applies to interfaces involving itinerant antiferromagnets. Similar effects can take place on interfaces of superconductors with charge density wave materials (CDW), including the possible d-density wave state (DDW) of the cuprates.
Modelling, specifying and reasoning about complex systems requires processing declarative and procedural aspects of the target domain in an integrated fashion. The paper reports on an experiment conducted with a propositional version of Logic Programming Petri Nets (LPPNs), a notation extending Petri Nets with logic programming constructs. Two semantics are presented: a denotational semantics that fully maps the notation to Answer Set Programming (ASP) via the Event Calculus; and a hybrid operational semantics that processes separately the causal mechanisms, via Petri nets, and the constraints associated with objects and events, via ASP. These two alternative specifications enable an empirical evaluation in terms of computational efficiency. Experimental results show that the hybrid semantics is more efficient w.r.t. sequences, whereas the two semantics follow the same behaviour w.r.t. branchings (although the denotational one performs better in absolute terms).
Stable gravitating lumps with a false vacuum core surrounded by the true vacuum in a scalar field potential exist in the presence of fermions at the core. These objects may exist in the universe at various scales.
Goals are first-class entities in a self-adaptive system (SAS) as they guide the self-adaptation. A SAS often operates in dynamic and partially unknown environments, which cause uncertainty that the SAS has to address to achieve its goals. Moreover, besides the environment, other classes of uncertainty have been identified. However, these various classes and their sources are not systematically addressed by current approaches throughout the life cycle of the SAS. In general, uncertainty typically makes it infeasible to provide assurances for SAS goals exclusively at design time. This calls for an assurance process that spans the whole life cycle of the SAS. In this work, we propose a goal-oriented assurance process that supports taming different sources (within different classes) of uncertainty, from defining the goals at design time to performing self-adaptation at runtime. Based on a goal model augmented with uncertainty annotations, we automatically generate parametric symbolic formulae with parameterized uncertainties at design time using symbolic model checking. These formulae and the goal model guide the synthesis of adaptation policies by engineers. At runtime, the generated formulae are evaluated to resolve the uncertainty and to steer the self-adaptation using the policies. In this paper, we focus on reliability and cost properties, for which we evaluate our approach on the Body Sensor Network (BSN) implemented in OpenDaVINCI. The results of the validation are promising and show that our approach is able to systematically tame multiple classes of uncertainty, and that it is effective and efficient in providing assurances for the goals of self-adaptive systems.
Kurdish, an Indo-European language spoken by over 30 million speakers, is considered a dialect continuum and known for its diversity in language varieties. Previous studies addressing language and speech technology for Kurdish handle it in a monolithic way as a macro-language, resulting in disparities for dialects and varieties for which there are few resources and tools available. In this paper, we take a step towards developing resources for language and speech technology for varieties of Central Kurdish, creating a corpus by transcribing movies and TV series as an alternative to fieldwork. Additionally, we report the performance of machine translation, automatic speech recognition, and language identification as downstream tasks evaluated on Central Kurdish varieties. Data and models are publicly available under an open license at https://github.com/sinaahmadi/CORDI.
We introduce a set of consistency conditions on the S-matrix of theories of massless particles of arbitrary spin in four-dimensional Minkowski space-time. We find that in most cases the constraints, derived from the conditions, can only be satisfied if the S-matrix is trivial. Our conditions apply to theories where four-particle scattering amplitudes can be obtained from three-particle ones via a recent technique called the BCFW construction. We call theories in this class constructible. We propose a program for performing a systematic search for constructible theories that can have non-trivial S-matrices. As illustrations, we provide simple proofs of already known facts, like the impossibility of spin $s > 2$ non-trivial S-matrices, the impossibility of several interacting spin 2 particles, and the uniqueness of a theory with spin 2 and spin 3/2 particles.
Atom interferometry relies on the separation and recombination of atom wavepackets. When the two paths overlap perfectly at the end of the interferometer, the phase is insensitive to the atomic velocity distribution. Here, we show that, when the separation and recombination are performed using a Raman transition, there is a displacement of the atomic wavepacket due to a phase shift during the light pulses. Because of the variation of the laser intensity seen by the atoms, there is an imperfect cancellation of these displacements. The observation of a velocity-dependent phase shift on the interferometer is the signature of this effect, which has been modeled. Thanks to the signature we have identified, we are able to compensate for this effect by applying laser power ramps during the interferometer to mitigate intensity variations.
The holographic dual of a gravitational theory around the de Sitter background is argued to be a Euclidean conformal gravity theory in one fewer dimensions. The measure for the holographic theory naturally includes a sum over topologies as well as conformal structures.
The spin-Peierls transition is modeled in the dimer phase of the spin-$1/2$ chain with exchanges $J_1$, $J_2 = \alpha J_1$ between first and second neighbors. The degenerate ground state generates an energy cusp that qualitatively changes the dimerization $\delta(T)$ compared to Peierls systems with nondegenerate ground states. The parameters $J_1 = 160$ K, $\alpha = 0.35$ plus a lattice stiffness account for the magnetic susceptibility of CuGeO$_3$, its specific heat anomaly, and the $T$ dependence of the lowest gap.
Audio-visual speech recognition (AVSR) has achieved remarkable success in improving the noise robustness of speech recognition. Mainstream methods focus on fusing audio and visual inputs to obtain modality-invariant representations. However, such representations are prone to over-reliance on the audio modality, as it is much easier to recognize than the video modality in clean conditions. As a result, the AVSR model underestimates the importance of the visual stream in the face of noise corruption. To this end, we leverage visual modality-specific representations to provide stable complementary information for the AVSR task. Specifically, we propose a reinforcement learning (RL) based framework called MSRL, where the agent dynamically harmonizes modality-invariant and modality-specific representations in the auto-regressive decoding process. We customize a reward function directly related to task-specific metrics (i.e., word error rate), which encourages MSRL to effectively explore the optimal integration strategy. Experimental results on the LRS3 dataset show that the proposed method achieves state-of-the-art performance in both clean and various noisy conditions. Furthermore, we demonstrate the better generality of the MSRL system compared with other baselines when the test set contains unseen noise.
The VIMOS VLT Deep Survey (VVDS), designed to measure 150,000 galaxy redshifts, requires a dedicated data reduction and analysis pipeline to process in a timely fashion the large amount of spectroscopic data being produced. This requirement has led to the development of the VIMOS Interactive Pipeline and Graphical Interface (VIPGI), a new software package designed to greatly simplify the task of reducing astronomical data obtained with VIMOS, the imaging spectrograph built by the VIRMOS Consortium for the European Southern Observatory, and mounted on Unit 3 (Melipal) of the Very Large Telescope (VLT) at Paranal Observatory (Chile). VIPGI provides the astronomer with specially designed VIMOS data reduction functions, a VIMOS-centric data organizer, and dedicated data browsing and plotting tools that can be used to verify the quality and accuracy of the various stages of the data reduction process. The quality and accuracy of the data reduction pipeline are comparable to those obtained using well-known IRAF tasks, but the speed of the data reduction process is significantly increased thanks to the large set of dedicated features. In this paper we discuss the details of the MOS data reduction pipeline implemented in VIPGI, as applied to the reduction of some 20,000 VVDS spectra, assessing quantitatively the accuracy of the various reduction steps. We also provide a more general overview of the capabilities of VIPGI, a tool that can be used for the reduction of any kind of VIMOS data.
Leader election is one of the fundamental and well-studied problems in distributed computing. In this paper, we initiate the study of leader election using mobile agents. Suppose $n$ agents are positioned initially arbitrarily on the nodes of an arbitrary, anonymous, $n$-node, $m$-edge graph $G$. The agents relocate themselves autonomously on the nodes of $G$ and elect an agent as a leader such that the leader agent knows it is a leader and the other agents know they are not leaders. The objective is to minimize time and memory requirements. Following the literature, we consider the synchronous setting in which each agent performs its operations synchronously with others and hence the time complexity can be measured in rounds. The quest in this paper is to provide solutions without agents knowing any graph parameter, such as $n$, a priori. We first establish that, without agents knowing any graph parameter a priori, there exists a deterministic algorithm to elect an agent as a leader in $O(m)$ rounds with $O(n\log n)$ bits at each agent. Using this leader election result, we develop a deterministic algorithm for agents to construct a minimum spanning tree of $G$ in $O(m+n\log n)$ rounds using $O(n \log n)$ bits memory at each agent, without agents knowing any graph parameter a priori. Finally, using the same leader election result, we provide improved time/memory results for other fundamental distributed graph problems, namely, gathering, maximal independent set, and minimal dominating sets, removing the assumptions on agents knowing graph parameters a priori.
In the O(3) sigma-model description of gapped spin systems, S=1 magnons can only decay into three lower-energy magnons. We argue that the symmetry of the quantum spin Hamiltonian often allows decay into two magnons, and compute this decay rate in model systems. Two-magnon decay is present in Haldane gap S=1 spin chains, even though it cannot be induced by any allowed term written in powers and gradients of the sigma-model field. We compare our results with recent measurements of Stone et al. (cond-mat/0511266) on a two-dimensional spin system.
The Q&U Bolometric Interferometer for Cosmology (QUBIC) is a novel kind of polarimeter optimized for the measurement of the B-mode polarization of the Cosmic Microwave Background (CMB), which is one of the major challenges of observational cosmology. The signal is expected to be of the order of a few tens of nK, prone to instrumental systematic effects and polluted by various astrophysical foregrounds which can only be controlled through multichroic observations. QUBIC is designed to address these observational issues with a novel approach that combines the advantages of interferometry in terms of control of instrumental systematics with those of bolometric detectors in terms of wide-band, background-limited sensitivity.
To put new constraints on the r-mode instability window, we analyse the formation of millisecond pulsars (MSPs) within the recycling scenario, making use of three sets of observations: (a) X-ray observations of neutron stars (NSs) in low-mass X-ray binaries; (b) timing of millisecond pulsars; and (c) X-ray and UV observations of MSPs. As shown in previous works, r-mode dissipation by shear viscosity is not sufficient to explain observational set (a), and enhanced r-mode dissipation at the red-shifted internal temperatures $T^\infty\sim 10^8$ K is required to stabilize the observed NSs. Here, we argue that models with enhanced bulk viscosity can hardly lead to a self-consistent explanation of observational set (a) due to the strong neutrino emission typical for these models (an unrealistically powerful energy source would be required to keep NSs at the observed temperatures). We also demonstrate that observational set (b), combined with the theory of internal heating and NS cooling, provides evidence of enhanced r-mode dissipation at low temperatures, $T^\infty\sim 2\times 10^7$ K. Observational set (c) allows us to set an upper limit on the internal temperatures of MSPs, $T^\infty<2\times 10^7$ K (assuming a canonical NS with an accreted crust). The recycling scenario can produce MSPs at these temperatures only if the r-mode instability is suppressed in the whole MSP spin frequency range ($\nu\lesssim 750$ Hz) at temperatures $2\times 10^7\lesssim T^\infty\lesssim 3 \times 10^7$ K, thus providing a new constraint on the r-mode instability window. These observational constraints are analysed in more detail in application to the resonance uplift scenario of Gusakov et al. [Phys. Rev. Lett., 112 (2014), 151101].
We describe a new finite element method (FEM) to construct continuous equilibrium distribution functions of stellar systems. The method is a generalization of Schwarzschild's orbit superposition method from the space of discrete functions to continuous ones. In contrast to Schwarzschild's method, FEM produces a continuous distribution function (DF) and satisfies the intra element continuity and Jeans equations. The method employs two finite-element meshes, one in configuration space and one in action space. The DF is represented by its values at the nodes of the action-space mesh and by interpolating functions inside the elements. The Galerkin projection of all equations that involve the DF leads to a linear system of equations, which can be solved for the nodal values of the DF using linear or quadratic programming, or other optimization methods. We illustrate the superior performance of FEM by constructing ergodic and anisotropic equilibrium DFs for spherical stellar systems (Hernquist models). We also show that explicitly constraining the DF by the Jeans equations leads to smoother and/or more accurate solutions with both Schwarzschild's method and FEM.
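A minimal sketch of the final linear-programming step (the Galerkin matrix, its size, and the objective below are illustrative stand-ins, not the actual FEM system of the paper):

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Stand-in Galerkin system M f = b for the nodal DF values, with f >= 0.
n_nodes, n_constraints = 20, 8
M = rng.random((n_constraints, n_nodes))
b = M @ rng.random(n_nodes)                # built so a nonnegative solution exists

# Minimize the total nodal weight subject to the equality constraints and nonnegativity.
res = linprog(c=np.ones(n_nodes), A_eq=M, b_eq=b, bounds=[(0.0, None)] * n_nodes)
print(res.status, res.x[:5])               # status 0 means an optimal feasible DF was found

In the actual method the equality rows would come from the Galerkin projection of the density (and, optionally, the Jeans equations), and quadratic programming can be used instead when a smoothness objective is preferred.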
Evidence for an explicitly exotic state with isospin 2 and spin-parity 2^+ near the $\rho\rho$ threshold and nontrivial complementary indications of the unusual quark composition of the $f_0(980)$ and $a_0(980)$ states obtained from the reactions of two-photon formation of neutral meson resonances are discussed, together with puzzling phenomena in the channels $\gamma\gamma\to\rho^0\phi$ and $\gamma\gamma\to\rho^0\rho^0$ at high energies.
In this work, we report that the Hawking radiation effect on fermions is fundamentally different from the case of scalar particles. Intrinsic properties of fermions (the exclusion principle and spin) strongly affect the interaction of fermions with both the Hawking radiation and the metric of the spacetime. In particular, we have found the following: first, while the fermion vacuum state seen by the Rindler observer is an entangled state in which the right and left Rindler wedge states appear in correlated pairs as in the case of scalar particles, the entanglement disappears in the excited state due to the exclusion principle; second, the spin of the fermion experiences the Wigner rotation under a uniform acceleration; and third, the quantum information of fermions encoded in spin (an entangled state composed of different spin states but with the same mode function) is dissipated not by the Hawking radiation but by the Wigner rotation as the pair approaches the event horizon.
Statistical physics is employed to evaluate the performance of error-correcting codes in the case of finite message length for an ensemble of Gallager's error-correcting codes. We follow Gallager's approach of upper-bounding the average decoding error rate, but invoke the replica method to reproduce the tightest general bound to date, and to improve on the most accurate zero-error noise level threshold reported in the literature. The relation between the methods used and those presented in the information theory literature is explored.
It is well known that excessively heavy supersymmetric particles (sparticles) are disfavored to explain the $(g-2)_\mu$ anomaly, but it is often overlooked that moderately light sparticles are also disfavored by the LHC probes of supersymmetry. We take the Next-to-Minimal Supersymmetric Standard Model as an example to emphasize the latter point. It is found that, if the theory is required to explain the anomaly at the $2\sigma$ level and meanwhile remain consistent with the LHC results, the following lower bounds may be set: $\tan \beta \gtrsim 20$, $|M_1| \gtrsim 275~{\rm GeV}$, $M_2 \gtrsim 300~{\rm GeV}$, $\mu \gtrsim 460~{\rm GeV}$, $m_{\tilde{\mu}_L} \gtrsim 310~{\rm GeV}$, and $m_{\tilde{\mu}_R} \gtrsim 350~{\rm GeV}$, where $M_1$ and $M_2$ denote gaugino masses, $\mu$ represents the Higgsino mass, and $m_{\tilde{\mu}_L}$ and $m_{\tilde{\mu}_R}$ are the smuon masses, with $L$ and $R$ denoting their dominant chiral components. This observation has significant impacts on dark matter (DM) physics, e.g., the popular $Z$- and Higgs-funnel regions have been excluded, and the Bino-dominated neutralino DM has to co-annihilate with the Wino-dominated electroweakinos (in most cases) and/or smuons (in a few cases) to obtain the correct density. It is also inferred that these conclusions should apply to the Minimal Supersymmetric Standard Model since the underlying physics for the bounds is the same.
We present a study on the efficacy of adversarial training on transformer neural network models, with respect to the task of detecting check-worthy claims. In this work, we introduce the first adversarially-regularized, transformer-based claim spotter model that achieves state-of-the-art results on multiple challenging benchmarks. We obtain a 4.70 point F1-score improvement over current state-of-the-art models on the ClaimBuster Dataset and CLEF2019 Dataset. In the process, we propose a method to apply adversarial training to transformer models, which has the potential to be generalized to many similar text classification tasks. Along with our results, we are releasing our codebase and manually labeled datasets. We also showcase our models' real-world usage via a live public API.
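A minimal sketch of embedding-space adversarial regularization of the kind referred to above (a toy stand-in: the tiny embedding/linear classifier, the perturbation size, and the loss combination are illustrative assumptions, not the authors' transformer model):

import torch
import torch.nn as nn

torch.manual_seed(0)

emb = nn.Embedding(100, 16)                  # stand-in for the transformer's input embeddings
clf = nn.Linear(16, 2)                       # stand-in for the claim-spotting head
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, 100, (4, 12))      # hypothetical token-id batch
labels = torch.randint(0, 2, (4,))

e = emb(tokens).detach().requires_grad_(True)         # perturb the embeddings, not the tokens
clean_loss = loss_fn(clf(e.mean(dim=1)), labels)
clean_loss.backward()

eps = 0.1
e_adv = e + eps * e.grad.sign()                       # FGSM-style perturbation of the embeddings
adv_loss = loss_fn(clf(e_adv.mean(dim=1)), labels)    # adversarial regularization term
print(clean_loss.item(), adv_loss.item())

In full training, the clean and adversarial losses are combined and backpropagated through the whole model, so the network learns to be robust to small perturbations of its input embeddings.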
In order to increase their robustness against environmental fluctuations, many biological populations have developed bet-hedging mechanisms in which the population `bets' against the presence of prolonged favorable environmental conditions by having a few individuals behave as if they sensed a threatening or stressful environment. As a result, the population (as a whole) increases its chances of surviving environmental fluctuations in the long term, while sacrificing short-term performance. In this paper, we propose a theoretical framework, based on Markov jump linear systems, to model and evaluate the performance of bet-hedging strategies in the presence of stochastic fluctuations. We illustrate our results using numerical simulations.
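A minimal sketch of a Markov jump linear system of the kind used as the modeling framework above (the two modes, their matrices, and the switching probabilities are illustrative assumptions, not parameters from the paper):

import numpy as np

rng = np.random.default_rng(0)

# Two environments (modes): 0 = favorable, 1 = stressful; state = (normal, bet-hedging) subpopulations.
A = [np.array([[1.10, 0.00], [0.00, 0.80]]),
     np.array([[0.30, 0.00], [0.00, 0.95]])]
P = np.array([[0.9, 0.1],        # row-stochastic matrix of environment-switching probabilities
              [0.5, 0.5]])

x = np.array([1.0, 0.05])        # initial subpopulation sizes
mode = 0
for _ in range(100):
    x = A[mode] @ x                          # linear population update in the current mode
    mode = rng.choice(2, p=P[mode])          # Markov switch of the environment

print(x)                                     # long-run population under the switching environment

Performance of a bet-hedging strategy can then be quantified, for instance, through the long-run growth rate of the total population averaged over many realizations of the environment process.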
We discuss the problem of initial states for a system of coupled scalar fields out of equilibrium in the one-loop approximation. The fields consist of classical background fields, taken constant in space, and quantum fluctuations. If the initial state is the adiabatic vacuum, i.e., the ground state of a Fock space of particle excitations that diagonalize the mass matrix, the energy-momentum tensor is infinite at t=0, with its most singular part behaving as 1/t. When the system is coupled to gravity this presents a problem that we solve by a Bogoliubov transformation of the naive initial state. As a side result we also discuss the canonical formalism and the adiabatic particle number for such a system. Most of the formalism is presented for Minkowski space. Embedding the system and its dynamics into a flat FRW universe is straightforward and we briefly address the essential modifications.
A numerical investigation of a two dimensional integrated fiber grating coupler capable of exciting several LP fiber modes in both TE and TM polarization is presented. Simulation results and an assessment of the numerical complexity of the 3D, fully vectorial finite element model of the device are shown.
Using a minimum model consisting of a magnetic quantum dot and an electron lead, we investigate spin pumping by its precessing magnetization. Focusing on the "adiabaticity", which is quantified using a comparison between the frequency of precession and the relaxation rate of the relevant system, we investigate the role of nonadiabaticity in spin pumping by obtaining the dependence of the spin current generated on the frequency of precession using full counting statistics. This evaluation shows that the steady-state population of the quantum dot remains unchanged by the precession owing to the rotational symmetry about the axis of precession. This implies that in the adiabatic limit the spin current is absent and that spin pumping is entirely a nonadiabatic effect. We also find that the nonadiabatic spin current depends linearly on the frequency in the low-frequency regime and exhibits an oscillation in the high-frequency regime. The oscillation points to an enhancement of spin pumping by tuning the frequency of precession.
Tidal torque theory and simulations of large-scale structure predict that the spin vectors of massive galaxies should be coplanar with sheets in the cosmic web. As recently demonstrated, the giants (K$_{s}$ $\leq$ -22.5 mag) in the Local Volume beyond the Local Sheet have spin vectors directed close to the plane of the Local Supercluster, supporting the predictions of tidal torque theory. However, the giants in the Local Sheet encircling the Local Group display a distinctly different arrangement, suggesting that the mass asymmetry of the Local Group or its progenitor torqued them from their primordial spin directions. To investigate the origin of the spin alignment of giants locally, analogues of the Local Sheet were identified in the SDSS DR9. Similar to the Local Sheet, analogues have an interacting pair of disk galaxies isolated from the remaining sheet members. Modified sheets in which there is no interacting pair of disk galaxies were identified as a control sample. Galaxies in face-on control sheets do not display axis ratios predominantly weighted toward low values, contrary to the expectation of tidal torque theory. For face-on and edge-on sheets, the distribution of axis ratios for galaxies in analogues is distinct from that in controls with a confidence of 97.6\% and 96.9\%, respectively. This corroborates the hypothesis that an interacting pair can affect spin directions of neighbouring galaxies.
We present the results of high-frequency mixing experiments performed upon parallel quantum point-contacts defined in the two-dimensional electron gas of an Al_{x}Ga_{1-x}As/GaAs-heterostructure. The parallel geometry, fabricated using a novel double-resist technology, enables the point-contact device to be impedance matched over a wide frequency range and, in addition, increases the power levels of the mixing signal while simultaneously reducing the parasitic source-drain capacitance. Here, we consider two parallel quantum point-contact devices with 155 and 110 point-contacts respectively; both devices operated successfully at liquid helium and liquid nitrogen temperatures with a minimal conversion loss of 13 dB.
We present a model in which planetesimal disks are built from the combination of planetesimal formation and accretion of radially drifting pebbles onto existing planetesimals. In this model, the rate of accretion of pebbles onto planetesimals quickly outpaces the rate of direct planetesimal formation in the inner disk. This allows for the formation of a high mass inner disk without the need for enhanced planetesimal formation or a massive protoplanetary disk. Our proposed mechanism for planetesimal disk growth does not require any special conditions to operate. Consequently, we expect that high mass planetesimal disks form naturally in nearly all systems. The extent of this growth is controlled by the total mass in pebbles that drifts through the inner disk. Anything that reduces the rate or duration of pebble delivery will correspondingly reduce the final mass of the planetesimal disk. Therefore, we expect that low mass stars (with less massive protoplanetary disks), low metallicity stars and stars with giant planets should all grow less massive planetesimal disks. The evolution of planetesimal disks into planetary systems remains a mystery. However, we argue that late stage planet formation models should begin with a massive disk. This reinforces the idea that massive and compact planetary systems could form in situ but does not exclude the possibility that significant migration occurs post-planet formation.
This study empirically tests the $\textit{Narrative Economics}$ hypothesis, which posits that narratives (ideas that are spread virally and affect public beliefs) can influence economic fluctuations. We introduce two curated datasets containing posts from X (formerly Twitter) which capture economy-related narratives (Data will be shared upon paper acceptance). Employing Natural Language Processing (NLP) methods, we extract and summarize narratives from the tweets. We test their predictive power for $\textit{macroeconomic}$ forecasting by incorporating the tweets' or the extracted narratives' representations in downstream financial prediction tasks. Our work highlights the challenges in improving macroeconomic models with narrative data, paving the way for the research community to realistically address this important challenge. From a scientific perspective, our investigation offers valuable insights and NLP tools for narrative extraction and summarization using Large Language Models (LLMs), contributing to future research on the role of narratives in economics.
I consider some selected topics in chiral perturbation theory (CHPT). For the meson sector, emphasis is put on processes involving pions in the isospin zero S-wave which require multi-loop calculations. The advantages and shortcomings of heavy baryon CHPT are discussed. Some recent results on the structure of the baryons are also presented.
A theoretical description of photon-pair production in polarized positron-electron annihilation is presented. Complete one-loop electroweak radiative corrections are calculated taking into account the exact dependence on the electron mass. Analytical results are derived with the help of the SANC~system. The relevant contributions to the cross section are calculated analytically using the helicity amplitude approach. The cases of unpolarized and longitudinally polarized fermions in the initial state are investigated. Calculations are realized in the Monte Carlo integrator MCSANCee and the generator ReneSANCe, which allow one to implement any experimental cuts used in the analysis of $e^+e^-$ annihilation data at both low and high energies.
The susceptibility of deep neural networks (DNNs) to adversarial examples has prompted an increase in the deployment of adversarial attacks. Image-agnostic universal adversarial perturbations (UAPs) are much more threatening, but many limitations exist to implementing UAPs in real-world scenarios where only binary decisions are returned. In this research, we propose Decision-BADGE, a novel method to craft universal adversarial perturbations for executing decision-based black-box attacks. To optimize the perturbation using decisions alone, we address two challenges, namely the magnitude and the direction of the gradient. First, we determine the magnitude of the gradient using a batch loss based on the differences between the decisions accumulated over a batch and the ground-truth distribution. This magnitude is applied in the direction given by a revised simultaneous perturbation stochastic approximation (SPSA) to update the perturbation. This simple yet efficient method can be easily extended to score-based attacks as well as targeted attacks. Experimental validation across multiple victim models demonstrates that Decision-BADGE outperforms existing attack methods, even image-specific and score-based attacks. In particular, our proposed method shows a superior success rate with less training time. The research also shows that Decision-BADGE can successfully deceive unseen victim models and accurately target specific classes.
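A minimal sketch of the SPSA-style update underlying the approach described above (this is the generic simultaneous perturbation estimator with placeholder victim decisions, not the authors' revised variant):

import numpy as np

rng = np.random.default_rng(0)
images = rng.normal(size=(16, 3, 8, 8))      # stand-in image batch

def decision_loss(delta):
    # Placeholder black-box objective: fraction of batch 'decisions' that remain correct under delta.
    scores = np.sum(images * (1.0 + delta), axis=(1, 2, 3))
    return float(np.mean(scores > 0.0))

def spsa_gradient(loss, delta, c=0.01):
    # One Rademacher direction and two loss evaluations estimate the gradient direction.
    d = rng.choice([-1.0, 1.0], size=delta.shape)
    return (loss(delta + c * d) - loss(delta - c * d)) / (2.0 * c) * d

delta = np.zeros((3, 8, 8))                  # universal perturbation shared across the batch
for _ in range(50):
    delta -= 0.05 * spsa_gradient(decision_loss, delta)   # descend on the decision-based loss

print(decision_loss(delta))

The batch of accumulated decisions plays the role of the loss value, so no scores or gradients from the victim model are required.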
In this study, one of the mean-field theories in nematics, the Maier-Saupe theory (MST), is generalized within Tsallis Thermostatistics (TT). The variation of the order parameter with temperature has been investigated and compared with experimental data for PAA (p-azoxyanisole). To the best of our knowledge, this is the first application of TT to liquid crystals. It is well known that MST fails to explain the experimental data for some of the nematics, one of which is PAA. However, the generalized MST (GMST) is able to account for the experimental data over a wide range of temperatures. Also in this study, the effect of nonextensivity is shown for various values of the entropic index.
The extragalactic background light (EBL) contains important information about stellar and galaxy evolution. It leaves an imprint on the very high energy $\gamma$-ray spectra of sources at cosmological distances due to the process of pair production. In this work we propose to {\em measure} the EBL directly by extracting the collective attenuation effects in a number of $\gamma$-ray sources at different redshifts. Using a Markov Chain Monte Carlo fitting method, the EBL intensities and the intrinsic spectral parameters of the $\gamma$-ray sources are derived simultaneously. No prior shape of the EBL is assumed in the fit. With this method, we can for the first time derive the spectral shape of the EBL model-independently. Our result shows the expected features predicted by present EBL models and thus supports the current understanding of the EBL origin.
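The attenuation effect exploited by the fit takes the standard form (quoted here for orientation; the notation is generic rather than specific to this work)
\[
F_{\rm obs}(E) \;=\; F_{\rm int}(E)\, e^{-\tau_{\gamma\gamma}(E, z)},
\]
where $F_{\rm int}$ is the intrinsic source spectrum and the optical depth $\tau_{\gamma\gamma}$ is obtained by integrating the EBL photon density, weighted by the pair-production cross section, along the line of sight to redshift $z$; fitting many sources at different $z$ simultaneously constrains both $F_{\rm int}$ and the EBL intensity.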
We design a new observable, the expansion rate fluctuation $\eta$, to characterize deviations from the linear relation between redshift and distance in the local universe. We also show how to compress the resulting signal into spherical harmonic coefficients in order to better decipher the structure and symmetries of the anisotropies in the local expansion rate. We apply this analysis scheme to several public catalogs of redshift-independent distances, the Cosmicflows-3 and Pantheon data sets, covering the redshift range $0.01<z<0.05$. The leading anisotropic signal is stored in the dipole. Within the standard cosmological model, it is interpreted as a bulk motion ($307 \pm 23$ km/s) of the entire local volume in a direction aligned at better than $4$ degrees with the bulk component of the Local Group velocity with respect to the CMB. This term alone, however, provides an overly simplistic and inaccurate description of the angular anisotropies of the expansion rate. We find that the quadrupole contribution is non-negligible ($\sim 50\%$ of the anisotropic signal), in fact, statistically significant, and signaling a substantial shearing of gravity in the volume covered by the data. In addition, the 3D structure of the quadrupole is axisymmetric, with the expansion axis aligned along the axis of the dipole. Implications for the determination of the $H_0$ parameter are also discussed.
We show that, in the presence of bulk masses, sterile neutrinos propagating in large extra dimensions (LED) can induce electron-neutrino appearance effects. This is in contrast to what happens in the standard LED scenario and hence LED models with explicit bulk masses have the potential to address the MiniBooNE and LSND appearance results, as well as the reactor and Gallium anomalies. A special feature in our scenario is that the mixing of the first KK modes to active neutrinos can be suppressed, making the contribution of heavier sterile neutrinos to oscillations relatively more important. We study the implications of this neutrino mass generation mechanism for current and future neutrino oscillation experiments, and show that the Short-Baseline Neutrino Program at Fermilab will be able to efficiently probe such a scenario. In addition, this framework leads to massive Dirac neutrinos and thus precludes any signal in neutrinoless double beta decay experiments.
We study the thermal and curvature effects on spontaneous symmetry breaking in $\phi^4$ theory. The effective potential is evaluated in a $D$-dimensional static universe with positive curvature, $R \times S^{D-1}$, or negative curvature, $R \times H^{D-1}$. It is shown that temperature and positive curvature suppress the symmetry breaking, while negative curvature enhances it. To consider the back-reaction we numerically solve the gap equation and the Einstein equation simultaneously. The solution gives the relationship between the temperature and the scale factor.
The excitation of internal gravity waves by penetrative convective plumes is investigated using 2-D direct simulations of compressible convection. The wave generation is quantitatively studied from the linear response of the radiative zone to the penetration of the plumes, using projections onto the g-mode solutions of the associated linear eigenvalue problem for the perturbations. This allows an accurate determination of both the spectrum and the amplitudes of the stochastically excited modes. Using time-frequency diagrams of the mode amplitudes, we then show that the lifetime of a mode is around twice its period and that, during times of significant excitation, up to 40% of the total kinetic energy may be contained in g-modes.
It has been shown that relevance judgment of documents is influenced by multiple factors beyond topicality. Some multidimensional user relevance models (MURM) proposed in literature have investigated the impact of different dimensions of relevance on user judgment. Our hypothesis is that a user might give more importance to certain relevance dimensions in a session which might change dynamically as the session progresses. This motivates the need to capture the weights of different relevance dimensions using feedback and build a model to rank documents for subsequent queries according to these weights. We propose a geometric model inspired by the mathematical framework of Quantum theory to capture the user's importance given to each dimension of relevance and test our hypothesis on data from a web search engine and TREC Session track.
A hypergraph can be obtained from a simplicial complex by deleting some non-maximal simplices. By [11], a hypergraph gives an associated simplicial complex. By [4], the embedded homology of a hypergraph is the homology of the infimum chain complex, or equivalently, the homology of the supremum chain complex. In this paper, we generalize the discrete Morse theory for simplicial complexes by R. Forman [5-7] and give a discrete Morse theory for hypergraphs. We use the critical simplices of the associated simplicial complex to construct a sub-chain complex of the infimum chain complex and a sub-chain complex of the supremum chain complex, then prove that the embedded homology of a hypergraph is isomorphic to the homology of the constructed chain complexes. Moreover, we define discrete Morse functions on hypergraphs and compute the embedded homology in terms of the critical hyperedges. As by-products, we derive some Morse inequalities and collapse results for hypergraphs.
Quantum computation and quantum information are of great current interest in computer science, mathematics, physical sciences and engineering. They will likely lead to a new wave of technological innovations in communication, computation and cryptography. As the theory of quantum physics is fundamentally stochastic, randomness and uncertainty are deeply rooted in quantum computation, quantum simulation and quantum information. Consequently quantum algorithms are random in nature, and quantum simulation utilizes Monte Carlo techniques extensively. Thus statistics can play an important role in quantum computation and quantum simulation, which in turn offer great potential to revolutionize computational statistics. While only pseudo-random numbers can be generated by classical computers, quantum computers are able to produce genuine random numbers; quantum computers can exponentially or quadratically speed up median evaluation, Monte Carlo integration and Markov chain simulation. This paper gives a brief review on quantum computation, quantum simulation and quantum information. We introduce the basic concepts of quantum computation and quantum simulation and present quantum algorithms that are known to be much faster than the available classic algorithms. We provide a statistical framework for the analysis of quantum algorithms and quantum simulation.
The Swampland program aims to distinguish effective theories which can be completed into quantum gravity in the ultraviolet from those which cannot. This article forms an introduction to the field, assuming only a knowledge of quantum field theory and general relativity. It also forms a comprehensive review, covering the range of ideas that are part of the field, from the Weak Gravity Conjecture, through compactifications of String Theory, to the de Sitter conjecture.
We characterize the functions $f\colon [0,1] \longrightarrow [0,1]$ for which there exists a measurable set $C\subseteq [0,1]$ of positive measure satisfying $\frac{|C\cap I|}{|I|}<f(|I|)$ for any nontrivial interval $I \subseteq [0,1]$. As an application, we prove that on any $C^1$ curve it is possible to construct a Lipschitz function that cannot be approximated by Lipschitz functions attaining their Lipschitz constant.
Web crawling is the problem of keeping a cache of webpages fresh, i.e., having the most recent copy available when a page is requested. This problem is usually coupled with the natural restriction that the bandwidth available to the web crawler is limited. The corresponding optimization problem was solved optimally by Azar et al. [2018] under the assumption that, for each webpage, both the elapsed time between two changes and the elapsed time between two requests follow a Poisson distribution with known parameters. In this paper, we study the same control problem but under the assumption that the change rates are unknown a priori, and thus we need to estimate them in an online fashion using only partial observations (i.e., single-bit signals indicating whether the page has changed since the last refresh). As a point of departure, we characterise the conditions under which one can solve the problem with such partial observability. Next, we propose a practical estimator and compute confidence intervals for it in terms of the elapsed time between the observations. Finally, we show that the explore-and-commit algorithm achieves an $\mathcal{O}(\sqrt{T})$ regret with a carefully chosen exploration horizon. Our simulation study shows that our online policy scales well and achieves close to optimal performance for a wide range of the parameters.
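A minimal sketch of the estimation step for a single page (a toy maximum-likelihood version under the stated Poisson-change assumption; the numbers and the use of scipy are illustrative, not the estimator or confidence intervals proposed in the paper):

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

true_rate = 0.7                                     # hypothetical Poisson change rate of the page
intervals = rng.exponential(1.0, size=500)          # elapsed times between refreshes
changed = rng.random(500) < 1.0 - np.exp(-true_rate * intervals)   # single-bit observations

def neg_log_lik(lam):
    # P(change observed in an interval of length t) = 1 - exp(-lam * t).
    p = 1.0 - np.exp(-lam * intervals)
    return -np.sum(np.where(changed, np.log(p), -lam * intervals))

est = minimize_scalar(neg_log_lik, bounds=(1e-6, 10.0), method="bounded").x
print(est)                                           # should be close to true_rate

An explore-and-commit policy would refresh pages during an exploration phase to collect such observations, estimate the rates, and then commit to the scheduling policy that is optimal for the estimated rates.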
The paper continues nlin.SI/0212019 by giving three more examples of using cyclic bases of zero-curvature representations in studies of the relation between strong Lax pairs and recursion operators.
Stroke is a major cause of mortality and long-term disability in the world. Predictive outcome models in stroke are valuable for personalized treatment, rehabilitation planning and in controlled clinical trials. In this paper we design a new model to predict outcome in the short term, the putative therapeutic window for several treatments. Our regression-based model has a parametric form that is designed to address many challenges common in medical datasets, like highly correlated variables and class imbalance. Empirically, our model outperforms the best-known previous models in predicting short-term outcomes and in inferring the most effective treatments that improve outcome.
In this article we prove that if the $q$-fractional operator $({}_{q}\nabla_{qa}^\alpha y)(t)$ of order $0<\alpha\leq 1$, $0<q<1$, starting at some $qa \in T_q=\{q^k: k \in \mathbb{Z}\}\cup \{0\}$, $a>0$, is positive and $y(a) \geq 0$, then $y(t)$ is $c_q(\alpha)$-increasing, where $c_q(\alpha)=\frac{1-q^\alpha}{1-q}q^{1-\alpha}$. Conversely, if $y(t)$ is increasing and $y(a)\geq 0$, then $({}_{q}\nabla_{qa}^\alpha y)(t)\geq 0$. As an application, we prove a $q$-fractional version of the mean value theorem.
We investigate several geometric models of networks which simultaneously have several nice global properties: the small-diameter property; the small-community phenomenon, which was defined by the authors (2011) to capture the common experience that (almost) everyone in our society belongs to some meaningful small communities; and, under certain conditions on the parameters, the power-law degree distribution, which significantly strengthens the results given by van den Esker (2008) and Jordan (2010). The results above, together with our previous progress in Li and Peng (2011), build a mathematical foundation for the study of communities and the small-community phenomenon in various networks. In the proof of the power-law degree distribution, we develop the method of alternating concentration analysis to build concentration inequalities by alternately and iteratively applying both the sub- and super-martingale inequalities; this method seems powerful and may have more potential applications.
We present a uniform analysis of the atmospheric escape rate of Neptune-like planets with estimated radius and mass (restricted to $M_{\rm p}<30\,M_{\oplus}$). For each planet we compute the restricted Jeans escape parameter, $\Lambda$, for a hydrogen atom evaluated at the planetary mass, radius, and equilibrium temperature. Values of $\Lambda\lesssim20$ suggest extremely high mass-loss rates. We identify 27 planets (out of 167) that are simultaneously consistent with hydrogen-dominated atmospheres and are expected to exhibit extreme mass-loss rates. We further estimate the mass-loss rates ($L_{\rm hy}$) of these planets with tailored atmospheric hydrodynamic models. We compare $L_{\rm hy}$ to the energy-limited (maximum-possible high-energy-driven) mass-loss rates. We confirm that 25 planets (15\% of the sample) exhibit extremely high mass-loss rates ($L_{\rm hy}>0.1\,M_{\oplus}{\rm Gyr}^{-1}$), well in excess of the energy-limited mass-loss rates. This constitutes a contradiction, since the hydrogen envelopes could not be retained given such high mass-loss rates. We hypothesize that these planets are not truly subject to such high mass-loss rates. Instead, either hydrodynamic models overestimate the mass-loss rates, transit-timing-variation measurements underestimate the planetary masses, optical transit observations overestimate the planetary radii (due to high-altitude clouds), or Neptunes have consistently higher albedos than Jovian planets. We conclude that at least one of these established estimation techniques is consistently producing biased values for Neptune planets. Such an important fraction of exoplanets with misinterpreted parameters can significantly bias population studies, such as the observed mass--radius distribution of exoplanets.
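For reference, the restricted Jeans escape parameter used above is commonly written as (standard definition, quoted with generic notation rather than taken verbatim from this paper)
\[
\Lambda \;=\; \frac{G\, M_{\rm p}\, m_{\rm H}}{k_{\rm B}\, T_{\rm eq}\, R_{\rm p}},
\]
the ratio of the gravitational binding energy of a hydrogen atom at the planetary radius $R_{\rm p}$ to its thermal energy at the equilibrium temperature $T_{\rm eq}$; small $\Lambda$ therefore signals a weakly bound upper atmosphere and a high escape rate.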
We define a representation framework for extracting spatial information from radiology reports (Rad-SpRL). We annotated a total of 2000 chest X-ray reports with 4 spatial roles corresponding to the common radiology entities. Our focus is on extracting detailed information of a radiologist's interpretation containing a radiographic finding, its anatomical location, corresponding probable diagnoses, as well as associated hedging terms. For this, we propose a deep learning-based natural language processing (NLP) method involving both word and character-level encodings. Specifically, we utilize a bidirectional long short-term memory (Bi-LSTM) conditional random field (CRF) model for extracting the spatial roles. The model achieved average F1 measures of 90.28 and 94.61 for extracting the Trajector and Landmark roles respectively whereas the performance was moderate for Diagnosis and Hedge roles with average F1 of 71.47 and 73.27 respectively. The corpus will soon be made available upon request.
The young star clusters we observe today are the building blocks of a new generation of stars and planets in our Galaxy and beyond. Despite their fundamental role, we still lack knowledge about the conditions under which star clusters form and the impact of these often harsh environments on the evolution of their stellar and substellar members. We demonstrate the vital role numerical simulations play in uncovering both key issues. Using dynamical models of different star cluster environments, we show the variety of effects stellar interactions potentially have. Moreover, our significantly improved measure of mass segregation reveals that it can occur rapidly even for star clusters without substructure. This finding is a critical step towards resolving the controversial debate on mass segregation in young star clusters and provides strong constraints on their initial conditions.
This paper is devoted to the study of infinitesimal limit cycles that can bifurcate from zero-Hopf equilibria of differential systems based on the averaging method. We develop an efficient symbolic program using Maple for computing the averaged functions of any order for continuous differential systems in arbitrary dimension. The program allows us to systematically analyze zero-Hopf bifurcations of polynomial differential systems using symbolic computation methods. We show that for the first-order averaging, $\ell\in\{0,1,\ldots,2^{n-3}\}$ limit cycles can bifurcate from the zero-Hopf equilibrium for the general class of perturbed differential systems and up to the second-order averaging, the maximum number of limit cycles can be determined by computing the mixed volume of a polynomial system obtained from the averaged functions. A number of examples are presented to demonstrate the effectiveness of the proposed algorithmic approach.
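As background for the averaging method used above (the classical first-order statement, not the higher-order formulae computed by the program): for a $T$-periodic system $\dot{x} = \varepsilon F_1(t,x) + \varepsilon^2 R(t,x,\varepsilon)$, the first-order averaged function is
\[
f_1(z) \;=\; \int_0^T F_1(t,z)\, dt,
\]
and every simple zero $z^*$ of $f_1$ gives rise, for $|\varepsilon|$ sufficiently small, to a periodic solution (limit cycle) of the system that tends to $z^*$ as $\varepsilon \to 0$. Counting such zeros, for instance via the mixed volume of the associated polynomial system, bounds the number of bifurcating limit cycles.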
We present a detailed study of a continuum random phase approximation approach to quasielastic electron-nucleus and neutrino-nucleus scattering. The formalism is validated by confronting ($e,e'$) cross-section predictions with electron scattering data for the nuclear targets $^{12}$C, $^{16}$O, and $^{40}$Ca, in the kinematic region where quasielastic scattering is expected to dominate. We examine the longitudinal and transverse contributions to $^{12}$C($e,e'$) and compare them with the available data. Further, we study the $^{12}$C($\nu_{\mu},\mu^{-}$) cross sections relevant for accelerator-based neutrino-oscillation experiments. We pay special attention to low-energy excitations which can account for non-negligible contributions in measurements, and require a beyond-Fermi-gas formalism.
Mining data streams is one of the main topics in machine learning due to its applications in many knowledge areas. One of the major challenges in mining data streams is concept drift, which requires the learner to discard the current concept and adapt to a new one. Ensemble-based drift detection algorithms have been applied successfully to the classification task, but they usually maintain a fixed-size ensemble of learners, running the risk of needlessly spending processing time and memory. In this paper we present improvements to the Scale-free Network Regressor (SFNR), a dynamic ensemble-based method for regression that employs social network theory. In order to detect concept drifts, SFNR uses the ADaptive WINdowing (ADWIN) algorithm. Results show improvements in accuracy, especially in concept drift situations, and better performance compared to other state-of-the-art algorithms on both real and synthetic data.
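For illustration only, the sketch below mimics the idea behind ADWIN: compare the means of two sub-windows against a Hoeffding-style bound and discard stale data when they differ significantly. It is a simplified toy, not the actual ADWIN implementation (which uses an exponential bucket structure), and the threshold constants are illustrative.

```python
# Simplified, self-contained illustration of ADWIN-style drift detection.
import math
import random
from collections import deque

class SimpleAdwin:
    def __init__(self, delta=0.002):
        self.delta = delta
        self.window = deque()

    def update(self, x):
        """Add a value; return True if a drift was detected (older data dropped)."""
        self.window.append(x)
        values = list(self.window)
        n = len(values)
        for cut in range(5, n - 5):            # require a few points on each side
            w0, w1 = values[:cut], values[cut:]
            n0, n1 = len(w0), len(w1)
            m = 1.0 / (1.0 / n0 + 1.0 / n1)
            eps = math.sqrt((1.0 / (2 * m)) * math.log(4.0 / self.delta))
            if abs(sum(w0) / n0 - sum(w1) / n1) > eps:
                for _ in range(cut):           # drop the stale sub-window
                    self.window.popleft()
                return True
        return False

# Usage: a stream whose mean jumps from 0.2 to 0.8 halfway through
det = SimpleAdwin()
stream = [random.gauss(0.2, 0.05) for _ in range(200)] + [random.gauss(0.8, 0.05) for _ in range(200)]
drifts = [i for i, x in enumerate(stream) if det.update(x)]
print("drift detected near indices:", drifts[:3])
```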
The objective of the paper is to assess the specific spectral scaling properties of magnetic reconnection associated fluctuations/turbulence at the Earthward and tailward outflow regions observed simultaneously by the Cluster and Double Star (TC-2) spacecraft on September 26, 2005. Systematic comparisons of spectral characteristics, including variance anisotropy and scale-dependent spectral anisotropy features in wave vector space, were possible due to the well-documented reconnection events occurring between the positions of Cluster (X = -14--16 $R_e$) and TC-2 (X = -6.6 $R_e$). Another factor of key importance is that the magnetometers on the two spacecraft are similar. The comparisons provide further evidence for the asymmetry of physical processes in the Earthward and tailward reconnection outflow regions. Variance anisotropy and spectral anisotropy angles estimated from the multi-scale magnetic fluctuations in the tailward outflow region show features that are characteristic of magnetohydrodynamic cascading turbulence in the presence of a local mean magnetic field. The multi-scale magnetic fluctuations in the Earthward outflow region exhibit more power and lack variance and scale-dependent anisotropies, but also have larger anisotropy angles. In this region the magnetic field is more dipolar, and the main processes driving turbulence are flow braking/mixing, perhaps combined with turbulence ageing and non-cascade-related multi-scale energy sources.
We consider the Cauchy problem for a system of nonlinear Schr\"odinger equations with derivative nonlinearity. This system was introduced by Colin-Colin (2004) as a model of laser-plasma interactions. We study the existence of ground-state solutions and the global well-posedness of this system by using variational methods. We also consider the stability of traveling waves for this system. These problems were proposed by Colin-Colin as open problems. We give a subset of the set of ground states which satisfies the stability condition. In particular, we prove the stability of the set of traveling waves with small speed in one dimension.
We describe a canonical reduction of AKSZ-BV theories to the cohomology of the source manifold. We obtain a finite-dimensional BV theory that describes the contribution of the zero modes to the full QFT. Integration can be defined and correlators can be computed. As an illustration of the general construction we consider the two-dimensional Poisson sigma model and the three-dimensional Courant sigma model. When the source manifold is compact, the reduced theory is a generalization of the AKSZ construction in which the cohomology ring is taken as the source. We also discuss possible generalizations of the AKSZ theory.
We describe quantum hydrodynamic equations with the Coulomb exchange interaction for three- and two-dimensional plasmas. Explicit forms of the force densities are derived. We present non-linear Schrodinger equations (NLSEs) for Coulomb quantum plasmas with the exchange interaction. We show the contribution of the exchange interaction to the dispersion of the Langmuir and ion-acoustic waves. We consider the influence of the spin polarization ratio on the strength of the Coulomb exchange interaction. This is important since the exchange interactions between particles with the same spin direction and between particles with opposite spin directions are different. At small particle concentrations $n_{0}\ll 10^{25}\,\mathrm{cm}^{-3}$ and small polarization, the exchange interaction gives a small decrease of the Fermi pressure. With increasing polarization, the role of the exchange interaction becomes more important, so that it can overcome the Fermi pressure. The exchange interaction also decreases the contribution of the Langmuir frequency. Ion-acoustic waves do not exist in the limit of large polarization, since the exchange interaction changes the sign of the pressure. At large particle concentrations $n_{0}\gg 10^{25}\,\mathrm{cm}^{-3}$, the Fermi pressure prevails over the exchange interaction for all polarizations. We obtain a similar picture for two-dimensional quantum plasmas.
We study the S-matrix of planar $\mathcal{N}=4$ supersymmetric Yang-Mills theory when external momenta are restricted to a two-dimensional subspace of Minkowski space. We find significant simplifications and new, interesting structures for tree and loop amplitudes in two-dimensional kinematics; in particular, the higher-point amplitudes we consider can be obtained from the lowest-point ones by a collinear uplifting. Based on a compact formula for one-loop N${}^2$MHV amplitudes, we use a previously proposed equation to compute, for the first time, the complete two-loop NMHV and three-loop MHV octagons, which we conjecture to uplift to the full $n$-point amplitudes up to simpler logarithmic or dilogarithmic terms.
In recent years, many types of elliptical Radon transforms that integrate functions over various sets of ellipses/ellipsoids have been considered, relating to studies in bistatic synthetic aperture radar, ultrasound reflection tomography, radio tomography, and migration imaging. In this article, we consider the transform that integrates a given function in $\mathbf R^n$ over a set of ellipses (when $n=2$) or ellipsoids of rotation (when $n\geq 3$) with foci restricted to a hyperplane. We show a relation between this elliptical Radon transform and the regular Radon transform, and provide the inversion formula for the elliptical Radon transform using this relation. Numerical simulations demonstrating the suggested algorithms in two dimensions are also presented.
The deviations of non-linear perturbations of black holes from the linear case are important in the context of ringdown signals with large signal-to-noise ratio. To facilitate a comparison between the two, we derive several results of linear perturbation theory in coordinates which may be adopted in numerical work. Specifically, our results are derived in Kerr-Schild coordinates adjusted by a general height function. In the first part of the paper we address the questions: for an initial configuration of a massless scalar field, what is the amplitude of the excited quasinormal mode (QNM) for any observer outside the event horizon, and furthermore what is the resulting tail contribution? This is done by constructing the full Green's function for the problem with exact solutions of the confluent Heun equation satisfying appropriate boundary conditions. In the second part of the paper, we detail new developments to our pseudospectral numerical relativity code bamps to handle scalar fields. In the linear regime we employ precisely the Kerr-Schild coordinates treated by our previous analysis. In particular, we evolve pure QNM type initial data along with several other types of initial data and report on the presence of overtone modes in the signal.
For a group $G$, let $U$ be the group of units of the integral group ring $\mathbb{Z}G$. The group $G$ is said to have the normalizer property if $\text{N}_U(G)=\text{Z}(U)G$. It is shown that Blackburn groups have the normalizer property. These are the groups that have non-normal finite subgroups and in which the intersection of all such subgroups is nontrivial. Groups $G$ for which class-preserving automorphisms are inner automorphisms, $\text{Out}_c(G)=1$, have the normalizer property. Recently, Herman and Li have shown that $\text{Out}_c(G)=1$ for a finite Blackburn group $G$. We show that $\text{Out}_c(G)=1$ for the members $G$ of a few classes of metabelian groups, from which the Herman--Li result follows. Together with recent work of Hertweck, Iwaki, Jespers and Juriaans, our main result implies that, for an arbitrary group $G$, the group of hypercentral units of $U$ is contained in $\text{Z}(U)G$.
In a recent article [Phys. Rev. Applied 6, 014017 (2016)], Chyba and Hand propose a new scheme to generate electric power continuously at the expense of Earth's rotational kinetic energy, by using an appropriately shaped cylindrical shell of a well-chosen conducting ferrite, rigidly attached to the Earth. No experimental confirmation is reported for the new prediction. In the present Refutation, I first use today's standard electromagnetism and essentially the same model as Chyba and Hand to show in a very simple way that no device of the proposed type can produce continuous electric power, whatever its configuration or size, in agreement with widespread expectation. Next, I show that the prediction of non-zero continuous power by Chyba and Hand results from a confusion of frames of reference at a critical step of their derivation. Once the confusion is clarified, the prediction becomes exactly zero and the article under discussion is rendered pointless. Finally, I comment on the persistent invocation by Chyba and Hand of the misleading legacy notion that quasi-static magnetic fields have an intrinsic velocity, and on other questionable concepts.
This paper presents 28 GHz and 73 GHz empirically-derived large-scale and small-scale channel model parameters that characterize the average temporal and angular properties of multipaths. Omnidirectional azimuth scans at both the transmitter and receiver used high-gain directional antennas, from which global 3GPP modeling parameters for the mean azimuth and zenith spreads of arrival were found to be 22 degrees and 6.2 degrees at 28 GHz, and 37.1 degrees and 3.8 degrees at 73 GHz, respectively, in non-line of sight (NLOS). Small-scale spatial measurements at 28 GHz reveal a mean cross-polar ratio for individual multipath components of 29.7 dB and 16.7 dB in line of sight and NLOS, respectively. Small-scale parameters extracted using the KPowerMeans algorithm yielded on average 5.3 and 4.6 clusters at 28 GHz and 73 GHz, respectively, in NLOS. The time cluster - spatial lobe (TCSL) modeling approach uses an alternative physically-based binning procedure and recreates 3GPP model parameters to generate channel impulse responses, as well as new parameters such as the RMS lobe angular spreads, useful in quantifying millimeter-wave directionality. The TCSL algorithm faithfully reproduces first- and second-order statistics of measured millimeter-wave channels.
In this paper, we study the sensitivity of the spectral clustering based community detection algorithm subject to an Erdos-Renyi-type random noise model. We prove phase transitions in community detectability as a function of the external edge connection probability and the noisy edge presence probability under a general network model in which two arbitrarily connected communities are interconnected by random external edges. Specifically, the community detection performance transitions from almost perfect detectability to low detectability as the inter-community edge connection probability exceeds a critical value. We derive upper and lower bounds on the critical value and show that the bounds are identical when the two communities have the same size. The phase transition results are validated using network simulations. Using the derived expressions for the phase transition threshold, we propose a method for estimating this threshold from observed data.
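As an illustrative sketch of the setting (a toy example, not the paper's code), spectral bisection via the Fiedler vector recovers two communities connected by random external "noise" edges; the community sizes and edge probabilities below are hypothetical.

```python
# Toy spectral bisection of a two-community graph with random inter-community edges.
import numpy as np

rng = np.random.default_rng(0)
n = 100                                   # nodes per community
p_in, p_out = 0.10, 0.02                  # intra- and inter-community edge probabilities

A = np.zeros((2 * n, 2 * n))
for (a, b) in [(0, n), (n, 2 * n)]:       # edges inside each community
    A[a:b, a:b] = rng.random((n, n)) < p_in
A[:n, n:] = rng.random((n, n)) < p_out    # noisy external edges between the communities
A = np.triu(A, 1)
A = A + A.T                               # undirected, no self-loops

L = np.diag(A.sum(axis=1)) - A            # unnormalized graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                   # eigenvector of the second-smallest eigenvalue
labels = (fiedler > 0).astype(int)

truth = np.repeat([0, 1], n)
acc = max((labels == truth).mean(), (labels != truth).mean())  # accuracy up to label swap
print(f"community recovery accuracy: {acc:.2f}")
```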
A theory of self-excitation in the metal-insulator-metal structure doped with metal nanowires is developed for the case where the power is provided by an external source of radio waves. Both the transient stage of self-excitation and the steady-state regime of self-oscillation are analyzed in a fully analytical form. The numerical estimates demonstrate that this effect can be used for diverse practical purposes, in particular, for radio frequency power harvesting. These findings extend the approach developed in nano-optics to the field of electrical engineering.
Here we present a variable-range hopping model to describe chirality-induced spin selectivity along the DNA double helix. In this model, DNA is considered as a one-dimensional disordered system in which electrons are transported by chiral phonon-assisted hopping between localized states. Owing to the coupling between the electron spin and the vorticity of chiral phonons, an electric toroidal monopole appears in the charge-to-spin conductances as a manifestation of true chirality. Our model quantitatively explains the temperature dependence of the spin polarization observed in experiments.
Global and local regularities of functions are analyzed in anisotropic function spaces, under a common framework, that of hyperbolic wavelet bases. Local and directional regularity features are characterized by means of global quantities constructed from the coefficients of hyperbolic wavelet decompositions. A multifractal analysis is introduced that jointly accounts for scale invariance and anisotropy. Its properties are studied in depth.
Clustering algorithms start with a fixed divergence, which captures the possibly asymmetric distance between a sample and a centroid. In the mixture-model setting, the sample distribution plays the same role. When all attributes have the same topology and dispersion, the data are said to be homogeneous. If the prior knowledge of the distribution is inaccurate or the set of plausible distributions is large, an adaptive approach is essential. The motivation is even more compelling for heterogeneous data, where the dispersion or the topology differs among attributes. We propose an adaptive approach to clustering using classes of parametrized Bregman divergences. We first show that the density of a steep exponential dispersion model (EDM) can be represented with a Bregman divergence. We then propose AdaCluster, an expectation-maximization (EM) algorithm to cluster heterogeneous data using classes of steep EDMs. We compare AdaCluster with EM for a Gaussian mixture model on synthetic data and nine UCI data sets. We also propose an adaptive hard clustering algorithm based on the Generalized Method of Moments. We compare the hard clustering algorithm with k-means on the UCI data sets. We empirically verify that adaptively learning the underlying topology yields better clustering of heterogeneous data.
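A minimal sketch of hard Bregman clustering (a toy implementation, not AdaCluster): points are assigned to the centroid with the smallest Bregman divergence, and centroids are updated as arithmetic means, which is the optimal update for any Bregman divergence. The generalized I-divergence and the count data below are illustrative.

```python
# Toy hard Bregman clustering (k-means-like loop with a non-Euclidean Bregman divergence).
import numpy as np

def gen_i_divergence(x, c, eps=1e-12):
    """Generalized I-divergence, a Bregman divergence for non-negative data."""
    x = np.maximum(x, eps)
    c = np.maximum(c, eps)
    return np.sum(x * np.log(x / c) - x + c, axis=-1)

def bregman_kmeans(X, k, divergence, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = np.stack([divergence(X, c) for c in centroids], axis=1)  # (N, k) divergences
        labels = d.argmin(axis=1)
        new_centroids = []
        for j in range(k):
            members = X[labels == j]
            new_centroids.append(members.mean(axis=0) if len(members) else centroids[j])
        centroids = np.stack(new_centroids)   # the mean is optimal for any Bregman divergence
    return labels, centroids

# Toy heterogeneous-looking data: two Poisson-like count clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.poisson(3.0, (100, 5)), rng.poisson(12.0, (100, 5))]).astype(float)
labels, _ = bregman_kmeans(X, 2, gen_i_divergence)
print(np.bincount(labels))
```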
We compute string field theory Hamiltonian matrix elements and compare them with matrix elements of the dilatation operator in gauge theory. We get precise agreement between the string field theory and gauge theory computations once the correct cubic Hamiltonian matrix elements in string field theory and a particular basis of states in gauge theory are used. We proceed to compute the matrix elements of the dilatation operator to order g_2^2 in this same basis. This calculation makes a prediction for string field theory Hamiltonian matrix elements to order g_2^2, which have not yet been computed. However, our gauge theory results precisely match the results of the recent computation by Pearson et al. of the order g_2^2 Hamiltonian matrix elements of the string bit model.
The fractional quantum Hall effect (FQHE) observed at half filling of the second Landau level is believed to be caused by a pairing of composite fermions captured by the Moore-Read Pfaffian wave function. The generating Hamiltonian for the Moore-Read Pfaffian is a purely three-body model that breaks particle-hole symmetry and lacks other properties, such as dominant two-body repulsive interactions, expected from a physical model of the FQHE. We use exact diagonalization to study the low-energy states of a more physical two-body generator model derived from the three-body model. We find that the two-body model exhibits the essential features expected from the Moore-Read Pfaffian: pairing, non-Abelian anyon excitations, and a neutral fermion mode. The model also satisfies constraints expected of a physical model of the FQHE at half filling because it is short-range, spatially decaying, particle-hole symmetric, and supports a roton mode with a robust spectral gap in the thermodynamic limit. Hence, this two-body model offers a bridge between artificial three-body generator models for paired states and the physical Coulomb interaction, and can be used to further explore properties of non-Abelian physics in the FQHE.
In this paper, we completely settle the existence of large sets of $(3,\lambda)$-GDDs of type $g^u$ and the existence of simple $(3,\lambda)$-GDDs of type $g^u$.
The dynamics of finite-dimensional open quantum systems is studied with the help of the simplest possible form of projection operators, namely those which project onto only one-dimensional subspaces. The simplicity of the action of the projection operators always leads to an analytical solution of the dynamical master equation, even in the non-Markovian case, at any perturbative order. The analytical solution correctly reproduces the short-time dynamics, and can be used to recursively recover the dynamics for an arbitrary time interval with arbitrary precision. The necessary number of relevant degrees of freedom to completely characterise an open quantum system is $(n-1)(n+2)/2$, where $n$ is the dimension of the Hilbert space of the open system. The method is illustrated by two examples: the relaxation of a qubit in a thermal bath and the dynamics of two interacting qubits in a common environment.
A rigorous theory for the determination of the van der Waals interactions in colloidal systems is presented. The method is based on fluctuational electrodynamics and a multiple-scattering method which provides the electromagnetic Green's tensor. In particular, expressions for the Green's tensor are presented for arbitrary, finite collections of colloidal particles, for infinitely periodic or defected crystals, as well as for finite slabs of crystals. The presented formalism allows for {\it ab initio} calculations of the vdW interactions in colloidal systems, since it takes fully into account retardation, many-body, multipolar and near-field effects.
We present Tweet2Vec, a novel method for generating general-purpose vector representations of tweets. The model learns tweet embeddings using a character-level CNN-LSTM encoder-decoder. We trained our model on 3 million randomly selected English-language tweets. The model was evaluated using two methods, tweet semantic similarity and tweet sentiment categorization, outperforming the previous state-of-the-art in both tasks. The evaluations demonstrate the power of the tweet embeddings generated by our model for various tweet categorization tasks. The vector representations generated by our model are generic and can hence be applied to a variety of tasks. Though the model presented in this paper is trained on English-language tweets, the method can be used to learn tweet embeddings for different languages.
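A hypothetical sketch of a character-level CNN-LSTM encoder of the kind described above; the dimensions, alphabet size, and omission of the decoder are simplifications, not the authors' exact model.

```python
# Illustrative character-level CNN-LSTM encoder producing a fixed-size tweet embedding.
import torch
import torch.nn as nn

class CharCNNLSTMEncoder(nn.Module):
    def __init__(self, n_chars=128, emb_dim=16, n_filters=64, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=5, padding=2)
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)

    def forward(self, char_ids):                  # (batch, seq_len) of character ids
        x = self.emb(char_ids).transpose(1, 2)    # (batch, emb_dim, seq_len) for Conv1d
        x = torch.relu(self.conv(x)).transpose(1, 2)
        _, (h, _) = self.lstm(x)
        return h[-1]                              # (batch, hidden) tweet embedding

enc = CharCNNLSTMEncoder()
tweet_batch = torch.randint(0, 128, (8, 140))     # 8 tweets, up to 140 characters
emb = enc(tweet_batch)                            # a decoder would reconstruct the tweet from this
print(emb.shape)                                  # torch.Size([8, 256])
```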
In this paper we study some operators associated to the Rarita-Schwinger operators. They arise from the difference between the Dirac operator and the Rarita-Schwinger operators. These operators are called remaining operators. They are based on the Dirac operator and the projection operators $I-P_k$. The fundamental solutions of these operators are harmonic polynomials, homogeneous of degree $k$. First, we study the remaining operators and their representation theory in Euclidean space. Second, we extend the remaining operators from Euclidean space to the sphere under the Cayley transformation.
ExpressInHost (https://gitlab.com/a.raguin/expressinhost) is a GTK/C++-based, user-friendly graphical interface for tuning the codon sequence of an mRNA for recombinant protein expression in a host microorganism. Heterologous gene expression is widely implemented in biotechnology companies and academic research laboratories. However, expression of recombinant proteins can be challenging. On the one hand, maximising translation speed is important, especially in scalable production processes relevant to biotechnology companies; on the other hand, solubility problems often arise as a consequence, since translation "pauses" might be key to allowing the nascent polypeptide chain to fold appropriately. To address this challenge, we have developed a software tool that offers three distinct modes for tuning codon sequences using the redundancy of the genetic code. The tuning strategies implemented take into account the specific tRNA resources of the host and those of the native organism. They balance rapid translation against mimicking the native speed profile, which might be important to allow proper protein folding, thereby avoiding protein solubility problems.
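A toy illustration (not ExpressInHost itself) of codon tuning via the redundancy of the genetic code: each residue is re-coded either with the host's most-used synonymous codon or by sampling codons in proportion to host usage. The partial codon table and usage weights are hypothetical placeholders.

```python
# Toy codon re-coding using synonymous codons weighted by hypothetical host usage.
import random

SYNONYMOUS = {            # partial genetic-code table: amino acid -> synonymous codons
    "M": ["ATG"],
    "K": ["AAA", "AAG"],
    "L": ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"],
    "F": ["TTT", "TTC"],
}
HOST_USAGE = {            # hypothetical relative usage (proxy for tRNA availability) in the host
    "ATG": 1.0, "AAA": 0.74, "AAG": 0.26,
    "TTA": 0.13, "TTG": 0.13, "CTT": 0.10, "CTC": 0.10, "CTA": 0.04, "CTG": 0.50,
    "TTT": 0.57, "TTC": 0.43,
}

def recode(protein, mode="fastest", rng=random.Random(0)):
    """'fastest' picks the most-used codon in the host; 'sampled' samples codons
    in proportion to host usage (a crude speed/diversity trade-off)."""
    out = []
    for aa in protein:
        codons = SYNONYMOUS[aa]
        if mode == "fastest":
            out.append(max(codons, key=HOST_USAGE.get))
        else:
            out.append(rng.choices(codons, weights=[HOST_USAGE[c] for c in codons], k=1)[0])
    return "".join(out)

print(recode("MKLF"))                    # e.g. ATGAAACTGTTT
print(recode("MKLF", mode="sampled"))
```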
We present our system, CruzAffect, for the CL-Aff Shared Task 2019. CruzAffect consists of several types of robust and efficient models for affective classification tasks. We utilize both traditional classifiers, such as XGBoosted Forest, as well as a deep learning convolutional neural network (CNN) classifier. We explore rich feature sets such as syntactic features, emotional features, and profile features, and utilize several sentiment lexicons, to discover essential indicators of the social involvement and control that a subject might exercise in their happy moments, as described in textual snippets from the HappyDB database. The data come with a labeled set (10K) and a larger unlabeled set (70K). We therefore use supervised methods on the 10K dataset and a bootstrapped semi-supervised approach for the 70K set. We evaluate these models for binary classification of agency and social labels (Task 1), as well as multi-class prediction for concepts labels (Task 2). We obtain promising results on the held-out data, suggesting that the proposed feature sets effectively represent the data for affective classification tasks. We also build concepts models that discover general themes recurring in happy moments. Our results indicate that generic characteristics are shared between the classes of agency, social and concepts, suggesting it should be possible to build general models for affective classification tasks.
WR 140 is a canonical massive "colliding wind" binary system in which periodically varying X-ray emission is produced by the collision between the winds of the WC7 and O4-5 star components in the space between the two stars. We have obtained X-ray observations using the RXTE satellite observatory through almost one complete orbital cycle, including two consecutive periastron passages. We discuss the results of this observing campaign and the implications of the X-ray data for our understanding of the orbital dynamics and the stellar mass loss.
The effective on-site Coulomb interaction (Hubbard $U$) between localized \textit{d} electrons in 3\textit{d}, 4\textit{d}, and 5\textit{d} transition metals is calculated employing a new parameter-free realization of the constrained random-phase approximation using Wannier functions within the full-potential linearized augmented-plane-wave method. The $U$ values lie between 1.5 and 5.7 eV and depend on the crystal structure, spin polarization, \textit{d} electron number, and \textit{d} orbital filling. On the basis of the calculated $U$ parameters, we discuss the strength of the electronic correlations and the instability of the paramagnetic state towards the ferromagnetic one for 3\textit{d} metals.
We study the entropy of entanglement of the ground state in a wide family of one-dimensional quantum spin chains whose interaction is of finite range and translation invariant. Such systems can be thought of as generalizations of the XY model. The chain is divided in two parts: one containing the first consecutive L spins, the second the remaining ones. In this setting the entropy of entanglement is the von Neumann entropy of either part. At the core of our computation is the explicit evaluation of the leading order term, as L tends to infinity, of the determinant of a block-Toeplitz matrix whose symbol belongs to a general class of 2 x 2 matrix functions. The asymptotics of such a determinant is computed in terms of multi-dimensional theta-functions associated to a hyperelliptic curve of genus g >= 1, which enter into the solution of a Riemann-Hilbert problem. Phase transitions for these systems are characterized by the branch points of the hyperelliptic curve approaching the unit circle. In these circumstances the entropy diverges logarithmically. We also recover, as particular cases, the formulae for the entropy discovered by Jin and Korepin (2004) for the XX model and Its, Jin and Korepin (2005, 2006) for the XY model.
Coexistence of ferromagnetism, piezoelectricity and valley physics in two-dimensional (2D) materials is crucial to advance multifunctional electronic technologies. Here, Janus ScXY (X$\neq$Y=Cl, Br and I) monolayers are predicted to be in-plane piezoelectric ferromagnetic (FM) semiconductors with dynamical, mechanical and thermal stability. The predicted piezoelectric strain coefficients $d_{11}$ and $d_{31}$ (absolute values) are higher than those of most 2D materials. Moreover, the $d_{31}$ (absolute value) of ScClI reaches up to 1.14 pm/V, which is highly desirable for ultrathin piezoelectric device applications. To obtain spontaneous valley polarization, charge doping is explored to tune the magnetization direction of ScXY. With appropriate hole doping, their easy magnetization axis can change from in-plane to out-of-plane, resulting in spontaneous valley polarization. Taking ScBrI with 0.20 holes per f.u. as an example, under the action of an in-plane electric field, the hole carriers of the K valley turn towards one edge of the sample, which produces an anomalous valley Hall effect (AVHE), while the hole carriers of the $\Gamma$ valley move in a straight line. These findings could pave the way for designing piezoelectric and valleytronic devices.
Jacobi-type iterative algorithms for the eigenvalue decomposition, singular value decomposition, and Takagi factorization of complex matrices are presented. They are implemented as compact Fortran 77 subroutines in a freely available library.
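To illustrate the type of rotation-based sweep such routines perform, here is a minimal NumPy sketch of the classical two-sided Jacobi eigenvalue iteration for a real symmetric matrix; the library itself targets complex matrices (EVD, SVD, and Takagi factorization) in Fortran 77, so this is only a simplified analogue.

```python
# Classical Jacobi eigenvalue iteration for a real symmetric matrix (toy version).
import numpy as np

def jacobi_eigh(A, tol=1e-12, max_sweeps=50):
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_sweeps):
        if np.sqrt(np.sum(np.tril(A, -1) ** 2)) < tol:   # off-diagonal norm small enough
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < tol:
                    continue
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J            # rotation chosen to zero out the (p, q) entry
                V = V @ J
    return np.diag(A), V                   # eigenvalues, eigenvectors (columns of V)

M = np.array([[4.0, 1.0, 2.0], [1.0, 3.0, 0.5], [2.0, 0.5, 1.0]])
w, V = jacobi_eigh(M)
print(np.allclose(V @ np.diag(w) @ V.T, M))   # True
```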
Let $A$ be a finite set and $\phi:A^{\mathbb{Z}}\to\mathbb{R}$ be a locally constant potential. For each $\beta>0$ ("inverse temperature"), there is a unique Gibbs measure $\mu_{\beta\phi}$. We prove that, as $\beta\to+\infty$, the family $(\mu_{\beta\phi})_{\beta>0}$ converges (in the weak-$^*$ topology) to a measure that we characterize. It is concentrated on a certain subshift of finite type which is a finite union of transitive subshifts of finite type. The two main tools are an approximation by periodic orbits and the Perron-Frobenius Theorem for matrices \`a la Birkhoff. The crucial idea we bring is a "renormalization" procedure which explains the convergence and provides a recursive algorithm to compute the weights of the ergodic decomposition of the limit.
Many important robotics problems are partially observable in the sense that a single visual or force-feedback measurement is insufficient to reconstruct the state. Standard approaches involve learning a policy over beliefs or observation-action histories. However, both of these have drawbacks: it is expensive to track the belief online, and it is hard to learn policies directly over histories. We propose a method for policy learning under partial observability called the Belief-Grounded Network (BGN), in which an auxiliary belief-reconstruction loss incentivizes a neural network to concisely summarize its input history. Since the resulting policy is a function of the history rather than the belief, it can be executed easily at runtime. We compare BGN against several baselines on classic benchmark tasks as well as three novel robotic touch-sensing tasks. BGN outperforms all other tested methods, and its learned policies work well when transferred onto a physical robot.
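A rough sketch of the idea (architecture, loss weights, and the behavior-cloning-style policy loss are illustrative, not the authors' exact BGN): a recurrent history encoder feeds both a policy head used at runtime and an auxiliary belief-reconstruction head used only during training to ground the history summary.

```python
# Illustrative history-conditioned policy with an auxiliary belief-reconstruction loss.
import torch
import torch.nn as nn

class BeliefGroundedPolicy(nn.Module):
    def __init__(self, obs_act_dim, n_actions, belief_dim, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(obs_act_dim, hidden, batch_first=True)
        self.policy_head = nn.Linear(hidden, n_actions)     # used at runtime (no belief needed)
        self.belief_head = nn.Linear(hidden, belief_dim)    # used only as a training signal

    def forward(self, history):                  # (batch, T, obs_act_dim)
        h, _ = self.encoder(history)
        return self.policy_head(h), self.belief_head(h)

model = BeliefGroundedPolicy(obs_act_dim=12, n_actions=4, belief_dim=50)
history = torch.randn(8, 20, 12)
target_belief = torch.softmax(torch.randn(8, 20, 50), dim=-1)   # belief from an offline filter (placeholder)
action_taken = torch.randint(0, 4, (8, 20))

logits, belief_pred = model(history)
policy_loss = nn.CrossEntropyLoss()(logits.reshape(-1, 4), action_taken.reshape(-1))
aux_loss = nn.functional.kl_div(torch.log_softmax(belief_pred, dim=-1), target_belief, reduction="batchmean")
loss = policy_loss + 0.5 * aux_loss            # the auxiliary term grounds the history summary
```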
Numerical 1D-3V solutions of the Wong-Yang-Mills equations with anisotropic particle momentum distributions are presented. They confirm the existence of plasma instabilities for weak initial fields and their saturation at a level where the particle motion is affected, similar to Abelian plasmas. The isotropization of the particle momenta by strong random fields is shown explicitly, as well as their nearly exponential distribution up to a typical hard scale, which arises from scattering off field fluctuations. By variation of the lattice spacing we show that the effects described here are independent of the UV field modes near the end of the Brillouin zone.
We explored how steric effects influence the rate of hydrogen atom transfer (HAT) reactions between oxyradicals and alkanes. Quantum chemical computations of transition states show that activation barriers and reaction enthalpies are both influenced by bulky substituents on the radical, but less so by substituents on the alkane. The activation barriers correlate with reaction enthalpies via the Evans-Polanyi relationship, even when steric effects are important. Dispersion effects can additionally stabilize transition states in some cases.
In this paper, we discuss the mechanics and planning algorithms for sliding an object on a horizontal planar surface via frictional patch contact made with its top surface. We propose an asymmetric dual limit surface model to determine slip boundary conditions for both the top and bottom contacts. With this model, we obtain a range of twists that can keep the object in sticking contact with the robot end-effector while it slips on the supporting plane. Based on these constraints, we derive a planning algorithm to slide objects with only top contact to arbitrary goal poses without slippage between the end-effector and the object. We validate the proposed model empirically and demonstrate its predictive accuracy on a variety of object geometries and motions. We also evaluate the planning algorithm over a variety of objects and goals, demonstrating an orientation error improvement of 90\% compared to naive linear path planners.