The ground state and magnetization process of an exactly solved spin-$1/2$ Ising-Heisenberg orthogonal-dimer chain with two different gyromagnetic factors of the Ising and Heisenberg spins are investigated in detail. It is shown that the investigated quantum spin chain exhibits up to seven possible ground states depending on the mutual interplay of the magnetic field and the intra- and inter-dimer coupling constants. More specifically, the frustrated and modulated quantum antiferromagnetic phases are responsible for a zero magnetization plateau in zero-temperature magnetization curves. The intermediate 1/11- and 5/11-plateaus emerge due to the frustrated and modulated quantum ferrimagnetic phases, while the intermediate 9/11- and 10/11-plateaus can be attributed to the quantum and classical ferrimagnetic phases. It is conjectured that the magnetization plateau experimentally observed in a high-field magnetization curve of the 3$d$-4$f$ heterobimetallic coordination polymer [\{Dy(hfac)$_2$(CH$_3$OH)\}$_2$\{Cu(dmg)(Hdmg)\}$_2$]$_n$ (H$_2$dmg $=$ dimethylglyoxime; Hhfac $=$ 1,1,1,5,5,5-hexafluoropentane-2,4-dione) could be attributed to the classical and quantum ferrimagnetic phases.
Traditional machine learning methods applied to materials science have often predicted invariant, scalar properties of material systems to great effect. Newer, coordinate-equivariant models promise to provide coordinate-system-dependent outputs in a well-defined manner, but recent applications often neglect the direct prediction of directional (i.e., coordinate-system-dependent) quantities and are still used to predict only invariant quantities. The component-wise prediction of tensorial properties is achieved by decomposing tensors into harmonic subspaces via a \textit{tensor spherical harmonic decomposition}, by which we may also associate arbitrary tensors with the irreducible representations of the rotation group. This essentially allows us to read off tensors component-wise from the output representations of these equivariant models. In this work, we present results for the prediction of various material property tensors directly from crystalline structures. Namely, given a material's crystalline structure, we predict tensor components of dielectric, piezoelectric, and elasticity tensors directly from the output of an $SE(3)$-equivariant model.
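To make the decomposition concrete, the sketch below splits a rank-2 Cartesian tensor (e.g., a dielectric tensor) into its $l=0$, $l=1$ and $l=2$ harmonic subspaces with plain numpy; the function name and example values are illustrative stand-ins for the full tensor spherical harmonic machinery, not the paper's code.

```python
import numpy as np

def harmonic_decompose_rank2(T):
    """Split a 3x3 Cartesian tensor into its l = 0, 1, 2 parts:
    l=0: isotropic (trace) part, 1 component;
    l=1: antisymmetric part, 3 components;
    l=2: symmetric traceless part, 5 components."""
    iso = np.trace(T) / 3.0 * np.eye(3)       # l = 0
    antisym = 0.5 * (T - T.T)                 # l = 1
    sym_traceless = 0.5 * (T + T.T) - iso     # l = 2
    return iso, antisym, sym_traceless

# The three parts carry 1 + 3 + 5 = 9 components and sum back to T.
T = np.array([[3.0, 0.2, 0.0],
              [0.1, 2.5, 0.3],
              [0.0, 0.4, 2.8]])
l0, l1, l2 = harmonic_decompose_rank2(T)
assert np.allclose(l0 + l1 + l2, T)
```

An equivariant model whose output irreps contain one $l=0$, one $l=1$ and one $l=2$ channel can thus predict every component of such a tensor directly.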
We propose a probabilistic framework for interpreting and developing hard thresholding sparse signal reconstruction methods and present several new algorithms based on this framework. The measurements follow an underdetermined linear model, where the regression-coefficient vector is the sum of an unknown deterministic sparse signal component and a zero-mean white Gaussian component with an unknown variance. We first derive an expectation-conditional maximization either (ECME) iteration that guarantees convergence to a local maximum of the likelihood function of the unknown parameters for a given signal sparsity level. To analyze the reconstruction accuracy, we introduce the minimum sparse subspace quotient (SSQ), a more flexible measure of the sampling operator than the well-established restricted isometry property (RIP). We prove that, if the minimum SSQ is sufficiently large, ECME achieves perfect or near-optimal recovery of sparse or approximately sparse signals, respectively. We also propose a double overrelaxation (DORE) thresholding scheme for accelerating the ECME iteration. If the signal sparsity level is unknown, we introduce an unconstrained sparsity selection (USS) criterion for its selection and show that, under certain conditions, applying this criterion is equivalent to finding the sparsest solution of the underlying underdetermined linear system. Finally, we present our automatic double overrelaxation (ADORE) thresholding method that utilizes the USS criterion to select the signal sparsity level. We apply the proposed schemes to reconstruct sparse and approximately sparse signals from tomographic projections and compressive samples.
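For orientation, the following minimal iterative-hard-thresholding loop shows the kind of thresholding iteration that ECME and DORE refine; it is a generic textbook scheme under assumed names (`hard_threshold`, `iht`), not the authors' algorithm.

```python
import numpy as np

def hard_threshold(x, r):
    """Keep the r largest-magnitude entries of x; zero out the rest."""
    out = x.copy()
    out[np.argsort(np.abs(x))[:-r]] = 0.0
    return out

def iht(y, H, r, n_iter=200, step=1.0):
    """Generic iterative hard thresholding for y ~ H s with r-sparse s:
    a gradient step on the residual followed by projection onto the
    set of r-sparse vectors (step should satisfy step <= 1/||H||_2^2)."""
    s = np.zeros(H.shape[1])
    for _ in range(n_iter):
        s = hard_threshold(s + step * H.T @ (y - H @ s), r)
    return s
```

The overrelaxation steps in DORE serve to accelerate this kind of fixed-point iteration.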
We study mechanisms for wavenumber selection in a minimal model for run-and-tumble dynamics. We show that nonlinearity in tumbling rates induces the existence of a plethora of traveling- and standing-wave patterns, as well as a subtle selection mechanism for the wavenumbers of spatio-temporally periodic waves. We comment on possible implications for rippling patterns observed in colonies of myxobacteria.
We introduce the "partial-$p$" operation in a massive Euclidean $\lambda\phi^{4}$ scalar field theory describing anisotropic Lifshitz critical behavior. We then develop a minimal-subtraction renormalization scheme \`a la Bogoliubov-Parasyuk-Hepp-Zimmermann. As an application, we compute critical exponents diagrammatically using the orthogonal approximation at least up to two-loop order and show their equivalence with other renormalization techniques. We discuss possible applications of the method in other field-theoretic contexts.
Subgraph-enhanced graph neural networks (SGNNs) can increase the expressive power of the standard message-passing framework. This model family represents each graph as a collection of subgraphs, generally extracted by random sampling or with hand-crafted heuristics. Our key observation is that by selecting "meaningful" subgraphs, besides improving the expressivity of a GNN, it is also possible to obtain interpretable results. For this purpose, we introduce a novel framework that jointly predicts the class of the graph and a set of explanatory sparse subgraphs, which can be analyzed to understand the decision process of the classifier. We compare the performance of our framework against standard subgraph extraction policies, like random node/edge deletion strategies. The subgraphs produced by our framework allow us to achieve comparable performance in terms of accuracy, with the additional benefit of providing explanations.
Reconfigurable intelligent surfaces (RISs) have attracted wide interest from industry and academia since they can shape the wireless environment into a desirable form at low cost. In practice, RISs have three types of implementations: 1) reflective, where signals can be reflected to the users on the same side as the base station (BS), 2) transmissive, where signals can penetrate the RIS to serve the users on the opposite side of the BS, and 3) hybrid, where the RISs have a dual function of reflection and transmission. However, existing works focus on reflective-type RISs, and the other two types are not well investigated. In this letter, a downlink multi-user RIS-assisted communication network is considered, where the RIS can be of any of these types. We derive the system sum-rate and discuss which type yields the best performance under a specific user distribution. Numerical results verify our analysis.
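For reference, a generic form of the sum-rate for such a network, assuming linear precoding and a cascaded BS-RIS-user channel (the symbols below are our notation, not necessarily the letter's), is

\[
R_{\mathrm{sum}} = \sum_{k=1}^{K} \log_2\left(1 + \mathrm{SINR}_k\right),
\qquad
\mathrm{SINR}_k = \frac{\left|\left(\mathbf{h}_{r,k}^{H}\boldsymbol{\Theta}\mathbf{G} + \mathbf{h}_{d,k}^{H}\right)\mathbf{w}_k\right|^2}{\sum_{j\neq k}\left|\left(\mathbf{h}_{r,k}^{H}\boldsymbol{\Theta}\mathbf{G} + \mathbf{h}_{d,k}^{H}\right)\mathbf{w}_j\right|^2 + \sigma^2},
\]

where $\mathbf{G}$ is the BS-RIS channel, $\boldsymbol{\Theta}$ the diagonal matrix of RIS reflection/transmission coefficients, $\mathbf{h}_{r,k}$ and $\mathbf{h}_{d,k}$ the RIS-user and direct channels, $\mathbf{w}_k$ the precoder of user $k$, and $\sigma^2$ the noise power; the three RIS types differ in the constraints placed on $\boldsymbol{\Theta}$.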
Thermal atmospheric tides have a strong impact on the rotation of terrestrial planets. They can lock these planets into an asynchronous rotation state of equilibrium. We aim at characterizing the dependence of the tidal torque resulting from the semidiurnal thermal tide on the tidal frequency, the planet orbital radius, and the atmospheric surface pressure. The tidal torque is computed from full 3D simulations of the atmospheric climate and mean flows using a generic version of the LMDZ general circulation model (GCM) in the case of a nitrogen-dominated atmosphere. Numerical results are discussed with the help of an updated linear analytical framework. Power scaling laws governing the evolution of the torque with the planet orbital radius and surface pressure are derived. The tidal torque exhibits i) a thermal peak in the vicinity of synchronization, ii) a resonant peak associated with the excitation of the Lamb mode in the high frequency range, and iii) well-defined frequency slopes outside these resonances. These features are well explained by our linear theory. Whatever the star-planet distance and surface pressure, the torque frequency spectrum -- when rescaled with the relevant power laws -- always presents the same behaviour. This allows us to provide a single and easily usable empirical formula describing the atmospheric tidal torque over the whole parameter space. With such a formula, the effect of the atmospheric tidal torque can be implemented in evolutionary models of the rotational dynamics of a planet in a computationally efficient and yet relatively accurate way.
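As a point of reference, the thermal peak near synchronization is often captured in the literature by a Maxwell-like functional form (our notation; the paper's empirical formula may differ in detail and additionally covers the Lamb resonance),

\[
\mathcal{T}(\omega) \approx \mathcal{T}_0\,\frac{2\,\omega/\omega_0}{1 + \left(\omega/\omega_0\right)^2},
\]

where $\omega$ is the tidal frequency, $\omega_0$ a characteristic thermal frequency, and $\mathcal{T}_0$ the peak torque.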
An expression is derived for the strain energy of a polymer chain under an arbitrary three-dimensional deformation with finite strains. For a Gaussian chain, this expression is reduced to the conventional Mooney--Rivlin constitutive law, while for non-Gaussian chains it implies novel constitutive relations. Based on the three-chain approximation, explicit formulas are developed for the strain energy of a chain modeled as a self-avoiding random walk. In the case of self-avoiding chains with stretched-exponential distribution function of end-to-end vectors, the strain energy density of a network is described by the Ogden law with only two material constants. For the des Cloizeaux distribution function, the constitutive equation involves three adjustable parameters. The governing equations are verified by fitting observations on uniaxial tension, uniaxial compression and biaxial tension of elastomers. Good agreement is demonstrated between the experimental data and the results of numerical analysis. An analytical formula is derived for the ratio of the Young's modulus of a self-avoiding chain to that of a Gaussian chain. It is found that the elastic modulus per chain in the Ogden network exceeds that in a Gaussian network by a factor of three, whereas the elastic modulus of a chain with the generalized stretched exponential distribution function equals about half of the modulus of a Gaussian chain.
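For concreteness, the one-term Ogden strain energy with the two material constants $\mu$ and $\alpha$ mentioned above reads

\[
W = \frac{\mu}{\alpha}\left(\lambda_1^{\alpha} + \lambda_2^{\alpha} + \lambda_3^{\alpha} - 3\right),
\]

where $\lambda_1$, $\lambda_2$, $\lambda_3$ are the principal stretches; for $\alpha = 2$ it reduces to the neo-Hookean form.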
Currently, public-key compression of supersingular isogeny Diffie-Hellman (SIDH) and its variant, supersingular isogeny key encapsulation (SIKE), involves pairing computations and discrete logarithm computations. In this paper, we propose novel methods to compute only 3 discrete logarithms instead of 4, in exchange for computing a lookup table efficiently. The algorithms also allow us to make a trade-off between memory and efficiency. Our implementation shows that the efficiency of our algorithms is close to that of the previous work, and our algorithms perform better in some special cases.
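To illustrate the memory/efficiency trade-off inherent in table-assisted discrete logarithm computation, here is a generic baby-step giant-step routine with a precomputed lookup table; it is a textbook method in a prime-field group, not the paper's algorithm in the isogeny setting.

```python
from math import isqrt

def bsgs_dlog(g, h, order, p):
    """Baby-step giant-step: find x with g^x = h (mod p).
    The baby-step table trades O(sqrt(order)) memory for
    O(sqrt(order)) time instead of O(order)."""
    m = isqrt(order) + 1
    table = {pow(g, j, p): j for j in range(m)}  # precomputed lookup table
    factor = pow(g, -m, p)                       # g^(-m) mod p (Python 3.8+)
    gamma = h
    for i in range(m):                           # giant steps
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None

# Example in the multiplicative group mod 101 (2 generates it, order 100):
assert bsgs_dlog(2, pow(2, 57, 101), 100, 101) == 57
```

Enlarging the table shifts work from the online phase to precomputation, which is the same kind of trade-off discussed above.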
We introduce an algorithm which solves mean payoff games in polynomial time on average, assuming the distribution of the games satisfies a flip invariance property on the set of actions associated with every state. The algorithm is a tropical analogue of the shadow-vertex simplex algorithm, which solves mean payoff games via linear feasibility problems over the tropical semiring $(\mathbb{R} \cup \{-\infty\}, \max, +)$. The key ingredient in our approach is that the shadow-vertex pivoting rule can be transferred to tropical polyhedra, and that its computation reduces to optimal assignment problems through Pl\"ucker relations.
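As a small illustration of the underlying algebra, the sketch below implements matrix multiplication over the tropical semiring $(\mathbb{R} \cup \{-\infty\}, \max, +)$, where "addition" is $\max$ and "multiplication" is $+$; it is a generic illustration, not the paper's pivoting algorithm.

```python
import numpy as np

NEG_INF = -np.inf  # additive identity of the tropical semiring

def tropical_matmul(A, B):
    """Tropical matrix product: C[i, j] = max_k (A[i, k] + B[k, j])."""
    n, m = A.shape
    assert m == B.shape[0]
    C = np.full((n, B.shape[1]), NEG_INF)
    for k in range(m):
        C = np.maximum(C, A[:, k][:, None] + B[k, :][None, :])
    return C

A = np.array([[0.0, 3.0], [NEG_INF, 1.0]])
B = np.array([[2.0, NEG_INF], [0.0, 5.0]])
print(tropical_matmul(A, B))  # [[3. 8.] [1. 6.]]
```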
A novel splitting scheme to solve parametric multiconvex programs is presented. It consists of a fixed number of proximal alternating minimisations and a dual update per time step, which makes it attractive in a real-time Nonlinear Model Predictive Control (NMPC) framework and for distributed computing environments. Assuming that the parametric program is semi-algebraic and that its KKT points are strongly regular, a contraction estimate is derived and it is proven that the sub-optimality error remains stable if two key parameters are tuned properly. The efficacy of the method is demonstrated by solving a bilinear NMPC problem to control a DC motor.
The anomalous concentration of radiocarbon in 774/775 has attracted intense discussion about its origin, including possible extreme solar event(s) exceeding any in observational history. In the search for such extreme solar events, auroral records in historical documents were also surveyed, and those describing the red celestial sign after sunset in the Anglo-Saxon Chronicle (ASC) were subjected to consideration. Usoskin et al. (2013: U13) interpreted this record as an aurora and suggested enhanced solar activity around 774/775. Conversely, Neuhauser and Neuhauser (2015a, 2015b: N15a and N15b) interpreted "after sunset" as during sunset or twilight; they considered this sign to be a halo display and suggested a solar minimum around 774. However, these records have so far not been discussed in comparison with eyewitness auroral records during known extreme space-weather events, although they have been discussed in relation to potential extreme events in 774/775. Therefore, we reconstruct the observational details based on the original records in the ASC and philological references, compare them with eyewitness auroral observations during known extreme space-weather events, and consider contemporary solar activity. We clarify that the observation was indeed "after sunset", reject the solar halo hypothesis, define the observational time span as between 25 Mar. 775 and 25 Dec. 777, and note that the parallel halo drawing in 806 in the ASC shown in N15b was not based on an original observation in England. We show examples of eyewitness auroral observations during twilight in known space-weather events, so this celestial sign does not contradict the observational evidence. Accordingly, we consider that this event happened after the onset of the 774/775 event and indicates relatively enhanced solar activity in the mid-770s, together with other historical auroral records, as also confirmed by the Be data from ice cores.
We show that the existence of a well-known type of ideals on a regular cardinal $\lambda$ implies a compactness property concerning the specialisability of a tree of height $\lambda$ with no cofinal branches. We also use Neeman's method of side conditions to show that the existence of such ideals is consistent with stationarily many appropriate guessing models. These objects suffice to extend the main theorem of \cite{mhpr_spe}: one can generically specialise any branchless tree of height $\kappa^{++}$ with a ${<}\kappa$-closed, $\kappa^{+}$-proper, and $\kappa^{++}$-preserving forcing, which has the $\kappa^+$-approximation property.
A general field-antifield BV formalism for antisymplectic first class constraints is proposed. It is as general as the corresponding symplectic BFV-BRST formulation, and it is demonstrated to be consistent with a previously proposed formalism for antisymplectic second class constraints through a generalized conversion to corresponding first class constraints. Thereby the basic concept of gauge symmetry is extended to apply to quite a new class of gauge theories that could potentially exist.
Analyzing data from multiple neuroimaging studies has great potential in terms of increasing statistical power, enabling detection of effects of smaller magnitude than would be possible when analyzing each study separately, and also allowing between-study differences to be investigated systematically. Restrictions due to privacy or proprietary data, as well as more practical concerns, can make it hard to share neuroimaging datasets, such that analyzing all data in a common location might be impractical or impossible. Meta-analytic methods provide a way to overcome this issue by combining aggregated quantities like model parameters or risk ratios. Most meta-analytic tools focus on parametric statistical models, and methods for meta-analyzing semi-parametric models like generalized additive models have not been well developed. Parametric models are often not appropriate in neuroimaging, where, for instance, age-brain relationships may take forms that are difficult to accurately describe using such models. In this paper we introduce meta-GAM, a method for meta-analysis of generalized additive models which does not require individual participant data, and hence is suitable for increasing statistical power while upholding privacy and other regulatory concerns. We extend previous works by enabling the analysis of multiple model terms as well as multivariate smooth functions. In addition, we show how meta-analytic $p$-values can be computed for smooth terms. The proposed methods are shown to perform well in simulation experiments, and are demonstrated in a real data analysis on hippocampal volume and self-reported sleep quality data from the Lifebrain consortium. We argue that application of meta-GAM is especially beneficial in lifespan neuroscience and imaging genetics. The methods are implemented in an accompanying R package \verb!metagam!, which is also demonstrated.
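A standard way to combine fitted smooths without individual participant data, and to our understanding the kind of pointwise meta-analysis underlying such methods (notation ours), is inverse-variance weighting:

\[
\hat f_{\mathrm{meta}}(x) = \frac{\sum_{s=1}^{S} w_s(x)\,\hat f_s(x)}{\sum_{s=1}^{S} w_s(x)},
\qquad
w_s(x) = \frac{1}{\widehat{\operatorname{Var}}\left[\hat f_s(x)\right]},
\]

where $\hat f_s$ is the smooth term estimated in study $s$ and the weights are evaluated pointwise over a common grid of $x$ values.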
We investigate four finiteness conditions related to residual finiteness: complete separability, strong subsemigroup separability, weak subsemigroup separability and monogenic subsemigroup separability. For each of these properties we examine under which conditions the property is preserved under direct products. We also consider if any of the properties are inherited by the factors in a direct product. We give necessary and sufficient conditions for finite semigroups to preserve the properties of strong subsemigroup separability and monogenic subsemigroup separability in a direct product.
In the original Art Gallery Problem (AGP), one seeks the minimum number of guards required to cover a polygon $P$. We consider the Chromatic AGP (CAGP), where the guards are colored. As long as $P$ is completely covered, the number of guards does not matter, but guards with overlapping visibility regions must have different colors. This problem has applications in landmark-based mobile robot navigation: guards are landmarks, which have to be distinguishable (hence the colors), and are used to encode motion primitives, e.g., "move towards the red landmark". Let $\chi_G(P)$, the chromatic number of $P$, denote the minimum number of colors required to color any guard cover of $P$. We show that determining whether $\chi_G(P) \leq k$ is NP-hard for all $k \geq 2$. Keeping the number of colors minimal is of great interest for robot navigation, because fewer types of landmarks lead to cheaper and more reliable recognition.
An infinite family of irreducible homogeneous free divisors in $K[x, y, z]$ is constructed. Indeed, we identify sets of monomials $X$ such that the general polynomial supported on $X$ is a free divisor.
We describe a construction of complex geometrical analysis which corresponds to the classical theory of spherical harmonics.
The job of a physicist is to describe Nature. General features, hypotheses and theories help to describe physics phenomena at a more abstract, fundamental level, and are sometimes tacitly assigned some sort of real existence; doing so appears to be of little harm in most of classical physics. However, missing any tangible connection to everyday experience, one had better always bear in mind the descriptive nature of any effort to grasp the quantum. And elementary particles interact in the quantum world, of course. When communicating the world of elementary particles to the general public, the Bayesian approach of an ever-ongoing updating of the depiction of reality turns out to be virtually indispensable. The human experience of providing a series of increasingly better descriptions generates plenty of personal pleasure, for researchers as well as for amateurs. A suggestive analogy for improving our understanding of the world, even the seemingly paradoxical quantum world, may be found in recent insight into how congenitally blind children and young adults learn to see after having received successful eye surgery.
We provide a simple algorithm for recognizing and performing Reidemeister moves in a Gauss diagram.
A formalism is introduced which may describe both standard linearized waves and gravitational waves in Isaacson's high-frequency limit. After emphasizing the main differences between the two approximation techniques, we generalize the Isaacson method to non-vacuum spacetimes. Then we present three large explicit classes of solutions for high-frequency gravitational waves in particular backgrounds. These involve non-expanding (plane, spherical or hyperboloidal), cylindrical, and expanding (spherical) waves propagating in various universes which may contain a cosmological constant and electromagnetic field. Relations of high-frequency gravitational perturbations of these types to corresponding exact radiative spacetimes are described.
Single atoms or atom-like emitters are the purest source of on-demand single photons, as they are intrinsically incapable of multi-photon emission. To demonstrate this degree of purity, we have realized a tunable, on-demand source of single photons using a single ion trapped at the common focus of high-numerical-aperture lenses. Our trapped-ion source produces single-photon pulses at a rate of 200 kHz with $g^{(2)}(0) = (1.9 \pm 0.2) \times 10^{-3}$, without any background subtraction. The corresponding residual background is accounted for exclusively by detector dark counts. We further characterize the performance of our source by measuring the violation of a non-Gaussian state witness and show that its output corresponds to ideal attenuated single photons. Combined with current efforts to enhance collection efficiency from single emitters, our results suggest that single trapped ions are not only ideal stationary qubits for quantum information processing, but also promising sources of light for scalable optical quantum networks.
In the context of the recent COVID-19 outbreak, quarantine has been used to "flatten the curve" and slow the spread of the disease. In this paper, we show that this is not the only benefit of quarantine for the mitigation of an SIR epidemic spreading on a graph. Indeed, human contact networks exhibit a power-law structure, which means immunizing nodes at random is extremely ineffective at slowing the epidemic, while immunizing high-degree nodes can efficiently guarantee herd immunity. We theoretically prove that if quarantines are declared at the right moment, high-degree nodes are disproportionately in the Removed state, which is a form of targeted immunization. Even if quarantines are declared too early, subsequent waves of infection spread more slowly than the first wave. This leads us to propose an opening and closing strategy aiming at immunizing the graph while infecting the minimum number of individuals, guaranteeing that the population is robust to future infections. To the best of our knowledge, this is the only strategy that guarantees herd immunity without requiring vaccines. We extensively verify our results on simulated and real-life networks.
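The following sketch illustrates the targeted-immunization effect on a power-law contact network using networkx; the parameter values are arbitrary and the code is an illustration of the mechanism, not the paper's experimental setup.

```python
import random
import networkx as nx

def sir_step(G, state, beta=0.05, gamma=0.02):
    """One discrete step of SIR dynamics; state[v] is 'S', 'I' or 'R'."""
    new_state = dict(state)
    for v in G:
        if state[v] == 'I':
            if random.random() < gamma:
                new_state[v] = 'R'         # recovery
            for u in G.neighbors(v):
                if state[u] == 'S' and random.random() < beta:
                    new_state[u] = 'I'     # infection along an edge
    return new_state

# Scale-free-ish network: hubs drive the spread, so moving high-degree
# nodes to 'R' (targeted immunization) suppresses the epidemic far more
# than removing the same number of random nodes.
G = nx.barabasi_albert_graph(5000, 3)
state = {v: 'S' for v in G}
for v in sorted(G, key=G.degree, reverse=True)[:250]:  # top 5% by degree
    state[v] = 'R'
for v in random.sample([v for v in G if state[v] == 'S'], 10):
    state[v] = 'I'
for _ in range(300):
    state = sir_step(G, state)
print('final epidemic size:', sum(s == 'R' for s in state.values()) - 250)
```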
This work discusses the numerical approximation of a nonlinear reaction-advection-diffusion equation, which is a dimensionless form of the Weertman equation. This equation models steadily-moving dislocations in materials science. It reduces to the celebrated Peierls-Nabarro equation when its advection term is set to zero. The approach rests on considering a time-dependent formulation, which admits the equation under study as its long-time limit. Introducing a Preconditioned Collocation Scheme based on Fourier transforms, the iterative numerical method presented solves the time-dependent problem, delivering at convergence the desired numerical solution to the Weertman equation. Although it rests on an explicit time-evolution scheme, the method allows for large time steps, and captures the solution in a robust manner. Numerical results illustrate the efficiency of the approach for several types of nonlinearities.
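A minimal Fourier-collocation time stepper of the general flavor described above is sketched here for a 1D reaction-advection-diffusion equation $u_t + c\,u_x = u_{xx} + f(u)$; the nonlinearity, grid, and semi-implicit treatment are our illustrative choices, not the paper's preconditioned scheme.

```python
import numpy as np

N, L, c, dt = 256, 40.0, 1.0, 1e-3
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # spectral wavenumbers

f = lambda u: u - u**3                       # illustrative nonlinearity
u = np.tanh(x - L / 2)                       # initial guess with a front

for _ in range(100000):
    # semi-implicit step: stiff linear part treated exactly in Fourier
    # space, nonlinearity treated explicitly
    u_hat = (np.fft.fft(u) + dt * np.fft.fft(f(u))) \
            / (1 + dt * (k**2 + 1j * c * k))
    u_new = np.real(np.fft.ifft(u_hat))
    if np.max(np.abs(u_new - u)) < 1e-10:    # long-time limit reached
        u = u_new
        break
    u = u_new
```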
Using a microscopic numerical approach suitable for describing disordered antiferromagnets, with application to $Fe_{x}Zn_{1-x}F_{2}$, it is shown that the characteristics of the spin glass phase found for $x=0.25$ are in close agreement with the scenario predicted by the scaling theory of the droplet model.
We consider the gravity-capillary water waves problem in a domain $\Omega_t \subset \mathbb{T} \times \mathbb{R}$ with substantial geometric features. Namely, we consider a variable bottom, smooth obstacles in the flow, and a constant background current. We utilize a vortex sheet model introduced by Ambrose et al. in arXiv:2108.01786. We show that the water waves problem is locally-in-time well-posed in this geometric setting and study the lifespan of solutions. We then add a damping term and derive evolution equations that account for the damper. Ultimately, we show that the same well-posedness and lifespan results apply to the damped system. We primarily utilize energy methods.
The cohomology of the Hilbert schemes of points on smooth projective surfaces can be approached both with vertex algebra tools and equivariant tools. Using the first tool, we study the existence and the structure of universal formulas for the Chern classes of the tangent bundle over the Hilbert scheme of points on a projective surface. The second tool leads then to nice generating formulas in the particular case of the Hilbert scheme of points on the affine plane.
The controlled creation of defect center-nanocavity systems is one of the outstanding challenges for efficiently interfacing spin quantum memories with photons for photon-based entanglement operations in a quantum network. Here, we demonstrate direct, maskless creation of atom-like single silicon-vacancy (SiV) centers in diamond nanostructures via focused ion beam implantation with $\sim 32$ nm lateral precision and $< 50$ nm positioning accuracy relative to a nanocavity. Moreover, we determine the Si+ ion to SiV center conversion yield to be $\sim 2.5\%$ and observe a 10-fold conversion yield increase upon additional electron irradiation. We extract inhomogeneously broadened ensemble emission linewidths of $\sim 51$ GHz, and close-to-lifetime-limited single-emitter transition linewidths down to $126 \pm 13$ MHz, corresponding to $\sim 1.4$ times the natural linewidth. This demonstration of deterministic creation of optically coherent solid-state single quantum systems is an important step towards the development of scalable quantum optical devices.
We study the dynamics of gyrotactic swimmers in turbulence, whose orientation is governed by gravitational torque and the local fluid velocity gradient. The gyrotaxis strength is measured by the ratio of the Kolmogorov time scale to the reorientation time scale due to gravity, and a large value of this ratio means the gyrotaxis is strong. By means of direct numerical simulations, we investigate the effects of swimming velocity and gyrotactic stability on spatial accumulation and alignment. Three-dimensional Vorono{\"\i} analysis is used to study the spatial distribution and time evolution of the particle concentration. We study the spatial distribution by examining the overall preferential sampling and where clusters and voids (subsets of particles that have small and large Vorono{\"\i} volumes, respectively) form. Compared with the ensemble of particles, the preferential sampling of clusters and voids is found to be more pronounced. The clustering of fast swimmers lasts much longer than that of slower swimmers when the gyrotaxis is strong or intermediate, but an opposite trend emerges when the gyrotaxis is weak. In addition, we study the preferential alignment with the Lagrangian stretching direction, with which passive slender rods are known to align. We show that the Lagrangian alignment is reduced by the swimming velocity when the gyrotaxis is weak, while the Lagrangian alignment is enhanced in the regime where the gyrotaxis is strong.
We refine existing general network optimization techniques, give new characterizations for the class of problems to which they can be applied, and show that they can also be used to solve various two-player games in almost linear time. Among these is a new variant of the network interdiction problem, where the interdictor wants to destroy high-capacity paths from the source to the destination using a vertex-wise limited budget of arc removals. We also show that replacing the limit average in mean payoff games by the maximum weight results in a class of games amenable to these techniques.
Railroad transportation plays a vital role in the future of sustainable mobility. Besides building new infrastructure, capacity can be improved by modern train control systems, e.g., based on moving blocks. At the same time, there is only limited work on how to optimally route trains using the potential gained by these systems. Recently, an initial approach for train routing with moving block control has been proposed to address this demand. However, detailed evaluations of so-called lazy constraints are missing, and no publicly available implementation exists. In this work, we close this gap by providing an extended approach as well as a flexible open-source implementation that can use different solving strategies. Using that, we experimentally evaluate what choices should be made when implementing a lazy constraint approach. The corresponding implementation and benchmarks are publicly available as part of the Munich Train Control Toolkit (MTCT) at https://github.com/cda-tum/mtct.
We present a scenario for the efficient magnetization of very young galaxies about 0.5 gigayears after the Big Bang by a cosmic-ray-driven dynamo. These objects experience a phase of strong star formation during their first $10^9$ years. We transfer the knowledge of the connection between star formation and the production rate of cosmic rays by supernova remnants to such high-redshift objects. Since the supernova rate is a direct measure of the production rate of cosmic rays, we conclude that very young galaxies must be strong sources of cosmic rays. The key argument of our model is the finding that magnetic fields and cosmic rays are dynamically coupled, i.e. a strong cosmic ray source contains strong magnetic fields, since the relativistic particles drive an efficient dynamo in a galaxy via their buoyancy. We construct a phenomenological model of a dynamo driven by the buoyancy of cosmic rays and show that if azimuthal shearing is strong enough, the dynamo amplification timescale is close to the buoyancy timescale of the order of several $10^7$--$10^8$ yr. We predict that young galaxies are strongly magnetized and may contribute significantly to the gamma-ray background.
The polynomial $x^n+1$ over finite fields has been of interest due to its applications in the study of negacyclic codes over finite fields. In this paper, a rigorous treatment of the factorization of $x^n+1$ over finite fields is given as well as its applications. Explicit and recursive methods for factorizing $x^n+1$ over finite fields are provided together with the enumeration formula. As applications, some families of negacyclic codes are revisited in clearer and simpler forms.
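For a quick experiment, sympy's modular factorization reproduces the structure predicted by the theory: over $GF(q)$ with $\gcd(2n, q) = 1$, the irreducible factors of $x^n + 1$ have degrees given by the multiplicative orders of $q$ modulo the orders of the corresponding roots of unity (the example values are ours).

```python
from sympy import symbols, factor_list

x = symbols('x')

# Factor x^8 + 1 over GF(3): since the order of 3 modulo 16 is 4,
# it splits into two irreducible quartics.
const, factors = factor_list(x**8 + 1, modulus=3)
for poly, mult in factors:
    print(poly, mult)
```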
We demonstrate that detection of the heavier minimal supersymmetric model CP-even Higgs boson $H^0$ will be possible at the LHC via its $H^0\to h^0h^0\to 4b$ and/or $H^0\to A^0A^0\to 4b$ decay channels for significant portions of the $(m_{A^0}, \tan\beta)$ model parameter space. At low $m_{A^0}$ ($\lesssim 60$ GeV), both the $H^0\to A^0A^0\to 4b$ and $H^0\to h^0h^0\to 4b$ modes yield a viable signal for most $\tan\beta$ values; viability for the $h^0h^0$ channel extends up to $m_{H^0}\sim 2m_t$ when the model parameter $\tan\beta$ is not large. At the Tevatron, the $h^0h^0$ and $A^0A^0$ channels are both potentially viable at low $m_{A^0}$ for sufficiently good $b$-tagging efficiency and purity.
We consider a classical overdamped Brownian particle moving in a symmetric periodic potential. We show that a net particle flow can be produced by adiabatically changing two external periodic potentials with a spatial and a temporal phase difference. The classical pumped current is found to be independent of the friction and to vanish both in the limit of low and high temperature. Below a critical temperature, adiabatic pumping appears to be more efficient than transport due to a constant external force.
The resolved mass assembly of Milky-Way-mass galaxies has been previously studied in simulations, in the local universe, and at higher redshifts using infrared (IR) light profiles. To better characterize the mass assembly of Milky Way Analogues (MWAs), as well as their changes in star-formation rate and color gradients, we construct resolved stellar mass and star-formation rate maps of MWA progenitors selected with abundance matching techniques up to $z \sim 2$ using deep, multi-wavelength imaging data from the Hubble Frontier Fields. Our results using stellar mass profiles agree well with previous studies that utilize IR light profiles, showing that the inner 2 kpc of the galaxies and the regions beyond 2 kpc exhibit similar rates of stellar mass growth. This indicates the progenitors of MWAs from $z\sim 2$ to the present do not preferentially grow their bulges or their disks. The evolution of the star-formation rate (SFR) profiles indicates a greater decrease in SFR density in the inner regions versus the outer regions. S\'ersic parameters indicate modest growth in the central regions at lower redshifts, perhaps indicating slight bulge growth. However, the S\'ersic index does not rise above $n \sim 2$ until $z < 0.5$, meaning these galaxies are still disk-dominated systems. We find that the half-mass radii of the MWA progenitors increase between $1.5 < z < 2$, but remain constant at later epochs ($z < 1.5$). This implies mild bulge growth since $z\sim 2$ in MWA progenitors, in line with previous MWA mass assembly studies.
Although Casimir forces are inseparable from their fluctuations, little is known about these fluctuations in soft matter systems. We use the membrane stress tensor to study the fluctuations of the membrane-mediated Casimir-like force. This method enables us to recover the Casimir force between two inclusions and to calculate its variance. We show that the Casimir force is dominated by its fluctuations. Furthermore, when the distance $d$ between the inclusions is decreased from infinity, the variance of the Casimir force decreases as $-1/d^2$. This distance dependence shares a common physical origin with the Casimir force itself.
In particle physics, semi-supervised machine learning is an attractive option for reducing model dependence in searches beyond the Standard Model. When utilizing semi-supervised techniques to train machine learning models in the search for bosons at the Large Hadron Collider, the over-training of the model must be investigated. Internal fluctuations of the phase space and bias in training can cause semi-supervised models to label false signals within the phase space due to over-fitting. The issue of false signal generation in semi-supervised models has not been fully analyzed, and therefore, utilizing a toy Monte Carlo model, the probability of such situations occurring must be quantified. This investigation of $Z\gamma$ resonances is performed using a pure background Monte Carlo sample. Through unique pure background samples extracted to mimic ATLAS data in a background-plus-signal region, multiple runs enable the probability of these fake signals occurring due to over-training to be thoroughly investigated.
We consider the stochastic background of gravitational waves produced during the radiation-dominated hot big bang as a constraint on the primordial density perturbation on comoving length scales much smaller than those directly probed by the cosmic microwave background or large-scale structure. We place weak upper bounds on the primordial density perturbation from current data. Future detectors such as BBO and DECIGO will place much stronger constraints on the primordial density perturbation on small scales.
We probe the gravitational force perpendicular to the Galactic plane at the position of the Sun based on a sample of red giants, with measurements taken from the Gaia DR3 catalogue. Measurements far out of the Galactic plane, up to 3.5 kpc, allow us to determine directly the total mass density where dark matter is dominant and the stellar and gas densities are very low. In a complementary way, we have also used a new determination of the local baryonic mass density to help determine the density of dark matter in the Galactic plane at the solar position. For the local mass density of dark matter, we obtain $\rho_\mathrm{dm} = 0.0128 \pm 0.0008\,M_\odot\,\mathrm{pc}^{-3} = 0.486 \pm 0.030$ GeV cm$^{-3}$. For the flattening of the gravitational potential of the dark halo, we find $q_\mathrm{\phi,h}=0.843\pm0.035$; for its density, $q_\mathrm{\rho,h}=0.781\pm0.055$.
In the present work, we discuss some topological features of charged particles interacting with a uniform magnetic field in a finite volume. The edge state solutions are presented; as a signature of non-trivial topological systems, the energy spectrum of edge states shows up in the gap between allowed energy bands. By treating the total momentum of the two-body system as a continuously distributed parameter in the complex plane, the analytic properties of solutions of the finite volume system in a magnetic field are also discussed.
Preliminary results of identical-particle correlations probing the geometric substructure of the particle-emitting source at RHIC are presented. An $m_T$-independent scaling of pion HBT radii from large (central Au+Au) to small (p+p) collision systems naively suggests comparable flow strength in all of them. Multidimensional correlation functions are studied in detail using a spherical decomposition method. In the light systems, the presence of significant long-range non-femtoscopic correlations complicates the extraction of HBT radii.
We reconsider the moments of the reduced density matrix of two disjoint intervals and of its partial transpose with respect to one interval for critical free fermionic lattice models. It is known that these matrices are sums of either two or four Gaussian matrices and hence their moments can be reconstructed as computable sums of products of Gaussian operators. We find that, in the scaling limit, each term in these sums is in one-to-one correspondence with the partition function of the corresponding conformal field theory on the underlying Riemann surface with a given spin structure. The analytical findings have been checked against numerical results for the Ising chain and for the XX spin chain at the critical point.
Floquet Majorana edge modes capture the topological features of periodically driven superconductors. We present a Kitaev chain with multiple time-periodic drivings and demonstrate how avoided band crossings are altered, which gives rise to new regions supporting Majorana edge modes. A one-dimensional generalized method is proposed to predict Majorana edge modes via the Zak phase of the Floquet bands. We also study the time-independent effective Hamiltonian in the high-frequency limit and introduce diverse indices to characterize topological phases with different relative phases between the multiple drivings. Our work enriches the physics of driven systems and paves the way for locating Majorana edge modes in a larger parameter space.
The Noisy Max mechanism and its variations are fundamental private selection algorithms that are used to select items from a set of candidates (such as the most common diseases in a population), while controlling the privacy leakage in the underlying data. A recently proposed extension, Noisy Top-k with Gap, provides numerical information about how much better the selected items are compared to the non-selected items (e.g., how much more common are the selected diseases). This extra information comes at no privacy cost but crucially relies on infinite precision for the privacy guarantees. In this paper, we provide a finite-precision secure implementation of this algorithm that takes advantage of integer arithmetic.
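For context, an idealized real-valued Report Noisy Max looks as follows; the paper's point is precisely that a naive floating-point implementation like this one does not attain the stated guarantees, which motivates the finite-precision integer-arithmetic version.

```python
import numpy as np

def report_noisy_max(scores, epsilon, sensitivity=1.0, seed=None):
    """Idealized Report Noisy Max: add Laplace noise to each candidate's
    score and release only the argmax (infinite-precision abstraction)."""
    rng = np.random.default_rng(seed)
    noise = rng.laplace(scale=2.0 * sensitivity / epsilon, size=len(scores))
    return int(np.argmax(np.asarray(scores, dtype=float) + noise))

# e.g., privately select the most common disease from per-disease counts
counts = [120, 95, 310, 40]
print(report_noisy_max(counts, epsilon=0.5))
```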
An entropy-bounded Discontinuous Galerkin (EBDG) scheme is proposed in which the solution is regularized by constraining the entropy. The resulting scheme is able to stabilize the solution in the vicinity of discontinuities and retains optimal accuracy for smooth solutions. The properties of the limiting operator according to the entropy-minimum principle are proven analytically, and an optimal CFL criterion is derived. We provide a rigorous description for locally imposing entropy constraints to capture multiple discontinuities. Significant advantages of the EBDG scheme are its general applicability to arbitrary high-order elements and its simple implementation for two- and three-dimensional configurations. Numerical tests confirm the properties of the scheme, and particular focus is attributed to the robustness in treating discontinuities on arbitrary meshes.
We take an argument of G\"odel's from his ground-breaking 1931 paper, generalize it, and examine its validity. The argument in question is this: the sentence $G$ says about itself that it is not provable, and $G$ is indeed not provable; therefore, $G$ is true.
In this paper we focus on a feedback mechanism for unsourced random access (URA) communications. We propose an algorithm to construct feedback packets broadcast to the users by the base station (BS), as well as a feedback packet format that allows the users to estimate their channels and infer positive or negative feedback based on the presented thresholding algorithm. We demonstrate that the proposed feedback imposes a much smaller complexity burden on the users compared to feedback that positively acknowledges all successful users or negatively acknowledges all undecoded users. We also show that the proposed feedback technique can lead to a substantial reduction in the packet error rates and signal-to-noise ratios (SNRs) required to support various numbers of active users in the system.
Let $G$ and $\tilde G$ be Kleinian groups whose limit sets $S$ and $\tilde S$, respectively, are homeomorphic to the standard Sierpi\'nski carpet, and such that every complementary component of each of $S$ and $\tilde S$ is a round disc. We assume that the groups $G$ and $\tilde G$ act cocompactly on triples on their respective limit sets. The main theorem of the paper states that any quasiregular map (in a suitably defined sense) from an open connected subset of $S$ to $\tilde S$ is the restriction of a M\"obius transformation that takes $S$ onto $\tilde S$; in particular, it has no branching. This theorem applies to the fundamental groups of compact hyperbolic 3-manifolds with non-empty totally geodesic boundaries. One consequence of the main theorem is the following result. Assume that $G$ is a torsion-free hyperbolic group whose boundary at infinity $\partial_\infty G$ is a Sierpi\'nski carpet that embeds quasisymmetrically into the standard 2-sphere. Then there exists a group $H$ that contains $G$ as a finite index subgroup and such that any quasisymmetric map $f$ between open connected subsets of $\partial_\infty G$ is the restriction of the induced boundary map of an element $h\in H$.
The CZTI (Cadmium Zinc Telluride Imager) onboard AstroSat is a high energy coded mask imager and spectrometer in the energy range of 20 - 100 keV. Above 100 keV, the dominance of the Compton scattering cross-section in CZTI results in a significant number of 2-pixel Compton events, and these have been successfully utilized for polarization analysis of the Crab pulsar and nebula (and transients like gamma-ray bursts) in 100 - 380 keV. These 2-pixel Compton events can also be used to extend the spectroscopic energy range of CZTI up to 380 keV for bright sources. However, unlike spectroscopy in the primary energy range, where simultaneous background measurement is available from masked pixels, Compton spectroscopy requires blank sky observations for background measurement. Background subtraction, in this case, is non-trivial because of the presence of both short-term and long-term temporal variations in the data, which depend on multiple factors like Earth's rotation and the effect of the South Atlantic Anomaly (SAA) region. We have developed a methodology of background selection and subtraction that takes these effects into account. Here, we describe these background selection and subtraction techniques and validate them using spectroscopy of the Crab in the extended energy range of 30 - 380 keV, and compare the obtained spectral parameters with the INTEGRAL results. This new capability extends the energy range of AstroSat spectroscopy and will also enable simultaneous spectro-polarimetric studies of other bright sources like Cygnus X-1.
Because closed timelike curves are consistent with general relativity, many have asserted that time travel into the past is physically possible, even if technologically infeasible. However, the possibility of time travel into the past rests on the unstated and false assumption that zero change to the past implies zero change to the present. I show that this assumption is logically inconsistent; as such, it renders time travel into the past both unscientific and pseudoscientific.
Metasurfaces have received a lot of attention recently due to their versatile capability in manipulating electromagnetic waves. Advanced designs satisfying multiple objectives with non-linear constraints have motivated researchers to use machine learning (ML) techniques like deep learning (DL) for the accelerated design of metasurfaces. For metasurfaces, it is difficult to make quantitative comparisons between different ML models without a common and yet complex dataset, as used in many disciplines like image classification. Many studies were directed at relatively constrained datasets that are limited to specified patterns or shapes in metasurfaces. In this paper, we present our SUTD polarized reflection of complex metasurfaces (SUTD-PRCM) dataset, which contains approximately 260,000 samples of complex metasurfaces created from electromagnetic simulation, and which has been used to benchmark our DL models. The metasurface patterns are divided into different classes to facilitate different degrees of complexity, which involves identifying and exploiting the relationship between the patterns and the electromagnetic responses that can be compared using different DL models. With the release of this SUTD-PRCM dataset, we hope that it will be useful for benchmarking existing or future DL models developed in the ML community. We also propose a classification problem that is less commonly encountered and apply neural architecture search to gain a preliminary understanding of potential modifications to the neural architecture that would improve the predictions of DL models. Our finding shows that convolution stacking is no longer the dominant element of the neural architecture, which implies that low-level features are preferred over the traditional deep hierarchical high-level features, explaining why deep convolutional neural network based models do not perform well on our dataset.
Thin films of silicon oxide (SiOx) are mixtures of semiconductive c-Si nanoclusters (NC) embedded in an insulating g-SiO2 matrix. Tour et al. have shown that a trenched thin-film geometry enables the NC to form semiconductive filamentary arrays when driven by an applied field. The field required to form reversible nanoscale switching networks (NSN) decreases rapidly within a few cycles, or by annealing at 600 C in even fewer cycles, and is stable to 700 C. Here we discuss an elastic mechanism that explains why a vertical edge across the planar Si-SiOx interface is necessary to form NSN. The discussion shows that the formation mechanism is intrinsic and need not occur locally at the edge, but can occur anywhere in the SiOx film, given the unpinned nanoscale vertical edge geometry.
Controlling the type and density of charge carriers by doping is the key step for developing graphene electronics. However, direct doping of graphene is rather a challenge. Based on first-principles calculations, a concept for overcoming the doping difficulty in graphene via a substrate is reported. We find that doping can be strongly enhanced in epitaxial graphene grown on a silicon carbide substrate. Compared to free-standing graphene, the formation energies of the dopants can decrease by as much as 8 eV. The type and density of the charge carriers of the epitaxial graphene layer can be effectively manipulated by suitable dopants and surface passivation. More importantly, in contrast to the direct doping of graphene, the charge carriers in the epitaxial graphene layer are only weakly scattered by dopants due to the spatial separation between the dopants and the conducting channel. Finally, we show that a similar idea can also be used to control magnetic properties, for example, to induce a half-metallic state in the epitaxial graphene without magnetic impurity doping.
We present Karl G. Jansky Very Large Array (VLA) observations of 44 GHz continuum and CO J=2-1 line emission in BR1202-0725 at z=4.7 (a starburst galaxy and quasar pair) and BRI1335-0417 at z=4.4 (also hosting a quasar). With the full 8 GHz bandwidth capabilities of the upgraded VLA, we study the (rest-frame) 250 GHz thermal dust continuum emission for the first time along with the cold molecular gas traced by the low-J CO line emission. The measured CO J=2-1 line luminosities of BR1202-0725 are L'(CO) = (8.7+/-0.8)x10^10 K km/s pc^2 and L'(CO) = (6.0+/-0.5)x10^10 K km/s pc^2 for the submm galaxy (SMG) and quasar, which are equal to previous measurements of the CO J=5-4 line luminosities, implying thermalized line emission, and we estimate a combined cold molecular gas mass of ~9x10^10 Msun. In BRI1335-0417 we measure L'(CO) = (7.3+/-0.6)x10^10 K km/s pc^2. We detect continuum emission in the SMG BR1202-0725 North (S(44GHz) = 51+/-6 microJy), while the quasar is detected with S(44GHz) = 24+/-6 microJy, and in BRI1335-0417 we measure S(44GHz) = 40+/-7 microJy. Combining our continuum observations with previous data at (rest-frame) far-infrared and cm wavelengths, we fit three-component models in order to estimate the star-formation rates. This spectral energy distribution fitting suggests that the dominant contribution to the observed 44 GHz continuum is thermal dust emission, while either thermal free-free or synchrotron emission contributes less than 30%.
Backward terahertz radiation can be produced by a high-intensity laser normally incident upon an underdense plasma. It is found that the terahertz radiation is generated by electrons refluxing along the bubble shell. These shell electrons have similar dynamic trajectories and emit backward radiation into vacuum. This scheme has been verified through electron-dynamics calculations as well as by using an ionic sphere model. In addition, the bubble shape is found to influence the radiation frequency, and the scheme can be implemented in both uniform and up-ramp density gradient plasma targets. The terahertz radiation may be used for diagnosing the electron bubble shape in the interaction between an intense laser and a plasma. All results are presented via 2.5-dimensional particle-in-cell simulations.
The discovery of gravitational waves and black holes has started a new era of gravitational wave astronomy that allows us to probe the underpinning features of gravity and astrophysics in extreme environments of the universe. In this article, we investigate one such study with an extreme mass-ratio inspiral system where the primary object is a spherically symmetric static black hole immersed in a dark matter halo governed by the Hernquist density distribution. We consider the eccentric equatorial orbital motion of the stellar-mass object orbiting around the primary and compute measurable effects. We examine the behaviour of the dark matter mass and halo radius in the generated gravitational wave fluxes and the evolution of the eccentric orbital parameters -- eccentricity and semi-latus rectum. We further provide an estimate of the gravitational wave dephasing and find the seminal role of low-frequency detectors in the observational prospects of such an astrophysical environment.
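For reference, the Hernquist (1990) density distribution mentioned above is

\[
\rho(r) = \frac{M\,a}{2\pi\,r\,(r+a)^{3}},
\]

where $M$ is the total halo mass and $a$ the halo scale radius; these are the two parameters whose imprint on the fluxes and dephasing is examined.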
We consider fully nonlinear Hamilton-Jacobi-Bellman equations associated to diffusion control problems involving a finite set-valued (or switching) control and possibly a continuum-valued control. In previous works (Akian, Fodjo, 2016 and 2017), we introduced a lower complexity probabilistic numerical algorithm for such equations by combining max-plus and numerical probabilistic approaches. The max-plus approach is in the spirit of the one of McEneaney, Kaise and Han (2011), and is based on the distributivity of monotone operators with respect to suprema. The numerical probabilistic approach is in the spirit of the one proposed by Fahim, Touzi and Warin (2011). A difficulty of the latter algorithm lay in the critical constraints imposed on the Hamiltonian to ensure the monotonicity of the scheme, and hence the convergence of the algorithm. Here, we present new probabilistic schemes which are monotone under rather weak assumptions, and show error estimates for these schemes. These estimates will be used in further works to study the probabilistic max-plus method.
We present results on the production of high transverse momentum charm mesons in collisions of 515 GeV/c negative pions with beryllium and copper targets. The experiment recorded a large sample of events containing high transverse momentum showers detected in an electromagnetic calorimeter. From these data, a sample of charm mesons has been reconstructed via their decay into the fully charged K pi pi mode. A measurement of the single inclusive transverse momentum distribution of charged D mesons from 1 to 8 GeV/c is presented. An extrapolation of the measured differential cross section yields an integrated charged D cross section of $11.4 \pm 2.7\,\mathrm{(stat)} \pm 3.3\,\mathrm{(syst)}$ microbarns per nucleon for charged D mesons with Feynman x greater than zero. The data are compared with expectations based upon next-to-leading order perturbative QCD, as well as with results from PYTHIA. We also compare our integrated charged D cross section with measurements from other experiments.
In this paper we develop a general method for constructing 3-point functions in conformal field theory with affine Lie group symmetry, continuing our recent work on 2-point functions. The results are provided in terms of triangular coordinates used in a wave function description of vectors in highest weight modules. In this framework, complicated couplings translate into ordinary products of certain elementary polynomials. The discussion pertains to all simple Lie groups and arbitrary integrable representations. An interesting by-product is a general procedure for computing tensor product coefficients, essentially by counting integer solutions to certain inequalities. As an illustration of the construction, we consider in great detail the three cases SL(3), SL(4) and SO(5).
We investigate symmetry properties of the Bethe ansatz wave functions for the Heisenberg $XXZ$ spin chain. The $XXZ$ Hamiltonian commutes simultaneously with the shift operator $T$ and the lattice inversion operator $V$ in the space of $\Omega=\pm 1$ with $\Omega$ the eigenvalue of $T$. We show that the Bethe ansatz solutions with normalizable wave functions cannot be the eigenstates of $T$ and $V$ with quantum number $(\Omega,\Upsilon)=(\pm 1,\mp 1)$ where $\Upsilon$ is the eigenvalue of $V$. Therefore the Bethe ansatz wave functions should be singular for nondegenerate eigenstates of the Hamiltonian with quantum number $(\Omega,\Upsilon)=(\pm 1,\mp 1)$. It is also shown that such states exist in any nontrivial down-spin number sector and that the number of them diverges exponentially with the chain length.
Using a combination of incentive modeling and empirical meta-analyses, this paper provides a pointed critique of the incentive systems that drive venture capital firms to optimize their practices towards activities that increase General Partner utility yet are disjoint from improving the underlying asset of startup equity. We propose a "distributed venture firm" powered by software automations and governed by a set of functional teams called "Pods" that carry out specific tasks with immediate and long-term payouts given on a deal-by-deal basis. Avenues are provided for further research to validate this model and discover likely paths to implementation.
We present a numerical model which describes the propagation of a single femtosecond laser pulse in a medium whose optical properties change dynamically within the duration of the pulse. We use a Finite Difference Time Domain (FDTD) method to solve Maxwell's equations coupled to equations describing the changes in the material properties. We use the model to simulate the self-reflectivity of strongly focused femtosecond laser pulses on silicon and gold under laser ablation conditions. We compare the simulations to experimental results and find excellent agreement.
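As a minimal illustration of the field-update core of such a solver, here is a 1D vacuum Yee-scheme FDTD loop in normalized units; the material-response coupling that the paper adds is omitted, and the grid parameters are illustrative.

```python
import numpy as np

n, steps = 400, 600
Ez = np.zeros(n)        # electric field on integer grid points
Hy = np.zeros(n - 1)    # magnetic field on staggered half-points
S = 0.5                 # Courant number c*dt/dx; S <= 1 for stability

for t in range(steps):
    Hy += S * (Ez[1:] - Ez[:-1])               # update H from curl E
    Ez[1:-1] += S * (Hy[1:] - Hy[:-1])         # update E from curl H
    Ez[50] += np.exp(-((t - 60) / 20.0) ** 2)  # soft Gaussian source
```

In the full model, the update coefficients would become time- and space-dependent through the dynamically changing permittivity and conductivity.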
We study nonlinear resolvents of holomorphic generators of one-parameter semigroups acting in the open unit disk. The class of nonlinear resolvents can be studied within geometric function theory because it consists of univalent functions. In this paper we establish distortion and covering results and determine the order of starlikeness and of strong starlikeness of resolvents. This implies, in particular, that every resolvent admits a quasiconformal extension to the complex plane $\mathbb{C}$. In addition, we obtain some characteristics of the semigroups generated by these resolvents.
We study an information design problem with two informed senders and a receiver in which, in contrast to traditional Bayesian persuasion settings, senders do not have commitment power. In our setting, a trusted mediator/platform gathers data from the senders and recommends an action to the receiver. We characterize the set of action distributions implementable in equilibrium, and provide an $O(n \log n)$ algorithm (where $n$ is the number of states) that computes the optimal equilibrium for the senders. Additionally, we show that the optimal equilibrium for the receiver can be obtained by a simple revelation mechanism.
Moser & Tardos have developed a powerful algorithmic approach (henceforth "MT") to the Lovasz Local Lemma (LLL); the basic operation in MT and its variants is a search for "bad" events in a current configuration. In the initial stage of MT, the variables are set independently. We examine the distributions on these variables which arise during intermediate stages of MT. We show that these configurations have a more or less "random" form, building further on the "MT-distribution" concept of Haeupler et al. in understanding the (intermediate and) output distribution of MT. This has a variety of algorithmic applications; the most important is that bad events can be found relatively quickly, improving upon MT across the complexity spectrum: it makes some polynomial-time algorithms sub-linear (e.g., for Latin transversals, which are of basic combinatorial interest), gives lower-degree polynomial run-times in some settings, transforms certain super-polynomial-time algorithms into polynomial-time ones, and leads to Las Vegas algorithms for some coloring problems for which only Monte Carlo algorithms were known. We show that under certain conditions, when the LLL condition is violated, a variant of the MT algorithm can still produce a distribution which avoids most of the bad events. We show that in some cases this MT variant can run faster than the original MT algorithm itself, and we develop the first known criterion for the case of the asymmetric LLL. This can be used to find partial Latin transversals -- improving upon earlier bounds of Stein (1975) -- among other applications. We furthermore give applications in enumeration, showing that most applications (where we aim for all or most of the bad events to be avoided) have many more solutions than known before, by proving that the MT-distribution has "large" min-entropy and hence that its support size is large.
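For background, the core MT loop that these improvements target is short enough to sketch; the basic operation is precisely the search for a violated "bad" event. A minimal illustrative version in Python, with a toy CNF-style setting and names of our own choosing:

```python
import random

def moser_tardos(n_vars, bad_events, max_iters=100_000):
    """Basic Moser-Tardos resampling (sketch). `bad_events` is a list of
    (variable_indices, predicate) pairs; a predicate returns True when
    the bad event holds under the current assignment."""
    # Initial stage: set all variables independently, uniformly at random.
    x = [random.random() < 0.5 for _ in range(n_vars)]
    for _ in range(max_iters):
        # The basic operation: search for a bad event in the current
        # configuration -- the step whose cost the paper reduces.
        violated = next((vs for vs, pred in bad_events if pred(x)), None)
        if violated is None:
            return x                       # no bad event holds: done
        for v in violated:                 # resample only that event's variables
            x[v] = random.random() < 0.5
    raise RuntimeError("did not converge within max_iters")

# Toy use: avoid any clause of a small CNF being all-False.
clauses = [(0, 1, 2), (1, 3, 4), (0, 2, 4)]
events = [(c, (lambda x, c=c: not any(x[i] for i in c))) for c in clauses]
print(moser_tardos(5, events))
```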
The pervasive use of information and communication technology (ICT) in modern societies enables countless opportunities for individuals, institutions, businesses and scientists, but also raises difficult ethical and social problems. In particular, ICT has helped to make societies more complex and thus harder to understand, which impedes social and political interventions to avoid harm and to increase the common good. To overcome this obstacle, the large-scale EU flagship proposal FuturICT intends to create a platform for accessing global human knowledge as a public good, along with instruments to increase our understanding of the information society by making use of ICT-based research. In this contribution, we outline the ethical justification for such an endeavor. We argue that the ethical issues raised by FuturICT research projects overlap substantially with many of the known ethical problems emerging from ICT use in general. By referring to the notion of Value Sensitive Design, we show for the example of privacy how this core value of responsible ICT can be protected in pursuing research in the framework of FuturICT. In addition, we discuss further ethical issues and outline the institutional design of FuturICT that allows these issues to be addressed.
In this note, we propose several unsolved problems concerning the irrotational oscillation of a water droplet under zero gravity. We derive the governing equation of this physical model and convert it to a quasilinear dispersive partial differential equation defined on the sphere, which formally resembles the capillary water waves equation but describes oscillation on a curved manifold instead. Three types of unsolved mathematical problems related to this model, motivated by observations from hydrodynamical experiments under zero gravity, will be discussed: (1) Strichartz-type inequalities for the linearized problem; (2) existence of periodic solutions; (3) normal form reduction and generic lifespan estimates. It is pointed out that all of these problems are closely related to certain Diophantine equations, especially the third one.
For any regular cardinal $\kappa$ and ordinal $\eta<\kappa^{++}$ it is consistent that $2^{\kappa}$ is arbitrarily large and every function $f:\eta \to [\kappa,2^{\kappa}]\cap \mathrm{Card}$ with $f(\alpha)=\kappa$ for $\mathrm{cf}(\alpha)<\kappa$ is the cardinal sequence of some locally compact scattered space.
The kinematic plane of stars near the Sun has proven an indispensable tool for untangling the complexities of the structure of our Milky Way (MW). With ever improving data, numerous kinematic "moving groups" of stars have been better characterized and new ones continue to be discovered. Here we present an improved method for detecting these groups using MGwave, a new open-source 2D wavelet transformation code that we have developed. Our code implements techniques similar to previous wavelet software; however, we include a more robust significance methodology and also allow for the investigation of underdensities, which can eventually provide further information about the MW's non-axisymmetric features. Applying MGwave to the latest data release from Gaia (DR3), we detect 47 groups of stars with coherent velocities. We reproduce the majority of the previously detected moving groups and identify three additional significant candidates: one within Arcturus, and two in regions without much substructure at low $V_R$. Finally, we have followed these associations of stars beyond the solar neighborhood, across Galactocentric radii from 6.5 to 10 kpc. Most detected groups extend over this range of radii, indicating that they are streams of stars possibly caused by non-axisymmetric features of the MW.
Finding nearest neighbors in high-dimensional spaces is a fundamental operation in many diverse application domains. Locality Sensitive Hashing (LSH) is one of the most popular techniques for approximate nearest neighbor search in high-dimensional spaces. The main benefits of LSH are its sub-linear query performance and its theoretical guarantees on query accuracy. In this survey paper, we review state-of-the-art LSH and Distributed LSH techniques. Most importantly, unlike prior surveys, we present how Locality Sensitive Hashing is utilized in different application domains.
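To make the idea concrete for readers new to LSH, here is a minimal random-hyperplane sketch for cosine similarity; the class, parameters, and data are illustrative, not drawn from the survey:

```python
import numpy as np

class CosineLSH:
    """Random-hyperplane LSH for cosine similarity (illustrative sketch).
    Points hashing to the same bucket are candidate near neighbors, so a
    query inspects only those buckets rather than the whole dataset."""
    def __init__(self, dim, n_bits=16, n_tables=8, seed=0):
        rng = np.random.default_rng(seed)
        # One random projection matrix per hash table.
        self.planes = [rng.standard_normal((n_bits, dim)) for _ in range(n_tables)]
        self.tables = [dict() for _ in range(n_tables)]

    def _key(self, planes, x):
        # Sign pattern of the projections -> an n_bits-bit bucket key.
        return tuple((planes @ x) > 0)

    def index(self, points):
        self.points = np.asarray(points)
        for i, x in enumerate(self.points):
            for planes, table in zip(self.planes, self.tables):
                table.setdefault(self._key(planes, x), []).append(i)

    def query(self, q, k=5):
        cand = set()
        for planes, table in zip(self.planes, self.tables):
            cand.update(table.get(self._key(planes, q), []))
        cand = list(cand)
        if not cand:
            return []
        sims = self.points[cand] @ q          # rank candidates by similarity
        return [cand[i] for i in np.argsort(-sims)[:k]]

rng = np.random.default_rng(1)
data = rng.standard_normal((1000, 64))
lsh = CosineLSH(dim=64)
lsh.index(data)
print(lsh.query(data[0]))
```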
In this work, we explore a class of compact charged spheres that have been tested against experimental and observational constraints using some known compact star candidates. The study considers self-gravitating, charged, isotropic fluids, which offer more flexibility in solving the Einstein-Maxwell equations. To determine the interior geometry, we utilize the Vaidya-Tikekar ansatz for the metric potential, with the Reissner-Nordström metric as the exterior solution. In these models, we determine the constants after selecting particular values of M and R for the compact objects SAX J1808.4-3658, Her X-1 and 4U 1538-52. The most striking consequence is that hydrostatic equilibrium is maintained by a balance of the different forces, a situation clarified by using the generalized Tolman-Oppenheimer-Volkoff (TOV) equation. In addition, we present the energy conditions, sound speeds and compactness of the stars, all of which are compatible with a physically acceptable stellar model. The resulting solutions are also illustrated graphically, providing strong evidence for more realistic and viable models, at both the theoretical and astrophysical scale.
We show in detail that the low-temperature expansion of the non-perturbative gluon pressure has a Hagedorn-type structure. Its exponential spectrum of effective gluonic excitations is expressed in terms of the mass gap, which is responsible for the large-scale dynamical structure of the QCD ground state. The properly scaled non-perturbative gluon pressure has a maximum at a characteristic temperature $T=T_c = 266.5 \ \mathrm{MeV}$, separating the low- and high-temperature regions. It is exponentially suppressed in the $T \rightarrow 0$ limit, while in the $T \rightarrow T_c$ limit it demonstrates an exponential rise in the number of dynamical degrees of freedom. This exponential increase with temperature is valid only up to $T_c$, which makes it possible to identify $T_c$ with the Hagedorn-type transition temperature $T_h$, i.e., to put $T_h=T_c$ within the mass gap approach to QCD at finite temperature. The non-perturbative gluon pressure has a complicated dependence on the mass gap and temperature near $T_c$ and up to approximately $(4-5)T_c$. In the limit of very high temperatures $T \rightarrow \infty$ its polynomial character is confirmed, containing terms proportional to $T^2$ and $T$, multiplied by the corresponding powers of the mass gap.
We prove a version of the Stokes formula for differential forms on locally convex spaces. The main tool used for proving this formula is the surface layer theorem proved in another paper by the author. Moreover, for differential forms of a Sobolev-type class relative to a differentiable measure, we compute the operator adjoint to the exterior differential in terms of standard operations of calculus of differential forms and the logarithmic derivative. Previously, this connection was established under essentially stronger assumptions.
Extremely red quasars (ERQs) are an interesting sample of quasars in the Baryon Oscillation Spectroscopic Survey (BOSS) in the redshift range $2.0 - 3.4$ with extremely red colours of $i-W3\ge4.6$. Core ERQs have strong CIV emission lines with rest equivalent widths of $\ge 100$\AA. Many core ERQs also have CIV line profiles with peculiar boxy shapes which distinguish them from normal blue quasars. We show, using a combination of kernel density estimation and local outlier factor analyses in the space of $i-W3$ colour, CIV rest equivalent width and line kurtosis, that core ERQs likely represent a separate population rather than a smooth transition between normal blue quasars and the quasars in the tail of the colour-REW distribution. We apply our analyses to find new criteria for selecting ERQs in this 3D parameter space. Our final selection produces $133$ quasars, which are \emph{three} times more likely to have a visually verified CIV broad absorption line feature than the previous core ERQ sample. We further show that our newly selected sample consists of extreme objects in the intersection of the WISE AGN catalogue with the MILLIQUAS quasar catalogue in the ($W1-W2$, $W2-W3$) colour-colour space. This paper validates an improved selection method for red quasars which can be applied to future datasets such as the quasar catalogue from the Dark Energy Spectroscopic Instrument (DESI).
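As a rough illustration of such a KDE-plus-LOF selection (not the paper's pipeline, features, or thresholds), using standard scipy/scikit-learn calls on synthetic stand-in data for the three features:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.neighbors import LocalOutlierFactor

# Toy stand-in for the 3D feature space: columns mimic i-W3 colour,
# log10 CIV rest equivalent width, and line kurtosis.
rng = np.random.default_rng(0)
blue = rng.normal([2.0, 1.5, 3.0], [0.5, 0.2, 0.3], size=(5000, 3))
erq = rng.normal([5.5, 2.2, 3.8], [0.3, 0.15, 0.2], size=(100, 3))
X = np.vstack([blue, erq])

# Kernel density estimate: a separate population sits in a low-density
# region rather than on a smooth tail of the bulk distribution...
dens = gaussian_kde(X.T)(X.T)
# ...and the local outlier factor flags points inconsistent with the
# local density of the blue-quasar bulk.
lof_flags = LocalOutlierFactor(n_neighbors=50).fit_predict(X)  # -1 = outlier
candidates = (dens < np.quantile(dens, 0.02)) & (lof_flags == -1)
print(candidates.sum(), "candidate outliers selected")
```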
The problem of sorting with priced information was introduced by [Charikar, Fagin, Guruswami, Kleinberg, Raghavan, Sahai (CFGKRS), STOC 2000]. In this setting, different comparisons have different (potentially infinite) costs. The goal is to find a sorting algorithm with small competitive ratio, defined as the (worst-case) ratio of the algorithm's cost to the cost of the cheapest proof of the sorted order. The simple case of bichromatic sorting posed by [CFGKRS] remains open: we are given two sets $A$ and $B$ of total size $N$, and the cost of an $A-A$ comparison or a $B-B$ comparison is higher than that of an $A-B$ comparison. The goal is to sort $A \cup B$. An $\Omega(\log N)$ lower bound on the competitive ratio follows from unit-cost sorting. Note that this is a generalization of the famous nuts-and-bolts problem, where $A-A$ and $B-B$ comparisons have infinite cost and elements of $A$ and $B$ are guaranteed to alternate in the final sorted order. In this paper we give a randomized algorithm, InversionSort, with an almost-optimal competitive ratio of $O(\log^{3} N)$ with high probability. This is the first algorithm for bichromatic sorting with an $o(N)$ competitive ratio.
The anomalous Hall effect is mainly used to probe the magnetization orientation in ferromagnetic materials. A less explored aspect is the torque acting back on the magnetization, an effect that can be important at high currents. The spin-orbit coupling of the conduction electrons causes spin-up and spin-down electrons to scatter to opposite sides when a charge current flows in the sample. This is equivalent to a spin current with orientation and flow perpendicular to the driving charge current, leading to a non-equilibrium spin accumulation that exerts a torque on the bulk magnetization through the s-d exchange interaction. The symmetry of this torque is that of a uniaxial anisotropy along the driving current. The large screening currents generated by laser pulses in all-optical magnetic switching experiments suggest practical uses for this torque.
The low-complexity assumption in linear systems can often be expressed as rank deficiency in data matrices with generalized Hankel structure. This makes it possible to denoise the data by estimating the underlying structured low-rank matrix. However, standard low-rank approximation approaches are not guaranteed to perform well in estimating the noise-free matrix. In this paper, recent results in matrix denoising by singular value shrinkage are reviewed. A novel approach is proposed to solve the low-rank Hankel matrix denoising problem by using an iterative algorithm for structured low-rank approximation, modified with data-driven singular value shrinkage. It is shown numerically, in both the input-output trajectory denoising and the impulse response denoising problems, that the proposed method performs best among existing algorithms for low-rank matrix approximation and denoising in terms of estimating the noise-free matrix.
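A simplified sketch of this kind of iteration, with plain soft-thresholding standing in for the paper's data-driven shrinkage rule; the matrix sizes, signal, and iteration count are illustrative:

```python
import numpy as np

def hankel(y, L):
    """Build an L x (N-L+1) Hankel matrix from signal y."""
    N = len(y)
    return np.array([y[i:i + N - L + 1] for i in range(L)])

def dehankel(H):
    """Map a matrix back to a signal by averaging its anti-diagonals."""
    L, K = H.shape
    y, counts = np.zeros(L + K - 1), np.zeros(L + K - 1)
    for i in range(L):
        for j in range(K):
            y[i + j] += H[i, j]
            counts[i + j] += 1
    return y / counts

def denoise_step(y, L, rank, shrink):
    """One low-rank Hankel denoising step with singular value shrinkage."""
    U, s, Vt = np.linalg.svd(hankel(y, L), full_matrices=False)
    s_hat = np.maximum(s - shrink, 0.0)   # soft-threshold shrinkage (placeholder rule)
    s_hat[rank:] = 0.0                    # enforce the target rank
    return dehankel(U @ np.diag(s_hat) @ Vt)

# Toy example: noisy damped cosine, whose Hankel matrix has rank 2.
t = np.arange(100)
clean = np.exp(-0.02 * t) * np.cos(0.4 * t)
rng = np.random.default_rng(0)
y = clean + 0.1 * rng.standard_normal(100)
for _ in range(20):                       # alternating-projection-style iterations
    y = denoise_step(y, L=40, rank=2, shrink=0.5)
print("residual error:", np.linalg.norm(y - clean))
```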
CVSO 30 is a young, active, weak-line T Tauri star; it possibly hosts the only known planetary system with both a transiting hot-Jupiter and a cold-Jupiter candidate (CVSO 30 b and c). We analyzed archival ROSAT, Chandra, and XMM-Newton data to study the coronal emission in the system. According to our modeling, CVSO 30 shows a quiescent X-ray luminosity of about $8\times10^{29}$ erg/s. The X-ray absorbing column is consistent with interstellar absorption. XMM-Newton observed a flare, during which a transit of the candidate CVSO 30 b was expected, but no significant transit-induced variation in the X-ray flux is detectable. While the hot-Jupiter candidate CVSO 30 b has continuously been undergoing mass loss powered by the high-energy irradiation, we conclude that its evaporation lifetime is considerably longer than the estimated stellar age of 2.6 Myr.
We study the effective behavior of random, heterogeneous, anisotropic, second-order phase transition energies that arise in the study of pattern formation in physico-chemical systems. Specifically, we study the asymptotic behavior, as $\epsilon$ goes to zero, of random heterogeneous anisotropic functionals in which the second-order perturbation competes not only with a double-well potential but also with a possibly negative contribution given by the first-order term. We prove that, under suitable growth conditions and a stationarity assumption, the functionals $\Gamma$-converge almost surely to a surface energy whose density is independent of the space variable. Furthermore, we show that the limit surface density can be described via a suitable cell formula and is deterministic when ergodicity is assumed.
We present a control approach for autonomous vehicles based on deep reinforcement learning. A neural network agent is trained to map its estimated state to acceleration and steering commands given the objective of reaching a specific target state while considering detected obstacles. Learning is performed using state-of-the-art proximal policy optimization in combination with a simulated environment. Training from scratch takes five to nine hours. The resulting agent is evaluated within simulation and subsequently applied to control a full-size research vehicle. For this, the autonomous exploration of a parking lot is considered, including turning maneuvers and obstacle avoidance. Altogether, this work is among the first examples to successfully apply deep reinforcement learning to a real vehicle.
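The training here relies on proximal policy optimization, whose distinguishing ingredient is the clipped surrogate objective; a minimal numpy rendering of that objective follows (our sketch, not the authors' implementation):

```python
import numpy as np

def ppo_clip_loss(ratio, adv, eps=0.2):
    """PPO clipped surrogate objective for a batch of probability ratios
    r = pi_new(a|s) / pi_old(a|s) and advantage estimates."""
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv
    # Take the pessimistic (lower) objective and negate it into a loss,
    # which limits how far each update can move the policy.
    return -np.mean(np.minimum(unclipped, clipped))

ratio = np.array([0.9, 1.1, 1.4])   # illustrative batch
adv = np.array([1.0, -0.5, 2.0])
print(ppo_clip_loss(ratio, adv))
```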
Mobile augmented reality (MAR) blends a real scenario with overlaid virtual content, and has been envisioned as one of the ubiquitous interfaces to the Metaverse. Due to the limited computing power and battery life of MAR devices, it is common to offload computation tasks to edge or cloud servers in close proximity. However, existing offloading solutions developed for MAR tasks suffer from high migration overhead, poor scalability, and short-sightedness when applied to provisioning multi-user MAR services. To address these issues, a MAR service-oriented task offloading scheme is designed and evaluated in edge-cloud computing networks. Specifically, the task interdependency of MAR applications is first analyzed and modeled using directed acyclic graphs. Then, we propose a look-ahead offloading scheme based on a modified Monte Carlo tree (MMCT) search, which runs several multi-step executions in advance to estimate the long-term effect of an immediate action. Experiment results show that the proposed offloading scheme can effectively improve the quality of service (QoS) in provisioning multi-user MAR services, compared to four benchmark schemes. Furthermore, the proposed solution is shown to be stable and suitable for applications in highly volatile environments.
Pedestrian detection has achieved great improvements with the help of Convolutional Neural Networks (CNNs). A CNN can learn high-level features from input images, but the limited spatial resolution of CNN feature channels (feature maps) may cause a loss of information, which is especially harmful to small instances. In this paper, we propose a new pedestrian detection framework, which extends the successful RPN+BF framework to combine handcrafted features and CNN features. RoI-pooling is used to extract features from both handcrafted channels (e.g. HOG+LUV, CheckerBoards or RotatedFilters) and CNN channels. Since handcrafted channels always have higher spatial resolution than CNN channels, we apply RoI-pooling with a larger output resolution to the handcrafted channels to retain more detailed information. Our ablation experiments show that the developed handcrafted features can reach better detection accuracy than the CNN features extracted from the VGG-16 net, and that a performance gain can be achieved by combining them. Experimental results on the Caltech pedestrian dataset, with both the original and the improved annotations, demonstrate the effectiveness of the proposed approach. When a more advanced RPN is used in our framework, our approach can be further improved and achieves competitive results on both benchmarks.
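A naive sketch of RoI pooling at two different output resolutions, the mechanism used above to preserve the extra detail in handcrafted channels; the feature maps, RoIs, and sizes are illustrative:

```python
import numpy as np

def roi_max_pool(feat, roi, out_size):
    """Naive RoI max-pooling: divide the RoI into an out_size x out_size
    grid and take the max inside each cell. feat is one (H, W) channel."""
    x0, y0, x1, y1 = roi
    ys = np.linspace(y0, y1, out_size + 1).astype(int)
    xs = np.linspace(x0, x1, out_size + 1).astype(int)
    out = np.zeros((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            cell = feat[ys[i]:max(ys[i + 1], ys[i] + 1),
                        xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[i, j] = cell.max()
    return out

rng = np.random.default_rng(0)
hog_channel = rng.random((128, 96))   # stand-in for a higher-resolution handcrafted channel
cnn_channel = rng.random((16, 12))    # stand-in for a coarser CNN feature map
roi_hi, roi_lo = (8, 16, 56, 112), (1, 2, 7, 14)  # same region at two scales
# Larger output resolution for the handcrafted channel keeps more detail.
f_hand = roi_max_pool(hog_channel, roi_hi, out_size=14)
f_cnn = roi_max_pool(cnn_channel, roi_lo, out_size=7)
print(f_hand.shape, f_cnn.shape)
```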
In this paper, we propose a spectral method for deriving functions that are jointly smooth on multiple observed manifolds. This allows us to register measurements of the same phenomenon by heterogeneous sensors, and to reject sensor-specific noise. Our method is unsupervised and primarily consists of two steps. First, using kernels, we obtain a subspace spanning smooth functions on each separate manifold. Then, we apply a spectral method to the obtained subspaces and discover functions that are jointly smooth on all manifolds. We show analytically that our method is guaranteed to provide a set of orthogonal functions that are as jointly smooth as possible, ordered by increasing Dirichlet energy from the smoothest to the least smooth. In addition, we show that the extracted functions can be efficiently extended to unseen data using the Nystr\"{o}m method. We demonstrate the proposed method on both simulated and real measured data and compare the results to nonlinear variants of the seminal Canonical Correlation Analysis (CCA). Particularly, we show superior results for sleep stage identification. In addition, we show how the proposed method can be leveraged for finding minimal realizations of parameter spaces of nonlinear dynamical systems.
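One plausible rendering of the two steps just described, assuming Gaussian kernels and an SVD of the stacked per-sensor bases (the kernel choice, bandwidths, and subspace sizes are our assumptions, not the paper's):

```python
import numpy as np

def smooth_basis(X, n_funcs):
    """Step 1: leading eigenvectors of a Gaussian kernel on one sensor's
    data span a subspace of smooth functions on that manifold."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D2 / np.median(D2))           # median-heuristic bandwidth
    vals, vecs = np.linalg.eigh(K)
    Q, _ = np.linalg.qr(vecs[:, -n_funcs:])   # orthonormal basis, smoothest directions
    return Q

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 300)            # shared latent variable
# Two heterogeneous "sensors" observing the same phenomenon, with noise.
X1 = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.standard_normal((300, 2))
X2 = np.c_[t, t ** 2] + 0.05 * rng.standard_normal((300, 2))

Q1, Q2 = smooth_basis(X1, 20), smooth_basis(X2, 20)
# Step 2: SVD of the concatenated bases; left singular vectors with
# singular values near sqrt(2) lie in both subspaces, i.e. are jointly smooth.
U, s, _ = np.linalg.svd(np.hstack([Q1, Q2]), full_matrices=False)
jointly_smooth = U[:, :5]
print("top singular values:", np.round(s[:5], 3))
```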
Given an initial matching and a policy objective on the distribution of agent types to institutions, we study the existence of a mechanism that weakly improves the distributional objective and satisfies constrained efficiency, individual rationality, and strategy-proofness. We show that such a mechanism need not exist in general. We introduce a new notion of discrete concavity, which we call pseudo M$^{\natural}$-concavity, and construct a mechanism with the desirable properties when the distributional objective satisfies this notion. We provide several practically relevant distributional objectives that are pseudo M$^{\natural}$-concave.
In this paper we study the Nevanlinna-Pick matrix interpolation problem in the Carath\'eodory class with infinite data (in both the nondegenerate and degenerate cases). We develop the Sz\H{o}kefalvi-Nagy and Kor\'anyi operator approach to obtain an analytic description of all solutions of the problem. Simple necessary and sufficient conditions for the determinacy of the problem are given.
A new trigger for the NEMO Phase 2 tower, based on the time differences of the PMT hits, has been studied. The trigger uses only a fixed number of PMT hits in a chosen time window. The background trigger rate is drastically reduced by requiring hits from different PMTs. An 87% trigger efficiency was estimated by Monte Carlo simulation for muon tracks with at least 5 PMT hits. The trigger rate estimated by Monte Carlo simulation was also measured on raw data; the results from both the simulations and the raw data are reported.
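A minimal sketch of a trigger of this kind: require a fixed number of hits from distinct PMTs inside a sliding time window. The window length, multiplicity, and hit list below are illustrative parameters, not the detector's:

```python
def trigger(hits, window_ns=200.0, n_required=5):
    """Fire when at least n_required hits from *distinct* PMTs fall inside
    a sliding time window. `hits` is a time-sorted list of (time_ns, pmt_id)."""
    lo = 0
    for hi in range(len(hits)):
        # Shrink the window from the left until it spans <= window_ns.
        while hits[hi][0] - hits[lo][0] > window_ns:
            lo += 1
        pmts = {pmt for _, pmt in hits[lo:hi + 1]}
        if len(pmts) >= n_required:   # distinct-PMT requirement suppresses background
            return hits[lo][0]        # trigger time
    return None

hits = [(0.0, 1), (10.0, 2), (15.0, 2), (40.0, 3), (90.0, 4), (120.0, 5)]
print(trigger(hits))                  # fires: 5 distinct PMTs within 200 ns
```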
We show that the dispersionless limits of the Pfaff-KP (also known as the DKP or Pfaff lattice) and the Pfaff-Toda hierarchies admit a reformulation through elliptic functions. In the elliptic form they look like natural elliptic deformations of the KP and 2D Toda hierarchy respectively.
A two-type version of the frog model on $\mathbb{Z}^d$ is formulated, where active type $i$ particles move according to lazy random walks with probability $p_i$ of jumping in each time step ($i=1,2$). Each site is independently assigned a random number of particles. At time 0, the particles at the origin are activated and assigned type 1 and the particles at one other site are activated and assigned type 2, while all other particles are sleeping. When an active type $i$ particle moves to a new site, any sleeping particles there are activated and assigned type $i$, with an arbitrary tie-breaker deciding the type if the site is hit by particles of both types in the same time step. We show that the event $G_i$ that type $i$ activates infinitely many particles has positive probability for all $p_1,p_2\in(0,1]$ ($i=1,2$). Furthermore, if $p_1=p_2$, then the types can coexist in the sense that $\mathbb{P}(G_1\cap G_2)>0$. We also formulate several open problems. For instance, we conjecture that, when the initial number of particles per site has a heavy tail, the types can coexist also when $p_1\neq p_2$.
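A toy simulation of the two-type dynamics just defined, simplified to one dimension with exactly one sleeping particle per site (the model above allows random counts per site); a random update order stands in for the arbitrary tie-breaker:

```python
import random

def frog_model(p1=0.7, p2=0.4, radius=300, steps=1500, seed=0):
    """Toy two-type frog model on Z. Type-1 starts at the origin, type-2
    at site 1; a type-i walker jumps with probability p_i per time step.
    Returns how many particles each type has activated."""
    rng = random.Random(seed)
    sleeping = set(range(-radius, radius + 1)) - {0, 1}
    active = [(0, 1), (1, 2)]          # (position, type)
    counts = {1: 1, 2: 1}
    p = {1: p1, 2: p2}
    for _ in range(steps):
        moved = []
        for x, t in active:
            if rng.random() < p[t]:    # lazy walk: jump with probability p_i
                x += rng.choice((-1, 1))
            moved.append((x, t))
        rng.shuffle(moved)             # random order acts as the tie-breaker
        active = []
        for x, t in moved:
            active.append((x, t))
            if x in sleeping:          # wake the sleeper; it adopts type t
                sleeping.remove(x)
                counts[t] += 1
                active.append((x, t))  # the awakened particle joins the walk
    return counts

print(frog_model())
```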
Density-functional theory for superfluid systems is developed in the framework of the functional renormalization group based on the effective action formalism. We introduce the effective action for the particle-number and nonlocal pairing densities and demonstrate that the Hohenberg-Kohn theorem for superfluid systems is established in terms of the effective action. The flow equation for the effective action is then derived, where the flow parameter runs from $0$ to $1$, corresponding to the non-interacting and interacting systems, respectively. From the flow equation and the variational equation satisfied by the equilibrium density, we obtain the exact expression for the Kohn-Sham potential, generalized to include the pairing potentials. The resulting Kohn-Sham potential has the nice feature that it expresses the external, Hartree, pairing, and exchange-correlation terms through separate microscopic formulae. We show that our Kohn-Sham potential reproduces the ground-state energy of the Hartree-Fock-Bogoliubov theory when correlations are neglected. An advantage of our exact formalism is that it provides ways to systematically improve the correlation part.
In this paper we first provide a general inclusion formula for the Dini-Hadamard epsilon-subdifferential of the difference of two functions and show that it holds with equality when the functions are directionally approximately starshaped at a given point and a weak topological assumption is fulfilled. To this end we give a useful characterization of the Dini-Hadamard epsilon-subdifferential by means of sponges. The results are then employed to formulate optimality conditions, via the Dini-Hadamard subdifferential, for cone-constrained optimization problems whose objective is a difference of two functions.
Medical datasets usually suffer from scarcity and class imbalance, and annotating large datasets for semantic segmentation of medical lesions requires domain knowledge and is time-consuming. In this paper, we propose a new object-blend method (Soft-CP for short) that combines the Copy-Paste augmentation method for offline semantic segmentation of medical lesions while preserving correct edge information around the lesion, addressing the issues mentioned above. We demonstrate the method's validity on several datasets across different imaging modalities. In our experiments on the KiTS19 [2] dataset, Soft-CP outperforms existing medical lesion synthesis approaches, providing gains of +26.5% DSC in the low-data regime (10% of the data) and +10.2% DSC in the high-data regime (all of the data). In the offline training data, the ratio of real images to synthetic images is 3:1.
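Our reading of a "soft" copy-paste step, sketched below: the pasted lesion is alpha-blended through a blurred mask so that edge context stays plausible, while labels remain hard inside the lesion. This is an interpretation for illustration, not the authors' implementation:

```python
import numpy as np

def box_blur(mask, k=7, iters=3):
    """Cheap separable box blur to soften a binary mask's edges."""
    kernel = np.ones(k) / k
    out = mask.astype(float)
    for _ in range(iters):
        out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, out)
        out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

def soft_copy_paste(target_img, target_seg, src_img, src_mask, label, top_left):
    """Paste a lesion patch into a target image with a softened boundary.
    Inside the lesion alpha is close to 1; it decays smoothly at the edge."""
    y, x = top_left
    h, w = src_mask.shape
    alpha = box_blur(src_mask)                          # blending weights in [0, 1]
    region = target_img[y:y + h, x:x + w]
    target_img[y:y + h, x:x + w] = alpha * src_img + (1 - alpha) * region
    target_seg[y:y + h, x:x + w][src_mask > 0] = label  # hard label inside the lesion
    return target_img, target_seg

# Toy single-channel example (e.g. one CT slice); all sizes illustrative.
rng = np.random.default_rng(0)
img, seg = rng.random((64, 64)), np.zeros((64, 64), dtype=int)
lesion, m = rng.random((16, 16)), np.zeros((16, 16))
m[4:12, 4:12] = 1
soft_copy_paste(img, seg, lesion, m, label=1, top_left=(10, 10))
```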
The Optimal Filtering (OF) reconstruction of the sampled signals from a particle detector such as a liquid ionization calorimeter relies on knowledge of the normalized pulse shapes. This knowledge is always imprecise, since there are residual differences between the true ionization pulse shapes and the predicted ones, whatever method is used to model or fit the particle-induced signals. The systematic error introduced by the residuals on the signal amplitude estimate is analyzed, as well as the effect on the quality factor provided by the OF reconstruction. An analysis method to evaluate the residuals from a sample of signals is developed and tested with a simulation tool. The correction obtained is shown to preserve the original amplitude normalization, while restoring the expected $\chi^2$-like behavior of the quality factor.
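For context, a minimal numpy sketch of OF amplitude reconstruction and its $\chi^2$-like quality factor; a mismatch between the assumed pulse shape and the true one is exactly the residual effect analyzed above. The pulse shape and noise model here are illustrative:

```python
import numpy as np

def of_coefficients(g, V):
    """Optimal Filtering weights for amplitude estimation: minimize the
    noise variance a^T V a subject to the normalization a . g = 1."""
    Vinv_g = np.linalg.solve(V, g)
    return Vinv_g / (g @ Vinv_g)

def reconstruct(samples, g, V):
    a = of_coefficients(g, V)
    amp = a @ samples                    # amplitude estimate
    resid = samples - amp * g
    q = resid @ np.linalg.solve(V, resid)  # chi^2-like quality factor
    return amp, q

# Toy pulse: 5 samples of a normalized pulse shape, white noise.
g = np.array([0.0, 0.6, 1.0, 0.7, 0.3])
V = np.eye(5) * 0.01
rng = np.random.default_rng(0)
true_amp = 3.0
samples = true_amp * g + rng.multivariate_normal(np.zeros(5), V)
print(reconstruct(samples, g, V))
```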
To support future 6G mobile applications, the mobile edge computing (MEC) network needs to be jointly optimized for computing, pushing, and caching to reduce transmission load and computation cost. To achieve this, we propose a framework based on deep reinforcement learning that enables the dynamic orchestration of these three activities for the MEC network. The framework can implicitly predict user future requests using deep networks and push or cache the appropriate content to enhance performance. To address the curse of dimensionality resulting from considering three activities collectively, we adopt the soft actor-critic reinforcement learning in continuous space and design the action quantization and correction specifically to fit the discrete optimization problem. We conduct simulations in a single-user single-server MEC network setting and demonstrate that the proposed framework effectively decreases both transmission load and computing cost under various configurations of cache size and tolerable service delay.
Entropy generation in a chemical reaction is analyzed without using the general formalism of non-equilibrium thermodynamics, at a level adequate for advanced undergraduates. In a first approach to the problem, the phenomenological kinetic equation of an elementary first-order reaction is used to show that entropy production is always positive. A second approach assumes that the reaction is near equilibrium to prove that the entropy generated is always greater than zero, without any reference to the kinetics of the reaction. Finally, it is shown that entropy generation is related to fluctuations in the number of particles at equilibrium, i.e., it is associated with a microscopic process.
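A compact version of the first argument, written with the affinity of an elementary reaction $A \rightleftharpoons B$ under ideal mass-action kinetics (our rendering of the standard route to the same inequality; the paper works directly from the kinetic equation):

```latex
% Entropy production for A <-> B with forward rate r_+ = k_+ c_A and
% backward rate r_- = k_- c_B; for ideal mixtures the affinity is
% \mathcal{A} = RT \ln(r_+/r_-), and d\xi/dt = r_+ - r_-.
\begin{equation*}
  \frac{d_i S}{dt} \;=\; \frac{\mathcal{A}}{T}\,\frac{d\xi}{dt}
  \;=\; R\,(r_+ - r_-)\,\ln\frac{r_+}{r_-} \;\ge\; 0 ,
\end{equation*}
% since (x - y)\ln(x/y) >= 0 for all positive x, y: both factors have the
% same sign. Equality holds only at equilibrium, r_+ = r_- (detailed balance).
```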
We study the dynamics of clusters of Active Brownian Disks generated by Motility-Induced Phase Separation, applying an algorithm that we devised to track cluster trajectories. We identify an aggregation mechanism that goes beyond Ostwald ripening yet also yields a coarsening exponent $z=3$. Active clusters of mass $M$ self-propel with enhanced diffusivity $D\sim$ Pe$^2/\sqrt{M}$. Their fast motion drives aggregation into large fractal structures, which are patchworks of diverse hexatic orders and coexist with regular, orientationally uniform, smaller ones. To bring out the impact of activity, we perform a comparative study of a passive system, which evidences major differences with the active case.
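A minimal stand-in for the cluster-tracking step: greedy nearest-centroid linking between consecutive frames. The actual tracking algorithm presumably handles merges, splits, and ambiguities more carefully; this sketch only conveys the basic idea:

```python
import numpy as np

def link_clusters(centroids_t, centroids_t1, max_disp):
    """Greedily match each cluster centroid at time t to the nearest
    unclaimed centroid at time t+1, within a displacement cutoff."""
    links, used = {}, set()
    for i, c in enumerate(centroids_t):
        d = np.linalg.norm(centroids_t1 - c, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_disp and j not in used:
            links[i] = j
            used.add(j)
    return links

a = np.array([[0.0, 0.0], [5.0, 5.0]])           # centroids at time t
b = np.array([[0.3, -0.1], [5.2, 4.9], [9.0, 9.0]])  # centroids at time t+1
print(link_clusters(a, b, max_disp=1.0))          # {0: 0, 1: 1}
```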
We study $2d$ conformal field theory (CFT) at large central charge $c$ and finite temperature $T$ with heavy operators inserted at spatial infinity. The heavy operators produce a nearly thermalized steady state at an effective temperature $T_{\rm eff}\leq T$. Under some assumptions, we find an effective Schwarzian-like description of these states and, when they exist, their gravity duals. We use this description to compute the Lyapunov exponents for light operators to be $2\pi T_{\rm eff}$, so that scrambling is suppressed by the heavy insertions.
We systematically study how the integrality of the conformal characters shapes the space of fermionic rational conformal field theories in two dimensions. The integrality suggests that conformal characters on the torus with a given choice of spin structures should be invariant under a principal congruence subgroup of $\mathrm{PSL}(2,\mathbb{Z})$. This invariance strongly constrains the possible values of the central charge as well as the conformal weights in both Neveu-Schwarz and Ramond sectors, improving the conventional holomorphic modular bootstrap method in a significant manner. This allows us to make substantial progress on the classification of fermionic rational conformal field theories with fewer than five independent characters.