I review the development of numerical evolution codes for general relativity based upon the characteristic initial value problem. Progress in characteristic evolution is traced from the early stage of 1D feasibility studies to 2D axisymmetric codes that accurately simulate the oscillations and gravitational collapse of relativistic stars and to current 3D codes that provide pieces of a binary black hole spacetime. Cauchy codes have now been successful at simulating all aspects of the binary black hole problem inside an artificially constructed outer boundary. A prime application of characteristic evolution is to extend such simulations to null infinity where the waveform from the binary inspiral and merger can be unambiguously computed. This has now been accomplished by Cauchy-characteristic extraction, where data for the characteristic evolution is supplied by Cauchy data on an extraction worldtube inside the artificial outer boundary. The ultimate application of characteristic evolution is to eliminate the role of this outer boundary by constructing a global solution via Cauchy-characteristic matching. Progress in this direction is discussed.
Let $K$ be a centrally symmetric spherical and simplicial polytope, whose vertices form a $\frac{1}{4n}$-net in the unit sphere in $\mathbb{R}^n$. We prove a uniform lower bound on the norms of all hyperplane projections $P: X \to X$, where $X$ is the $n$-dimensional normed space with the unit ball $K$. The estimate is given in terms of the determinant function of vertices and faces of $K$. In particular, if $N \geq n^{4n}$ and $K = \operatorname{conv} \{ \pm x_1, \pm x_2, \ldots, \pm x_N \}$, where $x_1, x_2, \ldots, x_N$ are independent random points distributed uniformly on the unit sphere, then every hyperplane projection $P: X \to X$ satisfies the inequality $\|P\|_X \geq 1+c_nN^{-(2n^2+4n+6)}$ (for some explicit constant $c_n$), with probability at least $1 - \frac{3}{N}$.
Recently, there has been increased interest in studying quantum entanglement and quantum coherence. Since both of these properties are attributed to the existence of quantum superposition, it would be useful to determine whether some type of correlation exists between them. Hence, the purpose of this paper is to explore this correlation in several systems with different types of anisotropy. The focus will be on XY spin chains with the Dzyaloshinskii-Moriya interaction, and the nature of this relation will be explored using the quantum renormalization group method.
In this short note, we describe the so-called homogeneous involution on finite-dimensional graded-division algebras over an algebraically closed field. We also compute their graded polynomial identities with involution. As pointed out by L. Fonseca and T. de Mello, a homogeneous involution naturally appears when dealing with graded polynomial identities and a compatible involution.
It is sometimes argued that observation of tensor modes from inflation would provide the first evidence for quantum gravity. However, in the usual inflationary formalism, the scalar modes also involve quantised metric perturbations. We consider the issue in a semiclassical setup in which only matter is quantised, and spacetime is classical. We assume that the state collapses on a spacelike hypersurface, and find that the spectrum of scalar perturbations depends on the hypersurface. For reasonable choices, we can recover the usual inflationary predictions for scalar perturbations in minimally coupled single-field models. In models where non-minimal coupling to gravity is important and the field value is sub-Planckian, we do not get a nearly scale-invariant spectrum of scalar perturbations. As gravitational waves are only produced at second order, the tensor-to-scalar ratio is negligible. We conclude that detection of inflationary gravitational waves would indeed be needed to have observational evidence of quantisation of gravity.
We demonstrate a one-dimensional optical lattice clock with ultracold 171Yb atoms, which is free from the linear Zeeman effect. The absolute frequency of the 1S0(F = 1/2) - 3P0(F = 1/2) clock transition in 171Yb is determined to be 518 295 836 590 864(28) Hz with respect to the SI second.
We address the nature and role of microstructural inhomogeneities in iron chalcogenide superconducting crystals of FeTe0.65Se0.35 and their correlation with the transport properties of this system. The presented data demonstrate that chemical disorder originating from the kinetics of the crystal growth process significantly influences the superconducting properties of the Fe-Te-Se system. Transport measurements of the transition temperature and critical current density performed on microscopic bridges allow us to deduce the local properties of a superconductor with microstructural inhomogeneities, and significant differences were noted. The variations observed in the local properties are explained as a consequence of weak superconducting links existing in the studied crystals. The results confirm that an inhomogeneous spatial distribution of ions, together with small nanoscale regions of hexagonal symmetry associated with nanoscale phase separation, also seems to enhance the superconductivity in this system, as reflected in the values of the critical current density. Magnetic measurements confirm the conclusions drawn from the transport measurements.
We show that for most pairs of surfaces, there exists a finite subgraph of the flip graph of the first surface such that any injective homomorphism of this finite subgraph into the flip graph of the second surface can be extended uniquely to an injective homomorphism between the two flip graphs. Combined with a result of Aramayona-Koberda-Parlier, this implies that any such injective homomorphism of this finite subgraph is induced by an embedding of the surfaces. We also include images of several flip graphs in an appendix.
We present the results of a detailed analysis of the XMM-Newton observation of the galaxy cluster Abell 3921. The X-ray morphology of the cluster is elliptical, with the centroid offset from the brightest cluster galaxy by 17 arcsec, and with a pronounced extension toward the NW. Subtraction of a 2D beta model fit to the main cluster emission reveals a large-scale, irregular residual structure in the direction of the extension, containing both diffuse emission from the intracluster medium and extended emission from the second- and third-brightest cluster galaxies (BG2 and BG3). The greatest concentration of galaxies in the subcluster lies at the extreme northern edge of the residual. The cluster exhibits a remarkable temperature structure, in particular a bar of significantly hotter gas, oriented SE-NW and stretching from the centre of the cluster towards BG2 and BG3. Our detailed study of the morphological and thermal structure points to an off-axis merger between a main cluster and a less massive galaxy cluster infalling from the SE. From comparison of the temperature map with numerical simulations, and with independent calculations based on simple physical assumptions, we conclude that the merging event is ~0.5 Gyr old. The cluster is thus perhaps the best X-ray observed candidate so far for an intermediate mass ratio, moderate impact parameter merger.
We study the possibility of probing dark energy behaviour using gravitational wave experiments like LISA and Advanced LIGO. Using two popular parameterizations for dark energy equation of state, we show that with current sensitivities of LISA and Advanced LIGO to detect the stochastic gravitational waves, it is possible to probe a large section of parameter space for the dark energy equation of state which is allowed by present cosmological observations.
We present our analysis of the photon structure functions at small Bjorken variable x in the framework of holographic QCD. In this kinematic region, a photon can fluctuate into vector mesons and behaves like a hadron rather than a pointlike particle. Assuming Pomeron exchange dominance, the dominant hadronic contribution to the structure functions is computed by convoluting the probe and target photon density distributions, obtained from the wave functions of the U(1) vector field in the five-dimensional AdS space, with the Brower-Polchinski-Strassler-Tan Pomeron exchange kernel. Our calculations are in agreement with both the experimental data from the OPAL collaboration at LEP and those calculated from the parton distribution functions of the photon proposed by Gl\"uck, Reya, and Schienbein. The predictions presented here will be tested at future linear colliders, such as the planned International Linear Collider.
The precise value of the neutron lifetime is of fundamental importance to particle physics and cosmology. The neutron lifetime recently obtained, 878.5 +/- 0.7 (stat) +/- 0.3 (sys) s, is the most accurate one to date. The new result for the neutron lifetime differs from the world average value by 6.5 standard deviations. The impact of the new result on tests of the Standard Model and on the data analysis of the primordial nucleosynthesis model is scrutinized.
We present a realization of quantized charge pumping. A lateral quantum dot is defined by metallic split gates in a GaAs/AlGaAs heterostructure. A surface acoustic wave whose wavelength is twice the dot length is used to pump single electrons through the dot at a frequency f = 3 GHz. The pumped current shows a regular pattern of quantization at values I = nef over a range of gate voltage and wave amplitude settings. The observed values of n, the number of electrons transported per wave cycle, are determined by the number of electronic states in the quantum dot brought into resonance with the Fermi level of the electron reservoirs during the pumping cycle.
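As a quick numerical check of the quantization condition I = nef described above (an editorial illustration, not part of the original abstract), pumping one electron per wave cycle at f = 3 GHz corresponds to a plateau current of roughly 0.48 nA:

```python
from scipy.constants import elementary_charge as e  # charge per electron, in coulombs

f = 3e9                      # SAW pumping frequency quoted in the abstract, Hz
for n in range(1, 4):        # electrons transported per wave cycle
    print(f"n = {n}: I = n*e*f = {n * e * f * 1e9:.3f} nA")
# n = 1 gives ~0.481 nA, i.e. the first quantized plateau I = ef
```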
Despite their importance in a wide variety of applications, the estimation of ionization cross sections for large molecules continues to present challenges for both experiment and theory. Machine learning algorithms have been shown to be an effective mechanism for estimating cross section data for atomic targets and a select number of molecular targets. We present an efficient machine learning model for predicting ionization cross sections for a broad array of molecular targets. Our model is a 3-layer neural network that is trained using published experimental datasets. There is minimal input to the network, making it widely applicable. We show that with training on as few as 10 molecular datasets, the network is able to predict the experimental cross sections of additional molecules with an accuracy similar to experimental uncertainties in existing data. As the number of training molecular datasets increased, the network's predictions became more accurate and, in the worst case, were within 30% of accepted experimental values. In many cases, predictions were within 10% of accepted values. Using a network trained on datasets for 25 different molecules, we present predictions for an additional 27 molecules, including alkanes, alkenes, molecules with ring structures, and DNA nucleotide bases.
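A minimal sketch of the kind of small feed-forward regressor described above. The paper's exact architecture, descriptors, and training data are not specified here, so the feature set and hidden-layer sizes below are purely hypothetical stand-ins:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical inputs: a few molecular descriptors plus incident electron energy;
# target: total ionization cross section (arbitrary units). Real training would
# use the published experimental datasets mentioned in the abstract.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))            # [n_electrons, ionization_energy, polarizability, E_incident]
y = X[:, 3] * np.exp(-X[:, 3]) * X[:, 0]  # toy stand-in for a cross-section curve

model = MLPRegressor(hidden_layer_sizes=(32, 32, 32),  # "3-layer" network, sizes assumed
                     activation="relu", max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("held-out MAE:", np.mean(np.abs(model.predict(X[400:]) - y[400:])))
```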
The rearrangement step of nuclear fission occurs within 0.17 yoctoseconds, in a new state of nuclear matter characterized by the formation of closed shells of nucleons. The determination of its lifetime is now based on the prompt neutron emission law. The width of isotopic distributions measures the uncertainty in the neutron number of the fragments. Magic mass numbers, 82 and 126, play a major role in the mass distributions. Arguments are presented in favour of an all-neutron state. The boson field responsible for the new collective interaction has to be searched for.
We show that gravitational interactions between massless thermal modes and a nucleating Coleman-de Luccia bubble may lead to efficient decoherence and strongly suppress metastable vacuum decay for bubbles that are small compared to the Hubble radius. The vacuum decay rate including gravity and thermal photon interactions has the exponential scaling $\Gamma\sim\Gamma_{CDL}^{2}$, where $\Gamma_{CDL}$ is the Coleman-de Luccia decay rate neglecting photon interactions. For the lowest metastable initial state an efficient quantum Zeno effect occurs due to thermal radiation at temperatures as low as the de Sitter temperature. This strong decoherence effect is a consequence of gravitational interactions with light external modes. We argue that efficient decoherence does not occur for the case of Hawking-Moss decay. This observation is consistent with requirements set by Poincare recurrence in de Sitter space.
I will present a method for providing initial guesses to a linear solver for systems with multiple shifts. This can also be extended to the case of multiple sources each with a different shift.
Many re-ranking strategies in search systems rely on stochastic ranking policies, encoded as Doubly-Stochastic (DS) matrices, that satisfy desired ranking constraints in expectation, e.g., Fairness of Exposure (FOE). These strategies are generally two-stage pipelines: \emph{i)} an offline re-ranking policy construction step and \emph{ii)} an online ranking sampling step. Building a re-ranking policy requires repeatedly solving a constrained optimization problem, one for each issued query. Thus, the optimization procedure must be recomputed for any new/unseen query. Regarding sampling, the Birkhoff-von-Neumann decomposition (BvND) is the favored approach to draw rankings from any DS-based policy. However, the BvND is too costly to compute online, and storing it precomputed as a sampling solution is memory-consuming, as it can grow as $\mathcal{O}(N\, n^2)$ for $N$ queries and $n$ documents. This paper offers a novel, fast, lightweight way to predict fair stochastic re-ranking policies: Constrained Meta-Optimal Transport (CoMOT). This method fits a neural network shared across queries, in the manner of a learning-to-rank system. We also introduce Gumbel-Matching Sampling (GumMS), an online approach for sampling from DS-based policies. Our proposed pipeline, CoMOT + GumMS, only needs to store the parameters of a single model, and it generalizes to unseen queries. We empirically evaluated our pipeline on the TREC 2019 and 2020 datasets under FOE constraints. Our experiments show that CoMOT rapidly predicts fair re-ranking policies on held-out data, with a speed-up proportional to the average number of documents per query. It also displays fairness and ranking performance similar to the original optimization-based policy. Furthermore, we empirically validate the effectiveness of GumMS in approximating DS-based policies in expectation.
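A minimal sketch of the Gumbel-perturbed matching idea that GumMS is named after. The paper's exact formulation is not given here, so the noise model and assignment layout below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gumbel_matching_sample(P, rng, eps=1e-12):
    """Draw one ranking (a permutation) from a doubly-stochastic policy P.

    Perturb log P with i.i.d. Gumbel noise and solve a maximum-weight matching;
    here rows index ranks and columns index documents (hypothetical layout)."""
    gumbel = -np.log(-np.log(rng.uniform(size=P.shape)))
    rows, cols = linear_sum_assignment(np.log(P + eps) + gumbel, maximize=True)
    return cols  # cols[i] = document placed at rank i

rng = np.random.default_rng(0)
P = np.full((4, 4), 0.25)  # uniform DS policy over 4 documents
print(gumbel_matching_sample(P, rng))
```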
It is well-known that the action of a quantum channel on a state can be represented, using an auxiliary space, as the partial trace of an associated bipartite state. Recently, it was observed that for the bipartite state associated with the optimal average input of the channel, the entanglement of formation is simply the entropy of the reduced density matrix minus the Holevo capacity. It is natural to ask if every bipartite state can be associated with some channel in this way. We show that the answer is negative.
Possible saturation of betatron acceleration of dust particles behind strong shock fronts from supernovae is considered. It is argued that the efficiency of nonthermal dust destruction should be substantially lower than the value estimated from a traditional description of betatron acceleration of dust grains behind radiative shock waves. The inhibition of nonthermal destruction can be connected with the mirror instability that develops in the dust component behind strong shocks with velocities exceeding three times the Alfv\'en speed. The instability develops on characteristic time scales much shorter than the age of a supernova remnant, thus its influence on the efficiency of dust destruction can be substantial: in the range of shock velocities 100 km s$^{-1}<v_s<300$ km s$^{-1}$ the destruction efficiency can be an order of magnitude lower than normally estimated.
Two-photon photopolymerization of UV curing resins is an attractive method for the fabrication of microscopic transparent objects with size in the tens of micrometers range. We have been using this method to produce three-dimensional structures for optical micromanipulation, in an optical system based on a femtosecond laser. By carefully adjusting the laser power and the exposure time we were able to create micro-objects with well-defined 3D features and with resolution below the diffraction limit of light. We discuss the performance and capabilities of a microfabrication system, with some examples of its products.
The present era has witnessed a new dawn in technological innovation with the entry of nanomaterials into industry and into products used in day-to-day life, creating a huge possibility of their ending up in the food chain. Several studies in the past have highlighted the toxicity of nanomaterials due to their size. However, we cannot stop the technological advancements provided by nanomaterials fulfilling human needs, but we can find a solution to their toxicity for a better future and a safe environment. In this study, we propose capping of nanomaterials to reduce toxicity without compromising their functionality. Capping is used to passivate nanomaterials, but the same capping also helps reduce surface reactivity, leading to low toxicity. We studied phytotoxicity in the presence of one of the most extensively used metal nanoparticles (copper nanoparticles) on Eleusine coracana G. (finger millet) and Paspalum scrobiculatum L. (Kodo millet). Copper nanoparticles were synthesized by hydrometallurgical methods. Ethylenediaminetetraacetic acid (EDTA) was used to cap the nanoparticles during the synthesis. In vitro studies showed that the toxicity of copper nanoparticles is significantly reduced after capping. Antibacterial activity studies showed no change in the efficacy of copper nanoparticles after capping. This study highlights the use of capping to reduce the toxicity of nanomaterials without sacrificing their required applicability.
Despite rapid progress in increasing the language coverage of automatic speech recognition, the field is still far from covering all languages with a known writing script. Recent work showed promising results with a zero-shot approach requiring only a small amount of text data; however, accuracy heavily depends on the quality of the phonemizer used, which is often weak for unseen languages. In this paper, we present MMS Zero-shot, a conceptually simpler approach based on romanization and an acoustic model trained on data in 1,078 different languages, or three orders of magnitude more than prior art. MMS Zero-shot reduces the average character error rate by a relative 46% over 100 unseen languages compared to the best previous work. Moreover, the error rate of our approach is only 2.5x higher compared to in-domain supervised baselines, while our approach uses no labeled data for the evaluation languages at all.
We present the results of ground-based imaging spectroscopy of the [Ne II] 12.8 micron line emitted from the ultracompact (UC) H II regions W51d, G45.12+0.13, G35.20-1.74, and Monoceros R2, with 2 arcsec spatial resolution. We found that the overall distribution of the [Ne II] emission is generally in good agreement with the radio (5 or 15 GHz) VLA distribution for each source. The Ne+ abundance ([Ne+/H+]) distributions are also derived from the [Ne II] and the radio maps. For G45.12+0.13 and W51d, the Ne+ abundance decreases steeply from the outer part of the map toward the radio peak. On the other hand, the Ne+ abundance distributions of G35.20-1.74 and Mon R2 appear rather uniform. These results can be interpreted by the variation of the ionizing structures of neon, which is primarily determined by the spectral type of the ionizing stars. We have evaluated the effective temperature of the ionizing star by comparing the Ne+ abundance averaged over the whole observed region with that calculated by H II region models based on recent non-LTE stellar atmosphere models: 39,100 (+1100, -500) K (O7.5V-O8V) for W51d, 37,200 (+1000, -700) K (O8V-O8.5V) for G45.12+0.13, 35,000-37,600 (+1500, -600) K (O8V-O9V) for G35.20-1.74, and < 34,000 K (< B0V) for Mon R2. These effective temperatures are consistent with those inferred from the observed Ne+ abundance distributions.
An integral criterion for the existence of an invariant measure of an It\^{o} process is developed. This new criterion is based on the probabilistic symbol of the It\^{o} process. In contrast to the standard integral criterion for invariant measures of Markov processes based on the generator, no test functions and hence no information on the domain of the generator is needed.
The dominantly orbital state method allows a semiclassical description of quantum systems. Originally, it was developed for two-body relativistic systems. Here, the method is extended to treat two-body Hamiltonians and systems with three identical particles, in $D\ge 2$ dimensions, with arbitrary kinetic energy and potential. This method is very easy to implement and can be used in a large variety of fields. Results are expected to be reliable for large values of the orbital angular momentum and small radial excitations, but information about the whole spectrum can also be obtained in some very specific cases.
We calculate the magnetic response of a buckled honeycomb lattice with intrinsic spin-orbit coupling (such as silicene) which supports valley-spin polarized energy bands when subjected to a perpendicular electric field $E_z$. By changing the magnitude of the external electric field, the size of the two band gaps involved can be tuned, and a transition from a topological insulator (TI) to a trivial band insulator (BI) is induced as one of the gaps becomes zero, and the system enters a valley-spin polarized metallic state (VSPM). In an external magnetic field ($B$), a distinct signature of the transition is seen in the derivative of the magnetization with respect to chemical potential ($\mu$), which gives the quantization of the Hall plateaus through the Streda relation. When plotted as a function of the external electric field, the magnetization has an abrupt change in slope at its minimum which signals the VSPM state. The magnetic susceptibility ($\chi$) shows jumps as a function of $\mu$ when a band gap is crossed, which provides a measure of the gaps' variation as a function of external electric field. Alternatively, at fixed $\mu$, the susceptibility displays an increasingly large diamagnetic response as the electric field approaches the critical value of the VSPM phase. In the VSPM state, magnetic oscillations exist for any value of chemical potential, while for the TI and BI states, $\mu$ must be larger than the minimum gap in the system. When $\mu$ is larger than both gaps, there are two fundamental cyclotron frequencies (which can also be tuned by $E_z$) involved in the de Haas-van Alphen oscillations which are close in magnitude. This causes a prominent beating pattern to emerge.
System reliability analysis aims at computing the probability of failure of an engineering system given a set of uncertain inputs and limit state functions. Active-learning solution schemes have been shown to be a viable tool but as of yet they are not as efficient as in the context of component reliability analysis. This is due to some peculiarities of system problems, such as the presence of multiple failure modes and their uneven contribution to failure, or the dependence on the system configuration (e.g., series or parallel). In this work, we propose a novel active learning strategy designed for solving general system reliability problems. This algorithm combines subset simulation and Kriging/PC-Kriging, and relies on an enrichment scheme tailored to specifically address the weaknesses of this class of methods. More specifically, it relies on three components: (i) a new learning function that does not require the specification of the system configuration, (ii) a density-based clustering technique that allows one to automatically detect the different failure modes, and (iii) sensitivity analysis to estimate the contribution of each limit state to system failure so as to select only the most relevant ones for enrichment. The proposed method is validated on two analytical examples and compared against results gathered in the literature. Finally, a complex engineering problem related to power transmission is solved, thereby showcasing the efficiency of the proposed method in a real-case scenario.
This note extends Rio's proof [C. R. Math. Acad. Sci. Paris, 1995] of the rate of convergence in the Marcinkiewicz--Zygmund strong law of large numbers to the case of sums of dependent random variables with regularly varying normalizing constants. It allows us to obtain a complete convergence result for dependent sequences under uniformly bounded moment conditions. This result is new even when the underlying random variables are independent. The main theorems are applied to three different dependence structures: (i) $m$-pairwise negatively dependent random variables, (ii) $m$-extended negatively dependent random variables, and (iii) $\varphi$-mixing sequences. To the best of our knowledge, the results for cases (i) and (ii) are the first results in the literature on complete convergence for sequences of $m$-pairwise negatively dependent random variables and $m$-extended negatively dependent random variables under the optimal moment conditions, even when $m=1$. While the results for cases (i) and (iii) unify and improve many existing ones, the result for case (ii) complements the main result of Chen et al. [J. Appl. Probab., 2010]. Affirmative answers to open questions raised by Chen et al. [J. Math. Anal. Appl., 2014] and Wu and Rosalsky [Glas. Mat. Ser. III, 2015] are also given. An example illustrating the sharpness of the main result is presented.
In this paper we study several issues related to the generation of superpotential induced by background Ramond-Ramond fluxes in compactification of Type IIA string theory on Calabi-Yau four-folds. Identifying BPS solitons with D-branes wrapped over calibrated submanifolds in a Calabi-Yau space, we propose a general formula for the superpotential and justify it comparing the supersymmetry conditions in D=2 and D=10 supergravity theories. We also suggest a geometric interpretation to the supersymmetric index in the two-dimensional effective theory in terms of topological invariants of the Calabi-Yau four-fold, and estimate the asymptotic growth of these invariants from BTZ black hole entropy. Finally, we explicitly construct new supersymmetric vacua for Type IIA string theory compactification on a Calabi-Yau four-fold with Ramond-Ramond fluxes.
We study the computation of Gaussian orthant probabilities, i.e. the probability that a Gaussian falls inside a quadrant. The Geweke-Hajivassiliou-Keane (GHK) algorithm [Genz, 1992; Geweke, 1991; Hajivassiliou et al., 1996; Keane, 1993] is currently used for integrals of dimension greater than 10. In this paper we show that for Markovian covariances GHK can be interpreted as the estimator of the normalizing constant of a state space model using sequential importance sampling (SIS). We show that for an AR(1) covariance the variance of the GHK estimator, properly normalized, diverges exponentially fast with the dimension. As an improvement we propose using a particle filter (PF). We then generalize this idea to arbitrary covariance matrices using Sequential Monte Carlo (SMC) with properly tailored MCMC moves. We show empirically that this can lead to drastic improvements on currently used algorithms. We also extend the framework to orthants of mixtures of Gaussians (Student, Cauchy, etc.), and to the simulation of truncated Gaussians.
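For reference, a minimal sketch of the standard GHK / sequential-importance-sampling estimator for an orthant probability P(X > 0), X ~ N(0, Sigma); this is the textbook form of the algorithm discussed above, not the paper's particle-filter improvement:

```python
import numpy as np
from scipy.stats import norm

def ghk_orthant(Sigma, n_samples=10_000, rng=None):
    """GHK estimate of P(X > 0) for X ~ N(0, Sigma), via sequential truncated sampling."""
    rng = np.random.default_rng() if rng is None else rng
    L = np.linalg.cholesky(Sigma)
    d = Sigma.shape[0]
    weights = np.empty(n_samples)
    for s in range(n_samples):
        eta, w = np.zeros(d), 1.0
        for i in range(d):
            lower = -(L[i, :i] @ eta[:i]) / L[i, i]      # x_i > 0  <=>  eta_i > lower
            p = 1.0 - norm.cdf(lower)                    # mass of the allowed region
            w *= p                                       # importance weight (SIS)
            u = rng.uniform()
            eta[i] = norm.ppf(norm.cdf(lower) + u * p)   # truncated standard normal draw
        weights[s] = w
    return weights.mean()

# 2D sanity check: for rho = 0.5, P(X1>0, X2>0) = 1/4 + arcsin(0.5)/(2*pi) = 1/3
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
print(ghk_orthant(Sigma, rng=np.random.default_rng(1)))
```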
The possibility of interpreting baryons containing a single heavy quark as bound states of solitons (that arise in the nonlinear sigma model) and heavy mesons is explored. Particular attention is paid to the parity of the bound states and to the role of heavy quark symmetry.
We derive a quadratic recursion relation for the linear Hodge integrals of the form $\langle\tau_2^n\lambda_k\rangle$. These numbers are used in a formula for Masur-Veech volumes of moduli spaces of quadratic differentials discovered by Chen, M\"oller, and Sauvaget. Therefore, our recursion provides an efficient way of computing these volumes.
This paper addresses the problem of estimating the 3-DoF camera pose for a ground-level image with respect to a satellite image that encompasses the local surroundings. We propose a novel end-to-end approach that leverages the learning of dense pixel-wise flow fields in pairs of ground and satellite images to calculate the camera pose. Our approach differs from existing methods by constructing the feature metric at the pixel level, enabling full-image supervision for learning distinctive geometric configurations and visual appearances across views. Specifically, our method employs two distinct convolutional networks for ground and satellite feature extraction. Then, we project the ground feature map to the bird's eye view (BEV) using a fixed camera height assumption to achieve preliminary geometric alignment. To further establish content association between the BEV and satellite features, we introduce a residual convolution block to refine the projected BEV feature. Optical flow estimation is performed on the refined BEV feature map and the satellite feature map using flow decoder networks based on RAFT. After obtaining dense flow correspondences, we apply the least-squares method to filter matching inliers and regress the ground camera pose. Extensive experiments demonstrate significant improvements compared to state-of-the-art methods. Notably, our approach reduces the median localization error by 89%, 19%, 80% and 35% on the KITTI, Ford multi-AV, VIGOR and Oxford RobotCar datasets, respectively.
Offline reinforcement learning has emerged as a promising technology by enhancing its practicality through the use of pre-collected large datasets. Despite its practical benefits, most algorithm development research in offline reinforcement learning still relies on game tasks with synthetic datasets. To address such limitations, this paper provides autonomous driving datasets and benchmarks for offline reinforcement learning research. We provide 19 datasets, including real-world human drivers' datasets, and seven popular offline reinforcement learning algorithms in three realistic driving scenarios. We also provide a unified decision-making process model that can operate effectively across different scenarios, serving as a reference framework in algorithm design. Our research lays the groundwork for further collaborations in the community to explore practical aspects of existing reinforcement learning methods. Datasets and code can be found at https://sites.google.com/view/ad4rl.
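As a generic illustration of the final pose-regression step named above, the following sketch recovers a 3-DoF planar pose (yaw plus 2D translation) from point correspondences by least squares; the paper's actual inlier filtering and weighting are not reproduced here, and the correspondence layout is assumed:

```python
import numpy as np

def fit_planar_pose(p_bev, p_sat):
    """Least-squares 2D rigid transform (theta, t) such that p_sat ~ R(theta) @ p_bev + t."""
    mu_b, mu_s = p_bev.mean(axis=0), p_sat.mean(axis=0)
    H = (p_bev - mu_b).T @ (p_sat - mu_s)            # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])    # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_s - R @ mu_b
    return np.arctan2(R[1, 0], R[0, 0]), t

# Toy check with a known 30-degree yaw and translation (5, -2), plus mild noise.
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(100, 2))
theta_true = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                   [np.sin(theta_true),  np.cos(theta_true)]])
obs = pts @ R_true.T + np.array([5.0, -2.0]) + 0.01 * rng.normal(size=pts.shape)
theta, t = fit_planar_pose(pts, obs)
print(np.rad2deg(theta), t)
```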
Heath and Pemmaraju conjectured that the queue-number of a poset is bounded by its width and if the poset is planar then also by its height. We show that there are planar posets whose queue-number is larger than their height, refuting the second conjecture. On the other hand, we show that any poset of width $2$ has queue-number at most $2$, thus confirming the first conjecture in the first non-trivial case. Moreover, we improve the previously best known bounds and show that planar posets of width $w$ have queue-number at most $3w-2$ while any planar poset with $0$ and $1$ has queue-number at most its width.
We find Mask2Former also achieves state-of-the-art performance on video instance segmentation without modifying the architecture, the loss or even the training pipeline. In this report, we show universal image segmentation architectures trivially generalize to video segmentation by directly predicting 3D segmentation volumes. Specifically, Mask2Former sets a new state-of-the-art of 60.4 AP on YouTubeVIS-2019 and 52.6 AP on YouTubeVIS-2021. We believe Mask2Former is also capable of handling video semantic and panoptic segmentation, given its versatility in image segmentation. We hope this will make state-of-the-art video segmentation research more accessible and bring more attention to designing universal image and video segmentation architectures.
Document Layout Analysis is a fundamental step in Handwritten Text Processing systems, from the extraction of the text lines to the classification of the zones they belong to. We present a system based on artificial neural networks which is able not only to determine the baselines of the text lines present in the document, but also to perform geometric and logical layout analysis of the document. Experiments on three different datasets demonstrate the potential of the method and show competitive results with respect to state-of-the-art methods.
In many real-world applications such as business planning and sensor data monitoring, one important, yet challenging, task is to rank objects (e.g., products, documents, or spatial objects) based on their ranking scores and efficiently return those objects with the highest scores. In practice, due to the unreliability of data sources, many real-world objects often contain noise and are thus imprecise and uncertain. In this paper, we study the problem of the probabilistic top-k dominating (PTD) query on such large-scale uncertain data in a distributed environment, which retrieves k uncertain objects from distributed uncertain databases (on multiple distributed servers), having the largest ranking scores with high confidence. In order to efficiently tackle the distributed PTD problem, we propose a MapReduce framework for processing distributed PTD queries over distributed uncertain databases. In this MapReduce framework, we design effective pruning strategies to filter out false alarms in the distributed setting, propose cost-model-based index distribution mechanisms over servers, and develop efficient distributed PTD query processing algorithms. Extensive experiments have demonstrated the efficiency and effectiveness of our proposed distributed PTD approach on both real and synthetic data sets under various experimental settings.
Generalizing S. Gelfand's classical construction of a Novikov algebra from a commutative differential algebra, a deformation family $(A,\circ_q)$, for scalars $q$, of Novikov algebras is constructed from what we call an admissible commutative differential algebra, by adding a second linear operator to the commutative differential algebra with certain admissibility condition. The case of $(A,\circ_0)$ recovers the construction of S. Gelfand. This admissibility condition also ensures a bialgebra theory of commutative differential algebras, enriching the antisymmetric infinitesimal bialgebra. This way, a deformation family of Novikov bialgebras is obtained, under the further condition that the two operators are bialgebra derivations. As a special case, we obtain a bialgebra variation of S. Gelfand's construction with an interesting twist: every commutative and cocommutative differential antisymmetric infinitesimal bialgebra gives rise to a Novikov bialgebra whose underlying Novikov algebra is $(A,\circ_{-\frac{1}{2}})$ instead of $(A,\circ_0)$. The close relations of the classical bialgebra theories with Manin triples, classical Yang-Baxter type equations, $\mathcal{O}$-operators, and pre-structures are expanded to the two new bialgebra theories, in a way that is compatible with the just established connection between the two bialgebras. As an application, Novikov bialgebras are obtained from admissible differential Zinbiel algebras.
Deep learning methods have been considered promising for accelerating molecular screening in drug discovery and material design. Due to the limited availability of labelled data, various self-supervised molecular pre-training methods have been presented. While many existing methods utilize common pre-training tasks in computer vision (CV) and natural language processing (NLP), they often overlook the fundamental physical principles governing molecules. In contrast, applying denoising in pre-training can be interpreted as an equivalent force learning, but the limited noise distribution introduces bias into the molecular distribution. To address this issue, we introduce a molecular pre-training framework called fractional denoising (Frad), which decouples noise design from the constraints imposed by force learning equivalence. In this way, the noise becomes customizable, allowing for incorporating chemical priors to significantly improve molecular distribution modeling. Experiments demonstrate that our framework consistently outperforms existing methods, establishing state-of-the-art results across force prediction, quantum chemical properties, and binding affinity tasks. The refined noise design enhances force accuracy and sampling coverage, which contribute to the creation of physically consistent molecular representations, ultimately leading to superior predictive performance.
Stoichiometric Sr2IrO4 is a ferromagnetic Jeff = 1/2 Mott insulator driven by strong spin-orbit coupling. Introduction of very dilute oxygen vacancies into single-crystal Sr2IrO4-delta with delta < 0.04 leads to significant changes in lattice parameters and an insulator-to-metal transition at TMI = 105 K. The highly anisotropic electrical resistivity of the low-temperature metallic state for delta ~ 0.04 exhibits anomalous properties characterized by non-Ohmic behavior and an abrupt current-induced transition in the resistivity at T* = 52 K, which separates two regimes of resistive switching in the nonlinear I-V characteristics. The novel behavior illustrates an exotic ground state and constitutes a new paradigm for device structures in which electrical resistivity is manipulated via low-level current densities ~ 10 mA/cm2 (compared to higher spin-torque currents ~ 10^7-10^8 A/cm2) or magnetic inductions ~ 0.1-1.0 T.
Graphene nanoribbons are a promising candidate for fault-tolerant quantum electronics. In this scenario, qubits are realised by localised states that can emerge on junctions in hybrid ribbons formed by two armchair nanoribbons of different widths. We derive an effective theory based on a tight-binding ansatz for the description of hybrid nanoribbons and use it to make accurate predictions of the energy gap and nature of the localisation in various hybrid nanoribbon geometries. We use quantum Monte Carlo simulations to demonstrate that the effective theory remains applicable in the presence of Hubbard interactions. We discover, in addition to the well known localisations on junctions, which we call `Fuji', a new type of `Kilimanjaro' localisation smeared out over a segment of the hybrid ribbon. We show that Fuji localisations in hybrids of width $N$ and $N+2$ armchair nanoribbons occur around symmetric junctions if and only if $N\pmod3=1$, while edge-aligned junctions never support strong localisation. This behaviour cannot be explained relying purely on the topological $Z_2$ invariant, which has been believed to be the origin of the localisations to date.
Semantic communication serves as a novel paradigm and has attracted broad interest from researchers. One critical aspect of it is multi-user semantic communication theory, which can further promote its application to practical network environments. While most existing works have focused on the design of end-to-end single-user semantic transmission, a novel non-orthogonal multiple access (NOMA)-based multi-user semantic communication system named NOMASC is proposed in this paper. The proposed system can support semantic transmission for multiple users with diverse modalities of source information. To avoid high demands on hardware, an asymmetric quantizer is employed at the end of the semantic encoder for discretizing the continuous full-resolution semantic feature. In addition, a neural network model is proposed for mapping the discrete feature into self-learned symbols and accomplishing intelligent multi-user detection (MUD) at the receiver. Simulation results demonstrate that the proposed system achieves good performance in the non-orthogonal transmission of multiple user signals and outperforms the other methods, especially at low-to-medium SNRs. Moreover, it has high robustness under various simulation settings and mismatched test scenarios.
In this paper we continue the study of non-relativistic p+1 dimensional theories that we started in arXiv:0904.1343. We extend the analysis presented there to the case of stable and unstable Dp-branes.
Clustering analysis has become a ubiquitous information retrieval tool in a wide range of domains, but a more automatic framework is still lacking. Though internal metrics are the key players towards a successful retrieval of clusters, their effectiveness on real-world datasets remains not fully understood, mainly because of the unrealistic assumptions they make about the underlying datasets. We hypothesized that capturing {\it traces of information gain} between increasingly complex clustering retrievals---{\it InfoGuide}---enables an automatic clustering analysis with improved clustering retrievals. We validated the {\it InfoGuide} hypothesis by capturing the traces of information gain using the Kolmogorov-Smirnov statistic and comparing the clusters retrieved by {\it InfoGuide} against those retrieved by other commonly used internal metrics in artificially generated, benchmark, and real-world datasets. Our results suggested that {\it InfoGuide} can enable a more automatic clustering analysis and may be more suitable for retrieving clusters in real-world datasets displaying nontrivial statistical properties.
In this work, we investigate the size, thermal inertia, surface roughness, and geometric albedo of 10 Vesta family asteroids by using the Advanced Thermophysical Model (ATPM), based on thermal infrared data acquired mainly by NASA's Wide-field Infrared Survey Explorer (WISE). Here we show that the average thermal inertia and geometric albedo of the investigated Vesta family members are 42 $\rm J m^{-2} s^{-1/2} K^{-1}$ and 0.314, respectively, and the derived effective diameters are less than 10 km. Moreover, the family members have a relatively low roughness fraction on their surfaces. The similarity in thermal inertia and geometric albedo among the V-type Vesta family members may reveal their close connection in origin and evolution. As fragments of the cratering event on Vesta, the family members may have undergone similar evolution processes, thereby leading to very close thermal properties. Finally, we estimate their regolith grain sizes with different volume filling factors.
It has become clear during the last decades that the interaction between the supernova ejecta and the circumstellar medium plays a major role both for the observational properties of the supernova and for understanding the evolution of the progenitor star leading up to the explosion. In addition, it provides an opportunity to understand the shock physics connected to both thermal and non-thermal processes, including relativistic particle acceleration, radiation processes and the hydrodynamics of shock waves. This chapter has an emphasis on the information we can get from radio and X-ray observations, but also their connection to observations in the optical and ultraviolet. We first review the different physical processes involved in circumstellar interaction, including hydrodynamics, thermal X-ray emission, acceleration of relativistic particles, and non-thermal emission processes in the radio and X-ray ranges. Finally, we discuss applications of these to different types of supernovae.
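A toy illustration of the statistical ingredient named above, a two-sample Kolmogorov-Smirnov comparison between quantities derived from successive clustering retrievals. This is not the authors' actual InfoGuide procedure; the choice of compared quantities is an assumption made for illustration:

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=600, centers=3, random_state=0)

def point_to_center_distances(X, k):
    """Distance of each point to its assigned centroid for a k-cluster retrieval."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    return np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)

# Compare the distance distributions of increasingly complex retrievals:
# a large KS statistic suggests the extra cluster still changes the structure
# ("information gain"), a small one suggests diminishing returns.
prev = point_to_center_distances(X, 1)
for k in range(2, 7):
    cur = point_to_center_distances(X, k)
    stat, p = ks_2samp(prev, cur)
    print(f"k={k}: KS statistic = {stat:.3f}, p = {p:.3g}")
    prev = cur
```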
We review measurements of semileptonic and leptonic charm meson decays performed by the Belle experiment, and we use these results to estimate the sensitivity of the follow-on Belle II experiment to these decays.
The use of machine learning to develop intelligent software tools for interpretation of radiology images has gained widespread attention in recent years. The development, deployment, and eventual adoption of these models in clinical practice, however, remains fraught with challenges. In this paper, we propose a list of key considerations that machine learning researchers must recognize and address to make their models accurate, robust, and usable in practice. Namely, we discuss: insufficient training data, decentralized datasets, high cost of annotations, ambiguous ground truth, imbalance in class representation, asymmetric misclassification costs, relevant performance metrics, generalization of models to unseen datasets, model decay, adversarial attacks, explainability, fairness and bias, and clinical validation. We describe each consideration and identify techniques to address it. Although these techniques have been discussed in prior research literature, by freshly examining them in the context of medical imaging and compiling them in the form of a laundry list, we hope to make them more accessible to researchers, software developers, radiologists, and other stakeholders.
The large ttbar production cross-section at the LHC suggests the use of top quark decays to calibrate several critical parts of the detectors, such as the trigger system, the jet energy scale and b-tagging.
Geodesic equations of timelike and null charged particles in the Ernst metric are studied. We consider two distinct forms of the Ernst solution where the Maxwell potential represents either a uniform electric or magnetic field. Circular orbits in various configurations are considered, as well as their perturbations and stability. We find that the electric field strength must be below a certain charge-dependent critical value for these orbits to be stable. The case of the magnetic Ernst metric contains a limit which reduces to the Melvin magnetic universe. In this case the equations of motion are solved to reveal cycloidlike or trochoidlike motion, similar to those found by Frolov and Shoom around black holes immersed in test magnetic fields.
We study the effect of disorder and doping on the metal-insulator transition in a repulsive Hubbard model on a square lattice using the determinant quantum Monte Carlo method. First, with the aim of making our results reliable, we examine the sign problem as a function of various parameters such as temperature, disorder, on-site interaction, and lattice size. We show that in the presence of randomness in the hopping elements, a metal-insulator transition occurs and the critical disorder strength differs at different fillings. We also demonstrate that doping is a driving force behind the metal-insulator transition.
Risk assessment is a major challenge for supply chain managers, as it potentially affects business factors such as service costs, supplier competition and customer expectations. The increasing interconnectivity between organisations has put into focus methods for supply chain cyber risk management. We introduce a general approach to support such activity taking into account various techniques of attacking an organisation and its suppliers, as well as the impacts of such attacks. Since data is lacking in many respects, we use structured expert judgment methods to facilitate its implementation. We couple a family of forecasting models to enrich risk monitoring. The approach may be used to set up risk alarms, negotiate service level agreements, rank suppliers and identify insurance needs, among other management possibilities.
We study the tau-function and theta-divisor of an isomonodromic family of linear differential (2x2)-systems with non-resonant irregular singularities. In a particular case, the estimates for the pole orders of the coefficient matrices of the family are applied.
Conversational recommender systems (CRS) that are able to interact with users in natural language often utilize recommendation dialogs which were previously collected with the help of paired humans, where one plays the role of a seeker and the other of a recommender. These recommendation dialogs include items and entities that indicate the users' preferences. In order to precisely model the seekers' preferences and respond consistently, CRS typically rely on item and entity annotations. A recent example of such a dataset is INSPIRED, which consists of recommendation dialogs for sociable conversational recommendation, where items and entities were annotated using automatic keyword or pattern matching techniques. An analysis of this dataset unfortunately revealed that there is a substantial number of cases where items and entities were either wrongly annotated or annotations were missing altogether. This leads to the question of to what extent automatic annotation techniques are effective. Moreover, it is important to study the impact of annotation quality on the overall effectiveness of a CRS in terms of the quality of the system's responses. To study these aspects, we manually fixed the annotations in INSPIRED. We then evaluated the performance of several benchmark CRS using both versions of the dataset. Our analyses suggest that the improved version of the dataset, i.e., INSPIRED2, helped increase the performance of several benchmark CRS, emphasizing the importance of data quality both for end-to-end learning and retrieval-based approaches to conversational recommendation. We release our improved dataset (INSPIRED2) publicly at https://github.com/ahtsham58/INSPIRED2.
We report several recent updates from the BABAR Collaboration on the matrix elements $|V_{cb}|$, $|V_{ub}|$, and $|V_{td}|$ of the Cabibbo-Kobayashi-Maskawa (CKM) quark-mixing matrix, and the angles $\beta$ and $\alpha$ of the unitarity triangle. Most results presented here are using the full BABAR $\Upsilon(4S)$ data set.
Recent works have revealed that Transformers implicitly learn syntactic information in their lower layers from data, albeit highly dependent on the quality and scale of the training data. However, learning syntactic information from data is not necessary if we can leverage an external syntactic parser, which provides better parsing quality with well-defined syntactic structures. This could potentially improve the Transformer's performance and sample efficiency. In this work, we propose a syntax-guided localized self-attention for the Transformer that allows directly incorporating grammar structures from an external constituency parser. It prevents the attention mechanism from overweighting grammatically distant tokens over close ones. Experimental results show that our model can consistently improve translation performance on a variety of machine translation datasets, ranging from small to large dataset sizes, and with different source languages.
We analyze the statistical properties and dynamical implications of galaxy distributions in phase space for samples selected from the 2MASS Extended Source Catalog. The galaxy distribution is decomposed into modes $\delta({\bf k, x})$ which describe the number density perturbations of galaxies in the phase space cell given by the scale band $\bf k$ to ${\bf k}+\Delta {\bf k}$ and the spatial range $\bf x$ to ${\bf x}+\Delta {\bf x}$. In the nonlinear regime, $\delta({\bf k, x})$ is highly non-Gaussian. We find, however, that the correlations between $\delta({\bf k, x})$ and $\delta({\bf k', x'})$ are always very weak if the spatial ranges (${\bf x}$, ${\bf x}+\Delta {\bf x}$) and (${\bf x'}$, ${\bf x'}+\Delta {\bf x'}$) do not overlap. This feature is due to the fact that the spatial locality of the initial perturbations is memorized during hierarchical clustering. The high spatial locality of the 2MASS galaxy correlations is strong evidence that the initial perturbations of the cosmic mass field are spatially localized, and therefore consistent with Gaussian initial perturbations on scales as small as about 0.1 h$^{-1}$ Mpc. Moreover, the 2MASS galaxy spatial locality indicates that the relationship between density perturbations of galaxies and the underlying dark matter should be localized in phase space. That is, for a structure consisting of perturbations on scales from $k$ to $k+\Delta {k}$, the nonlocal range in the relation between galaxies and dark matter should {\it not} be larger than $|\Delta {\bf x}|=2\pi/|\Delta {\bf k}|$. The stochasticity and nonlocality of the bias relation between galaxies and dark matter fields should be no more than the allowed range given by the uncertainty relation $|\Delta {\bf x}||\Delta{\bf k}|=2\pi$.
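A minimal sketch of what a constituency-based locality mask for self-attention could look like; the spans and masking rule below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def constituency_mask(n_tokens, spans):
    """Allow attention only between tokens that share at least one constituent span."""
    mask = np.eye(n_tokens, dtype=bool)
    for start, end in spans:                 # spans are [start, end) token ranges
        mask[start:end, start:end] = True
    return mask

def masked_self_attention(X, mask):
    """Single-head scaled dot-product attention (Q = K = V = X for brevity),
    with disallowed positions set to -inf before the softmax."""
    scores = X @ X.T / np.sqrt(X.shape[1])
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X

# Toy sentence of 6 tokens with two overlapping constituents from a hypothetical parser.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))
mask = constituency_mask(6, spans=[(0, 3), (2, 6)])
print(masked_self_attention(X, mask).shape)
```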
We report results of zero-field muon spin relaxation experiments on the filled-skutterudite superconductors~Pr$_{1-x}$Ce$_{x}$Pt$_4$Ge$_{12}$, $x = 0$, 0.07, 0.1, and 0.2, to investigate the effect of Ce doping on broken time-reversal symmetry (TRS) in the superconducting state. In these alloys broken TRS is signaled by the onset of a spontaneous static local magnetic field~$B_s$ below the superconducting transition temperature. We find that $B_s$ decreases linearly with $x$ and $\to 0$ at $x \approx 0.4$, close to the concentration above which superconductivity is no longer observed. The (Pr,Ce)Pt$_4$Ge$_{12}$ and isostructural (Pr,La)Os$_4$Sb$_{12}$ alloy series both exhibit superconductivity with broken TRS, and in both the decrease of $B_s$ is proportional to the decrease of Pr concentration. This suggests that Pr-Pr intersite interactions are responsible for the broken TRS. The two alloy series differ in that the La-doped alloys are superconducting for all La concentrations, suggesting that in (Pr,Ce)Pt$_4$Ge$_{12}$ pair-breaking by Ce doping suppresses superconductivity. For all $x$ the dynamic muon spin relaxation rate decreases somewhat in the superconducting state. This may be due to Korringa relaxation by conduction electrons, which is reduced by the opening of the superconducting energy gap.
We obtain a lower bound for the coarse Ricci curvature of continuous time pure jump Markov processes, with an emphasis on interacting particle systems. Applications to several models are provided, with a detailed study of the herd behavior of a simple model of interacting agents.
We prove that the insertion-elimination Lie algebra of Feynman graphs, in the ladder case, has a natural interpretation in terms of a certain algebra of infinite dimensional matrices. We study some aspects of its representation theory and we discuss some relations with the representation of the Heisenberg algebra.
Self-consistent field theory (SCFT) has established that for cubic network phases in diblock copolymer melts, the double-gyroid (DG) is thermodynamically stable relative to the competitor double-diamond (DD) and double-primitive (DP) phases, and exhibits a window of stability intermediate to the classical lamellar and columnar phases. This competition is widely thought to be controlled by "packing frustration" -- the incompatibility of uniformly filling melts with a locally preferred chain packing motif. Here, we reassess the thermodynamics of cubic network formation in strongly-segregated diblock melts, based on a recently developed medial strong segregation theory ("mSST") approach that directly connects the shape and thermodynamics of chain packing environments to the medial geometry of tubular network surfaces. We first show that medial packing significantly relaxes prior SST upper bounds on the free energy of network phases, which we attribute to the spreading of terminal chain ends within network nodal regions. Exploring geometric and thermodynamic metrics of chain packing in network phases, we show that mSST reproduces effects dependent on the elastic asymmetry of the blocks that are consistent with SCFT at large $\chi N$. We then characterize geometric frustration in terms of the spatially-variant distributions of local entropic and enthalpic costs throughout the morphologies, extracted from mSST predictions. We find that the DG morphology, due to its unique medial geometry in the nodal regions, is stabilized by the incorporation of favorable, quasi-lamellar packing over much of its morphology, motifs which are inaccessible to DD and DP morphologies due to "interior corners" in their medial geometries. Finally, we use our results to analyze "hot spots" of chain stretching and discuss implications for network susceptibility to the uptake of guest molecules.
For the fermion transformation in the space all books of quantum mechanics propose to use the unitary operator $\widehat{U}_{\vec n}(\varphi)=\exp{(-i\frac\varphi2(\widehat\sigma\cdot\vec n))}$, where $\varphi$ is angle of rotation around the axis $\vec{n}$. But this operator turns the spin in inverse direction presenting the rotation to the left. The error of defining of $\widehat{U}_{\vec n}(\varphi)$ action is caused because the spin supposed as simple vector which is independent from $\widehat\sigma$-operator a priori. In this work it is shown that each fermion marked by number $i$ has own Pauli-vector $\widehat\sigma_i$ and both of them change together. If we suppose the global $\widehat\sigma$-operator and using the Bloch Sphere approach define for all fermions the common quantization axis $z$ the spin transformation will be the same: the right hand rotation around the axis $\vec{n}$ is performed by the operator $\widehat{U}^+_{\vec n}(\varphi)=\exp{(+i\frac\varphi2(\widehat\sigma\cdot\vec n))}$.
A new optimized extreme learning machine (ELM) based method for power system transient stability prediction (TSP) using synchrophasors is presented in this paper. First, the input features symbolizing the transient stability of power systems are extracted from synchronized measurements. Then, an ELM classifier is employed to build the TSP model. Finally, the parameters of the model are optimized by using the improved particle swarm optimization (IPSO) algorithm. The novelty of the proposal lies in the fact that it improves the prediction performance of the ELM-based TSP model by using IPSO to optimize the parameters of the model with synchrophasors. Based on the test results on both the IEEE 39-bus system and a large-scale real power system, the correctness and validity of the presented approach are verified.
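As a hedged illustration of the ELM component only (the IPSO parameter search and the synchrophasor feature extraction are not reproduced, and all names and the sigmoid activation are assumptions rather than the authors' implementation), a minimal random-feature ELM classifier could be sketched as follows.

```python
import numpy as np

def train_elm(X, y, n_hidden=100, seed=0):
    """Minimal extreme learning machine classifier (illustrative sketch).

    X : (n_samples, n_features) training features, e.g. synchrophasor-derived.
    y : (n_samples,) integer class labels (stable / unstable).
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden-layer response
    T = np.eye(y.max() + 1)[y]                    # one-hot targets
    beta = np.linalg.pinv(H) @ T                  # output weights by least squares
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)
```

The quantities an IPSO-style search would tune (for instance, the number of hidden nodes or the scale of the random weights) are exactly the free parameters left unspecified in this sketch.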
The influence of a Gaussian environment on a quantum system can be described by effectively replacing the continuum with a discrete set of ancillary quantum and classical degrees of freedom. This defines a pseudomode model which can be used to classically simulate the reduced system dynamics. Here, we consider an alternative point of view and analyze the potential benefits of an analog or digital quantum simulation of the pseudomode model itself. Superficially, such a direct experimental implementation is, in general, impossible due to the unphysical properties of the effective degrees of freedom involved. However, we show that the effects of the unphysical pseudomode model can still be reproduced using measurement results over an ensemble of physical systems involving ancillary harmonic modes and an optional stochastic driving field. This is done by introducing an extrapolation technique whose efficiency is limited by stability against imprecision in the measurement data. We examine how such a simulation would allow us to (i) perform accurate quantum simulation of the effects of complex non-perturbative and non-Markovian environments in regimes that are challenging for classical simulation, (ii) conversely, mitigate potential unwanted non-Markovian noise present in quantum devices, and (iii) restructure some of the properties of a given physical bath, such as its temperature.
The conventional mesh-based Level of Detail (LoD) technique, exemplified by applications such as Google Earth and many game engines, exhibits the capability to holistically represent a large scene, even the entire Earth, and achieves rendering with a space complexity of O(log n). This constrained data requirement not only enhances rendering efficiency but also facilitates dynamic data fetching, thereby enabling a seamless 3D navigation experience for users. In this work, we extend this proven LoD technique to Neural Radiance Fields (NeRF) by introducing an octree structure to represent the scenes at different scales. This innovative approach provides a mathematically simple and elegant representation with a rendering space complexity of O(log n), aligned with the efficiency of mesh-based LoD techniques. We also present a novel training strategy that maintains a complexity of O(n). This strategy allows for parallel training with minimal overhead, ensuring the scalability and efficiency of our proposed method. Our contribution is not only in extending the capabilities of existing techniques but also in establishing a foundation for scalable and efficient large-scale scene representation using NeRF and octree structures.
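As a rough sketch of how an octree yields an O(log n) level-of-detail query (the per-node NeRF representation and the training strategy are not shown, and all names and the screen-space threshold are illustrative assumptions), one descends the tree only while a node's apparent size is too large for the current viewpoint.

```python
from dataclasses import dataclass, field
import math

@dataclass
class OctreeNode:
    center: tuple                                   # (x, y, z) node center
    half_size: float                                # half of the node's edge length
    children: list = field(default_factory=list)    # up to 8 child nodes

def select_lod(node, camera_pos, pixel_threshold=0.01):
    """Return the nodes whose detail level matches the viewing distance.

    A node is refined only while its apparent (angular) size exceeds the
    threshold, so a query touches O(log n) nodes along each view direction.
    """
    dist = math.dist(node.center, camera_pos)
    apparent_size = (2 * node.half_size) / max(dist, 1e-9)
    if not node.children or apparent_size < pixel_threshold:
        return [node]                               # coarse enough: use this node
    selected = []
    for child in node.children:
        selected.extend(select_lod(child, camera_pos, pixel_threshold))
    return selected
```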
We report preliminary results for 2D massive QED with two flavours of Wilson fermions, using the Hermitean variant of L\"uscher's bosonization technique. The chiral condensate and meson masses are obtained. The simplicity of the model allows for high statistics simulations close to the chiral and continuum limit, both in the quenched approximation and with dynamical fermions.
Given one quasi-smooth derived space cut out of another by a section of a 2-term complex of bundles, we give two formulae for its virtual cycle. They are modelled on the $p$-fields construction of Chang-Li and the Quantum Lefschetz principle, and recover these when applied to moduli spaces of (stable or quasi-) maps. When the complex is a single bundle we recover results of Kim-Kresch-Pantev.
A well-known question asks whether the spectrum of the Laplacian on a Riemannian manifold $(M,g)$ determines the Riemannian metric $g$ up to isometry. A similar question is whether the energy spectrum of all harmonic maps from a given Riemannian manifold $(\Sigma,h)$ to $M$ determines the Riemannian metric on the target space. We consider this question in the case of harmonic maps between flat tori. In particular, we show that the two isospectral, non-isometric $16$-dimensional flat tori found by Milnor cannot be distinguished by the energy spectrum of harmonic maps from $d$-dimensional flat tori for $d\leq 3$, but can be distinguished by certain flat tori for $d\geq 4$. This is related to a property of the Siegel theta series in degree $d$ associated to the $16$-dimensional lattices in Milnor's example.
We review non-linear sigma-models with (2,1) and (2,2) supersymmetry. We focus on off-shell closure of the supersymmetry algebra and give a complete list of (2,2) superfields. We provide evidence to support the conjecture that all N=(2,2) non-linear sigma-models can be described by these fields. This in its turn leads to interesting consequences about the geometry of the target manifolds. One immediate corollary of this conjecture is the existence of a potential for hyper-Kahler manifolds, different from the Kahler potential, which allows for the computation not only of the metric but also of the three fundamental two-forms. Several examples are provided: WZW models on SU(2) x U(1) and SU(2) x SU(2) and four-dimensional special hyper-Kahler manifolds.
We present the kinematic analysis of $246$ stars within $4^\prime$ from the center of the Orion Nebula Cluster (ONC), the closest massive star cluster with active star formation across the full mass range, which provides valuable insights into the formation and evolution of star clusters on an individual-star basis. High-precision radial velocities and surface temperatures are retrieved from spectra acquired by the NIRSPEC instrument used with adaptive optics (NIRSPAO) on the Keck II 10-m telescope. A three-dimensional kinematic map is then constructed by combining these with the proper motions previously measured by the Hubble Space Telescope (HST) ACS/WFPC2/WFC3IR and Keck II NIRC2. The measured root-mean-squared velocity dispersion is $2.26\pm0.08~\mathrm{km}\,\mathrm{s}^{-1}$, significantly higher than the virial equilibrium's requirement of $1.73~\mathrm{km}\,\mathrm{s}^{-1}$, suggesting that the ONC core is supervirial, consistent with previous findings. Energy equipartition is not detected in the cluster. Most notably, the velocity of each star relative to its neighbors is found to be negatively correlated with stellar mass. Low-mass stars moving faster than their surrounding stars in a supervirial cluster suggests that the initial masses of forming stars may be related to their initial kinematic states. Additionally, a clockwise rotation preference is detected. A weak sign of inverse mass segregation is also identified among stars excluding the Trapezium stars, though it could be a sample bias. Finally, this study reports the discovery of four new candidate spectroscopic binary systems.
We introduce the circumcenter mapping induced by a set of (usually nonexpansive) operators. One prominent example of a circumcenter mapping is the celebrated Douglas--Rachford splitting operator. Our study is motivated by the Circumcentered--Douglas--Rachford method recently introduced by Behling, Bello Cruz, and Santos in order to accelerate the Douglas--Rachford method for solving certain classes of feasibility problems. We systematically explore the properness of the circumcenter mapping induced by reflectors or projectors. Numerous examples are presented. We also present a version of Browder's demiclosedness principle for circumcenter mappings.
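For concreteness, the basic building block of any circumcenter mapping, namely the circumcenter of finitely many points (e.g. of $x$, $R_A x$, $R_B R_A x$ in the affine Circumcentered--Douglas--Rachford setting), can be computed from a small linear system. The numpy sketch below is illustrative only and makes no claim about the authors' treatment of properness when the points are affinely dependent.

```python
import numpy as np

def circumcenter(points):
    """Circumcenter of the given points: the point of their affine hull
    (unique when it exists) that is equidistant from all of them.

    points : (m+1, n) array; rows are points in R^n.
    """
    p0, rest = points[0], points[1:]
    V = rest - p0                        # directions spanning the affine hull
    # Requiring ||c - p0|| = ||c - p_i|| with c = p0 + V^T a gives the
    # linear system 2 V V^T a = (||v_i||^2)_i.
    gram = 2.0 * V @ V.T
    rhs = np.sum(V * V, axis=1)
    a = np.linalg.solve(gram, rhs)       # fails if the points are affinely dependent
    return p0 + V.T @ a

# Example: three points on the unit circle have circumcenter at the origin.
pts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
print(circumcenter(pts))                 # approximately [0., 0.]
```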
To facilitate the evolution of edge intelligence in ever-changing environments, we study on-device incremental learning constrained by limited computation resources in this paper. Current on-device training methods focus only on efficient training without considering catastrophic forgetting, preventing the model from getting stronger when continually exploring the world. To solve this problem, a direct solution is to involve the existing incremental learning mechanisms in the on-device training framework. Unfortunately, such a manner cannot work well, as those mechanisms usually introduce large additional computational cost to the network optimization process, which would inevitably exceed the memory capacity of edge devices. To address this issue, this paper makes an early effort to propose a simple but effective edge-friendly incremental learning framework. Based on an empirical study of the knowledge intensity of the kernel elements of the neural network, we find that the center kernel is the key to maximizing the knowledge intensity for learning new data, while freezing the other kernel elements provides a good balance for the model's capacity to overcome catastrophic forgetting. Upon this finding, we further design a center-sensitive kernel optimization framework to largely alleviate the cost of gradient computation and back-propagation. Besides, a dynamic channel element selection strategy is also proposed to facilitate a sparse orthogonal gradient projection for further reducing the optimization complexity, upon the knowledge explored from the new task data. Extensive experiments validate that our method is efficient and effective; e.g., it achieves an average accuracy boost of 38.08% with even less memory and approximate computation compared to existing on-device training methods, indicating its significant potential for on-device incremental learning.
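To make the center-sensitive idea concrete, the PyTorch sketch below freezes every spatial position of a convolution kernel except its center element by masking gradients. This is an assumption-laden illustration of the stated finding, not the paper's framework: the dynamic channel selection and the sparse orthogonal gradient projection are omitted, and the helper name is hypothetical.

```python
import torch
import torch.nn as nn

def restrict_to_center_kernel(conv: nn.Conv2d):
    """Zero the gradient of every kernel element except the spatial center,
    so optimization only updates the center weights (illustrative sketch)."""
    k_h, k_w = conv.kernel_size
    mask = torch.zeros_like(conv.weight)
    mask[:, :, k_h // 2, k_w // 2] = 1.0
    conv.weight.register_hook(lambda grad: grad * mask)

# Usage: apply to the 3x3 convolutions of a backbone before on-device training.
layer = nn.Conv2d(16, 32, kernel_size=3, padding=1)
restrict_to_center_kernel(layer)
```

Because only one spatial position per kernel receives updates, both the gradient storage and the back-propagated computation shrink accordingly, which is the memory argument the abstract appeals to.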
We derive the central charge and BPS equations from the low-energy effective action for N=2 SU(2) Yang-Mills theory in the Coulomb phase, using a systematic, canonical procedure. We then obtain solutions for monopole and dyon BPS states, whose core structure is described by a dual Lagrangian containing the monopole or dyon as a fundamental field. Spherically symmetric states possess a shell of charge at a characteristic radius.
All possible symmetry-determined nonlinear normal modes (also called simple periodic orbits, one-mode solutions, etc.) in both hard and soft Fermi-Pasta-Ulam-$\beta$ chains are discussed. A general method for studying their stability in the thermodynamic limit is presented, together with its application to each of the above nonlinear normal modes.
We study a bosonic string with one end free and the other confined to a D-brane. Only the odd oscillator modes are allowed, which leads to a Virasoro algebra of even Virasoro modes only. The theory is quantized in a gauge where world-sheet time and ordinary time are identified. There are no negative or null norm states, and no tachyon. The Regge slope is twice that of the open string; this can serve as a test of the usefulness of the model as a semi-quantitative description of mesons with one light and one extremely heavy quark when such higher spin mesons are found. The Virasoro conditions select specific SO(D-1) irreps. The asymptotic density of states can be estimated by adapting the Hardy-Ramanujan analysis to a partition of odd integers; the estimate becomes exact as D goes to infinity.
The spin rate \Omega of neutron stars at a given temperature T is constrained by the interplay between gravitational-radiation instabilities and viscous damping. Navier-Stokes theory has been used to calculate the viscous damping timescales and produce a stability curve for r-modes in the (\Omega,T) plane. In Navier-Stokes theory, viscosity is independent of vorticity, but kinetic theory predicts a coupling of vorticity to the shear viscosity. We calculate this coupling and show that it can in principle significantly modify the stability diagram at lower temperatures. As a result, colder stars can remain stable at higher spin rates.
We show that the algebraic automorphism group of the SL(2,C) character variety of a closed orientable surface with negative Euler characteristic is a finite extension of its mapping class group. Along the way, we provide a simple characterization of the valuations on the character algebra coming from measured laminations.
Through experiments, we studied defect turbulence, a type of spatiotemporal chaos in planar systems of nematic liquid crystals, to clarify the chaotic advection of weak turbulence. In planar systems of large aspect ratio, structural relaxation, which is characterized by the dynamic structure factor, exhibits a long-period oscillation that is described well by a combination of a simple exponential relaxation and an underdamped oscillation. The simple relaxation arises as a result of the roll modulation, while the damped oscillation is manifest in the repetitive gliding of defect pairs in a local area. Each relaxation is derived analytically by the projection operator method, which separates turbulent transport into a macroscopic contribution and fluctuations. The analysis suggests that the two relaxations are not correlated. The nonthermal fluctuations of defect turbulence are consequently separated into two independent Markov processes. Our approach sheds light on diversity and universality from a unified viewpoint for weak turbulence.
Two reactions, pp->ppX and pp->p\pi^+X, are used to study the 1.47<M<1.68 GeV baryonic mass range. Three different final states are considered in the invariant masses: N^* or \Delta^+, p\pi^0, and p\eta. The last two channels are defined by software cuts applied to the missing mass of the first reaction. Several narrow structures are extracted with widths \sigma(\Gamma) varying between 3 and 9 MeV. Some structures are observed in one channel but not in others. Such nonobservation may be due either to the spectrometer momenta limits or to the physics (e.g. no such disintegration channel is allowed from the narrow state considered). We tentatively conclude that the broad Particle Data Group (PDG) baryonic resonances N(1520)D13, N(1535)S11, Delta(1600)P33, and N(1675)D15 are collective states built from several narrow and weakly excited resonances, each having a (much) smaller width than the one reported by PDG.
A ribbon is a surface swept out by a line segment turning as it moves along a central curve. For narrow magnetic ribbons, for which the length of the line segment is much less than the length of the curve, the anisotropy induced by the magnetostatic interaction is biaxial, with hard axis normal to the ribbon and easy axis along the central curve. The micromagnetic energy of a narrow ribbon reduces to that of a one-dimensional ferromagnetic wire, but with curvature, torsion and local anisotropy modified by the rate of turning. These general results are applied to two examples, namely a helicoid ribbon, for which the central curve is a straight line, and a M\"obius ribbon, for which the central curve is a circle about which the line segment executes a $180^\circ$ twist. In both examples, for large positive tangential anisotropy, the ground state magnetization lies tangent to the central curve. As the tangential anisotropy is decreased, the ground state magnetization undergoes a transition, acquiring an in-surface component perpendicular to the central curve. For the helicoid ribbon, the transition occurs at vanishing anisotropy, below which the ground state is uniformly perpendicular to the central curve. The transition for the M\"obius ribbon is more subtle; it occurs at a positive critical value of the anisotropy, below which the ground state is nonuniform. For the helicoid ribbon, the dispersion law for spin wave excitations about the tangential state is found to exhibit an asymmetry determined by the geometric and magnetic chiralities.
The silicon-strip tracker of the China Seismo-Electromagnetic Satellite (CSES) consists of two double-sided silicon strip detectors (DSSDs) which provide incident particle tracking information. The low-noise analog ASIC VA140 was used in this study for DSSD signal readout. A beam test on the DSSD module was performed at the Beijing Test Beam Facility of the Beijing Electron Positron Collider (BEPC) using a 400-800 MeV/c proton beam. The pedestal analysis results, RMSE noise, gain correction, and particle incident position reconstruction of the DSSD module are presented.
Let $\Sigma_g$ denote the closed orientable surface of genus $g$ and fix an arbitrary simplicial triangulation of $\Sigma_g$. We construct and study a natural surjective group homomorphism from the surface braid group on $n$ strands on $\Sigma_g$ to the first singular homology group of $\Sigma_g$ with integral coefficients. In particular, we show that the kernel of this homomorphism is generated by canonical braids which arise from the triangulation of $\Sigma_g$. This provides a simple description of natural subgroups of surface braid groups which are closely tied to the homology groups of the surfaces $\Sigma_g$.
We study the abelianization of Kontsevich's Lie algebra associated with the Lie operad and some related problems. Calculating the abelianization is a long-standing unsolved problem, which is important in at least two different contexts: constructing cohomology classes in $H^k(\mathrm{Out}(F_r);\mathbb Q)$ and related groups as well as studying the higher order Johnson homomorphism of surfaces with boundary. The abelianization carries a grading by "rank," with previous work of Morita and Conant-Kassabov-Vogtmann computing it up to rank $2$. This paper presents a partial computation of the rank $3$ part of the abelianization, finding lots of irreducible $\mathrm{SP}$-representations with multiplicities given by spaces of modular forms. Existing conjectures in the literature on the twisted homology of $\mathrm{SL}_3(\mathbb Z)$ imply that this gives a full account of the rank $3$ part of the abelianization in even degrees.
In the present work we study non-thermal leptogenesis and baryon asymmetry in the universe in different neutrino mass models discussed recently. For each model we obtain a formula relating the reheating temperature after inflation to the inflaton mass. It is shown that all but four cases are excluded and that in the cases which survive the inflaton mass and the reheating temperature after inflation are bounded from below and from above.
The theory of a massless two-dimensional scalar field with a periodic boundary interaction is considered. At a critical value of the period this system defines a conformal field theory and can be re-expressed in terms of free fermions, which provide a simple realization of a hidden $SU(2)$ symmetry of the original theory. The partition function and the boundary $S$-matrix can be computed exactly as a function of the strength of the boundary interaction. We first consider open strings with one interacting and one Dirichlet boundary, and then with two interacting boundaries. The latter corresponds to motion in a periodic tachyon background, and the spectrum exhibits an interesting band structure which interpolates between free propagation and tight binding as the interaction strength is varied.
Matrices whose adjoint is a low rank perturbation of a rational function of the matrix naturally arise when trying to extend the well known Faber-Manteuffel theorem, which provides necessary and sufficient conditions for the existence of a short Arnoldi recurrence. We show that an orthonormal Krylov basis for this class of matrices can be generated by a short recurrence relation based on GMRES residual vectors. These residual vectors are computed by means of an updating formula. Furthermore, the underlying Hessenberg matrix has an accompanying low rank structure, which we will investigate closely.
We propose a geometric algorithm for topic learning and inference that is built on the convex geometry of topics arising from the Latent Dirichlet Allocation (LDA) model and its nonparametric extensions. To this end we study the optimization of a geometric loss function, which is a surrogate to the LDA's likelihood. Our method involves a fast optimization based weighted clustering procedure augmented with geometric corrections, which overcomes the computational and statistical inefficiencies encountered by other techniques based on Gibbs sampling and variational inference, while achieving the accuracy comparable to that of a Gibbs sampler. The topic estimates produced by our method are shown to be statistically consistent under some conditions. The algorithm is evaluated with extensive experiments on simulated and real data.
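As a highly simplified sketch of the geometric viewpoint (documents as points near a topic simplex whose vertices are estimated by clustering), one could proceed as below. The weighting scheme and the geometric corrections described in the abstract are not reproduced, and sklearn's KMeans stands in for the paper's tailored clustering procedure; all names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def estimate_topics(doc_term_counts, n_topics, weight_by_length=True):
    """Estimate topic distributions as cluster centroids of normalized documents.

    doc_term_counts : (n_docs, vocab_size) matrix of word counts.
    Returns an (n_topics, vocab_size) matrix of nonnegative rows summing to 1.
    """
    lengths = doc_term_counts.sum(axis=1, keepdims=True)
    X = doc_term_counts / np.maximum(lengths, 1)           # points on the simplex
    weights = lengths.ravel() if weight_by_length else None
    km = KMeans(n_clusters=n_topics, n_init=10, random_state=0)
    km.fit(X, sample_weight=weights)                       # weighted clustering step
    topics = np.clip(km.cluster_centers_, 0, None)
    return topics / topics.sum(axis=1, keepdims=True)      # renormalize rows
```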
We consider waves, which obey the semilinear Klein-Gordon equation, propagating in the Friedmann-Lemaitre-Robertson-Walker spacetimes. The equations in the de Sitter and Einstein-de Sitter spacetimes are important particular cases. We show the global-in-time existence of solutions of the Cauchy problem in the energy class.
An important question in the derivation of the acceleration radiation, which also arises in Hawking's derivation of black hole radiance, is the need to invoke trans-Planckian physics for the quantum field that originates the created quanta. We point out that this issue can be further clarified by reconsidering the analysis in terms of particle detectors, transition probabilities, and local two-point functions. By writing down separate expressions for the spontaneous- and induced-transition probabilities of a uniformly accelerated detector, we show that the bulk of the effect comes from the natural (non trans-Planckian) scale of the problem, which largely diminishes the importance of the trans-Planckian sector. This is so, at least, when trans-Planckian physics is defined in a Lorentz invariant way. This analysis also suggests how to define and estimate the role of trans-Planckian physics in the Hawking effect itself.
A long-standing issue in mathematical finance is the speed-up of pricing options, especially multi-asset options. A recent study has proposed to use tensor train learning algorithms to speed up Fourier transform (FT)-based option pricing, utilizing the ability of tensor networks to compress high-dimensional tensors. Another usage of the tensor network is to compress functions, including their parameter dependence. In this study, we propose a pricing method, where, by a tensor learning algorithm, we build tensor trains that approximate functions appearing in FT-based option pricing with their parameter dependence and efficiently calculate the option price for the varying input parameters. As a benchmark test, we run the proposed method to price a multi-asset option for the various values of volatilities and present asset prices. We show that, in the tested cases involving up to 11 assets, the proposed method is comparable to or outperforms Monte Carlo simulation with $10^5$ paths in terms of computational complexity, keeping the comparable accuracy.
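The tensor-train format itself can be illustrated with the classical TT-SVD compression of a full tensor of precomputed values; the paper instead learns the cores without ever forming the full tensor, so the sketch below (with illustrative names and a simple relative-tolerance truncation) only conveys the data structure, not the authors' learning algorithm.

```python
import numpy as np

def tt_svd(tensor, rel_tol=1e-8):
    """Decompose a full numpy tensor into tensor-train cores by sequential SVD.

    Returns a list of cores, cores[k] of shape (r_{k-1}, n_k, r_k), whose
    contraction reproduces the tensor up to the requested tolerance.
    """
    shape = tensor.shape
    d = tensor.ndim
    cores, r_prev = [], 1
    mat = tensor.reshape(shape[0], -1)
    for k in range(d - 1):
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        keep = max(1, int(np.sum(S > rel_tol * S[0])))      # heuristic rank truncation
        cores.append(U[:, :keep].reshape(r_prev, shape[k], keep))
        mat = (S[:keep, None] * Vt[:keep]).reshape(keep * shape[k + 1], -1)
        r_prev = keep
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores
```

Evaluating the compressed function at one multi-index then reduces to a short chain of small matrix-vector products, which is the source of the cheap repeated pricing for varying input parameters.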
We study Malliavin differentiability of solutions to sub-critical singular parabolic stochastic partial differential equations (SPDEs) and we prove the existence of densities for a class of singular SPDEs. Both of these results are implemented in the setting of regularity structures. For this we construct renormalized models in situations where some of the driving noises are replaced by deterministic Cameron-Martin functions, and we show Lipschitz continuity of these models with respect to the Cameron-Martin norm. In particular, in many interesting situations we obtain a convergence and stability result for lifts of $L^2$-functions to models, which is of independent interest. The proof also involves two separate algebraic extensions of the regularity structure which are carried out in rather large generality.
Let $R$ be a commutative additively idempotent semiring. In this paper, some properties and characterizations for permanents of matrices over $R$ are established, and several inequalities for permanents are given. Also, the adjoint matrices of matrices over $R$ are considered. Some of the results obtained in this paper generalize the corresponding ones on fuzzy matrices, lattice matrices, and incline matrices.
Modern smart home control systems utilize real-time occupancy and activity monitoring to ensure control efficiency, occupants' comfort, and optimal energy consumption. Moreover, adopting machine learning-based anomaly detection models (ADMs) enhances security and reliability. However, sufficient system knowledge allows adversaries/attackers to alter sensor measurements through stealthy false data injection (FDI) attacks. Although ADMs limit attack scopes, the availability of information like occupants' location, conducted activities, and the alteration capability of smart appliances increases the attack surface. Therefore, performing an attack space analysis of modern home control systems is crucial to designing robust defense solutions. However, state-of-the-art analyzers do not consider contemporary control and defense solutions and generate trivial attack vectors. To address this, we propose a novel control- and defense-aware attack analysis framework for a modern smart home control system that efficiently extracts ADM rules. We verify and validate our framework using a state-of-the-art dataset and a prototype testbed.
We conjecture an equivalence between the Gromov-Witten theory of 3-folds and the holomorphic Chern-Simons theory of Donaldson-Thomas. For Calabi-Yau 3-folds, the equivalence is defined by the change of variables, exp(iu)=-q, where u is the genus parameter of GW theory and q is the charge parameter of DT theory. The conjecture is proven for local Calabi-Yau toric surfaces.
The Ising exchange interaction is a limiting case of strong exchange anisotropy and represents a key property of many magnetic materials. Here we find necessary and sufficient conditions to achieve Ising exchange interaction for metal sites with unquenched orbital moments. Contrary to current views, the rules established here narrow much the range of lanthanide and actinide ions which can exhibit Ising exchange interaction. It is shown that the arising Ising interaction can be of two distinct types: (i) coaxial, with magnetic moments directed along the anisotropy axes on the metal sites and (ii) non-coaxial, with arbitrary orientation of one of magnetic moments. These findings will contribute to purposeful design of lanthanide and actinide based materials.
Measurements of the low-frequency (f<= 100 kHz) permittivity at T<= 160 K and dc resistivity (T<= 430 K) are reported for La(1-x)Ca(x)MnO(3) (0<= x<= 0.15). Static dielectric constants are determined from the low-T limiting behavior of the permittivity. The estimated polarizability for bound holes ~ 10^{-22} cm^{3} implies a radius comparable to the interatomic spacing, consistent with the small polaron picture established from prior transport studies near room temperature and above on nearby compositions. Relaxation peaks in the dielectric loss associated with charge-carrier hopping yield activation energies in good agreement with low-T hopping energies determined from variable-range hopping fits of the dc resistivity. The doping dependence of these energies suggests that the orthorhombic, canted antiferromagnetic ground state tends toward an insulator-metal transition that is not realized due to the formation of the ferromagnetic insulating state near Mn(4+) concentration ~ 0.13.
We study the problem of designing interval-valued observers that simultaneously estimate the system state and learn an unknown dynamic model for partially unknown nonlinear systems with dynamic unknown inputs and bounded noise signals. Leveraging affine abstraction methods and the existence of nonlinear decomposition functions, as well as applying our previously developed data-driven function over-approximation/abstraction approach to over-estimate the unknown dynamic model, our proposed observer recursively computes the maximal and minimal elements of the estimate intervals that are proven to contain the true augmented states. Then, using observed output/measurement signals, the observer iteratively shrinks the intervals by eliminating estimates that are not compatible with the measurements. Finally, given new interval estimates, the observer updates the over-approximation of the unknown model dynamics. Moreover, we provide sufficient conditions for uniform boundedness of the sequence of estimate interval widths, i.e., stability of the designed observer, in the form of tractable (mixed-)integer programs with finitely countable feasible sets.
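The interval-propagation step can be illustrated in the special case of a known linear map with bounded additive noise; the decomposition functions, the learned over-approximation of the unknown dynamics, and the measurement-based shrinking of the actual observer are not reproduced here, and all names are illustrative.

```python
import numpy as np

def propagate_interval(A, x_min, x_max, w_bound):
    """One-step interval propagation for x+ = A x + w with |w| <= w_bound.

    Uses the standard elementwise split A = A_plus - A_minus so that the
    returned box is guaranteed to contain A x + w for every x in [x_min, x_max].
    """
    A_plus = np.maximum(A, 0.0)
    A_minus = np.maximum(-A, 0.0)
    x_next_min = A_plus @ x_min - A_minus @ x_max - w_bound
    x_next_max = A_plus @ x_max - A_minus @ x_min + w_bound
    return x_next_min, x_next_max

# A subsequent measurement y = C x with bounded noise would shrink this box by
# intersecting it with the set of states compatible with y (omitted here).
```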
The goal of the paper is to lay the foundation for the qualitative analogue of the classical, quantitative sparse graph limit theory. In the first part of the paper we introduce the qualitative analogues of the Benjamini-Schramm and local-global graph limit theories for sparse graphs. The natural limit objects are continuous actions of finitely generated groups on totally disconnected compact metric spaces. We prove that the space of weak equivalent classes of free Cantor actions is compact and contains a smallest element, as in the measurable case. We will introduce and study various notions of almost finiteness, the qualitative analogue of hyperfiniteness, for classes of bounded degree graphs. We prove the almost finiteness of a new class of \'etale groupoids associated to Cantor actions and construct an example of a nonamenable, almost finite totally disconnected \'etale groupoid, answering a query of Suzuki. Motivated by the notions and results on qualitative graph limits, in the second part of our paper we give a precise definition of constant-time distributed algorithms on sparse graphs. We construct such constant-time algorithms for various approximation problems for hyperfinite and almost finite graph classes. We also prove the Hausdorff convergence of the spectra of convergent graph sequences in the strongly almost finite category.