The region of high baryonic densities of the QCD phase diagram is the object of several studies focused on the order of the phase transition and the search for the critical point. The rare probes, which include electromagnetic observables and heavy-quark production, are experimentally challenging to access, as they require large integrated luminosities that can be reached with fixed-target experiments. A future experiment, NA60+ at CERN, is being proposed to access this region and perform accurate measurements of the dimuon spectrum up to the charmonium region, as well as of charm and strange hadrons. With its high beam intensity, the CERN SPS can cover the center-of-mass collision energy region from 6 to 17 GeV, providing access to rare observables that have been scarcely studied until now. The proposed experiment includes a muon spectrometer based on tracking gas detectors coupled to a vertex spectrometer based on Si detectors. The first data taking, with Pb and proton beams, is targeted for the period after Long Shutdown 3 of the LHC (beyond 2029). In this contribution, we review the project and the recent R&D effort, including the technical aspects and the studies of the physics performance for the target observables.
The first-order perturbations of the energy levels of a hydrogen atom in the central internal gravitational field are investigated. The internal gravitational field is produced by the mass of the atomic nucleus. The energy shifts are calculated for the relativistic 1S, 2S, 2P, 3S, 3P, 3D, 4S and 4P levels using the Schwarzschild metric. The results show that the gravitational corrections are sensitive to the total angular momentum quantum number.
Let $H:\mathbb{C}^2\to\mathbb{C}^2$ be the H\'enon mapping given by $(x,y)\mapsto(p(x)-ay,\,x)$. The key invariant subsets are $K_\pm$, the sets of points with bounded forward (resp. backward) orbits; $J_\pm=\partial K_\pm$; $J=J_+\cap J_-$; and $K=K_+\cap K_-$. In this paper we identify the topological structure of these sets when $p$ is hyperbolic and $|a|$ is sufficiently small, i.e., when $H$ is a small perturbation of the polynomial $p$. The description involves projective and inductive limits of objects defined in terms of $p$ alone.
Eleven-dimensional supergravity can be formulated in superspaces locally of the form $\mathbf X\times Y$ where $\mathbf X$ is 4D $N=1$ conformal superspace and $Y$ is an arbitrary 7-manifold admitting a $G_2$-structure. The eleven-dimensional 3-form and the stable 3-form on $Y$ define the lowest component of a gauge superfield on $\mathbf X \times Y$ that is chiral as a superfield on $\mathbf X$. This chiral field is part of a tensor hierarchy giving rise to a superspace Chern-Simons action and its real field strength defines a lifting of the Hitchin functional on $Y$ to the $G_2$ superspace $\mathbf X\times Y$. These terms are those of lowest order in a superspace Noether expansion in seven $N=1$ conformal gravitino superfields $\Psi$. In this paper, we compute the $O(\Psi)$ action to all orders in the remaining fields. The eleven-dimensional origin of the resulting non-linear structures is parameterized by the choice of a complex spinor on $Y$ encoding the off-shell 4D $N=1$ subalgebra of the eleven-dimensional super-Poincare algebra.
We discuss the implementation and properties of the quenched approximation in the calculation of the left-right, strong penguin contributions (i.e. Q_6) to epsilon'/epsilon. The coefficient of the new chiral logarithm, discovered by Golterman and Pallante, which appears at leading order in quenched chiral perturbation theory is evaluated using both the method proposed by those authors and by an improved approach which is free of power divergent corrections. The result implies a large quenching artifact in the contribution of Q_6 to epsilon'/epsilon. This failure of the quenched approximation affects only the strong penguin operators and so does not affect the Q_8 contribution to epsilon'/epsilon nor Re A_0, Re A_2 and thus the Delta I=1/2 rule at tree level in chiral perturbation theory.
In this paper, we show the weak and strong well-posedness of density-dependent stochastic differential equations driven by $\alpha$-stable processes with $\alpha\in(1,2)$. The existence part is based on Euler's approximation as in \cite{HRZ20}, while the uniqueness is based on Schauder estimates in Besov spaces for nonlocal Fokker-Planck equations. For existence, we only assume that the drift is continuous in the density variable. For weak uniqueness, the drift is assumed to be Lipschitz in the density variable, while for strong uniqueness, we additionally assume that the drift is $\beta_0$-order H\"older continuous in the spatial variable, where $\beta_0\in(1-\alpha/2,1)$.
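A minimal numerical sketch of the Euler approximation underlying the existence argument (with an assumed placeholder drift; the density dependence of the paper's equations is omitted here and would require, e.g., an interacting-particle approximation on top of this):

```python
import numpy as np
from scipy.stats import levy_stable

# Illustrative Euler scheme for dX_t = b(X_t) dt + dL_t, where L is a
# symmetric alpha-stable process with alpha in (1, 2). Toy sketch only.
alpha, T, n_steps = 1.5, 1.0, 1000
dt = T / n_steps
rng = np.random.default_rng(0)

def b(x):
    return -x                      # placeholder drift, continuous in x

# stable increments over a time step dt scale like dt**(1/alpha)
dL = dt ** (1 / alpha) * levy_stable.rvs(alpha, 0.0, size=n_steps,
                                         random_state=rng)
x = np.empty(n_steps + 1)
x[0] = 0.0
for k in range(n_steps):
    x[k + 1] = x[k] + b(x[k]) * dt + dL[k]
```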
Let $A, B\subseteq \mathbb{Z}$ be finite, nonempty subsets with $\min A=\min B=0$, and let $$\delta(A,B)=\begin{cases} 1 & \text{if } A\subseteq B,\\ 0 & \text{otherwise.}\end{cases}$$ If $\max B\leq \max A\leq |A|+|B|-3$ and $$\label{one}|A+B|\leq |A|+2|B|-3-\delta(A,B),$$ then we show $A+B$ contains an arithmetic progression with difference 1 and length $|A|+|B|-1$. As a corollary, if \eqref{one} holds, $\max(B)\leq \max(A)$ and either $\gcd(A)=1$ or else $\gcd(A+B)=1$ and $|A+B|\leq 2|A|+|B|-3$, then $A+B$ contains an arithmetic progression with difference 1 and length $|A|+|B|-1$.
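The statement can be spot-checked by brute force on small sets; the sketch below (an illustration, not part of the proof) verifies that whenever the hypotheses hold, the sumset contains a run of $|A|+|B|-1$ consecutive integers:

```python
from itertools import combinations

def sumset(A, B):
    return {a + b for a in A for b in B}

def longest_run(S):
    # length of the longest block of consecutive integers in S
    S = sorted(S)
    best = run = 1
    for x, y in zip(S, S[1:]):
        run = run + 1 if y == x + 1 else 1
        best = max(best, run)
    return best

universe = range(1, 7)
sets = [frozenset({0}) | frozenset(c)          # all sets with min = 0
        for r in range(len(universe) + 1)
        for c in combinations(universe, r)]
for A in sets:
    for B in sets:
        delta = 1 if A <= B else 0             # delta(A,B) = 1 iff A ⊆ B
        C = sumset(A, B)
        if (max(B) <= max(A) <= len(A) + len(B) - 3
                and len(C) <= len(A) + 2 * len(B) - 3 - delta):
            assert longest_run(C) >= len(A) + len(B) - 1
```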
The Ba\~{n}ados-Silk-West (BSW) effect consists in the possibility of obtaining an arbitrarily large energy $E_{c.m.}$ in the centre-of-mass frame of two particles colliding near the black hole horizon. A common belief was that the action of a force on these particles (say, due to gravitational radiation) should necessarily restrict the growth of $E_{c.m.}$. We consider extremal horizons, develop a model-independent approach, and analyze the conditions for the force to preserve or kill the effect, using frames attached both to observers orbiting the black hole and to observers crossing the horizon. We argue that the aforementioned expectations are not confirmed: under rather general assumptions, the BSW effect survives. For equatorial motion it is only required that, in the proper frame, the radial component of the force be finite, while the azimuthal one tends to zero not too slowly. If the latter condition is violated, we evaluate $E_{c.m.}$, which indeed becomes restricted but remains very large for small forces.
We obtain some results about repeated exponentiation modulo a prime power from the viewpoint of arithmetic dynamical systems. In particular, we extend two asymptotic formulas about periodic points and tails from the case of a prime modulus to the case of a prime-power modulus.
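For readers who want to experiment, the sketch below explores the functional graph of one concrete repeated-exponentiation map, $x \mapsto x^x \pmod{p^k}$; this choice of map is an assumption made for illustration and need not be the exact map studied in the paper:

```python
# Count periodic points and measure tail lengths of the functional graph
# of the map x -> x^x (mod m), here with m a prime power.
def orbit_stats(m):
    f = [pow(x, x, m) for x in range(m)]
    tails, periodic = {}, set()
    for x0 in range(m):
        seen, x, steps = {}, x0, 0
        while x not in seen:
            seen[x] = steps
            x, steps = f[x], steps + 1
        tails[x0] = seen[x]              # steps before entering a cycle
        periodic |= {y for y, s in seen.items() if s >= seen[x]}
    return len(periodic), tails

p, k = 3, 3
n_periodic, tails = orbit_stats(p ** k)
print(f"mod {p}^{k}: {n_periodic} periodic points, "
      f"mean tail length {sum(tails.values()) / len(tails):.2f}")
```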
Within the ``anisotropic relaxation time'' approach to transport processes, I obtain a new geometric formula for the weak-field Hall conductivity of metals, which generalizes previously known formulas due to Tsuji (restricted to cubic 3D metals) and Ong (for 2D metals).
To understand the dissipation in a rotating flow when resonance occurs, we study rotating flow driven by a harmonic force in a periodic box, in both the linear and nonlinear regimes. The effects of the force amplitude $a$, the force frequency $\omega$, the force wavenumber $k$, and the Ekman number $E$ are investigated. In the linear regime, the dissipation at the resonant frequency scales as $E^{-1}k^{-2}$, and it is much stronger than the dissipation at non-resonant frequencies on large scales and at low Ekman numbers. In the nonlinear regime, the effective dissipation at the resonant frequency (dissipation normalised by the square of the force amplitude) is lower than in the linear regime and decreases with increasing force amplitude. This nonlinear suppression effect is significant near the resonant frequency but negligible far away from it. In contrast to the linear regime, in the nonlinear regime at the resonant frequency a lower Ekman number leads to lower dissipation because of the stronger nonlinear effect. This work implies that previous linear calculations overestimated the tidal dissipation, which is important for understanding tides in stars and giant planets.
We propose a perception imitation method to simulate the results of a given perception model, and discuss a new heuristic route to autonomous driving simulation that requires no data synthesis. The motivation is that original sensor data are not always necessary for tasks such as planning and control once semantic perception results are available, so simulating perception directly is more economical and efficient. In this work, a series of evaluation methods, such as a matching metric and the performance of downstream tasks, are used to examine the simulation quality. Experiments show that our method effectively models the behavior of learning-based perception models and can be applied smoothly in the proposed simulation route.
A summary of the available semi-analytical results for the three-loop corrections to the QCD static potential and for the $\mathcal{O}(\alpha_s^4)$ contributions to the ratio of the running and pole heavy-quark masses is presented. We describe a procedure, based on the least-squares method, for determining how the four-loop contribution to the pole-running heavy-quark mass ratio depends on the number of quark flavours. We emphasise the need to clarify the discrepancy between the numerical uncertainties of the $\alpha_s^4$ coefficients in the mass ratio obtained by this mathematical method and those obtained by direct numerical calculations.
Machine Learning (ML) is increasingly used across many disciplines with impressive reported results. However, recent studies suggest that the published performance of ML models is often overoptimistic. Validity concerns are underscored by findings of an inverse relationship between sample size and reported accuracy in published ML models, contrasting with the theory of learning curves, where accuracy should improve or remain stable with increasing sample size. This paper investigates factors contributing to overoptimism in ML-driven science, focusing on overfitting and publication bias. We introduce a novel stochastic model for observed accuracy, integrating parametric learning curves and the aforementioned biases, and construct an estimator that corrects for these biases in observed data. Theoretical and empirical results show that our framework can estimate the underlying learning curve, providing realistic performance assessments from published results. Applying the model to meta-analyses of classifications of neurological conditions, we estimate the inherent limits of ML-based prediction in each domain.
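The following toy simulation (illustrative only, not the authors' estimator) shows how a parametric learning curve combined with sample-size-dependent evaluation noise and a publication threshold produces the inverse relationship between sample size and published accuracy described above:

```python
import numpy as np

rng = np.random.default_rng(1)

def true_acc(n, a=0.85, b=0.6, c=0.5):
    return a - b * n ** (-c)          # saturating parametric learning curve

published_n, published_acc = [], []
for n in rng.integers(20, 2000, size=5000):
    # finite-sample evaluation noise shrinks with sample size
    observed = true_acc(n) + rng.normal(0, 0.5 / np.sqrt(n))
    if observed > 0.80:               # only "good" results get published
        published_n.append(n)
        published_acc.append(observed)

# published small-n results overshoot the true curve the most
small = [a for n, a in zip(published_n, published_acc) if n < 100]
print(np.mean(small) - true_acc(50))
```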
Haros graphs have been recently introduced as a set of graphs bijectively related to the real numbers in the unit interval. Here we consider the iterated dynamics of a graph operator $\cal R$ over the set of Haros graphs. This operator was previously defined in the realm of graph-theoretical characterisation of low-dimensional nonlinear dynamics, and has a renormalization group (RG) structure. We find that the dynamics of $\cal R$ over Haros graphs is complex and includes unstable periodic orbits of arbitrary period and non-mixing aperiodic orbits, overall portraying a chaotic RG flow. We identify a single stable RG fixed point whose basin of attraction is the set of rational numbers, associate periodic RG orbits with (pure) quadratic irrationals, and associate aperiodic RG orbits with (non-mixing) families of non-quadratic algebraic irrationals and transcendental numbers. Finally, we show that the entropy gradients inside periodic RG orbits are constant. We discuss the possible physical interpretation of such a chaotic RG flow and speculate on the entropy-constant periodic orbits as a possible confirmation of a (quantum field-theoretic) $c$-theorem applied inside the invariant set of an RG flow.
We give a new proof of the free transportation cost inequality for measures on the circle following M. Ledoux's idea.
The wave plate is a basic device for transforming and measuring the polarization states of light. It is known that transforming light by means of two wave plates makes it possible to measure the state of polarization in an arbitrary basis. The finite spectral width of the light, however, leads to a chromatic aberration of the polarization transformation, caused by the parasitic dispersion of the birefringence of the plate material. This causes systematic errors in the tomography of quantum polarization states and significantly reduces its accuracy. This study develops our earlier work [1], in which an adequate model for quantum measurements of polarization qubits under chromatic aberration was first formulated, and generalizes the results obtained there to two-qubit states. Along with specific examples, random states uniformly distributed with respect to the Haar measure are considered. Using the complete information matrix, we trace quantitatively how chromatic aberration under conditions of finite spectral width leads to a loss of information in quantum measurements. It is shown that using the developed fuzzy-measurement model instead of the standard projection-measurement model makes it possible to suppress systematic errors of quantum tomography even when high-order wave plates are used. The fuzzy-measurement model can thus give a significant increase in reconstruction accuracy compared to the standard measurement model.
Purpose: The optic nerve head (ONH) undergoes complex and deep 3D morphological changes during the development and progression of glaucoma. Optical coherence tomography (OCT) is the current gold standard to visualize and quantify these changes; however, the resulting 3D deep-tissue information has not yet been fully exploited for the diagnosis and prognosis of glaucoma. To this end, we aimed: (1) To compare the performance of two relatively recent geometric deep learning techniques in diagnosing glaucoma from a single OCT scan of the ONH; and (2) To identify the 3D structural features of the ONH that are critical for the diagnosis of glaucoma. Methods: In this study, we included a total of 2,247 non-glaucoma and 2,259 glaucoma scans from 1,725 subjects. All subjects had their ONHs imaged in 3D with Spectralis OCT. All OCT scans were automatically segmented using deep learning to identify major neural and connective tissues. Each ONH was then represented as a 3D point cloud. We used PointNet and dynamic graph convolutional neural network (DGCNN) to diagnose glaucoma from such 3D ONH point clouds and to identify the critical 3D structural features of the ONH for glaucoma diagnosis. Results: Both the DGCNN (AUC: 0.97$\pm$0.01) and PointNet (AUC: 0.95$\pm$0.02) were able to accurately detect glaucoma from 3D ONH point clouds. The critical points formed an hourglass pattern with most of them located in the inferior and superior quadrants of the ONH. Discussion: The diagnostic accuracy of both geometric deep learning approaches was excellent. Moreover, we were able to identify the critical 3D structural features of the ONH for glaucoma diagnosis that tremendously improved the transparency and interpretability of our method. Consequently, our approach may have strong potential to be used in clinical applications for the diagnosis and prognosis of a wide range of ophthalmic disorders.
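For orientation, a minimal PointNet-style point-cloud classifier of the kind referred to above might look as follows (a simplified sketch with assumed sizes, not the study's actual network):

```python
import torch
import torch.nn as nn

# Pointwise shared MLPs, a symmetric max-pool over points (giving
# permutation invariance), and an MLP classification head.
class TinyPointNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.mlp = nn.Sequential(              # shared across points
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(1024, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, pts):                    # pts: (batch, 3, n_points)
        feat = self.mlp(pts)                   # (batch, 1024, n_points)
        glob = feat.max(dim=2).values          # order-invariant pooling
        return self.head(glob)

logits = TinyPointNet()(torch.randn(4, 3, 2048))  # e.g. 2048 ONH points
```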
We revisit a correspondence between toroidal compactifications of M-theory and del Pezzo surfaces, in which rational curves on the del Pezzo are related to ${1\over 2}$-BPS branes of the corresponding compactification. We argue that curves of higher genus correspond to non-geometric backgrounds of the M-theory compactifications, which are related to exotic branes. In particular, the number of "special directions" of the exotic brane is equal to the genus of the corresponding curve. We also point out a relation between addition of curves in the del Pezzo and the brane polarization effect.
In ESOP 2008, Gulwani and Musuvathi introduced a notion of cover and exploited it to handle infinite-state model checking problems. Motivated by applications to the verification of data-aware processes, we proved in a previous paper that covers are strictly related to model completions, a well-known topic in model theory. In this paper we investigate cover transfer to theory combinations in the disjoint signatures case. We prove that for convex theories, cover algorithms can be transferred to theory combinations under the same hypothesis (equality interpolation property aka strong amalgamation property) needed to transfer quantifier-free interpolation. In the non-convex case, we show by a counterexample that covers may not exist in the combined theories, even in case combined quantifier-free interpolants do exist. However, we exhibit a cover transfer algorithm operating also in the non-convex case for special kinds of theory combinations; these combinations (called `tame combinations') concern multi-sorted theories arising in many model-checking applications (in particular, the ones oriented to verification of data-aware processes).
We investigate the evolution and origin of small-scale chromospheric swirls by analyzing numerical simulations of the quiet solar atmosphere, using the radiative magnetohydrodynamic code CO$^5$BOLD. We are interested in finding their relation with magnetic field perturbations and in the processes driving their evolution. For the analysis, the swirling strength criterion and its evolution equation are applied in order to identify vortical motions and to study their dynamics. We introduce a new criterion, the magnetic swirling strength, which allows us to recognize torsional perturbations in the magnetic field. We find a strong correlation between swirling strength and magnetic swirling strength, in particular in intense magnetic flux concentrations, which suggests a tight relation between vortical motions and torsional magnetic field perturbations. Furthermore, we find that swirls propagate upward with the local Alfv\'en speed as unidirectional swirls, in the form of pulses, driven by magnetic tension forces alone. In the photosphere and low chromosphere, the rotation of the plasma co-occurs with a twist in the upwardly directed magnetic field that is in the opposite direction of the plasma flow. All together, these are characteristics of torsional Alfv\'en waves. We also find indications of an imbalance between the hydrodynamic and magnetohydrodynamic baroclinic effects being at the origin of the swirls. At the base of the chromosphere, we find a net upwardly directed Poynting flux, which is mostly associated with large and complex swirling structures that we interpret as the superposition of various small-scale vortices. We conclude that the ubiquitous swirling events observed in simulations are tightly correlated with perturbations of the magnetic field. At photospheric and chromospheric levels, they form Alfv\'en pulses that propagate upward and may contribute to chromospheric heating.
This study examines a new formulation of non-equilibrium thermodynamics, which gives a conditional derivation of the ``maximum entropy production'' (MEP) principle for flow and/or chemical reaction systems at steady state. The analysis uses a dimensionless potential function $\phi_{st}$ for non-equilibrium systems, analogous to the free energy concept of equilibrium thermodynamics. Spontaneous reductions in $\phi_{st}$ arise from increases in the ``flux entropy'' of the system - a measure of the variability of the fluxes - or in the local entropy production; conditionally, depending on the behaviour of the flux entropy, the formulation reduces to the MEP principle. The inferred steady state is also shown to exhibit high variability in its instantaneous fluxes and rates, consistent with the observed behaviour of turbulent fluid flow, heat convection and biological systems; one consequence is the coexistence of energy producers and consumers in ecological systems. The different paths for attaining steady state are also classified.
We study the phase diagram of the superconducting vortex system in layered high-temperature superconductors in the presence of a magnetic field perpendicular to the layers and of random atomic-scale point pinning centers. We consider the highly anisotropic limit where the pancake vortices on different layers are coupled only by their electromagnetic interaction. The free energy of the vortex system is then represented as a Ramakrishnan-Yussouff free energy functional of the time-averaged vortex density. We numerically minimize this functional and examine the properties of the resulting phases. We find that, in the temperature ($T$) -- pinning strength ($s$) plane at constant magnetic induction, the equilibrium phase at low $T$ and $s$ is a Bragg glass. As one increases $s$ or $T$, a first-order phase transition occurs to another phase that we characterize as a pinned vortex liquid. The weakly pinned vortex liquid obtained for high $T$ and small $s$ smoothly crosses over to the strongly pinned vortex liquid as $T$ is decreased or $s$ increased -- we do not find evidence for the existence, in thermodynamic equilibrium, of a distinct vortex glass phase in the range of pinning parameters considered here. We present results for the density correlation functions, the density and defect distributions, and the local field distribution accessible via $\mu$SR experiments. These results are compared with those of existing theoretical, numerical and experimental studies.
The objective of this article is to treat the Schr\"{o}dinger equation in parallel with a standard treatment of the heat equation. In the mathematics literature, the heat equation initial value problem is converted into a Volterra integral equation of the second kind, and then the Picard algorithm is used to find the exact solution of the integral equation. The Poisson integral theorem shows that the Poisson integral formula with the Schr\"{o}dinger kernel holds in the Abel summable sense. Furthermore, the source integral theorem provides the solution of the initial value problem for the nonhomogeneous Schr\"{o}dinger equation. Folland's proof of the generalized Young's inequality is used as a model for the proof of the $L^p$ lemma; the generalized Young's theorem is used in a more general form in which the functions take values in an arbitrary Banach space. The $L^1$, $L^p$ and $L^\infty$ lemmas are applied inductively in the proofs of their respective Volterra theorems in order to prove that the Neumann series converge with respect to the topology of $L^p(I;B)$, where $I$ is a finite time interval, $B$ is an arbitrary Banach space, and $1 \le p \le \infty$. The Picard method of successive approximations is used to construct an approximate solution which approaches the exact solution as $n\to\infty$. To prove convergence, Volterra kernels are introduced in arbitrary Banach spaces. The Volterra theorems are proved and applied in order to show that the Neumann series for the Hilbert-Schmidt kernel and the unitary kernel converge to the exact Green function.
It is shown that, for the bi-harmonic equation, an optimal regularity criterion for the vertex of typical paraboloids can be expressed in terms of Osgood-Dini integral conditions of Petrovskii's type, as derived for the heat equation in 1934. Some extensions of the first Fourier coefficient method to other PDEs are discussed. A survey on boundary point regularity for elliptic and parabolic PDEs is included.
In this paper, we show the orbital stability of solitons arising in the cubic derivative nonlinear Schr\"odinger equations. We consider the zero-mass case that is not covered by earlier works [8, 3]. As this case enjoys $L^2$ scaling invariance, we expect orbital stability up to scaling symmetry, in addition to spatial and phase translations. The proof is based on a variational argument, extending a similar argument in [21]. Moreover, we also show a self-similar-type blow-up criterion for solutions with the critical mass $4\pi$.
An algorithm for the integration of polynomial functions with a variable weight is considered. It provides an extension of Gaussian integration, with appropriate scaling of the abscissas and weights. The method is a good alternative to the usually adopted interval splitting.
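The basic scaling referred to here is the standard rescaling of Gauss-Legendre abscissas and weights from $[-1,1]$ to $[a,b]$, sketched below for a fixed weight (the paper's variable-weight extension is not reproduced):

```python
import numpy as np

def gauss_on_interval(f, a, b, n=16):
    x, w = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    xs = 0.5 * (b - a) * x + 0.5 * (b + a)     # scaled abscissas
    ws = 0.5 * (b - a) * w                     # scaled weights
    return np.sum(ws * f(xs))

# one n-point rule over [0, 4] vs. splitting into four unit subintervals
whole = gauss_on_interval(np.exp, 0.0, 4.0)
split = sum(gauss_on_interval(np.exp, k, k + 1.0) for k in range(4))
exact = np.e ** 4 - 1                          # integral of exp on [0, 4]
print(whole - exact, split - exact)
```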
Detection of pathologies is a fundamental task in medical imaging, and the evaluation of algorithms that can perform this task automatically is crucial. However, current object detection metrics for natural images do not sufficiently reflect the specific clinical requirements of pathology detection. To tackle this problem, we propose Robust Detection Outcome (RoDeO), a novel metric for evaluating algorithms for pathology detection in medical images, especially in chest X-rays. RoDeO evaluates different errors directly and individually, and reflects clinical needs better than current metrics. Extensive evaluation on the ChestX-ray8 dataset shows the superiority of our metric compared to existing ones. We have released the code at https://github.com/FeliMe/RoDeO and published RoDeO as a pip package (rodeometric).
We calculate the entanglement of formation and the entanglement of distillation for arbitrary mixtures of the zero-spin states on an arbitrary-dimensional bipartite Hilbert space. Such states are relevant to quantum black holes and to communication based on decoherence-free subspaces. The two measures of entanglement are equal and scale logarithmically with the system size, and we discuss the relation of this scaling to the black hole entropy law. Moreover, these states are locally distinguishable but not locally orthogonal, thus violating a conjecture that the entanglement measures coincide only on locally orthogonal states. We propose a slightly weaker form of this conjecture. Finally, we generalize our entanglement analysis to any unitary group.
A number of tasks have been proposed recently to facilitate easy access to charts, such as chart QA and summarization. The dominant paradigm for these tasks has been to fine-tune a pretrained model on the task data. However, this approach is not only expensive but also not generalizable to unseen tasks. On the other hand, large language models (LLMs) have shown impressive generalization to unseen tasks with zero- or few-shot prompting. However, their application to chart-related tasks is not trivial, as these tasks typically involve considering not only the underlying data but also the visual features of the chart image. We propose PromptChart, a multimodal few-shot prompting framework with LLMs for chart-related applications. By analyzing the tasks carefully, we have come up with a set of prompting guidelines for each task to elicit the best few-shot performance from LLMs. We further propose a strategy to inject visual information into the prompts. Our experiments on three different chart-related information consumption tasks show that, with properly designed prompts, LLMs can excel on these benchmarks, achieving state-of-the-art results.
Let $Q$ be a quiver, $M$ a representation of $Q$ with an ordered basis $\mathcal{B}$, and $\underline{e}$ a dimension vector for $Q$. In this note we extend the methods of \cite{L12} to establish Schubert decompositions of quiver Grassmannians $\mathrm{Gr}_{\underline{e}}(M)$ into affine spaces in the ramified case, i.e.\ when the canonical morphism $F:T\to Q$ from the coefficient quiver $T$ of $M$ with respect to $\mathcal{B}$ is not necessarily unramified. In particular, we determine the Euler characteristic of $\mathrm{Gr}_{\underline{e}}(M)$ as the number of \emph{extremal successor closed subsets of $T_0$}, which extends the results of Cerulli Irelli (\cite{Cerulli11}) and Haupt (\cite{Haupt12}) (under certain additional assumptions on $\mathcal{B}$).
The TAIGA experimental complex is a hybrid observatory for high-energy gamma-ray astronomy in the range from 10 TeV to several EeV. The complex consists of installations such as TAIGA-IACT, TAIGA-HiSCORE, and a number of others. The TAIGA-HiSCORE facility is a set of wide-angle synchronized stations that detect Cherenkov radiation scattered over a large area. TAIGA-HiSCORE data make it possible to reconstruct shower characteristics, such as the shower energy, arrival direction, and axis coordinates. The main idea of this work is to apply convolutional neural networks to analyze HiSCORE events, considering them as images. The distribution of registration times and amplitudes of events recorded by the HiSCORE stations is used as input data. The paper presents the results of using convolutional neural networks to determine the characteristics of air showers. It is shown that even a simple convolutional neural network model recovers EAS parameters with an accuracy comparable to the traditional method. Preliminary results of air-shower parameter reconstruction obtained from a real experiment, and their comparison with the results of traditional analysis, are presented.
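A schematic of the approach (with an assumed station-grid size and output list, not the experiment's actual configuration) is a small CNN regressing shower parameters from a two-channel amplitude/time "image":

```python
import torch
import torch.nn as nn

# Per-station amplitudes and arrival times form the two input channels
# over the station grid; the network regresses shower parameters.
class ShowerCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # outputs: energy, direction (2 angles), axis coordinates (x, y)
        self.head = nn.Linear(32 * 4 * 4, 5)

    def forward(self, x):                      # x: (batch, 2, H, W)
        return self.head(self.features(x).flatten(1))

pred = ShowerCNN()(torch.randn(8, 2, 16, 16))  # 16x16 station grid assumed
```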
Let $f(z) = z+z^2+O(z^3)$ and $f_\epsilon(z) = f(z) + \epsilon^2$. A classical result in parabolic bifurcation in one complex variable is the following: if $N-\frac{\pi}{\epsilon}\to 0$, then $(f_\epsilon)^{N} \to \mathcal{L}_f$, where $\mathcal{L}_f$ is the Lavaurs map of $f$. In this paper we study a \textit{non-autonomous} parabolic bifurcation, focusing on the case $f_0(z)=\frac{z}{1-z}$. Given a sequence $\{\epsilon_i\}_{1\leq i\leq N}$, we denote $f_n(z) = f_0(z) + \epsilon_n^2$. We give necessary and sufficient conditions on the sequence $\{\epsilon_i\}$ that imply that $f_{N}\circ\cdots\circ f_{1} \to \mathrm{Id}$ (the Lavaurs map of $f_0$). We apply our results to prove a parabolic bifurcation phenomenon in two dimensions for a certain class of maps.
To model biochemical reaction systems with diffusion, one can use either stochastic, microscopic reaction-diffusion master equations or deterministic, macroscopic reaction-diffusion systems. The connection between these two models is not only theoretically important but also plays an essential role in applications. This paper considers the macroscopic limits of the chemical reaction-diffusion master equation for first-order chemical reaction systems in highly heterogeneous environments. More precisely, the diffusion coefficients as well as the reaction rates are spatially inhomogeneous, and the reaction rates may also be discontinuous. By carefully discretizing these heterogeneities within a reaction-diffusion master equation model, we show that in the limit we recover the macroscopic reaction-diffusion system with inhomogeneous diffusion and reaction rates.
We investigate the finite-temperature properties of attractive three-component (colors) fermionic atoms in optical lattices using a self-energy functional approach. As the strength of the attractive interaction increases in the low-temperature region, a second-order transition occurs from a Fermi liquid to a color superfluid (CSF). In the strongly attractive region, a first-order transition occurs from a CSF to a trionic state. In the high-temperature region, a crossover between a Fermi liquid and a trionic state is observed as the strength of the attractive interaction increases. The crossover region at fixed temperature is almost independent of filling.
Space debris is a major problem for all space-active nations. Adopting high-precision measuring techniques will help to produce reliable and accurate catalogues for space debris and collision avoidance. Laser ranging is a real-time measuring technology with high precision for space debris observation. The first laser-ranging experiment on space debris in China was performed at the Shanghai Observatory in July 2008, with a ranging precision of about 60-80 cm. The experimental results show that the return signals from targets at ranges of up to 900 km were quite strong, using a 40 W laser (2 J at 20 Hz) with 10 ns pulse width at 532 nm wavelength. The performance of the preliminary laser-ranging system and the results observed in 2008 and 2010 are described in this paper.
It is known that, for the parabolic-elliptic Keller-Segel system with critical porous-medium diffusion in $\mathbb{R}^d$, $d \ge 3$ (also referred to as the quasilinear Smoluchowski-Poisson equation), there is a critical value of the chemotactic sensitivity (measuring in some sense the strength of the drift term) above which there are solutions blowing up in finite time and below which all solutions are global in time. This global existence result is shown to remain true for the parabolic-parabolic Keller-Segel system with critical porous-medium type diffusion in $\mathbb{R}^d$, $d \ge 3$, when the chemotactic sensitivity is below the same critical value. The solution is constructed by using a minimising scheme involving the Kantorovich-Wasserstein metric for the first component and the $L^2$-norm for the second component. The cornerstone of the proof is the derivation of additional estimates which relies on a generalisation to a non-monotone functional of a method due to Matthes, McCann, & Savar\'e (2009).
Payment platforms have evolved significantly in recent years to keep pace with the proliferation of online and cashless payments. These platforms are increasingly aligned with online social networks, allowing users to interact with each other and transfer small amounts of money in a peer-to-peer fashion. This poses new challenges for analysing payment data, as traditional methods are user-centric or business-centric only and neglect the network that users build through their interactions. This paper proposes a first methodology for measuring user value in modern payment platforms. We combine quantitative user-centric metrics with an analysis of the graph created by users' activities and its topological features, inspired by the evolution of opinions in social networks. We showcase our approach using a dataset from a large operational payment platform and show how it can support business decisions and marketing campaign design, e.g., by targeting specific users.
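A toy illustration of the idea (the scoring formula and the choice of PageRank as the graph feature below are assumptions for illustration, not the paper's exact methodology):

```python
import networkx as nx

# Combine a user-centric metric (money sent) with a network feature
# (weighted PageRank) into a single user-value score.
transfers = [                                  # (sender, receiver, amount)
    ("alice", "bob", 10.0), ("bob", "carol", 4.0),
    ("carol", "alice", 2.5), ("alice", "carol", 7.0),
]
G = nx.DiGraph()
for u, v, amt in transfers:
    if G.has_edge(u, v):
        G.edges[u, v]["weight"] += amt
    else:
        G.add_edge(u, v, weight=amt)

volume = {u: sum(d["weight"] for _, _, d in G.out_edges(u, data=True))
          for u in G}                          # user-centric: total sent
rank = nx.pagerank(G, weight="weight")         # network-centric: influence
value = {u: volume[u] * rank[u] for u in G}    # toy combined score
print(sorted(value.items(), key=lambda kv: -kv[1]))
```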
We consider the problem of estimating the autocorrelation operator of an autoregressive Hilbertian process. By means of a Tikhonov approach, we establish a general result that yields the convergence rate of the estimated autocorrelation operator as a function of the rate of convergence of the estimated lag-zero and lag-one autocovariance operators. The result is general in that it can accommodate any consistent estimators of the lagged autocovariances. Consequently, it can be applied to processes under any mode of observation: complete, discrete, sparse, and/or with measurement errors. An appealing feature is that the result does not require delicate spectral decay assumptions on the autocovariances but instead rests on natural source conditions. The result is illustrated by application to important special cases.
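A discretized sketch of the Tikhonov approach (grid, sample size and regularization parameter are illustrative choices): since $C_1 = \rho C_0$ for an autoregressive Hilbertian process $X_{n+1} = \rho(X_n) + \varepsilon_n$, one may estimate $\rho$ by $\hat\rho = \hat C_1(\hat C_0 + \tau I)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
grid, n = 50, 500
t = np.linspace(0, 1, grid)
# smooth integral-operator kernel, discretized with Riemann weight 1/grid
rho_true = 0.5 * np.exp(-5 * (t[:, None] - t[None, :]) ** 2) / grid

X = np.zeros((n, grid))
for i in range(1, n):
    X[i] = rho_true @ X[i - 1] + rng.standard_normal(grid)

Xc = X - X.mean(axis=0)
C0 = Xc[:-1].T @ Xc[:-1] / (n - 1)             # lag-zero autocovariance
C1 = Xc[1:].T @ Xc[:-1] / (n - 1)              # lag-one autocovariance
tau = 0.1                                      # Tikhonov regularization
rho_hat = C1 @ np.linalg.inv(C0 + tau * np.eye(grid))
print(np.linalg.norm(rho_hat - rho_true))
```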
Saliency computation models aim to imitate the attention mechanism of the human visual system. The application of deep neural networks to saliency prediction has led to drastic improvements over the last few years. However, deep models have a high number of parameters, which makes them less suitable for real-time applications. Here we propose a compact yet fast model for real-time saliency prediction. Our proposed model consists of a modified U-Net architecture, a novel fully connected layer, and central difference convolutional layers. The modified U-Net architecture promotes compactness and efficiency. The novel fully connected layer facilitates the implicit capturing of location-dependent information. Using central difference convolutional layers at different scales enables capturing more robust and biologically motivated features. We compare our model with state-of-the-art saliency models using traditional saliency scores as well as our newly devised scheme. Experimental results over four challenging saliency benchmark datasets demonstrate the effectiveness of our approach in striking a balance between accuracy and speed. Our model can be run in real time, which makes it appealing for edge devices and video processing.
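One common formulation of a central difference convolution layer, the building block named above, is sketched here (a generic version, not necessarily the authors' exact implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Blend a vanilla convolution with a central-difference term; the
# difference part conv(x - x_center) collapses to a 1x1 convolution with
# the spatially summed kernel.
class CDConv2d(nn.Conv2d):
    def __init__(self, *args, theta=0.7, **kwargs):
        super().__init__(*args, **kwargs)
        self.theta = theta

    def forward(self, x):
        out = F.conv2d(x, self.weight, self.bias, self.stride,
                       self.padding, self.dilation, self.groups)
        k = self.weight.sum(dim=(2, 3), keepdim=True)
        center = F.conv2d(x, k, None, self.stride, 0,
                          self.dilation, self.groups)
        return out - self.theta * center

y = CDConv2d(3, 8, kernel_size=3, padding=1)(torch.randn(1, 3, 64, 64))
```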
We prove that every $0$-shifted symplectic structure on a derived Artin $n$-stack admits a curved $A_{\infty}$ deformation quantisation. The classical method of quantising smooth varieties via quantisations of affine space does not apply in this setting, so we develop a new approach. We construct a map from DQ algebroid quantisations of unshifted symplectic structures on a derived Artin $n$-stack to power series in de Rham cohomology, depending only on a choice of Drinfeld associator. This gives an equivalence between even power series and certain involutive quantisations, which yield anti-involutive curved $A_{\infty}$ deformations of the dg category of perfect complexes. In particular, there is a canonical quantisation associated to every symplectic structure on such a stack, which agrees for smooth varieties with the Kontsevich--Tamarkin quantisation for even associators.
Texture editing is a crucial task in 3D modeling that allows users to automatically manipulate the surface materials of 3D models. However, the inherent complexity of 3D models and ambiguous text descriptions make this task challenging. To address this challenge, we propose ITEM3D, a \textbf{T}exture \textbf{E}diting \textbf{M}odel designed for automatic \textbf{3D} object editing according to text \textbf{I}nstructions. Leveraging diffusion models and differentiable rendering, ITEM3D takes rendered images as the bridge between text and the 3D representation, and further optimizes the disentangled texture and environment map. Previous methods adopted an absolute editing direction, namely score distillation sampling (SDS), as the optimization objective, which unfortunately results in a noisy appearance and text inconsistency. To solve the problem caused by ambiguous text, we introduce a relative editing direction, an optimization objective defined by the noise difference between the source and target texts, to resolve the semantic ambiguity between the texts and images. Additionally, we gradually adjust the direction during optimization to further address unexpected deviations in the texture domain. Qualitative and quantitative experiments show that ITEM3D outperforms state-of-the-art methods on various 3D objects. We also perform text-guided relighting to show explicit control over lighting. Our project page: https://shengqiliu1.github.io/ITEM3D.
The critical temperature ($T_{C}$) and the energy gap ($2\Delta(T)$) for the superconductor SiH$_4$(H$_2$)$_2$ at 250 GPa have been calculated. A wide range of values of the Coulomb pseudopotential has been considered: $\mu^{\star}\in<0.1,0.3>$. It is found that $T_{C}$ decreases with increasing $\mu^{\star}$, from 129.83 K to 81.40 K. The low-temperature energy gap ($T\sim 0$ K) likewise decreases with increasing Coulomb pseudopotential, from 50.96 meV to 30.12 meV. The high values of $2\Delta(0)$ mean that the dimensionless ratio $R_{\Delta}\equiv 2\Delta(0)/k_{B}T_{C}$ significantly exceeds the value predicted by the classical BCS theory. In the considered case: $R_{\Delta}\in<4.55,4.29>$. Due to the unusual dependence of the critical temperature and the energy gap on $\mu^{\star}$, analytical expressions for $T_{C}(\mu^{\star})$ and $\Delta(\mu^{\star})$ are given.
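As a quick consistency check (using $k_{B}=0.08617$ meV/K and reading the quoted gap values as $2\Delta(0)$), the end-point ratios indeed come out as stated, well above the BCS value $R_{\Delta}\approx 3.53$:

```latex
\[
  R_{\Delta}\big|_{\mu^{\star}=0.1}
    = \frac{50.96\,\mathrm{meV}}{(0.08617\,\mathrm{meV/K})(129.83\,\mathrm{K})}
    \approx 4.55,
  \qquad
  R_{\Delta}\big|_{\mu^{\star}=0.3}
    = \frac{30.12\,\mathrm{meV}}{(0.08617\,\mathrm{meV/K})(81.40\,\mathrm{K})}
    \approx 4.29.
\]
```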
We describe a simple deterministic near-linear time approximation scheme for uncapacitated minimum cost flow in undirected graphs with real edge weights, a problem also known as transshipment. Specifically, our algorithm takes as input a (connected) undirected graph $G = (V, E)$, vertex demands $b \in \mathbb{R}^V$ such that $\sum_{v \in V} b(v) = 0$, positive edge costs $c \in \mathbb{R}_{>0}^E$, and a parameter $\varepsilon > 0$. In $O(\varepsilon^{-2} m \log^{O(1)} n)$ time, it returns a flow $f$ such that the net flow out of each vertex is equal to the vertex's demand and the cost of the flow is within a $(1 + \varepsilon)$ factor of optimal. Our algorithm is combinatorial and has no running time dependency on the demands or edge costs. With the exception of a recent result presented at STOC 2022 for polynomially bounded edge weights, all almost- and near-linear time approximation schemes for transshipment relied on randomization to embed the problem instance into low-dimensional space. Our algorithm instead deterministically approximates the cost of routing decisions that would be made if the input were subject to a random tree embedding. To avoid computing the $\Omega(n^2)$ vertex-vertex distances that an approximation of this kind suggests, we also take advantage of the clustering method used in the well-known Thorup-Zwick distance oracle.
We consider the collider signature of right-handed neutrinos propagating in $\delta$ (large) extra dimensions, and interacting with Standard Model fields only through a Yukawa coupling to the left-handed neutrino and the Higgs boson. These theories are attractive as they can explain the smallness of the neutrino mass, as has already been shown. We show that if $\delta$ is bigger than two, it can result in an enhancement in the production rate of the Higgs boson, decaying either invisibly or to a $b$ anti-$b$ quark pair, associated with an isolated high $p_T$ charged lepton and missing transverse energy at future hadron colliders, such as the LHC. The enhancement is due to the large number of Kaluza-Klein neutrinos produced in the final state. The observation of the signal event would provide an opportunity to distinguish between the normal and inverted neutrino mass hierarchies, and to determine the absolute scale of neutrino masses by measuring the asymmetry of the observed event numbers in the electron and muon channels.
We show how the multiloop amplitudes of $\Phi^3$ theory (in the worldline formulation of Schmidt and Schubert) are recovered from the N-tachyon $(h+1)$-loop amplitude in bosonic string theory in the $\alpha' \to 0$ limit, if one keeps the tachyon coupling constant fixed. In this limit the integral over string moduli space receives contributions only from those corners where the world-sheet resembles a $\Phi^3$ particle diagram. Technical issues involved are a proper choice of local world-sheet coordinates, needed to take the string amplitudes off-shell, and a formal procedure for introducing a free mass parameter $M^2$ instead of the tachyonic value $-4/\alpha'$.
Spatiotemporal mode-locking (STML) has become an emerging approach to realize organized wavepackets in high-dimensional nonlinear photonic systems. Mode-locking in one dimensional systems employs a saturable absorber to resist fluctuations in the temporal domain. Analogous suppression of fluctuations in the space-time domains to retain a consistent output should also exist for STML. However, experimental evidence of such a resistance remains elusive, to our knowledge. Here, we report experimental observation of such a spatiotemporal stabilizer in STML, by embedding a spatial light modulator (SLM) into a multi-mode fibre (MMF) laser. Mode decomposition reveals the mode content remains steady for an STML state when applying phase perturbations on the SLM. Conversely, the mode content changes significantly for a non-STML lasing state. Numerical simulations confirm our observation and show that spatial filtering and saturable absorber mainly contribute to the observed stability. The capability to resist the spatial phase fluctuations is observed to depend on the intracavity pulse energy as well as the modal pulse energy condensed in the low-order modes. Our work constitutes another building block for the concept of STML in multi-mode photonic systems.
In this paper we present a concise overview of our recent results concerning the electric potential distribution around a small charged particle in weakly ionized plasmas. A number of different effects which influence plasma screening properties are considered. Some consequences of the results are discussed, mostly in the context of complex (dusty) plasmas.
We describe the development of the model for interstellar gamma-ray emission that is the standard adopted by the LAT team and is publicly available. The model is based on a linear combination of templates for interstellar gas column density and for the inverse Compton emission. The spectral energy distributions of the gamma-ray emission associated with each template are determined from a fit to 4 years of Fermi-LAT data in 14 independent energy bins from 50 MeV to 50 GeV. We fit those distributions with a realistic model for the emission processes to extrapolate to higher energies. We also include large-scale structures like Loop I and the Fermi bubbles following an iterative procedure that re-injects filtered LAT counts residual maps into the model. We confirm that the cosmic-ray proton density varies with the distance from the Galactic center and find a continuous softening of the proton spectrum with this distance. We observe that the Fermi bubbles have a shape similar to a catenary at their bases.
In-context learning, i.e., learning from in-context examples, is an impressive ability of Transformers. Training Transformers to possess this in-context learning skill is computationally intensive due to the occurrence of learning plateaus: periods within the training process during which there is minimal or no improvement in the model's in-context learning capability. To study the mechanism behind these learning plateaus, we conceptually separate a component within the model's internal representation that is exclusively affected by the model's weights. We call this the "weights component", and the remainder is identified as the "context component". By conducting meticulous and controlled experiments on synthetic tasks, we note that the persistence of learning plateaus correlates with compromised functionality of the weights component. Recognizing the impaired performance of the weights component as a fundamental behavior that drives learning plateaus, we have developed three strategies to expedite the learning of Transformers. The effectiveness of these strategies is further confirmed on natural language processing tasks. In conclusion, our research demonstrates the feasibility of cultivating a powerful in-context learning ability within AI systems in an eco-friendly manner.
We have developed a technique to measure the absolute frequencies of optical transitions by using an evacuated Rb-stabilized ring-cavity resonator as a transfer cavity. We study possible wavelength-dependent errors due to dispersion at the cavity mirrors by measuring the frequency of the same transition in the $D_2$ line of Cs at three cavity lengths. We find no discernible change in values within our error of 30 kHz. Our values are consistent with measurements using the frequency-comb technique and have similar accuracy.
We have developed a self-consistent description of the radiation heat transfer and dynamics of large perfectly black spherical bodies with sizes much greater than the characteristic wavelength of radiation moving in a photon gas with relativistic velocity. The results can be important in astrophysics.
Data on low-$x$ physics from H1 and ZEUS are presented and their interpretation discussed. The focus is on the increasing hardness of the energy dependence of inclusive virtual-photon proton scattering and of certain diffractive processes as the transverse size of the probe decreases.
We present magnetotransport data on the ferrimagnet GdMn$_6$Sn$_6$. From the temperature-dependent data we are able to extract a large intrinsic contribution to the anomalous Hall effect, $\sigma_{xz}^{int} \sim$ 32 $\Omega^{-1}cm^{-1}$ and $\sigma_{xy}^{int} \sim$ 223 $\Omega^{-1}cm^{-1}$, which is comparable to values found in other systems also containing kagome nets of transition metals. From our transport anisotropy, as well as our density functional theory calculations, we argue that the system is electronically best described as a three-dimensional system. Thus, we show that reduced dimensionality is not a strong requirement for obtaining large Berry phase contributions to transport properties. In addition, the coexistence of rare-earth and transition metal magnetism makes the hexagonal MgFe$_6$Ge$_6$ structure type a promising system to tune the electronic and magnetic properties in future studies.
Electrically actuated optomechanical resonators provide a route to quantum-coherent, bidirectional conversion of microwave and optical photons. Such devices could enable optical interconnection of quantum computers based on qubits operating at microwave frequencies. Here we present a novel platform for microwave-to-optical conversion comprising a photonic crystal cavity made of single-crystal, piezoelectric gallium phosphide integrated on pre-fabricated niobium circuits on an intrinsic silicon substrate. The devices exploit spatially extended, sideband-resolved mechanical breathing modes at $\sim$ 3.2 GHz, with vacuum optomechanical coupling rates of up to $g_0/2\pi \approx$ 300 kHz. The mechanical modes are driven by integrated microwave electrodes via the inverse piezoelectric effect. We estimate that the system could achieve an electromechanical coupling rate to a superconducting transmon qubit of $\sim$ 200 kHz. Our work represents a decisive step towards integration of piezoelectro-optomechanical interfaces with superconducting quantum processors.
Generalising results of G\"{o}del and Chaitin in mathematics suggests that self-referential systems contain intrinsic randomness. We argue that this is relevant to modelling the universe, and we show how three-dimensional space may arise from a non-geometric order-disorder model driven by self-referential noise.
This paper addresses an inverse cavity scattering problem associated with the biharmonic wave equation in two dimensions. The objective is to determine the domain or shape of the cavity. The Green's representations are demonstrated for the solution to the boundary value problem, and the one-to-one correspondence is confirmed between the Helmholtz component of biharmonic waves and the resulting far-field patterns. Two mixed reciprocity relations are deduced, linking the scattered field generated by plane waves to the far-field pattern produced by various types of point sources. Furthermore, the symmetry relations are explored for the scattered fields generated by point sources. Finally, we present two uniqueness results for the inverse problem by utilizing both far-field patterns and phaseless near-field data.
We present extragalactic number counts and a lower-limit estimate for the cosmic infrared background at 15 um from AKARI ultra-deep mapping of the gravitational lensing cluster Abell 2218. These data are the deepest taken by any facility at this wavelength, and they uniquely sample the normal galaxy population. We have de-blended our sources to resolve photometric confusion, and de-lensed our photometry to probe beyond AKARI's blank-field sensitivity. We estimate a de-blended 5 sigma sensitivity of 28.7 uJy. The resulting 15 um galaxy number counts are a factor of three fainter than previous results, extending to a depth of ~0.01 mJy and providing a stronger lower-limit constraint on the cosmic infrared background at 15 um of 1.9 +/- 0.5 nW m^-2 sr^-1.
Quartic gauge couplings are tested by this study of the production of $WW\gamma$ and $WZ\gamma$ events in 20.2 fb$^{-1}$ of proton--proton collisions at a centre-of-mass energy of $\sqrt{s} = 8$ TeV recorded with the ATLAS detector at the LHC. The final state of $WW\gamma$ events containing an electron, a muon and a photon is analysed as well as the final states of $WW\gamma$ and $WZ\gamma$ production containing an electron or a muon, two jets and a photon. For all final states two different fiducial regions are defined: one yielding the best sensitivity to the production cross-section of the process and one optimised for the detection of new physical phenomena. In the former region, the $WW\gamma$ production cross-section is computed and in both regions, upper limits on the $WW\gamma$ and $WZ\gamma$ production cross-section are derived. The results obtained in the second phase space are combined for the interpretation in the context of anomalous quartic gauge couplings using an effective field theory.
The International Ultraviolet Explorer satellite has made a tremendous contribution to the study of hot-star winds. Its long lifetime has resulted in the collection of ultraviolet spectra for a large sample of OB stars. Its unique monitoring capability has enabled detailed time-series analyses to investigate the stellar-wind variability for individual objects. IUE has also been a major driver for the development of the radiation-driven-wind theory; the synergy between theory and observations is one of the main reasons for the large progress that has been made in our understanding of hot-star winds and their impact on the atmospheres and evolution of massive stars.
Let H and K be infinite dimensional Hilbert spaces, while B(H) and B(K) denote the algebras of all linear bounded operators on H and K, respectively. We characterize the forms of additive mappings from B(H) into B(K) that preserve the nonzero idempotency of either Jordan products of operators or usual products of operators in both directions.
The spacing distribution between Farey points has drawn attention in recent years. It was found that the gaps $\gamma_{j+1}-\gamma_j$ between consecutive elements of the Farey sequence produce, as $Q\to\infty$, a limiting measure. Numerical computations suggest that for any $d\ge 2$, the gaps $\gamma_{j+d}-\gamma_j$ also produce a limiting measure whose support is distinguished by remarkable topological features. Here we prove the existence of the spacing distribution for $d=2$ and characterize completely the corresponding support of the measure.
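The numerically observed distributions are easy to reproduce; the sketch below computes the normalized gaps $\gamma_{j+d}-\gamma_j$ in the Farey sequence $F_Q$ for $d=2$:

```python
from fractions import Fraction

# Gaps gamma_{j+d} - gamma_j in F_Q, normalized by the mean spacing
# 1/(|F_Q| - 1), for empirical inspection of the limiting distribution.
def farey(Q):
    return sorted({Fraction(a, b)
                   for b in range(1, Q + 1) for a in range(b + 1)})

Q, d = 200, 2
F = farey(Q)
gaps = [float((F[j + d] - F[j]) * (len(F) - 1)) for j in range(len(F) - d)]
print(min(gaps), max(gaps), sum(gaps) / len(gaps))
```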
Recently, hardware technology has rapidly evolved pertaining to domain-specific applications/architectures. Soon, processors may be composed of a large collection of vendor-independent IP specialized for application-specific algorithms, resulting in extreme heterogeneity. However, integrating multiple vendors within the same die is difficult. Chiplet technology is a solution that integrates multiple vendor dies within the same chip by breaking each piece into an independent block, each with a common interconnect for fast data transfer. Most prior chiplet research focuses on interconnect technology, but program execution models (PXMs) that enable programmability and performance are missing from the discussion. In chiplet architectures, a cohesive co-designed PXM can further separate the roles of the different actors, while maintaining a common abstraction for program execution. This position paper describes the need for co-designed PXMs and proposes the Codelet PXM and associated architectural features as a candidate to fill this need in extremely heterogeneous chiplet-based architectures.
In 2009 we started an intense radial-velocity monitoring of a few nearby, slowly rotating and quiet solar-type stars within the dedicated HARPS-Upgrade GTO program. The goal of this campaign is to gather very precise radial-velocity data with high cadence and continuity to detect the tiny signatures of very-low-mass planets that are potentially present in the habitable zone of their parent stars. Ten stars were selected among the most stable stars of the original HARPS high-precision program, uniformly spread in hour angle such that three to four of them are observable at any time of the year. For each star we recorded 50 data points spread over the observing season. Each data point consists of three nightly observations with a total integration time of 10 minutes each, separated by two hours, an observational strategy adopted to minimize stellar pulsation and granulation noise. We present the first results of this ambitious program. The radial-velocity data and the orbital parameters of five new and one confirmed low-mass planets around the stars HD20794, HD85512, and HD192310 are reported and discussed, among which is a system of three super-Earths and one system harboring a 3.6 Earth-mass planet at the inner edge of the habitable zone. This result already confirms previous indications that low-mass planets seem to be very frequent around solar-type stars, possibly with a frequency higher than 30%.
This paper studies the SI1SI2S spreading model of two competing behaviors over a bilayer network. We address the problem of determining resource allocation strategies which design a spreading network so as to ensure the extinction of a selected process. Our discussion begins by extending the SI1SI2S model to edge-dependent infection and node-dependent recovery parameters with generalized graph topologies, which builds upon prior work that studies the homogeneous case. We then find conditions under which the mean-field approximation of a chosen epidemic process stabilizes to extinction exponentially quickly. Leveraging this result, we formulate and solve an optimal resource allocation problem in which we minimize the expenditure necessary to force a chosen epidemic process to become extinct as quickly as possible. In the case that the budget is not sufficient to ensure extinction of the desired process, we instead minimize a useful heuristic. We explore the efficacy of our methods by comparing simulations of the stochastic process to the mean-field model, and find that the mean-field methods developed work well for the optimal cost networks designed, but suffer from inaccuracy in other situations.
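A minimal mean-field simulation of this model class (toy parameters; node-dependent recovery rates and layer-specific infection matrices as described above, but not the paper's exact formulation):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mean-field SI1SI2S dynamics: p, q are per-node infection probabilities
# for the two competing processes; B1, B2 are layer-specific infection-rate
# matrices and d1, d2 node-dependent recovery rates.
n = 5
rng = np.random.default_rng(2)
B1 = 0.3 * rng.random((n, n))
B2 = 0.2 * rng.random((n, n))
d1, d2 = 0.8 * np.ones(n), 0.5 * np.ones(n)

def rhs(t, y):
    p, q = y[:n], y[n:]
    s = 1.0 - p - q                       # susceptible fraction per node
    dp = s * (B1 @ p) - d1 * p
    dq = s * (B2 @ q) - d2 * q
    return np.concatenate([dp, dq])

y0 = np.concatenate([0.1 * np.ones(n), 0.1 * np.ones(n)])
sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-8)
print(sol.y[:n, -1].max(), sol.y[n:, -1].max())  # which process survives?
```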
Let $\mathsf{H}$ be a separable Hilbert space. We prove that the Grassmannian $\mathsf{P}_c(\mathsf{H})$ of the finite dimensional subspaces of $\mathsf{H}$ is an Alexandrov space of nonnegative curvature and we employ its metric geometry to develop the theory of optimal transport for the normal states of the von Neumann algebra of linear and bounded operators $\mathsf{B}(\mathsf{H})$. Seeing density matrices as discrete probability measures on $\mathsf{P}_c(\mathsf{H})$ (via the spectral theorem) we define an optimal transport cost and the Wasserstein distance for normal states. In particular we obtain a cost which induces the $w^*$-topology. Our construction is compatible with the quantum mechanics approach of composite systems as tensor products $\mathsf{H}\otimes \mathsf{H}$. We provide indeed an interpretation of the pure normal states of $\mathsf{B}(\mathsf{H}\otimes \mathsf{H})$ as families of transport maps. This also defines a Wasserstein cost for the pure normal states of $\mathsf{B}(\mathsf{H}\otimes \mathsf{H})$, reconciling with our proposal.
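Schematically, and in our own notation rather than the paper's, the construction can be summarized as a discrete optimal transport problem over spectral decompositions:

```latex
% Spectral decompositions \rho = \sum_i \lambda_i P_i and \sigma = \sum_j \nu_j Q_j
% are read as discrete measures on the Grassmannian, and one minimizes over
% couplings \pi of the eigenvalue lists (\lambda, \nu):
\[
  W(\rho,\sigma)^2 \;=\; \min_{\pi \in \Pi(\lambda,\nu)} \sum_{i,j} \pi_{ij}\,
  d_{\mathsf{P}_c(\mathsf{H})}\!\left(P_i, Q_j\right)^2 .
\]
```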
We develop a novel algorithm to predict the occurrence of major abdominal surgery within 5 years following Crohn's disease diagnosis using a panel of 29 baseline covariates from the Swedish population registers. We model pseudo-observations based on the Aalen-Johansen estimator of the cause-specific cumulative incidence with an ensemble of modern machine learning approaches. Pseudo-observation pre-processing readily extends any existing or new machine learning procedure to right-censored event history data. We propose pseudo-observation-based estimators for the area under the time-varying ROC curve, used for optimizing the ensemble, and for the predictiveness curve, used for evaluating and summarizing predictive performance.
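To illustrate the pre-processing idea, here is a bare-bones sketch of jackknife pseudo-observations built on a minimal Aalen-Johansen-type estimator (assuming distinct event times; ties, and the efficiency tricks needed at scale, are omitted):

```python
import numpy as np

def cuminc(time, event, cause, t):
    """Aalen-Johansen-type estimate of the cause-specific cumulative incidence
    at time t. `event` is 0 for censoring, otherwise a cause code; assumes
    distinct event times for simplicity."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    surv, cif, at_risk = 1.0, 0.0, len(time)
    for i in range(len(time)):
        if time[i] > t:
            break
        if event[i] == cause:
            cif += surv / at_risk                 # S(t-) * dN_cause / n_at_risk
        if event[i] != 0:
            surv *= 1.0 - 1.0 / at_risk           # all-cause survival update
        at_risk -= 1
    return cif

def pseudo_observations(time, event, cause, t):
    """Jackknife pseudo-observations: n*theta_hat - (n-1)*theta_hat(-i).
    Each subject gets a plain regression target, so any ML regressor can be
    trained on right-censored competing-risks data."""
    n = len(time)
    full = cuminc(time, event, cause, t)
    mask = np.ones(n, dtype=bool)
    pseudo = np.empty(n)
    for i in range(n):
        mask[i] = False
        pseudo[i] = n * full - (n - 1) * cuminc(time[mask], event[mask], cause, t)
        mask[i] = True
    return pseudo

rng = np.random.default_rng(1)
t_event = rng.exponential(5.0, 200)
censor = rng.exponential(7.0, 200)
time = np.minimum(t_event, censor)
event = np.where(t_event <= censor, rng.integers(1, 3, 200), 0)
print(pseudo_observations(time, event, cause=1, t=3.0)[:5])
```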
Modern software development practice relies heavily on ready-made components, usually shipped as external libraries. The undoubted advantages of reusing third-party code can be offset by integration errors that appear in the developed software. Such errors mainly stem from the programmer's misunderstanding or incomplete understanding of an external library's details, such as its internal structure and the subtleties of its behavior. The documentation provided with libraries is often very sparse and describes only the main intended scenarios of interaction between the program and the library. In this paper, we propose an approach based on formal library specifications that detects integration errors using static analysis methods. The external library is described using the LibSL specification language, and the resulting description is translated into the internal data structures of the Kex analyzer. Incorrect library usage scenarios, such as an incorrect sequence of method calls or a violation of an API function's contract, are marked in the program model with special built-in functions of the Kex analyzer. When analyzing the program, Kex can then detect integration errors, since incorrect library usage scenarios are diagnosed as calls to the marked functions. The proposed approach is implemented as SPIDER (SPecification-based Integration Defect Revealer), an extension of the Kex analyzer, and has proven its efficiency by detecting integration errors of different classes on several purpose-built projects, as well as on several projects taken from open repositories.
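To convey the underlying idea without reproducing LibSL syntax, here is a toy illustration (plain Python, entirely hypothetical state and method names) of how a specification automaton turns an illegal call sequence into a diagnosable error:

```python
# Entirely hypothetical library states and methods -- not LibSL syntax.
LEGAL_TRANSITIONS = {
    ("CLOSED", "open"): "OPEN",
    ("OPEN", "read"): "OPEN",
    ("OPEN", "close"): "CLOSED",
}

def check_trace(calls):
    """Walk a call trace through the specification automaton; any transition
    missing from the table is exactly the kind of incorrect usage scenario
    that a static analyzer can diagnose as a marked-function call."""
    state = "CLOSED"
    for call in calls:
        nxt = LEGAL_TRANSITIONS.get((state, call))
        if nxt is None:
            raise RuntimeError(f"integration error: '{call}' is illegal in state {state}")
        state = nxt

check_trace(["open", "read", "close"])      # legal sequence: passes silently
try:
    check_trace(["read"])                   # read before open: flagged
except RuntimeError as e:
    print(e)
```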
We determine the density-dependent electron mass, m*, in two-dimensional (2D) electron systems of GaAs/AlGaAs heterostructures by performing detailed low-temperature Shubnikov-de Haas measurements. Using very high quality transistors with tunable electron densities, we measure m* in single, high-mobility specimens over a wide range of r_s (6 to 0.8). Toward low densities we observe a rapid increase of m* by as much as 40%. For 2 > r_s > 0.8 the mass values fall ~10% below the band mass of GaAs. Numerical calculations are in qualitative agreement with our data but differ considerably in detail.
The Maxwell-Bloch system describing the resonant propagation of electromagnetic pulses in both two-level media with degeneracy in the angular momentum projection and three-level media with equal oscillator strengths is considered. The inhomogeneous broadening of the energy levels is taken into account. A binary Darboux transformation generating solutions of the system is constructed. Pulses corresponding to the transition between the levels with the largest population difference are shown to be stable. The solution describing the propagation of pulses in a medium excited by a periodic wave is obtained. A hierarchy of infinitesimal symmetries is obtained by means of the Darboux transformation.
In this paper, we introduce a new class of curves \alpha, called f-rectifying curves, whose f-position vector, defined by {\alpha}_{f}(s)=\int f(s)T(s)ds, always lies in the rectifying plane of \alpha, where f is an integrable function and T is the tangent vector of \alpha. In the particular cases where the function f is zero or constant, the class of f-rectifying curves reduces to helices or rectifying curves, respectively. The classification and characterization of such curves in terms of their curvature and torsion functions are given, together with a physical interpretation. We close this study with some examples.
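In our notation, the defining property can be stated as a decomposition in the rectifying plane spanned by the tangent T and the binormal B:

```latex
% Defining property (our rendering): the f-position vector stays in the
% rectifying plane, i.e. there exist functions \lambda(s), \mu(s) with
\[
  \alpha_f(s) \;=\; \int f(s)\,T(s)\,\mathrm{d}s \;=\; \lambda(s)\,T(s) + \mu(s)\,B(s).
\]
```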
We demonstrate an RF energy harvesting rectenna design based on a metamaterial perfect absorber (MPA). With the embedded Schottky diodes, the rectenna converts captured RF waves to DC power. The Fabry-Perot (FP) cavity resonance of the MPA greatly increases the amount of energy captured. Furthermore, the FP resonance exhibits a high Q-factor and significantly increases the voltage across the Schottky diodes, which improves the rectification efficiency, particularly at low power. This leads to a factor-of-16 improvement in RF-to-DC conversion efficiency at ambient intensity levels.
This Comment corrects two separate errors that previously appeared in the calculation of vibrational mode frequencies of tin spheres embedded in a silica matrix. The first error concerns the vibrational frequency of a free elastic sphere. The second concerns the effect on the vibrational frequencies of a sphere of its embedding in an infinite elastic matrix.
A weak fluctuating magnetic field embedded in a turbulent conducting medium grows exponentially while its characteristic scale decays. In the ISM and protogalactic plasmas, the magnetic Prandtl number is very large, so a broad spectrum of growing magnetic fluctuations is excited at subviscous scales. We study the statistical correlations that are set up in the field pattern and show that the magnetic-field lines possess a folding structure, where most of the scale decrease is due to rapid transverse field-direction reversals, while the scale of the field variation along itself stays approximately constant. Specifically, we find that the field strength and the field-line curvature are anticorrelated, and the curvature possesses a stationary limiting distribution with its bulk located at values of curvature comparable to the characteristic wave number of the velocity field and a power tail extending to large values of curvature. The regions of large curvature therefore occupy only a small fraction of the total volume of the system. Our theoretical results are corroborated by direct numerical simulations. The implication of the folding effect is that the Lorentz back-reaction sets in when the magnetic energy approaches that of the smallest turbulent eddies. Our results also directly apply to the problem of the statistical geometry of material lines in a random flow.
Consider the (Helgason-)Fourier transform on a Riemannian symmetric space G/K. We give a simple proof of the L^p-Schwartz space isomorphism theorem (0 < p \le 2) for K-finite functions. The proof is a generalization of J.-Ph. Anker's proof for K-invariant functions.
Based on suggested interactions of potential tipping elements in the Earth's climate and in ecological systems, tipping cascades are increasingly discussed and studied as possible dynamics, since their activation would pose a considerable risk to human societies and biosphere integrity. However, the descriptions of tipping cascades in the literature so far are ambiguous. Here we illustrate how different patterns of multiple tipping dynamics emerge from a very simple coupling of two previously studied idealized tipping elements. In particular, we distinguish between a two-phase cascade, a domino cascade, and a joint cascade. While mitigation of an unfolding two-phase cascade may be possible, and common early warning indicators are sensitive to upcoming critical transitions to a certain degree, the domino cascade can hardly be stopped once initiated, and indicators based on critical slowing down fail to signal the tipping of the following element. These different potentials for intervention and anticipation across the distinct patterns of multiple tipping dynamics should be seen as a call to be more precise in future analyses of cascading dynamics arising from tipping element interactions in the Earth system.
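A minimal sketch of the kind of idealized coupled system described here, assuming the standard cusp-like normal form dx/dt = -x^3 + x + c for each element and a simple linear drive from element 1 to element 2 (the paper's exact coupling may differ):

```python
import numpy as np

def coupled_tipping(c1, c2, d, x0=(-1.0, -1.0), dt=0.01, steps=100000):
    """Two idealized tipping elements: each alone follows dx/dt = -x^3 + x + c,
    tipping from the lower to the upper state when c exceeds the fold point
    c_crit = 2/(3*sqrt(3)). Element 1 drives element 2 with strength d."""
    x1, x2 = x0
    for _ in range(steps):
        dx1 = -x1**3 + x1 + c1
        dx2 = -x2**3 + x2 + c2 + d * (x1 + 1.0)  # coupling vanishes in x1's base state
        x1 += dt * dx1
        x2 += dt * dx2
    return x1, x2

c_crit = 2.0 / (3.0 * np.sqrt(3.0))
# Push element 1 past its fold; a sufficiently strong coupling then drags
# element 2 along -- a minimal domino-like cascade.
print(coupled_tipping(c1=1.1 * c_crit, c2=0.0, d=0.5))
```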
Although deep learning models are highly effective for various learning tasks, their high computational costs prohibit deployment in scenarios where either memory or computational resources are limited. In this paper, we focus on compressing and accelerating deep models whose network weights are represented by very small numbers of bits, referred to as extremely low bit neural networks. We model this problem as a discretely constrained optimization problem. Borrowing the idea of the Alternating Direction Method of Multipliers (ADMM), we decouple the continuous parameters from the discrete constraints of the network and cast the original hard problem into several subproblems. We propose to solve these subproblems using extragradient and iterative quantization algorithms, which lead to considerably faster convergence compared to conventional optimization methods. Extensive experiments on image recognition and object detection verify that the proposed algorithm is more effective than state-of-the-art approaches for extremely low bit neural networks.
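As a sketch of the discrete side of such a decoupling, the following toy projection maps a continuous weight vector onto a scaled ternary set {-a, 0, +a} by alternating minimization; in an ADMM-style loop this step would alternate with continuous (e.g., extragradient) updates and a dual update. This is our illustrative rendering, not the paper's implementation:

```python
import numpy as np

def project_ternary(v):
    """Project v onto {-a, 0, +a}^n over both the scale a and the ternary
    pattern: alternate choosing the best pattern for a fixed scale and
    refitting the scale for a fixed pattern."""
    a = np.abs(v).mean() + 1e-12
    q = np.zeros_like(v)
    for _ in range(20):
        q = np.sign(v) * (np.abs(v) > a / 2.0)    # best pattern for fixed a
        nz = q != 0
        if not nz.any():
            return np.zeros_like(v)
        a = np.abs(v[nz]).mean()                  # best scale for fixed pattern
    return a * q

w = np.random.default_rng(1).normal(size=8)
print(w)
print(project_ternary(w))
```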
Given an appropriate diagram of left Quillen functors between model categories, one can define a notion of homotopy fiber product, but one might ask if it is really the correct one. Here, we show that this homotopy pullback is well-behaved with respect to translating it into the setting of more general homotopy theories, given by complete Segal spaces, where we have well-defined homotopy pullbacks.
We examine the minimum entropy coupling problem, where one must find the minimum-entropy variable that has a given set of distributions $S = \{p_1, \dots, p_m \}$ as its marginals. Although this problem is NP-Hard, previous works have proposed algorithms with varying approximation guarantees. In this paper, we show that the greedy coupling algorithm of [Kocaoglu et al., AAAI'17] is always within $\log_2(e)$ ($\approx 1.44$) bits of the minimum entropy coupling. In doing so, we show that the entropy of the greedy coupling is upper-bounded by $H\left(\bigwedge S \right) + \log_2(e)$. This improves on the previously best known approximation guarantee of $2$ bits above the optimum [Li, IEEE Trans. Inf. Theory '21]. Moreover, we show our analysis is tight by proving there is no algorithm whose entropy is upper-bounded by $H\left(\bigwedge S \right) + c$ for any constant $c<\log_2(e)$. Additionally, we examine a special class of instances where the greedy coupling algorithm is exactly optimal.
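The greedy coupling algorithm referenced here is simple to state: repeatedly place probability mass m = min_i max(p_i) on the tuple of current argmax outcomes and subtract it from every marginal. A short sketch (our implementation, unoptimized):

```python
import math

def greedy_coupling(marginals):
    """Greedy coupling in the spirit of [Kocaoglu et al., AAAI'17]: repeatedly
    place mass m = min_i max(p_i) on the tuple of current argmax outcomes and
    subtract it from each marginal's argmax entry."""
    ps = [list(p) for p in marginals]
    atoms = []                                # (probability, joint outcome)
    while True:
        idx = [max(range(len(p)), key=p.__getitem__) for p in ps]
        m = min(p[i] for p, i in zip(ps, idx))
        if m <= 1e-12:
            break
        atoms.append((m, tuple(idx)))
        for p, i in zip(ps, idx):
            p[i] -= m
    return atoms

atoms = greedy_coupling([[0.5, 0.5], [0.6, 0.4]])
entropy = -sum(m * math.log2(m) for m, _ in atoms)
print(atoms)                  # the coupling's atoms reproduce both marginals
print(f"{entropy:.3f} bits")  # compare against max_i H(p_i) as a lower bound
```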
The persistent homology of a stationary point process on ${\bf R}^N$ is studied in this paper. As a generalization of continuum percolation theory, we study higher dimensional topological features of the point process, such as loops and cavities, in a multiscale way. The key ingredient is the persistence diagram, which is an expression of the persistent homology. We prove the strong law of large numbers for persistence diagrams as the window size tends to infinity and give a sufficient condition for the limiting persistence diagram to have full support. We also discuss a central limit theorem for persistent Betti numbers.
Sorting algorithms have attracted a great deal of attention and study, as they have numerous applications to Mathematics, Computer Science and related fields. In this thesis, we first deal with the mathematical analysis of the Quicksort algorithm and its variants. Specifically, we study the time complexity of the algorithm and provide a complete derivation of the variance of the number of comparisons required, a known result whose detailed proof is not easy to extract from the literature. We also examine variants of Quicksort, where multiple pivots are chosen for the partitioning of the array. The rest of this work is dedicated to the analysis of finding the true order by further pairwise comparisons when a partial order compatible with the true order is given in advance. We discuss a number of cases where the partially ordered sets arise at random. To this end, we employ results from Graph and Information Theory. Finally, we obtain an alternative bound on the number of linear extensions when the partially ordered set arises from a random graph, and discuss a possible application of Shellsort to merging chains.
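For readers who want to see the quantities being analyzed, here is a small simulation that counts Quicksort comparisons empirically; the closed-form mean and variance quoted in the comment are the classical results from the analysis-of-algorithms literature:

```python
import random
from statistics import mean, variance

def quicksort_comparisons(a):
    """Count key comparisons made by basic single-pivot Quicksort
    (first element as pivot, distinct keys)."""
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

n, trials = 100, 2000
samples = [quicksort_comparisons(random.sample(range(10**6), n)) for _ in range(trials)]
print(mean(samples), variance(samples))
# Classical exact values: E[C_n] = 2(n+1)H_n - 4n and
# Var[C_n] = 7n^2 - 4(n+1)^2 * H2_n - 2(n+1)H_n + 13n,
# with H_n and H2_n the first- and second-order harmonic numbers.
```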
A stability analysis of the Borel-Laplace series summation technique, used as explicit time integrator, is carried out. Its numerical performance on stiff and non-stiff problems is analyzed. Applications to ordinary and partial differential equations are presented. The results are compared with those of many popular schemes designed for stiff and non-stiff equations.
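As an illustration of the summation technique itself (not the full time integrator), here is a compact Borel-Pade-Laplace kernel; the test series and parameter choices are our own, assembled under standard descriptions of the method:

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade
from scipy.integrate import quad

def borel_laplace_sum(a, x, quad_order=40):
    """Borel-Pade-Laplace summation of sum_k a_k x^k: divide out k! (Borel
    transform), continue the transform with a Pade approximant, then apply
    the Laplace integral with Gauss-Laguerre quadrature."""
    b = [a_k / factorial(k) for k, a_k in enumerate(a)]
    p, q = pade(b, len(b) // 2)                          # rational continuation
    t, w = np.polynomial.laguerre.laggauss(quad_order)   # nodes/weights for e^{-t}
    return float(np.sum(w * p(x * t) / q(x * t)))

# Divergent test series: a_k = k! * c_k with c_k the Taylor coefficients of
# (1+t)^{-1/2}; its Borel-Laplace sum is int_0^inf e^{-t} (1+x t)^{-1/2} dt.
c = [1.0]
for k in range(1, 25):
    c.append(-c[-1] * (2 * k - 1) / (2 * k))
a = [factorial(k) * ck for k, ck in enumerate(c)]
print(borel_laplace_sum(a, 0.1))
print(quad(lambda t: np.exp(-t) / np.sqrt(1 + 0.1 * t), 0, np.inf)[0])
```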
In this work we study diffeomorphism-invariant metric-affine theories of gravity from the point of view of self-interacting field theories on top of Minkowski spacetime (or another background). We review how standard metric theories couple to their own energy-momentum tensor, and discuss the generalization of these ideas when torsion and nonmetricity are also present. We review the computation of the corresponding currents through the Hilbert and canonical (Noether) prescriptions, emphasizing the potential ambiguities arising from both. We also provide the extension of this consistent self-coupling procedure to the vielbein formalism, so that fermions can be included in the matter sector. In addition, we clarify some subtle issues regarding previous discussions of the self-coupling problem for metric theories, both General Relativity and its higher derivative generalizations. We also suggest a connection between the Lovelock theorem and the ambiguities in the bootstrapping procedure arising from those in the definition of conserved currents.
Virtual models are important training and teaching tools in medical imaging research. We introduce a workflow that converts volumetric medical imaging data (such as that generated by computed tomography (CT)) into computer-based models on which tool-tissue interaction can be simulated. This process is broken up into two steps: image segmentation and tool-tissue interaction. We demonstrate the utility of this streamlined workflow by creating models of a liver. An FE model for probe insertion has been developed using cohesive elements to simulate the tissue rupture phenomenon. FE-based simulations are performed and the results are compared with those of various published studies. An analytical model that governs the reaction forces on the needle tip has also been developed. Expressions for the reaction forces acting in both the axial and transverse directions on a symmetric-tip needle and a bevel-tip needle are derived. These models account for local tissue deformation by the needle and the frictional forces generated by the inclusion of the needle in the tissue, but do not consider the tissue rupture toughness parameter Gc, which accounts for some of the differences between the FE-based simulations and the analytical results.
We introduce two three-sided adaptive systems as toy models to mimic the exchange of commodities between buyers and sellers. These models are simple extensions of the minority game, exhibiting similar behaviour as well as some new features. The main difference between our two models is that in the first the three sides are equivalent, while in the second one choice appears as a compromise between the other two sides. Both models are investigated numerically and compared with the original minority game.
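A minimal sketch of one plausible three-choice generalization of the minority game, with inductive strategies scored by virtual payoffs; the two models in the paper differ in how the third side is positioned, which this toy version does not capture:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, S, T = 301, 3, 2, 2000        # agents, memory, strategies per agent, rounds
H = 3 ** M                          # number of possible 3-ary histories

# Each strategy maps a history to one of the three sides.
strategies = rng.integers(0, 3, size=(N, S, H))
scores = np.zeros((N, S))
history = 0
attendance = []

for _ in range(T):
    best = scores.argmax(axis=1)                    # each agent plays its best strategy
    choices = strategies[np.arange(N), best, history]
    counts = np.bincount(choices, minlength=3)
    minority = counts.argmin()                      # least-crowded side wins
    # Virtual payoff: every strategy that would have picked the minority gains a point.
    scores += (strategies[:, :, history] == minority)
    history = (history * 3 + minority) % H          # slide the 3-ary history window
    attendance.append(counts)

print(np.array(attendance[-5:]))   # fluctuations around N/3 per side
```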
We construct a family of short-range resonating-valence-bond wave functions on a layered cubic lattice, allowing for a tunable anisotropy in the amplitudes assigned to nearest-neighbour valence bonds along one axis. Monte Carlo simulations reveal that four phases are stabilized over the full range of the anisotropy parameter. They are separated from one another by a sequence of continuous quantum phase transitions. An antiferromagnetic phase, centred on the perfect isotropy point, intervenes between two distinct quantum spin liquid states. One of them is continuously deformable to the two-dimensional U(1) spin liquid, which is known to exhibit critical bond correlations. The other has both spin and bond correlations that decay exponentially. The existence of this second phase is proof that, contrary to expectations, neither a bipartite lattice structure nor a conventional Marshall sign rule is an impediment to realizing a fully gapped quantum spin liquid.
Understanding the features learned by deep models is important from a model trust perspective, especially as deep systems are deployed in the real world. Most recent approaches for deep feature understanding or model explanation focus on highlighting input data features that are relevant for classification decisions. In this work, we instead take the perspective of relating deep features to well-studied, hand-crafted features that are meaningful for the application of interest. We propose a methodology and set of systematic experiments for exploring deep features in this setting, where input feature importance approaches for deep feature understanding do not apply. Our experiments focus on understanding which hand-crafted and deep features are useful for the classification task of interest, how robust these features are for related tasks and how similar the deep features are to the meaningful hand-crafted features. Our proposed method is general to many application areas and we demonstrate its utility on orchestral music audio data.
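One common way to quantify how similar deep features are to hand-crafted ones is linear centered kernel alignment (CKA); the paper does not necessarily use this exact measure, so the sketch below is illustrative:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two feature matrices
    (rows = examples); values near 1 indicate highly overlapping
    representations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    xy = np.linalg.norm(X.T @ Y, "fro") ** 2
    xx = np.linalg.norm(X.T @ X, "fro")
    yy = np.linalg.norm(Y.T @ Y, "fro")
    return xy / (xx * yy)

rng = np.random.default_rng(0)
deep = rng.normal(size=(200, 64))              # e.g., penultimate-layer activations
hand = deep[:, :8] @ rng.normal(size=(8, 5))   # hand-crafted features partly spanned by them
print(linear_cka(deep, hand))                  # appreciable value -> representations overlap
```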
AU Mic is a young, very active M dwarf star with a debris disk and at least one transiting Neptune-size planet. Here we present detailed analysis of the magnetic field of AU Mic based on previously unpublished high-resolution optical and near-infrared spectropolarimetric observations. We report a systematic detection of circular and linear polarization signatures in the stellar photospheric lines. Tentative Zeeman Doppler imaging modeling of the former data suggests a non-axisymmetric global field with a surface-averaged strength of about 90 G. At the same time, linear polarization observations indicate the presence of a much stronger $\approx$2 kG axisymmetric dipolar field, which contributes no circular polarization signal due to the equator-on orientation of AU Mic. A separate Zeeman broadening and intensification analysis allowed us to determine a mean field modulus of 2.3 and 2.1 kG from the Y- and K-band atomic lines respectively. These magnetic field measurements are essential for understanding environmental conditions within the AU Mic planetary system.
The CHANG-ES galaxy sample consists of 35 nearby edge-on galaxies that have been observed using the VLA at 1.6 GHz and 6.0 GHz. Here we present the third data release of our sample, namely the B-configuration 1.6 GHz sample. In addition, we make available the band-to-band spectral index maps between 1.6 GHz and 6.0 GHz, the latter taken in the matching-resolution C-configuration. The images can be downloaded from https://www.queensu.ca/changes. These are our highest resolution images ($\approx$ 3 arcsec), and we examine the possible presence of low-luminosity active galactic nuclei in the sample as well as some in-disk structure. New features can be seen in the spectral index maps that are masked in the total intensity emission, including hidden spiral arms in NGC 3448 and two previously unknown radio lobes on either side of the nucleus of NGC 3628. Our AGN detection rate, using only radio criteria, is 55%, which we take as a lower limit because some weaker embedded AGNs are likely present and could be revealed at higher resolution. Archival XMM-Newton data were used to search for further fingerprints of AGNs in the studied sample. In galaxy disks, discrete regions of flat spectral index are seen, likely due to a thermal emission fraction that is higher than the global average.
Dynamic games have emerged as a powerful paradigm for multi-robot planning, for which safety constraint satisfaction is crucial. Constrained stochastic games are of particular interest, as real-world robots need to operate and satisfy constraints under uncertainty. Existing methods for solving stochastic games handle chance constraints using exponential penalties with hand-tuned weights; however, finding a suitable penalty weight is nontrivial and requires trial and error. In this paper, we propose the chance-constrained iterative linear-quadratic stochastic games (CCILQGames) algorithm. CCILQGames solves chance-constrained stochastic games using the augmented Lagrangian method. We evaluate our algorithm in three autonomous driving scenarios: merging, intersection, and roundabout. Experimental results and Monte Carlo tests show that CCILQGames can generate safe and interactive strategies in stochastic environments.
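To make the augmented Lagrangian structure concrete, here is a generic outer loop for inequality constraints g_i(x) <= 0; the inner solver is a hypothetical callable standing in for the iterative LQ game solve, and the update rules are the textbook ones:

```python
import numpy as np

def augmented_lagrangian_solve(solve_unconstrained, constraints, x0,
                               mu=1.0, iters=10, mu_growth=2.0):
    """Textbook augmented Lagrangian outer loop for constraints g_i(x) <= 0.
    `solve_unconstrained(penalty, x)` is a hypothetical inner solver (in the
    CCILQGames setting, an iterative LQ game solve)."""
    lam = np.zeros(len(constraints))
    x = x0
    for _ in range(iters):
        def penalty(x, lam=lam, mu=mu):
            g = np.array([gi(x) for gi in constraints])
            return np.sum(np.maximum(0.0, lam + mu * g) ** 2 - lam ** 2) / (2 * mu)
        x = solve_unconstrained(penalty, x)
        g = np.array([gi(x) for gi in constraints])
        lam = np.maximum(0.0, lam + mu * g)       # multiplier update
        mu *= mu_growth                            # tighten the penalty
    return x, lam

# Toy check: minimize (x-2)^2 subject to x <= 1 (optimum x*=1, lambda*=2),
# with brute-force grid search standing in for the inner solver.
grid = np.linspace(-3.0, 3.0, 6001)
solve = lambda pen, x: np.array([min(grid, key=lambda z: (z - 2.0) ** 2
                                                        + pen(np.array([z])))])
x, lam = augmented_lagrangian_solve(solve, [lambda x: x[0] - 1.0], np.array([0.0]))
print(x, lam)
```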
We discuss the consequences of spin current conservation in systems with SU(2) spin symmetry that is spontaneously broken by partial magnetic order, using a momentum-space approach. The long-distance interaction is mediated by Goldstone magnons, whose interaction is expressed in terms of the electron Green's functions. There is also a Higgs mode, whose excitation energy can be calculated. The case of fast magnons obeying a linear dispersion relation in three spatial dimensions admits a nonperturbative treatment using the Gribov equation, and the solution exhibits singular behaviour that can be interpreted as a tower of spin-1 electronic excitations. This occurs near the Mott insulator state. The electrons are more free in the case of slow magnons, where the perturbative corrections are less singular at the thresholds. We then turn our attention to the problem of high-Tc superconductivity through a discussion of the stability of the antiferromagnetic ground state in two spatial dimensions. We argue that this stability is caused by an effective mixing of the Goldstone and Higgs modes, which in turn is caused by an effective Goldstone-boson condensation. The instability of the antiferromagnetic system is analyzed by studying the non-perturbative behaviour of the Higgs boson self-energy using the Dyson-Schwinger equations.
In our ongoing deep infrared imaging search for faint wide secondaries of planet-candidate host stars, we have astrometrically confirmed that a low-mass object is co-moving with HD89744, a companion candidate already suggested by Wilson et al. (2000). The derivation of the common proper motion of HD89744 and its companion, which are separated by 62.996+-0.035 arcsec, is based on the roughly 5-year epoch difference between 2MASS and our own UKIRT/UFTI images. The companion's effective temperature is about 2200 K, and its mass lies in the range 0.072 to 0.081 Msun, depending on the evolutionary model. Therefore, HD89744B is either a very-low-mass stellar companion or a massive brown dwarf.
We describe a new high-precision measurement of the production cross-section for the eta' meson in proton-proton collisions at five beam momenta in the low excess-energy region Q, conducted at the COSY-11 detection system, together with updated results of all previous COSY-11 measurements of the cross-section for pp --> pp eta'.
We develop ambitwistor string theories for 4 dimensions to obtain new formulae for tree-level gauge and gravity amplitudes with arbitrary amounts of supersymmetry. Ambitwistor space is the space of complex null geodesics in complexified Minkowski space, and in contrast to earlier ambitwistor strings, we use twistors rather than vectors to represent this space. Although superficially similar to the original twistor string theories of Witten, Berkovits and Skinner, these theories differ in the assignment of worldsheet spins of the fields, rely on both twistor and dual twistor representatives for the vertex operators, and use the ambitwistor procedure for calculating correlation functions. Our models are much more flexible, no longer requiring maximal supersymmetry, and the resulting formulae for amplitudes are simpler, having substantially reduced moduli. These are supported on the solutions to the scattering equations refined according to MHV degree and can be checked by comparison with corresponding formulae of Witten and of Cachazo and Skinner.
We show a fractal uncertainty principle with exponent $1/2-\delta+\epsilon$, $\epsilon>0$, for Ahlfors-David regular subsets of $\mathbb R$ of dimension $\delta\in (0,1)$. This improves over the volume bound $1/2-\delta$, and $\epsilon$ is estimated explicitly in terms of the regularity constant of the set. The proof uses a version of techniques originating in the works of Dolgopyat, Naud, and Stoyanov on spectral radii of transfer operators. Here the group invariance of the set is replaced by its fractal structure. As an application, we quantify the result of Naud on spectral gaps for convex co-compact hyperbolic surfaces and obtain a new spectral gap for open quantum baker maps.
CONTEXT. Dynamical interactions in young stellar clusters can eject massive stars early in their lives and significantly alter their mass functions. If all of the most massive stars are lost, we are left with an orphan cluster. AIMS. We study the Bermuda cluster (Villafranca O-014 NW), the most significant young stellar group in the North America and Pelican nebulae, and the massive stars that may have been ejected from it, to test whether it has been orphaned. METHODS. We use Gaia EDR3 parallaxes and proper motions to search for walkaway/runaway stars in the vicinity of the North America and Pelican nebulae. The candidates are analyzed with spectroscopy and photometry to assess their nature, and their trajectories are traced back in time to determine when they left the Bermuda cluster. RESULTS. We detect three ejection events (the Bajamar, Toronto, and HD 201 795 events) that expelled 5, 2, and 2 systems, respectively, or 6, 3, and 3 stars if we count the individual components in spectroscopic/eclipsing binaries. The events took place 1.611$\pm$0.011 Ma, 1.496$\pm$0.044 Ma, and 1.905$\pm$0.037 Ma ago, respectively, but our analysis is marginally consistent with the first two being simultaneous. We detect bow shocks in WISE images associated with four of the ejected systems. Combining the three events, the Bermuda cluster has lost >200 M_Sol, including its three most massive stars, so it can be considered an orphan cluster. One consequence is that the present-day mass function (PDMF) of the cluster has been radically altered, from its top-heavy initial value to one compatible with a Kroupa-like function. Another is that the cluster is currently expanding, with a dynamical time scale consistent with the ejection events being the cause. A scenario in which the Bermuda cluster was formed in a conveyor-belt fashion over several hundreds of ka, or even 1 Ma, is consistent with all the observables. [ABRIDGED]
Large Language Models (LLMs) have an unrivaled and invaluable ability to "align" their output to a diverse range of human preferences, by mirroring them in the text they generate. The internal characteristics of such models, however, remain largely opaque. This work presents the Injectable Realignment Model (IRM) as a novel approach to language model interpretability and explainability. Inspired by earlier work on Neural Programming Interfaces, we construct and train a small network -- the IRM -- to induce emotion-based alignments within a 7B parameter LLM architecture. The IRM outputs are injected via layerwise addition at various points during the LLM's forward pass, thus modulating its behavior without changing the weights of the original model. This isolates the alignment behavior from the complex mechanisms of the transformer model. Analysis of the trained IRM's outputs reveals a curious pattern. Across more than 24 training runs and multiple alignment datasets, patterns of IRM activations align themselves in striations associated with a neuron's index within each transformer layer, rather than being associated with the layers themselves. Further, a single neuron index (1512) is strongly correlated with all tested alignments. This result, although initially counterintuitive, is directly attributable to design choices present within almost all commercially available transformer architectures, and highlights a potential weak point in Meta's pretrained Llama 2 models. It also demonstrates the value of the IRM architecture for language model analysis and interpretability. Our code and datasets are available at https://github.com/DRAGNLabs/injectable-alignment-model
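A minimal sketch of the injection mechanism, assuming a PyTorch-style forward hook that adds a small trained network's output to a frozen host layer's activations; all names and sizes here are illustrative, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class Injector(nn.Module):
    """Tiny stand-in for an IRM-style injectable network: its output is added
    to a host layer's hidden states, steering behavior without touching the
    frozen host weights."""
    def __init__(self, hidden_size):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden_size, 64), nn.Tanh(),
                                 nn.Linear(64, hidden_size))
    def forward(self, h):
        return self.net(h)

hidden = 32
host_layer = nn.Linear(hidden, hidden)    # stands in for one transformer block
irm = Injector(hidden)

def inject(module, inputs, output):
    return output + irm(output)           # layerwise additive injection

handle = host_layer.register_forward_hook(inject)
x = torch.randn(4, hidden)
print(host_layer(x).shape)                # (4, 32); output now carries the injection
handle.remove()
```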
Acoustic unit discovery (AUD) is the process of automatically identifying a categorical acoustic unit inventory from speech and producing the corresponding acoustic unit tokenizations. AUD provides an important avenue for unsupervised acoustic model training in a zero-resource setting, where expert-provided linguistic knowledge and transcribed speech are unavailable. To further facilitate the zero-resource AUD process, in this paper we demonstrate that acoustic feature representations can be significantly improved by (i) performing linear discriminant analysis (LDA) in an unsupervised, self-trained fashion, and (ii) leveraging resources from other languages by building a multilingual bottleneck (BN) feature extractor for effective cross-lingual generalization. Moreover, we perform comprehensive evaluations of AUD efficacy on multiple downstream speech applications; their correlated performance suggests that AUD can be evaluated using alternative language resources when only a subset of the usual evaluation resources is available, as is typical in zero-resource applications.
Constant composition codes have been proposed as suitable coding schemes to solve the narrowband and impulse noise problems associated with powerline communication. In particular, a certain class of constant composition codes called frequency permutation arrays have been suggested as ideal, in some sense, for these purposes. In this paper we characterise a family of neighbour-transitive codes in Hamming graphs in which frequency permutation arrays play a central role. We also classify all the permutation codes generated by groups in this family.
Link prediction is a paradigmatic problem in network science with a variety of applications. In latent-space network models this problem boils down to ranking pairs of nodes in order of increasing latent distance between them. The network model with hyperbolic latent spaces has a number of attractive properties suggesting it should be a powerful tool for link prediction, but past work in this direction has reported mixed results. Here we perform a systematic investigation of the utility of latent hyperbolic geometry for link prediction in networks. We first show that some measures of link prediction accuracy are extremely sensitive to inaccuracies in the inference of latent hyperbolic coordinates of nodes, and we therefore develop a new coordinate inference method that maximizes the accuracy of such inference. Applying this method to synthetic and real networks, we then find that while a multitude of methods compete closely at predicting obvious, easy-to-predict links, with hyperbolic link prediction rarely the best among them, it is the best method, often by far, when the task is to predict less obvious missing links that are genuinely hard to predict. These links include missing links in incomplete networks with large fractions of missing links, missing links between nodes that do not have any common neighbors, and missing links between dissimilar nodes at large latent distances. Overall, these results suggest that the harder a specific link prediction task is, the more seriously one should consider using hyperbolic geometry.
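For reference, ranking node pairs by latent hyperbolic distance is straightforward once coordinates are inferred; the sketch below uses the standard hyperbolic law of cosines in the native disk representation:

```python
import numpy as np

def hyperbolic_distance(r1, th1, r2, th2):
    """Hyperbolic law of cosines (curvature -1) in the native disk."""
    dth = np.pi - abs(np.pi - abs(th1 - th2))      # angular separation in [0, pi]
    x = np.cosh(r1) * np.cosh(r2) - np.sinh(r1) * np.sinh(r2) * np.cos(dth)
    return np.arccosh(max(x, 1.0))

def rank_missing_links(r, theta, edges):
    """Rank all non-adjacent pairs by increasing latent hyperbolic distance:
    the closest pairs are the top candidates for missing links."""
    n = len(r)
    existing = {frozenset(e) for e in edges}
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
             if frozenset((i, j)) not in existing]
    return sorted(pairs, key=lambda p: hyperbolic_distance(r[p[0]], theta[p[0]],
                                                           r[p[1]], theta[p[1]]))

r = np.array([0.5, 1.0, 2.0, 2.5])
theta = np.array([0.0, 0.1, 3.0, 3.1])
print(rank_missing_links(r, theta, edges=[(0, 1), (2, 3)]))
```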