In a series of papers [22-24], Bufetov and Gorin introduced Schur generating functions, the Fourier transforms on the unitary group $U(N)$, to study the asymptotic behavior of random $N$-particle systems. We introduce and study the Jack generating functions of random $N$-particle systems. In special cases, these can be viewed as Fourier transforms on the Gelfand pairs $(GL_N(\mathbb R), O(N))$, $(GL_N(\mathbb C), U(N))$ and $(GL_N(\mathbb H), Sp(N))$. Our main results state that the law of large numbers and the central limit theorem for such particle systems are equivalent to certain conditions on the germ at unity of their Jack generating functions. Our main tool is the Nazarov-Sklyanin operators [50], which have Jack symmetric functions as their eigenfunctions. As applications, we derive asymptotics of Jack characters, prove a law of large numbers and a central limit theorem for the Littlewood-Richardson coefficients of zonal polynomials, and show that the fluctuations of the height functions of a general family of nonintersecting random walks are asymptotically equal to those of the pullback of the Gaussian free field on the upper half plane.
In most models of dark energy the structure formation stops when the accelerated expansion begins. In contrast, we show that the coupling of dark energy to dark matter may induce the growth of perturbations even in the accelerated regime. In particular, we show that this occurs in the models proposed to solve the cosmic coincidence problem, in which the ratio of dark energy to dark matter is constant. Moreover, if the dark energy couples only to dark matter and not to baryons, as required by the constraints imposed by local gravity measurements, the baryon fluctuations develop a constant, scale-independent, large-scale bias which is in principle directly observable.
A $d$-dimensional random array on a nonempty set $I$ is a stochastic process $\boldsymbol{X}=\langle X_s:s\in \binom{I}{d}\rangle$ indexed by the set $\binom{I}{d}$ of all $d$-element subsets of $I$. We obtain structural decompositions of finite, high-dimensional random arrays whose distribution is invariant under certain symmetries. Our first main result is a distributional decomposition of finite, (approximately) spreadable, high-dimensional random arrays whose entries take values in a finite set; the two-dimensional case of this result is the finite version of an infinitary decomposition due to Fremlin and Talagrand. Our second main result is a physical decomposition of finite, spreadable, high-dimensional random arrays with square-integrable entries that is the analogue of the Hoeffding/Efron--Stein decomposition. All proofs are effective. We also present applications of these decompositions in the study of concentration of functions of finite, high-dimensional random arrays.
We carry out a thorough analysis of a class of cosmological spacetimes which admit three space-like Killing vectors of Bianchi class B and contain electromagnetic fields. Using dynamical system analysis, we show that a family of vacuum plane-wave solutions of the Einstein-Maxwell equations is the stable attractor for expanding universes. The phase dynamics are investigated in detail for particular symmetric models. We integrate the system exactly for some special cases to confirm the qualitative features. Some of the obtained solutions have not, to the best of our knowledge, been presented previously. Finally, based on those solutions, we discuss the relation between these homogeneous models and perturbations of open FLRW universes. We argue that the vacuum plane-wave modes correspond to a certain long-wavelength limit of electromagnetic perturbations.
This article is concerned with the fluctuations and the concentration properties of a general class of discrete generation and mean field particle interpretations of nonlinear measure valued processes. We combine an original stochastic perturbation analysis with a concentration analysis for triangular arrays of conditionally independent random sequences, which may be of independent interest. Under some additional stability properties of the limiting measure valued processes, uniform concentration properties, with respect to the time parameter, are also derived. The concentration inequalities presented here generalize the classical Hoeffding, Bernstein and Bennett inequalities for independent random sequences to interacting particle systems, yielding new results for this class of models. We illustrate these results in the context of McKean-Vlasov-type diffusion models, McKean collision-type models of gases, and a class of Feynman-Kac distribution flows arising in stochastic engineering sciences and in molecular chemistry.
The low-cost, user-friendly, and convenient nature of Automatic Fingerprint Recognition Systems (AFRS) makes them suitable for a wide range of applications. This widespread use of AFRS, however, also makes them vulnerable to various security threats. Presentation Attack (PA), or spoofing, is one such threat, caused by presenting a spoof of a genuine fingerprint to the sensor of the AFRS. Fingerprint Presentation Attack Detection (FPAD) is a countermeasure intended to protect AFRS against fake or spoof fingerprints created using various fabrication materials. In this paper, we propose a Convolutional Neural Network (CNN) based technique that uses a Generative Adversarial Network (GAN) to augment the dataset with spoof samples generated by the proposed Open Patch Generator (OPG). The OPG is capable of generating realistic fingerprint samples that bear no resemblance to the existing spoof fingerprint samples generated with other materials. The augmented dataset is fed to a DenseNet classifier, which increases the performance of the Presentation Attack Detection (PAD) module against the various real-world attacks possible with unknown spoof materials. Experimental evaluations of the proposed approach are carried out on the Liveness Detection (LivDet) 2015, 2017, and 2019 competition databases. Overall accuracies of 96.20\%, 94.97\%, and 92.90\% have been achieved on the LivDet 2015, 2017, and 2019 databases, respectively, under the LivDet protocol scenarios. The performance of the proposed PAD model is also validated in the cross-material and cross-sensor attack paradigms, which further exhibits its capability to be used under real-world attack scenarios.
We provide a classification of half-supersymmetric branes in quarter-maximal supergravity theories with scalars parametrising coset manifolds. Guided by the results previously obtained for the half-maximal theories, we are able to show that half-supersymmetric branes correspond to the real longest weights of the representations of the brane charges, where the reality properties of the weights are determined from the Tits-Satake diagrams associated to the global symmetry groups. We show that the resulting brane structure is universal for all theories that can be uplifted to six dimensions. We also show that when viewing these theories as low-energy theories for the suitably compactified heterotic string, the classification we obtain is in perfect agreement with the wrapping rules derived in previous works for the same theory compactified on tori. Finally, we relate the branes to the R-symmetry representations of the central charges and we show that in general the degeneracies of the BPS conditions are twice those of the half-maximal theories and four times those of the maximal ones.
We present a generic digit serial method (DSM) to compute the digits of a real number $V$. Bounds on these digits, and on the errors in the associated estimates of $V$ formed from these digits, are derived. To illustrate our results, we derive such bounds for a parameterized family of high-radix algorithms for division and square root. These bounds enable a DSM designer to determine, for example, whether a given choice of parameters allows rapid formation and rounding of its approximation to $V$. All our claims are mechanically verified using the HOL-Light theorem prover, and are included in the appendix with commentary.
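As a toy illustration of the digit-serial idea described above (our own sketch, not the verified HOL-Light development: plain radix-10 restoring division rather than the paper's parameterized high-radix algorithms):

```python
from fractions import Fraction

def dsm_divide(a, b, radix=10, n_digits=6):
    """Digit-serial division: emit radix-r digits of V = a/b (with 0 <= a < b),
    keeping an exact partial remainder so the error bound is easy to check."""
    digits = []
    w = Fraction(a)  # partial remainder
    for _ in range(n_digits):
        w *= radix
        d = int(w // b)  # next digit, in {0, ..., radix - 1}
        digits.append(d)
        w -= d * b
    # the estimate of V formed from the digits emitted so far
    estimate = sum(Fraction(d, radix ** (k + 1)) for k, d in enumerate(digits))
    return digits, estimate

digits, est = dsm_divide(1, 3)
assert digits == [3, 3, 3, 3, 3, 3]
# after n digits the estimate is within radix**(-n) of V
assert abs(Fraction(1, 3) - est) < Fraction(1, 10 ** 6)
```

Exact rational arithmetic via `Fraction` keeps the partial remainder free of rounding noise, so the bound on the estimate error can be asserted exactly.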
Motivated by the action functional for bosonic strings with an extrinsic curvature term, we introduce an action functional for maps between Riemannian manifolds that interpolates between the actions for harmonic and biharmonic maps. Critical points of this functional will be called interpolating sesqui-harmonic maps. In this article we initiate a rigorous mathematical treatment of this functional and study various basic aspects of its critical points.
We construct inequalities between R\'{e}nyi entropy and the indexes of coincidence of probability distributions, based on which we obtain improved state-dependent entropic uncertainty relations for general symmetric informationally complete positive operator-valued measures (SIC-POVM) and mutually unbiased measurements (MUM) on finite dimensional systems. We show that our uncertainty relations for general SIC-POVMs and MUMs can be tight for sufficiently mixed states, and moreover, comparisons to the numerically optimal results are made via information diagrams.
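The elementary identity underlying such inequalities is that the order-2 Rényi entropy equals minus the logarithm of the index of coincidence. A quick numerical check (our own illustration, not the paper's improved bounds):

```python
import math

def index_of_coincidence(p):
    """Index of coincidence: probability that two independent draws collide."""
    return sum(x * x for x in p)

def renyi_entropy(p, alpha):
    """Order-alpha Renyi entropy (natural log), for alpha != 1."""
    return math.log(sum(x ** alpha for x in p)) / (1.0 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
ic = index_of_coincidence(p)   # 0.25 + 0.0625 + 0.015625 + 0.015625 = 0.34375
h2 = renyi_entropy(p, 2)
assert math.isclose(h2, -math.log(ic))
```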
We study interstellar dust evolution in various environments by means of chemical evolution models for galaxies of different morphological types. We start from the formalism developed by Dwek (1998) to study dust evolution in the solar neighbourhood and extend it to ellipticals and dwarf irregular galaxies, showing how the evolution of the dust production rates and of the dust fractions depends on the galactic star formation history. The dust fractions observed in the solar neighbourhood can be reproduced by assuming that dust destruction depends on the condensation temperatures T_c of the elements. In elliptical galaxies, type Ia SNe are the major dust factories in the last 10 Gyr. With our models, we successfully reproduce the dust masses (~10^6 M_sun) observed in local ellipticals by means of recent FIR and SCUBA observations. We show that dust is helpful in solving the iron discrepancy observed in the hot gaseous halos surrounding local ellipticals. In dwarf irregulars, we show how a precise determination of the dust depletion pattern could be useful to put solid constraints on the dust condensation efficiencies. Our results will be helpful to study the spectral properties of dust grains in local and distant galaxies.
Graph Neural Networks (GNNs) are a promising approach for applications with non-Euclidean data. However, training GNNs on large-scale graphs with hundreds of millions of nodes is both resource and time consuming. Different from DNNs, GNNs usually have larger memory footprints, and thus the GPU memory capacity and PCIe bandwidth are the main resource bottlenecks in GNN training. To address this problem, we present BiFeat: a graph feature quantization methodology that accelerates GNN training by significantly reducing the memory footprint and PCIe bandwidth requirement, so that GNNs can take full advantage of GPU computing capabilities. Our key insight is that, unlike DNNs, GNNs are less prone to the information loss of input features caused by quantization. We identify the main accuracy impact factors in graph feature quantization and theoretically prove that BiFeat training converges to a network whose loss is within $\epsilon$ of the optimal loss of the uncompressed network. We perform extensive evaluation of BiFeat using several popular GNN models and datasets, including GraphSAGE on MAG240M, the largest public graph dataset. The results demonstrate that BiFeat achieves a compression ratio of more than 30 and improves GNN training speed by 200%-320% with marginal accuracy loss. In particular, BiFeat sets a record by training GraphSAGE on MAG240M within one hour using only four GPUs.
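The core idea, that node features tolerate aggressive quantization, can be sketched with plain uniform scalar quantization (a toy stand-in of ours; BiFeat's actual codec and convergence analysis are more involved):

```python
import numpy as np

def quantize_features(x, n_bits=4):
    """Uniform scalar quantization of node features: float32 -> small ints.
    A simple stand-in for the kind of compression BiFeat applies."""
    lo, hi = x.min(), x.max()
    levels = 2 ** n_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((x - lo) / scale).astype(np.uint8)  # assumes n_bits <= 8
    return q, lo, scale

def dequantize_features(q, lo, scale):
    return q.astype(np.float32) * scale + lo

rng = np.random.default_rng(0)
feats = rng.standard_normal((1000, 128)).astype(np.float32)
q, lo, scale = quantize_features(feats, n_bits=4)
recon = dequantize_features(q, lo, scale)
# 4-bit codes stored as uint8 already give a 4x memory reduction vs float32
# (8x if two codes are packed per byte), at a bounded reconstruction error:
assert np.max(np.abs(recon - feats)) <= scale / 2 + 1e-6
```

Quantized features also shrink the host-to-GPU transfers proportionally, which is the PCIe-bandwidth side of the bottleneck the abstract mentions.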
In this paper we explore the phase space structures governing isomerization dynamics on a potential energy surface with four wells and an index-2 saddle. For this model, we analyze the influence that coupling both degrees of freedom of the system and breaking the symmetry of the problem have on the geometrical template of phase space structures that characterizes reaction. To achieve this goal we apply the method of Lagrangian descriptors, a technique with the capability of unveiling the key invariant manifolds that determine transport processes in nonlinear dynamical systems. This approach reveals with extraordinary detail the intricate geometry of the isomerization routes interconnecting the different potential wells, and provides us with valuable information to distinguish between initial conditions that undergo sequential and concerted isomerization.
We investigate the effect of interchain repulsive interaction on the pairing symmetry competition in quasi-one-dimensional organic superconductors (TMTSF)$_2$X by applying the random phase approximation and quantum Monte Carlo calculations to an extended Hubbard model. We find that the interchain repulsive interaction enhances the $2k_F$ charge fluctuations, thereby making it realistic for spin-triplet $f$-wave pairing to dominate over spin-singlet $d$-wave pairing.
Several experiments are about to measure the CP asymmetry for $B \to \psi K_S$ and other decays. The standard model together with the Kobayashi-Maskawa ansatz for CP violation predicts the sign as well as the magnitude for this asymmetry. In this note we elucidate the physics and conventions which lead to the prediction for the sign of the asymmetry.
This paper derives the non-analytic solution to the Fokker-Planck equation of fractional Brownian motion using the method of Laplace transform. Subsequently, by considering the fundamental solution of the non-analytic solution, this paper obtains the transition probability density function of the random variable described by It\^o's stochastic ordinary differential equation of fractional Brownian motion. Furthermore, this paper applies the derived transition probability density function to the Cox-Ingersoll-Ross model governed by fractional Brownian motion instead of the usual Brownian motion.
We propose a method for directly probing the dynamics of disentanglement of an initial two-qubit entangled state, under the action of a reservoir. We show that it is possible to detect disentanglement, for experimentally realizable examples of decaying systems, through the measurement of a single observable, which is invariant throughout the decay. The systems under consideration may lead to either finite-time or asymptotic disentanglement. A general prescription for measuring this observable, which yields an operational meaning to entanglement measures, is proposed, and exemplified for cavity quantum electrodynamics and trapped ions.
We present an analysis of the colour-magnitude relation for a sample of 56 X-ray underluminous Abell clusters, aiming to unveil properties that may elucidate the evolutionary stages of the galaxy populations that compose such systems. To do so, we compared the parameters of their colour-magnitude relations with the ones found for another sample of 50 "normal" X-ray emitting Abell clusters, both selected in an objective way. The $g$ and $r$ magnitudes from the SDSS-DR7 were used for constructing the colour-magnitude relations. We found that both samples show the same trend: the red sequence slopes change with redshift, but the slopes for X-ray underluminous clusters are always flatter than those for the normal clusters, by a difference of about 69% along the surveyed redshift range of 0.05 $\le z <$ 0.20. Also, the intrinsic scatter of the colour-magnitude relation was found to grow with redshift for both samples but, for the X-ray underluminous clusters, this is systematically larger by about 28%. By applying the Cram\'er test to the result of this comparison between X-ray normal and underluminous cluster samples, we get probabilities of 92% and 99% that the red sequence slope and intrinsic scatter distributions, respectively, differ, in the sense that X-ray underluminous clusters red sequences show flatter slopes and higher scatters in their relations. No significant differences in the distributions of red-sequence median colours are found between the two cluster samples. This points to X-ray underluminous clusters being younger systems than normal clusters, possibly in the process of accreting groups of galaxies, individual galaxies and gas.
Multiple gauge theories predict the presence of cosmic strings with different mass densities $G\mu/c^2$. We derive an equation governing the perturbations of a rotating black hole pierced by a straight, infinitely long cosmic string along its axis of rotation and calculate the quasinormal-mode frequencies of such a black hole. We then carry out parameter estimation on the first detected gravitational-wave event, GW150914, by hypothesizing that there is a string piercing through the remnant, yielding a constraint of $G\mu/c^2 <3.8\times 10^{-3}$ at the 90\% confidence interval with a comparable Bayes factor with an analysis for a Kerr black hole without a string. In contrast to existing studies which focus on the mutual intersection of cosmic strings, or the cosmic string network, our work focuses on the intersection of a cosmic string with a black hole, with characteristics which can be identified in binary coalescence signals.
Numerical simulations are presented to study the stability of gaps opened by giant planets in 3D self-gravitating disks. In weakly self-gravitating disks, a few vortices develop at the gap edge and merge on orbital time-scales. The result is one large but weak vortex with Rossby number -0.01. In moderately self-gravitating disks, more vortices develop and their merging is resisted on dynamical time-scales. Self-gravity can sustain multi-vortex configurations, with Rossby number -0.2 to -0.1, over a time-scale of order 100 orbits. Self-gravity also enhances the vortex vertical density stratification, even in disks with initial Toomre parameter of order 10. However, vortex formation is suppressed in strongly self-gravitating disks and replaced by a global spiral instability associated with the gap edge which develops during gap formation.
We present a field report of the CTU-CRAS-NORLAB team from the Subterranean Challenge (SubT) organised by the Defense Advanced Research Projects Agency (DARPA). The contest seeks to advance technologies that would improve the safety and efficiency of search-and-rescue operations in GPS-denied environments. During the contest rounds, teams of mobile robots have to find specific objects while operating in environments with limited radio communication, e.g. mining tunnels, underground stations or natural caverns. We present the heterogeneous exploration robotic system of the CTU-CRAS-NORLAB team, which placed third in the SubT Tunnel and Urban Circuit rounds and surpassed the performance of all other non-DARPA-funded teams. The field report describes the team's hardware, sensors, algorithms and strategies, and discusses the lessons learned by participating in the DARPA SubT contest.
Pre-trained language models for code (PLMCs) have gained attention in recent research. These models are pre-trained on large-scale datasets using multi-modal objectives. However, fine-tuning them requires extensive supervision and is limited by the size of the dataset provided. We address this issue by proposing a simple data augmentation framework. Our framework utilizes knowledge gained during the pre-training and fine-tuning stages to generate pseudo data, which is then used as training data for the next step. We incorporate this framework into state-of-the-art language models such as CodeT5, CodeBERT, and UnixCoder. The results show that our framework significantly improves PLMCs' performance in code-related sequence generation tasks, such as code summarization and code generation in the CodeXGLUE benchmark.
This paper describes SPINDLE - an open source Python module implementing an efficient and accurate parser for written Dutch that transforms raw text input to programs for meaning composition, expressed as {\lambda} terms. The parser integrates a number of breakthrough advances made in recent years. Its output consists of hi-res derivations of a multimodal type-logical grammar, capturing two orthogonal axes of syntax, namely deep function-argument structures and dependency relations. These are produced by three interdependent systems: a static type-checker asserting the well-formedness of grammatical analyses, a state-of-the-art, structurally-aware supertagger based on heterogeneous graph convolutions, and a massively parallel proof search component based on Sinkhorn iterations. Packed in the software are also handy utilities and extras for proof visualization and inference, intended to facilitate end-user utilization.
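Sinkhorn iterations, the basis of the proof search component named above, alternately normalize the rows and columns of a positive score matrix so it approaches a doubly stochastic matrix, a soft relaxation of the bipartite matching that links functions to their arguments. The sketch below is a generic illustration of ours, not SPINDLE's actual implementation:

```python
import numpy as np

def sinkhorn(scores, n_iters=200):
    """Sinkhorn normalization of a real score matrix: exponentiate for
    positivity, then alternate row and column normalization until the
    matrix is (approximately) doubly stochastic."""
    m = np.exp(scores)
    for _ in range(n_iters):
        m /= m.sum(axis=1, keepdims=True)  # row normalization
        m /= m.sum(axis=0, keepdims=True)  # column normalization
    return m

rng = np.random.default_rng(1)
m = sinkhorn(rng.standard_normal((5, 5)))
assert np.allclose(m.sum(axis=1), 1.0, atol=1e-6)
assert np.allclose(m.sum(axis=0), 1.0, atol=1e-6)
```

Each iteration is two batched reductions, which is what makes the scheme "massively parallel" on a GPU.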
It is shown that the N=4 superalgebra of the Dirac theory in Taub-NUT space has different unitary representations related among themselves through unitary U(2) transformations. In particular the SU(2) transformations are generated by the spin-like operators constructed with the help of the same covariantly constant Killing-Yano tensors which generate Dirac-type operators. A parity operator is defined and some explicit transformations which connect the Dirac-type operators among themselves are given. These transformations form a discrete group which is a realization of the quaternion discrete group. The fifth Dirac operator constructed using the non-covariant Killing-Yano tensor of the Taub-NUT space is quite special. This non-standard Dirac operator is connected with the hidden symmetry and is not equivalent to the Dirac-type operators of the standard N=4 supersymmetry.
In this work we consider the problem of optimal geographic placement of content in wireless cellular networks modelled by Poisson point processes. Specifically, for a typical user requesting some particular content whose popularity follows a given law (e.g. Zipf), we calculate the probability of finding the content cached in one of the base stations. Wireless coverage follows the usual signal-to-interference-and-noise ratio (SINR) model, or some variants of it. We formulate and solve the problem of an optimal randomized content placement policy that maximizes the user's hit probability. The result dictates that it is not always optimal to follow the standard policy "cache the most popular content, everywhere". In fact, our numerical results for three different coverage scenarios show that the optimal policy significantly increases the chances of a hit in the high-coverage regime, i.e., when the probabilities of coverage by more than just one station are high enough.
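A toy calculation illustrates why "cache the most popular content, everywhere" can be suboptimal. The setup below is our own and far simpler than the paper's SINR-based coverage model: two contents, every user covered by exactly two stations, and each station independently caching content $i$ with probability $b_i$ under a unit cache budget $b_0 + b_1 = 1$.

```python
def hit_probability(b, popularity, k=2):
    """Hit probability when the user is covered by k stations, each of which
    independently caches content i with probability b[i]; a miss occurs only
    if none of the k covering stations holds the requested content."""
    return sum(a * (1 - (1 - bi) ** k) for a, bi in zip(popularity, b))

popularity = [0.7, 0.3]  # Zipf-like popularities of the two contents

# deterministic policy: "cache the most popular content, everywhere"
det = hit_probability([1.0, 0.0], popularity)  # = 0.7

# randomized policy found by a simple grid search over the budget split
best = max(hit_probability([b, 1 - b], popularity)
           for b in (i / 1000 for i in range(1001)))

assert best > det  # randomization strictly improves the hit probability
```

With multiple covering stations it pays to diversify: the optimum here is near $b_0 = 0.7$, giving hit probability about 0.79 versus 0.70 for the deterministic policy.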
We recently showed that spin fluctuations of noncoplanar magnetic states can induce topological superconductivity in an adjacent normal metal [M{\ae}land et al., Phys. Rev. Lett. 130, 156002 (2023)]. The noncollinear nature of the spins was found to be essential for this result, while the necessity of noncoplanar spins was unclear. In this paper we show that magnons in coplanar, noncollinear magnetic states can mediate topological superconductivity in a normal metal. Two models of the Dzyaloshinskii-Moriya interaction are studied to illustrate the need for a sufficiently complicated Hamiltonian describing the magnetic insulator. The Hamiltonian, in particular the specific form of the Dzyaloshinskii-Moriya interaction, affects the magnons and by extension the effective electron-electron interaction in the normal metal. Symmetry arguments are applied to complement this discussion. We solve a linearized gap equation in the case of weak-coupling superconductivity. The result is a time-reversal-symmetric topological superconductor, as confirmed by calculating the topological invariant. In analogy with magnon-mediated superconductivity from antiferromagnets, Umklapp scattering enhances the critical temperature of superconductivity for certain Fermi momenta.
Identifying humans by their walking sequences, known as gait recognition, is a useful biometric understanding task, as it can be observed from a long distance and does not require cooperation from the subject. Two common modalities used for representing the walking sequence of a person are silhouettes and joint skeletons. Silhouette sequences, which record the boundary of the walking person in each frame, may suffer from varying appearance caused by objects carried by the person and their clothing. Framewise joint detections are noisy and introduce jitter that is inconsistent across sequential detections. In this paper, we combine the silhouettes and skeletons and refine the framewise joint predictions for gait recognition. Using temporal information from the silhouette sequences, we show that the refined skeletons can improve gait recognition performance without extra annotations. We compare our methods on four public datasets, CASIA-B, OUMVLP, Gait3D and GREW, and show state-of-the-art performance.
A cataclysmic variable is a binary system consisting of a white dwarf that accretes material from a secondary object via the Roche-lobe mechanism. In the case of long enough observation, a detailed temporal analysis can be performed, allowing the physical properties of the binary system to be determined. We present an XMM-Newton observation of the dwarf nova HT Cas acquired to resolve the binary system eclipses and constrain the origin of the X-rays observed. We also compare our results with previous ROSAT and ASCA data. After the spectral analysis of the three EPIC camera signals, the observed X-ray light curve was studied with well known techniques and the eclipse contact points obtained. The X-ray spectrum can be described by thermal bremsstrahlung of temperature $kT_1=6.89 \pm 0.23$ keV plus a black-body component (upper limit) with temperature $kT_2=30_{-6}^{+8}$ eV. Neglecting the black-body, the bolometric absorption corrected flux is $F^{\rm{Bol}}=(6.5\pm 0.1)\times10^{-12}$ erg s$^{-1}$ cm$^{-2}$, which, for a distance of HT Cas of 131 pc, corresponds to a bolometric luminosity of $(1.33\pm 0.02)\times10^{31}$ erg s$^{-1}$. The study of the eclipse in the EPIC light curve permits us to constrain the size and location of the X-ray emitting region, which turns out to be close to the white dwarf radius. We measure an X-ray eclipse somewhat smaller (but only at a level of $\simeq 1.5 \sigma$) than the corresponding optical one. If this is the case, we have possibly identified the signature of either high latitude emission or a layer of X-ray emitting material partially obscured by an accretion disk.
We study the stability of a quasi-one-dimensional dipolar Bose-Einstein condensate (dBEC) that is perturbed by a weak lattice potential along its axis. Our numerical simulations demonstrate that systems exhibiting a roton-maxon structure destabilize readily when the lattice wavelength equals either half the roton wavelength or a low roton subharmonic. We apply perturbation theory to the Gross-Pitaevskii and Bogoliubov de Gennes equations to illustrate the mechanisms behind the instability threshold. The features of our stability diagram may be used as a direct measurement of the roton wavelength for quasi-one-dimensional geometries.
We consider a mollifying operator with variable step that, in contrast to the standard mollification, is able to preserve the boundary values of functions. We prove boundedness of the operator in all basic Lebesgue, Sobolev and BV spaces as well as corresponding approximation results. The results are then applied to extend recently developed theory concerning the density of convex intersections.
Network slicing is a promising technology that allows mobile network operators to efficiently serve various emerging use cases in 5G. It is challenging to optimize the utilization of network infrastructures while guaranteeing the performance of network slices according to service level agreements (SLAs). To solve this problem, we propose SafeSlicing, which introduces a new constraint-aware deep reinforcement learning (CaDRL) algorithm to learn the optimal resource orchestration policy in two steps, i.e., offline training in a simulated environment and online learning with the real network system. To optimize the resource orchestration, we incorporate the constraints on the statistical performance of slices into the reward function using Lagrangian multipliers, and solve the Lagrangian relaxed problem via a policy network. To satisfy the constraints on the system capacity, we design a constraint network that maps the latent actions generated by the policy network to the orchestration actions, such that the total resources allocated to network slices do not exceed the system capacity. We prototype SafeSlicing on an end-to-end testbed developed using OpenAirInterface LTE, OpenDayLight-based SDN, and the CUDA GPU computing platform. The experimental results show that SafeSlicing reduces resource usage by more than 20% while meeting the SLAs of network slices, as compared with other solutions.
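The two mechanisms named above can be sketched in a few lines. The functions below are illustrative stand-ins of ours: in SafeSlicing the multipliers are updated during training and the constraint mapping is itself a trained network, whereas here we use fixed multipliers and a simple scaling projection.

```python
def lagrangian_reward(utility, slice_perf, sla_targets, multipliers):
    """Reward with Lagrangian penalty terms for SLA constraints:
    r = utility - sum_i lambda_i * max(0, target_i - perf_i)."""
    penalty = sum(lam * max(0.0, tgt - perf)
                  for lam, tgt, perf in zip(multipliers, sla_targets, slice_perf))
    return utility - penalty

def project_to_capacity(latent_actions, capacity):
    """Map latent (nonnegative) actions to allocations whose sum does not
    exceed the system capacity, by scaling down when necessary."""
    total = sum(latent_actions)
    if total <= capacity:
        return list(latent_actions)
    return [a * capacity / total for a in latent_actions]

alloc = project_to_capacity([4.0, 3.0, 3.0], capacity=5.0)
assert abs(sum(alloc) - 5.0) < 1e-9

r = lagrangian_reward(utility=1.0, slice_perf=[0.9, 0.95],
                      sla_targets=[0.95, 0.9], multipliers=[2.0, 2.0])
assert abs(r - 0.9) < 1e-9  # penalty 2 * 0.05 on the one violated slice
```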
We consider the planar three-body problem of planetary type and study the generation and continuation of periodic orbits, mainly asymmetric periodic orbits. Asymmetric orbits exist in the restricted circular three-body problem only in particular resonances called "asymmetric resonances". However, numerical studies showed that in the general three-body problem asymmetric orbits may exist not only for asymmetric resonances, but for other kinds, too. In this work, we show the existence of asymmetric periodic orbits in the elliptic restricted problem. These orbits are continued to the general problem and clarify the origin of many of its asymmetric periodic orbits. Also, we illustrate how the families of periodic orbits of the restricted circular problem and those of the elliptic one join smoothly and form families in the general problem, thereby verifying the scenario first described by Bozis and Hadjidemetriou (1976).
We show that the recently constructed complete and ``minimal'' third order meson-baryon effective chiral Lagrangian can be further reduced from 84 to 78 independent operators.
Circuit quantum electrodynamics systems are typically built from resonators and two-level artificial atoms, but the use of multi-level artificial atoms instead can enable promising applications in quantum technology. Here we present an implementation of a Josephson junction circuit designed to operate as a V-shape artificial atom. Based on the concept of two internal degrees of freedom, the device consists of two transmon qubits coupled by an inductance. The Josephson nonlinearity introduces a strong diagonal coupling between the two degrees of freedom that finds applications in quantum non-demolition readout schemes and in the realization of microwave cross-Kerr media based on superconducting circuits.
In this article, we analyze a nonlocal ring network of adaptively coupled phase oscillators. We observe a variety of frequency-synchronized states such as phase-locked, multicluster and solitary states. For an important subclass of the phase-locked solutions, the rotating waves, we provide a rigorous stability analysis. This analysis shows a strong dependence of their stability on the coupling structure and the wavenumber, a notable difference from an all-to-all coupled network. Despite the fact that solitary states have been observed in a plethora of dynamical systems, the mechanisms behind their emergence were largely unaddressed in the literature. Here, we show how solitary states emerge due to the adaptive feature of the network and classify several bifurcation scenarios in which these states are created and stabilized.
We present measurements of the transient photoconductivity in pentacene single crystals using optical-pump THz-probe spectroscopy. We have measured the temperature and fluence dependence of the mobility of the photoexcited charge carriers with picosecond resolution. The pentacene crystals were excited at 3.0 eV which is above the bandgap of ~2.2 eV and the induced change in the far-infrared transmission was measured. At 30 K, the carrier mobility is mu ~ 0.4 cm^2/Vs and decreases to mu ~ 0.2 cm^2/Vs at room temperature. The transient terahertz signal reveals the presence of free carriers that are trapped on the timescale of a few ps or less, possibly through the formation of excitons, small polarons, or trapping by impurities.
In spacetime dimensions $n+1\geq 4$, we show the existence of solutions of the Einstein vacuum equations which describe asymptotically de Sitter spacetimes with prescribed smooth data at the conformal boundary. This provides a short alternative proof of a special case of a result by Shlapentokh-Rothman and Rodnianski, and generalizes earlier results by Friedrich and Anderson to all dimensions.
We review the q-deformed spin network approach to Topological Quantum Field Theory and apply these methods to produce unitary representations of the braid groups that are dense in the unitary groups. Our methods are rooted in the bracket state sum model for the Jones polynomial. We give our results for a large class of representations based on values for the bracket polynomial that are roots of unity. We make a separate and self-contained study of the quantum universal Fibonacci model in this framework. We apply our results to give quantum algorithms for the computation of the colored Jones polynomials for knots and links, and the Witten-Reshetikhin-Turaev invariant of three manifolds.
In this paper, we consider suitable weak solutions of incompressible Navier--Stokes equations in four spatial dimensions. We prove that the two-dimensional time-space Hausdorff measure of the set of singular points is equal to zero.
The Grigorchuk and Gupta-Sidki groups are natural examples of self-similar finitely generated periodic groups. The author constructed their analogue in the case of restricted Lie algebras of characteristic 2; Shestakov and Zelmanov extended this construction to an arbitrary positive characteristic. It is known that the famous construction of Golod yields finitely generated associative nil-algebras of exponential growth. Recent extensions of that approach have produced finitely generated associative nil-algebras of polynomial and intermediate growth. A further motivation for the paper is the construction of groups of oscillating growth by Kassabov and Pak. For any prime $p$ we construct a family of 3-generated restricted Lie algebras of intermediate oscillating growth. We call them Phoenix algebras because, for infinitely many periods of time, the algebra is "almost dying" in that it has quasi-linear growth, namely the lower Gelfand-Kirillov dimension is one; more precisely, the growth is of type $n \big(\ln^{(q)} \!n\big )^{\kappa}$. On the other hand, for infinitely many $n$ the growth exhibits rather fast intermediate behaviour of type $\exp( n/ (\ln n)^{\lambda})$; during such periods the algebra is "resuscitating". Moreover, the growth function oscillates between these two types of behaviour. These restricted Lie algebras have a nil $p$-mapping.
The heteroepitaxy of III-V semiconductors on silicon is a promising approach for making silicon a photonic platform for on-chip optical interconnects and quantum optical applications. Monolithic integration of both material systems is a long-standing challenge, since different material properties lead to high defect densities in the epitaxial layers. In recent years, however, nanostructures have been shown to be suitable for successfully realising light emitters on silicon, taking advantage of their geometry. Facet edges and sidewalls can minimise or eliminate the formation of dislocations, and due to the reduced contact area, nanostructures are only weakly affected by dislocation networks. Here we demonstrate the potential of indium phosphide quantum dots as efficient light emitters on CMOS-compatible silicon substrates, with luminescence characteristics comparable to mature devices realised on III-V substrates. For the first time, electrically driven single-photon emission on silicon is presented, matching the wavelength range in which silicon avalanche photodiodes have their highest detection efficiency.
The orbits of Weyl groups W(A(n)) of simple A(n) type Lie algebras are reduced to the union of orbits of the Weyl groups of maximal reductive subalgebras of A(n). Matrices transforming points of the orbits of W(A(n)) into points of subalgebra orbits are listed for all cases n<=8 and for the infinite series of algebra-subalgebra pairs A(n) - A(n-k-1) x A(k) x U(1), A(2n) - B(n), A(2n-1) - C(n), A(2n-1) - D(n). Numerous special cases and examples are shown.
The aim of this work is to continue the analysis, started in arXiv:2105.02108, of the dynamics of a point-mass particle $P$ moving in a galaxy with a harmonic biaxial core, at whose center sits a Keplerian attractive center (e.g. a Black Hole). Accordingly, the plane $\mathbb R^2$ is divided into two complementary domains, depending on whether the gravitational effects of the galaxy's mass distribution or of the Black Hole prevail. Thus, solutions alternate arcs of Keplerian hyperbolae with harmonic ellipses; at the interface, the trajectory is refracted according to Snell's law. The model was introduced in arXiv:1501.05577, in view of applications to astrodynamics. In this paper we address the general issue of periodic and quasi-periodic orbits and associated caustics when the domain is a perturbation of the circle, taking advantage of KAM and Aubry-Mather theories.
We study the thermal transport through a Majorana island connected to multiple external quantum wires. In the presence of a large charging energy, we find that the Wiedemann-Franz law is nontrivially violated at low temperature, contrary to what happens for the overscreened Kondo effect and for nontopological junctions. For three wires, we find that the Lorenz ratio is rescaled by a universal factor of 2/3, and we show that this behavior is due to the presence of localized Majorana modes on the island.
We study QCD with two colors and quarks in the fundamental representation at finite baryon density in the limit of light quark masses. In this limit the free energy of this theory reduces to the free energy of a chiral Lagrangian which is based on the symmetries of the microscopic theory. In earlier work this Lagrangian was analyzed at the mean field level and a phase transition to a phase of condensed diquarks was found at a chemical potential of half the diquark mass (which is equal to the pion mass). In this article we analyze this theory at next-to-leading order in chiral perturbation theory. We show that the theory is renormalizable and calculate the next-to-leading order free energy in both phases of the theory. By deriving a Landau-Ginzburg theory for the order parameter we show that the finite one-loop contribution and the next-to-leading order terms in the chiral Lagrangian do not qualitatively change the phase transition. In particular, the critical chemical potential is equal to half the next-to-leading order pion mass, and the phase transition is second order.
We present newly obtained three-dimensional gaseous maps of the Milky Way Galaxy; HI, H$_2$ and total-gas (HI plus H$_2$) maps, which were derived from the HI and $^{12}$CO($J=1$--0) survey data and rotation curves based on the kinematic distance. The HI and H$_2$ face-on maps show that the HI disk is extended to a radius of 15--20 kpc and its outskirt is asymmetric about the Galactic center, while most of the H$_2$ gas is distributed inside the solar circle. The total gas mass within a radius of 30 kpc amounts to $8.0\times 10^9$ M$_\odot$, 89\% and 11\% of which are HI and H$_2$, respectively. The vertical slices show that the outer HI disk is strongly warped and the inner HI and H$_2$ disks are corrugated. The total gas map is advantageous for tracing spiral structure from the inner to the outer disk. Spiral structures such as the Norma-Cygnus, Perseus, Sagittarius-Carina, Scutum-Crux, and Orion arms are more clearly traced in the total gas map than before. All the spiral arms are well explained by logarithmic spirals with pitch angles of $11\degree$--$15\degree$. The molecular fraction of the total gas is high near the Galactic center and decreases with Galactocentric distance. The molecular fraction is also locally enhanced at the spiral arms compared with the inter-arm regions.
Resource allocation is an important issue in cognitive radio systems. It can be done by carrying out negotiation among secondary users. However, significant overhead may be incurred by the negotiation since the negotiation needs to be done frequently due to the rapid change of primary users' activity. In this paper, a channel selection scheme without negotiation is considered for multi-user and multi-channel cognitive radio systems. To avoid collisions incurred by the lack of coordination, each secondary user learns how to select channels according to its experience. Multi-agent reinforcement learning (MARL) is applied in the framework of Q-learning by considering the opponent secondary users as a part of the environment. The dynamics of the Q-learning are illustrated using a Metrick-Polak plot. A rigorous proof of the convergence of Q-learning is provided via the similarity between the Q-learning and Robbins-Monro algorithms, as well as the analysis of convergence of the corresponding ordinary differential equation (via a Lyapunov function). Illustrative examples are given, and the performance of learning is evaluated by numerical simulations.
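As an illustration of the negotiation-free idea, here is a minimal stateless Q-learning sketch for two secondary users and two channels; the reward model (1 for a collision-free slot, 0 otherwise) and the decaying exploration rate are simplifying assumptions for illustration, not the paper's exact setup.

```python
import random

random.seed(0)

N_USERS, N_CHANNELS = 2, 2
ALPHA, T = 0.1, 3000

# One stateless Q-table per secondary user: Q[u][c] estimates the value of
# user u transmitting on channel c; the opponent user is treated simply as
# part of the environment, as in the MARL framework.
Q = [[0.0] * N_CHANNELS for _ in range(N_USERS)]

for t in range(T):
    eps = max(0.05, 1.0 - t / 1000)  # decaying exploration rate
    picks = []
    for u in range(N_USERS):
        if random.random() < eps:
            picks.append(random.randrange(N_CHANNELS))
        else:
            picks.append(max(range(N_CHANNELS), key=lambda c: Q[u][c]))
    for u in range(N_USERS):
        # reward 1 for a collision-free transmission, 0 otherwise
        r = 1.0 if picks.count(picks[u]) == 1 else 0.0
        Q[u][picks[u]] += ALPHA * (r - Q[u][picks[u]])

greedy = [max(range(N_CHANNELS), key=lambda c: Q[u][c]) for u in range(N_USERS)]
print(greedy, [[round(v, 2) for v in row] for row in Q])
```

Because rewards lie in {0, 1} and the update is a convex combination, every Q-value stays in [0, 1]; when the users settle on distinct channels, collisions vanish without any explicit negotiation.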
In the previous paper we proved that the Evans-Vigier definitions of B^{(0)} and {\bf B}^{(3)} may be related {\it not} to magnetic fields but to a 4-vector field. In the present {\it Addendum} it is shown that the terms used in the {\bf B}-Cyclic theorem proposed by M. Evans and J.-P. Vigier may have various transformation properties with respect to Lorentz transformations. Whether the {\bf B}^{(3)} field is a part of a bi-vector (which is equivalent to an antisymmetric second-rank tensor) or a part of a 4-vector depends on the phase factors in the definition of the positive- and negative-frequency solutions of the ({\bf B}, {\bf E}) transverse field. This is closely connected to our considerations of the Bargmann-Wightman-Wigner (Gelfand-Tsetlin-Sokolik) constructs and to Ahluwalia's recent consideration of the phase factor related to gravity. The physical relevance of the proposed constructs is discussed.
We investigate an integrated optical circuit on lithium niobate designed to implement the teleportation-based quantum relay scheme for one-way quantum communication at a telecom wavelength. Such an advanced quantum circuit merges, for the first time, both optical-optical and electro-optical non-linear functions necessary to implement the desired on-chip single qubit teleportation. On the one hand, spontaneous parametric down-conversion is used to produce entangled photon pairs. On the other hand, we take advantage of two photon routers, consisting of electro-optically controllable couplers, to separate the paired photons and to perform a Bell state measurement, respectively. After having validated all the individual functions in the classical regime, we have performed a Hong-Ou-Mandel (HOM) experiment to mimic a one-way quantum communication link. Such a quantum effect, seen as a prerequisite towards achieving teleportation, has been obtained at one of the routers when the chip was coupled to an external single photon source. The two-photon interference pattern shows a net visibility of 80%, which validates the proof of principle of a "quantum relay circuit" for qubits carried by telecom photons. With optimized losses, such a chip could increase the maximal achievable distance of one-way quantum key distribution links by a factor of 1.8. Our approach and results emphasize the high potential of integrated optics on lithium niobate as a key technology for future reconfigurable quantum information manipulation.
We consider the model selection task in the stochastic contextual bandit setting. Suppose we are given a collection of base contextual bandit algorithms. We provide a master algorithm that combines them and achieves the same performance, up to constants, as the best base algorithm would, if it had been run on its own. Our approach only requires that each algorithm satisfy a high probability regret bound. Our procedure is very simple and essentially does the following: for a well chosen sequence of probabilities $(p_{t})_{t\geq 1}$, at each round $t$, it either chooses at random which candidate to follow (with probability $p_{t}$) or compares, at the same internal sample size for each candidate, the cumulative reward of each, and selects the one that wins the comparison (with probability $1-p_{t}$). To the best of our knowledge, our proposal is the first one to be rate-adaptive for a collection of general black-box contextual bandit algorithms: it achieves the same regret rate as the best candidate. We demonstrate the effectiveness of our method with simulation studies.
The TARK conference (Theoretical Aspects of Rationality and Knowledge) is a conference that aims to bring together researchers from a wide variety of fields, including computer science, artificial intelligence, game theory, decision theory, philosophy, logic, linguistics, and cognitive science. Its goal is to further our understanding of interdisciplinary issues involving reasoning about rationality and knowledge. Previous conferences have been held biennially around the world since 1986, on the initiative of Joe Halpern (Cornell University). Topics of interest include, but are not limited to, semantic models for knowledge, belief, awareness and uncertainty, bounded rationality and resource-bounded reasoning, commonsense epistemic reasoning, epistemic logic, epistemic game theory, knowledge and action, applications of reasoning about knowledge and other mental states, belief revision, computational social choice, algorithmic game theory, and foundations of multi-agent systems. Information about TARK, including conference proceedings, is available at http://www.tark.org/. These proceedings contain the papers that have been accepted for presentation at the Nineteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK 2023), held between June 28 and June 30, 2023, at the University of Oxford, United Kingdom. The conference website can be found at https://sites.google.com/view/tark-2023
Novelty detection is a process for distinguishing observations that differ in some respect from the observations that the model is trained on. Novelty detection is one of the fundamental requirements of a good classification or identification system, since the test data sometimes contains observations that were not known at training time. In other words, the novelty class is often not present during the training phase, or not well defined. In light of the above, one-class classifiers and generative methods can efficiently model such problems. However, due to the unavailability of data from the novelty class, training an end-to-end model is a challenging task in itself. Therefore, detecting novel classes in unsupervised and semi-supervised settings is a crucial step in such tasks. In this thesis, we propose several methods to model the novelty detection problem in unsupervised and semi-supervised fashion. The proposed frameworks are applied to different related applications of anomaly and outlier detection tasks. The results show the superiority of our proposed methods compared to the baselines and state-of-the-art methods.
Hate speech has become pervasive in today's digital age. Although there has been considerable research to detect hate speech or generate counter speech to combat hateful views, these approaches still cannot completely eliminate the potential harmful societal consequences of hate speech -- hate speech, even when detected, often cannot be taken down or is not taken down enough; and hate speech unfortunately spreads quickly, often much faster than any generated counter speech. This paper investigates a relatively new yet simple and effective approach of suggesting a rephrasing of potential hate speech content even before the post is made. We show that Large Language Models (LLMs) perform well on this task, outperforming state-of-the-art baselines such as BART-Detox. We develop 4 different prompts based on task description, hate definition, few-shot demonstrations and chain-of-thoughts for comprehensive experiments and conduct experiments on open-source LLMs such as LLaMA-1, LLaMA-2 chat, Vicuna as well as OpenAI's GPT-3.5. We propose various evaluation metrics to measure the efficacy of the generated text and ensure the generated text has reduced hate intensity without drastically changing the semantic meaning of the original text. We find that LLMs prompted with few-shot demonstrations work best at generating acceptable hate-rephrased text with semantic meaning similar to the original text. Overall, we find that GPT-3.5 outperforms the baseline and open-source models for all the different kinds of prompts. We also perform human evaluations and, interestingly, find that the rephrasings generated by GPT-3.5 outperform even the human-generated ground-truth rephrasings in the dataset. We also conduct detailed ablation studies to investigate why LLMs work satisfactorily on this task and conduct a failure analysis to understand the gaps.
Two proof-of-principle experiments towards T1-limited magnetic resonance imaging with NV centers in diamond are demonstrated. First, a large number of Rabi oscillations is measured and it is demonstrated that the hyperfine interaction due to the NV's 14N can be extracted from the beating oscillations. Second, the Rabi beats under V-type microwave excitation of the three hyperfine manifolds are studied experimentally and described theoretically.
In order to illustrate the emergence of Coulomb blockade from coherent quantum phase-slip processes in thin superconducting wires, we propose and theoretically investigate two elementary setups, or "devices". The setups are derived from the Cooper-pair box and the Cooper-pair transistor, so we refer to them as the QPS-box and the QPS-transistor, respectively. We demonstrate that the devices exhibit sensitivity to a charge induced by a gate electrode, this being the main signature of Coulomb blockade. Experimental realization of these devices will unambiguously prove the Coulomb blockade as an effect of the coherence of phase-slip processes. We analyze the emergence of discrete charging in the limit of strong phase-slips. We have found and investigated six distinct regimes that are realized depending on the relation between three characteristic energy scales: the inductive energy, the charging energy, and the phase-slip amplitude. For completeness, we include a brief discussion of dual Josephson-junction devices.
Causal inference seeks to identify cause-and-effect interactions in coupled systems. A recently proposed method by Liang detects causal relations by quantifying the direction and magnitude of information flow between time series. The theoretical formulation of information flow for stochastic dynamical systems provides a general expression and a data-driven statistic for the rate of entropy transfer between different system units. To advance understanding of information flow rate in terms of intuitive concepts and physically meaningful parameters, we investigate statistical properties of the data-driven information flow rate between coupled stochastic processes. We derive relations between the expectation of the information flow rate statistic and properties of the auto- and cross-correlation functions. Thus, we elucidate the dependence of the information flow rate on the analytical properties and characteristic times of the correlation functions. Our analysis provides insight into the influence of the sampling step, the strength of cross-correlations, and the temporal delay of correlations on information flow rate. We support the theoretical results with numerical simulations of correlated Gaussian processes.
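A minimal sketch of the data-driven statistic for a pair of coupled series, following Liang's bivariate formula as we recall it (treat the exact normalization as an assumption); the AR(1) toy system, its coefficients, and the sample size are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy coupled system: x is AR(1) and drives y; y does not feed back into x
n = 20000
x = np.zeros(n)
y = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = 0.8 * x[t] + rng.standard_normal()
    y[t + 1] = 0.6 * y[t] + 0.4 * x[t] + 0.3 * rng.standard_normal()

def liang_flow(x1, x2, dt=1.0):
    """Data-driven estimate of the information flow rate T_{2->1},
    i.e. from series x2 to series x1 (bivariate formula after Liang)."""
    dx1 = (x1[1:] - x1[:-1]) / dt          # finite-difference derivative
    a, b = x1[:-1], x2[:-1]
    C = np.cov(np.vstack([a, b]))
    c11, c12, c22 = C[0, 0], C[0, 1], C[1, 1]
    c1d1 = np.mean((a - a.mean()) * (dx1 - dx1.mean()))
    c2d1 = np.mean((b - b.mean()) * (dx1 - dx1.mean()))
    det = c11**2 * c22 - c11 * c12**2
    return (c11 * c12 * c2d1 - c12**2 * c1d1) / det

t_x_to_y = liang_flow(y, x)   # should dominate: x genuinely drives y
t_y_to_x = liang_flow(x, y)   # should be close to zero
print(round(t_x_to_y, 4), round(t_y_to_x, 4))
```

The asymmetry of the two estimates, rather than the raw cross-correlation (which is symmetric), is what carries the directional causal information.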
The measurement of identified charged-hadron production at mid-rapidity ($|y| < 0.5$) performed with the ALICE experiment is presented for Pb--Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV. The transverse momentum spectra of $\pi^{\pm}$, $K^{\pm}$, $p$ and $\bar{p}$ are measured from 100 MeV/c up to 3 GeV/c for pions, from 200 MeV/c up to 2 GeV/c for kaons and from 300 MeV/c up to 3 GeV/c for protons and antiprotons using the \emph{dE/dx} and \emph{time-of-flight} particle-identification techniques. Preliminary results on charged-hadron production yields and particle ratios are reported as a function of $p_{T}$ and collision centrality. Finally, the results are discussed in terms of hydrodynamics-inspired models and compared with published RHIC data in Au--Au collisions at $\sqrt{s_{\rm NN}} = 200$ GeV and predictions for the LHC.
We prove the existence of solution for a class of $p(x)$-Laplacian equations where the nonlinearity has a critical growth. Here, we consider two cases: the first case involves the situation where the variable exponents are periodic functions. The second one involves the case where the variable exponents are nonperiodic perturbations.
The distribution of counterions and the electrostatic interaction between two similarly charged dielectric slabs is studied in the strong coupling limit. Dielectric inhomogeneities and the discreteness of charge on the slabs have been taken into account. It is found that the dielectric constant difference between the slabs and the environment, and the discreteness of charge on the slabs, have opposing effects on the equilibrium distribution of the counterions. At small inter-slab separations, increasing the dielectric constant difference increases the tendency of the counterions toward the middle of the inter-surface space between the slabs, while the discreteness of charge pushes them to the surfaces of the slabs. In the limit of point charges, independent of the strength of the dielectric inhomogeneity, counterions distribute near the surfaces of the slabs. The interaction between the slabs is attractive at low temperatures and its strength increases with the dielectric constant difference. At room temperature, the slabs may completely attract each other, reach an equilibrium separation, or have two equilibrium separations with a barrier in between, depending on the system parameters.
A restricted dynamics, previously introduced in a kinetic model for relaxation phenomena in linear polymer chains, is used to study the dynamic critical exponent of one-dimensional Ising models. Both the alternating isotopic chain and the alternating-bond chain are considered. In contrast with what occurs for the Glauber dynamics, in these two models the dynamic critical exponent turns out to be the same. The alternating isotopic chain with the restricted dynamics is shown to lead to Nagel scaling for temperatures above some critical value. Further support is given by relating the Nagel scaling to the existence of multiple (simultaneous) relaxation processes, the dynamics apparently not playing the most important role in determining such scaling.
Many types of data from fields including natural language processing, computer vision, and bioinformatics, are well represented by discrete, compositional structures such as trees, sequences, or matchings. Latent structure models are a powerful tool for learning to extract such representations, offering a way to incorporate structural bias, discover insight about the data, and interpret decisions. However, effective training is challenging, as neural networks are typically designed for continuous computation. This text explores three broad strategies for learning with discrete latent structure: continuous relaxation, surrogate gradients, and probabilistic estimation. Our presentation relies on consistent notations for a wide range of models. As such, we reveal many new connections between latent structure learning strategies, showing how most consist of the same small set of fundamental building blocks, but use them differently, leading to substantially different applicability and properties.
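To make the surrogate-gradient strategy concrete, here is a NumPy sketch of the sampling step of the straight-through Gumbel-softmax estimator (only the forward computation; in an autodiff framework the backward pass would flow through the soft relaxation). The temperature value is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_st(logits, tau=0.5):
    """Straight-through Gumbel-softmax: returns (hard one-hot sample,
    soft relaxation).  In an autodiff framework the backward pass would
    use the soft tensor via hard = soft + stop_grad(onehot - soft)."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1) noise
    y = (logits + g) / tau
    soft = np.exp(y - y.max())        # numerically stable softmax
    soft /= soft.sum()
    hard = np.zeros_like(soft)
    hard[np.argmax(soft)] = 1.0       # discrete structure used in the forward pass
    return hard, soft

hard, soft = gumbel_softmax_st(np.array([1.0, 0.0, -1.0]))
print(hard, soft.round(3))
```

The forward pass emits a genuinely discrete one-hot vector, while the gradient would be taken with respect to the continuous `soft` vector -- the mismatch between the two is exactly the bias/variance trade-off these estimators navigate.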
We investigate the effect of bias on the formation and dynamics of political parties in the bounded confidence model. For weak bias, we quantify the change in average opinion and potential dispersion and decrease in party size. For nonlinear bias modeling self-incitement, we establish coherent drifting motion of parties on a background of uniform opinion distribution for biases below a critical threshold where parties dissolve. Technically, we use geometric singular perturbation theory to derive drift speeds, we rely on a nonlocal center manifold analysis to construct drifting parties near threshold, and we implement numerical continuation in a forward-backward delay equation to connect asymptotic regimes.
Symmetry is an important aesthetic criterion in graph drawing and network visualisation. Symmetric graph drawings aim to faithfully represent automorphisms of graphs as geometric symmetries in a drawing. In this paper, we design and implement a framework for quality metrics that measure symmetry, that is, how faithfully a drawing of a graph displays automorphisms as geometric symmetries. The quality metrics are based on geometry (i.e. Euclidean distance) as well as mathematical group theory (i.e. orbits of automorphisms). More specifically, we define two varieties of symmetry quality metrics: (1) for displaying a single automorphism as a symmetry (axial or rotational) and (2) for displaying a group of automorphisms (cyclic or dihedral). We also present algorithms to compute the symmetry quality metrics in O(n log n) time for rotational symmetry and axial symmetry. We validate our symmetry quality metrics using deformation experiments. We then use the metrics to evaluate a number of established graph drawing layouts to compare how faithfully they display automorphisms of a graph as geometric symmetries.
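A toy version of the rotational variant of such a metric might look as follows; the max-distance aggregation and the origin-centred rotation are our simplifying assumptions for illustration, not the paper's exact definition.

```python
import math

def rotational_symmetry_error(pos, sigma, k):
    """Max Euclidean distance between the position of node v rotated by
    2*pi/k about the origin and the position of sigma(v).  Zero means the
    drawing displays the automorphism sigma perfectly as a k-fold rotation."""
    c, s = math.cos(2 * math.pi / k), math.sin(2 * math.pi / k)
    err = 0.0
    for v, (x, y) in pos.items():
        rx, ry = c * x - s * y, s * x + c * y   # rotate the drawing
        px, py = pos[sigma[v]]                  # where the automorphism sends v
        err = max(err, math.hypot(rx - px, ry - py))
    return err

# a 4-cycle drawn with exact 4-fold symmetry; sigma rotates the labels
pos = {i: (math.cos(i * math.pi / 2), math.sin(i * math.pi / 2)) for i in range(4)}
sigma = {0: 1, 1: 2, 2: 3, 3: 0}
print(rotational_symmetry_error(pos, sigma, 4))

# perturbing one node breaks the symmetry and the error becomes positive
pos2 = dict(pos)
pos2[0] = (1.1, 0.0)
print(rotational_symmetry_error(pos2, sigma, 4))
```

A deformation experiment in this spirit moves nodes away from their symmetric positions and checks that the metric grows monotonically with the perturbation.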
The aim of this article is to assess the ability of chemical shift surfaces to provide structural information on conformational distributions of disaccharides in glassy solid state. The validity of the general method leading to a simulation of inhomogeneous 13C chemical shift distributions is discussed in detail. In particular, a proper consideration of extrema and saddle points of the chemical shift map correctly accounts for the observed discontinuities in the experimental CPMAS spectra. Provided that these basic requirements are met, DFT/GIAO chemical shift maps calculated on relaxed conformations lead to a very satisfactory description of the experimental lineshapes. On solid-state trehalose as a model of amorphous disaccharide, this simulation approach defines unambiguously the most populated sugar conformation in the glass, and can help in discriminating the validity of different models of intramolecular energy landscape. Application to other molecular systems with broad conformational populations is foreseen to produce a larger dependence of the calculated chemical shift distribution on the conformational map.
This memo describes the methods used to track long-term gain variations in the NuSTAR detectors. It builds on the analysis presented in Madsen et al. (2015), which used the deployable calibration source to measure the gain drift in the NuSTAR CdZnTe detectors. This is intended to be a live document that is periodically updated as new entries are required in the NuSTAR gain CALDB files. This document covers analysis through early 2022 and the gain v011 CALDB file released in version 20240226.
The Kronecker-factored Approximate Curvature (K-FAC) method is a highly efficient second-order optimizer for deep learning. Its training time is less than that of SGD (or other first-order methods) at the same accuracy in many large-scale problems. The key idea of K-FAC is to approximate the Fisher information matrix (FIM) as a block-diagonal matrix, where each block is inverted cheaply via small Kronecker factors. In this short note, we present CG-FAC, a new iterative K-FAC algorithm that uses the conjugate gradient method to approximate the natural gradient. CG-FAC is matrix-free: there is no need to form the FIM explicitly, nor to generate the Kronecker factors A and G. We prove that the time and memory complexity of iterative CG-FAC is much less than that of the standard K-FAC algorithm.
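The matrix-free idea can be sketched as follows: solve (F + λI)v = g by conjugate gradients using only Fisher-vector products, with the empirical Fisher represented implicitly through per-sample gradients. The toy dimensions, damping value, and Gaussian data are illustrative assumptions, not the note's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, dim, damping = 200, 5, 1e-2
J = rng.standard_normal((n_samples, dim))   # stand-in per-sample gradients
g = rng.standard_normal(dim)                # stand-in mean gradient

def fvp(v):
    """Fisher-vector product (F + damping*I) v without forming F:
    the empirical Fisher is J^T J / n_samples, applied as two matvecs."""
    return J.T @ (J @ v) / n_samples + damping * v

def cg(matvec, b, iters=50, tol=1e-12):
    """Standard conjugate gradient; needs only matrix-vector products."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

nat_grad = cg(fvp, g)                       # approximate natural gradient step
F = J.T @ J / n_samples + damping * np.eye(dim)  # dense check, for validation only
print(np.allclose(nat_grad, np.linalg.solve(F, g), atol=1e-4))
```

The dense matrix `F` is built here only to validate the result; the point of the matrix-free scheme is that in practice `fvp` is all that is ever evaluated.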
Motion tracking is a challenge the visual system has to solve by reading out the retinal population. Here we recorded a large population of ganglion cells in a dense patch of salamander and guinea pig retinas while displaying a bar moving diffusively. We show that the bar position can be reconstructed from retinal activity with a precision in the hyperacuity regime using a linear decoder acting on 100+ cells. The classical view would have suggested that the firing rates of the cells form a moving hill of activity tracking the bar's position. Instead, we found that ganglion cells fired sparsely over an area much larger than predicted by their receptive fields, so that the neural image did not track the bar. This highly redundant organization allows for diverse collections of ganglion cells to represent high-accuracy motion information in a form easily read out by downstream neural circuits.
The goal of classification is to correctly assign labels to unseen samples. However, most methods misclassify samples with unseen labels and assign them to one of the known classes. Open-Set Classification (OSC) algorithms aim to maximize both closed and open-set recognition capabilities. Recent studies showed the utility of such algorithms on small-scale data sets, but limited experimentation makes it difficult to assess their performance in real-world problems. Here, we provide a comprehensive comparison of various OSC algorithms, including training-based (SoftMax, Garbage, EOS) and post-processing methods (Maximum SoftMax Scores, Maximum Logit Scores, OpenMax, EVM, PROSER), the latter applied to features from the former. We perform our evaluation on three large-scale protocols that mimic real-world challenges, where we train on known and negative open-set samples, and test on known and unknown instances. Our results show that EOS helps to improve the performance of almost all post-processing algorithms. In particular, OpenMax and PROSER are able to exploit better-trained networks, demonstrating the utility of hybrid models. However, while most algorithms work well on negative test samples -- samples of open-set classes seen during training -- they tend to perform poorly when tested on samples of previously unseen unknown classes, especially in challenging conditions.
The Cartwheel, the archetypal ring galaxy with strong star-formation activity concentrated in a peculiar annular structure, has been imaged with the CHANDRA ACIS-S instrument. We present here preliminary results that confirm the high luminosity detected earlier with the ROSAT HRI. Many bright isolated sources are visible in the star-forming ring. A diffuse component with a luminosity of the order of 10^40 erg/s is also detected.
Monogamy inequalities for the way bipartite EPR steering can be distributed among N systems are derived. One set of inequalities is based on witnesses with two measurement settings, and may be used to demonstrate correlation of outcomes between two parties, that cannot be shared with more parties. It is shown that the monogamy for steering is directional. Two parties cannot independently demonstrate steering of a third system, using the same two-setting steering witness, but it is possible for one party to steer two independent systems. This result explains the monogamy of two-setting Bell inequality violations, and the sensitivity of the continuous variable (CV) EPR criterion to losses on the steering party. We generalise to m settings. A second type of monogamy relation gives the quantitative amount of sharing possible, when the number of parties is less than or equal to m, and takes a form similar to the Coffman-Kundu-Wootters (CKW) relation for entanglement. The results enable characterisation of the tripartite steering for CV Gaussian systems and qubit GHZ and W states.
If there is a topologically locally constant family of smooth algebraic varieties together with an admissible normal function on the total space, then the latter is constant on any fiber if this holds on some fiber. Combined with spreading out, it implies for instance that an irreducible component of the zero locus of an admissible normal function is defined over k if it has a k-rational point where k is an algebraically closed subfield of the complex number field with finite transcendence degree. This generalizes a result of F. Charles that was shown in case the normal function is associated with an algebraic cycle defined over k.
Public procurement refers to the purchase by public sector entities, such as government departments or local authorities, of Services, Goods, or Works. It accounts for a significant share of OECD countries' expenditures. However, while governments are expected to execute these purchases as efficiently as possible, there is a lack of methodologies for an adequate comparison of procurement activity between institutions at different scales, which represents a challenge for policymakers and academics. Here, we propose using methods from the urban scaling laws literature to study public procurement activity among 278 Portuguese municipalities between 2011 and 2018. We find that public procurement expenditure scales sub-linearly with population size, indicating an economy of scale for public spending as cities increase their population size. Moreover, when looking at the municipal Scale-Adjusted Indicators (the deviations from the scaling laws) by contract type -- Works, Goods, and Services -- we obtain a new local characterization of municipalities based on the similarity of procurement activity. These results make up a framework for quantitatively studying local public expenditure, providing policymakers with a more appropriate ground for comparative analysis.
With the continuous and vast increase in the amount of data in our digital world, it has been acknowledged that the number of knowledgeable data scientists cannot scale to address these challenges. Thus, there is a crucial need for automating the process of building good machine learning models. In the last few years, several techniques and frameworks have been introduced to tackle the challenge of automating the process of Combined Algorithm Selection and Hyper-parameter tuning (CASH) in the machine learning domain. The main aim of these techniques is to reduce the role of the human in the loop and fill the gap for non-expert machine learning users by playing the role of the domain expert. In this paper, we present a comprehensive survey of the state-of-the-art efforts in tackling the CASH problem. In addition, we highlight the research work on automating the other steps of the full complex machine learning pipeline (AutoML), from data understanding to model deployment. Furthermore, we provide comprehensive coverage of the various tools and frameworks that have been introduced in this domain. Finally, we discuss some of the research directions and open challenges that need to be addressed in order to achieve the vision and goals of the AutoML process.
This text is the draft of Julius Wess' contribution to the Proceedings of SUSY07 (KIT Karlsruhe) and to "Supersymmetry at the dawn of the LHC" in Eur. Phys. J. C59/2. The manuscript, which Wess could not finish before his death, has been edited for the publication.
Timing observations of 40 mostly young pulsars using the ATNF Parkes radio telescope between 1990 January and 1998 December are reported. In total, 20 previously unreported glitches and ten other glitches were detected in 11 pulsars. These included 12 glitches in PSR J1341$-$6220, corresponding to a glitch rate of 1.5 glitches per year. We also detected the largest known glitch, in PSR J1614$-$5047, with $\Delta\nu_g/\nu \approx 6.5 \times 10^{-6}$ where $\nu = 1/P$ is the pulse frequency. Glitch parameters were determined both by extrapolating timing solutions to inter-glitch intervals and by phase-coherent timing fits across the glitch(es). Analysis of glitch parameters, both from this work and from previously published results, shows that most glitches have a fractional amplitude $\Delta\nu_g/\nu$ of between $10^{-8}$ and $10^{-6}$. There is no consistent relationship between glitch amplitude and the time since the previous glitch or the time to the following glitch, either for the ensemble or for individual pulsars. As previously recognised, the largest glitch activity is seen in pulsars with ages of order 10$^4$ years, but for about 30 per cent of such pulsars, no glitches were detected in the 8-year data span. There is some evidence for a new type of timing irregularity in which there is a significant increase in pulse frequency over a few days, accompanied by a decrease in the magnitude of the slowdown rate. Fits of an exponential recovery to post-glitch data show that for most older pulsars, only a small fraction of the glitch decays. In some younger pulsars, a large fraction of the glitch decays, but in others, there is very little decay.
A central question in the field of graphene-related research is how graphene behaves when it is patterned at the nanometer scale with different edge geometries. Perhaps the most fundamental shape relevant to this question is the graphene nanoribbon (GNR), a narrow strip of graphene that can have different chirality depending on the angle at which it is cut. Such GNRs have been predicted to exhibit a wide range of behaviour (depending on their chirality and width) that includes tunable energy gaps and the presence of unique one-dimensional (1D) edge states with unusual magnetic structure. Most GNRs explored experimentally up to now have been characterized via electrical conductivity, leaving the critical relationship between electronic structure and local atomic geometry unclear (especially at edges). Here we present a sub-nm-resolved scanning tunnelling microscopy (STM) and spectroscopy (STS) study of GNRs that allows us to examine how GNR electronic structure depends on the chirality of atomically well-defined GNR edges. The GNRs used here were chemically synthesized via carbon nanotube (CNT) unzipping methods that allow flexible variation of GNR width, length, chirality, and substrate. Our STS measurements reveal the presence of 1D GNR edge states whose spatial characteristics closely match theoretical expectations for GNRs of similar width and chirality. We observe width-dependent splitting in the GNR edge state energy bands, providing compelling evidence of their magnetic nature. These results confirm the novel electronic behaviour predicted for GNRs with atomically clean edges, and thus open the door to a whole new area of applications exploiting the unique magnetoelectronic properties of chiral GNRs.
We present the Spectral Image Typer (SPIT), a convolutional neural network (CNN) built to classify spectral images. In contrast to traditional, rules-based algorithms which rely on metadata provided with the image (e.g. header cards), SPIT is trained solely on the image data. We have trained SPIT on 2,004 human-classified images taken with the Kast spectrometer at Lick Observatory with types of Bias, Arc, Flat, Science and Standard. We include several pre-processing steps (scaling, trimming) motivated by human practice, and we also expanded the training set to balance between image types and increase diversity. The algorithm achieved an accuracy of 98.7% on the held-out validation set and an accuracy of 98.7% on the test set of images. We then adopt a slightly modified classification scheme to improve robustness at a modestly reduced cost in accuracy (98.2%). The majority of mis-classifications are Science frames with very faint sources confused with Arc images (e.g. faint emission-line galaxies) or Science frames with very bright sources confused with Standard stars. These are errors that even a well-trained human is prone to make. Future work will increase the training set from Kast, will include additional optical and near-IR instruments, and may expand the CNN architecture complexity. We are now incorporating SPIT into the PYPIT data reduction pipeline (DRP) and are willing to facilitate its inclusion in other DRPs.
A pressure-induced topological quantum phase transition has been theoretically predicted for the semiconductor BiTeI with giant Rashba spin splitting. In this work, the evolution of the electrical transport properties in BiTeI and BiTeBr is investigated under high pressure. The pressure-dependent resistivity in a wide temperature range passes through a minimum at around 3 GPa, indicating the predicted transition in BiTeI. Superconductivity is observed in both BiTeI and BiTeBr while the resistivity at higher temperatures still exhibits semiconducting behavior. Theoretical calculations suggest that the superconductivity may develop from the multi-valley semiconductor phase. The superconducting transition temperature Tc increases with applied pressure and reaches a maximum value of 5.2 K at 23.5 GPa for BiTeI (4.8 K at 31.7 GPa for BiTeBr), followed by a slow decrease. Our results demonstrate that BiTeX (X = I, Br) compounds with non-trivial topology of electronic states display new ground states upon compression.
Reinforcement learning (RL) makes it possible to solve complex tasks, such as Go, often with stronger performance than humans. However, the learned behaviors are usually fixed to specific tasks and unable to adapt to different contexts. Here we consider the case of adapting RL agents to different time restrictions, such as finishing a task within a given time limit that might change from one task execution to the next. We define such problems as Time Adaptive Markov Decision Processes and introduce two model-free, value-based algorithms: the Independent Gamma-Ensemble and the n-Step Ensemble. In contrast to classical approaches, they allow a zero-shot adaptation between different time restrictions. The proposed approaches represent general mechanisms to handle time adaptive tasks, making them compatible with many existing RL methods, algorithms, and scenarios.
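The ensemble idea above lends itself to a compact tabular illustration. The sketch below is only one reading of the Gamma-Ensemble mechanism -- one Q-table per discount factor, all heads trained from the same stream of transitions -- on a hypothetical chain environment; the environment, hyperparameters, and behaviour policy are assumptions for illustration, not the paper's setup.

```python
import random

def train_gamma_ensemble(gammas=(0.5, 0.9, 0.99), n_states=6,
                         episodes=2000, alpha=0.2, eps=0.2, seed=0):
    """Tabular Q-learning with one Q-table per discount factor.

    Hypothetical environment: a chain of states 0..n_states-1 with
    actions {0: left, 1: right} and reward 1 only on reaching the
    terminal right end.  Every head learns off-policy from the same
    transitions, so training cost is shared across discount factors.
    """
    rng = random.Random(seed)
    goal = n_states - 1
    Q = {g: [[0.0, 0.0] for _ in range(n_states)] for g in gammas}
    g_behave = gammas[-1]  # behaviour policy follows the largest gamma
    for _ in range(episodes):
        s = 0
        for _ in range(4 * n_states):
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if Q[g_behave][s][0] > Q[g_behave][s][1] else 1
            s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
            r, done = (1.0, True) if s2 == goal else (0.0, False)
            for g in gammas:  # each head gets its own Bellman target
                target = r if done else r + g * max(Q[g][s2])
                Q[g][s][a] += alpha * (target - Q[g][s][a])
            if done:
                break
            s = s2
    return Q
```

Because the terminal reward is 5 transitions away from state 0, the learned values of "go right" at the start state should approach $\gamma^4$ for each head, so heads with larger $\gamma$ value the distant goal more -- the property a time-adaptive agent could exploit when choosing a head for a given time budget.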
M-theory compactified on a $G_2$ manifold with resolved $E_8$ singularities realizes 4d $\mathcal{N} = 1$ supersymmetric gauge theories coupled to gravity with three families of Standard Model fermions. Beginning with one $E_8$ singularity, three fermion families emerge when $E_8$ is broken by geometric engineering deformations to a smaller subgroup of equal rank. In this paper, we use the local geometry of the theory to explain the origin of the three families and their mass hierarchy. We linearize the blowing-up of 2-cycles associated with resolving $E_8$ singularities. After imposing explicit constraints on the effectively stabilized moduli, we arrive at Yukawa couplings for the quarks and leptons. We fit the high-scale Yukawa couplings approximately, which results in quark masses agreeing reasonably well with the observations, implying that the experimental hierarchy of the masses is achievable within this framework. The hierarchy separating the top quark from the charm and up is a stringy effect, while the splitting of the charm and up also depends on the Higgs sector. The Higgs sector cannot be reduced to having a single vev; all three vevs must be non-zero. Three extra $U(1)$'s survive to the low scale but are not massless, so Z' states are motivated to occur in the spectrum, but may be massive.
A2319 is a massive, merging galaxy cluster with a previously detected radio halo that roughly follows the X-ray emitting gas. We present the results from recent observations of A2319 at 20 cm with the Jansky Very Large Array (VLA) and a re-analysis of the X-ray observations from XMM-Newton, to investigate the interactions between the thermal and nonthermal components of the ICM. We confirm previous reports of an X-ray cold front, and report on the discovery of a distinct core to the radio halo, 800 kpc in extent, that is strikingly similar in morphology to the X-ray emission, and drops sharply in brightness at the cold front. We detect additional radio emission trailing off from the core, which blends smoothly into the 2 Mpc halo detected with the Green Bank Telescope (GBT; Farnsworth et al., 2013). We speculate on the possible mechanisms for such a two-component radio halo, with sloshing playing a dominant role in the core. By directly comparing the X-ray and radio emission, we find that a hadronic origin for the cosmic ray electrons responsible for the radio halo would require a magnetic field and/or cosmic ray proton distribution that increases with radial distance from the cluster center, and is therefore disfavored.
In this paper we consider the problem of minimizing functionals of the form $E(u)=\int_B f(x,\nabla u) \,dx$ in a suitably prepared class of incompressible, planar maps $u: B \rightarrow \mathbb{R}^2$. Here, $B$ is the unit disk and $f(x,\xi)$ is quadratic and convex in $\xi$. It is shown that if $u$ is a stationary point of $E$ in a sense that is made clear in the paper, then $u$ is a unique global minimizer of $E(u)$ provided the gradient of the corresponding pressure satisfies a suitable smallness condition. We apply this result to construct a non-autonomous, uniformly convex functional $f(x,\xi)$, depending smoothly on $\xi$ but discontinuously on $x$, whose unique global minimizer is the so-called $N$-covering map, which is Lipschitz but not $C^1$.
We numerically explore the pasta structures and properties of low-density nuclear matter without any assumption on the geometry. We observe conventional pasta structures, while a mixture of the pasta structures appears as a metastable state at some transient densities. We also discuss the lattice structure of droplets.
We investigate the influence of an exotic fluid component ("quintessence") on the angular size-redshift relation for distant extragalactic sources. Particular emphasis is given to the redshift $z_{m}$ at which the angular size takes its minimal value. We derive an analytical closed form which determines how $z_m$ depends on the parameter of the equation of state describing the exotic component. The results for a flat model dominated by a "quintessence" are compared in detail with the ones for the standard open model dominated by cold dark matter. Some consequences of systematic evolutionary effects on the values of $z_{m}$ are also briefly discussed. It is argued that the critical redshift, for all practical purposes, may be completely removed if such effects are taken into account.
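The critical redshift $z_m$ discussed above can also be located numerically. The sketch below is an illustrative grid search, not the paper's closed-form result: it maximizes the angular-diameter distance $D_A(z)$ (whose peak coincides with the angular-size minimum for a fixed proper length) in a flat model containing matter plus a constant-$w$ "quintessence" component. In the Einstein-de Sitter limit ($\Omega_m = 1$) it should recover the textbook value $z_m = 5/4$.

```python
import numpy as np

def z_min_angular_size(omega_m=1.0, w=-1.0, zmax=6.0, nz=6000):
    """Redshift z_m at which the angular size of a fixed proper length
    is minimal, i.e. where the angular-diameter distance D_A(z) peaks,
    for a flat FLRW model with matter and a constant-w component of
    density parameter 1 - omega_m.  Distances are in units of c/H0."""
    omega_q = 1.0 - omega_m
    z = np.linspace(0.0, zmax, nz)
    E = np.sqrt(omega_m * (1 + z) ** 3
                + omega_q * (1 + z) ** (3 * (1 + w)))
    # comoving distance via a cumulative trapezoid rule
    dc = np.concatenate(
        ([0.0], np.cumsum(0.5 * (1 / E[1:] + 1 / E[:-1]) * np.diff(z))))
    d_a = dc / (1 + z)          # flat geometry: D_A = D_C / (1+z)
    return float(z[np.argmax(d_a)])
```

A quick sanity check of the qualitative claim: relative to the matter-dominated case, a $w = -1$ "quintessence" with $\Omega_m = 0.3$ pushes $z_m$ to noticeably higher redshift.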
The evolution of the multiplicity distribution of a species which undergoes chemical reactions can be described with the help of a master equation. We study the master equation at a fixed temperature, because we want to know how fast different moments of the multiplicity distribution approach their equilibrium values. We particularly look at the 3rd and 4th factorial moments and their equilibrium values, from which central moments, cumulants, and their ratios can be calculated. Then we study the situation in which the temperature of the system decreases. We find that in the non-equilibrium state, higher factorial moments differ more from their equilibrium values than the lower moments, and that the behaviour of a combination of central moments depends on the combination we choose. If one chooses to determine the chemical freeze-out temperature from the measured values of higher moments, these effects might jeopardise the correctness of the extracted value.
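As a toy version of the fixed-temperature analysis, one can integrate the simplest creation-annihilation master equation, whose stationary state is a Poisson distribution, and read off factorial moments $F_k = \langle n(n-1)\cdots(n-k+1)\rangle$, which equal $\lambda^k$ in equilibrium. The rates below are hypothetical stand-ins, not the paper's specific reaction network.

```python
import numpy as np

def evolve_master_equation(eps=2.0, theta=1.0, nmax=60,
                           dt=1e-3, t_end=10.0):
    """Euler integration of the birth-death master equation
        dP_n/dt = eps*(P_{n-1} - P_n) + theta*((n+1)*P_{n+1} - n*P_n),
    whose stationary state is Poisson with mean lambda = eps/theta.
    Starts from P_0 = 1 (no particles)."""
    P = np.zeros(nmax + 1)
    P[0] = 1.0
    n = np.arange(nmax + 1)
    for _ in range(int(t_end / dt)):
        gain = np.zeros_like(P)
        gain[1:] = eps * P[:-1]               # creation: n-1 -> n
        gain[:-1] += theta * n[1:] * P[1:]    # annihilation: n+1 -> n
        loss = (eps + theta * n) * P
        P = P + dt * (gain - loss)
    return P

def factorial_moment(P, k):
    """F_k = sum_n n(n-1)...(n-k+1) P_n."""
    n = np.arange(len(P), dtype=float)
    w = np.ones_like(n)
    for j in range(k):
        w *= np.clip(n - j, 0.0, None)
    return float(np.sum(w * P))
```

With $\epsilon = 2$, $\theta = 1$ the equilibrium mean is $\lambda = 2$, so after a long evolution $F_1$, $F_2$, $F_3$ should approach $2$, $4$, $8$; evolving for a shorter time and comparing relative deviations illustrates that higher moments equilibrate more slowly, the effect discussed in the abstract.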
Objective: To derive a closed-form analytical solution to the swing equation describing the power system dynamics, which is a nonlinear second order differential equation. Existing challenges: No analytical solution to the swing equation has been identified, due to the complex nature of power systems. Two major approaches are pursued for stability assessments of systems: (1) computationally simple models based on physically unacceptable assumptions, and (2) digital simulations with high computational costs. Motivation: The motion of the rotor angle that the swing equation describes is a vector function. Often, a simple form of the physical laws is revealed by coordinate transformation. Methods: The study included the formulation of the swing equation in the Cartesian coordinate system, which is different from conventional approaches that describe the equation in the polar coordinate system. Based on the properties and operational conditions of electric power grids referred to in the literature, we identified the swing equation in the Cartesian coordinate system and derived an analytical solution within a validity region. Results: The estimated results from the analytical solution derived in this study agree with the results using conventional methods, which indicates the derived analytical solution is correct. Conclusion: An analytical solution to the swing equation is derived without unphysical assumptions, and the closed-form solution correctly estimates the dynamics after a fault occurs.
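For context, the "digital simulation" baseline mentioned above amounts to a few lines of numerical integration of the classical undamped swing equation; a closed-form solution would be validated against output of this kind. The parameter values below (inertia constant, mechanical and maximum electrical power) are illustrative assumptions, not taken from the paper.

```python
import math

def swing_trajectory(delta0=0.5, omega0=0.0, p_m=0.8, p_max=1.8,
                     h=5.0, f_s=60.0, dt=1e-4, t_end=1.0):
    """RK4 integration of the classical (undamped) swing equation
        d^2(delta)/dt^2 = (pi*f_s/h) * (p_m - p_max*sin(delta)),
    with delta the rotor angle in rad, h the inertia constant in s,
    and powers in per-unit.  Returns a list of (t, delta) samples."""
    k = math.pi * f_s / h

    def f(d, w):  # state derivative: (d', w')
        return w, k * (p_m - p_max * math.sin(d))

    d, w = delta0, omega0
    traj = [(0.0, d)]
    for i in range(1, int(t_end / dt) + 1):
        k1 = f(d, w)
        k2 = f(d + 0.5 * dt * k1[0], w + 0.5 * dt * k1[1])
        k3 = f(d + 0.5 * dt * k2[0], w + 0.5 * dt * k2[1])
        k4 = f(d + dt * k3[0], w + dt * k3[1])
        d += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        w += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        traj.append((i * dt, d))
    return traj
```

Started near the stable equilibrium $\delta^* = \arcsin(p_m/p_\mathrm{max})$, the rotor angle should oscillate around $\delta^*$ with bounded amplitude, which is the behaviour any analytical solution must reproduce inside its validity region.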
Language models (LMs) play an important role in large vocabulary continuous speech recognition (LVCSR). However, traditional language models only predict the next single word given the history, while consecutive predictions over a sequence of words are usually demanded and useful in LVCSR. The mismatch between single-word prediction during training and long-term sequence prediction at inference may lead to performance degradation. In this paper, a novel enhanced long short-term memory (LSTM) LM using a future vector is proposed. In addition to the given history, the rest of the sequence is also embedded by future vectors. This future vector can be incorporated into the LSTM LM, so it has the ability to model much longer-term sequence-level information. Experiments show that the proposed LSTM LM achieves better BLEU scores for long-term sequence prediction. For speech recognition rescoring, although the proposed LSTM LM obtains only slight gains on its own, it appears strongly complementary to the conventional LSTM LM: rescoring with both the new and conventional LSTM LMs achieves a very large improvement in word error rate.
It is well-known that natural axiomatic theories are pre-well-ordered by logical strength, according to various characterizations of logical strength such as consistency strength and inclusion of $\Pi^0_1$ theorems. Though these notions of logical strength coincide for natural theories, they are not generally equivalent. We study analogues of these notions -- such as $\Pi^1_1$-reflection strength and inclusion of $\Pi^1_1$ theorems -- in the presence of an oracle for $\Sigma^1_1$ truths. In this context these notions coincide; moreover, we get genuine pre-well-orderings of axiomatic theories and may drop the non-mathematical quantification over "natural" theories.
We investigate the dynamics of electrons in the vicinity of the Anderson transition in $d=3$ dimensions. Using the exact eigenstates from a numerical diagonalization, a number of quantities related to the critical behavior of the diffusion function are obtained. The relation $\eta = d-D_{2}$ between the correlation dimension $D_{2}$ of the multifractal eigenstates and the exponent $\eta$ which enters into correlation functions is verified. Numerically, we have $\eta\approx 1.3$. Implications of critical dynamics for experiments are predicted. We investigate the long-time behavior of the motion of a wave packet. Furthermore, electron-electron and electron-phonon scattering rates are calculated. For the latter, we predict a change of the temperature dependence for low $T$ due to $\eta$. The electron-electron scattering rate is found to be linear in $T$ and depends on the dimensionless conductance at the critical point.
Accurate control of qubits is the central requirement for building functional quantum processors. For the current superconducting quantum processor, high-fidelity control of qubits is mainly based on independently calibrated microwave pulses, which could differ from each other in frequencies, amplitudes, and phases. With this control strategy, the required physical resources could become a challenge, especially when scaling up to large-scale quantum processors is considered. Inspired by Kane's proposal for spin-based quantum computing, here, we explore theoretically the possibility of baseband flux control of superconducting qubits with only shared and always-on microwave drives. In our strategy, qubits are by default far detuned from the drive during system idle periods, qubit readout and baseband flux-controlled two-qubit gates can thus be realized with minimal impacts from the always-on drive. By contrast, during working periods, qubits are tuned on resonance with the drive and single-qubit gates can be realized. Therefore, universal qubit control can be achieved with only baseband flux pulses and always-on shared microwave drives. We apply this strategy to the qubit architecture where tunable qubits are coupled via a tunable coupler, and the analysis shows that high-fidelity qubit control is possible. Besides, the baseband control strategy needs fewer physical resources, such as control electronics and cooling power in cryogenic systems, than that of microwave control. More importantly, the flexibility of baseband flux control could be employed for addressing the non-uniformity issue of superconducting qubits, potentially allowing the realization of multiplexing and cross-bar technologies and thus controlling large numbers of qubits with fewer control lines. We thus expect that baseband control with shared microwave drives can help build large-scale superconducting quantum processors.
We present a comprehensive analysis of the shape of dark matter (DM) halos in a sample of 25 Milky Way-like galaxies in the TNG50 simulation. Using an Enclosed Volume Iterative Method (EVIM), we infer an oblate-to-triaxial shape for the DM halo with a median $T \simeq 0.24$. We group DM halos in 3 different categories. Simple halos (32% of the population) establish principal axes whose ordering in magnitude does not change with radius and whose orientations are almost fixed throughout the halo. Twisted halos (32% of the population) experience gradual rotations throughout their radial profiles. Finally, stretched halos (36% of the population) demonstrate a stretching in their principal axes lengths, where the ordering of different eigenvalues changes with radius. Subsequently, the halo experiences a "rotation" of $\sim$90 deg where the stretching occurs. Visualizing the 3D ellipsoid of each halo, we report, for the first time, signs of a re-orienting ellipsoid in twisted and stretched halos. We examine the impact of baryonic physics on DM halo shape through a comparison to dark matter only (DMO) simulations, which suggest a triaxial (prolate) halo. We analyze the impact of substructure on DM halo shape in both hydro and DMO simulations and confirm that their impacts are subdominant. We study the distribution of satellites in our sample. In simple and twisted halos, the angle between the satellites' angular momentum and the galaxy's angular momentum grows with radius. However, stretched halos show a flat distribution of angles. Overlaying our theoretical outcome on the observational results presented in the literature establishes a fair agreement.
The phase behaviour of a single large semiflexible polymer immersed in a suspension of spherical particles is studied. All interactions are simple excluded volume interactions and the diameter of the spherical particles is an order of magnitude larger than the diameter of the polymer. The spherical particles induce a quite long ranged depletion attraction between the segments of the polymer and this induces a continuous coil-globule transition in the polymer. This behaviour gives an indication of the condensing effect of macromolecular crowding on DNA.
We prove that the Birkhoff pointwise ergodic theorem and the Oseledets multiplicative ergodic theorem hold for every flat surface in almost every direction. The proofs rely on the strong law of large numbers, and on recent rigidity results for the action of the upper triangular subgroup of SL(2,R) on the moduli space of flat surfaces. Most of the results also use a theorem about continuity of splittings of the Kontsevich-Zorich cocycle recently proved by S. Filip.
Aggregation of heating, ventilation, and air conditioning (HVAC) loads can provide reserves to absorb volatile renewable energy, especially solar photo-voltaic (PV) generation. In this paper, we determine HVAC control schedules under uncertain PV generation, using a distributionally robust chance-constrained (DRCC) building load control model under two typical ambiguity sets: the moment-based and Wasserstein ambiguity sets. We derive mixed-integer linear programming (MILP) reformulations for DRCC problems under both sets. Especially, for the Wasserstein ambiguity set, we utilize the right-hand side (RHS) uncertainty to derive a more compact MILP reformulation than the commonly known MILP reformulations with big-M constants. All the results also apply to general individual chance constraints with RHS uncertainty. Furthermore, we propose an adjustable chance-constrained variant to achieve a trade-off between the operational risk and costs. We derive MILP reformulations under the Wasserstein ambiguity set and second-order conic programming (SOCP) reformulations under the moment-based set. Using real-world data, we conduct computational studies to demonstrate the efficiency of the solution approaches and the effectiveness of the solutions.
In this article, we study the fully differential observables of exclusive production of heavy (charm and bottom) quark pairs in high-energy ultraperipheral $pA$ and $AA$ collisions. In these processes, the nucleus $A$ serves as an efficient source of the photon flux, while the QCD interaction of the produced heavy-quark pair with the target ($p$ or $A$) proceeds via an exchange of gluons in a color singlet state, described by the gluon Wigner distribution. The corresponding predictions for differential cross sections were obtained by using the dipole $S$-matrix in the McLerran--Venugopalan saturation model with impact parameter dependence for the nucleus target, and its recent generalization, for the proton target. Prospects of experimental constraints on the gluon Wigner distribution in this class of reactions are discussed.
We introduce stronger notions for approximate single-source shortest-path distances, show how to efficiently compute them from weaker standard notions, and demonstrate the algorithmic power of these new notions and transformations. One application is the first work-efficient parallel algorithm for computing exact single-source shortest paths -- resolving a major open problem in parallel computing. Given a source vertex in a directed graph with polynomially-bounded nonnegative integer lengths, the algorithm computes an exact shortest path tree in $m \log^{O(1)} n$ work and $n^{1/2+o(1)}$ depth. Previously, no parallel algorithm improving the trivial linear depth of Dijkstra's algorithm without significantly increasing the work was known, even for the case of undirected and unweighted graphs (i.e., for computing a BFS-tree). Our main result is a black-box transformation that uses $\log^{O(1)} n$ standard approximate distance computations to produce approximate distances which also satisfy the subtractive triangle inequality (up to a $(1+\varepsilon)$ factor) and even induce an exact shortest path tree in a graph with only slightly perturbed edge lengths. These strengthened approximations are algorithmically significantly more powerful and overcome well-known and often encountered barriers for using approximate distances. In directed graphs they can even be boosted to exact distances. This results in a black-box transformation of any (parallel or distributed) algorithm for approximate shortest paths in directed graphs into an algorithm computing exact distances at essentially no cost. Applying this to the recent breakthroughs of Fineman et al. for computing approximate SSSP distances via approximate hopsets gives new parallel and distributed algorithms for exact shortest paths.
We investigate the scattering processes of two photons in a one-dimensional waveguide coupled to two giant atoms. By adjusting the accumulated phase shifts between the coupling points, we are able to effectively manipulate the characteristics of these scattering photons. Utilizing the Lippmann-Schwinger formalism, we derive analytical expressions for the wave functions describing two-photon interaction in separate, braided, and nested configurations. Based on these wave functions, we also obtain analytical expressions for the incoherent power spectra and second-order correlation functions. In contrast to small atoms, the incoherent spectrum, which is defined by the correlation of the bound state, can exhibit more tunability due to the phase shifts. Additionally, the second-order correlation functions in the transmission and reflection fields could be tuned to exhibit either bunching or antibunching upon resonant driving. These unique features offered by the giant atoms in waveguide QED could benefit the generation of nonclassical itinerant photons in quantum networks.
In this work we provide a possible geometrical interpretation of the spin of elementary particles. In particular, it is investigated how the wave equations of matter are altered by the addition of an antisymmetric contribution to the metric tensor. In this scenario the explicit form of the matter wave equations is investigated in a general curved space-time, and then the equations are particularized to the flat case. Unlike traditional approaches of NGT, in which the gravitational field is responsible for breaking the symmetry of the flat Minkowski metric, we find it more natural to consider that, in general, the metric of the space-time could be nonsymmetric even in the flat case. The physical consequences of this assumption are explored in detail. Interestingly enough, it is found that the metric tensor splits into a bosonic part and a fermionic one; the antisymmetric part of the metric is very sensitive to the spin and turns out to be undetectable for spinless scalar particles. However, fermions couple to it in a non-trivial way (only when there are interactions). In addition, the Pauli coupling is derived automatically as a consequence of the nonsymmetric nature of the metric.
With a new proof approach we prove in a more general setting the classical convergence theorem that almost everywhere convergence of measurable functions on a finite measure space implies convergence in measure. Specifically, we generalize the theorem for the case where the codomain is a separable metric space and for the case where the limiting map is constant and the codomain is an arbitrary topological space.
A Newtonian uniform ball expanding in empty space constitutes a common heuristic analogy for FLRW cosmology. We discuss possible implementations of the corresponding general-relativistic problem and a variety of new cosmological analogies arising from them. We highlight essential ingredients of the Newtonian analogy, including that the quasilocal energy is always `Newtonian' in the sense that the magnetic part of the Weyl tensor does not contribute to it. A symmetry of the Einstein-Friedmann equations produces another one in the original Newtonian system.