The effect of interorbital hopping on orbital-selective Mottness in a two-band correlated system is investigated using the dynamical mean-field theory with the Lanczos method as the impurity solver. We construct the phase diagram of the two-orbital Hubbard model with interorbital hopping ($t_{12}$), in which the orbital-selective Mott phase (OSMP) shows different evolution trends. We find that negative interorbital hopping ($t_{12}<0$) can enhance the OSMP regime upon tuning the effective bandwidth ratio. On the contrary, for positive interorbital hopping ($t_{12}>0$), the OSMP region narrows as the orbital hybridization increases, until it disappears. We also show that a new OSMP emerges for a sufficiently large positive interorbital hopping, owing to the role exchange of the wide and narrow effective orbitals caused by the large $t_{12}$. Our results are also applicable to the hole-overdoped Ba$_2$CuO$_{4-\delta}$ superconductor, which is an orbital-selective Mott compound at half-filling.
We call a family $\mathcal{G}$ of subsets of $[n]$ a $k$-generator of $\mathbb{P}[n]$ if every $x \subset [n]$ can be expressed as a union of at most $k$ disjoint sets in $\mathcal{G}$. Frein, Leveque and Sebo conjectured that for any $n \geq k$, such a family must be at least as large as the $k$-generator obtained by taking a partition of $[n]$ into classes of sizes as equal as possible, and taking the union of the power-sets of the classes. We generalize a theorem of Alon and Frankl \cite{alon} in order to show that for fixed $k$, any $k$-generator of $\mathbb{P}[n]$ must have size at least $k2^{n/k}(1-o(1))$, thereby verifying the conjecture asymptotically for multiples of $k$.
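As a quick illustration (not from the paper), the following sketch compares the size of the balanced-partition construction with the leading term $k2^{n/k}$ of the lower bound; the exact counting convention (here the empty set is counted once) is an assumption.

```python
# Sketch: size of the balanced-partition k-generator vs. the k * 2^(n/k) bound.
# The counting convention (empty set counted once) is an assumption, not necessarily the paper's.

def balanced_generator_size(n: int, k: int) -> int:
    """Partition [n] into k classes of near-equal size and take the union of
    the power sets of the classes; the empty set is counted once."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    return sum(2 ** s for s in sizes) - (k - 1)

def lower_bound_leading_term(n: int, k: int) -> float:
    """Leading term k * 2^(n/k) of the asymptotic lower bound for fixed k."""
    return k * 2 ** (n / k)

if __name__ == "__main__":
    for n in (12, 24, 36):
        print(n, balanced_generator_size(n, 3), lower_bound_leading_term(n, 3))
```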
In light of a recent reformulation of Bell's theorem from causal principles by Howard Wiseman and the author, I argue that the conflict between quantum theory and relativity brought up by Bell's work can be softened by a revision of our classical notions of causation. I review some recent proposals for a quantum theory of causation that make great strides towards that end, but highlight a property that is shared by all those theories that would not have satisfied Bell's realist inclinations. They require (implicitly or explicitly) agent-centric notions such as "controllables" and "uncontrollables", or "observed" and "unobserved". Thus they relieve the tensions around Bell's theorem by highlighting an issue more often associated with another deep conceptual issue in quantum theory: the measurement problem. Rather than rejecting those terms, however, I argue that we should understand why they seem to be, at least at face-value, needed in order to reach compatibility between quantum theory and relativity. This seems to suggest that causation, and thus causal structure, are emergent phenomena, and lends support to the idea that a resolution of the conflict between quantum theory and relativity necessitates a solution to the measurement problem.
The recent emergence of chirality in mechanical metamaterials has revolutionized the field, enabling achievements in wave propagation and polarization control. Despite being an intrinsic feature of some molecules and ubiquitous in our surroundings, the incorporation of chirality into mechanical systems has only gained widespread recognition in the last few years. The extra degrees of freedom induced by chirality have propelled the study of such systems to new heights, leading to a better understanding of the physical laws governing them. In this study, we present a structural design of a butterfly meta-structure that exploits the chiral effect to create a 3D chiral butterfly capable of inducing a rotation of $90^{\circ}$ in the plane of polarization, enabling a switch between various polarization states within a solid material. Furthermore, our numerical investigation using Finite Element Analysis (FEA) has revealed an unexpected conversion of compressional motion into transverse motion within these structures, further highlighting the transformative potential of chirality in mechanical metamaterials. This reveals an additional degree of freedom that can be manipulated, namely the polarization state.
We consider the longitudinal momentum distribution of hadrons inside jets in proton-proton collisions. At partonic threshold, large double-logarithmic corrections arise which need to be resummed to all orders. We develop a factorization formalism within SCET that allows for the joint resummation of threshold and jet radius logarithms. We achieve next-to-leading logarithmic (NLL$'$) accuracy by including non-global logarithms in the leading-color approximation. Overall, we find that the threshold resummation leads to a sizable enhancement of the cross section and a reduced QCD scale dependence, suggesting that the all-order resummation can be important for the reliable extraction of fragmentation functions in global analyses when jet substructure data is included.
This tutorial presents a simple yet accurate transient stability analysis for Type-A wind turbines. The material is presented in a tutorial style and therefore includes Matlab/MuPAD scripts that demonstrate the main concepts.
In the present work, we study heat transport through a one-dimensional time-dependent nanomechanical system. The microscopic model consists of coupled chains of atoms, considering local and non-local interactions between particles. We show that the system presents different stationary transport regimes depending on the driving frequency, temperature gradients and the degree of locality of the interactions. In one of these regimes, the system operates as a phonon refrigerator, and its cooling performance is analyzed. Based on a low-frequency approach, we show that non-locality and its interplay with dissipation cause a decrease in cooling capacity. The results are obtained numerically by means of the Keldysh non-equilibrium Green's function formalism.
The Four Fermi model with discrete chiral symmetry is studied in three dimensions at non-zero chemical potential and temperature using the Hybrid Monte Carlo algorithm. The number of fermion flavors is chosen large $(N_f=12)$ to compare with analytic results. A first order chiral symmetry restoring transition is found at zero temperature with a critical chemical potential $\mu_c$ in good agreement with the large $N_f$ calculations. The critical index $\nu$ of the correlation length is measured in good agreement with analytic calculations. The two dimensional phase diagram (chemical potential vs. temperature) is mapped out quantitatively. Finite size effects on relatively small lattices and non-zero fermion mass effects are seen to smooth out the chiral transition dramatically.
We calculate $\partial\mu/\partial n$ in extrinsic graphene as a function of carrier density $n$ at zero temperature by obtaining the electronic self-energy within the Hartree-Fock approximation. The exchange-driven Dirac-point logarithmic singularity in the quasiparticle velocity of intrinsic graphene disappears in the extrinsic case. The calculated renormalized $\partial\mu/\partial n$ in extrinsic graphene has the same qualitative $n^{-\frac12}$ density dependence as the inverse bare density of states with a 20% enhancement from the corresponding bare value, a relatively weak effect compared to the corresponding parabolic-band case.
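For orientation, the $n^{-1/2}$ dependence of the bare (non-interacting) inverse density of states follows from the standard Dirac-cone relations (a textbook estimate, not the renormalized result of this work): with fourfold spin/valley degeneracy, $$k_F=\sqrt{\pi n},\qquad E_F=\hbar v_F k_F,\qquad D(E)=\frac{2|E|}{\pi\hbar^{2}v_F^{2}}\;\Longrightarrow\;\left(\frac{\partial\mu}{\partial n}\right)_{\!\mathrm{bare}}=\frac{1}{D(E_F)}=\frac{\hbar v_F}{2}\sqrt{\frac{\pi}{n}}\propto n^{-\frac12}.$$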
We study an initial-boundary-value problem (IBVP) for a system of coupled Maxwell-Bloch equations (CMBE) that model two colors or polarizations of light resonantly interacting with a degenerate, two-level, active optical medium with an excited state and a pair of degenerate ground states. We assume that the electromagnetic field approaches non-vanishing plane waves in the far past and future. This type of interaction has been found to underlie nonlinear optical phenomena including electromagnetically induced transparency, slow light, stopped light, and quantum memory. Under the assumptions of unidirectional, lossless propagation of slowly-modulated plane waves, the resulting CMBE become completely integrable in the sense of possessing a Lax Pair. In this paper, we formulate an inverse scattering transform (IST) corresponding to these CMBE and their Lax pair, allowing for the spectral line of the atomic transitions in the active medium to have a finite width. The scattering problem for this Lax pair is the same as for the Manakov system. The main advancement in this IST for CMBE is calculating the nontrivial spatial propagation of the spectral data and determining the state of the optical medium in the distant future from that in the distant past, which is needed for the complete formulation of the IBVP. The Riemann-Hilbert problem is used to extract the spatio-temporal dependence of the solution from the evolving spectral data. We further derive and analyze several types of solitons and determine their velocity and stability, as well as find dark states of the medium which fail to interact with a given pulse.
A spontaneously broken SU(2)xU(1) gauge theory with just one "primordial" generation of fermions is formulated in the context of a generally covariant theory which contains two measures of integration in the action: the standard \sqrt{-g}d^{4}x and a new \Phi d^{4}x, where \Phi is a density built out of degrees of freedom independent of the metric. Models of this type are known to produce a satisfactory answer to the cosmological constant problem. Global scale invariance is implemented. After SSB of scale invariance and gauge symmetry, it is found that, under conditions appropriate to laboratory particle physics experiments, to each primordial fermion field there correspond three physical fermionic states. Two of them correspond to particles with constant masses, and they are identified with the first two generations of the electro-weak theory. The third fermionic states acquire, at the classical level, non-polynomial interactions, which indicate the existence of a fermionic condensate and fermionic mass generation.
This document presents in detail the derivation of the formulas that describe the resonance driving terms and the tune spread with amplitude generated by the beam-beam long-range interactions and the DC wire compensators in cyclical machines. This analysis makes use of the weak-strong approximation.
We study a very small three-player poker game (one-third street Kuhn poker), and a simplified version of the game that is interesting because it has three distinct equilibrium solutions. For one-third street Kuhn poker, we are able to find all of the equilibrium solutions analytically. For large enough pot size, $P$, there is a degree of freedom in the solution that allows one player to transfer profit between the other two players without changing their own profit. This has potentially interesting consequences in repeated play of the game. We also show that in a simplified version of the game with $P>5$, there is one equilibrium solution if $5 < P < P^* \equiv (5+\sqrt{73})/2$, and three distinct equilibrium solutions if $P > P^*$. This may be the simplest non-trivial multiplayer poker game with more than one distinct equilibrium solution, and it provides us with a test case for theories of dynamic strategy adjustment over multiple realisations of the game. We then study a third-order system of ordinary differential equations that models the dynamics of three players who try to maximise their expectation by continuously varying their betting frequencies. We find that the dynamics of this system are oscillatory, with two distinct types of solution. We then study a difference equation model, based on repeated play of the game, in which each player continually updates their estimates of the other players' betting frequencies. We find that the dynamics are noisy, but basically oscillatory, for short enough estimation periods and slow enough frequency adjustments, but that the dynamics can be very different for other parameter values.
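Evaluating the threshold quoted above: $$P^{*}=\frac{5+\sqrt{73}}{2}\approx\frac{5+8.544}{2}\approx 6.77,$$ so the single-equilibrium regime is roughly $5<P\lesssim 6.77$, with three distinct equilibria appearing for $P\gtrsim 6.77$.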
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration. ViTs typically yield superior results in image restoration compared to CNNs due to their ability to capture long-range dependencies and input-dependent characteristics. However, the computational complexity of Transformer-based models grows quadratically with the image resolution, limiting their practical appeal in high-resolution image restoration tasks. In this paper, we propose a simple yet effective visual state space model (EVSSM) for image deblurring, leveraging the benefits of state space models (SSMs) for visual data. In contrast to existing methods that employ several fixed-direction scans for feature extraction, which significantly increases the computational cost, we develop an efficient visual scan block that applies various geometric transformations before each SSM-based module, capturing useful non-local information while maintaining high efficiency. Extensive experimental results show that the proposed EVSSM performs favorably against state-of-the-art image deblurring methods on benchmark datasets and real-captured images.
The goal of these notes is to present the C*-algebra $C^*(B,L,\theta)$ of a Boolean dynamical system $(B,L,\theta)$, which generalizes the $C^*$-algebra associated to labelled graphs introduced by Bates and Pask, and to determine its simplicity and its gauge-invariant ideals, as well as to compute its K-theory.
At zero magnetic field, a series of five phase transitions occurs in Co3V2O8. The Néel temperature, TN=11.4 K, is followed by four additional phase changes at T1=8.9 K, T2=7.0 K, T3=6.9 K, and T4=6.2 K. The different phases are distinguished by the commensurability of the b-component of its spin density wave vector. We investigate the stability of these various phases under magnetic fields through dielectric constant and magnetic susceptibility anomalies. The field-temperature phase diagram of Co3V2O8 is completely resolved. The complexity of the phase diagram results from the competition of different magnetic states with almost equal ground state energies due to competing exchange interactions and frustration.
We present a novel interactive learning protocol that enables training request-fulfilling agents by verbally describing their activities. Unlike imitation learning (IL), our protocol allows the teaching agent to provide feedback in a language that is most appropriate for them. Compared with reward in reinforcement learning (RL), the description feedback is richer and allows for improved sample complexity. We develop a probabilistic framework and an algorithm that practically implements our protocol. Empirical results in two challenging request-fulfilling problems demonstrate the strengths of our approach: compared with RL baselines, it is more sample-efficient; compared with IL baselines, it achieves competitive success rates without requiring the teaching agent to be able to demonstrate the desired behavior using the learning agent's actions. Apart from empirical evaluation, we also provide theoretical guarantees for our algorithm under certain assumptions about the teacher and the environment.
Most of the observed extrasolar planets are found on tight and often eccentric orbits. The high eccentricities are not easily explained by planet-formation models, which predict that planets should be on rather circular orbits. Here we explore whether fly-bys involving planetary systems with properties similar to those of the gas giants in the solar system can produce planets with properties similar to the observed planets. Using numerical simulations, we show that fly-bys can cause the immediate ejection of planets, and sometimes also lead to the capture of one or more planets by the intruder. More common, however, is that fly-bys only perturb the orbits of planets, sometimes leaving the system in an unstable state. Over time-scales of a few million to several hundred million years after the fly-by, this perturbation can trigger planet-planet scatterings, leading to the ejection of one or more planets. For example, in the case of the four gas giants of the solar system, the fraction of systems from which at least one planet is ejected more than doubles in 10^8 years after the fly-by. The remaining planets are often left on more eccentric orbits, similar to the eccentricities of the observed extrasolar planets. We combine our results on how fly-bys affect solar-system-like planetary systems with the rate at which encounters occur in young stellar clusters. For example, we measure the effects of fly-bys on the four gas giants in the solar system. We find that, for such systems, between 5 and 15 per cent suffer ejections of planets in 10^8 years after fly-bys in typical open clusters. Thus, encounters in young stellar clusters can significantly alter the properties of any planets orbiting stars in clusters. As a large fraction of the stars which populate the solar neighbourhood form in stellar clusters, encounters can significantly affect the properties of the observed extrasolar planets.
More than 500 diffuse interstellar bands (DIBs) have been observed in astronomical spectra, and their signatures and correlations in different environments have been studied over the past decades to reveal clues about the nature of the carriers. We compare the equivalent widths of the DIBs, normalized to the amount of reddening, E_B-V, to search for anti-correlated DIB pairs using a data sample containing 54 DIBs measured in 25 sight lines. This data sample covers most of the strong and commonly detected DIBs in the optical region, and the sight lines probe a variety of ISM conditions. We find that 12.9% of the DIB pairs are anti-correlated, and the lowest Pearson correlation coefficient is r_norm ~ -0.7. We revisit correlation-based DIB families and are able to reproduce the assignments of such families for the well-studied DIBs by applying hierarchical agglomerative and k-means clustering algorithms. We visualize the dissimilarities between DIBs, represented by 1 - r_norm, using multi-dimensional scaling (MDS). With this representation, we find that the DIBs form a rather continuous sequence, which implies that some properties of the DIB carriers are changing gradually along this sequence. We also find that at least two factors are needed to properly explain the dissimilarities between DIBs. While the first factor may be interpreted as related to the ionization properties of the DIB carriers, a physical interpretation of the second factor is less clear and may be related to how DIB carriers interact with surrounding interstellar material.
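A minimal sketch (not the authors' code) of the kind of analysis described above: embedding DIBs with MDS on the dissimilarity 1 - r_norm and grouping them into correlation-based families. The random matrix below is a placeholder standing in for the real 54x54 correlation matrix.

```python
# Sketch: MDS embedding and hierarchical clustering of DIBs from a
# dissimilarity matrix 1 - r_norm (random placeholder data, not real DIB data).
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
a = rng.uniform(-0.7, 1.0, size=(54, 54))      # placeholder correlations
r_norm = np.clip((a + a.T) / 2, -1.0, 1.0)
np.fill_diagonal(r_norm, 1.0)

dissimilarity = 1.0 - r_norm                   # 0 for perfectly correlated DIBs
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# Hierarchical agglomerative clustering on the same dissimilarities,
# e.g. to recover correlation-based "DIB families".
Z = linkage(squareform(dissimilarity, checks=False), method="average")
families = fcluster(Z, t=3, criterion="maxclust")
print(coords.shape, np.bincount(families))
```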
In this article we consider asymptotics for the spectral function of Schr\"odinger operators on the real line. Let $P:L^2(\mathbb{R})\to L^2(\mathbb{R})$ have the form $$ P:=-\tfrac{d^2}{dx^2}+W, $$ where $W$ is a self-adjoint first order differential operator with certain modified almost periodic structure. We show that the kernel of the spectral projector, $\mathbb{1}_{(-\infty,\lambda^2]}(P)$ has a full asymptotic expansion in powers of $\lambda$. In particular, our class of potentials $W$ is stable under perturbation by formally self-adjoint first order differential operators with smooth, compactly supported coefficients. Moreover, it includes certain potentials with dense pure point spectrum. The proof combines the gauge transform methods of Parnovski-Shterenberg and Sobolev with Melrose's scattering calculus.
Moir\'e superlattices formed from twisting trilayers of graphene are an ideal model for studying electronic correlation, and offer several advantages over bilayer analogues, including more robust and tunable superconductivity and a wide range of twist angles associated with flat band formation. Atomic reconstruction, which strongly impacts the electronic structure of twisted graphene structures, has been suggested to play a major role in the relative versatility of superconductivity in trilayers. Here, we exploit an interferometric 4D-STEM approach to image a wide range of trilayer graphene structures. Our results unveil a considerably different model for moir\'e lattice relaxation in trilayers than that proposed from previous measurements, informing a thorough understanding of how reconstruction modulates the atomic stacking symmetries crucial for establishing superconductivity and other correlated phases in twisted graphene trilayers.
In the second part of this two-paper series, the stability margin of a critical machine and that of the system are first proposed, and then the concept of the non-global stability margin is illustrated. Based on the crucial statuses of the leading unstable critical machine and the most severely disturbed critical machine, the critical stability of the system is analyzed from the perspective of an individual machine. At the end of the paper, comparisons between the proposed method and classic global methods are demonstrated.
This article describes an original approach to analyzing cross sections and surrogate data measurements simultaneously, using an efficient Monte Carlo extended $\mathcal{R}$-matrix theory algorithm based on a unique set of nuclear structure parameters. An alternative analytical path based on the manifold Hauser-Feshbach equation was used intensively in this work to gauge the errors carried by the surrogate-reaction method commonly used to predict neutron-induced cross sections from observed partial decay probabilities. The present paper emphasizes in particular a dedicated way to treat the correlations between the widths of the direct-reaction entrance channels and the outgoing decay channels of the excited nucleus prior to decay. This theoretical foundation made it possible to apply our method successfully to both fission-probability data and directly measured neutron cross sections for the fissile Pu isotopes, namely the $^{237, 238, 240, 242~and~244}$Pu$^*$ nuclei. This new capability opens genuine perspectives for the evaluation process as simultaneously measured fission- and $\gamma$-decay probabilities become available as derived data.
We discuss the problem of fronts propagating into metastable and unstable states. We examine the time development of the leading edge, discovering a precursor which in the metastable case propagates out ahead of the front at a velocity more than double that of the front and establishes the characteristic exponential behavior of the steady-state leading edge. We also study the dependence of the velocity on the imposition of a cutoff in the reaction term. These studies shed new light on the problem of velocity selection in the case of propagation into an unstable state. We also examine how discreteness in a particle simulation acts as an effective cutoff in this case.
Globular clusters are unique tracers of ancient star formation. We determine the formation efficiencies of globular clusters across cosmic time by modeling the formation and dynamical evolution of the globular cluster population of a Milky Way-type galaxy in hierarchical cosmology, using the merger tree from the Via Lactea II simulation. All of the models are constrained to reproduce the observed specific frequency and initial mass function of globular clusters in isolated dwarfs. Globular cluster orbits are then computed in a time-varying gravitational potential after they are either accreted from a satellite halo or formed in situ, within the Milky Way halo. We find that the Galactocentric distances and metallicity distribution of globular clusters are very sensitive to the formation efficiencies of globular clusters as a function of redshift and halo mass. Our most accurate models reveal two distinct peaks in the globular cluster formation efficiency at z~2 and z~7-12 and prefer a formation efficiency that is mildly increasing with decreasing halo mass, the opposite of what is expected for feedback-regulated star formation. This model accurately reproduces the positions, velocities, mass function, metallicity distribution, and age distribution of globular clusters in the Milky Way and predicts that ~ 40% formed in situ, within the Milky Way halo, while the other ~ 60% were accreted from about 20 satellite dwarf galaxies with Vc > 30 km/s, and that about 29% of all globular clusters formed at redshifts z > 7. These results further strengthen the notion that globular cluster formation was an important mode of star formation in high-redshift galaxies and likely played a significant role in the reionization of the intergalactic medium.
Events in a narrative differ in salience: some are more important to the story than others. Estimating event salience is useful for tasks such as story generation, and as a tool for text analysis in narratology and folkloristics. To compute event salience without any annotations, we adopt Barthes' definition of event salience and propose several unsupervised methods that require only a pre-trained language model. Evaluating the proposed methods on folktales with event salience annotation, we show that the proposed methods outperform baseline methods and find fine-tuning a language model on narrative texts is a key factor in improving the proposed methods.
The dynamics of any quantum system is unavoidably influenced by the external environment. Thus, the observation of a quantum system (probe) can allow the measure of the environmental features. Here, to spectrally resolve a noise field coupled to the quantum probe, we employ dissipative manipulations of the probe, leading to so-called Stochastic Quantum Zeno (SQZ) phenomena. A quantum system coupled to a stochastic noise field and subject to a sequence of protective Zeno measurements slowly decays from its initial state with a survival probability that depends both on the measurement frequency and the noise. We present a robust sensing method to reconstruct the unknown noise power spectral density by evaluating the survival probability that we obtain when we additionally apply a set of coherent control pulses to the probe. The joint effect of coherent control, protective measurements and noise field on the decay provides us with the desired information on the noise field.
Spintronic devices, whose operation is based on the motion of a magnetic domain wall (DW), have been proposed recently. If a DW could be driven directly by an electric current instead of a magnetic field, the performance and functions of such a device would be drastically improved. Here we report real-space observation of the current-driven DW motion by using a well-defined single DW in a micro-fabricated magnetic wire with submicron width. Magnetic force microscopy (MFM) visualizes that a single DW introduced in the wire is displaced back and forth by positive and negative pulsed-current, respectively. We can control the DW position in the wire by tuning the intensity, the duration and the polarity of the pulsed-current. It is, thus, demonstrated that spintronic device operation by the current-driven DW motion is possible.
We derive the soft effective action in $(d+2)$-dimensional abelian gauge theories from the on-shell action obeying Neumann boundary conditions at timelike and null infinity and Dirichlet boundary conditions at spatial infinity. This allows us to identify the on-shell degrees of freedom on the boundary with the soft modes living on the celestial sphere. Following the work of Donnelly and Wall, this suggests that we can interpret soft modes as entanglement edge modes on the celestial sphere and study entanglement properties of soft modes in abelian gauge theories.
George and Wilson [Acta Cryst. D 50, 361 (1994)] looked at the distribution of values of the second virial coefficient of globular proteins, under the conditions at which they crystallise. They found the values to lie within a fairly narrow range. We have defined a simple model of a generic globular protein. We then generate a set of proteins by picking values for the parameters of the model from a probability distribution. At fixed solubility, this set of proteins is found to have values of the second virial coefficient that fall within a fairly narrow range. The shape of the probability distribution of the second virial coefficient is Gaussian because the second virial coefficient is a sum of contributions from different patches on the protein surface.
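A toy numerical illustration of the central-limit argument in the last sentence (hypothetical patch distribution, not the paper's model): summing independent per-patch contributions suppresses the skewness of the total by roughly 1/sqrt(N_patches), so the distribution of the second virial coefficient comes out close to Gaussian.

```python
# Toy check of the central-limit argument: the sum of many independent,
# individually skewed patch contributions is nearly Gaussian.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
n_proteins, n_patches = 10_000, 20
# Hypothetical per-patch contributions (skewed), one row per generated protein.
patch_contrib = rng.exponential(scale=1.0, size=(n_proteins, n_patches)) - 1.0
b2 = patch_contrib.sum(axis=1)

print("skewness of one patch:", skew(patch_contrib[:, 0]))
print("skewness of the sum:  ", skew(b2))   # roughly 2 / sqrt(20) ~ 0.45
```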
We study the dynamics of a ring of three unidirectionally coupled double-well Duffing oscillators for three different forms of the damping coefficient: constant damping, damping proportional to time, and damping inversely proportional to time. The dynamics in all cases is analyzed through time series, Fourier and Hilbert transforms, Poincar\'e sections, as well as bifurcation diagrams and Lyapunov exponents with respect to the coupling strength. In the first case, we observe a well-known route from a stable steady state to hyperchaos through a Hopf bifurcation and a series of torus bifurcations as the coupling strength is increased. In the second case, the system is highly dissipative and converges to one of the stable equilibria. Finally, in the third case, transient toroidal hyperchaos takes place.
The newly discovered "Higgs" boson h^0, being lighter than the top quark t, opens up new probes for flavor and mass generation. In the general two Higgs doublet model, new ct, cc and tt Yukawa couplings could modify h^0 properties. If t --> ch^0 occurs at the percent level, the observed ZZ^* and \gamma\gamma signal events may have accompanying cbW activity coming from t\bar{t} feeddown. We suggest that t --> ch^0 can be searched for via h^0 --> ZZ^*, \gamma\gamma, WW^* and b\bar{b}, perhaps even \tau^+\tau^- modes in t\bar{t} events. Existing data might be able to reveal some clues for t --> ch^0 signature, or push the branching ratio B(t --> ch^0) down to below the percent level.
A spin version of dynamical mean-field theory is extended for magnetically ordered states in the Heisenberg model. The self-consistency equations are solved with high numerical accuracy by means of the continuous-time quantum Monte Carlo with bosonic baths coupled to the spin. The resultant solution is critically tested by known physical properties. In contrast with the mean-field theory, soft paramagnons appear near the transition temperature. Moreover, the Nambu-Goldstone mode (magnon) in the ferromagnetic phase is reproduced reasonably well. However, antiferromagnetic magnons have an energy gap in contradiction to the Nambu-Goldstone theorem. The origin of this failure is discussed in connection with an artificial first-order nature of the transition.
We provide an experimentally measurable local gauge $U(1)$ invariant Fubini-Study (FS) metric for mixed states. Like the FS metric for pure states, it captures only the quantum part of the uncertainty in the evolution Hamiltonian. We show that this satisfies the quantum Cramer-Rao bound and thus arrive at a more general and measurable bound. Upon imposing the monotonicity condition, it reduces to the square-root-derivative quantum Fisher information. We show that the dynamical phase is zero on the Fisher information metric space. A relation between the square-root derivative and the logarithmic derivative is formulated such that both give the same Fisher information. We generalize the Fubini-Study metric for mixed states further and arrive at a family of Fubini-Study metrics, called the $\alpha$ metrics. This newly defined $\alpha$ metric also satisfies the Cramer-Rao bound. Again, by imposing the monotonicity condition on this metric, we derive the monotone $\alpha$ metric. It reduces to the Fisher information metric for $\alpha=1$.
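For reference, the standard (symmetric-logarithmic-derivative) form of the quantum Cramer-Rao bound mentioned above reads $$(\Delta\theta)^{2}\;\ge\;\frac{1}{M\,F_{Q}(\theta)},\qquad F_{Q}(\theta)=\mathrm{Tr}\!\left[\rho_{\theta}L_{\theta}^{2}\right],\qquad \partial_{\theta}\rho_{\theta}=\tfrac{1}{2}\!\left(L_{\theta}\rho_{\theta}+\rho_{\theta}L_{\theta}\right),$$ where $L_{\theta}$ is the symmetric logarithmic derivative and $M$ the number of independent measurements; the bound derived in the paper is stated to be more general than this textbook form.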
After briefly recalling the quantum entanglement-based view of topological phases of matter in order to outline the general context, we give an overview of different approaches to the classification problem of topological insulators and superconductors of non-interacting Fermions. In particular, we review in some detail general symmetry aspects of the "ten-fold way" which forms the foundation of the classification, and put different approaches to the classification in relationship with each other. We end by briefly mentioning some of the results obtained on the effect of interactions, mainly in three spatial dimensions.
A search for new physics is performed in events with two same-sign isolated leptons, hadronic jets, and missing transverse energy in the final state. The analysis is based on a data sample corresponding to an integrated luminosity of 4.98 inverse femtobarns produced in pp collisions at a center-of-mass energy of 7 TeV collected by the CMS experiment at the LHC. This constitutes a factor of 140 increase in integrated luminosity over previously published results. The observed yields agree with the standard model predictions and thus no evidence for new physics is found. The observations are used to set upper limits on possible new physics contributions and to constrain supersymmetric models. To facilitate the interpretation of the data in a broader range of new physics scenarios, information on the event selection, detector response, and efficiencies is provided.
We report the discovery of a $75^{\circ}$-long stellar stream in the Gaia DR2 catalog, found using the new STREAMFINDER algorithm. The structure is probably the remnant of a now fully disrupted globular cluster, lies $\approx 3.8 \, {\rm kpc}$ away from the Sun in the direction of the Galactic bulge, and possesses a highly retrograde motion. We find that the system orbits close to the Galactic plane at Galactocentric distances between $4.9$ and $19.8 \, {\rm kpc}$. The discovery of this extended and extremely low surface brightness stream ($\Sigma_G\sim 34.3 \, {\rm mag \, arcsec^{-2}}$), with a mass of only $2580\pm140 \, {\rm\,M_\odot}$, demonstrates the power of the STREAMFINDER algorithm to detect even very nearby and ultra-faint structures. Due to its proximity and length, we expect that Phlegethon will be a very useful probe of the Galactic acceleration field.
We consider a stochastic epidemic model with sideward contact tracing. We assume that infection is driven by interactions within mixing events (gatherings of two or more individuals). Once an infective is diagnosed, each individual who was infected at the same event as the diagnosed individual is contact traced with some given probability. Assuming few initial infectives in a large population, the early phase of the epidemic is approximated by a branching process with sibling dependencies. To address the challenges given by the dependencies, we consider sibling groups (individuals who become infected at the same event) as macro-individuals and define a macro-branching process. This allows us to derive an expression for the effective macro-reproduction number which corresponds to the effective individual reproduction number and represents a threshold for the behaviour of the epidemic. Through numerical illustrations, we show how the reproduction number varies with the mean size of mixing events, the rate of diagnosis and the tracing probability.
Canonical analysis has long been the primary analysis method for studies of phase transitions. However, this approach is not sensitive enough if transition signals are too close in temperature space. The recently introduced generalized microcanonical inflection-point analysis method not only enables the systematic identification and classification of transitions in systems of any size, but it can also distinguish transitions that standard canonical analysis cannot resolve. By applying this method to a generic coarse-grained model for semiflexible polymers, we identify a mixed structural phase dominated by secondary structures such as hairpins and loops that originates from a bifurcation in the hyperspace spanned by inverse temperature and bending stiffness. This intermediate phase, which is embraced by the well-known random-coil and toroidal phases, is testimony to the necessity of balancing entropic variability and energetic stability in functional macromolecules under physiological conditions.
Network embedding maps the nodes of a given network into a low-dimensional space such that the semantic similarities among the nodes can be effectively inferred. Most existing approaches use the inner product of node embeddings to measure the similarity between nodes, and therefore lack the capacity to capture complex relationships among nodes. Besides, they treat paths in the network merely as auxiliary structural information when inferring node embeddings, whereas paths in the network carry rich user information that is semantically relevant and cannot be ignored. In this paper, we propose a novel method called Network Embedding on the Metric of Relation, abbreviated as NEMR, which can learn the embeddings of nodes in a relational metric space efficiently. First, our NEMR models the relationships among nodes in a metric space with deep learning methods, including variational inference that maps the relationship between nodes to a Gaussian distribution so as to capture the uncertainties. Secondly, our NEMR considers not only the equivalence of multiple paths but also the natural order of a single path when inferring embeddings of nodes, which enables NEMR to capture the multiple relationships among nodes, since multiple paths contain rich user information, e.g., age, hobby and profession. Experimental results on several public datasets show that NEMR outperforms the state-of-the-art methods on relevant inference tasks, including link prediction and node classification.
Let $G$ be a commutative connected algebraic group over a number field $K$, let $A$ be a finitely generated and torsion-free subgroup of $G(K)$ of rank $r>0$ and, for $n>1$, let $K(n^{-1}A)$ be the smallest extension of $K$ inside an algebraic closure $\overline K$ over which all the points $P\in G(\overline K)$ such that $nP\in A$ are defined. We denote by $s$ the unique non-negative integer such that $G(\overline K)[n]\cong (\mathbb Z/n\mathbb Z)^s$ for all $n\geq 1$. We prove that, under certain conditions, the ratio between $n^{rs}$ and the degree $[K(n^{-1}A):K(G[n])]$ is bounded independently of $n>1$ by a constant that depends only on the $\ell$-adic Galois representations associated with $G$ and on some arithmetic properties of $A$ as a subgroup of $G(K)$ modulo torsion. In particular we extend the main theorems of [13] about elliptic curves to the case of arbitrary rank.
In this talk I will review some of the recent applications of replica theory to glasses. I will first describe the basic assumptions and show that they can be considered as a precise reformulation of old ideas. The relation of this approach to mode-coupling theory will be briefly discussed. I will present numerical simulations for binary mixtures. The results of these simulations point toward the correctness of the replica approach to glasses. I will also describe the results of off-equilibrium simulations for large systems, in which the aging dynamics is studied.
The Centaurs are a transient population of small bodies in the outer solar system whose orbits are strongly chaotic. These objects typically suffer significant changes of orbital parameters on timescales of a few thousand years, and their orbital evolution exhibits two types of behavior, described qualitatively as random-walk and resonance-sticking. We have analyzed the chaotic behavior of the known Centaurs. Our analysis has revealed that the two types of chaotic evolution are quantitatively distinguishable: (1) the random-walk-type behavior is well described by so-called generalized diffusion, in which the rms deviation of the semimajor axis grows with time t as ~t^H, with Hurst exponent H in the range 0.22--0.95, whereas (2) orbital evolution dominated by intermittent resonance sticking, with sudden jumps from one mean motion resonance to another, has a poorly defined H. We further find that these two types of behavior are correlated with Centaur dynamical lifetime: most Centaurs whose dynamical lifetime is less than ~22 Myr exhibit generalized diffusion, whereas most Centaurs of longer dynamical lifetimes exhibit intermittent resonance sticking. We also find that Centaurs in the diffusing class are likely to evolve into Jupiter-family comets during their dynamical lifetimes, while those in the resonance-hopping class do not.
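A minimal sketch of estimating a Hurst exponent H from the growth of the rms deviation of the semimajor axis, assuming an ensemble of orbital histories (e.g. clone integrations) is available; this is an illustration, not the authors' pipeline.

```python
# Sketch: estimate a Hurst exponent H from rms(a(t) - a(0)) ~ t**H.
import numpy as np

def hurst_exponent(t, a):
    """t: array of times (n_times,); a: semimajor axes, shape (n_clones, n_times)."""
    da = a - a[:, :1]                        # deviation from the initial value
    rms = np.sqrt(np.mean(da ** 2, axis=0))  # ensemble rms at each time
    mask = (t > 0) & (rms > 0)
    H, _ = np.polyfit(np.log(t[mask]), np.log(rms[mask]), 1)
    return H

# Ordinary diffusion corresponds to H = 0.5; the diffusing Centaurs discussed
# above span roughly H = 0.22 to 0.95 on this definition.
```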
We derive normal approximation bounds in the Kolmogorov distance for sums of discrete multiple integrals and $U$-statistics made of independent Bernoulli random variables. Such bounds are applied to normal approximation for the renormalized subgraphs counts in the Erd{\H o}s-R\'enyi random graph. This approach completely solves a long-standing conjecture in the general setting of arbitrary graph counting, while recovering and improving recent results derived for triangles as well as results using the Wasserstein distance.
General relativity predicts that the spin axes of the pulsars in the double-pulsar system (PSR J0737-3039A/B) will precess rapidly, in general leading to a change in the observed pulse profiles. We have observed this system over a one-year interval using the Parkes 64-m radio telescope at three frequencies: 680, 1390 and 3030 MHz. These data, combined with the short survey observation made two years earlier, show no evidence for significant changes in the pulse profile of PSR J0737-3039A, the 22-ms pulsar. The limit on variations of the profile 10% width is about 0.5 deg per year. These results imply an angle delta between the pulsar spin axis and the orbit normal of <~ 60 deg, consistent with recent evolutionary studies of the system. Although a wide range of system parameters remain consistent with the data, the model proposed by Jenet & Ransom (2004) can be ruled out. A non-zero ellipticity for the radiation beam gives slightly but not significantly improved fits to the data, so that a circular beam describes the data equally well within the uncertainties.
Impact analysis is concerned with the identification of the consequences of changes and is therefore an important activity for software evolution. In model-based software development, models are core artifacts, which are often used to generate essential parts of a software system. Changes to a model can thus substantially affect different artifacts of a software system. In this paper, we propose a model-based approach to impact analysis, in which explicit impact rules can be specified in a domain-specific language (DSL). These impact rules define the consequences of designated UML class diagram changes on software artifacts and the need for dependent activities such as data evolution. The UML class diagram changes are identified automatically using model differencing. The advantage of using explicit impact rules is that they enable the formalization of knowledge about a product. By explicitly defining this knowledge, it is possible to create a checklist with hints about development steps that are (potentially) necessary to manage the evolution. To validate the feasibility of our approach, we provide the results of a case study.
The correlation buildup and the formation dynamics of the shell structure in a spherically confined one-component plasma are studied. Using Langevin dynamics simulations the relaxation processes and characteristic time scales and their dependence on the pair interaction and dissipation in the plasma are investigated. While in systems with Coulomb interaction (e.g. trapped ions) in a harmonic confinement shell formation starts at the plasma edge and proceeds inward, this trend is significantly weakened for dusty plasmas with Yukawa interaction. With a suitable change of the confinement conditions the crystallization scenario can be externally controlled.
Accurate control of quantum evolution is an essential requirement for quantum state engineering, laser chemistry, quantum information and quantum computing. Conditions of controllability for systems with a finite number of energy levels have been extensively studied. By contrast, results for controllability in infinite dimensions have been mostly negative, stating that full control cannot be achieved with a finite dimensional control Lie algebra. Here we show that by adding a discrete operation to a Lie algebra it is possible to obtain full control in infinite dimensions with a small number of control operators.
Incompressible even denominator fractional quantum Hall states at fillings $\nu = \pm \frac{1}{2}$ and $\nu = \pm \frac{1}{4}$ have been recently observed in monolayer graphene. We use a Chern-Simons description of multi-component fractional quantum Hall states in graphene to investigate the properties of these states and suggest variational wavefunctions that may describe them. We find that the experimentally observed even denominator fractions and standard odd fractions (such as $\nu=1/3, 2/5$, etc.) can be accommodated within the same flux attachment scheme and argue that they may arise from sublattice or chiral symmetry breaking orders (such as charge-density-wave and antiferromagnetism) of composite Dirac fermions, a phenomenon unifying integer and fractional quantum Hall physics for relativistic fermions. We also discuss possible experimental probes that can narrow down the candidate broken symmetry phases for the fractional quantum Hall states in the zeroth Landau level of monolayer graphene.
The entropy of BPS black holes in four space-time dimensions is discussed from both macroscopic and microscopic points of view.
Cem Tezer was a fastidious, meticulous, highly idiosyncratic and versatile scientist. Without him the Turkish mathematical community would be incomplete. Our sense of gratitude for his work in various areas of mathematics, the history of science, literature and music, and for his encouragement to do mathematics only for its beauty, was hardly unique, nor even unusual. After he passed away on 27 February 2020, while still working actively at Middle East Technical University, the number of colleagues and former students who described the ways in which their studies, and indeed their view of mathematics, had been transformed by having known him might have surprised only those who had never met him. In this article, not only will his contributions to mathematics be classified and summarized, but his unique and distinguished personality as a mathematician will also be emphasized.
We study the problem of rigidity of closures of totally geodesic plane immersions in geometrically finite manifolds containing rank $1$ cusps. We show that the key notion of K-thick recurrence of horocycles fails generically in this setting. This property was introduced in the recent work of McMullen, Mohammadi and Oh. Nonetheless, in the setting of geometrically finite groups whose limit sets are circle packings, we derive two density criteria for non-closed geodesic plane immersions, and show that closed immersions give rise to surfaces with finitely generated fundamental groups. We also obtain results on the existence and isolation of proper closed immersions of elementary surfaces.
Genomic selection (GS) is a technique that plant breeders use to select individuals to mate and produce new generations of species. Allocation of resources is a key factor in GS. At each selection cycle, breeders face the choice of how to allocate the budget to make crosses and produce the next generation of breeding parents. Inspired by recent advances in reinforcement learning for AI problems, we develop a reinforcement learning-based algorithm that automatically learns to allocate limited resources across different generations of breeding. We mathematically formulate the problem in the framework of a Markov Decision Process (MDP) by defining the state and action spaces. To avoid the explosion of the state space, an integer linear program is proposed that quantifies the trade-off between resources and time. Finally, we propose a value function approximation method to estimate the action-value function and then develop a greedy policy improvement technique to find the optimal resources. We demonstrate the effectiveness of the proposed method in enhancing genetic gain using a case study with realistic data.
Design generation requires tight integration of neural and symbolic reasoning, as good design must meet explicit user needs and honor implicit rules for aesthetics, utility, and convenience. Current automated design tools driven by neural networks produce appealing designs, but cannot satisfy user specifications and utility requirements. Symbolic reasoning tools, such as constraint programming, cannot perceive low-level visual information in images or capture subtle aspects such as aesthetics. We introduce the Spatial Reasoning Integrated Generator (SPRING) for design generation. SPRING embeds a neural and symbolic integrated spatial reasoning module inside the deep generative network. The spatial reasoning module decides the locations of objects to be generated in the form of bounding boxes, which are predicted by a recurrent neural network and filtered by symbolic constraint satisfaction. Embedding symbolic reasoning into neural generation guarantees that the output of SPRING satisfies user requirements. Furthermore, SPRING offers interpretability, allowing users to visualize and diagnose the generation process through the bounding boxes. SPRING is also adept at managing novel user specifications not encountered during its training, thanks to its proficiency in zero-shot constraint transfer. Quantitative evaluations and a human study reveal that SPRING outperforms baseline generative models, excelling in delivering high design quality and better meeting user specifications.
Recent CLEO-c results on open and closed charm physics at center-of-mass energies of 3773 MeV (the psi(3770) resonance), 4170 MeV and 3686 MeV (the psi(2S) peak) are reviewed. Measurements of absolute hadronic branching ratios of D0, D+ and Ds mesons as well as charmonium spectroscopy are discussed. An outlook and future prospects for the experiment at CESR are also presented.
We review recent work examining the influence of fission in rapid neutron capture ($r$-process) nucleosynthesis which can take place in astrophysical environments. We briefly discuss the impact of uncertain fission barriers and fission rates on the population of heavy actinide species. We demonstrate the influence of the fission fragment distributions for neutron-rich nuclei and discuss currently available treatments, including recent macroscopic-microscopic calculations. We conclude by comparing our nucleosynthesis results directly with stellar data for metal-poor stars rich in $r$-process elements to consider whether fission plays a role in the so-called `universality' of $r$-process abundances observed from star to star.
Based on multiyear INTEGRAL observations of SS433, a composite IBIS/ISGRI 18-60 keV orbital light curve is constructed around zero precessional phase $\psi_{pr}= 0$. It shows a peculiar shape characterized by a significant excess near the orbital phase $\phi_{orb}= 0.25$, which is not seen in the softer 2-10 keV energy band. Such a shape is likely to be due to a complex asymmetric structure of the funnel in a supercritical accretion disk in SS433. The orbital light curve at 40-60 keV demonstrates two almost equal bumps at phases $\sim 0.25$ and $\sim 0.75$, most likely due to nutation effects of the accretion disk. The change of the off-eclipse 18-60 keV X-ray flux with the precessional phase shows a double-wave form with a strong primary maximum at $\psi_{pr}= 0$ and a weak but significant secondary maximum at $\psi_{pr}= 0.6$. A weak variability of the 18-60 keV flux in the middle of the orbital eclipse correlated with the disk precessional phase is also observed. The joint analysis of the broadband (18-60 keV) orbital and precessional light curves obtained by INTEGRAL confirms the presence of a hot extended corona in the central parts of the supercritical accretion disk and constrains the binary mass ratio in SS433 to the range $0.5\gtrsim q\gtrsim 0.3$, confirming the black hole nature of the compact object. Orbital and precessional light curves in the hardest X-ray band 40-60 keV, which is free from emission from thermal X-ray jets, are also best fitted by the same geometrical model with a hot extended corona at $q\sim 0.3$, reinforcing the conclusions of the modeling of the broad-band X-ray orbital and precessional light curves.
It is now widely recognized that mechanical interactions between cells play a crucial role in epithelial morphogenesis, yet understanding the mechanisms through which stress and deformation affect cell behavior remains an open problem due to the complexity inherent in the mechanical behavior of cells and the difficulty of direct measurement of forces within tissues. Theoretical models can help by focusing experimental studies and by providing the framework for interpreting measurements. To that end, "vertex models" have introduced an approximation of epithelial cell mechanics based on a polygonal tiling representation of planar tissue. Here we formulate and analyze an Active Tension Network (ATN) model, which is based on the same polygonal representation of epithelial tissue geometry, but in addition i) assumes that mechanical balance is dominated by cortical tension and ii) introduces tension-dependent local remodeling of the cortex, representing the active nature of cytoskeletal mechanics. The tension-dominance assumption has immediate implications for the geometry of cells, which we demonstrate to hold in certain types of Drosophila epithelial tissues. We demonstrate that stationary configurations of an ATN form a manifold with one degree of freedom per cell, corresponding to "isogonal" - i.e. angle-preserving - deformations of cells, which dominate the dynamic response to perturbations. We show that isogonal modes account for approximately 90% of experimentally observed deformation of cells during the process of ventral furrow formation in Drosophila. Other interesting properties of our model include the exponential screening of mechanical stress and a negative Poisson ratio response to external uniaxial stress. We also provide a new approach to the problem of inferring local cortical tensions from the observed geometry of epithelial cells in a tissue.
The article deals with the family ${\mathcal U}(\lambda)$ of all functions $f$ normalized and analytic in the unit disk such that $\big |\big (z/f(z)\big )^{2}f'(z)-1\big |<\lambda $ for some $0<\lambda \leq 1$. The family ${\mathcal U}(\lambda)$ has been studied extensively in the recent past, and functions in this family are known to be univalent in $\mathbb{D}$. However, the problem of determining sharp bounds for the second coefficient of functions in this family was solved only recently in \cite{VY2013} by Vasudevarao and Yanagihara, but the proof was complicated. In this article, we first present a simpler proof. We obtain a number of new subordination results for this family and their consequences. In addition, we show that the family ${\mathcal U}(\lambda )$ is preserved under a number of elementary transformations such as rotation, conjugation, dilation and omitted-value transformations, but, surprisingly, this family is not preserved under the $n$-th root transformation for any $n\geq 2$. This basic result helps to generate a number of new theorems and, in particular, provides a way to construct functions from the family ${\mathcal U}(\lambda)$. Finally, we deal with a radius problem.
Isolated many-body quantum systems quenched far from equilibrium can eventually equilibrate, but it is not yet clear how long they take to do so. To answer this question, we use exact numerical methods and analyze the entire evolution, from perturbation to equilibration, of a paradigmatic disordered many-body quantum system in the chaotic regime. We investigate how the equilibration time depends on the system size and observables. We show that if dynamical manifestations of spectral correlations in the form of the correlation hole ("ramp") are taken into account, the time for equilibration scales exponentially with system size, while if they are neglected, the scaling is better described by a power law with system size, though with an exponent larger than what is expected for diffusive transport.
Artificial Intelligence (AI) in healthcare holds great potential to expand access to high-quality medical care, whilst reducing overall systemic costs. Despite regular headlines and many published proofs-of-concept, certified products are failing to break through to the clinic. AI in healthcare is a multi-party process with deep knowledge required in multiple individual domains. The lack of understanding of the specific challenges in the domain is, therefore, the major contributor to the failure to deliver on the big promises. Thus, we present a decision perspective framework, for the development of AI-driven biomedical products, from conception to market launch. Our framework highlights the risks, objectives and key results which are typically required to proceed through a three-phase process to the market launch of a validated medical AI product. We focus on issues related to Clinical validation, Regulatory affairs, Data strategy and Algorithmic development. The development process we propose for AI in healthcare software strongly diverges from modern consumer software development processes. We highlight the key time points to guide founders, investors and key stakeholders throughout their relevant part of the process. Our framework should be seen as a template for innovation frameworks, which can be used to coordinate team communications and responsibilities towards a reasonable product development roadmap, thus unlocking the potential of AI in medicine.
Citation content analysis seeks to understand citations based on the language used during the making of a citation. A key issue in citation content analysis is looking for linguistic structures that characterize distinct classes of citations for the purposes of understanding the intent and function of a citation. Previous works have focused on modeling linguistic features first and have drawn conclusions about the language structures unique to each class of citation function based on the performance of a classification task or on inter-annotator agreement. In this study, we start with a large sample of a pre-classified citation corpus, 2 million citations from each class of the scite Smart Citation dataset (supporting, disputing, and mentioning citations), and analyze its corpus linguistics in order to reveal the unique and statistically significant language structures belonging to each type of citation. By generating comparison tables for each citation type, we present a number of interesting linguistic features that uniquely characterize each citation type. We find that within citation collocates there is very low correlation between citation type and sentiment, and that the subjectivity of citation collocates across classes is very low. These findings suggest that the sentiment of collocates is not a predictor of citation function and that, due to their low subjectivity, an opinion-expressing mode of understanding citations, implicit in previous citation sentiment analysis literature, is inappropriate. Instead, we suggest that citations can be better understood as claims-making devices, where the citation type can be explained by understanding how two claims are being compared. By presenting this approach, we hope to inspire similar corpus linguistic studies on citations that derive a more robust theory of citation from an empirical basis using citation corpora.
This thesis is dedicated to random walks on spaces with non-positive curvature. In particular, we study the case of group actions on CAT(0) spaces that admit contracting elements, that is, whose properties mimic those of loxodromic isometries in Gromov-hyperbolic spaces. In this context, we prove several limit laws, among which the almost sure convergence to the boundary without moment assumption, positivity of the drift and a central limit theorem. In a second part, we study boundary maps and stationary measures on affine buildings of type $\tilde{A}_2$, and we show that there always exists a hyperbolic isometry for a non-elementary action by isometries on such a space. Our approach involves the use of hyperbolic models for CAT(0) spaces, which were constructed by H.~Petyt, D.~Spriano and A.~Zalloum, and measured boundary theory, whose principles come from H.~Furstenberg.
The expanding ejecta from a classical nova remains hot enough ($\sim10^{4}\, {\rm K}$) to be detected in thermal radio emission for up to years after the cessation of mass loss triggered by a thermonuclear instability on the underlying white dwarf (WD). Nebular spectroscopy of nova remnants confirms the hot temperatures observed in radio observations. During this same period, the unstable thermonuclear burning transitions to a prolonged period of stable burning of the remnant hydrogen-rich envelope, causing the WD to become, temporarily, a super-soft X-ray source. We show that photoionization heating of the expanding ejecta by the hot WD maintains the observed nearly constant temperature of $(1-4)\times10^4\mathrm{~K}$ for up to a year before an eventual decline in temperature due to either the cessation of the supersoft phase or the onset of a predominantly adiabatic expansion. We simulate the expanding ejecta using a one-zone model as well as the Cloudy spectral synthesis code, both incorporating the time-dependent WD effective temperatures for a range of masses from $0.60\ M_{\odot}$ to $1.10\ M_{\odot}$. We show that the duration of the nearly isothermal phase depends most strongly on the velocity and mass of the ejecta and that the ejecta temperature depends on the WD's effective temperature, and hence its mass.
We report findings from several ab-initio, self-consistent calculations of electronic and transport properties of wurtzite aluminum nitride. Our calculations utilized a local density approximation (LDA) potential and the linear combination of Gaussian orbitals (LCGO). Unlike some other density functional theory (DFT) calculations, we employed the Bagayoko, Zhao, and Williams method, enhanced by Ekuma and Franklin (BZW-EF). The BZW-EF method verifiably leads to the minima of the occupied energies; these minima, the low-lying unoccupied energies, and the related wave functions provide the most variationally and physically valid DFT description of the ground states of the materials under study. With multiple oxidation states of Al (Al$^{3+}$ to Al) and the availability of N$^{3-}$ to N, the BZW-EF method required several sets of self-consistent calculations with different ionic species as input. The binding energy for (Al$^{3+}$ & N$^{3-}$) as input was 1.5 eV larger in magnitude than those for other input choices; the results discussed here are those from the calculation that led to the absolute minima of the occupied energies with this input. Our calculated, direct band gap for w-AlN, at the $\Gamma$ point, is 6.28 eV, in excellent agreement with the 6.28 eV experimental value at 5 K. We discuss the bands, total and partial densities of states, and calculated effective masses.
Massive halos of hot plasma exist around some, but not all, elliptical galaxies. There is evidence that this is related to the age of the galaxy. In this paper new X-ray observations are presented for three early-type galaxies that show evidence of youth, in order to investigate their X-ray components and properties. NGC 5363 and NGC 2865 were found to have X-ray emission dominated purely by discrete stellar sources. Limits are set on the mass distribution in one of the galaxies observed with XMM-Newton, NGC 4382, which contains significant hot gas. We detect the X-ray emission in NGC 4382 out to 4r$_e$. The mass-to-light ratio is consistent with a stellar origin in the inner regions but rises steadily to values indicative of some dark matter by 4r$_e$. These results are set in context with other data drawn from the literature, for galaxies with ages estimated from dynamical or spectroscopic indicators. Ages obtained from optical spectroscopy represent central luminosity-weighted stellar ages. We examine the X-ray evolution with age, normalised by B- and K-band luminosities. Low values of log(L$_X$/L$_B$) and log(L$_X$/L$_K$) are found for all galaxies with ages between 1 and 4 Gyr. Luminous X-ray emission only appears in older galaxies. This suggests that, following a merger, the interstellar medium is removed and it then takes several gigayears for hot gas halos to build up. A possible mechanism for gas expulsion might be associated with feedback from an active nucleus triggered during the merger.
While the recent demonstration of accurate computations of classically intractable simulations on noisy quantum processors brings quantum advantage closer, there is still the challenge of demonstrating it for practical problems. Here we investigate the application of noisy intermediate-scale quantum devices for simulating nuclear magnetic resonance (NMR) experiments in the high-field regime. In this work, the NMR interactions are mapped to a quantum device via a product formula with minimal resource overhead, an approach that we discuss in detail. Using this approach, we show the results of simulations of liquid-state proton NMR spectra on relevant molecules with up to 11 spins, and up to a total of 47 atoms, and compare them with real NMR experiments. Despite current limitations, we show that a similar approach will eventually lead to a case of quantum utility, a scenario where a practically relevant computational problem can be solved by a quantum computer but not by conventional means. We provide an experimental estimation of the amount of quantum resources needed for solving larger instances of the problem with the presented approach. The polynomial scaling we demonstrate on real processors is a foundational step in bringing practical quantum computation closer to reality.
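To make the product-formula mapping concrete, the following minimal Python sketch (illustrative only, not the resource-optimized mapping used in this work; the shift frequencies, scalar coupling, evolution time and step count are made-up values) Trotterizes a toy two-spin high-field NMR Hamiltonian and compares the result with the exact propagator:

# Illustrative sketch: first-order product formula (Trotterization) for a toy
# two-spin NMR Hamiltonian, checked against the exact propagator.
# Frequencies and the scalar coupling are made-up values.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

w1, w2 = 2 * np.pi * 100.0, 2 * np.pi * 250.0   # chemical-shift offsets (rad/s)
J = 2 * np.pi * 15.0                            # scalar coupling (rad/s)
H_shift = w1 * np.kron(sz, I2) + w2 * np.kron(I2, sz)
H_coup = J * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
H = H_shift + H_coup

t, n_steps = 1e-3, 100                          # evolution time and Trotter steps
dt = t / n_steps
U_exact = expm(-1j * H * t)
U_step = expm(-1j * H_shift * dt) @ expm(-1j * H_coup * dt)   # one Trotter step
U_trotter = np.linalg.matrix_power(U_step, n_steps)
print("Trotter error (spectral norm):", np.linalg.norm(U_exact - U_trotter, 2))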
Liquid Argon Time Projection Chambers (LArTPCs) are a class of detectors that produce high resolution images of charged particles within their sensitive volume. In these images, the clustering of distinct particles into superstructures is of central importance to the current and future neutrino physics program. Electromagnetic (EM) activity typically exhibits spatially detached fragments of varying morphology and orientation that are challenging to efficiently assemble using traditional algorithms. Similarly, particles that are spatially removed from each other in the detector may originate from a common interaction. Graph Neural Networks (GNNs) were developed in recent years to find correlations between objects embedded in an arbitrary space. The Graph Particle Aggregator (GrapPA) first leverages GNNs to predict the adjacency matrix of EM shower fragments and to identify the origin of showers, i.e. primary fragments. On the PILArNet public LArTPC simulation dataset, the algorithm achieves a shower clustering accuracy characterized by a mean adjusted Rand index (ARI) of 97.8% and a primary identification accuracy of 99.8%. It yields a relative shower energy resolution of $(4.1+1.4/\sqrt{E (\text{GeV})})\,\%$ and a shower direction resolution of $(2.1/\sqrt{E(\text{GeV})})^{\circ}$. The optimized algorithm is then applied to the related task of clustering particle instances into interactions and yields a mean ARI of 99.2% for an interaction density of $\mathcal{O}(1)\,\mathrm{m}^{-3}$.
The rapid expansion of ride-sharing services has caused significant disruptions in the transportation industry and fundamentally altered the way individuals move from one place to another. Accurate estimation of ride-sharing demand improves service utilization and reliability and reduces travel time and traffic congestion. In this study, we employ two Bayesian models to estimate ride-sharing demand in the 77 Chicago community areas. We consider demographic, socio-economic and transportation factors as well as land-use characteristics as explanatory variables. Our models assume a conditional autoregressive (CAR) prior for the explanatory variables. Moreover, the Bayesian frameworks estimate both the unstructured random error and the structured errors for the spatial and the spatiotemporal correlation. We assessed the performance of the estimated models, and the residuals of the spatial regression model have no left-over spatial structure. For the spatiotemporal model, the squared correlation between actual ride-shares and the fitted values is 0.95. Our analysis revealed that the demographic factors (population size and registered crimes) positively impact the ride-sharing demand. Additionally, the ride-sharing demand increases with higher income and with an increase in the economically active proportion of the population as well as the residents with no cars. Moreover, the transit availability and walkability indices are crucial determinants of ride-sharing in Chicago.
We consider Hermite-Pad\'e approximants in the framework of discrete integrable systems defined on the lattice $\mathbb{Z}^2$. We show that the concept of multiple orthogonality is intimately related to the Lax representations for the entries of the nearest neighbor recurrence relations and it thus gives rise to a discrete integrable system. We show that the converse statement is also true. More precisely, given the discrete integrable system in question there exists a perfect system of two functions, i.e., a system for which the entire table of Hermite-Pad\'e approximants exists. In addition, we give a few algorithms to find solutions of the discrete system.
Large Language Models (LLMs) possess extensive foundational knowledge and moderate reasoning abilities, making them suitable for general task planning in open-world scenarios. However, it is challenging to ground an LLM-generated plan so that it is executable for a specified robot with certain restrictions. This paper introduces CLMASP, an approach that couples LLMs with Answer Set Programming (ASP) to overcome these limitations, where ASP is a non-monotonic logic programming formalism renowned for its capacity to represent and reason about a robot's action knowledge. CLMASP starts with an LLM generating a basic skeleton plan, which is subsequently tailored to the specific scenario using a vector database. This plan is then refined by an ASP program with the robot's action knowledge, which integrates implementation details into the skeleton, grounding the LLM's abstract outputs in practical robot contexts. Our experiments conducted on the VirtualHome platform demonstrate CLMASP's efficacy. Compared to the baseline executable rate of under 2% with LLM approaches, CLMASP significantly improves this to over 90%.
Deep neural networks are the default choice of learning models for computer vision tasks. Extensive work has been carried out in recent years on explaining deep models for vision tasks such as classification. However, recent work has shown that it is possible for these models to produce substantially different attribution maps even when two very similar images are given to the network, raising serious questions about trustworthiness. To address this issue, we propose a robust attribution training strategy to improve attributional robustness of deep neural networks. Our method carefully analyzes the requirements for attributional robustness and introduces two new regularizers that preserve a model's attribution map during attacks. Our method surpasses state-of-the-art attributional robustness methods by a margin of approximately 3% to 9% in terms of attribution robustness measures on several datasets including MNIST, FMNIST, Flower and GTSRB.
We introduce an axiomatic approach to group recommendations, in line with previous work on the axiomatic treatment of trust-based recommendation systems, ranking systems, and other foundational work on the axiomatic approach to internet mechanisms in social choice settings. In group recommendations we wish to recommend to a group of agents, consisting of both opinionated and undecided members, a joint choice that would be acceptable to them. Such a system has many applications, such as choosing a movie or a restaurant to go to with a group of friends, recommending games for online game players, and other communal activities. Our method utilizes a given social graph to extract information on the undecided members, relying on the agents influencing them. We first show that a set of fairly natural desired requirements (a.k.a. axioms) leads to an impossibility, rendering their mutual satisfaction unreachable. However, we also show a modified set of axioms that fully axiomatizes a group variant of the random-walk recommendation system, expanding a previous result from the individual recommendation case.
We prove that the harmonic extension matrices for the level-k Sierpinski Gasket are invertible for every k>2. This has been previously conjectured to be true by Hino in [6] and [7] and tested numerically for k<50. We also give a necessary condition for the non-degeneracy of the harmonic structure for general finitely ramified self-similar sets based on the vertex connectivity of their first graph approximation.
We investigate theoretically electron transfer in a double dot in a situation where it is governed by the nuclear magnetic field; this regime has recently been achieved in experiment. We show how to partially compensate the nuclear magnetic field to restore spin blockade.
Elemental abundances can be determined from stellar spectra, making it possible to study galactic formation and evolution. Accurate atomic data is essential for the reliable interpretation and modeling of astrophysical spectra. In this work, we perform laboratory studies on neutral aluminium. This element is found, for example, in young, massive stars and it is a key element for tracing ongoing nucleosynthesis throughout the Galaxy. The near-infrared (NIR) wavelength region is of particular importance, since extinction in this region is lower than for optical wavelengths. This makes the NIR wavelength region a better probe for highly obscured regions, such as those located close to the Galactic center. We investigate the spectrum of neutral aluminium with the aim to provide oscillator strengths (f-values) of improved accuracy for lines in the NIR and optical regions (670 - 4200 nm). Measurements of high-resolution spectra were performed using a Fourier transform spectrometer and a hollow cathode discharge lamp. The f-values were derived from experimental line intensities combined with published radiative lifetimes. We report oscillator strengths for 12 lines in the NIR and optical spectral regions, with an accuracy between 2 and 11%, as well as branching fractions for an additional 16 lines.
Probability density models based on deep networks have achieved remarkable success in modeling complex high-dimensional datasets. However, unlike kernel density estimators, modern neural models do not yield marginals or conditionals in closed form, as these quantities require the evaluation of seldom tractable integrals. In this work, we present the Marginalizable Density Model Approximator (MDMA), a novel deep network architecture which provides closed form expressions for the probabilities, marginals and conditionals of any subset of the variables. The MDMA learns deep scalar representations for each individual variable and combines them via learned hierarchical tensor decompositions into a tractable yet expressive CDF, from which marginals and conditional densities are easily obtained. We illustrate the advantage of exact marginalizability in several tasks that are out of reach of previous deep network-based density estimation models, such as estimating mutual information between arbitrary subsets of variables, inferring causality by testing for conditional independence, and inference with missing data without the need for data imputation, outperforming state-of-the-art models on these tasks. The model also allows for parallelized sampling with only a logarithmic dependence of the time complexity on the number of variables.
Many existing approaches for estimating parameters in settings with distributional shifts operate under an invariance assumption. For example, under covariate shift, it is assumed that p(y|x) remains invariant. We refer to such distribution shifts as sparse, since they may be substantial but affect only a part of the data generating system. In contrast, in various real-world settings, shifts might be dense. More specifically, these dense distributional shifts may arise through numerous small and random changes in the population and environment. First, we will discuss empirical evidence for such random dense distributional shifts and explain why commonly used models for distribution shifts, including adversarial approaches, may not be appropriate under these conditions. Then, we will develop tools to infer parameters and make predictions for partially observed, shifted distributions. Finally, we will apply the framework to several real-world data sets and discuss diagnostics to evaluate the fit of the distributional uncertainty model.
We present a MATLAB code that implements the Smoothed Particle Hydrodynamics (SPH) method. The paper reviews the continuous Navier-Stokes equations as well as their SPH approximation, adopting a coherent notation that allows easy reference to the code. The MATLAB implementation was heavily inspired by the earlier FORTRAN code of G. R. Liu and M. B. Liu, 2003. The code can be used for simple computational fluid dynamics simulations. Two classical benchmark problems are used to validate the algorithm: a one-dimensional shock tube and a two-dimensional shear cavity problem.
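To give a flavour of the SPH approximation underlying such a code, the Python sketch below (illustrative only, not the MATLAB implementation described above; the kernel choice, particle mass, spacing and smoothing length are made up) estimates the density at each particle as a kernel-weighted sum over its neighbours:

# Illustrative sketch: 1D SPH density summation with a cubic-spline kernel.
# This is not the paper's MATLAB code; all parameters are made up.
import numpy as np

def cubic_spline_kernel(r, h):
    """Standard 1D cubic-spline smoothing kernel W(r, h)."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                      # 1D normalization constant
    w = np.zeros_like(q)
    near, far = q <= 1.0, (q > 1.0) & (q <= 2.0)
    w[near] = sigma * (1.0 - 1.5 * q[near]**2 + 0.75 * q[near]**3)
    w[far] = sigma * 0.25 * (2.0 - q[far])**3
    return w

x = np.linspace(0.0, 1.0, 101)                   # particle positions
m = 1.0 / len(x)                                 # particle mass (unit total mass)
h = 2.0 * (x[1] - x[0])                          # smoothing length

# SPH density estimate: rho_i = sum_j m_j W(x_i - x_j, h)
rho = np.array([np.sum(m * cubic_spline_kernel(xi - x, h)) for xi in x])
print(rho[45:55].round(4))                       # close to 1 away from the boundaries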
In this paper, we determine the blocks of $\mathcal{O}^\mathfrak{p}$ associated with semisimple Lie algebras of type $E$.
Energy markets and the associated energy futures markets play a crucial role in global economies. We investigate the statistical properties of the recurrence intervals of daily volatility time series of four NYMEX energy futures, which are defined as the waiting times $\tau$ between consecutive volatilities exceeding a given threshold $q$. We find that the recurrence intervals are distributed as a stretched exponential $P_q(\tau)\sim e^{-(a\tau)^{\gamma}}$, where the exponent $\gamma$ decreases with increasing $q$, and that there is no scaling behavior in the distributions for different thresholds $q$ after the recurrence intervals are scaled by the mean recurrence interval $\bar\tau$. These findings are significant under the Kolmogorov-Smirnov test and the Cram{\'e}r-von Mises test. We show that the empirical estimations are in good agreement with the numerical integration results for the occurrence probability $W_q(\Delta{t}|t)$ of a next event above the threshold $q$ within a (short) time interval after an elapsed time $t$ from the last event above $q$. We also investigate the memory effects of the recurrence intervals. It is found that the conditional distributions of large and small recurrence intervals differ from each other and that the conditional mean of the recurrence intervals scales as a power law of the preceding interval, $\bar\tau(\tau_0)/\bar\tau \sim (\tau_0/\bar\tau)^\beta$, indicating that the recurrence intervals have short-term correlations. Detrended fluctuation analysis and detrending moving average analysis further uncover that the recurrence intervals possess long-term correlations. We confirm that the "clustering" of the volatility recurrence intervals is caused by the long-term correlations well known to be present in the volatility.
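The recurrence-interval construction itself is straightforward; the short Python sketch below (a synthetic placeholder series and arbitrary thresholds stand in for the NYMEX volatility data) extracts the waiting times between threshold exceedances and their mean:

# Illustrative sketch: recurrence intervals of a series above a threshold q.
# The synthetic series is a placeholder for the daily volatility data.
import numpy as np

rng = np.random.default_rng(0)
volatility = np.abs(rng.standard_normal(10_000))

def recurrence_intervals(series, q):
    """Waiting times (in samples) between consecutive exceedances of q."""
    exceedance_times = np.flatnonzero(series > q)
    return np.diff(exceedance_times)

for q in (1.0, 1.5, 2.0):
    tau = recurrence_intervals(volatility, q)
    print(f"q = {q}: {tau.size} intervals, mean recurrence interval = {tau.mean():.1f}")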
The magnetic Otto thermal machine based on a two-spin-1/2 XYZ working fluid in the presence of an inhomogeneous magnetic field and antisymmetric Dzyaloshinsky--Moriya (DM) and symmetric Kaplan--Shekhtman--Entin-Wohlman--Aharony (KSEA) interactions is considered. Its possible modes of operation are found and classified. The efficiencies of engines at maximum power are estimated for various choices of model parameters. There are cases when these efficiencies exceed the Novikov value. New additional points of local minima of the total work are revealed and the mechanism of their occurrence is analyzed.
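Here the Novikov value presumably refers to the efficiency at maximum power of the Novikov (Curzon-Ahlborn) endoreversible engine,
\[
\eta_{\mathrm{N}} = 1 - \sqrt{T_c/T_h},
\]
where $T_c$ and $T_h$ are the cold- and hot-bath temperatures.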
Detailed studies of the magnetoresistance of alpha-(ET)2KHg(SCN)4 and alpha-(ET)2TlHg(SCN)4 as a function of temperature, magnetic field strength, and field orientation are reported. Below 15 K, the temperature dependence of the magnetoresistance is metallic (dR/dT > 0) for magnetic field orientations corresponding to an angular dependent magnetoresistance oscillation (AMRO) minimum and nonmetallic (dR/dT < 0) at all other field orientations. We find that this behavior can be explained in terms of semiclassical models without the use of a non-Fermi liquid description. The alternating temperature dependence (metallic/nonmetallic) with respect to field orientation is common to any system with either quasi-one- or quasi-two-dimensional AMRO. Furthermore, we report a new metallic property of the high field and low temperature regime of alpha-(ET)2MHg(SCN)4 (where M = K, Rb, or Tl) compounds.
For the generic continuous map and for the generic homeomorphism of the Cantor space, we study the dynamics of the induced map on the space of probability measures, with emphasis on the notions of Li-Yorke chaos, topological entropy, equicontinuity, chain continuity, chain mixing, shadowing and recurrence. We also establish some results concerning induced maps that hold on arbitrary compact metric spaces.
This paper studies the geometry of immersions into statistical manifolds. A necessary and sufficient condition is obtained for statistical manifold structures to be dual to each other for a non-degenerate equiaffine immersion. Then we obtain conditions for realizing an n-dimensional statistical manifold in an (n+1)-dimensional statistical manifold, and its converse. Centro-affine immersions of codimension two into a dually flat statistical manifold are defined. We also show that a statistical manifold realized in a dually flat statistical manifold of codimension two is conformally-projectively flat.
We study the influence of spatially varying reaction rates on a spatial stochastic two-species Lotka-Volterra lattice model for predator-prey interactions using two-dimensional Monte Carlo simulations. The effects of this quenched randomness on population densities, transient oscillations, spatial correlations, and invasion fronts are investigated. We find that spatial variability in the predation rate results in more localized activity patches, which in turn causes a remarkable increase in the asymptotic population densities of both predators and prey, and accelerated front propagation.
The no-cloning theorem, which states that an unknown quantum state cannot be cloned perfectly, is fundamental to quantum mechanics and to quantum information science. However, we can try to clone a quantum state approximately with the optimal fidelity, or instead, we can try to clone it perfectly with the largest probability. Thus various quantum cloning machines have been designed for different quantum information protocols. Specifically, quantum cloning machines can be designed to analyze the security of quantum key distribution protocols such as the BB84 protocol, the six-state protocol, the B92 protocol and their generalizations. Some well-known quantum cloning machines include the universal quantum cloning machine, the phase-covariant cloning machine, the asymmetric quantum cloning machine and the probabilistic quantum cloning machine. In the past years, much progress has been made in studying quantum cloning machines and their applications and implementations, both theoretically and experimentally. In this review, we will give a complete description of those important developments about quantum cloning and some related topics. This review is self-contained, and in particular, we try to present detailed formulations so that further study can be taken up based on those results.
This paper proposes distributed discrete-time algorithms to cooperatively solve an additive cost optimization problem in multi-agent networks. The striking feature lies in the use of only the sign of relative state information between neighbors, which substantially differentiates our algorithms from others in the existing literature. We first interpret the proposed algorithms in terms of the penalty method in optimization theory and then perform non-asymptotic analysis to study convergence for static network graphs. Compared with the celebrated distributed subgradient algorithms, which however use the exact relative state information, the convergence speed is essentially not affected by the loss of information. We also study how introducing noise into the relative state information and randomly activated graphs affect the performance of our algorithms. Finally, we validate the theoretical results on a class of distributed quantile regression problems.
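A minimal Python sketch of the kind of update suggested by this penalty-method interpretation (illustrative only, not necessarily the paper's exact algorithm; the local quadratic costs, ring graph, penalty weight and step sizes are made up) has each agent descend its local gradient plus a term that uses only the signs of relative states with its neighbors:

# Illustrative sketch: distributed minimization of sum_i f_i(x) with
# f_i(x) = 0.5*(x - b_i)^2, using local gradients plus a penalty term that
# depends only on the sign of relative states between neighbors.
# Graph, costs and step sizes are made up; not the paper's exact algorithm.
import numpy as np

n = 5
b = np.array([1.0, 3.0, -2.0, 4.0, 0.5])                      # local cost minimizers
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)} # ring graph

x = np.zeros(n)                                               # agents' local estimates
lam = 5.0                                                     # penalty weight on disagreement
for k in range(1, 5001):
    alpha = 1.0 / k                                           # diminishing step size
    grad = x - b                                              # local gradients of f_i
    penalty = np.array([sum(np.sign(x[i] - x[j]) for j in neighbors[i])
                        for i in range(n)])
    x = x - alpha * (grad + lam * penalty)

# For a large enough penalty weight, the estimates should approach consensus
# near the global minimizer mean(b).
print("estimates:", x.round(3), "  mean(b):", b.mean())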
A new lattice action is proposed for the overlap Dirac matrix with nonzero chemical potential. It is shown to preserve the full chiral invariance exactly for all values of the lattice spacing. It is further demonstrated to arise in the domain wall formalism by coupling the chemical potential only to the physically relevant wall modes.
The failure of 2D numerical cohesive granular steps collapsing under gravity is simulated for a large range of cohesion. Focussing on the cumulative displacement of the grains, and defining a displacement threshold, we establish a sensible criterion for capturing the failure characteristics. We are able to locate the failure in time and to identify the different stages of the destabilisation. We find that the onset of the failure is delayed by increasing cohesion, but its duration becomes shorter. Defining a narrow displacement interval, a well-defined shear band revealing the failure emerges. Solving the equilibrium of the failing block, we are able to make successful predictions for the dependence of the failure angle on cohesion, thereby disclosing two distinct frictional behaviours: while the friction remains constant at small cohesion, it significantly decreases with cohesive properties at larger cohesion. The results hence point to two regimes for the behaviour of cohesive granular matter depending on cohesion strength, and reveal a cohesion-induced weakening mechanism.
We consider a mesoscopic mechanism of the exchange interaction in a system of alternating ferromagnetic/nonmagnetic metallic layers. In the case of small mesoscopic samples the sign and the amplitude of the exchange energy turn out to be random, sample-specific quantities. They can be changed by applying an external magnetic field, by attaching to the system superconducting electrodes with different phases of the superconducting order parameter, or by changing the chemical potential of electrons in the metal with the help of a gate. In the case of square or cubic geometries of the nonmagnetic layer at low temperature, the variance of the exchange energy turns out to be independent of the sample size.
Let $\bar{M}$ be a manifold with boundary $Y$ which is the total space of a fibre bundle, and is defined by the vanishing of a boundary defining function, $x$. We prove $L^2$ Hodge and signature theorems for $M$ endowed with a metric of the form $dx^2 + x^{2c} h + k$, where $k$ is the lift to $Y$ of the metric on the base of the fibre bundle, $h$ is a two form on $Y$ which restricts to a metric on each fibre, and $0 \leq c \leq 1$. These metrics interpolate between the case when $c=0$, in which case the metric near the boundary is a cylinder, and the case where $c=1$, in which case the metric near the boundary is that of a cone bundle over the base of the boundary fibration. We show that the $L^2$ Hodge theorems for the cohomologies given by the maximal and minimal extensions of $d$ with respect to these metrics and the $L^2$ signature theorem for the image of the minimal cohomology in the maximal cohomology interpolate between known results for $L^2$ Hodge and signature theorems for cylindrical and cone bundle type metrics. In particular, the Hodge theorems all relate the related spaces of $L^2$ harmonic forms to intersection cohomology of varying perversities for $X$, the space formed from $\bar{M}$ by collapsing the fibres of $Y$ at the boundary. The signature theorem involves variations on the $\tau$ invariant described by Dai.
The large gas and dust reservoirs of submm galaxies (SMGs) could potentially provide ample fuel to trigger an Active Galactic Nucleus (AGN), but previous studies of the AGN fraction in SMGs have been controversial, largely due to the inhomogeneity and limited angular resolution of the available submillimeter surveys. Here we set improved constraints on the AGN fraction and X-ray properties of SMGs with ALMA and Chandra observations in the Extended Chandra Deep Field-South (E-CDF-S). This study is the first among similar works to have unambiguously identified the X-ray counterparts of SMGs; this is accomplished using the fully submm-identified, statistically reliable SMG catalog with 99 SMGs from the ALMA LABOCA E-CDF-S Submillimeter Survey (ALESS). We found 10 X-ray sources associated with SMGs (median redshift z = 2.3), of which 8 were identified as AGNs using several techniques that enable cross-checking. The other 2 X-ray detected SMGs have levels of X-ray emission that can be plausibly explained by their star-formation activity. 6 of the 8 SMG-AGNs are moderately/highly absorbed, with N_H > 10^23 cm^-2. An analysis of the AGN fraction, taking into account the spatial variation of X-ray sensitivity, yields an AGN fraction of 17(+16/-6)% for AGNs with rest-frame 0.5-8 keV absorption-corrected luminosity > 7.8x10^42 erg s^-1; we provide estimated AGN fractions as a function of X-ray flux and luminosity. ALMA's high angular resolution also enables direct X-ray stacking at the precise positions of SMGs for the first time, and we found 4 potential SMG-AGNs in our stacking sample.
We examine Chern-Simons theory written on a noncommutative plane with a `hole', and show that the algebra of observables is a nonlinear deformation of the $w_\infty$ algebra. The deformation depends on the level (the coefficient in the Chern-Simons action), and the noncommutativity parameter, which were identified, respectively, with the inverse filling fraction and the inverse density in a recent description of the fractional quantum Hall effect. We remark on the quantization of our algebra. The results are sensitive to the choice of ordering in the Gauss law.
We study the $SU(\infty)$ lattice Yang-Mills theory at dimensions $D=2,3,4$ via the numerical bootstrap method. It combines the Makeenko-Migdal loop equations, with a cut-off $L_{\mathrm{max}}$ on the maximal length of loops, and positivity conditions on certain matrices of Wilson loops. Our algorithm is inspired by the pioneering paper of P. Anderson and M. Kruczenski, but it is significantly more efficient, as it takes into account the symmetries of the lattice theory and uses the relaxation procedure in line with our previous work on the matrix bootstrap. We thus obtain rigorous upper and lower bounds on the plaquette average at various couplings and dimensions. For $D=4$, the lower bound data appear to be close to the MC data in the strong coupling phase, and the upper bound data in the weak coupling phase reproduce well the 3-loop perturbation theory. Our results suggest that this bootstrap approach can provide a tangible alternative to the, so far uncontested, Monte Carlo approach.
This paper provides an optimized cable path planning solution for a tree-topology network in an irregular 2D manifold in a 3D Euclidean space, with an application to the planning of submarine cable networks. Our solution method is based on total cost minimization, where the individual cable costs are assumed to be linear to the length of the corresponding submarine cables subject to latency constraints between pairs of nodes. These latency constraints limit the cable length and number of hops between any pair of nodes. Our method combines the Fast Marching Method (FMM) and a new Integer Linear Programming (ILP) formulation for Minimum Spanning Tree (MST) where there are constraints between pairs of nodes. We note that this problem of MST with constraints is NP-complete. Nevertheless, we demonstrate that ILP running time is adequate for the great majority of existing cable systems. For cable systems for which ILP is not able to find the optimal solution within an acceptable time, we propose an alternative heuristic algorithm based on Prim's algorithm. In addition, we apply our FMM/ILP-based algorithm to a real-world cable path planning example and demonstrate that it can effectively find an MST with latency constraints between pairs of nodes.
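For orientation, the unconstrained building block of such a solution is the classical MST computation; the short Python sketch below (illustrative only, with made-up node positions and costs, and without the latency constraints that make the full problem NP-complete) runs Prim's algorithm on a matrix of pairwise cable costs:

# Illustrative sketch: plain Prim's algorithm on a symmetric cost matrix.
# The heuristic described above extends this with latency (length/hop) checks;
# node positions and costs here are made up.
import heapq
import numpy as np

def prim_mst(cost):
    """Return MST edges (i, j) for a symmetric cost matrix via Prim's algorithm."""
    n = cost.shape[0]
    in_tree = [False] * n
    in_tree[0] = True
    heap = [(cost[0, j], 0, j) for j in range(1, n)]
    heapq.heapify(heap)
    edges = []
    while heap and len(edges) < n - 1:
        c, i, j = heapq.heappop(heap)
        if in_tree[j]:
            continue
        in_tree[j] = True
        edges.append((i, j))
        for k in range(n):
            if not in_tree[k]:
                heapq.heappush(heap, (cost[j, k], j, k))
    return edges

rng = np.random.default_rng(1)
pts = rng.random((6, 2))                                    # node positions
cost = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(prim_mst(cost))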
The compression of deep learning models is of fundamental importance in deploying such models to edge devices. The selection of compression parameters can be automated to meet changes in the hardware platform and application using optimization algorithms. This article introduces a Multi-Objective Hardware-Aware Quantization (MOHAQ) method, which considers hardware efficiency and inference error as objectives for mixed-precision quantization. The proposed method feasibly evaluates candidate solutions in a large search space by relying on two steps. First, post-training quantization is applied for fast solution evaluation (inference-only search). Second, we propose the "beacon-based search" to retrain selected solutions only and use them as beacons to know the effect of retraining on other solutions. We use a speech recognition model based on Simple Recurrent Unit (SRU) using the TIMIT dataset and apply our method to run on SiLago and Bitfusion platforms. We provide experimental evaluations showing that SRU can be compressed up to 8x by post-training quantization without any significant error increase. On SiLago, we found solutions that achieve 97\% and 86\% of the maximum possible speedup and energy saving, with a minor increase in error. On Bitfusion, beacon-based search reduced the error gain of inference-only search by up to 4.9 percentage points.
We consider the action of exact plane gravitational waves, or pp-waves, on free particles. The analysis is carried out by investigating the variations of the geodesic trajectories of the particles, before and after the passage of the wave. The initial velocities of the particles are non-vanishing. We evaluate numerically the kinetic energy per unit mass of the free particles, and obtain an interesting, quasi-periodic behaviour of the variations of the kinetic energy with respect to the width $\lambda$ of the gaussian that represents the wave. The variation of the energy of the free particle is expected to be exactly minus the variation of the energy of the gravitational field, and therefore provides an estimation of the local variation of the gravitational energy. The investigation is carried out in the context of short bursts of gravitational waves, and of waves described by normalised gaussians, which yield impulsive waves in a certain limit.
We study the asymptotic compatibility of the Fourier spectral method in multidimensional space for the Nonlocal Ohta-Kawasaka (NOK) model, which was proposed in our previous work. By introducing the Fourier collocation discretization for the spatial variable, we show that the asymptotic compatibility holds in 2D and 3D over a periodic domain. For the temporal discretization, we adopt the second-order backward differentiation formula (BDF) method. We prove that for certain nonlocal kernels, the proposed time discretization schemes inherit the energy dissipation law. In the numerical experiments, we verify the asymptotic compatibility, the second-order temporal convergence rate, and the energy stability of the proposed schemes. More importantly, we discover a novel square lattice pattern when certain nonlocal kernels are applied in the model. In addition, our numerical experiments confirm the existence of an upper bound for the optimal number of bubbles in 2D for some specific nonlocal kernels. Finally, we numerically explore the promotion/demotion effect induced by the nonlocal horizon, which is consistent with the theoretical studies presented in our earlier work.
Let $d \geq 4$ be a natural number and let $A$ be a finite, non-empty subset of $\mathbb{R}^d$ such that $A$ is not contained in a translate of a hyperplane. In this setting, we show that \[ |A-A| \geq \bigg(2d - 2 + \frac{1}{d-1} \bigg) |A| - O_{d}(|A|^{1- \delta}), \] for some absolute constant $\delta>0$ that only depends on $d$. This provides a sharp main term, consequently answering questions of Ruzsa and Stanchescu up to an $O_{d}(|A|^{1- \delta})$ error term. We also prove new lower bounds for restricted type difference sets and asymmetric sumsets in $\mathbb{R}^d$.
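For instance, at $d = 4$ the main term of this bound reads
\[
|A-A| \;\geq\; \Big(2\cdot 4 - 2 + \frac{1}{3}\Big)|A| - O_{4}(|A|^{1-\delta}) \;=\; \frac{19}{3}\,|A| - O_{4}(|A|^{1-\delta}).
\]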