The cosmological case for a next-generation radio observatory, the Square Kilometer Array (SKA), is discussed and reviewed. An instrument like the SKA would be able to measure galaxy redshifts of normal late-type galaxies, via the 21 cm line of HI, out to redshifts of $\sim 3$. Not only would such very deep redshift surveys enable us to map the large-scale galaxy distribution and probe the large-scale structure of the universe out to previously unexplored scales, they would also make it possible, for the first time, to obtain direct observational data on the evolution of this structure. Other promising applications concern the mapping of the local velocity field of the universe, the study of the formation and evolution of galaxies, and the determination of the global cosmological parameters $H_0$, $q_0$ and $\Lambda$ through the application of classical cosmological tests such as source counts. Particular emphasis is placed on the redshift survey capability of the SKA. A review is given of the current knowledge of the galaxy distribution, starting from an inventory of nearby cosmic structures and proceeding to a discussion of how it all fits together in a coherent ``foamlike pattern''. After providing a short overview of the basics of theories of structure formation, a description is provided of different observational strategies to probe the structure of the universe out to larger depths, ranging from pencil-beam surveys and cluster surveys to the new and ambitious complete, deep galaxy redshift surveys such as the 2dF and the Sloan survey. It is argued that a survey with the SKA would be a natural and complementary follow-up. We conclude with a specification of the technical requirements that would make the SKA an instrument ideally suited for these purposes.
We present a non-perturbative computation of the running of the coupling alpha_s in QCD with two flavours of dynamical fermions in the Schroedinger functional scheme. We improve our previous results by a reliable continuum extrapolation. The Lambda-parameter characterizing the high-energy running is related to the value of the coupling at low energy in the continuum limit. An estimate of Lambda*r_0 is given using large-volume data with lattice spacings a from 0.07 fm to 0.1 fm. It translates into Lambda_{MSbar}^{(2)}=245(16)(16) MeV [assuming r_0=0.5 fm]. The last step still has to be improved to reduce the uncertainty.
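For reference, the $\Lambda$-parameter referred to here is the exact renormalization-group invariant of the chosen scheme; quoting the standard definition (a textbook relation, not a result of this work):
\begin{equation*} \Lambda = \mu \left(b_0\bar{g}^2(\mu)\right)^{-b_1/(2b_0^2)} e^{-1/(2b_0\bar{g}^2(\mu))} \exp\left\{-\int_0^{\bar{g}(\mu)} \mathrm{d}g \left[\frac{1}{\beta(g)} + \frac{1}{b_0 g^3} - \frac{b_1}{b_0^2 g}\right]\right\}, \end{equation*}
with $b_0 = (11 - 2N_f/3)/(4\pi)^2$ and $b_1 = (102 - 38N_f/3)/(4\pi)^4$; for $N_f = 2$ this is the $\Lambda^{(2)}$ quoted above.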
We study the $O(N)^3$ symmetric quantum field theory of a bosonic tensor $\phi^{abc}$ with sextic interactions. Its large $N$ limit is dominated by a positive-definite operator, whose index structure has the topology of a prism. We present a large $N$ solution of the model using Schwinger-Dyson equations to sum the leading diagrams, finding that for $2.81 < d < 3$ and for $d<1.68$ the spectrum of bilinear operators has no complex scaling dimensions. We also develop perturbation theory in $3-\epsilon$ dimensions including eight $O(N)^3$ invariant operators necessary for the renormalizability. For sufficiently large $N$, we find a "prismatic" fixed point of the renormalization group, where all eight coupling constants are real. The large $N$ limit of the resulting $\epsilon$ expansions of various operator dimensions agrees with the Schwinger-Dyson equations. Furthermore, the $\epsilon$ expansion allows us to calculate the $1/N$ corrections to operator dimensions. The prismatic fixed point in $3-\epsilon$ dimensions survives down to $N\approx 53.65$, where it merges with another fixed point and becomes complex. We also discuss the $d=1$ model where our approach gives a slightly negative scaling dimension for $\phi$, while the spectrum of bilinear operators is free of complex dimensions.
We give an asymptotic evaluation of the complexity of spherical p-spin spin-glass models via random matrix theory. This study enables us to obtain detailed information about the bottom of the energy landscape, including the absolute minimum (the ground state) and the other local minima, and to describe an interesting layered structure of the low critical values for the Hamiltonians of these models. We also show that our approach allows us to compute the related TAP-complexity and extend the results known in the physics literature. As an independent tool, we prove an LDP for the k-th largest eigenvalue of the GOE, extending the results of Ben Arous, Dembo and Guionnet (2001).
We propose new nonparametric estimators of the R\'enyi-$\alpha$ and Tsallis-$\alpha$ divergences for continuous distributions. We discuss this approach with a view to model selection (for random samples and autoregressive AR(1) processes). The estimators are built from kernel density estimates of the underlying densities. We prove that the estimators are consistent under certain conditions. We also describe how to apply these estimators and demonstrate their effectiveness through numerical experiments.
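A minimal sketch of the kind of plug-in estimator described above, assuming Gaussian kernel density estimates and using the identity $\int p^\alpha q^{1-\alpha} = \mathrm{E}_p[(p/q)^{\alpha-1}]$ (bandwidths and sample layout are illustrative, not the authors' exact construction):

```python
import numpy as np
from scipy.stats import gaussian_kde

def renyi_divergence(x, y, alpha=0.9):
    """Plug-in estimate of the Renyi-alpha divergence D_alpha(p||q)
    from samples x ~ p and y ~ q, via kernel density estimates."""
    p_hat, q_hat = gaussian_kde(x), gaussian_kde(y)
    ratio = p_hat(x) / q_hat(x)              # (p/q) evaluated at samples from p
    return np.log(np.mean(ratio ** (alpha - 1.0))) / (alpha - 1.0)

def tsallis_divergence(x, y, alpha=0.9):
    """Same plug-in idea for the Tsallis-alpha divergence."""
    p_hat, q_hat = gaussian_kde(x), gaussian_kde(y)
    ratio = p_hat(x) / q_hat(x)
    return (np.mean(ratio ** (alpha - 1.0)) - 1.0) / (alpha - 1.0)

# Toy usage on two different continuous distributions
rng = np.random.default_rng(0)
x, y = rng.normal(0.0, 1.0, 500), rng.normal(0.5, 1.2, 500)
print(renyi_divergence(x, y), tsallis_divergence(x, y))
```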
Large Language Models (LLMs) are gaining popularity among software engineers. A crucial aspect of developing effective code-generation LLMs is to evaluate these models using a robust benchmark. Evaluation benchmarks with quality issues can provide a false sense of performance. In this work, we conduct a first-of-its-kind study of the quality of prompts within benchmarks used to compare the performance of different code generation models. To conduct this study, we analyzed 3,566 prompts from 9 code generation benchmarks to identify quality issues in them. We also investigated whether fixing the identified quality issues in the benchmarks' prompts affects a model's performance. We also studied memorization issues of the evaluation dataset, which can call into question a benchmark's trustworthiness. We found that code generation evaluation benchmarks mainly focused on Python and coding exercises and had very limited contextual dependencies to challenge the model. These datasets and the developers' prompts suffer from quality issues like spelling and grammatical errors, unclear sentences that fail to express developers' intent, and improper documentation style. Fixing all these issues in the benchmarks can lead to better performance for Python code generation, but no significant improvement was observed for Java code generation. We also found evidence that the GPT-3.5-Turbo and CodeGen-2.5 models possibly have data contamination issues.
We theoretically investigate the spectrum of a single-electron double quantum dot, defined by top gates in graphene with a substrate-induced gap. We examine the effects of electric and magnetic fields on the spectrum of localized states, focusing on the tunability of the inter-dot coupling. We find that the substrate-induced gap allows for electrostatic control, with the limitation that, for a fixed inter-dot distance, the inter-dot coupling cannot be made arbitrarily small due to Klein tunneling. On the other hand, the proximity of the valence band in graphene allows for new regimes, such as an $npn$ double dot, which have no counterparts in GaAs.
This Letter reports measurements of differential cross sections for the production of two Z bosons in association with jets in proton-proton collisions at $\sqrt{s} =$ 8 and 13 TeV. The analysis is based on data samples collected at the LHC with the CMS detector, corresponding to integrated luminosities of 19.7 and 35.9 fb$^{-1}$ at 8 and 13 TeV, respectively. The measurements are performed in the leptonic decay modes ZZ $\to\ell^+ \ell^- \ell'^+ \ell'^-$, where $\ell,\ell' =$ e, $\mu$. The differential cross sections as a function of the jet multiplicity, the transverse momentum $p_\mathrm{T}$, and the pseudorapidity of the $p_\mathrm{T}$-leading and subleading jets are presented. In addition, the differential cross sections as a function of variables sensitive to vector boson scattering, such as the invariant mass of the two $p_\mathrm{T}$-leading jets and their pseudorapidity separation, are reported. The results are compared to theoretical predictions and found to be in good agreement within the theoretical and experimental uncertainties.
Let $q \in \mathbb{Z} [i]$ be prime and $\chi $ be the primitive quadratic Hecke character modulo $q$. Let $\pi$ be a self-dual Hecke automorphic cusp form for $\mathrm{SL}_3 (\mathbb{Z} [i] )$ and $f$ be a Hecke cusp form for $\Gamma_0 (q) \subset \mathrm{SL}_2 (\mathbb{Z} [i])$. Consider the twisted $L$-functions $ L (s, \pi \otimes f \otimes \chi) $ and $L (s, \pi \otimes \chi)$ on $\mathrm{GL}_3 \times \mathrm{GL}_2$ and $\mathrm{GL}_3$. We prove the subconvexity bounds \begin{equation*} L \big(\tfrac 1 2, \pi \otimes f \otimes \chi \big) \ll_{\, \varepsilon, \pi, f } \mathrm{N} (q)^{5/4 + \varepsilon}, L \big(\tfrac 1 2 + it, \pi \otimes \chi \big) \ll_{\, \varepsilon, \pi, t } \mathrm{N} (q)^{5/8 + \varepsilon}, \end{equation*} for any $\varepsilon > 0$.
We developed a model with no adjustable parameters for retention loss at short and long time scales in ferroelectric thin-film capacitors. We found that the predictions of this model are in good agreement with the experimental observations in the literature. In particular, it explains why a power-law function fits better than a linear-log relation on short time scales (10^-7 s to 1 s) and why a stretched exponential relation gives a more precise description than a linear-log plot on long time scales (>100 s), as reported by many researchers in the past. More severe retention losses at higher temperatures and in thinner films are also correctly predicted by the present theory.
This paper presents the Real-time Adaptive and Interpretable Detection (RAID) algorithm. This novel approach addresses the limitations of state-of-the-art anomaly detection methods for multivariate dynamic processes, which are restricted to detecting anomalies within the scope of the model training conditions. The RAID algorithm adapts to non-stationary effects such as data drift and change points that may not be accounted for during model development, resulting in a prolonged service life. A dynamic model based on the joint probability distribution handles anomaly detection and root cause isolation in a system, using adaptive process limits. The RAID algorithm does not require changes to existing process automation infrastructures, making it highly deployable across different domains. Two case studies involving real dynamic system data demonstrate the benefits of the RAID algorithm, including change point adaptation, root cause isolation, and improved detection accuracy.
Low-complexity joint estimation of synchronization impairments and channel in a single-user MIMO-OFDM system is presented in this letter. Based on a system model that takes into account the effects of synchronization impairments such as carrier frequency offset, sampling frequency offset, and symbol timing error, as well as the channel, a Maximum Likelihood (ML) algorithm for the joint estimation is proposed. To reduce the complexity of the ML grid search, the number of received signal samples used for estimation needs to be reduced. Conventional channel estimation methods based on Least Squares (LS) fail for the resulting under-determined system, which results in poor performance of the joint estimator. The proposed ML algorithm uses a Compressed Sensing (CS) based channel estimation method in a sparse fading scenario, where the received samples used for estimation are fewer than required for an LS based estimation. The performance of the estimation method is studied through numerical simulations, and it is observed that the CS based joint estimator performs better than the LS based joint estimator.
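To illustrate the CS step (a generic sketch under assumed sizes and dictionary, not the letter's exact algorithm), orthogonal matching pursuit can recover a sparse channel from an under-determined system where plain LS has no unique solution:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A x."""
    residual, support, x_s = y.astype(complex), [], None
    for _ in range(k):
        corr = np.abs(A.conj().T @ residual)
        corr[support] = 0.0                    # do not re-select chosen atoms
        support.append(int(np.argmax(corr)))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = x_s
    return x

# 24 received samples, a length-64 channel with only 4 nonzero taps
rng = np.random.default_rng(0)
A = (rng.normal(size=(24, 64)) + 1j * rng.normal(size=(24, 64))) / np.sqrt(24)
h = np.zeros(64, dtype=complex)
h[rng.choice(64, 4, replace=False)] = rng.normal(size=4) + 1j * rng.normal(size=4)
print(np.linalg.norm(omp(A, A @ h, 4) - h))   # ~0: recovered despite 24 < 64
```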
New applications of liquid crystalline materials have increased the need for precise engineering of elastic properties. Recently, Sidky et al. presented methods by which the elastic coefficients of molecular models with atomistic detail can be accurately calculated, demonstrating the result for the ubiquitous mesogen 5CB. In this work, these techniques are applied to the homologous series of nCB materials, focusing on the standard bend, twist, and splay deformations, using an entirely automated process. Our results show strong agreement with published experimental measurements for the nCBs and present a path forward to computational molecular engineering of liquid crystal elasticity for novel molecules and mixtures.
We report on a multi-year study of student attitudes measured with the Colorado Learning Attitudes about Science Survey (CLASS) in calculus-based introductory physics taught with the Modeling Instruction curriculum. We find that five of six instructors and eight of nine sections using Modeling Instruction showed significantly improved attitudes from pre- to post-course. Cohen's d effect sizes for individual instructors range from 0.08 to 0.95. The average effect was d = 0.45, with a 95% confidence interval of (0.26, 0.64). These results build on previously published results showing positive shifts in attitudes from Modeling Instruction classes. We interpret these data in light of other published positive attitudinal shifts and explore mechanistic explanations for similarities and differences with other published positive shifts.
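For reference, the effect size quoted is Cohen's d; in its standard pooled form for pre/post samples (the survey analysis may differ in detail):
\begin{equation*} d = \frac{\bar{x}_{\mathrm{post}} - \bar{x}_{\mathrm{pre}}}{s_{\mathrm{pooled}}}, \qquad s_{\mathrm{pooled}} = \sqrt{\frac{(n_{\mathrm{pre}} - 1)\, s_{\mathrm{pre}}^2 + (n_{\mathrm{post}} - 1)\, s_{\mathrm{post}}^2}{n_{\mathrm{pre}} + n_{\mathrm{post}} - 2}}. \end{equation*}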
We show that minimal models of nondegenerate hypersurfaces defined by Laurent polynomials with a $d$-dimensional Newton polytope $\Delta$ are Calabi-Yau varieties $X$ if and only if the Fine interior of $\Delta$ consists of a single lattice point. We give a combinatorial formula for computing the stringy Euler number of $X$. This formula allows us to test mirror symmetry in cases when $\Delta$ is not a reflexive polytope. In particular we apply this formula to pairs of lattice polytopes $(\Delta, \Delta^{\vee})$ that appear in Mavlyutov's generalization of the polar duality for reflexive polytopes. Some examples of Mavlyutov's dual pairs $(\Delta, \Delta^{\vee})$ show that the stringy Euler numbers of the corresponding Calabi-Yau varieties $X$ and $X^{\vee}$ may not satisfy the expected topological mirror symmetry test: $e_{\rm st}(X) = (-1)^{d-1} e_{\rm st}(X^{\vee})$. This shows the necessity of an additional condition on Mavlyutov's pairs $(\Delta, \Delta^\vee)$.
Hot Jupiters may have formed in situ, or been delivered to their observed short periods through one of two categories of migration mechanisms: disk migration or high-eccentricity migration. If hot Jupiters were delivered by high-eccentricity migration, we would expect to observe some "super-eccentric" Jupiters in the process of migrating. We update a prediction for the number of super-eccentric Jupiters we would expect to observe in the Kepler sample if all hot Jupiters migrated through high-eccentricity migration and estimate the true number observed by Kepler. We find that the observations fail to match the prediction from high-eccentricity migration with 94.3% confidence and show that high-eccentricity migration can account for at most ~62% of the hot Jupiters discovered by Kepler.
Ditopic bis-(triazole-pyridine)viologens are bidentate ligands that self-assemble into coordination polymers. In such photo-responsive materials, light irradiation initiates photo-induced electron transfer to generate pi-radicals that can self-associate to form pi-dimers. This leads to a cascade of events: processes at the supramolecular scale associated with mechanical and structural transitions at the macroscopic scale. By tuning the irradiation power and duration, we evidence the formation of aggregates and gels. Using microscopy, we show that the aggregates are dense, polydisperse, micron-sized, spindle-shaped particles which grow in time. Using microscopy and time-resolved micro-rheology, we follow the gelation kinetics, which leads to a gel characterized by a correlation length of a few microns and a weak elastic modulus. The analysis of the aggregate and gel states points to an arrested phase separation process.
We study the effect of periodic hopping modulation on a Su-Schrieffer-Heeger (SSH) chain that exhibits non-Hermiticity in the presence of an onsite staggered imaginary potential. This dissipative, non-Hermitian (NH) extension substantially modifies the features of the topologically trivial phase (TTP) and the topologically nontrivial phase (TNP) of the SSH chain. Though a weak potential can respect parity-time ($\mathcal{PT}$) symmetry, keeping the energy eigenvalues real, a strong potential breaks $\mathcal{PT}$ conservation, leading to imaginary end-state and complex bulk-state energies in the system. Furthermore, for large commensurate periodicity of the hopping, in-gap states appear that take either purely real or purely imaginary eigenvalues depending on the strength of both the NH potential and the hopping modulation. In particular, this paper considers hopping periodicities of 2, 4 and 8 lattice spacings. The localization of end states and in-gap states at the boundaries is investigated for these hopping periodicities. Though we find that topology and $\mathcal{PT}$ symmetry are not directly connected, clearly distinguishable distributions of $\mathcal{PT}$-broken and unbroken phases are observed within the TNP and TTP of our systems.
We investigate the early onset of pionic color transparency ($\pi$CT) observed at Jefferson Laboratory (JLAB) in the semi-exclusive pion electroproduction reaction $A(e,e'\pi^+)$ off nuclei. In the present approach the primary $\gamma^*p \to \pi^+ n$ interaction is described very well for both the longitudinal and the transverse polarizations. For the final state interactions, a coupled-channel treatment of the interactions of transmitted hadrons allows us to go beyond the Glauber approximation. We show that a proper distinction between the soft hadronic and hard partonic components of the electroproduction amplitude is essential for a quantitative description of the measured nuclear transparency. The data are well reproduced if one assumes that point-like configurations are produced in the regime of hard deep-inelastic scattering (DIS) off partons and dominate the transverse channel.
Nonlinear electromagnetic (EM) inverse scattering is a quantitative, super-resolution imaging technique in which more realistic interactions between the internal structure of a scene and the EM wavefield are taken into account in the imaging procedure, in contrast to conventional tomography. However, it poses important challenges arising from its intrinsic strong nonlinearity, ill-posedness, and expensive computational costs. To tackle these difficulties, we exploit, for the first time to the best of our knowledge, a connection between the deep neural network (DNN) architecture and the iterative method of nonlinear EM inverse scattering. This enables the development of a novel DNN-based methodology for nonlinear EM inverse problems (termed here DeepNIS). The proposed DeepNIS consists of a cascade of multi-layer complex-valued residual convolutional neural network (CNN) modules. We numerically and experimentally demonstrate that DeepNIS remarkably outperforms conventional nonlinear inverse scattering methods in terms of both image quality and computational time. We show that DeepNIS can learn a general model approximating the underlying EM inverse scattering system. It is expected that DeepNIS will serve as a powerful tool in treating highly nonlinear EM inverse scattering problems over different frequency bands, involving large-scale and high-contrast objects, which are extremely hard and impractical to solve using conventional inverse scattering methods.
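A minimal sketch of one complex-valued residual CNN module of the kind cascaded in DeepNIS (channel count, kernel size and activation are assumptions; complex convolution is realized here with two real-valued convolutions):

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """(x_r + i x_i)*(w_r + i w_i) = (x_r*w_r - x_i*w_i) + i(x_r*w_i + x_i*w_r)."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.conv_r = nn.Conv2d(c_in, c_out, k, padding=k // 2)
        self.conv_i = nn.Conv2d(c_in, c_out, k, padding=k // 2)

    def forward(self, x_r, x_i):
        return (self.conv_r(x_r) - self.conv_i(x_i),
                self.conv_r(x_i) + self.conv_i(x_r))

class ComplexResBlock(nn.Module):
    """One residual module; DeepNIS-style networks cascade several of these."""
    def __init__(self, ch=16):
        super().__init__()
        self.conv1, self.conv2 = ComplexConv2d(ch, ch), ComplexConv2d(ch, ch)
        self.act = nn.ReLU()

    def forward(self, x_r, x_i):
        h_r, h_i = self.conv1(x_r, x_i)
        h_r, h_i = self.conv2(self.act(h_r), self.act(h_i))
        return x_r + h_r, x_i + h_i            # residual connection

x_r, x_i = torch.randn(1, 16, 32, 32), torch.randn(1, 16, 32, 32)
y_r, y_i = ComplexResBlock(16)(x_r, x_i)       # shapes preserved
```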
We report on the design and performance of a double-sided coincidence velocity map imaging spectrometer optimized for electron-ion and ion-ion coincidence experiments studying inner-shell photoionization of gas-phase molecules with soft X-ray synchrotron radiation. The apparatus employs two microchannel plate detectors equipped with delay-line anodes for coincident, time- and position-resolved detection of photo- and Auger electrons with kinetic energies up to 300\,eV on one side of the spectrometer and photoions up to 25\,eV per unit charge on the opposite side. We demonstrate its capabilities by measuring valence photoelectron and ion spectra of neon and nitrogen, and by studying channel-resolved photoelectron and Auger spectra along with fragment-ion momentum correlations for chlorine $2p$ inner-shell ionization of \textit{cis}- and \textit{trans}-1,2-dichloroethene.
We use quantum Monte Carlo simulations and numerical analytic continuation to study high-energy spin excitations in the two-dimensional S=1/2 Heisenberg antiferromagnet at low temperature. We present results for both the transverse and longitudinal dynamic spin structure factor S(q,w) at q=(pi,0) and (pi/2,pi/2). Linear spin-wave theory predicts no dispersion on the line connecting these momenta. Our calculations show that in fact the magnon energy at (pi,0) is 10% lower than at (pi/2,pi/2). We also discuss the transverse and longitudinal multi-magnon continua and their relevance to neutron scattering experiments.
TOI-732 is an M dwarf hosting two transiting planets that are located on the two opposite sides of the radius valley. By doubling the number of available space-based observations and increasing the number of radial velocity (RV) measurements, we aim at refining the parameters of TOI-732 b and c. We also use the results to study the slope of the radius valley and the density valley for a well-characterised sample of M-dwarf exoplanets. We performed a global MCMC analysis by jointly modelling ground-based light curves and CHEOPS and TESS observations, along with RV time series both taken from the literature and obtained with the MAROON-X spectrograph. The slopes of the M-dwarf valleys were quantified via a Support Vector Machine (SVM) procedure. TOI-732 b is an ultrashort-period planet ($P\sim0.77$ d) with a radius $R_b=1.325_{-0.058}^{+0.057}$ $R_{\oplus}$ and a mass $M_b=2.46\pm0.19$ $M_{\oplus}$ (mean density $\rho_b=5.8_{-0.8}^{+1.0}$ g cm$^{-3}$), while the outer planet at $P\sim12.25$ d has $R_c=2.39_{-0.11}^{+0.10}$ $R_{\oplus}$, $M_c=8.04_{-0.48}^{+0.50}$ $M_{\oplus}$, and thus $\rho_c=3.24_{-0.43}^{+0.55}$ g cm$^{-3}$. Also taking into account our interior structure calculations, TOI-732 b is a super-Earth and TOI-732 c is a mini-Neptune. Following the SVM approach, we quantified $\mathrm{d}\log{R_{p,{\mathrm{valley}}}}/\mathrm{d}\log{P}=-0.065_{-0.013}^{+0.024}$, which is flatter than for Sun-like stars. In line with former analyses, we note that the radius valley for M-dwarf planets is more densely populated, and we further quantify the slope of the density valley as $\mathrm{d}\log{\hat{\rho}_{\mathrm{valley}}}/\mathrm{d}\log{P}=-0.02_{-0.04}^{+0.12}$. Compared to FGK stars, the weaker dependence of the position of the radius valley on the orbital period might indicate that the formation shapes the radius valley around M dwarfs more strongly than the evolution mechanisms.
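To make the SVM procedure concrete, here is a sketch of how a valley slope can be read off a linear SVM decision boundary in the (log P, log R) plane (synthetic placeholder data; the paper uses its curated M-dwarf sample and propagates uncertainties):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
logP = rng.uniform(0.0, 1.5, 200)                   # log10 period [days]
above = rng.integers(0, 2, 200)                     # 1: above valley, 0: below
logR = 0.15 + 0.25 * above - 0.065 * logP + 0.03 * rng.normal(size=200)

X = np.column_stack([logP, logR])
svm = SVC(kernel="linear", C=10.0).fit(X, above)

# Boundary: w0*logP + w1*logR + b = 0  =>  dlogR/dlogP = -w0/w1
w0, w1 = svm.coef_[0]
print(f"d log R_valley / d log P = {-w0 / w1:.3f}")
```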
We examined the Wilson-Bappu effect, a relationship between the absolute magnitude of the star, $M_V$, and the logarithm of the Ca {\sc ii} emission width, $W_0$, over the largest $M_V$ range to date, +13 to -5, covering M-dwarfs to type Ia supergiants. We used an extensive literature, the latest Hipparcos reduction, data from two globular clusters, and new observations from Apache Point Observatory to compile a sample that allowed us to study the effect of [Fe/H] on the Wilson-Bappu relationship. Our results include reporting the deviations from linearity and demonstrating that the Wilson-Bappu relationship is insensitive to metallicity.
Cosmological information is usually extracted from Lyman-$\alpha$ forest correlations using either large-scale information interpreted through linear theory or small-scale information interpreted by means of expensive hydrodynamical simulations. A complete cosmological interpretation of the 3D correlations at all measurable scales is challenged by the need for more realistic models including the complex growth of non-linear small scales, which can only be studied within large hydrodynamical simulations. Past work was often limited by the trade-off between the simulated cosmological volume and the resolution of the low-density intergalactic medium from which the Lyman-$\alpha$ signal originates. We conduct a suite of hydrodynamical simulations of the intergalactic medium, including one of the largest Lyman-$\alpha$ simulations ever performed in terms of volume (640 $h^{-1}\mathrm{Mpc}$), alongside simulations in smaller volumes with resolutions up to 25 $h^{-1}\mathrm{kpc}$. We compare the 3D Lyman-$\alpha$ power spectra predicted by those simulations to different non-linear models. The inferred Lyman-$\alpha$ bias and RSD parameters, $b_\alpha$ and $\beta_\alpha$, are in remarkable agreement with those measured in SDSS and DESI data. We find that, contrary to intuition, the convergence of large-scale modes of the 3D Lyman-$\alpha$ power spectra, which determines $\beta_\alpha$, is primarily influenced by the resolution of the simulation box through mode coupling, rather than by the box size itself. Finally, we study the BAO signal encoded in the 3D Lyman-$\alpha$ power spectra. For the first time with a hydrodynamical simulation, we clearly detect the BAO signal; however, we only marginally detect its damping, associated with the non-linear growth of structures.
We construct phenomenologically viable supersymmetric models where CP is an approximate symmetry. The full high energy theory has exact CP and horizontal symmetries that are spontaneously broken with a naturally induced hierarchy of scales, $\Lambda_{CP}\ll\Lambda_H$. Consequently, the effective low energy theory, that is the supersymmetric Standard Model, has CP broken explicitly but by a small parameter. The $\epsilon_K$ parameter is accounted for by supersymmetric contributions. The predictions for other CP violating observables are very different from the Standard Model. In particular, CP violating effects in neutral B decays into final CP eigenstates such as $B\to\psi K_S$ and in $K\to\pi\nu\bar\nu$ decays are very small.
The NEXT-White detector, a high-pressure gaseous xenon time projection chamber, demonstrated the excellence of this technology for future neutrinoless double beta decay searches using photomultiplier tubes (PMTs) to measure energy and silicon photomultipliers (SiPMs) to extract topology information. This analysis uses $^{83m}\text{Kr}$ data from the NEXT-White detector to measure and understand the energy resolution that can be obtained with the SiPMs rather than with the PMTs. The energy resolution obtained, (10.9 $\pm$ 0.6)$\%$ full width at half maximum, is slightly larger than predicted from the photon statistics resulting from the very low light-detection coverage of the SiPM plane in the NEXT-White detector. The difference between the predicted and measured resolution is attributed to imperfect corrections, which are expected to improve with larger statistics. Furthermore, the noise of the SiPMs is shown not to be a dominant factor in the energy resolution and may be negligible when noise subtraction is applied appropriately, for high-energy events or detectors with larger SiPM coverage. These results, which are extrapolated to estimate the response of large-coverage SiPM planes, are promising for the development of future SiPM-only readout planes that can offer imaging and achieve energy resolution similar to that previously demonstrated with PMTs.
Hyperbolic metamaterials were originally introduced to overcome the diffraction limit of optical imaging. Soon thereafter it was realized that hyperbolic metamaterials demonstrate a number of novel phenomena resulting from the broadband singular behavior of their density of photonic states. These novel phenomena and applications include super resolution imaging, new stealth technologies, enhanced quantum-electrodynamic effects, thermal hyperconductivity, superconductivity, and interesting gravitation theory analogues. Here we briefly review typical material systems, which exhibit hyperbolic behavior and outline important applications of hyperbolic metamaterials.
Using numerical methods we discuss the effects of open boundary conditions on condensation phenomena in the zero-range process (ZRP) and transport processes with pair-factorized steady states (PFSS), an extended model of the ZRP with nearest-neighbor interaction. For the zero-range process we compare to analytical results in the literature with respect to criticality and condensation. For the extended model we find a similar phase structure, but observe supercritical phases with droplet formation for strong boundary drives.
Hierarchies allow feature sharing between objects at multiple levels of representation, can code exponential variability in a very compact way, and enable fast inference. This makes them potentially suitable for learning and recognizing a higher number of object classes. However, the success of hierarchical approaches so far has been hindered by the use of hand-crafted features or predetermined grouping rules. This paper presents a novel framework for learning a hierarchical compositional shape vocabulary for representing multiple object classes. The approach takes simple contour fragments and learns their frequent spatial configurations. These are recursively combined into increasingly more complex and class-specific shape compositions, each exhibiting a high degree of shape variability. At the top level of the vocabulary, the compositions are sufficiently large and complex to represent the whole shapes of the objects. We learn the vocabulary layer after layer, by gradually increasing the size of the window of analysis and reducing the spatial resolution at which the shape configurations are learned. The lower layers are learned jointly on images of all classes, whereas the higher layers of the vocabulary are learned incrementally, by presenting the algorithm with one object class after another. The experimental results show that the learned multi-class object representation scales favorably with the number of object classes and achieves state-of-the-art detection performance with both faster inference and shorter training times.
We study the evolution of the cosmological perturbations after inflation in curvaton models where the non-relativistic curvaton decays into both radiation and a cold dark matter component. We calculate the primordial curvature and correlated isocurvature perturbations inherited by the radiation and cold dark matter after the curvaton has decayed. We give the transfer coefficient in terms of the initial curvaton density relative to the curvaton decay rate.
We study the correlation of top asymmetries that are sensitive to the different origin of (a new contribution to) the total asymmetry: loop- or tree-level origins. We find that both the size and sign of the correlation between total and $t\bar{t}j$ inclusive asymmetries are inherently different depending on the origin. We demonstrate the correlation by using the color-singlet $Z^\prime$ and the pure axigluon taken as representative models of loop- and tree-induced total asymmetries. We calculate the next-to-leading order QCD corrections to the $Z^\prime$ and perform Monte-Carlo event generation. The correlation is understood in the QCD eikonal approximation using its color structure.
Finite temperature lattice QCD with N_f=2 nonperturbatively improved Wilson fermions is studied on a 16^3 \times 8 lattice. Using abelian projection after fixing to the maximally abelian (MA) gauge, we determine the transition temperature for m_{\pi}/m_{\rho} \sim 0.8.
Recent work on the quantization of Maxwell theory has used a non-covariant class of gauge-averaging functionals which include explicitly the effects of the extrinsic-curvature tensor of the boundary, or covariant gauges which, unlike the Lorentz case, are invariant under conformal rescalings of the background four-metric. This paper studies in detail the admissibility of such gauges at the classical level. It is proved that Euclidean Green functions of a second- or fourth-order operator exist which ensure the fulfillment of such gauges at the classical level, i.e. on a portion of flat Euclidean four-space bounded by three-dimensional surfaces. The admissibility of the axial and Coulomb gauges is also proved.
We analyse the quantization procedure of the spinor field in Rindler spacetime, showing the boundary conditions that should be imposed on the field in order to have a well-posed theory. Because of these boundary conditions, we argue that this construction and the usual one in Minkowski spacetime are qualitatively different and cannot be compared; consequently, the conventional interpretation of the Unruh effect, namely the thermal nature of the Minkowski vacuum state from the point of view of an accelerated observer, is questionable. We also analyse the Unruh quantization scheme in detail and show that it is not valid in the whole Minkowski space but only in the double Rindler wedge, and that it cannot be used as a basis for a quantum-theoretical proof of the Unruh effect.
I review a recently proposed scaling analysis of hadron suppression in Deeply Inelastic Scattering on nuclear targets measured at the HERMES experiment. The analysis can distinguish between two competing explanations for the observed suppression, namely quark radiative energy loss with long hadron formation times, and prehadron nuclear absorption with hadronization starting inside the nucleus. Experimental data are shown to favor short formation times and prehadron absorption.
Using an adapted Sn-flux growth technique we obtained comparatively large CeFeAsO single crystals of better quality than previously reported polycrystals or single crystals, as evidenced by much sharper anomalies at the structural and magnetic phase transitions as well as a much higher residual resistivity ratio of 12. In the magnetically ordered phase we observe a very pronounced metallic behavior of the in-plane resistivity, which excludes a Mott insulator regime at low temperature. The separation Delta_T = T_0 - T_N between structural and magnetic ordering temperatures decreases with increasing sample quality, from 18 K in the initial reports to 6 K in the present single crystals, demonstrating that this separation is not an intrinsic property of the RFeAsO systems. Our results indicate that the coupling between magnetic ordering and structural distortion is very similar in AFe2As2 and RFeAsO type of compounds, much more similar than previously thought. Our experimental results provide arguments both for and against the nematic phase model.
The radiative ortho-para transition in molecular hydrogen is studied. This highly forbidden transition is very sensitive to relativistic and subtle nonadiabatic effects. Our result for the transition rate in the ground vibrational level, $\Gamma(J=1\to J=0) = 6.20(62)\cdot 10^{-14}\ \mathrm{yr}^{-1}$, is significantly lower than all previous approximate calculations. Experimental detection of such a weak line by observation of, for example, cold interstellar molecular hydrogen is at present unlikely.
We propose a new family of message passing techniques for MAP estimation in graphical models which we call {\em Sequential Reweighted Message Passing} (SRMP). Special cases include well-known techniques such as {\em Min-Sum Diffusion} (MSD) and a faster {\em Sequential Tree-Reweighted Message Passing} (TRW-S). Importantly, our derivation is simpler than the original derivation of TRW-S, and does not involve a decomposition into trees. This allows easy generalizations. We present such a generalization for the case of higher-order graphical models, and test it on several real-world problems with promising results.
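For orientation, the common ancestor of these schemes is sequential min-sum message passing, which is exact for MAP on a chain (a sketch; SRMP's reweighted updates on loopy, higher-order graphs are more involved):

```python
import numpy as np

def chain_map(unary, pairwise):
    """Exact MAP on a chain MRF by a forward min-sum pass plus backtracking.

    unary:    (n, L) node costs theta_i(x_i)
    pairwise: (L, L) edge costs theta(x_i, x_{i+1}), shared across edges
    """
    n, L = unary.shape
    msgs = np.zeros((n, L))          # msgs[i][s]: cheapest cost to reach s at node i
    for i in range(1, n):
        cand = (msgs[i - 1] + unary[i - 1])[:, None] + pairwise
        msgs[i] = cand.min(axis=0)
    x = np.empty(n, dtype=int)
    x[-1] = int(np.argmin(msgs[-1] + unary[-1]))
    for i in range(n - 2, -1, -1):   # backtrack the optimal labels
        x[i] = int(np.argmin(msgs[i] + unary[i] + pairwise[:, x[i + 1]]))
    return x

rng = np.random.default_rng(0)
print(chain_map(rng.normal(size=(5, 3)), rng.normal(size=(3, 3))))
```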
As a part of this project, we have developed an IoT-based instrument utilizing the NodeMCU ESP8266 module, an MQ135 gas sensor, and a DHT-11 sensor to measure CO$_2$ levels in parts per million (ppm), temperature, and humidity. The escalating CO$_2$ levels worldwide necessitate constant monitoring and analysis to comprehend the implications for human health, safety, energy efficiency, and environmental well-being. Thus, an efficient and cost-effective solution is imperative to measure and transmit data for statistical analysis and storage. The instrument offers real-time monitoring, enabling a comprehensive understanding of indoor environmental conditions. By providing valuable insights, it facilitates the implementation of measures to ensure health and safety, optimize energy efficiency, and promote effective environmental monitoring. This scientific endeavor aims to contribute to the growing body of knowledge surrounding CO$_2$ levels, temperature, and humidity, fostering sustainable practices and informed decision-making.
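A MicroPython-flavoured sketch of the sensing loop (the pin mapping, the MQ135 curve constants, and the reporting path are all assumptions; such firmware could equally be written with the Arduino toolchain, and the ppm conversion requires per-device calibration):

```python
# Runs on an ESP8266 under MicroPython; DHT-11 data pin and MQ135 load/zero
# resistances below are assumed values, not measured ones.
import dht
import machine
import time

sensor_dht = dht.DHT11(machine.Pin(4))    # DHT-11 on GPIO4 (assumed wiring)
adc = machine.ADC(0)                      # MQ135 analog output on A0

def mq135_ppm(raw, r_load=10.0, r_zero=76.63):
    """Placeholder power-law conversion from sensor resistance to CO2 ppm;
    the a, b constants must come from fitting the MQ135 datasheet curve."""
    v = max(raw, 1) / 1024 * 3.3
    rs = (3.3 - v) / v * r_load
    a, b = 116.602, -2.769
    return a * (rs / r_zero) ** b

while True:
    sensor_dht.measure()
    t, h = sensor_dht.temperature(), sensor_dht.humidity()
    print("T={} C  RH={} %  CO2~{:.0f} ppm".format(t, h, mq135_ppm(adc.read())))
    time.sleep(60)                        # log/transmit once per minute
```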
The primary objective of this paper is to revisit and make a case for the merits of R.A. Fisher's objections to the decision-theoretic framing of frequentist inference. It is argued that this framing is congruent with Bayesian inference but incongruent with frequentist inference. It provides the Bayesian approach with a theory of optimal inference, but it misrepresents the theory of optimal frequentist inference by framing inferences solely in terms of the universal quantifier `for all values of theta in the parameter space'. This framing is at odds with the primary objective of model-based frequentist inference, which is to learn from data about the true value of theta (the unknown parameter(s)); the one that gave rise to the particular data. The frequentist approach relies on factual (estimation, prediction) as well as hypothetical (testing) reasoning, whose primary aim is to learn from data about the true theta. The paper calls into question the appropriateness of admissibility and reassesses Stein's paradox as it relates to the capacity of frequentist estimators to pinpoint the true theta. The paper also compares and contrasts loss-based errors with traditional frequentist errors, such as coverage and type I and II errors; the former are attached to theta, but the latter to the inference procedure itself.
We discuss the possibility of observing CPT violation from top anti-top production in hadronic colliders. We study a general approach by analyzing constraints on the mass difference between the top and anti-top quarks. We present current bounds from Tevatron data, and comment on the prospects for improving these bounds at the LHC and the ILC.
Although autoregressive models have achieved promising results on image generation, their unidirectional generation process prevents the resultant images from fully reflecting global contexts. To address the issue, we propose an effective image generation framework, Draft-and-Revise with Contextual RQ-Transformer, to consider global contexts during the generation process. As a generalized VQ-VAE, RQ-VAE first represents a high-resolution image as a sequence of discrete code stacks. After code stacks in the sequence are randomly masked, Contextual RQ-Transformer is trained to infill the masked code stacks based on the unmasked contexts of the image. Then, Contextual RQ-Transformer uses our two-phase decoding, Draft-and-Revise, to generate an image while exploiting the global contexts of the image during the generation process. Specifically, in the draft phase, our model first focuses on generating diverse images, albeit of rather low quality. Then, in the revise phase, the model iteratively improves the quality of the images while preserving the global contexts of the generated images. In experiments, our method achieves state-of-the-art results on conditional image generation. We also validate that the Draft-and-Revise decoding can achieve high performance by effectively controlling the quality-diversity trade-off in image generation.
We investigate the quantum phase transitions of a two-dimensional Bose-Hubbard model in the presence of Rashba spin-orbit coupling, with and without thermal fluctuations. The interplay of single-particle hopping, the strength of the spin-orbit coupling, and the interspin interaction leads to superfluid phases with distinct properties. With interspin interactions weaker than intraspin interactions, the spin-orbit coupling induces two finite-momentum superfluid phases. One of them is a phase-twisted superfluid that exists at low hopping strengths and reduces the domain of the insulating phases. At comparatively higher hopping strengths, there is a transition from the phase-twisted to a finite-momentum stripe superfluid. With interspin interactions stronger than intraspin interactions, the system exhibits a phase-twisted to ferromagnetic phase transition. At finite temperatures, thermal fluctuations destroy the phase-twisted superfluidity and lead to a wide region of normal-fluid states. These findings can be observed in recent quantum gas experiments with spin-orbit coupling in optical lattices.
Contemporary wisdom based on empirical studies suggests that standard recurrent neural networks (RNNs) do not perform well on tasks requiring long-term memory. However, precise reasoning for this behavior is still unknown. This paper provides a rigorous explanation of this property in the special case of linear RNNs. Although this work is limited to linear RNNs, even these systems have traditionally been difficult to analyze due to their non-linear parameterization. Using recently-developed kernel regime analysis, our main result shows that linear RNNs learned from random initializations are functionally equivalent to a certain weighted 1D-convolutional network. Importantly, the weightings in the equivalent model cause an implicit bias to elements with smaller time lags in the convolution and hence, shorter memory. The degree of this bias depends on the variance of the transition kernel matrix at initialization and is related to the classic exploding and vanishing gradients problem. The theory is validated in both synthetic and real data experiments.
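The equivalence asserted here is easy to see in its deterministic skeleton: unrolling a linear RNN $h_t = W h_{t-1} + U x_t$, $y_t = c^\top h_t$ with $h_0 = 0$ gives
\begin{equation*} y_t = \sum_{k=0}^{t-1} c^\top W^k U\, x_{t-k}, \end{equation*}
a 1D convolution of the input with kernel $\kappa_k = c^\top W^k U$. When the spectral radius of $W$ is below one, $\kappa_k$ decays geometrically in the lag $k$, which is the bias toward short memory described above (the paper's kernel-regime result makes this precise for networks learned from random initializations).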
A model is developed for the electromagnetic form factor of the pion. One-loop corrections are included in the linear sigma-model. The rho-meson contribution is added in an extended VMD model. The form factor, calculated without fitting parameters, is in good agreement with experiment for space-like and time-like photon momenta. Loop corrections to the two-pion hadronic contribution a^{(had, \pi)}_\mu to the muon anomalous magnetic moment are calculated. The optimal value of the sigma-meson mass appears to be close to the rho-meson mass.
In this work we evaluate the $^1S_0$ energy gap of $\Sigma^-$ hyperons in $\beta$-stable neutron star matter. We solve the BCS gap equation for an effective $\Sigma^-\Sigma^-$ pairing interaction derived from the most recent parametrization of the hyperon-hyperon interaction constructed by the Nijmegen group. We find that the $\Sigma^-$ hyperons are in a $^1S_0$ superfluid state in the density region $\sim 0.27-0.7$ fm$^{-3}$, with a maximum energy gap of order 8 MeV at a total baryon number density of $\sim 0.37$ fm$^{-3}$ and a $\Sigma^-$ fraction of about 8%. We examine the implications on neutron star cooling.
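For context, the angle-averaged $^1S_0$ BCS gap equation solved in such studies has the standard form (conventions for prefactors vary):
\begin{equation*} \Delta(k) = -\frac{1}{\pi} \int_0^\infty \mathrm{d}k'\, k'^2\, V(k,k')\, \frac{\Delta(k')}{\sqrt{\left(\varepsilon(k') - \mu\right)^2 + \Delta^2(k')}}, \end{equation*}
where $V(k,k')$ is the momentum-space $^1S_0$ $\Sigma^-\Sigma^-$ pairing interaction, $\varepsilon(k)$ the single-particle energy, and $\mu$ the $\Sigma^-$ chemical potential.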
We present a new approach aimed at constraining the typical size and optical properties of carbon dust grains in circumstellar envelopes (CSEs) of carbon-rich stars (C-stars) in the Small Magellanic Cloud (SMC). To achieve this goal, we apply our recent dust growth description, coupled with a radiative transfer code, to the CSEs of C-stars evolving along the TP-AGB, for which we compute spectra and colors. We then compare our modeled colors in the near- and mid-infrared (NIR and MIR) bands with the observed ones, testing different assumptions in our dust scheme and employing several data sets of optical constants for carbon dust available in the literature. Different assumptions adopted in our dust scheme change the typical size of the carbon grains produced. We constrain the carbon dust properties by selecting the combination of grain size and optical constants which best reproduces several colors in the NIR and MIR at the same time. The different choices of optical properties and grain size lead to differences in the NIR and MIR colors greater than two magnitudes in some cases. We conclude that the complete set of observed NIR and MIR colors is best reproduced by small grains, with sizes between $\sim$0.035 and $\sim$0.12~$\mu$m, rather than by large grains between $\sim0.2$ and $0.7$~$\mu$m. The inability of large grains to reproduce the NIR and MIR colors appears independent of the adopted optical data set. We also find a possible trend of grain size with mass loss and/or carbon excess in the CSEs of these stars.
Patient privacy is a major barrier to healthcare AI. For confidentiality reasons, most patient data remains siloed in separate hospitals, preventing the design of data-driven healthcare AI systems that need large volumes of patient data to make effective decisions. A solution to this is collective learning across multiple sites through federated learning with differential privacy. However, literature in this space typically focuses on differentially private statistical estimation and machine learning, which is different from the causal inference-related problems that arise in healthcare. In this work, we take a fresh look at federated learning with a focus on causal inference; specifically, we look at estimating the average treatment effect (ATE), an important task in causal inference for healthcare applications, and provide a federated analytics approach to enable ATE estimation across multiple sites along with differential privacy (DP) guarantees at each site. The main challenge comes from site heterogeneity -- different sites have different sample sizes and privacy budgets. We address this through a class of per-site estimation algorithms that report the ATE estimate and its variance as a quality measure, and an aggregation algorithm on the server side that minimizes the overall variance of the final ATE estimate. Our experiments on real and synthetic data show that our method reliably aggregates private statistics across sites and provides a better privacy-utility trade-off under site heterogeneity than the baselines.
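A minimal sketch of the two-tier idea (a Gaussian-mechanism release per site, inverse-variance aggregation at the server); the paper's estimators, sensitivity analysis and privacy accounting are more elaborate, and all numbers below are placeholders:

```python
import numpy as np

def private_site_release(ate, var, sensitivity, eps, delta, rng):
    """Site side: release the ATE via the standard Gaussian mechanism and
    report the total variance (sampling + injected noise) as a quality measure."""
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return ate + rng.normal(0.0, sigma), var + sigma ** 2

def aggregate(released):
    """Server side: inverse-variance weights minimize the variance of the
    combined ATE estimate across heterogeneous sites."""
    ates, variances = map(np.array, zip(*released))
    w = (1.0 / variances) / np.sum(1.0 / variances)
    return float(w @ ates), float(1.0 / np.sum(1.0 / variances))

rng = np.random.default_rng(0)
sites = [(2.1, 0.30), (1.8, 0.10), (2.5, 0.55)]   # (ATE, variance) per site
budgets = [0.5, 1.0, 2.0]                         # heterogeneous epsilons
released = [private_site_release(a, v, 1.0, e, 1e-5, rng)
            for (a, v), e in zip(sites, budgets)]
print(aggregate(released))                        # (pooled ATE, its variance)
```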
In this article, we consider the ratio of structure functions for heavy-quark pair production at low values of $x$. The importance of this ratio for charm and beauty pair production is examined in light of the Hadron Electron Ring Accelerator (HERA) data. The behavior of these ratios is considered in terms of the hard-pomeron behavior of the gluon distribution function. The results are in good agreement with the HERA data. Extending this analysis to new energy ranges underscores the importance of these measurements for heavy quarks. The ratio of charm and beauty structure functions at the proposed Large Hadron electron Collider (LHeC) is considered as a function of the invariant center-of-mass energy. For top pair production, this ratio is extracted for the known kinematics of the LHeC and Future Circular Collider electron-hadron (FCC-eh) colliders. The results obtained for the ratio of top structure functions at the LHeC and FCC-eh are compared over the specified inelasticity $y$ range.
Self-supervised learning (SSL) has recently attracted significant attention in the field of recommender systems. Contrastive learning (CL) stands out as a major SSL paradigm due to its robust ability to generate self-supervised signals. Mainstream graph contrastive learning (GCL)-based methods typically implement CL by creating contrastive views through various data augmentation techniques. Although these methods are effective, we argue that several challenges remain: i) data augmentation (e.g., discarding edges or adding noise) necessitates additional graph convolution (GCN) or modeling operations, which are highly time-consuming and can potentially harm the embedding quality; ii) existing CL-based methods use traditional CL objectives to capture self-supervised signals, but few studies have explored obtaining CL objectives from more perspectives and fusing the varying signals from these objectives to enhance recommendation performance. To overcome these challenges, we propose a High-Order Fusion Graph Contrastive Learning (HFGCL) framework for recommendation. Specifically, we discard data augmentation and instead use high-order information from the GCN process to create contrastive views. Additionally, to integrate self-supervised signals from various CL objectives, we propose an advanced CL objective. By ensuring that positive pairs are distanced from negative samples derived from both contrastive views, we effectively fuse self-supervised signals from distinct CL objectives, thereby enhancing the mutual information between positive pairs. Experimental results on three public datasets demonstrate the superior effectiveness of HFGCL compared to state-of-the-art baselines.
In this paper, we study the production of an isolated photon plus a jet in $pp$ and $PbPb$ collisions, which can be used as an important probe of the jet transport properties of the quark-gluon plasma created in heavy-ion collisions. There are two main observables associated with the production of an isolated photon plus a jet, namely the azimuthal angular correlation and the transverse momentum imbalance. To understand both observables in the full kinematic region, we employ a perturbative QCD calculation, which takes into account the hard splitting of partons, together with the Sudakov resummation formalism, which resums soft gluon splittings. Furthermore, by introducing energy loss into the system, we calculate the enhancement of the momentum imbalance distribution for $AA$ as compared to $pp$ collisions and make predictions for future unfolded experimental data. In addition, in order to extract the jet transport coefficient more precisely in our numerical calculation, we also distinguish quark jets from gluon jets, since they interact with the quark-gluon plasma with different strengths. This work provides a reliable theoretical tool for the calculation of the gamma-jet correlation, which can lead to a more precise extraction of the jet transport coefficient in relativistic heavy-ion collisions.
In this paper, we reveal a new connection between approximation numbers of periodic Sobolev type spaces, where the smoothness weights on the Fourier coefficients are induced by a (quasi-)norm $\|\cdot\|$ on $\mathbb{R}^d$, and entropy numbers of the embedding $\textrm{id}: \ell_{\|\cdot\|}^d \to \ell_\infty^d$. This connection yields preasymptotic error bounds for approximation numbers of isotropic Sobolev spaces, spaces of analytic functions, and spaces of Gevrey type in $L_2$ and $H^1$, which find application in the context of Galerkin methods. Moreover, we observe that approximation numbers of certain Gevrey type spaces behave preasymptotically almost identical to approximation numbers of spaces of dominating mixed smoothness. This observation can be exploited, for instance, for Galerkin schemes for the electronic Schr\"odinger equation, where mixed regularity is present.
This paper studies the proposed green (energy-efficient) coverage probability and the link and network energy efficiencies in the downlink of a heterogeneous cellular network (HetNet) consisting of $K$ independent Poisson point processes (PPPs) of base stations (BSs). The important statistical properties of the universal (general) cell association functions are first studied, and the cell load statistics for power-law cell association functions, which accurately characterize the void cell probability of a BS in every tier, are also derived. A simple and feasible green channel-aware cell association (GCA) scheme is proposed, and the green coverage probability is defined for any particular cell association scheme, such as the maximum received power association (MRPA) and nearest base station association (NBA) schemes. The link and network energy efficiencies are then proposed to characterize the mean spectrum efficiency per unit power consumption for a BS and the mean area spectrum efficiency for a HetNet, respectively. Tight bounds on the green coverage probability and the link and network energy efficiencies for the GCA, MRPA and NBA schemes are found. They are theoretically shown to pose fundamental upper limits on the link and network energy efficiencies achieved by any other cell association scheme, a fact that is validated by numerical results as well.
We derive a set of Leggett-Garg inequalities (temporal Bell inequalities) for the moment generating function of charge transferred through a conductor. Violation of these inequalities demonstrates the absence of a macrorealistic description of the transport process. We show how these inequalities can be violated by quantum-mechanical systems and consider transport through normal and superconducting single-electron transistors as examples.
We show that a class of divergence-form elliptic problems with quadratic growth in the gradient and non-coercive zero order terms are solvable, under essentially optimal hypotheses on the coefficients in the equation. In addition, we prove that the solutions are in general not unique. The case where the zero order term has the opposite sign was already intensively studied and the uniqueness is the rule.
Two-dimensional alloys of carbon and nitrogen are of pressing interest due to prospective applications in nanomechanical and optoelectronic devices. The stability of these chemical structures must be understood as a function of their composition. The present study employs hybrid density functional theory and reactive molecular dynamics simulations to gain insight into how many nitrogen atoms can be incorporated into the graphene sheet without destroying it. We conclude that (1) the C:N=56:28 structure and all nitrogen-poorer structures maintain stability at 1000 K; (2) stability suffers from N-N bonds; (3) the distribution of electron density depends heavily on the structural pattern in the N-doped graphene. Our calculations support experimental efforts on the production of highly N-doped graphene and the tuning of the mechanical and optoelectronic properties of graphene.
We study the spatially homogeneous time-dependent solutions and their bifurcations in the Gray-Scott model. We obtain the global map of bifurcations by combining rigorous verification of the existence of a Takens-Bogdanov and a Bautin bifurcation in the space of the two parameters k and F. With the aid of numerical continuation of local bifurcation curves, we give a global description of all the possible bifurcations.
We present a QCD sum rule analysis for the newly observed resonance $X_{c}(3250)$ by assuming it to be a $D_{0}^{*}(2400)N$ molecular state. Technically, contributions of operators up to dimension 12 are included in the operator product expansion (OPE). We find it difficult to achieve conventional OPE convergence in this work. By relaxing the rigid OPE convergence criterion, one finds that the OPE convergence is still under control in the present work, and the numerical result for the $D_{0}^{*}(2400)N$ state is $3.18\pm0.51~\mathrm{GeV}$, which is in agreement with the experimental data on $X_{c}(3250)$. Given that conventional OPE convergence is not obtained here, only weak conclusions can be drawn regarding the explanation of $X_{c}(3250)$ in terms of a $D_{0}^{*}(2400)N$ molecular state. As a byproduct, the mass of the bottom counterpart $\bar{B}_{0}^{*}N$ state is predicted to be $6.50\pm0.49~\mathrm{GeV}$.
We complete the set of string vertices of non-negative dimension by introducing in a consistent manner those moduli spaces which had previously been excluded. As a consequence we obtain a `geometrised' string action taking the simple form $S=f(\B)$ where `$\B$' is the sum of the string vertices. That the action satisfies the B-V master equation follows from the recursion relations for the string vertices which take the form of a `geometrical' quantum master equation.
Max-stable random fields can be constructed according to Schlather (2002) from a random function or a stationary process together with a kind of random event magnitude. Such constructions are applied in the modelling of natural hazards. We extend these event-controlled constructions to random fields of maxima with a non-max-stable dependence structure (copula). The theory for the variant with a stationary process is straightforward; the parameter(s) of its correlation function are determined by the event magnitude. The introduced variant with random functions can only be studied numerically. The scaling of the random function is determined exponentially by the event magnitude. In the examples studied, the location parameter of the Gumbel margins depends only on this exponential function; the scale parameter of the margins is normalized. In addition, we propose a method for parameter estimation for such constructions using Kendall's tau, taking into account the spatial dependence in relation to the block size. Finally, we briefly discuss some issues such as sampling.
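To illustrate the Kendall's-tau route to parameter estimation (a generic sketch using the logistic/Gumbel relation $\tau = 1 - 1/\theta$; the constructions above have their own tau-parameter relations and a block-size dependence):

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
common = rng.gumbel(size=1000)                     # shared "event" component
z1 = np.maximum(common, rng.gumbel(size=1000))     # stand-ins for maxima at
z2 = np.maximum(common, rng.gumbel(size=1000))     # two locations

tau, _ = kendalltau(z1, z2)
theta_hat = 1.0 / (1.0 - tau)                      # invert tau = 1 - 1/theta
print(tau, theta_hat)
```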
Structural and superconducting properties of high-quality niobium nanofilms of different thicknesses, grown on silicon oxide and sapphire substrates, are investigated. The role played by the different substrates and the superconducting properties of the Nb films are discussed in terms of the defect density of the films and the presence of an interfacial oxide layer between the Nb film and the substrate. X-ray absorption spectroscopy is employed to uncover the structure of this interfacial layer. We show that the interfacial layer leads to a strong proximity effect, especially in films deposited on a SiO$_2$ substrate, altering the superconducting properties of the Nb films. Our results establish that the critical temperature is determined by an interplay between quantum-size effects, due to the reduced Nb film thickness, and proximity effects.
We investigate baryon-baryon (BB) interactions in 3-flavor full QCD simulations with degenerate quark masses for all flavors. The BB potentials in the orbital S-wave are extracted from Nambu-Bethe-Salpeter wave functions measured on the lattice. We observe a strong flavor-spin dependence of the BB potentials at short distances. In particular, a strong repulsive core exists in the flavor-octet, spin-singlet channel (the 8_s representation), while an attractive core appears in the flavor-singlet channel (the 1 representation). We discuss the relation of this flavor-spin dependence to the Pauli exclusion principle at the quark level. The possible existence of an H-dibaryon resonance above the Lambda-Lambda threshold is also discussed.
This paper concerns what Background Independence itself is (as opposed to some particular physical theory that is background independent). The notions presented mostly arose from a layer-by-layer analysis of the facets of the Problem of Time in Quantum Gravity. Part of this coincides with two relational postulates, which are thus identified as classical precursors of two of the facets of the Problem of Time. These are furthermore tied to the forms of the GR Hamiltonian and momentum constraints. Other aspects of Background Independence include the algebraic closure of these constraints, the expression of physics in terms of beables, foliation independence as implemented by refoliation invariance, and the reconstruction of spacetime from space. The final picture is that Background Independence - a philosophically desirable and physically implementable feature for a theory to have - has the facets of the Problem of Time among its consequences. These thus arise naturally and are problems to be resolved, as opposed to avoided `by making one's physics background-dependent in order not to have these problems'. This serves as a selection criterion that limits the use of a number of model arenas and physical theories.
By using images taken with WFCAM on UKIRT and SofI on the NTT, and combining them with 2MASS, we have measured proper motions for 126 L and T dwarfs in the dwarf archive. Two of these L dwarfs appear to have M dwarf common proper motion companions, and two appear to be high-velocity dwarfs, indicating possible membership of the thick disc. We have also compared the motions of these 126 objects to those of numerous moving groups, and have identified new members of the Hyades, Ursa Major and Pleiades moving groups. These new objects, together with those identified in Jameson et al. (2008), have allowed us to refine the L dwarf sequence for Ursa Major defined by Jameson et al. (2008).
We construct and thoroughly study a new integrable example of the AdS/CFT correspondence with Schr\"{o}dinger symmetry. On the gravity side, the supergravity solution depends on two parameters and is obtained by marginally deforming the internal space of the Schr\"{o}dinger background through a series of TsT transformations. On the field theory side, we identify the dual field theory which also depends on two parameters. We find a point-like string solution and derive its dispersion relation. A non-trivial test of the correspondence is provided by using the Landau-Lifshitz coherent state approach to reproduce the leading, in the deformation parameters, terms of that relation. Then, we calculate the Wilson loop, describing the quark/anti-quark potential at strong coupling. It exhibits confining behaviour when the separation length is much less than the Schr\"{o}dinger parameter. When the separation length is much greater than the Schr\"{o}dinger parameter the behaviour is that of a conformal theory. Subsequently, we take the Penrose limit along a certain null geodesic of the constructed background and calculate the bosonic spectrum. Based on that spectrum, we make an educated guess for the exact, in the 't Hooft coupling, dispersion relation of the magnon excitations in the original doubly deformed background. This provides us with an exact prediction for the dimensions of the dual field theory operators.
We postulate a new type of operator algebra with a non-abelian extension. This algebra generalizes the Kac--Moody algebra in string theory and the Mickelsson--Faddeev algebra in three dimensions to higher-dimensional extended objects ($p$-branes). We then construct new BRST operators, covariant derivatives and curvature tensors in the higher-dimensional generalization of loop space.
Yang's theorem states that an initial J=1 state cannot decay into two photons, ruling out certain reactions involving elementary particles or atomic transitions. The theorem, however, does not hold in the presence of background electric or magnetic fields. In this work we show that the decay of a J=1 particle into two photons is permitted by Bose symmetry and rotational invariance when the background of the decay process is not pure vacuum but contains an external classical magnetic or electric field. We also discuss constraints on these amplitudes from {\bf CP} invariance.
Zero-knowledge circuits are sets of equality constraints over arithmetic expressions interpreted in a prime field; they are used to encode computations in cryptographic zero-knowledge proofs. We make the following contributions to the problem of ensuring that a circuit correctly encodes a computation: a formal framework for circuit correctness; an ACL2 library for prime fields; an ACL2 model of the existing R1CS (Rank-1 Constraint Systems) formalism to represent circuits, along with ACL2 and Axe tools to verify circuits of this form; a novel PFCS (Prime Field Constraint Systems) formalism to represent hierarchically structured circuits, along with an ACL2 model of it and ACL2 tools to verify circuits of this form in a compositional and scalable way; verification of circuits, ranging from simple to complex; and discovery of bugs and optimizations in existing zero-knowledge systems.
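As a toy illustration of the R1CS formalism mentioned above, the following Python sketch checks constraints of the form $(a \cdot z)(b \cdot z) = c \cdot z \pmod p$ for a witness vector $z$; the prime and the sample circuit are illustrative, not taken from the paper or its ACL2 library.

```python
# Minimal sketch of an R1CS constraint check over a prime field: each
# constraint asserts (a . z) * (b . z) = (c . z) mod p for witness z.
# The field size and toy constraint below are illustrative only.
P = 2**31 - 1  # a small prime; real systems use ~255-bit primes

def dot(v, z, p=P):
    return sum(vi * zi for vi, zi in zip(v, z)) % p

def r1cs_holds(constraints, z, p=P):
    return all(dot(a, z, p) * dot(b, z, p) % p == dot(c, z, p)
               for a, b, c in constraints)

# Toy circuit encoding y = x * x, with witness z = (1, x, y):
constraints = [([0, 1, 0], [0, 1, 0], [0, 0, 1])]
print(r1cs_holds(constraints, [1, 3, 9]))   # True:  9 == 3 * 3
print(r1cs_holds(constraints, [1, 3, 10]))  # False: 10 != 3 * 3
```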
Prokaryotic gene prediction plays an important role in understanding the biology of organisms and their function, with applications in medicine and biotechnology. Although current gene finders are highly sensitive in finding long genes, their sensitivity decreases noticeably for shorter genes (<180 nts). The culprit is insufficient annotated gene data for identifying distinguishing features in short open reading frames (ORFs). We develop a deep learning-based method called ProtiGeno, specifically targeting short prokaryotic genes, using a protein language model trained on millions of evolved proteins. In systematic large-scale experiments on 4,288 prokaryotic genomes, we demonstrate that ProtiGeno predicts short coding and noncoding genes with higher accuracy and recall than the current state-of-the-art gene finders. We discuss the predictive features of ProtiGeno and its possible limitations by visualizing the three-dimensional structures of the predicted short genes. Data, code, and models are available at https://github.com/tonytu16/protigeno.
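A hedged sketch of the general idea (not ProtiGeno's actual pipeline): embed a translated ORF with an off-the-shelf protein language model and feed the pooled embedding to a downstream coding/noncoding classifier. The ESM-2 checkpoint named here is a stand-in assumption.

```python
# Sketch only: ProtiGeno's actual model and classifier head are not
# reproduced here; ESM-2 serves as a stand-in protein language model.
import torch
from transformers import AutoTokenizer, AutoModel

name = "facebook/esm2_t6_8M_UR50D"  # assumed available checkpoint
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

orf_protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # hypothetical translation
with torch.no_grad():
    out = model(**tok(orf_protein, return_tensors="pt"))
embedding = out.last_hidden_state.mean(dim=1)  # mean-pooled sequence embedding
# `embedding` would then go to a binary coding/noncoding classifier.
print(embedding.shape)
```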
We show that the usual Born-Oppenheimer type of approximation used in quantum gravity, in which a semiclassical time parameter emerges from a weak-coupling expansion of the Wheeler-DeWitt constraint, leads to a unitary theory at least up to the next-to-leading order in minisuperspace models. As there are no unitarity-violating terms, this settles the issue of unitarity at this order, which has been much debated in the literature. Furthermore, we also show that the conserved inner product is gauge-fixed in the sense that the measure is related to the Faddeev-Popov determinant associated with the choice of semiclassical time as a reparametrization gauge. This implies that the Born-Oppenheimer approach to the problem of time is, in fact, an instance of a relational quantum theory, in which transition amplitudes can be related to conditional probabilities.
The variation of the solar diameter in time and in position angle has implications for astrophysics and general relativity, as a long series of studies attests. The Transits of Venus in 2004 and 2012 have been carefully studied because of the rarity of the phenomenon and its historical importance for the measurement of the astronomical unit and the discovery of Venus' atmosphere. The characterization of Venus' atmosphere and the measurement of the solar diameter to milliarcsecond precision have also been studied using satellite images. The results of the solar diameter measurements made with observations in Athens (2004) and at the Huairou Solar Observing Station in China (2012) are presented. The topic of the oblateness of the Sun at sunset and its intrinsic value is outlined to introduce the general public to the relativistic relevance of measuring the solar figure, on the occasion of the International Year of Light 2015.
Theoretical and experimental investigations of the interaction of water vapor with porous materials are carried out both at the macro level and at the micro level. At the macro level, the influence of the arrangement of individual pores on the interaction of water vapor with the porous material as a continuous medium is studied. At the micro level, it is of great interest to investigate how the characteristics of the interaction of water vapor with porous media depend on the geometry and dimensions of an individual pore. In this paper, the interaction of water vapor with an individual pore of cylindrical type is studied by means of mathematical modelling. The calculations were performed using a hybrid model combining a molecular-dynamics and a macro-diffusion approach to describe the interaction of water vapor with an individual pore. The evolution toward thermodynamic equilibrium of macroscopic characteristics of the system, such as temperature, density, and pressure, was explored as a function of the conditions external to the pore. The dependence of the evolution parameters on the distribution of the diffusion coefficient in the pore, obtained from molecular dynamics modelling, is examined. The relevance of these studies stems from the fact that all methods and programs used for modelling moisture and heat conductivity are based on transport equations in a porous material treated as a continuous medium, with transport coefficients that are usually obtained experimentally.
The 3D Human Pose Estimation (3D HPE) task uses 2D images or videos to predict human joint coordinates in 3D space. Despite recent advancements in deep learning-based methods, they mostly ignore the capability of coupling accessible texts and naturally feasible knowledge of humans, missing out on valuable implicit supervision to guide the 3D HPE task. Moreover, previous efforts often study this task from the perspective of the whole human body, neglecting fine-grained guidance hidden in different body parts. To this end, we present a new Fine-Grained Prompt-Driven Denoiser based on a diffusion model for 3D HPE, named \textbf{FinePOSE}. It consists of three core blocks enhancing the reverse process of the diffusion model: (1) Fine-grained Part-aware Prompt learning (FPP) block constructs fine-grained part-aware prompts via coupling accessible texts and naturally feasible knowledge of body parts with learnable prompts to model implicit guidance. (2) Fine-grained Prompt-pose Communication (FPC) block establishes fine-grained communications between learned part-aware prompts and poses to improve the denoising quality. (3) Prompt-driven Timestamp Stylization (PTS) block integrates learned prompt embedding and temporal information related to the noise level to enable adaptive adjustment at each denoising step. Extensive experiments on public single-human pose estimation datasets show that FinePOSE outperforms state-of-the-art methods. We further extend FinePOSE to multi-human pose estimation. Achieving 34.3mm average MPJPE on the EgoHumans dataset demonstrates the potential of FinePOSE to deal with complex multi-human scenarios. Code is available at https://github.com/PKU-ICST-MIPL/FinePOSE_CVPR2024.
High-precision calculations of hadron spectroscopy are a crucial task for Lattice QCD. State-of-the-art techniques are needed to disentangle the contributions from different energy states, such as solving the generalized eigenvalue problem (GEVP) for zero-momentum hadron correlators in an efficient way. We review the method and discuss its application in the determination of the $B_s$-meson spectrum using (quenched) nonperturbative HQET at order $1/m_b$.
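For reference, the GEVP in question is the standard one for a matrix of zero-momentum correlators $C_{ij}(t)$, with effective energies extracted from the generalized eigenvalues (conventions as in the Luscher-Wolff formulation):

\[
C(t)\, v_n(t, t_0) = \lambda_n(t, t_0)\, C(t_0)\, v_n(t, t_0),
\qquad
a E_n^{\mathrm{eff}}(t, t_0) = \ln \frac{\lambda_n(t, t_0)}{\lambda_n(t + a, t_0)} ,
\]

where the $E_n^{\mathrm{eff}}$ converge to the energy levels of the spectrum at large times.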
The amplitudes for one-pion-mediated transitions between heavy meson excited states are obtained in the framework of the relativistic chiral quark model. The effective coupling constants to pions and the decay widths of excited heavy mesons are presented, for non-radially excited states with l<=2 and for l=0 radially excited states, for both charmed and beauty mesons. We also discuss the allowed decays of strange excited heavy mesons by emission of a K-meson.
Circadian clocks play a pivotal role in orchestrating numerous physiological and developmental events. The waveform shapes of the oscillations of protein abundances can be informative about the underlying biochemical processes of circadian clocks. We derive a mathematical framework in which waveforms reveal hidden biochemical mechanisms of circadian timekeeping. We find that the cost of synthesizing proteins with particular waveforms can be substantially reduced by protein half-lives that vary rhythmically over time, as supported by previous plant and mammalian data as well as our own seedling experiment. We also find that the previously enigmatic cyclic expression of positive-arm components within the mammalian and insect clocks allows both a broad range of peak-time differences between protein waveforms and symmetry of the waveforms about their peak times. Such varied peak-time differences may facilitate tissue-specific or developmental-stage-specific multicellular processes. Our waveform-guided approach can be extended to various biological oscillators, including cell-cycle and synthetic genetic oscillators.
The formalism for the quantization and evolution of cosmological perturbations of multiple fluids is described. We first construct the Lagrangian for both the gravitational and matter parts, providing the relevant variables and momenta leading to the quadratic Hamiltonian describing linear perturbations. The final Hamiltonian is obtained without assuming any equations of motion for the background variables. This general formalism is applied to the special case of two fluids, having in mind the usual radiation and matter mix that dominates most of our Universe's history. Quantization is achieved using an adiabatic expansion of the basis functions. This allows for an unambiguous definition of a vacuum state up to the given adiabatic order. Using this basis, we show that particle creation is well defined for a suitable choice of vacuum and canonical variables, so that the time evolution of the corresponding quantum fields is unitary. This provides constraints for setting initial conditions for an arbitrary number of fluids and background time evolutions. We also show that the common choice of variables for quantization can lead to an ill-defined vacuum. Our formalism is not restricted to the case where the coupling between fields is small; the coupling is only required to vary adiabatically with respect to the ultraviolet modes, thus paving the way to consistent descriptions of general models not restricted to a single field (or fluid).
The introduction of ChatGPT has led to a significant increase in the utilization of Large Language Models (LLMs) for addressing downstream tasks, with a growing focus on cost-efficient training and deployment. Low-cost training and deployment of LLMs represent the future development trend. This paper reviews the evolution of large language model training techniques and inference deployment technologies aligned with this emerging trend. The discussion on training covers various aspects, including data preprocessing, training architecture, pre-training tasks, parallel training, and relevant content related to model fine-tuning. On the inference side, the paper covers topics such as model compression, parallel computation, memory scheduling, and structural optimization. It also explores LLM applications and provides insights into their future development.
Using Spitzer Space Telescope photometric observations of the eclipsing, interacting binary WZ Sge, we have discovered that the accretion disk is far more complex than previously believed. Our 4.5 and 8 micron time series observations reveal that the well known gaseous accretion disk is surrounded by an asymmetric disk of dusty material with a radius approximately 15 times larger than the gaseous disk. This dust ring contains only a small amount of mass and is completely invisible at optical and near-IR wavelengths, hence consisting of "dark matter". We have produced a model dust ring using 1 micron spherical particles with a density of 3 g/cm$^3$ and with a temperature profile ranging from 700-1500K. Our discovery about the accretion disk structure and the presence of a larger, outer dust ring have great relevance for accretion disks in general, including those in other interacting binary systems, pre-main sequence stars, and active galaxies.
In this work we present the $\alpha'$-exact background equations of motion of the bosonic chiral string (also known as Hohm-Siegel-Zwiebach model), with the spin two ghost fields integrated out. This is the first instance of a worldsheet model in which all corrections are fully determined in a generic curved spacetime. As a concrete cross-check, we find complete agreement between all three-point and a sample of four-point tree level scattering amplitudes computed using field theory methods and the chiral string prescription. These equations of motion provide a field theoretical shortcut to compute worldsheet correlators in conventional bosonic strings (with arbitrary number of massless and mass level one states), and outline a new perspective on massive resonances in string theory.
We present a solution to the Burnside Problem for 2 generator groups of prime-power exponent that does not rely on induced maps as in [2]. As before, we construct a surjective map of a rank 2 free group to a solvable group G and finish by showing that the Burnside group is an image of G. Theorem B in the paper with H. A. Heilbronn and H. Y. Mochizuki [9] is indispensable in the proof.
Commercial and industrial deployments of robot fleets at Amazon, Nimble, Plus One, Waymo, and Zoox query remote human teleoperators when robots are at risk or unable to make task progress. With continual learning, interventions from the remote pool of humans can also be used to improve the robot fleet control policy over time. A central question is how to effectively allocate limited human attention. Prior work addresses this in the single-robot, single-human setting; we formalize the Interactive Fleet Learning (IFL) setting, in which multiple robots interactively query and learn from multiple human supervisors. We propose Return on Human Effort (ROHE) as a new metric and Fleet-DAgger, a family of IFL algorithms. We present an open-source IFL benchmark suite of GPU-accelerated Isaac Gym environments for standardized evaluation and development of IFL algorithms. We compare a novel Fleet-DAgger algorithm to 4 baselines with 100 robots in simulation. We also perform a physical block-pushing experiment with 4 ABB YuMi robot arms and 2 remote humans. Experiments suggest that the allocation of humans to robots significantly affects the performance of the fleet, and that the novel Fleet-DAgger algorithm can achieve up to 8.8x higher ROHE than baselines. See https://tinyurl.com/fleet-dagger for supplemental material.
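The abstract names but does not define ROHE; as a loudly flagged assumption, one plausible reading is cumulative fleet task reward per unit of human effort, sketched below. The paper's actual definition may differ.

```python
# Hypothetical sketch only: ROHE is assumed here to be total fleet task
# reward divided by total human intervention effort; this is a guess at
# the metric, not the definition from the Fleet-DAgger paper.
def rohe(task_rewards, human_action_counts):
    """task_rewards: per-robot cumulative task reward;
    human_action_counts: per-human number of intervention actions."""
    total_reward = sum(task_rewards)
    total_effort = max(sum(human_action_counts), 1)  # avoid divide-by-zero
    return total_reward / total_effort

# Toy fleet of 3 robots supervised by 2 humans:
print(rohe([12.0, 9.5, 14.1], [30, 25]))
```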
In this paper we propose a process of Lagrangian reduction and reconstruction for symmetric discrete-time mechanical systems acted on by external forces, where the symmetry group action on the configuration manifold turns it into a principal bundle. We analyze the evolution of momentum maps and Poisson structures under different conditions.
We introduce two extensions of the Segal-Bargmann coherent state transform from $L^2({\mathbb R},dx)$ to Hilbert spaces of slice monogenic and axial monogenic functions and study their properties. These two transforms are related by the dual Radon transform. Representation theoretic and quantum mechanical aspects of the new representations are studied.
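For orientation, the classical Segal-Bargmann transform on $L^2(\mathbb{R},dx)$ that is being extended reads, in one common normalization (conventions vary across the literature):

\[
(Bf)(z) = \pi^{-1/4} \int_{\mathbb{R}}
\exp\!\left(-\tfrac{z^2}{2} - \tfrac{x^2}{2} + \sqrt{2}\, z x\right) f(x)\, dx ,
\qquad z \in \mathbb{C} .
\]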
We give arguments that in the 1+1 dimensional abelian Higgs model the classical approximation can be good for the leading high temperature behavior of real time processes. The Chern-Simons diffusion rate (`sphaleron rate') is studied numerically in this approximation. New results at high temperature show a $T^{2/3}$ behavior of the rate at sufficiently small lattice spacing.
We examine the effect of spatial resolution on initial mass ejection in grid-based hydrodynamic simulations of binary neutron star mergers. The subset of the dynamical ejecta with velocities greater than $\sim 0.6$c can generate an ultraviolet precursor to the kilonova on $\sim$hr timescales and contribute to a years-long non-thermal afterglow. Previous work has found differing amounts of this fast ejecta, by one- to two orders of magnitude, when using particle-based or grid-based hydrodynamic methods. Here we carry out a numerical experiment that models the merger as an axisymmetric collision in a co-rotating frame, accounting for Newtonian self-gravity, inertial forces, and gravitational wave losses. The lower computational cost allows us to reach spatial resolutions as high as $4$m, or $\sim 3\times 10^{-4}$ of the stellar radius. We find that fast ejecta production converges to within $10\%$ for a cell size of $20$m. This suggests that fast ejecta quantities found in existing grid-based merger simulations are unlikely to increase to the level needed to match particle-based results upon further resolution increases. The resulting neutron-powered precursors are in principle detectable out to distances $\lesssim 200$Mpc with upcoming facilities. We also find that head-on collisions at the free-fall speed, relevant for eccentric mergers, yield fast and slow ejecta quantities of order $10^{-2}M_\odot$, with a kilonova signature distinct from that of quasi-circular mergers.
Results of systematic measurements of Sr-90 activity concentrations in milk for the period 1961 - 2001 are summarized. An exponential decline of radioactivity followed the moratorium on atmospheric nuclear testing. The highest activity of Sr-90 deposited by fallout, being 1060 Bq/m2, was recorded in 1963, while the peak Sr-90 activity concentration in milk, 1.42 +/-0.17 Bq/L, was recorded in 1964. The values in year 2001 for fallout deposition and milk were 7.7 Bq/m2 and 0.07 +/- 0.03 Bq/L, respectively. The reactor accident at Chernobyl caused higher Sr-90 levels only in 1986. Sr-90 fallout activity affects milk activity, the coefficient of correlation between Sr-90 fallout activity and Sr-90 activity concentrations in milk being 0.80. The transfer coefficient from fallout deposition to milk was estimated to be 2.5 mBqy/L per Bq/m2. The dose incurred by milk consumption was estimated for the Croatian population, the annual collective effective dose in 2001 being approximately 2.0 man-Sv.
Molecular dynamics (MD) simulations are used to determine the diffusion coefficients, electrophoretic mobilities and electrical conductivity of a charged colloidal suspension in the salt-free regime as a function of the colloid charge. The behavior of the colloidal particles' diffusion constant can be well understood in terms of two coupled effects: counterion 'condensation' and slowdown due to the relaxation effect. We find that the conductivity exhibits a maximum which approximately separates the regimes of counterion-dominated and colloid-dominated conduction. We analyze the electrophoretic mobilities and the conductivity in terms of commonly employed assumptions about the role of "free" and "condensed" counterions, and discuss different interpretations of this approach.
As a rising star in the field of natural language processing, question answering systems (Q&A systems) are widely used in all walks of life. Compared with other scenarios, applications in the financial domain have strong requirements on the traceability and interpretability of Q&A systems. In addition, since the demand for artificial intelligence technology has gradually shifted from the initial computational intelligence to cognitive intelligence, this research focuses on the financial numerical reasoning dataset FinQA. In the shared task, the objective is to generate the reasoning program and the final answer from a given financial report containing text and tables. We use a method based on the DeBERTa pre-trained language model, with additional optimizations including multi-model fusion and training-set combination. We obtain an execution accuracy of 68.99 and a program accuracy of 64.53, ranking No. 4 in the 2022 FinQA Challenge.
Open Source Software (OSS) often relies on large repositories, like SourceForge, for initial incubation. OSS repositories offer a large variety of meta-data providing interesting information about projects and their success. In this paper we propose a data mining approach for training classifiers on the OSS meta-data provided by such repositories. The classifiers learn to predict the successful continuation of an OSS project. The `successfulness' of projects is defined in terms of the confidence with which the classifier predicts that they could be ported into popular OSS projects (such as FreeBSD, Gentoo Portage).
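A minimal sketch of such a pipeline, with hypothetical meta-data features and a stand-in learner; the paper's actual feature set and classifiers may differ.

```python
# Generic sketch of the described approach: train a classifier on project
# meta-data to predict successful continuation. The feature names and the
# RandomForest choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical meta-data: [num_developers, downloads, age_days, num_releases]
X = rng.random((200, 4))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # stand-in "success" label

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
# clf.predict_proba(...) then yields the confidence used to rank projects.
```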
The optical filaments found in many cooling flows in galaxy clusters consist of low-density ($\sim 10^3\,\mathrm{cm^{-3}}$) cool ($\sim 10^3$ K) gas surrounded by significant amounts of cosmic-ray and magnetic-field energy. Their spectra show anomalously strong low-ionization and molecular emission lines when compared with galactic molecular clouds exposed to ionizing radiation, such as the Orion complex. Previous studies have shown that the spectra cannot be produced by O-star photoionization. Here we calculate the physical conditions in dusty gas that is well shielded from external sources of ionizing photons and is energized either by cosmic rays or by dissipative MHD waves. Strong molecular hydrogen lines, with relative intensities similar to those observed, are produced. Selection effects introduced by the microphysics produce a correlation between the $\mathrm{H}_2$ line upper-level energy and the population temperature. These selection effects allow a purely collisional gas to produce $\mathrm{H}_2$ emission that masquerades as starlight-pumped $\mathrm{H}_2$ but with intensities that are far stronger. This physics may find application to any environment where a broad range of gas densities or heating rates occurs.
Recent developments in multiscale computation allow the solution of ``coarse equations'' for the expected macroscopic behavior of microscopically/stochastically evolving particle distributions without ever obtaining these coarse equations in closed form. The closure is obtained ``on demand'' through appropriately initialized bursts of microscopic simulation. The effective coupling of microscopic simulators with macroscopic behavior embodied in this approach requires certain decisions about the nature of the unavailable ``coarse equation''. Such decisions include (a) the determination of the highest spatial derivative active in the equation, (b) whether the coarse equation satisfies certain conservation laws, and (c) whether the coarse dynamics is Hamiltonian. These decisions affect the number and type of boundary conditions as well as the nature of the algorithms employed in good solution practice. In the absence of an explicit formula for the temporal derivative, we propose, implement and validate a simple scheme for deciding these and other similar questions about the coarse equation using only the microscopic simulator. Microscopic simulations under periodic boundary conditions are carried out for appropriately chosen families of random initial conditions; evaluating the sample variance of certain statistics over the simulation ensemble allows us to infer the highest order of spatial derivatives active in the coarse equation. In the same spirit we show how to determine whether a certain coarse conservation law exists or not, and we discuss plausibility tests for the existence of a coarse Hamiltonian or integrability. We argue that such schemes constitute an important part of the equation-free approach to multiscale computation.
Interest in van der Waals materials often stems from a desire to miniaturise existing technologies by exploiting their intrinsic layered structure to create near-atomically-thin components that do not suffer from surface defects. One appealing property is easily switchable yet robust magnetic order, a quality only sparsely demonstrated in the case of in-plane anisotropy. In this work, we use widefield nitrogen-vacancy (NV) center magnetic imaging to measure the properties of individual flakes of CuCrP$_2$S$_6$, a multiferroic van der Waals magnet known to exhibit weak easy-plane anisotropy in the bulk. We chart the crossover between in-plane ferromagnetism in thin flakes, down to the trilayer, and the bulk behaviour dominated by a low-field spin-flop transition. Further, by exploiting the directional dependence of NV center magnetometry, we observe an instance of a predominantly out-of-plane ferromagnetic phase near zero field, contradicting expectations and previous experiments on the bulk material. We attribute this to surface anisotropies arising from the sample preparation process or exposure to the ambient environment, which is expected to have more general implications for a broader class of weakly anisotropic van der Waals magnets.
Mobile video consumption is increasing, and sophisticated video quality adaptation strategies are required to deal with mobile throughput fluctuations. These adaptation strategies have to keep the switching frequency low and the average quality high, and must prevent stalling, to ensure customer satisfaction. This paper proposes a novel methodology, named HASBRAIN, for the design of machine learning-based adaptation logics, and evaluates the performance of a trained neural network against two algorithms from the literature. We first use a modified existing optimization formulation to calculate optimal adaptation paths with a minimum number of quality switches for a wide range of videos and for challenging mobile throughput patterns. We then use the resulting optimal adaptation paths to train and compare different machine learning models. The evaluation shows that an artificial neural network-based model can reach a high average quality with a low number of switches in the mobile scenario. The proposed methodology is general enough to be extended to further designs of machine learning-based algorithms, and the provided model can be deployed in on-demand streaming scenarios or further refined using reward-based mechanisms such as reinforcement learning. All tools, models and datasets created during this work are provided as open-source software.
A Massey-like inequality is any useful lower bound on guessing entropy in terms of the computationally scalable Shannon entropy. The asymptotically optimal Massey-like inequality is determined and further refined for finite-support distributions. The impact of these results is highlighted for side-channel attack evaluation, where guessing entropy is a key metric. In this context, the obtained bounds are compared to the state of the art.
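The classical instance of such a bound is Massey's 1994 inequality, valid whenever $H(X) \ge 2$ bits:

\[
G(X) \;\ge\; 2^{H(X)-2} + 1 ,
\]

where $G(X) = \sum_i i\, p_i$ is the guessing entropy, with the probabilities $p_i$ sorted in decreasing order; refinements of the kind studied here tighten bounds of this type.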
Based on symmetry considerations of migration and shape deformations, we formulate phenomenologically the dynamics of cell crawling in two dimensions. Forces are introduced to change the cell shape. The shape deformations induce migration of the cell on a substrate. For time-independent forces we show that not only stationary motion but also limit-cycle oscillations of the migration velocity and the shape occur as a result of nonlinear coupling between different deformation modes. Time-dependent forces are generated in a stochastic manner by utilizing the so-called coherence resonance of an excitable system. The present coarse-grained model is flexible enough to be applied, e.g., both to keratocyte cells and to Dictyostelium cells, which exhibit quite different dynamics from each other. The key factors for the motile behavior inherent in each cell type are identified in our model.
Time-resolved angle-resolved photoemission spectroscopy is one of the most powerful pump-probe measurements of materials driven far from equilibrium. Unlike the linear-response regime, where the frequency-dependent response function is independent of time, in a far-from-equilibrium experiment the response function depends on two times. In this work, we describe how time-dependent frequency response functions can be used and how they involve contributions from times that are close to each other. This implies that they should not be thought of as a frequency-dependent response at a single definite time. Instead, the Fourier uncertainty relations show that they involve contributions from ranges of times and must be interpreted in this light. We use this insight to help understand what time-resolved photoemission measurements actually measure.
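One standard way to make this precise is the Wigner transform of the two-time response function $\chi(t_1, t_2)$, which trades the two times for an average time and a frequency (specific conventions in time-resolved ARPES analyses may differ):

\[
\chi(\omega, t_{\mathrm{ave}}) = \int_{-\infty}^{\infty} dt_{\mathrm{rel}}\;
e^{i\omega t_{\mathrm{rel}}}\,
\chi\!\left(t_{\mathrm{ave}} + \tfrac{t_{\mathrm{rel}}}{2},\,
t_{\mathrm{ave}} - \tfrac{t_{\mathrm{rel}}}{2}\right),
\]

so that the value at a given $t_{\mathrm{ave}}$ necessarily mixes in contributions from neighboring times through the integral over $t_{\mathrm{rel}}$.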
Black-box variational inference performance is sometimes hindered by the use of gradient estimators with high variance. This variance comes from two sources of randomness: Data subsampling and Monte Carlo sampling. While existing control variates only address Monte Carlo noise, and incremental gradient methods typically only address data subsampling, we propose a new "joint" control variate that jointly reduces variance from both sources of noise. This significantly reduces gradient variance, leading to faster optimization in several applications.
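A minimal NumPy sketch of the generic control-variate idea (not the paper's specific "joint" construction): subtracting a correlated, known-mean quantity leaves the estimator unbiased while shrinking its variance.

```python
# Generic control variate: for estimates g and a correlated quantity h
# with known mean, g_cv = g - c*(h - E[h]) has the same expectation as g
# but lower variance when c is chosen as Cov(g, h) / Var(h).
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)
g = np.exp(0.1 * z) + z   # noisy estimates of some quantity of interest
h = z                     # control variate: correlated with g, known mean 0

c = np.cov(g, h)[0, 1] / np.var(h)  # (near-)optimal scaling coefficient
g_cv = g - c * (h - 0.0)

print(g.mean(), g_cv.mean())  # same expectation (approximately)
print(g.var(), g_cv.var())    # substantially smaller variance
```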
Graphs are a fundamental data structure used to represent relationships in domains as diverse as the social sciences, bioinformatics, cybersecurity, the Internet, and more. One of the central observations in network science is that real-world graphs are globally sparse, yet contain numerous "pockets" of high edge density. A fundamental task in graph mining is to discover these dense subgraphs. Most common formulations of the problem involve finding a single (or a few) "optimally" dense subsets. But in most real applications, optimality is not the goal. Instead, we want to find a large collection of dense subsets that covers a significant fraction of the input graph. We give a mathematical formulation of this problem, using a new definition of regularly triangle-rich (RTR) families. These families capture the notion of dense subgraphs that contain many triangles and have degrees comparable to the subgraph size. We design a provable algorithm, RTRExtractor, that can discover RTR families that approximately cover any RTR set. The algorithm is efficient and is inspired by recent results that use triangle counts for community testing and clustering. We show that RTRExtractor has excellent behavior on a large variety of real-world datasets. It is able to process graphs with hundreds of millions of edges within minutes. Across many datasets, RTRExtractor achieves high coverage with subgraphs of high edge density. For example, the output covers a quarter of the vertices with subgraphs of edge density more than (say) $0.5$, for datasets with 10M+ edges. We show an example of how the output of RTRExtractor correlates with meaningful sets of similar vertices in a citation network, demonstrating the utility of RTRExtractor for unsupervised graph discovery tasks.
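A small sketch of the two quantities underlying the RTR notion, edge density and per-vertex triangle counts, using networkx; the precise thresholds of the RTR definition are in the paper and are not reproduced here, and the graph and vertex subset below are stand-ins.

```python
# Compute the two ingredients of "regularly triangle-rich" sets for a
# candidate vertex subset: subgraph edge density and triangle counts.
# The input graph and subset are illustrative placeholders.
import networkx as nx

G = nx.karate_club_graph()       # stand-in input graph
S = [0, 1, 2, 3, 7, 13]          # hypothetical candidate subset
H = G.subgraph(S)

density = nx.density(H)          # 2|E(H)| / (|S| (|S| - 1))
triangles = nx.triangles(H)      # triangles per vertex, counted within S
print(density, triangles)
```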