Acoustic side-channel attacks on keyboards can bypass security measures in many systems that use keyboards as one of their input devices. These attacks aim to reveal users' sensitive information by targeting the sounds made by their keyboards as they type. Most existing approaches in this field ignore the negative impact of typing patterns and environmental noise on their results. This paper addresses these shortcomings by proposing a practical method that takes the user's typing pattern into account in a realistic environment. Our method achieved an average success rate of 43% across all our case studies when considering real-world scenarios.
Planets are born in protostellar disks, which are now observed with enough resolution to address questions about internal gas flows. Candidates for driving the flows include magnetic forces, but ionization state estimates suggest much of the gas mass decouples from magnetic fields. Thus, hydrodynamical instabilities could play a major role. We investigate disk dynamics under conditions typical for a T Tauri system, using global 3D radiation hydrodynamics simulations with embedded particles and a resolution of 70 cells per scale height. Stellar irradiation heating is included with realistic dust opacities. The disk starts in joint radiative balance and hydrostatic equilibrium. The vertical shear instability (VSI) develops into turbulence that persists up to at least 1600 inner orbits (143 outer orbits). Turbulent speeds are a few percent of the local sound speed at the midplane, increasing to 20%, or 100 m/s, in the corona. These are consistent with recent upper limits on turbulent speeds from optically thin and thick molecular line observations of TW Hya and HD 163296. The predominantly vertical motions induced by the VSI efficiently lift particles upwards. Grains 0.1 and 1 mm in size achieve scale heights greater than expected in isotropic turbulence. We conclude that while kinematic constraints from molecular line emission do not directly discriminate between magnetic and nonmagnetic disk models, the small dust scale heights measured in HL Tau and HD 163296 favor turbulent magnetic models, which reach lower ratios of the vertical kinetic energy density to the accretion stress.
We discuss the axiomatic basis of quantum mechanics and show that it is neither general nor consistent, since its axioms are incompatible with each other and, moreover, it does not incorporate magnetic quantization such as that of cyclotron motion. A general and consistent system of axioms is conjectured which also incorporates magnetic quantization.
In this paper we exploit the ideas and formalisms of twistor theory to show that, on Minkowski space, given a null solution of the wave equation, there are precisely two null directions in $\ker df$, at least one of which defines a shear-free ray congruence.
Multi-frame high dynamic range (HDR) imaging aims to reconstruct ghost-free images with photo-realistic details from content-complementary but spatially misaligned low dynamic range (LDR) images. Existing HDR algorithms are prone to producing ghosting artifacts because they fail to capture long-range dependencies between LDR frames with large motion in dynamic scenes. To address this issue, we propose a novel image fusion transformer, referred to as IFT, which comprises a fast global patch searching (FGPS) module followed by a self-cross fusion (SCF) module for ghost-free HDR imaging. The FGPS searches the supporting frames for the patches with the closest dependency to each patch of the reference frame, enabling long-range dependency modeling, while the SCF conducts intra-frame and inter-frame feature fusion on the patches obtained by the FGPS with complexity linear in the input resolution. By matching similar patches between frames, objects with large motion ranges in dynamic scenes can be aligned, which effectively alleviates the generation of artifacts. In addition, the proposed FGPS and SCF can be integrated into various deep HDR methods as efficient plug-in modules. Extensive experiments on multiple benchmarks show that our method achieves state-of-the-art performance both quantitatively and qualitatively.
We consider a many-body generalization of the Kapitza pendulum: the periodically driven sine-Gordon model. We show that this interacting system is dynamically stable to periodic drives with finite frequency and amplitude. This finding is in contrast to the common belief that periodically driven unbounded interacting systems should always tend to an absorbing infinite-temperature state. The transition to an unstable absorbing state is described by a change in the sign of the kinetic term in the effective Floquet Hamiltonian and controlled by the short-wavelength degrees of freedom. We investigate the stability phase diagram through an analytic high-frequency expansion, a self-consistent variational approach, and numerical semiclassical calculations. Classical and quantum experiments are proposed to verify the validity of our results.
The natural mortality (M) and purse-seine catchability and selectivity were estimated for Trachurus novaezelandiae Richardson, 1843 (yellowtail scad), a small inshore pelagic species harvested off south-eastern Australia. Hazard functions were applied to two decades of data describing catches (mostly stable at a mean ± SE of 315 ± 14 t p.a.) and effort (declining from a maximum of 2289 to 642 boat days between 1999/00 and 2015/16), interspersed (over nine years) with annual estimates of size-at-age (0+ to 18 years), to enable survival analysis. The data were best described by a model with eight parameters, including catchability (estimated at < 0.1 × 10^-7 boat day^-1), M (0.22 year^-1) and variable age-specific selection up to 6 years, with 50% retention among 5-year-olds (larger than the estimated age at maturation). The low catchability implied minimal fishing mortality by the purse-seine fleet. Ongoing monitoring and applied gear-based studies are required to validate purse-seine catchability and selectivity, but the data nevertheless imply T. novaezelandiae could incur substantial additional fishing effort and, in doing so, alleviate pressure on other regional small pelagics.
The quality of today's research is often tightly limited by the available computing power and the scalability of codes to many processors. For example, tackling the problem of heating the solar corona requires a most realistic description of the plasma dynamics and the magnetic field. Numerically solving such a magneto-hydrodynamical (MHD) description of a small active region (AR) on the Sun requires millions of computation hours on current high-performance computing (HPC) hardware. The aim of this work is to describe methods for an efficient parallelization of boundary conditions and data input/output (IO) strategies that allow for better scaling towards thousands of processors (CPUs). The Pencil Code is tested before and after optimization to compare the performance and scalability of a coronal MHD model above an AR. We present a novel boundary condition for non-vertical magnetic fields in the photosphere, where we approach the realistic pressure increase below the photosphere. With that, magnetic flux bundles become narrower with depth and the flux density increases accordingly. The scalability is improved by more than one order of magnitude through the HPC-friendly boundary conditions and IO strategies. This work also describes the necessary nudging methods to drive the MHD model with observed magnetic fields from the Sun's photosphere. In addition, we present the upper and lower atmospheric boundary conditions (photospheric and towards the outer corona), including swamp layers to diminish perturbations before they reach the boundaries. Altogether, these methods enable more realistic 3D MHD simulations than previous models regarding the coronal heating problem above an AR -- simply because of the ability to use a large number of CPUs efficiently in parallel.
Most data analytics systems that require low-latency execution and efficient utilization of computing resources increasingly adopt two computational paradigms, namely, incremental and approximate computing. Incremental computation updates the output incrementally instead of re-computing everything from scratch for successive runs of a job with input changes. Approximate computation returns an approximate output for a job instead of the exact output. Both paradigms rely on computing over a subset of data items instead of computing over the entire dataset, but they differ in their means for skipping parts of the computation. Incremental computing relies on the memoization of intermediate results of sub-computations, and on reusing these memoized results across jobs for sub-computations that are unaffected by the changed input. Approximate computing relies on representative sampling of the entire dataset to compute over a subset of data items. In this thesis, we make the observation that these two computing paradigms are complementary, and can be married together! The high-level idea is to design a sampling algorithm that biases the sample selection towards the memoized data items from previous runs. To concretize this idea, we designed an online stratified sampling algorithm that uses self-adjusting computation to produce an incrementally updated approximate output with bounded error. We implemented our algorithm in a data analytics system called IncApprox based on Apache Spark Streaming. Our evaluation of the system shows that IncApprox achieves the benefits of both incremental and approximate computing.
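The core of the marriage is the biased sample selection. A minimal Python sketch of stratified sampling that prefers memoized items within each stratum; the function names, proportional-allocation rule, and membership test are illustrative assumptions, not IncApprox's actual implementation:

```python
import random
from collections import defaultdict

def stratified_sample(items, strata_of, memoized, budget):
    """Sketch: proportional stratified sampling that biases selection
    toward memoized items, so previously computed sub-results can be reused."""
    strata = defaultdict(list)
    for it in items:
        strata[strata_of(it)].append(it)
    sample = []
    for key, members in strata.items():
        k = max(1, round(budget * len(members) / len(items)))
        # Prefer memoized items within the stratum; fill the rest randomly.
        memo_hits = [it for it in members if it in memoized]
        fresh = [it for it in members if it not in memoized]
        take = memo_hits[:k]
        if len(take) < k:
            take += random.sample(fresh, min(k - len(take), len(fresh)))
        sample.append((key, take, len(members)))
    return sample  # per-stratum samples plus stratum sizes for reweighting
```

The stratum sizes are returned alongside the samples so that per-stratum estimates can be reweighted into an unbiased aggregate with an error bound, in the spirit of stratified estimators.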
In this paper we study the effect of the anisotropic stress generated by neutrinos on the propagation of primordial cosmological gravitational waves. The presence of anisotropic stress, like the one generated by free-streaming neutrinos, partially absorbs the gravitational waves (GWs) propagating across the Universe. We find that in the standard case of three neutrino families, 22% of the intensity of the wave is absorbed, in fair agreement with previous studies. We have also calculated the maximum possible amount of damping, corresponding to the case of a flat Universe completely dominated by ultrarelativistic collisionless particles. In this case 43% of the intensity of the wave is absorbed. Finally, we have taken into account the effect of collisions, using a simple form for the collision term parameterized by the mean time between interactions, which allows one to pass smoothly from the case of a tightly coupled fluid to that of a collisionless gas. The dependence of the absorption on the neutrino energy density and on the effectiveness of the interactions opens the interesting possibility of observing spectral features related to particular events in the thermal history of the Universe, like neutrino decoupling and electron-positron annihilation, both occurring at T~1 MeV. GWs entering the horizon at that time will have today a frequency $\nu \sim 10^{-9}\,\mathrm{Hz}$, a region that is going to be probed by Pulsar Timing Arrays.
We present the construction of several microstate geometries of the supersymmetric D1-D5-P black hole in which, within six-dimensional supergravity, the momentum charge is carried by a vector field. The fully backreacted geometries are smooth and horizonless: they are asymptotically AdS$_3 \times S^3$ with an AdS$_2$ throat that smoothly caps off. We propose a holographic dual for these bulk solutions and discuss their extension to asymptotically flat space. In addition, we present several uplifts of the full six-dimensional supersymmetric ansatz to ten dimensions. In particular, we show that there exists a frame in which geometries based on vector field momentum carriers are entirely in the NS sector of supergravity, making them possible starting points for the exploration of stringy black-hole microstates.
The dense environment of a galaxy cluster can radically transform the content of in-falling galaxies. Recent observations have found a significant population of active galactic nuclei (AGN) within "jellyfish galaxies," galaxies with trailing tails of gas and stars that indicate significant ram pressure stripping. The relationship between AGN and ram pressure stripping is not well understood. In this letter, we investigate the connection between AGN activity and ram pressure in a fully cosmological setting for the first time, using the RomulusC simulation, one of the highest resolution simulations of a galaxy cluster to date. We find unambiguous morphological evidence for ram pressure stripping. For lower mass galaxies (with stellar masses < 10^9.5 solar masses), both star formation and black hole accretion are suppressed by ram pressure before they reach pericenter, whereas for more massive galaxies accretion onto the black hole is enhanced during pericentric passage. Our analysis also indicates that, as long as the galaxy retains gas, AGN with higher Eddington ratios are more likely to be found in galaxies experiencing higher ram pressure. We conclude that prior to quenching star formation, ram pressure triggers enhanced accretion onto the black hole, which then produces heating and outflows due to AGN feedback. AGN feedback may in turn serve to aid in the quenching of star formation in tandem with ram pressure.
Using multi-wavelength imaging from the Wide Field Camera 3 on the Hubble Space Telescope we study the stellar cluster populations of two adjacent fields in the nearby face-on spiral galaxy, M83. The observations cover the galactic centre and reach out to ~6 kpc, thereby spanning a large range of environmental conditions, ideal for testing empirical laws of cluster disruption. The clusters are selected by visual inspection to be centrally concentrated, symmetric, and resolved on the images. We find that a large fraction of objects detected by automated algorithms (e.g. SExtractor or Daofind) are not clusters, but rather are associations. These are likely to disperse into the field on timescales of tens of Myr due to their lower stellar densities and not due to gas expulsion (i.e. they were never gravitationally bound). We split the sample into two discrete fields (inner and outer regions of the galaxy) and search for evidence of environmentally dependent cluster disruption. Colour-colour diagrams of the clusters, when compared to simple stellar population models, already indicate that a much larger fraction of the clusters in the outer field are older by tens of Myr than in the inner field. This impression is quantified by estimating each cluster's properties (age, mass, and extinction) and comparing the age/mass distributions between the two fields. Our results are inconsistent with "universal" age and mass distributions of clusters, and instead show that the ambient environment strongly affects the observed populations.
A wide range of mechanisms have been put forward to explain the quenching of star formation in galaxies with cosmic time; however, the true balance of responsible mechanisms remains unknown. The identification and study of galaxies that have shut down their star formation on different timescales might elucidate which mechanisms dominate at different epochs and masses. Here we study the population of rapidly quenched galaxies (RQGs) in the SIMBA cosmological hydrodynamic simulation at $0.5<z<2$, comparing directly to observed post-starburst galaxies in the UKIDSS Ultra Deep Survey (UDS) via their colour distributions and mass functions. We find that the fraction of quiescent galaxies that are rapidly quenched in SIMBA is 59% (or 48% in terms of stellar mass), which is higher than observed. A similar "downsizing" of RQGs is observed in both SIMBA and the UDS, with RQGs at higher redshift having a higher average mass. However, SIMBA produces too many RQGs at $1<z_q<1.5$ and too few low-mass RQGs at $0.5<z_q<1$. The precise colour distribution of SIMBA galaxies compared to the observations also indicates various inconsistencies in star formation and chemical enrichment histories, including an absence of short, intense starbursts. Our results will help inform the next generation of galaxy evolution models, particularly with respect to the quenching mechanisms employed.
Let G_p denote the tail function of Student's distribution with p degrees of freedom. It is shown that the ratio G_q(x)/G_p(x) is decreasing in x>0 for any p and q such that 0<p<q\le\infty. Therefore, G_q(x)<G_p(x) for all such p and q and all x>0. Corollaries on the monotonicity of (generalized) moments and ratios thereof are also given.
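The claimed monotonicity is straightforward to spot-check numerically with SciPy's Student t survival function; a minimal sketch (the particular degrees of freedom and grid are arbitrary choices):

```python
import numpy as np
from scipy.stats import t as student_t

# Numerical spot-check of the claim: G_q(x)/G_p(x) decreases in x > 0
# for 0 < p < q (here p = 2, q = 5).
p, q = 2.0, 5.0
x = np.linspace(0.01, 20, 500)
ratio = student_t.sf(x, q) / student_t.sf(x, p)   # G = tail (survival) function
assert np.all(np.diff(ratio) < 0)                 # monotone decreasing
print(ratio[0], ratio[-1])                        # ratio below 1 and shrinking
```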
Canonical Correlation Analysis (CCA) is a widely used statistical tool with both well established theory and favorable performance for a wide range of machine learning problems. However, computing CCA for huge datasets can be very slow since it involves a QR decomposition or singular value decomposition of huge matrices. In this paper we introduce L-CCA, an iterative algorithm which can compute CCA fast on huge sparse datasets. Theory on both the asymptotic convergence and the finite-time accuracy of L-CCA is established. Experiments also show that L-CCA outperforms other fast CCA approximation schemes on two real datasets.
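For intuition about how iterative methods avoid the full decompositions, here is an alternating-least-squares sketch of the leading canonical pair in Python; it conveys the iterative flavor only and is not the L-CCA algorithm itself, which additionally targets huge sparse matrices (e.g., via iterative least-squares solves) and comes with finite-time guarantees:

```python
import numpy as np

def top_cca_pair(X, Y, iters=100, reg=1e-8):
    """Illustrative alternating-least-squares iteration for the leading
    canonical correlation pair (a power iteration on the whitened
    cross-covariance), sketched with dense covariance solves."""
    n, dx = X.shape
    _, dy = Y.shape
    v = np.random.randn(dy)
    Cxx = X.T @ X / n + reg * np.eye(dx)
    Cyy = Y.T @ Y / n + reg * np.eye(dy)
    for _ in range(iters):
        u = np.linalg.solve(Cxx, X.T @ (Y @ v) / n)
        u /= np.sqrt(u @ Cxx @ u)          # normalize so Var(Xu) = 1
        v = np.linalg.solve(Cyy, Y.T @ (X @ u) / n)
        v /= np.sqrt(v @ Cyy @ v)          # normalize so Var(Yv) = 1
    corr = (X @ u) @ (Y @ v) / n
    return u, v, corr
```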
We present results of a study of the muon decay in orbit (DIO) contribution to the signal region of muon-electron conversion. Electrons from DIO are the dominant source of background for muon-electron conversion experiments because the endpoint of the DIO electron spectrum is the same as the energy of electrons from elastic muon-electron conversion. The probability of a DIO contribution to the signal region was considered for a tracker with a Gaussian resolution function and with a realistic resolution function, obtained by applying a pattern-recognition and Kalman-filter-based momentum-reconstruction procedure to GEANT-simulated DIO events. It is found that the existence of non-Gaussian tails in the realistic resolution function does not lead to a significant increase in the DIO contribution to the signal region. The probability of a DIO contribution to the calorimeter signal was studied as a function of the resolution, assuming a Gaussian resolution function for the calorimeter. In this study the geometrical acceptance played an important role, suppressing the DIO contribution of the intermediate-range electrons from muon decay in orbit.
It has been shown that a Hamiltonian with an unbroken $\mathcal{PT}$ symmetry also possesses a hidden symmetry that is represented by the linear operator $\mathcal{C}$. This symmetry operator $\mathcal{C}$ guarantees that the Hamiltonian acts on a Hilbert space with an inner product that is both positive definite and conserved in time, thereby ensuring that the Hamiltonian can be used to define a unitary theory of quantum mechanics. In this paper it is shown how to construct the operator $\mathcal{C}$ for the $\mathcal{PT}$-symmetric square well using perturbative techniques.
The magnetic properties of Li_{1-x}Ni_{1+x}O_2 compounds with x ranging between 0.02 and 0.2 are investigated. Magnetization and ac susceptibility measured at temperatures between 2 K and 300 K reveal a high sensitivity to x, the excess nickel concentration. We introduce a percolation model describing the formation of Ni clusters and use an Ising model to simulate their magnetic properties. Numerical results, obtained by a Monte-Carlo technique, are compared to the experimental data. We show the existence of a critical concentration, x_c = 0.136, locating the Ni percolation threshold. The system is superparamagnetic for x<x_c, while it is ferrimagnetic for x>x_c. The 180° Ni-O-Ni inter-plane super-exchange coupling $J_\perp \simeq -110$ K is confirmed to be the predominant magnetic interaction. From the low temperature behavior, we find a clear indication of a 90° Ni-O-Ni intra-plane antiferromagnetic interaction $J_\parallel \simeq -1.5$ K which implies magnetic frustration.
In soft porous media, deformation drives solute transport via the intrinsic coupling between flow of the fluid and rearrangement of the pore structure. Solute transport driven by periodic loading, in particular, can be of great relevance in applications ranging from the geomechanics of contaminants in the subsurface to the biomechanics of nutrient transport in living tissues, scaffolds for tissue engineering, and biomedically employed hydrogels. However, the basic features of this process have not previously been systematically investigated. Here, we fill this gap in the context of a 1D model problem. We do so by expanding the results from a companion study, in which we explored the poromechanics of periodic deformations, by introducing and analysing the impact of the resulting fluid and solid motion on solute transport. We first characterise the independent roles of the three main mechanisms of solute transport in porous media - advection, molecular diffusion, and hydrodynamic dispersion - by examining their impacts on the solute concentration profile during one loading cycle. We next explore the impact of the transport parameters, showing how these alter the relative importance of diffusion and dispersion. We then explore the loading parameters by considering a range of loading periods - from slow to fast, relative to the poroelastic timescale - and amplitudes - from infinitesimal to large. We show that solute spreading over several loading cycles increases monotonically with amplitude, but is maximised for intermediate periods because of the increasing poromechanical localisation of the flow and deformation near the permeable boundary as the period decreases.
We investigate the possible enhancement of heavy Higgs boson discovery via a fourth SM family heavy neutrino. Using the channel $h \to \nu_4 \nu_4 \to \mu W \mu W \to \mu jj\,\mu jj$, it is found that for certain ranges of the Higgs boson and $\nu_4$ masses the LHC could discover both of them simultaneously with 1 fb$^{-1}$ of integrated luminosity.
Slow (logarithmic) relaxation from a highly excited state is studied in a Hamiltonian system with many degrees of freedom. The relaxation time is shown to increase as the exponential of the square root of the energy of excitation, in agreement with the Boltzmann-Jeans conjecture, while it is found to be inversely proportional to the residual Kolmogorov-Sinai entropy, introduced in this Letter. The increase of the thermodynamic entropy through this relaxation process is found to be proportional to this quantity.
The Facility for Antiproton and Ion Research (FAIR) in Darmstadt, Germany, provides unique possibilities for a new generation of hadron-, nuclear- and atomic physics experiments. The future antiProton ANnihilations at DArmstadt (PANDA or $\overline{\rm P}$ANDA) experiment at FAIR will offer a broad physics programme, covering different aspects of the strong interaction. Understanding the latter in the non-perturbative regime remains one of the greatest challenges in contemporary physics. The antiproton-nucleon interaction studied with PANDA provides crucial tests in this area. Furthermore, the high-intensity, low-energy domain of PANDA allows for searches for physics beyond the Standard Model, e.g. through high precision symmetry tests. This paper takes into account a staged approach for the detector setup and for the delivered luminosity from the accelerator. The available detector setup at the time of the delivery of the first antiproton beams in the HESR storage ring is referred to as the \textit{Phase One} setup. The physics programme that is achievable during Phase One is outlined in this paper.
It is known that orbital angular momentum (OAM) couples the Goos-Hänchen and Imbert-Fedorov shifts. Here, we present the first study of these shifts when the OAM-endowed LG(l,p) beams have a higher-order radial mode index (p>0). We show theoretically and experimentally that the angular shifts are enhanced by p while the positional shifts are not. Since LG(l,p) modes form a complete basis set for paraxial beams, our results can be used to predict beam shifts of arbitrary modes of light.
We obtained K-band spectro-interferometric observations of the Miras R Cnc, X Hya, W Vel, and RW Vel with a spectral resolution of 1500 using the VLTI/AMBER instrument. We obtained concurrent JHKL photometry using the Mk II instrument at the SAAO. Our sources have wavelength-dependent visibility values that are consistent with earlier low-resolution AMBER observations of S Ori and with the predictions of dynamic model atmosphere series based on self-excited pulsation models. The wavelength-dependent UD diameters show a minimum near the near-continuum bandpass at 2.25 um. They increase by up to 30% toward the H2O band at 2.0 um and by up to 70% at the CO bandheads. The dynamic model atmosphere series show a consistent wavelength dependence, and their parameters such as the visual phase, effective temperature, and distances are consistent with independent estimates. The closure phases have significantly wavelength-dependent non-zero values, indicating deviations from point symmetry. For example, the R Cnc closure phase is 110 degrees in the 2.0 um H2O band, corresponding for instance to an additional unresolved spot contributing 3% of the total flux at a separation of ~4 mas. Our observations are consistent with the predictions of the latest dynamic model atmosphere series based on self-excited pulsation models. The wavelength-dependent radius variations are interpreted as the effect of molecular layers. The wavelength-dependent closure phase values are indicative of deviations from point symmetry at all wavelengths, and thus of a complex non-spherical stratification of the extended atmosphere. In particular, the significant deviation from point symmetry in the H2O band is interpreted as a large-scale signature of inhomogeneities or clumps in the water vapor layer. The observed inhomogeneities might be caused by pulsation- and shock-induced chaotic motion in the extended atmosphere.
We study the fermion pair production from a strong electric field in boost-invariant coordinates in (3+1) dimensions and exploit the cylindrical symmetry of the problem. This problem has been used previously as a toy model for populating the central-rapidity region of a heavy-ion collision (when we can replace the electric by a chromoelectric field). We derive and solve the renormalized equations for the dynamics of the mean electric field and current of the produced particles, when the field is taken to be a function only of the fluid proper time $\tau = \sqrt{t^2-z^2}$. We determine the proper-time evolution of the comoving energy density and pressure of the ensuing plasma and the time evolution of suitable interpolating number operators. We find that unlike in (1+1) dimensions, the energy density closely follows the longitudinal pressure. The transverse momentum distribution of fermion pairs at large momentum is quite different and larger than that expected from the constant field result.
Multidimensional unfolding methods are widely used for visualizing item response data. Such methods project respondents and items simultaneously onto a low-dimensional Euclidean space, in which respondents and items are represented by ideal points, with person-person, item-item, and person-item similarities being captured by the Euclidean distances between the points. In this paper, we study the visualization of multidimensional unfolding from a statistical perspective. We cast multidimensional unfolding into an estimation problem, where the respondent and item ideal points are treated as parameters to be estimated. An estimator is then proposed for the simultaneous estimation of these parameters. Asymptotic theory is provided for the recovery of the ideal points, shedding light on the validity of model-based visualization. An alternating projected gradient descent algorithm is proposed for the parameter estimation. We provide two illustrative examples, one on users' movie ratings and the other on senate roll-call voting.
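To make the estimation procedure concrete, here is a minimal Python sketch of alternating projected gradient steps for an ideal-point model; the logistic link on c - ||x_i - z_j||^2, the constant c, and the ball constraint are illustrative assumptions rather than the paper's exact specification:

```python
import numpy as np

def project_rows(A, radius):
    """Project each row onto the Euclidean ball of the given radius."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    return A * np.minimum(1.0, radius / np.maximum(norms, 1e-12))

def unfolding_apgd(R, dim=2, steps=500, lr=0.05, radius=5.0, seed=0):
    """Alternating projected gradient ascent sketch for ideal-point
    unfolding: binary responses R[i, j] are modeled with a logistic
    link on c - ||x_i - z_j||^2 (an assumed link choice)."""
    n, m = R.shape
    rng = np.random.default_rng(seed)
    X = rng.normal(scale=0.1, size=(n, dim))    # respondent ideal points
    Z = rng.normal(scale=0.1, size=(m, dim))    # item ideal points
    c = 1.0
    for _ in range(steps):
        # Respondent block: gradient of the Bernoulli log-likelihood w.r.t. X.
        D = X[:, None, :] - Z[None, :, :]                    # (n, m, dim)
        P = 1.0 / (1.0 + np.exp(np.sum(D**2, axis=2) - c))   # sigmoid(logit)
        S = (R - P)[:, :, None] * D
        X = project_rows(X - 2.0 * lr * S.sum(axis=1) / m, radius)
        # Item block: recompute with the updated X, then step in Z.
        D = X[:, None, :] - Z[None, :, :]
        P = 1.0 / (1.0 + np.exp(np.sum(D**2, axis=2) - c))
        S = (R - P)[:, :, None] * D
        Z = project_rows(Z + 2.0 * lr * S.sum(axis=0) / n, radius)
    return X, Z
```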
Information avalanches in social media are typically studied in a similar fashion as avalanches of neuronal activity in the brain. Whereas a large body of literature reveals substantial agreement about the existence of a unique process characterizing neuronal activity across organisms, the dynamics of information in online social media is far less understood. Statistical laws of information avalanches have been found in previous studies not to be robust across systems, and radically different processes have been used to represent plausible driving mechanisms for information propagation. Here, we analyze almost 1 billion time-stamped events collected from a multitude of online platforms -- including Telegram, Twitter and Weibo -- over observation windows longer than 10 years, and show that the propagation of information in social media is a universal and critical process. Universality arises from the observation of identical macroscopic patterns across platforms, irrespective of the details of the specific system at hand. Critical behavior is deduced from the power-law distributions, and corresponding hyperscaling relations, characterizing the size and duration of avalanches of information. Neuronal activity may be modeled as a simple contagion process, where a single exposure to activity may be sufficient for its diffusion. In contrast, statistical testing on our data indicates that a mixture of simple and complex contagion, where the involvement of an individual requires exposure from multiple acquaintances, characterizes the propagation of information in social media. We show that the complexity of the process is correlated with the semantic content of the information that is propagated. Conversational topics about music, movies and TV shows tend to propagate as simple contagion processes, whereas controversial discussions on political/societal themes obey the rules of complex contagion.
We have used 2D Fabry-Perot absorption-line spectroscopy of the SB0 galaxy NGC 7079 to measure its bar pattern speed, $\Omega_p$. As in all previous cases of bar pattern speed measurements, we find a fast bar. We estimate that NGC 7079 has been undisturbed for at least the past Gyr, or roughly 8 bar rotations, long enough for the bar to have slowed down significantly through dynamical friction if the disk is sub-maximal.
The results of the recent experiments on the reaction $\pi^-p\to\pi^0\pi^0n$ performed at KEK, BNL, IHEP, and CERN are analyzed in detail. For the I=0 $\pi\pi$ S-wave phase shift $\delta^0_0$ and inelasticity $\eta^0_0$, a new set of data is obtained. Difficulties emerging when using the physical solutions for the $\pi^0\pi^0$ S- and D-wave amplitudes extracted with the partial wave analyses are discussed. Attention is drawn to the fact that, for $\pi^0\pi^0$ invariant masses $m$ above 1 GeV, the other solutions turn out, in principle, to be preferable. To clarify the situation and further study the $f_0(980)$ resonance, thorough experimental investigations of the reaction $\pi^-p\to\pi^0\pi^0n$ in the $m$ region near the $K\bar K$ threshold are required.
We locate gaps in the spectrum of a Hamiltonian on a periodic cuboidal (and generally hyperrectangular) lattice graph with $\delta$ couplings in the vertices. We formulate sufficient conditions under which the number of gaps is finite. As the main result, we find a connection between the arrangement of the gaps and the coefficients in a continued fraction associated with the ratio of edge lengths of the lattice. This knowledge enables a straightforward construction of a periodic quantum graph with any required number of spectral gaps and---to some degree---to control their positions; i.e., to partially solve the inverse spectral problem.
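Since the gap arrangement is read off from the continued fraction of the edge-length ratio, a short routine for computing those coefficients may be useful when experimenting with lattice constructions; the routine is generic and exact for rational ratios:

```python
from fractions import Fraction

def continued_fraction(x, n_terms=12):
    """Coefficients [a0; a1, a2, ...] of the continued fraction of x > 0,
    computed exactly when x is a Fraction."""
    coeffs = []
    for _ in range(n_terms):
        a = int(x)          # floor for positive x
        coeffs.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return coeffs

# Edge-length ratio of a hypothetical rectangular lattice:
print(continued_fraction(Fraction(355, 113)))   # [3, 7, 16]
```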
A model of the matter bounce in $f(R,T)$ gravity is presented with no violation of the null energy condition. Only a closed universe with negative pressure is allowed, in good agreement with some recent observations which favor a universe with positive curvature. Our results agree with some recent works in which a combination of positive curvature and vacuum energy leads to non-singular bounces with no violation of the null energy condition. The stability of the model is discussed, and the cosmographic parameters are developed for the derived model to explain the accelerated expansion of the universe.
The energy charging of a quantum battery is analyzed in an open quantum setting, where the interaction between the battery element and the external power source is mediated by an ancilla system (the quantum charger) that acts as a controllable switch. Different implementations are analyzed putting emphasis on the interplay between coherent energy pumping mechanisms and thermalization.
The T-980 bent crystal collimation experiment at the Tevatron has recently acquired substantial enhancements. First, two new crystals have been installed: a 16-strip crystal manufactured and characterized by the INFN Ferrara group, and a quasi-mosaic crystal manufactured and characterized by the PNPI group. Second, a two-plane telescope with 3 high-resolution pixel detectors per plane, along with the corresponding mechanics, electronics, control and software, has been manufactured, tested and installed in the E0 crystal region. The purpose of the pixel telescope is to measure and image channeled (CH), volume-reflected (VR) and multiple volume-reflected (MVR) beam profiles produced by bent crystals. Third, an ORIGIN-based system has been developed for thorough analysis of experimental and simulation data. Results of the analysis are presented for the different types of crystals used from 2005 to the present for channeling and volume reflection, including pioneering tests of two-plane crystal collimation at the collider, all in comparison with detailed simulations.
Many galaxies contain magnetic fields supported by galactic dynamo action. However, nothing definitive is known about magnetic fields in ring galaxies. Here we investigate large-scale magnetic fields in a previously unexplored context, namely ring galaxies, and concentrate our efforts on the structures that appear most promising for galactic dynamo action, i.e. outer star-forming rings in visually unbarred galaxies. We use tested methods for modelling $\alpha-\Omega$ galactic dynamos, taking into account the available observational information concerning ionized interstellar matter in ring galaxies. Our main result is that dynamo drivers in ring galaxies are strong enough to excite large-scale magnetic fields in the ring galaxies studied. The variety of dynamo driven magnetic configurations in ring galaxies obtained in our modelling is much richer than that found in classical spiral galaxies. In particular, various long-lived transients are possible. An especially interesting case is that of NGC 4513 where the ring counter-rotates with respect to the disc. Strong shear in the region between the disc and the ring is associated with unusually strong dynamo drivers for the counter-rotators. The effect of the strong drivers is found to be unexpectedly moderate. With counter-rotation in the disc, a generic model shows that a steady mixed parity magnetic configuration, unknown for classical spiral galaxies, may be excited, although we do not specifically model NGC 4513. We deduce that ring galaxies constitute a morphological class of galaxies in which identification of large-scale magnetic fields from observations of polarized radio emission, as well as dynamo modelling, may be possible. Such studies have the potential to throw additional light on the physical nature of rings, their lifetimes and evolution.
We describe a general method for expanding a truncated G-iterative Hasse-Schmidt derivation, where G is an algebraic group. We give examples of algebraic groups for which our method works.
The dilepton production in elementary ${pp\to e^{+}e^{-}X}$ reactions at BEVALAC energies $T_{lab}=1$-$5$ GeV is investigated. The calculations include direct ${e^{+}e^{-}}$ decays of the vector mesons $\rho^{0}$, $\omega$, and $\phi$, and Dalitz decays of the $\pi^{0}$-, $\eta$-, $\rho$-, $\omega$-, and $\phi$-mesons and of the baryon resonances $\Delta(1232), N(1520), \ldots$ The subthreshold vector meson production cross sections in $pp$ collisions are treated in a way sufficient to avoid double counting with the inclusive vector meson production. The vector meson dominance model for the transition form factors of the resonance Dalitz decays $R\to e^{+}e^{-}N$ is used in an extended form to ensure correct asymptotics in agreement with the quark counting rules. Such a modification gives a unified and consistent description of both the $R\to N\gamma$ radiative decays and the $R\to N\rho(\omega)$ meson decays. The effect of multiple pion production on the experimental efficiency for the detection of dilepton pairs is studied. We find the dilepton yield in reasonable agreement with the experimental data at intermediate energies, whereas at the highest energy, $T_{lab}=4.88$ GeV, the number of dilepton pairs is likely to be overestimated experimentally in the mass range $M=300$-$700$ MeV.
In periodically driven (Floquet) systems, evolution typically results in an infinite-temperature thermal state due to continuous energy absorption over time. However, before reaching thermal equilibrium, such systems may transiently pass through a meta-stable state known as a prethermal state. This prethermal state can exhibit phenomena not commonly observed in equilibrium, such as discrete time crystals (DTCs), making it an intriguing platform for exploring out-of-equilibrium dynamics. Here, we investigate the relaxation dynamics of initially prepared product states under periodic driving in a kicked Ising model using the IBM Quantum Heron processor, comprising 133 superconducting qubits arranged on a heavy-hexagonal lattice, over up to $100$ time steps. We identify the presence of a prethermal regime characterised by magnetisation measurements oscillating at twice the period of the Floquet cycle and demonstrate its robustness against perturbations to the transverse field. Our results provide evidence supporting the realisation of a period-doubling DTC in a two-dimensional system. Moreover, we discover that the longitudinal field induces additional amplitude modulations in the magnetisation with a period incommensurate with the driving period, leading to the emergence of discrete time quasicrystals (DTQCs). These observations are further validated through comparison with tensor-network and state-vector simulations. Our findings not only enhance our understanding of clean DTCs in two dimensions but also highlight the utility of digital quantum computers for simulating the dynamics of quantum many-body systems, addressing challenges faced by state-of-the-art classical simulations.
The intrinsic decay rate of orthopositronium formed in ${\rm SiO_2}$ powder is measured using the direct $2\gamma$ correction method, such that the time dependence of the pick-off annihilation rate is precisely determined using high energy-resolution germanium detectors. As a systematic test, two different types of ${\rm SiO_2}$ powder are used, with consistent findings. The intrinsic decay rate of orthopositronium is found to be $7.0396 \pm 0.0012\,{\rm (stat.)} \pm 0.0011\,{\rm (sys.)}\ \mu{\rm s}^{-1}$, which is consistent with previous measurements using ${\rm SiO_2}$ powder but with about twice the accuracy. The result agrees well with a recent $O(\alpha^2)$ QED prediction, differing by $3.8$-$5.6$ experimental standard deviations from other measurements.
Self-supervised protein language models have proved their effectiveness in learning protein representations. With increasing computational power, current protein language models pre-trained on millions of diverse sequences can advance the parameter scale from the million level to the billion level and achieve remarkable improvements. However, those prevailing approaches rarely consider incorporating knowledge graphs (KGs), which can provide rich structured knowledge facts for better protein representations. We argue that informative biological knowledge in KGs can enhance protein representations with external knowledge. In this work, we propose OntoProtein, the first general framework that incorporates the structure of GO (Gene Ontology) into protein pre-training models. We construct a novel large-scale knowledge graph that consists of GO and its related proteins, with all nodes in the graph described by gene annotation texts or protein sequences. We propose novel contrastive learning with knowledge-aware negative sampling to jointly optimize the knowledge graph and protein embeddings during pre-training. Experimental results show that OntoProtein can surpass state-of-the-art methods with pre-trained protein language models on the TAPE benchmark and yields better performance compared with baselines in protein-protein interaction and protein function prediction. Code and datasets are available at https://github.com/zjunlp/OntoProtein.
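For readers unfamiliar with this family of objectives, the following is an illustrative InfoNCE-style contrastive loss with externally supplied negatives; the loss form, temperature, and batch layout are assumptions for illustration, not OntoProtein's exact objective:

```python
import torch
import torch.nn.functional as F

def kg_contrastive_loss(protein_emb, pos_go_emb, neg_go_emb, tau=0.07):
    """Illustrative InfoNCE-style loss aligning protein embeddings with
    Gene Ontology (GO) node embeddings. Positives come from true GO
    annotations; negatives would be drawn by a knowledge-aware sampler
    (e.g. GO terms near, but not annotating, the protein).
      protein_emb: (B, d), pos_go_emb: (B, d), neg_go_emb: (B, K, d)
    """
    p = F.normalize(protein_emb, dim=-1)
    pos = F.normalize(pos_go_emb, dim=-1)
    neg = F.normalize(neg_go_emb, dim=-1)
    pos_sim = (p * pos).sum(-1, keepdim=True) / tau            # (B, 1)
    neg_sim = torch.einsum('bd,bkd->bk', p, neg) / tau         # (B, K)
    logits = torch.cat([pos_sim, neg_sim], dim=1)              # (B, 1+K)
    labels = torch.zeros(len(logits), dtype=torch.long)        # positive at index 0
    return F.cross_entropy(logits, labels)
```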
With the rapid development of Large Language Models (LLMs), it is crucial to have benchmarks which can evaluate the ability of LLMs on different domains. One common use of LLMs is performing tasks on scientific topics, such as writing algorithms, querying databases or giving mathematical proofs. Inspired by the way university students are evaluated on such tasks, we propose SciEx - a benchmark consisting of university computer science exam questions - to evaluate LLMs' ability to solve scientific tasks. SciEx is (1) multilingual, containing both English and German exams, (2) multi-modal, containing questions that involve images, and (3) composed of various types of freeform questions with different difficulty levels, owing to the nature of university exams. We evaluate the performance of various state-of-the-art LLMs on our new benchmark. Since SciEx questions are freeform, it is not straightforward to evaluate LLM performance. Therefore, we provide human expert grading of the LLM outputs on SciEx. We show that the free-form exams in SciEx remain challenging for current LLMs, where the best LLM only achieves an average exam grade of 59.4\%. We also provide detailed comparisons between LLM performance and student performance on SciEx. To enable future evaluation of new LLMs, we propose using LLM-as-a-judge to grade the LLM answers on SciEx. Our experiments show that, although they do not perform perfectly on solving the exams, LLMs are decent graders, achieving a 0.948 Pearson correlation with expert grading.
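The judge-versus-expert agreement figure is a plain Pearson correlation over per-answer grades; a minimal sketch with hypothetical numbers:

```python
import numpy as np

def pearson(a, b):
    """Plain Pearson correlation between two score vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a -= a.mean(); b -= b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Hypothetical example: expert grades vs. LLM-judge grades per exam question.
expert = [8.0, 5.5, 9.0, 3.0, 7.5]
judge  = [7.5, 6.0, 9.5, 2.5, 7.0]
print(pearson(expert, judge))   # values near 1 indicate agreement
```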
A field theory is proposed where the regular fermionic matter and the dark fermionic matter are different states of the same "primordial" fermion fields. In the regime of fermion densities typical of normal particle physics, each of the primordial fermions splits into three generations identified with the regular fermions. In a simple model, this fermion family birth effect is accompanied by the appropriate lepton number conservation laws. It is possible to fit the muon to electron mass ratio without fine tuning of the Yukawa coupling constants. When the fermion energy density becomes comparable with the dark energy density, the theory allows a new type of state - Cosmo-Low Energy Physics (CLEP) states. Neutrinos in a CLEP state can be both a good candidate for dark matter and responsible for a new type of dark energy. In the latter case, the total energy density of the universe is less than it would be in a universe entirely free of fermionic matter. The (quintessence) scalar field is coupled to dark matter, but its coupling to regular fermionic matter appears to be extremely suppressed.
In dynamic spacetimes in which asymmetric gravitational collapse/expansion is taking place, the timelike geodesic equation appears to exhibit an interesting property: Relative to the collapsing configuration, free test particles undergo gravitational "acceleration" and form a double-jet configuration parallel to the axis of collapse. We illustrate this aspect of peculiar motion in simple spatially homogeneous cosmological models such as the Kasner spacetime. To estimate the effect of spatial inhomogeneities on cosmic jets, timelike geodesics in the Ricci-flat double-Kasner spacetime are studied in detail. While spatial inhomogeneities can significantly modify the structure of cosmic jets, we find that under favorable conditions the double-jet pattern can initially persist over a finite period of time for sufficiently small inhomogeneities.
We give a description of asymptotic quadratic growth rates for geodesic segments on covers of Veech surfaces in terms of the modular fiber parameterizing coverings of a fixed Veech surface. To make the paper self-contained, we derive the necessary asymptotic formulas from the Gutkin-Judge formula. As an application of the method, we define and analyze d-symmetric elliptic differentials and their modular fibers F^{sym}_d. For given genus g, g-symmetric elliptic differentials (with fixed base lattice) provide a 2-dimensional family of translation surfaces. We calculate several asymptotic constants to establish their dependence on the translation geometry of F^{sym}_d and their sensitivity as SL(2,Z)-orbit invariants.
We examine the island size distribution function and spatial correlation function of a model for island growth in the submonolayer regime in both one and two dimensions. In our model the islands do not grow in shape, and a fixed number of adatoms are added, nucleate, and are trapped at islands as they diffuse. We study the cases of various critical island sizes $i$ for nucleation as a function of initial coverage. We find anomalous scaling of the island size distribution for large $i$. Using scaling arguments, random-walk theory, and a version of mean-field theory, we obtain a closed form for the spatial correlation function. Our analytic results are verified by Monte Carlo simulations.
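A point-island Monte Carlo of this kind is compact enough to sketch. The following 1D version, with illustrative parameters and details that may differ from the paper's model, nucleates an island wherever $i+1$ monomers meet and traps any monomer that hops onto an island site:

```python
import random
from collections import Counter

def point_island_1d(L=1000, coverage=0.05, i_crit=1, steps=200000, seed=0):
    """Minimal point-island model sketch: N adatoms hop on a 1D ring;
    i_crit + 1 adatoms on one site nucleate an island, and adatoms
    hopping onto an island site are trapped there."""
    rng = random.Random(seed)
    N = int(L * coverage)
    adatoms = [rng.randrange(L) for _ in range(N)]    # mobile monomers
    island_size = {}                                  # site -> trapped count
    for _ in range(steps):
        if not adatoms:
            break
        k = rng.randrange(len(adatoms))
        s = (adatoms[k] + rng.choice((-1, 1))) % L    # random hop on the ring
        if s in island_size:                          # trapped at an island
            island_size[s] += 1
            adatoms.pop(k)
        else:
            adatoms[k] = s
            occ = [j for j, a in enumerate(adatoms) if a == s]
            if len(occ) >= i_crit + 1:                # nucleation event
                island_size[s] = len(occ)
                for j in sorted(occ, reverse=True):
                    adatoms.pop(j)
    return Counter(island_size.values())              # island size distribution
```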
The simple gesture of pointing can greatly augment one's ability to comprehend states of the world based on observations. It triggers additional inferences relevant to one's task at hand. We model an agent's update to its belief of the world based on individual observations using a partially observable Markov decision process (POMDP), a mainstream artificial intelligence (AI) model of how to act rationally according to beliefs formed through observation. On top of that, we model pointing as a communicative act between agents who have a mutual understanding that the pointed observation must be relevant and interpretable. Our model measures relevance by defining a Smithian Value of Information (SVI) as the utility improvement of the POMDP agent before and after receiving the pointing. We model that agents calculate SVI by using the cognitive theory of Smithian helping as a principle of coordinating separate beliefs for action prediction and action evaluation. We then import SVI into the rational speech act (RSA) framework as the utility function of an utterance. These lead us to a pragmatic model of pointing allowing for contextually flexible interpretations. We demonstrate the power of our Smithian pointing model by extending the Wumpus world, a classic AI task where a hunter hunts a monster with only partial observability of the world. We add another agent as a guide who can only help by marking, or not marking, an observation already perceived by the hunter with a pointing gesture, without providing new observations or offering any instrumental help. Our results show that this severely limited and overloaded communication nevertheless significantly improves the hunter's performance. The advantage of pointing is indeed due to a computation of relevance based on Smithian helping, as it disappears completely when the task is too difficult or too easy for the guide to help.
Fully-automatic general-purpose high-quality machine translation systems (FGH-MT) are extremely difficult to build. In fact, there is no system in the world for any pair of languages which qualifies to be called FGH-MT. The reasons are not hard to find. Translation is a creative process which involves interpretation of the given text by the translator. Translation also varies depending on the audience and the purpose for which it is meant. This explains the difficulty of building a machine translation system. Since the machine is not capable of interpreting a general text with sufficient accuracy automatically at present, let alone re-expressing it for a given audience, it fails to perform as FGH-MT. (The major difficulty that the machine faces in interpreting a given text is the lack of general world knowledge or common-sense knowledge.)
We investigate a family of SU(3)$\times$U(1)$\times$U(1)-invariant holographic flows and Janus solutions obtained from gauged $\mathcal{N}=8$ supergravity in four dimensions. We give complete details of how to use the uplift formulae to obtain the corresponding solutions in M theory. While the flow solutions appear to be singular from the four-dimensional perspective, we find that the eleven-dimensional solutions are much better behaved and give rise to interesting new classes of compactification geometries that are smooth, up to orbifolds, in the infra-red limit. Our solutions involve new phases in which M2 branes polarize partially or even completely into M5 branes. We derive the eleven-dimensional supersymmetries and show that the eleven-dimensional equations of motion and BPS equations are indeed satisfied as a consequence of their four-dimensional counterparts. Apart from elucidating a whole new class of eleven-dimensional Janus and flow solutions, our work provides extensive and highly non-trivial tests of the recently-derived uplift formulae.
Exaggeration or context changes can render maintainability experience into prejudice. For example, JavaScript is often seen as the least elegant language and hence the least maintainable. Such prejudice should not guide decisions without prior empirical validation. We formulated 10 hypotheses about maintainability based on such prejudices and tested them on a large set of open-source projects (6,897 GitHub repositories, 402 million lines, 5 programming languages). We operationalize maintainability with five static analysis metrics. We found that JavaScript code is not worse than other code, that Java code shows higher maintainability than C# code, and that C code has longer methods than other code. The quality of interface documentation is better in Java code than in other code. Code developed by teams is not of higher maintainability, and large code bases are not of lower maintainability. Projects with high maintainability are not more popular or more often forked. Overall, most hypotheses are not supported by the open-source data.
Let $M$ be a 3-connected binary matroid and let $Y(M)$ be the set of elements of $M$ avoiding at least $r(M)+1$ non-separating cocircuits of $M$. Lemos proved that $M$ is non-graphic if and only if $Y(M)\neq\emptyset$. We generalize this result by establishing that $Y(M)$ is very large when $M$ is non-graphic and, if $M$ is regular, has no $M^*(K_{3,3}''')$-minor; more precisely, $|E(M)-Y(M)|\le 1$ in this case. We conjecture that when $M$ is a regular matroid with an $M^*(K_{3,3})$-minor, then $r^*_M(E(M)-Y(M))\le 2$. The proof of this conjecture reduces to a computational verification.
Patients with Type I Diabetes (T1D) must take insulin injections to prevent the serious long-term effects of hyperglycemia - high blood glucose (BG). Patients must also be careful not to inject too much insulin, because this could induce hypoglycemia (low BG), which can potentially be fatal. Patients therefore follow a "regimen" that determines how much insulin to inject at certain times. Current methods for managing this disease require adjusting the patient's regimen over time based on the disease's behavior (recorded in the patient's diabetes diary). If we can accurately predict a patient's future BG values from his/her current features (e.g., predicting today's lunch BG value given today's diabetes diary entry for breakfast, including insulin injections, and perhaps earlier entries), then it is relatively easy to produce an effective regimen. This study explores the challenges of BG modeling by applying several machine learning algorithms and various data preprocessing variations (corresponding to 312 [learner, preprocessed-dataset] combinations) to a new T1D dataset containing 29 601 entries from 47 different patients. Our most accurate predictor is a weighted ensemble of two Gaussian Process Regression models, which achieved an errL1 loss of 2.70 mmol/L (48.65 mg/dl). This was an unexpectedly poor result given that one can obtain an errL1 of 2.91 mmol/L (52.43 mg/dl) using the naive approach of simply predicting the patient's average BG. For each data-variant/model combination we report several evaluation metrics, including glucose-specific metrics, and find similarly disappointing results (the best model was only incrementally better than the simplest measure). These results suggest that the diabetes diary data that is typically collected may not be sufficient to produce accurate BG prediction models; additional data may be necessary to build accurate BG prediction models.
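The naive baseline quoted above is simple to reproduce on any diary-style dataset; a sketch with toy numbers, assuming the per-patient mean is taken over the same entries being scored:

```python
import numpy as np

def naive_errL1(bg_by_patient):
    """Naive baseline: predict each patient's mean BG for every entry,
    and report the mean absolute error (errL1) over all entries.
    bg_by_patient: dict mapping patient id -> sequence of BG values (mmol/L)."""
    errs = []
    for bg in bg_by_patient.values():
        bg = np.asarray(bg, float)
        errs.append(np.abs(bg - bg.mean()))
    return np.concatenate(errs).mean()

# Hypothetical toy data for two patients:
print(naive_errL1({1: [6.1, 9.4, 4.8, 11.0], 2: [5.2, 7.7, 8.3]}))
```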
We present a maximum likelihood (ML) algorithm that is fast enough to detect gamma-ray transients in real time on the low-performance processors often used for space applications. We validate the routine with simulations and find that, relative to algorithms based on excess counts, the ML method is nearly twice as sensitive, allowing detection of 240-280% more short gamma-ray bursts. We characterize a reference implementation of the code, estimating its computational complexity and benchmarking it on a range of processors. We exercise the reference implementation on archival data from the Fermi Gamma-ray Burst Monitor (GBM), verifying the sensitivity improvements. In particular, we show that the ML algorithm would have detected GRB 170817A even if it had been nearly four times fainter. We present an ad hoc but effective scheme for discriminating against transients associated with background variations. We show that the on-board localizations generated by ML are accurate, but that refined off-line localizations require a detector response matrix with about ten times finer resolution than is current practice. Increasing the resolution of the GBM response matrix could substantially reduce the few-degree systematic uncertainty observed in the localizations of bright bursts.
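The heart of such a search is a Poisson likelihood-ratio statistic comparing background-only and background-plus-source hypotheses across detector bins; a simplified Python sketch, where the scalar-amplitude source model and the toy response are assumptions rather than the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ml_test_statistic(counts, bkg, resp):
    """Sketch of a Poisson likelihood-ratio transient search: fit a
    source amplitude A on top of a known background and return
    TS = 2 * (lnL_max - lnL_bkg). `resp` is the expected counts per bin
    for a unit-amplitude source (a simplified stand-in for a response matrix)."""
    counts, bkg, resp = map(np.asarray, (counts, bkg, resp))
    def neg_loglike(A):
        mu = bkg + A * resp
        return -(counts * np.log(mu) - mu).sum()   # Poisson lnL up to a constant
    fit = minimize_scalar(neg_loglike, bounds=(0.0, 1e6), method='bounded')
    return 2.0 * (neg_loglike(0.0) - fit.fun)

# Toy example: 8 detector bins, weak excess over a flat background.
print(ml_test_statistic([52, 48, 61, 55, 70, 66, 50, 49],
                        [50] * 8, [0.5, 0.4, 1.0, 0.8, 1.5, 1.2, 0.3, 0.2]))
```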
Estimating the parameters from $k$ independent Bin$(n,p)$ random variables, when both parameters $n$ and $p$ are unknown, is relevant to a variety of applications. It is particularly difficult if $n$ is large and $p$ is small. Over the past decades, several articles have proposed Bayesian approaches to estimate $n$ in this setting, but asymptotic results could only be established recently in \cite{Schneider}. There, posterior contraction for $n$ is proven in the problematic parameter regime where $n\rightarrow\infty$ and $p\rightarrow0$ at certain rates. In this article, we study numerically how far the theoretical upper bound on $n$ can be relaxed in simulations without losing posterior consistency.
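For concreteness, this Bayesian setup admits a direct grid evaluation: with a Beta prior on $p$ integrated out analytically, the posterior over $n$ follows from the beta-binomial marginal likelihood. A sketch with illustrative priors and toy data (the flat prior on $n$ and the grid bounds are assumptions):

```python
import numpy as np
from scipy.special import gammaln, betaln

def log_posterior_n(x, n_grid, a=1.0, b=1.0):
    """Grid posterior over n for k iid Bin(n, p) draws with p ~ Beta(a, b)
    integrated out and a flat prior on n. Marginal likelihood:
    prod_i C(n, x_i) * B(a + sum x, b + k*n - sum x) / B(a, b)."""
    x = np.asarray(x)
    k, s = len(x), x.sum()
    logpost = []
    for n in n_grid:
        log_binom = (gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)).sum()
        logpost.append(log_binom + betaln(a + s, b + k * n - s) - betaln(a, b))
    logpost = np.array(logpost)
    return logpost - np.logaddexp.reduce(logpost)   # normalize on the grid

x = np.array([7, 11, 8, 13, 9])            # toy data, true n and p unknown
n_grid = np.arange(x.max(), 400)
post = np.exp(log_posterior_n(x, n_grid))
print(n_grid[post.argmax()])                # posterior mode for n
```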
A time dependent geometry outside a spherically symmetric mass is proposed. The source has zero energy density but nonzero radial and tangential pressures. The time variable is interpreted as the duration of measurement performed upon the physical system. For very short time intervals, the effect of the mass source is much reduced, going to zero when $t \rightarrow 0$. All physical quantities are finite when $t \rightarrow 0$ and $r \rightarrow 0$ and also at infinity. The total energy flux measured on a hypersurface of constant $r$ is vanishing.
Due to its highly parallelizable architecture, the Transformer is faster to train than RNN-based models and is popularly used in machine translation tasks. However, at inference time, each output word requires all the hidden states of the previously generated words, which limits the parallelization capability and makes it much slower than RNN-based models. In this paper, we systematically analyze the time cost of the different components of both the Transformer and RNN-based models. Based on this analysis, we propose a hybrid network of self-attention and RNN structures, in which the highly parallelizable self-attention is utilized as the encoder, and the simpler RNN structure is used as the decoder. Our hybrid network can decode four times faster than the Transformer. In addition, with the help of knowledge distillation, our hybrid network achieves comparable translation quality to the original Transformer.
We introduce LAMP: the Linear Additive Markov Process. Transitions in LAMP may be influenced by states visited in the distant history of the process, but unlike higher-order Markov processes, LAMP retains an efficient parametrization. LAMP also allows the specific dependence on history to be learned efficiently from data. We characterize some theoretical properties of LAMP, including its steady-state and mixing time. We then give an algorithm based on alternating minimization to learn LAMP models from data. Finally, we perform a series of real-world experiments to show that LAMP is more powerful than first-order Markov processes, and even holds its own against deep sequential models (LSTMs) with a negligible increase in parameter complexity.
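The efficiency claim rests on LAMP's parametrization: a single transition matrix plus a weight vector over lags, rather than a state-space-sized table for every history length. A sampling sketch in Python, where the handling of histories shorter than the weight vector is an assumption:

```python
import numpy as np

def sample_lamp(Q, w, x0, T, rng=None):
    """Sample a path from a Linear Additive Markov Process sketch:
    P(X_{t+1} = j | history) = sum_i w[i] * Q[x_{t-i}, j], realized by
    picking a lag from the weight vector w and transitioning from the
    state visited at that lag via the single stochastic matrix Q."""
    rng = rng or np.random.default_rng()
    path = [x0]
    for _ in range(T):
        avail = min(len(path), len(w))
        probs = np.array(w[:avail]) / np.sum(w[:avail])   # renormalize over history
        lag = rng.choice(avail, p=probs)                  # lag 0 = most recent state
        prev = path[-1 - lag]
        path.append(rng.choice(len(Q), p=Q[prev]))
    return path

Q = np.array([[0.9, 0.1], [0.2, 0.8]])    # toy transition matrix
print(sample_lamp(Q, w=[0.7, 0.2, 0.1], x0=0, T=20))
```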
Superlinear convergence has been an elusive goal for black-box nonsmooth optimization. Even in the convex case, the subgradient method is very slow, and while some cutting plane algorithms, including traditional bundle methods, are popular in practice, local convergence is still sluggish. Faster variants depend either on problem structure or on analyses that elide sequences of "null" steps. Motivated by a semi-structured approach to optimization and the sequential quadratic programming philosophy, we describe a new bundle Newton method that incorporates second-order objective information with the usual linear approximation oracle. One representative problem class consists of maxima of several smooth functions, individually inaccessible to the oracle. Given as additional input just the cardinality of the optimal active set, we prove local quadratic convergence. A simple implementation shows promise on more general functions, both convex and nonconvex, and suggests first-order analogues.
We apply classical algorithms for approximately solving constraint satisfaction problems to find bounds on extremal eigenvalues of local Hamiltonians. We consider spin Hamiltonians for which we have an upper bound on the number of terms in which each spin participates, and find extensive bounds for the operator norm and ground-state energy of such Hamiltonians under this constraint. In each case the bound is achieved by a product state which can be found efficiently using a classical algorithm.
This article examines the Bouton-Lie group invariants of the Navier-Stokes equation (NSE) for incompressible fluids. Bouton's theory is applied to the general scaling transformation admitted by the NSE and is used to derive all self-similar solutions. In light of these, the criticality of the standard NSE system is examined and criticality criteria are derived. The theorem of Beale-Kato-Majda is used to rule out blow-up for a subset of Bouton's self-similar solutions. For a subset of Leray's self-similar solutions, the cavitation number of the fluid is found to be a scale-invariant, conserved quantity. By extending the analysis of Bouton to higher-dimensional manifolds, additional conserved quantities are found, which could further elucidate the physics of fluid turbulence.
The main approach to defining equivalence among acyclic directed causal graphical models is based on the conditional independence relationships in the distributions that the causal models can generate, in terms of Markov equivalence. However, it is known that when cycles are allowed in the causal structure, conditional independence may not be a suitable notion for equivalence of two structures, as it does not reflect all the information in the distribution that is useful for identification of the underlying structure. In this paper, we present a general, unified notion of equivalence for linear Gaussian causal directed graphical models, whether they are cyclic or acyclic. In our proposed definition of equivalence, two structures are equivalent if they can generate the same set of data distributions. We also propose a weaker notion of equivalence called quasi-equivalence, which we show is the extent of identifiability from observational data. We propose analytic as well as graphical methods for characterizing the equivalence of two structures. Additionally, we propose a score-based method for learning the structure from observational data, which successfully deals with both acyclic and cyclic structures.
In this paper, characterizations of graphs satisfying heat kernel estimates for a wide class of space-time scaling functions are given. The equivalence of the two-sided heat kernel estimate and the parabolic Harnack inequality is also shown via the equivalence of the upper (lower) heat kernel estimate to the parabolic mean value (and super mean value) inequality.
We comment on the implications of the recently measured CP asymmetry in B --> Phi K_S decay. The data disfavor the Standard Model at 2.7 sigma and, if the trend persists with higher statistics, require the existence of CP violation beyond that in the CKM matrix. In particular, the b --> s bar{s} s decay amplitude would require new contributions of comparable size to the Standard Model ones with an order-one phase. While not every model can deliver such a large amount of CP and flavor violation, those with substantial FCNC couplings to the Z can reproduce the experimental findings.
Major Depressive Disorder (MDD) is a pervasive mental health condition that affects 300 million people worldwide. This work presents a novel, BiLSTM-based tri-modal model-level fusion architecture for the binary classification of depression from clinical interview recordings. The proposed architecture incorporates Mel Frequency Cepstral Coefficients and Facial Action Units, and uses a two-shot-learning-based GPT-4 model to process text data. This is the first work to incorporate large language models into a multi-modal architecture for this task. It achieves impressive results on the DAIC-WOZ AVEC 2016 Challenge cross-validation split and the Leave-One-Subject-Out cross-validation split, surpassing all baseline models and multiple state-of-the-art models. In Leave-One-Subject-Out testing, it achieves an accuracy of 91.01%, an F1-Score of 85.95%, a precision of 80%, and a recall of 92.86%.
Eye movements provide insight into what parts of an image a viewer finds most salient, interesting, or relevant to the task at hand. Unfortunately, eye tracking data, a commonly-used proxy for attention, is cumbersome to collect. Here we explore an alternative: a comprehensive web-based toolbox for crowdsourcing visual attention. We draw from four main classes of attention-capturing methodologies in the literature. ZoomMaps is a novel "zoom-based" interface that captures viewing on a mobile phone. CodeCharts is a "self-reporting" methodology that records points of interest at precise viewing durations. ImportAnnots is an "annotation" tool for selecting important image regions, and "cursor-based" BubbleView lets viewers click to deblur a small area. We compare these methodologies using a common analysis framework in order to develop appropriate use cases for each interface. This toolbox and our analyses provide a blueprint for how to gather attention data at scale without an eye tracker.
We introduce endomorphisms of special jacobians and show that they satisfy polynomial equations with all integer roots, which we compute. The eigen-abelian varieties for these endomorphisms are generalizations of Prym-Tjurin varieties and naturally contain special curves representing cohomology classes which are not expected to be represented by curves in generic abelian varieties.
comma.ai presents comma2k19, a dataset of over 33 hours of commuting on California's Highway 280. The dataset comprises 2019 segments, each one minute long, recorded on a 20 km section of highway between San Jose and San Francisco, California. The dataset was collected using comma EONs, which have sensors similar to those of any modern smartphone, including a road-facing camera, phone GPS, thermometers, and a 9-axis IMU. Additionally, the EON captures raw GNSS measurements and all CAN data sent by the car with a comma grey panda. Laika, an open-source GNSS processing library, is also introduced here. Laika produces 40% more accurate positions than the GNSS module used to collect the raw data. This dataset includes pose (position + orientation) estimates in a global reference frame of the recording camera. These poses were computed with a tightly coupled INS/GNSS/Vision optimizer that relies on data processed by Laika. comma2k19 is ideal for development and validation of tightly coupled GNSS algorithms and mapping algorithms that work with commodity sensors.
The interplay between shear and bulk viscosities in the flow harmonics, $v_n$'s, at RHIC is investigated using the newly developed relativistic 2+1 hydrodynamical code v-USPhydro, which includes bulk and shear viscosity effects both in the hydrodynamic evolution and at freeze-out. While shear viscosity is known to attenuate the flow harmonics, we find that the inclusion of bulk viscosity decreases the shear viscosity-induced suppression of the flow harmonics, bringing them closer to their values in ideal hydrodynamical calculations. Depending on the value of the bulk viscosity to entropy density ratio, $\zeta/s$, in the quark-gluon plasma, this bulk viscosity-driven suppression of shear viscosity effects on the flow harmonics may require a re-evaluation of previous estimates of the shear viscosity to entropy density ratio, $\eta/s$, extracted by comparing hydrodynamic calculations to heavy-ion data.
Arag\'on Artacho and Campoy recently proposed a new method for computing the projection onto the intersection of two closed convex sets in Hilbert space; moreover, they proposed in 2018 a generalization from normal cone operators to maximally monotone operators. In this paper, we complete this analysis by demonstrating that the underlying curve converges to the nearest zero of the sum of the two operators. We also provide a new interpretation of the underlying operators in terms of the resolvent and the proximal average.
We have developed a fast, accurate and generally applicable method for inferring the power spectrum and its uncertainties from maps of the cosmic microwave background (CMB) in the presence of inhomogeneous and correlated noise. For maps with 10 to 100 thousand pixels, we apply an exact power spectrum estimation algorithm to submaps of the data at various resolutions, and then combine the results in an optimal manner. To analyze larger maps efficiently one must resort to sub-optimal combinations in which cross-map power spectrum error correlations are only calculated approximately. We expect such approximations to work well in general, and in particular for the megapixel maps to come from the next generation of satellite missions.
Parabolic equations with homogeneous Dirichlet conditions on the boundary are studied in a setting where the solutions are required to have a prescribed change of profile in fixed time, instead of a Cauchy condition. It is shown that this problem is well-posed in the $L_2$ setting. Existence and regularity results are established, as well as an analog of the maximum principle.
A total dominator coloring of a graph G is a proper coloring of G in which each vertex of the graph is adjacent to every vertex of some color class. The total dominator chromatic number of a graph is the minimum number of color classes in a total dominator coloring. In this article, we study the total dominator coloring on middle graphs by giving several bounds for the case of general graphs and trees. Moreover, we calculate explicitly the total dominator chromatic number of the middle graph of several known families of graphs.
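For concreteness, here is a small checker for the defining property, assuming the standard definition quoted in the abstract; the use of `networkx` and the star-graph example are our own conveniences, not part of the article.

```python
import networkx as nx

def is_total_dominator_coloring(G, coloring):
    """Proper coloring in which every vertex is adjacent to *all* vertices
    of some color class."""
    if any(coloring[u] == coloring[v] for u, v in G.edges):
        return False                      # not a proper coloring
    classes = {}
    for v, c in coloring.items():
        classes.setdefault(c, set()).add(v)
    for v in G:
        nbrs = set(G[v])
        # v must totally dominate some color class (subset of its neighbors)
        if not any(cls <= nbrs for cls in classes.values()):
            return False
    return True

G = nx.star_graph(3)                      # center 0, leaves 1-3
print(is_total_dominator_coloring(G, {0: 0, 1: 1, 2: 1, 3: 1}))   # True
```

Minimizing the number of color classes subject to this check gives the total dominator chromatic number on small instances by brute force.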
Session-based recommender systems have attracted much attention recently. To capture the sequential dependencies, existing methods resort either to data augmentation techniques or left-to-right style autoregressive training. Since these methods aim to model the sequential nature of user behaviors, they ignore the future data of a target interaction when constructing the prediction model for it. However, we argue that the future interactions after a target interaction, which are also available during training, provide a valuable signal about user preference and can be used to enhance the recommendation quality. Properly integrating future data into model training, however, is non-trivial to achieve, since it disobeys machine learning principles and can easily cause data leakage. To this end, we propose a new encoder-decoder framework named Gap-filling based Recommender (GRec), which trains the encoder and decoder by a gap-filling mechanism. Specifically, the encoder takes a partially-complete session sequence (where some items are masked on purpose) as input, and the decoder predicts these masked items conditioned on the encoded representation. We instantiate the general GRec framework using a convolutional neural network with sparse kernels, giving consideration to both accuracy and efficiency. We conduct experiments on two real-world datasets covering short-, medium-, and long-range user sessions, showing that GRec significantly outperforms state-of-the-art sequential recommendation methods. More empirical studies verify the high utility of modeling future contexts under our GRec framework.
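A minimal sketch of the gap-filling data preparation described above: random items in a session are masked, the encoder sees the partially complete sequence, and the decoder is trained to recover only the gaps. The mask token id, masking rate, and tensor layout are assumptions; the paper's exact scheme may differ.

```python
import torch

MASK_ID = 0    # assumed reserved id; real item ids start at 1

def gap_fill_batch(sessions, mask_prob=0.2, seed=0):
    """Mask random items so the encoder sees a partially complete session
    and the decoder learns to recover the masked items."""
    g = torch.Generator().manual_seed(seed)
    keep = torch.rand(sessions.shape, generator=g) >= mask_prob
    enc_in = torch.where(keep, sessions, torch.full_like(sessions, MASK_ID))
    loss_mask = ~keep                      # compute loss only on the gaps
    return enc_in, sessions, loss_mask

sessions = torch.tensor([[5, 3, 9, 2, 7], [4, 8, 1, 6, 2]])
enc_in, targets, loss_mask = gap_fill_batch(sessions)
print(enc_in)
```

Because targets are only ever the masked positions of the same observed session, future items inform the encoding without leaking the prediction target itself.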
A reconfigurable intelligent surface (RIS) enhanced non-orthogonal multiple access assisted backscatter communication (RIS-NOMABC) system is considered. A joint optimization problem over the power reflection coefficients and phase shifts is formulated. To solve this non-convex problem, a low-complexity algorithm is proposed by invoking alternating optimization, successive convex approximation, and manifold optimization. Numerical results corroborate that the proposed RIS-NOMABC system outperforms the conventional non-orthogonal multiple access assisted backscatter communication (NOMABC) system without RIS, and demonstrate the feasibility and effectiveness of the proposed algorithm.
The temperature dependence S(T) of the thermoelectric power of metallic systems with cerium and ytterbium ions exhibits characteristic features which can be used to classify these systems into distinct categories. The experimental data are explained by the Kondo scattering in the presence of the crystal field splitting, and various shapes of S(T) are related to different Kondo scales that characterize Ce and Yb ions at different temperatures. The low- and high-temperature behaviors are calculated for different fixed point models and the overall shape of S(T) is obtained by interpolation. At high temperatures, we use the Coqblin-Schrieffer model and calculate S(T) by perturbation expansion with renormalized coupling constants. The renormalization is performed by the 'poor man's scaling'. At low temperatures, we describe the dilute Ce and Yb alloys by an effective spin-degenerate single-impurity Anderson model, and the stoichiometric compounds by an effective spin-degenerate periodic Anderson model. The parameters of these low-temperature models are such that their effective Kondo scale coincides with the lowest Kondo scale of the Coqblin-Schrieffer model. The interpolation between the results obtained for the Anderson model and the Coqblin-Schrieffer model explains the overall thermoelectric properties of most Ce and Yb intermetallics.
The aim of these notes is to give an accessible and self-contained introduction to the theory of gravitational waves as the theory of a relativistic symmetric tensor field in a Minkowski background spacetime. This is the approach of a particle physicist: the graviton is identified with a particular irreducible representation of the Poincar\'e group, corresponding to vanishing mass and spin two. It is shown how to construct an action functional giving the linear dynamics of gravitons, and how General Relativity can be obtained from it. The Hamiltonian formulation of the linear theory is examined in detail. We study the emission of gravitational waves and apply the results to the simplest case of a binary Newtonian system.
Using results from our companion article [arXiv:1112.4824v2] on a Schauder approach to existence of solutions to a degenerate-parabolic partial differential equation, we solve three intertwined problems, motivated by probability theory and mathematical finance, concerning degenerate diffusion processes. First, we show that the martingale problem associated with a degenerate-elliptic differential operator with unbounded, locally H\"older continuous coefficients on a half-space is well-posed in the sense of Stroock and Varadhan. Second, we prove existence, uniqueness, and the strong Markov property for weak solutions to a stochastic differential equation with degenerate diffusion and unbounded coefficients with suitable H\"older continuity properties. Third, for an Ito process with degenerate diffusion and unbounded but appropriately regular coefficients, we prove existence of a strong Markov process, unique in the sense of probability law, whose one-dimensional marginal probability distributions match those of the given Ito process.
We present a continuous-time contract whereby a top-level player can incentivize a hierarchy of players below him to act in his best interest despite only observing the output of his direct subordinate. This paper extends Sannikov's approach from a situation of asymmetric information between a principal and an agent to one of hierarchical information between several players. We develop an iterative algorithm for constructing an incentive compatible contract and define the correct notion of concavity which must be preserved during iteration. We identify conditions under which a dynamic programming construction of an optimal dynamic contract can be reduced to only a one-dimensional state space and one-dimensional control set, independent of the size of the hierarchy. In this sense, our results contribute to the applicability of dynamic programming on dynamic contracts for a large-scale principal-agent hierarchy.
In the ultraintense laser-solid interaction, our recent work [Phys. Rev. E 109, 035204 (2024)] has revealed that the linear Breit-Wheeler (BW) process via photon-photon collisions will become the dominant mechanism for electron-positron pair production at the normalized laser amplitude $a_0<400$--$500$. Here, we investigate the impact of photon polarization on linear Breit-Wheeler pair production in the similar laser-solid setup, mainly focusing on the difference of positron yields between polarized and unpolarized situations. Two facts serve as the motivation for this work: (i) the emitted photons via nonlinear Compton scattering are highly linearly polarized in the strong-field QED regime; (ii) the linear BW cross section $\sigma_{\gamma\gamma}$ is dependent on photon polarization. By using two-dimensional QED particle-in-cell simulations, we find that the photon polarization effect can suppress the positron yield by 5% to 10% in linear BW pair production. This is because the polarization directions of colliding photons are predominantly parallel, resulting in a reduced $\sigma_{\gamma\gamma}$ compared to the unpolarized cross section. The suppression degree is decreased with the enhancement of the nonlinear QED strength at higher laser intensities. This work emphasizes the importance of photon polarization in accurately predicting linear BW pair production in laser-driven plasmas.
A new method based on the combination of small-angle scattering, reverse Monte Carlo simulations, and an aggregate recognition algorithm is proposed to characterize the structure of nanoparticle suspensions in solvents and polymer nanocomposites, allowing detailed studies of the impact of different nanoparticle surface modifications. Experimental small-angle scattering is reproduced using simulated annealing of configurations of polydisperse particles in a simulation box compatible with the lowest experimental q-vector. Then, properties of interest like aggregation states are extracted from these configurations and averaged. This approach has been applied to silane surface-modified silica nanoparticles with different grafting groups, in solvents and after casting into polymer matrices. It is shown that the chemistry of the silane function, in particular mono- or trifunctionality possibly related to patch formation, affects the dispersion state in a given medium, in spite of an unchanged alkyl chain length. Our approach may be applied to study any dispersion or aggregation state of nanoparticles. Concerning nanocomposites, the method has potential impact on the design of new formulations allowing controlled tuning of nanoparticle dispersion.
We present an experimental demonstration of Additive Point Source Localization (APSL), a sparse parametric imaging algorithm that reconstructs the 3D positions and activities of multiple gamma-ray point sources. Using a handheld gamma-ray detector array and up to four $8$ ${\mu}$Ci $^{137}$Cs gamma-ray sources, we performed both source-search and source-separation experiments in an indoor laboratory environment. In the majority of the source-search measurements, APSL reconstructed the correct number of sources with position accuracies of ${\sim}20$ cm and activity accuracies (unsigned) of ${\sim}20\%$, given measurement times of two to three minutes and distances of closest approach (to any source) of ${\sim}20$ cm. In source-separation measurements where the detector could be moved freely about the environment, APSL was able to resolve two sources separated by $75$ cm or more given only ${\sim}60$ s of measurement time. In these source-separation measurements, APSL produced larger total activity errors of ${\sim}40\%$, but obtained source separation distances accurate to within $15$ cm. We also compare our APSL results against traditional Maximum Likelihood-Expectation Maximization (ML-EM) reconstructions, and demonstrate improved image accuracy and interpretability using APSL over ML-EM. These results indicate that APSL is capable of accurately reconstructing gamma-ray source positions and activities using measurements from existing detector hardware.
This is a short, elementary survey article about taut submanifolds. In order to simplify the exposition, we restrict to the case of compact smooth submanifolds of Euclidean or spherical spaces. Some new, partial results concerning taut 4-manifolds are discussed at the end of the text.
We consider several spin-unrestricted random-phase approximation (RPA) variants for calculating correlation energies, with and without range separation, and test them on datasets of atomization energies and reaction barrier heights. We show that range separation greatly improves the accuracy of all RPA variants for these properties. Moreover, we show that an RPA variant with exchange, hereafter referred to as RPAx-SO2, first proposed by Szabo and Ostlund [A. Szabo and N. S. Ostlund, J. Chem. Phys. 67, 4351 (1977)] in a spin-restricted closed-shell formalism, and extended here to a spin-unrestricted formalism, provides on average the most accurate range-separated RPA variant for atomization energies and reaction barrier heights. Since this range-separated RPAx-SO2 method had already been shown to be among the most accurate range-separated RPA variants for weak intermolecular interactions [J. Toulouse, W. Zhu, A. Savin, G. Jansen, and J. G. {\'A}ngy{\'a}n, J. Chem. Phys. 135, 084119 (2011)], this work confirms range-separated RPAx-SO2 as a promising method for general chemical applications.
In this note, by the umbral calculus method, Sun and Zagier's congruences involving the Bell numbers and the derangement numbers are generalized to the polynomial cases. Some special congruences are also provided.
In this letter, we demonstrate a strong dependence of the electrostatic deformation of doubly-clamped single-walled carbon nanotubes on both the field strength and the tube length, using molecular simulations. Metallic nanotubes are found to be more sensitive to an electric field than semiconducting ones of the same size. For a given electric field, the induced deformation increases with tube length but decreases with tube radius. Furthermore, it is found that nanotubes can be more efficiently bent in a center-oriented transverse electric field.
This paper studies pairs trading using a nonlinear and non-Gaussian state-space model framework. We model the spread between the prices of two assets as an unobservable state variable and assume that it follows a mean-reverting process. This new model has two distinctive features: (1) the innovations to the spread are non-Gaussian and heteroskedastic; (2) the mean reversion of the spread is nonlinear. We show how to use the filtered spread as the trading indicator to carry out statistical arbitrage. We also propose a new trading strategy and present a Monte Carlo based approach to select the optimal trading rule. As the first empirical application, we apply the new model and the new trading strategy to two examples: PEP vs KO and EWT vs EWH. The results show that the new approach can achieve a 21.86% annualized return for the PEP/KO pair and a 31.84% annualized return for the EWT/EWH pair. As the second empirical application, we consider all the possible pairs among the largest and the smallest five US banks listed on the NYSE. For these pairs, we compare the performance of the proposed approach with that of the existing popular approaches, both in-sample and out-of-sample. Interestingly, we find that our approach can significantly improve the return and the Sharpe ratio in almost all the cases considered.
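A bootstrap particle filter is one standard way to filter a latent spread in a non-Gaussian state-space model of this kind; the sketch below uses Student-t innovations and illustrative parameter values, and is not the paper's estimator or trading rule.

```python
import numpy as np

def particle_filter_spread(y, n_part=2000, kappa=0.5, mu=0.0,
                           sig_x=0.1, nu=4.0, sig_y=0.05, seed=0):
    """Bootstrap particle filter for a latent mean-reverting spread with
    heavy-tailed (Student-t) state noise and Gaussian observation noise."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sig_x, n_part)
    filtered = []
    for yt in y:
        # propagate: mean reversion with t-distributed innovations
        x = x + kappa * (mu - x) + sig_x * rng.standard_t(nu, n_part)
        # reweight by the Gaussian observation density, then resample
        w = np.exp(-0.5 * ((yt - x) / sig_y) ** 2)
        w /= w.sum()
        filtered.append(np.sum(w * x))
        x = x[rng.choice(n_part, n_part, p=w)]
    return np.array(filtered)

# Toy data: a noisy observation of a mean-reverting spread
rng = np.random.default_rng(1)
x_true = np.zeros(200)
for t in range(1, 200):
    x_true[t] = x_true[t-1] + 0.5 * (0.0 - x_true[t-1]) + 0.1 * rng.standard_t(4)
y = x_true + 0.05 * rng.normal(size=200)
spread_hat = particle_filter_spread(y)   # candidate trading indicator
```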
The receiver operating characteristic (ROC) curve is widely applied in measuring the performance of diagnostic tests. Many direct and indirect approaches have been proposed for modelling the ROC curve and, because of its tractability, the Gaussian distribution has typically been used to model both populations. We propose using a Gaussian mixture model, leading to a more flexible approach that better accounts for atypical data. Monte Carlo simulation is used to circumvent the absence of a closed form. We show that our method performs favourably when compared to the crude binormal curve and to the semi-parametric frequentist binormal ROC using the famous LABROC procedure.
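A minimal sketch of the Monte Carlo idea: fit a Gaussian mixture to each population and trace the ROC empirically from samples, sidestepping the missing closed form. The component count, threshold grid, and use of scikit-learn are our assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def mixture_roc(healthy, diseased, n_mc=100_000, n_comp=2):
    """Fit a Gaussian mixture to each population, then estimate the ROC
    curve by Monte Carlo sampling from the fitted mixtures."""
    gm0 = GaussianMixture(n_comp, random_state=0).fit(np.asarray(healthy).reshape(-1, 1))
    gm1 = GaussianMixture(n_comp, random_state=0).fit(np.asarray(diseased).reshape(-1, 1))
    s0 = gm0.sample(n_mc)[0].ravel()      # simulated healthy scores
    s1 = gm1.sample(n_mc)[0].ravel()      # simulated diseased scores
    thresholds = np.quantile(np.concatenate([s0, s1]), np.linspace(0, 1, 201))
    fpr = np.array([(s0 > t).mean() for t in thresholds])
    tpr = np.array([(s1 > t).mean() for t in thresholds])
    return fpr, tpr

rng = np.random.default_rng(0)
h = np.concatenate([rng.normal(0, 1, 300), rng.normal(2, 0.5, 100)])  # atypical bump
d = rng.normal(2.5, 1.2, 400)
fpr, tpr = mixture_roc(h, d)
print(np.trapz(tpr[::-1], fpr[::-1]))     # Monte Carlo AUC estimate
```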
We present low-frequency electrical resistance fluctuations, or noise, in graphene-based field-effect devices with varying numbers of layers. In single-layer devices the noise magnitude decreases with increasing carrier density, whereas it behaves oppositely in devices with two or more layers, where the noise magnitude is additionally suppressed by more than two orders of magnitude. This behavior can be explained by the influence of an external electric field on the graphene band structure, and it provides a simple transport-based route to isolate single-layer graphene devices from those with multiple layers.
In this paper we give an exact analytical expression for the number of spanning trees of an infinite family of outerplanar, small-world and self-similar graphs. This number is an important graph invariant related to different topological and dynamic properties of the graph, such as its reliability, synchronization capability and diffusion properties. The calculation of the number of spanning trees is a demanding and difficult task, in particular for large graphs, and thus there is much interest in obtaining closed expressions for relevant infinite graph families. We have also calculated the spanning tree entropy of the graphs which we have compared with those for graphs with the same average degree.
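Closed-form expressions of this kind can be sanity-checked on small instances via Kirchhoff's matrix-tree theorem, which is standard; the snippet below is such a checker, not the paper's derivation.

```python
import numpy as np
import networkx as nx

def spanning_tree_count(G):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees equals
    any cofactor of the graph Laplacian (delete one row and column)."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    return round(np.linalg.det(L[1:, 1:]))

print(spanning_tree_count(nx.complete_graph(5)))  # 125 = 5^3, Cayley's formula
```

The spanning tree entropy mentioned in the abstract is then the large-size limit of the logarithm of this count divided by the number of nodes.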
Support and rank varieties of modules over a group algebra of an elementary abelian p-group have been well studied. In particular, Avrunin and Scott showed that in this setting, the rank and support varieties are equivalent. Avramov and Buchweitz proved an analogous result for pairs of modules over arbitrary commutative local complete intersection rings. In this paper we study support and rank varieties in the triangulated category of totally acyclic chain complexes over a complete intersection ring and show that these varieties are also equivalent.
The main goal of this paper is to find analytical solutions of a system of nonlinear ordinary differential equations arising in virus propagation in blockchain networks. The presented method reduces the problem to an Abel differential equation of the first kind and solves it directly.
We present the Abnormal Netsukuku Domain Name Anarchy (ANDNA) system. ANDNA is the distributed, non-hierarchical, and decentralised hostname management system used in the Netsukuku network.
The dawn of the fourth industrial revolution, Industry 4.0, has created great enthusiasm among companies and researchers by giving them an opportunity to pave the path towards the vision of a connected smart factory ecosystem. However, in the context of the automotive industry there is an evident gap between the requirements supported by current automotive manufacturing execution systems (MES) and the requirements proposed by industrial standards from the International Society of Automation (ISA), such as ISA-95 and ISA-88, on which Industry 4.0 is built. In this paper, we bridge this gap by following a model-based requirements engineering approach along with a gap analysis process. Our work is divided into three phases: (i) an automotive MES tool selection phase, (ii) a requirements modeling phase, and (iii) a gap analysis phase based on the modeled requirements. During the MES tool selection phase, we used reliable sources such as MES product survey reports and white papers that provide in-depth and comprehensive information about various comparison criteria and the tool vendor list for the current MES landscape. During the requirements modeling phase, we specified requirements derived from the needs of the ISA-95 and ISA-88 industrial standards using the general-purpose Systems Modeling Language (SysML). During the gap analysis phase, we identified the misalignment between the standard requirements and the compliance of the existing software tools with those standards.
Random walks by single-node agents have been systematically conducted on various types of complex networks in order to investigate how their topologies can affect the dynamics of the agents. However, because they fit any network node, such agents do not engage in topological interactions with the network. In the present work, we describe random walks on complex networks performed by agents that are actually small graphs. These agents can only occupy admissible portions of the network onto which they fit topologically, hence the name topologically-specific agents. These agents are also allowed to move to adjacent subgraphs in the network, which have each node adjacent to the original respective node of the agent. Two types of random walks are considered here: uniformly random and influenced by an external field. The performance of the random walks performed by three types of topologically-specific agents is studied with respect to the obtained coverage on three types of complex networks (geometrical, Erd\H{o}s-R\'enyi, and Barab\'asi-Albert). The number of nodes displaced at each random walk step is also obtained and analyzed. Several interesting results are reported and discussed, including the fact that, despite their intrinsic node degree heterogeneity, Barab\'asi-Albert networks tend to allow relatively smooth and effective coverage by all the considered topologically-specific agents. Erd\H{o}s-R\'enyi networks were also found to yield large dispersions of node coverage. In addition, the triangle agent was found to allow more effective random walks on all three considered networks.
Grasping the themes of social media content is key to understanding the narratives that influence public opinion and behavior. Thematic analysis goes beyond traditional topic-level analysis, which often captures only the broadest patterns, providing deeper insights into specific and actionable themes such as "public sentiment towards vaccination" and "political discourse surrounding climate policies." In this paper, we introduce a novel approach to uncovering latent themes in social media messaging. Recognizing the limitations of topic-level analysis, this study emphasizes the need for a finer-grained, theme-focused exploration. Traditional theme discovery methods typically involve manual processes and a human-in-the-loop approach. While valuable, these methods face challenges in scalability, consistency, and resource intensity in terms of time and cost. To address these challenges, we propose a machine-in-the-loop approach that leverages the advanced capabilities of Large Language Models (LLMs). To demonstrate our approach, we apply our framework to contentious topics such as the climate debate and the vaccine debate. We use two publicly available datasets: (1) the climate campaigns dataset of 21k Facebook ads and (2) the COVID-19 vaccine campaigns dataset of 9k Facebook ads. Our quantitative and qualitative analysis shows that our methodology yields more accurate and interpretable results compared to the baselines. Our results not only demonstrate the effectiveness of our approach in uncovering latent themes but also illuminate how these themes are tailored for demographic targeting in social media contexts. Additionally, our work sheds light on the dynamic nature of social media, revealing shifts in the thematic focus of messaging in response to real-world events.
Deep neural networks (DNNs) have achieved state-of-the-art performances in many important domains, including medical diagnosis, security, and autonomous driving. In these domains where safety is highly critical, an erroneous decision can result in serious consequences. While a perfect prediction accuracy is not always achievable, recent work on Bayesian deep networks shows that it is possible to know when DNNs are more likely to make mistakes. Knowing what DNNs do not know is desirable to increase the safety of deep learning technology in sensitive applications. Bayesian neural networks attempt to address this challenge. However, traditional approaches are computationally intractable and do not scale well to large, complex neural network architectures. In this paper, we develop a theoretical framework to approximate Bayesian inference for DNNs by imposing a Bernoulli distribution on the model weights. This method, called MC-DropConnect, gives us a tool to represent the model uncertainty with little change in the overall model structure or computational cost. We extensively validate the proposed algorithm on multiple network architectures and datasets for classification and semantic segmentation tasks. We also propose new metrics to quantify the uncertainty estimates. This enables an objective comparison between MC-DropConnect and prior approaches. Our empirical results demonstrate that the proposed framework yields significant improvement in both prediction accuracy and uncertainty estimation quality compared to the state of the art.
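A minimal sketch of the MC-DropConnect idea as described: Bernoulli masks on the weights are kept active at test time, and the spread over repeated stochastic forward passes serves as the uncertainty estimate. Layer sizes and the drop probability are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropConnectLinear(nn.Module):
    """Linear layer whose weights are dropped with probability p; kept
    stochastic at test time so repeated passes sample different masks."""
    def __init__(self, d_in, d_out, p=0.5):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
        self.p = p

    def forward(self, x):
        mask = torch.bernoulli(torch.full_like(self.lin.weight, 1 - self.p))
        return F.linear(x, self.lin.weight * mask / (1 - self.p), self.lin.bias)

def mc_predict(model, x, n_samples=50):
    """Predictive mean and per-class standard deviation over stochastic
    forward passes; the spread is the uncertainty estimate."""
    probs = torch.stack([model(x).softmax(-1) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)

net = nn.Sequential(nn.Flatten(), DropConnectLinear(784, 128), nn.ReLU(),
                    DropConnectLinear(128, 10))
mean, std = mc_predict(net, torch.randn(4, 1, 28, 28))
print(mean.shape, std.shape)   # per-class predictive mean and spread
```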
The latest results from CMS on R-parity-violating supersymmetry, based on the full 19.5/fb dataset from the 8 TeV LHC run of 2012, are reviewed. The results are interpreted in the context of simplified models with multilepton and b-quark jet signatures and low missing transverse energy, arising from light top-squark pair production with R-parity-violating decays of the lightest supersymmetric particle. In addition to the simplified models, a new approach to phenomenological MSSM interpretation is shown, which demonstrates that the results obtained from multilepton final states are valid for a wide range of supersymmetry models.
For any positive integer $n$, $\mathcal{A}_n$ is the class of all groups $G$ such that, for $0\leq i\leq n$, $H^i(\hat{G},A)\cong H^i(G,A)$ for every finite discrete $\hat{G}$-module $A$. We describe certain types of free products with amalgam and HNN extensions that are in some of the classes $\mathcal{A}_n$. In addition, we investigate the residually finite groups in the class $\mathcal{A}_2$.
In this paper we analyze a stochastic interpretation of the one-dimensional parabolic-parabolic Keller-Segel system without cut-off. It involves an original type of McKean-Vlasov interaction kernel. At the particle level, each particle interacts with all the past of each other particle by means of a time integrated functional involving a singular kernel. At the mean-field level studied here, the McKean-Vlasov limit process interacts with all the past time marginals of its probability distribution in a similarly singular way. We prove that the parabolic-parabolic Keller-Segel system in the whole Euclidean space and the corresponding McKean-Vlasov stochastic differential equation are well-posed for any values of the parameters of the model.
As the specification of the new 5G NR standard proceeds inside 3GPP, the availability of a versatile, full-stack, End-to-End (E2E), and open source simulator becomes a necessity to extract insights from the recently approved 3GPP specifications. This paper presents an extension to ns-3, a well-known discrete-event network simulator, to support the NR Radio Access Network. The present work describes the design and implementation choices at the MAC and PHY layers, and it discusses a technical solution for managing different bandwidth parts. Finally, we present calibration results, according to 3GPP procedures, and we show how to get E2E performance indicators in a realistic deployment scenario, with special emphasis on the E2E latency.
We demonstrate theoretically the possibility of using nanomechanical systems as single-photon routers. We show how electromagnetically induced transparency (EIT) in cavity optomechanical systems can be used to produce a switch for a probe field in a single-photon Fock state using very low pumping powers of a few microwatts. We present estimates of vacuum and thermal noise and show that the optimal performance of the single-photon switch is deteriorated by only a few percent even at temperatures of the order of 20 mK.