Wireless capsule endoscopy (WCE) has been widely adopted as a complement to traditional wired gastroendoscopy, especially for small bowel diseases beyond the latter's reach. However, both the video resolution and the frame rate of current WCE solutions are limited by the achievable wireless data rate. This is because the electromagnetic (EM), radio-frequency (RF) communication scheme used by WCE is subject to strict limits on usable bandwidth and power, as well as high attenuation in the human body compared to air. Ultrasound communication is a potential alternative, as it has access to much higher bandwidths and transmitted power with much lower attenuation. In this paper, we propose an ultrasound communication scheme specially designed for high-data-rate through-tissue transmission and validate it by successfully transmitting ultra-high-definition (UHD) video (3840x2160 pixels at 60 FPS) through 5 cm of pork belly. An error-free payload data rate of over 8.3 Mbps was achieved with the proposed communication scheme and our custom-built field-programmable gate array (FPGA) based test platform.
We apply the robust membership-determination algorithm ML-MOC to the precise astrometric and deep photometric data from Gaia Early Data Release 3 within a region of radius 5$^{\circ}$ around the center of the intermediate-age Galactic open cluster NGC 752 to identify its member stars. We report the discovery of the tidal tails of NGC 752, extending out to $\sim$35 pc on either side of its denser central region and following the cluster orbit. From comparison with PARSEC stellar isochrones, we obtain the mass function of the cluster with a slope $\chi=-1.26\pm0.07$. The high negative value of $\chi$ is indicative of a disintegrating cluster undergoing mass segregation; $\chi$ is more negative in the intra-tidal regions than in the outskirts of NGC 752. We estimate a present-day cluster mass of M$\rm_{C}=297\pm10$ M$_{\odot}$. Accounting for mass loss due to stellar evolution and tidal interactions, we further estimate that NGC 752 has lost nearly 95.2-98.5% of its initial mass, $\rm M_{i}=0.64$-$2\times10^{4}~M_{\odot}$.
Ozsv\'ath-Szab\'o proved that any coefficient of the Alexander polynomial of a lens space knot is either $\pm1$ or $0$ and that the non-zero coefficients alternate in sign. Combining the formulas for the Alexander polynomial of lens space knots due to Kadokami-Yamada and Ichihara-Saito-Teragaito, we refine Ozsv\'ath-Szab\'o's property as the existence of a simple curve contained in a region of ${\Bbb R}^2$. This curve, which has no end-points and consists of a single component within the region, tracks the distribution of the non-zero coefficients of the Alexander polynomial of the lens space knot, and it is very useful for obtaining constraints on Alexander polynomials of lens space knots. For example, we can determine the locations of the second, third and fourth non-zero coefficients. From the curve we extract a new invariant, the $\alpha$-index, which is an important factor in determining the Alexander polynomial of a lens space knot. We classify lens space surgeries whose Alexander polynomial coincides with that of a $(2,r)$-torus knot, lens space surgeries of small genus, and so on. As well as lens space knots in $S^3$, we also deal with lens space knots in homology spheres whose surgery duals are simple (1,1)-knots.
The mean-square displacement (MSD) was measured by neutron scattering at various temperatures and pressures for a number of molecular glass-forming liquids. The MSD is invariant along the glass-transition line at the pressures studied, thus establishing an ``intrinsic'' Lindemann criterion for any given liquid. A one-to-one connection between the MSD's temperature dependence and the liquid's fragility is found when the MSD is evaluated on a time scale of approximately 4 nanoseconds, but it does not hold when the MSD is evaluated at shorter times. The findings are discussed in terms of the elastic model and the role of relaxations, and the correlations between slow and fast dynamics are addressed.
The Hubble constant Ho describes not only the expansion of local space at redshift z ~ 0, but is also a fundamental parameter determining the evolution of the universe. Recent measurements of Ho anchored on Cepheid observations have reached a precision of several percent. However, this problem is so important that confirmation from several methods is needed to better constrain Ho and, with it, dark energy and the curvature of space. A particularly direct method involves determining distances, by means of water vapor (H2O) masers orbiting nuclear supermassive black holes, to local galaxies far enough away to be part of the Hubble flow. The goal of this article is to describe the relevance of Ho with respect to fundamental cosmological questions and to summarize recent progress of the `Megamaser Cosmology Project' (MCP) related to the Hubble constant.
We improve results of Baouendi, Rothschild and Treves and of Hill and Nacinovich by finding a much weaker sufficient condition for a CR manifold of type $(n,k)$ to admit a local CR embedding into a CR manifold of type $(n+\ell,k-\ell)$. While their results require the existence of a finite-dimensional solvable transverse Lie algebra of vector fields, we require only a finite-dimensional extension.
Having in view some applications in nanophysics, in particular the nanophysics of materials, we develop new dynamical models of structured bodies with affine internal degrees of freedom. In particular, we construct models where not only the kinematics but also the dynamics of systems of affine bodies is affinely invariant. Quantization schemes are developed; this is necessary for the range of physical phenomena we are interested in.
The vacuum modular Hamiltonian $K$ of the Rindler wedge in any relativistic quantum field theory is given by the boost generator. Here we investigate the modular Hamiltonian for more general half-spaces which are bounded by an arbitrary smooth cut of a null plane. We derive a formula for the second derivative of the modular Hamiltonian with respect to the coordinates of the cut which schematically reads $K'' = T_{vv}$. This formula can be integrated twice to obtain a simple expression for the modular Hamiltonian. The result naturally generalizes the standard expression for the Rindler modular Hamiltonian to this larger class of regions. Our primary assumptions are the quantum null energy condition --- an inequality between the second derivative of the von Neumann entropy of a region and the stress tensor --- and its saturation in the vacuum for these regions. We discuss the validity of these assumptions in free theories and holographic theories to all orders in $1/N$.
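The double integration described in the abstract can be made explicit. The display below is a sketch under assumed conventions ($v$ is the null coordinate along the plane, $y$ the transverse coordinates, and $v=V(y)$ the cut, with the Rindler wedge corresponding to $V(y)=0$); the precise normalization and boundary terms are fixed in the paper itself.

$$K \;=\; 2\pi \int d^{d-2}y \int_{V(y)}^{\infty} dv\, \big(v - V(y)\big)\, T_{vv}(v, y)\,.$$

Setting $V(y)=0$ recovers the standard boost-generator form of the Rindler modular Hamiltonian, consistent with the schematic relation $K'' = T_{vv}$ upon differentiating twice with respect to $V(y)$.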
In this paper we use the minimax inequalities obtained by S. Park (2011) to prove the existence of weighted Nash equilibria and Pareto Nash equilibria of a multiobjective game defined on abstract convex spaces.
Autonomous vehicles have great potential for application in both civil and military fields and have become a research focus amid the rapid development of science and the economy. This article provides a brief review of learning-based decision-making technology for autonomous vehicles, which is key to their safe and efficient operation. First, a basic outline of decision-making technology is provided. Second, related work on learning-based decision-making methods for autonomous vehicles is reviewed and compared with classical decision-making methods. In addition, applications of decision-making methods in existing autonomous vehicles are summarized. Finally, promising research topics for the future study of decision-making technology for autonomous vehicles are outlined.
We calculate the probability for rapidity gaps in the parton cascade for different approximations within perturbative QCD and compare the results with recent measurements. The aim is to find out whether the dual connection between the parton and hadron final states -- observed so far in various inclusive measurements -- holds as well for the extreme kinematic configurations with colour sources separated by large rapidity gaps. A description of the data is indeed possible by choosing the parameters of the cascade in the range suggested by recent analyses of the energy spectra (the $k_\perp$ cutoff $Q_0 \gtrsim$ QCD scale $\Lambda\sim 250$ MeV).
We present OrbNet Denali, a machine learning model for electronic structure that is designed as a drop-in replacement for ground-state density functional theory (DFT) energy calculations. The model is a message-passing neural network that uses symmetry-adapted atomic orbital features from a low-cost quantum calculation to predict the energy of a molecule. OrbNet Denali is trained on a vast dataset of 2.3 million DFT calculations on molecules and geometries. This dataset covers the most common elements in bio- and organic chemistry (H, Li, B, C, N, O, F, Na, Mg, Si, P, S, Cl, K, Ca, Br, I) as well as charged molecules. OrbNet Denali is demonstrated on several well-established benchmark datasets, and we find that it provides accuracy that is on par with modern DFT methods while offering a speedup of up to three orders of magnitude. For the GMTKN55 benchmark set, OrbNet Denali achieves WTMAD-1 and WTMAD-2 scores of 7.19 and 9.84, on par with modern DFT functionals. For several GMTKN55 subsets, which contain chemical problems that are not present in the training set, OrbNet Denali produces a mean absolute error comparable to those of DFT methods. For the Hutchison conformers benchmark set, OrbNet Denali has a median correlation coefficient of R^2=0.90 compared to the reference DLPNO-CCSD(T) calculation, and R^2=0.97 compared to the method used to generate the training data (wB97X-D3/def2-TZVP), exceeding the performance of any other method with a similar cost. Similarly, the model reaches chemical accuracy for non-covalent interactions in the S66x10 dataset. For torsional profiles, OrbNet Denali reproduces the torsion profiles of wB97X-D3/def2-TZVP with an average MAE of 0.12 kcal/mol for the potential energy surfaces of the diverse fragments in the TorsionNet500 dataset.
We prove that the Morse-Novikov number of a link L in a 3-sphere is less than or equal to twice the tunnel number of L.
Most notably we prove that for $d=1,2$ the classical Strichartz norm $$\|e^{i s\Delta}f\|_{L^{2+4/d}_{s,x}(\mathbb{R}\times\mathbb{R}^d)}$$ associated to the free Schr\"{o}dinger equation is nondecreasing as the initial datum $f$ evolves under a certain quadratic heat-flow.
We have developed a linear scaling algorithm for calculating maximally-localized Wannier functions (MLWFs) using an atomic orbital basis. An O(N) ground state calculation is carried out to obtain the density matrix (DM). Through a projection of the DM onto atomic orbitals and a subsequent O(N) orthogonalization, we obtain initial orthogonal localized orbitals. These orbitals can be maximally localized in linear scaling by simple Jacobi sweeps. Our O(N) method is validated by applying it to a water molecule and wurtzite ZnO. The linear scaling behavior of the new method is demonstrated by computing the MLWFs of boron nitride nanotubes.
We present the first set of maps and the band-merged catalog from the Herschel Stripe 82 Survey (HerS). Observations at 250, 350, and 500 micron were taken with the Spectral and Photometric Imaging Receiver (SPIRE) instrument aboard the Herschel Space Observatory. HerS covers 79 deg$^2$ along the SDSS Stripe 82 to a depth of 13.0, 12.9, and 14.8 mJy beam$^{-1}$ (including confusion) at 250, 350, and 500 micron, respectively. HerS was designed to measure correlations with external tracers of the dark matter density field --- either point-like (i.e., galaxies selected from radio to X-ray) or extended (i.e., clusters and gravitational lensing) --- in order to measure the bias and redshift distribution of intensities of infrared-emitting dusty star-forming galaxies and AGN. By locating HerS in Stripe 82, we maximize the overlap with available and upcoming cosmological surveys. The band-merged catalog contains 3.3$\times$10$^4$ sources detected at a significance of >3 $\sigma$ (including confusion noise). The maps and catalog are available at http://www.astro.caltech.edu/hers/
Solid solution effects on the strength of the finest nanocrystalline grain sizes are studied with molecular dynamics simulations of different Cu-based alloys. We find evidence of both solid solution strengthening and softening, with trends in strength controlled by how alloying affects the elastic modulus of the material. This behavior is consistent with a shift to collective grain boundary deformation physics, and provides a link between the mechanical behavior of very fine-grained nanocrystalline metals and metallic glasses.
In this paper we classify Euclidean hypersurfaces $f\colon M^n \rightarrow \mathbb{R}^{n+1}$ with a principal curvature of multiplicity $n-2$ that admit a genuine conformal deformation $\tilde{f}\colon M^n \rightarrow \mathbb{R}^{n+2}$. That $\tilde{f}\colon M^n \rightarrow \mathbb{R}^{n+2}$ is a genuine conformal deformation of $f$ means that it is a conformal immersion for which there exists no open subset $U \subset M^n$ such that the restriction $\tilde{f}|_U$ is a composition $\tilde f|_U=h\circ f|_U$ of $f|_U$ with a conformal immersion $h\colon V\to \mathbb{R}^{n+2}$ of an open subset $V\subset \mathbb{R}^{n+1}$ containing $f(U)$.
The shape of the dark matter (DM) halo is key to understanding the hierarchical formation of the Galaxy. Despite extensive efforts in recent decades, however, its shape remains a matter of debate, with suggestions ranging from strongly oblate to prolate. Here, we present a new constraint on its present shape by directly measuring the evolution of the Galactic disk warp with time, as traced by accurate distance estimates and precise age determinations for about 2,600 classical Cepheids. We show that the Galactic warp is mildly precessing in a retrograde direction at a rate of $\omega = -2.1 \pm 0.5 ({\rm statistical}) \pm 0.6 ({\rm systematic})$ km s$^{-1}$ kpc$^{-1}$ for the outer disk over the Galactocentric radius [$7.5, 25$] kpc, decreasing with radius. This constrains the shape of the DM halo to be slightly oblate with a flattening (minor axis to major axis ratio) in the range $0.84 \le q_{\Phi} \le 0.96$. Given the young nature of the disk warp traced by Cepheids (less than 200 Myr), our approach directly measures the shape of the present-day DM halo. This measurement, combined with other measurements from older tracers, could provide vital constraints on the evolution of the DM halo and the assembly history of the Galaxy.
Let T be the one-dimensional complex torus. We consider the action of an automorphism of a Riemann surface X on the cohomology of the T-equivariant determinant line bundle over the moduli space of rank two Higgs bundles on X with fixed determinant of odd degree. We define and study the automorphism equivariant Hitchin index. We prove a formula for it in terms of cohomological pairings of canonical T-equivariant classes of certain moduli spaces of parabolic Higgs bundles over the quotient Riemann surface.
We present a quantum algorithm to simulate general finite dimensional Lindblad master equations without the requirement of engineering the system-environment interactions. The proposed method is able to simulate both Markovian and non-Markovian quantum dynamics. It consists of the quantum computation of the dissipative corrections to the unitary evolution of the system of interest, via the reconstruction of the response functions associated with the Lindblad operators. Our approach is equally applicable to dynamics generated by effectively non-Hermitian Hamiltonians. We confirm the quality of our method by providing specific error bounds that quantify its accuracy.
We extend the construction given by Chisaki et al. [arXiv:1009.1306v1] from lines to planes, and obtain the associated limit theorems for quantum walks on such a graph.
We show short-time existence for curves driven by curve diffusion flow with a prescribed contact angle $\alpha \in (0, \pi)$: The evolving curve has free boundary points, which are supported on a line, and it satisfies a no-flux condition. The initial data are suitable curves of class $W_2^{\gamma}$ with $\gamma \in (\tfrac{3}{2}, 2]$. For the proof the evolving curve is represented by a height function over a reference curve: The local well-posedness of the resulting quasilinear, parabolic, fourth-order PDE for the height function is proven with the help of the contraction mapping principle. Difficulties arise due to the low regularity of the initial curve. To this end, we have to establish suitable product estimates in time-weighted anisotropic $L_2$-Sobolev spaces of low regularity to prove that the nonlinearities are well-defined and contractive for small times.
A metastable homogeneous state exists down to zero temperature in systems of repelling objects. The zero ``fluctuation temperature'' liquid state therefore serves as a (pseudo) ``fixed point'' controlling the properties of the vortex liquid below and even around the melting point. There exists a Madelung constant for the liquid in the limit of zero temperature which is higher than that of the solid by an amount approximately equal to the latent heat of melting. This picture is supported by an exactly solvable large $N$ Ginzburg-Landau model in magnetic field. Based on this understanding we apply the Borel-Pade resummation technique to develop a theory of the vortex liquid in type II superconductors. Applicability of the effective lowest Landau level model is discussed and corrections due to higher levels are calculated. Combined with a previous quantitative description of the vortex solid, the melting line is located. Magnetization, entropy and specific heat jumps along it are calculated. The magnetization of the liquid is larger than that of the solid by $1.8\%$ irrespective of the melting temperature. We compare the result with experiments on the high $T_{c}$ cuprates $YBa_{2}Cu_{3}O_{7}$ and $DyBCO$, the low $T_{c}$ material $(K,Ba)BiO_{3}$, and with Monte Carlo simulations.
We analyse an algorithm of transition between Cauchy problems for second-order wave equations and first-order symmetric hyperbolic systems in case the coefficients as well as the data are non-smooth, even allowing for regularity below the standard conditions guaranteeing well-posedness. The typical operations involved in rewriting equations into systems are then neither defined classically nor consistently extendible to the distribution theoretic setting. However, employing the nonlinear theory of generalized functions in the sense of Colombeau we arrive at clear statements about the transfer of questions concerning solvability and uniqueness from wave equations to symmetric hyperbolic systems and vice versa. Finally, we illustrate how this transfer method allows one to draw new conclusions on unique solvability of the Cauchy problem for wave equations with non-smooth coefficients.
Aether Scalar-Tensor theory is a modification of general relativity proposed to explain galactic and cosmological mass discrepancies conventionally attributed to dark matter. The theory is able to fit the cosmic microwave background and the linear matter power spectrum without dark matter. In this work, we derive the Tolman-Oppenheimer-Volkoff equation in this theory and solve it for realistic nuclear equations of state to predict the mass-radius relation of neutron stars. We find solutions that are compatible with all current observations of neutron stars.
We investigate the geometric hitting set problem in the online setup for the range space $\Sigma=({\cal P},{\cal S})$, where the set ${\cal P}\subset\mathbb{R}^2$ is a collection of $n$ points and the set $\cal S$ is a family of geometric objects in $\mathbb{R}^2$. In the online setting, the geometric objects arrive one by one. Upon the arrival of an object, an online algorithm must maintain a valid hitting set by making an irreversible decision, i.e., once a point is added to the hitting set by the algorithm, it cannot be deleted in the future. The objective of the geometric hitting set problem is to find a hitting set of the minimum cardinality. Even and Smorodinsky (Discret. Appl. Math., 2014) considered an online model (Model-I) in which the range space $\Sigma$ is known in advance, but the order of arrival of the input objects in $\cal S$ is unknown. They proposed online algorithms having optimal competitive ratios of $\Theta(\log n)$ for intervals, half-planes and unit disks in $\mathbb{R}^2$. Whether such an algorithm exists for unit squares remained open for a long time. This paper considers an online model (Model-II) in which the entire range space $\Sigma$ is not known in advance: we only know the set $\cal P$, not the set $\cal S$. Note that any algorithm for Model-II will also work for Model-I, but not vice-versa. In Model-II, we obtain an optimal competitive ratio of $\Theta(\log n)$ for unit disks and regular $k$-gons with $k\geq 4$ in $\mathbb{R}^2$. All the above-mentioned results also hold for the equivalent geometric set cover problem in Model-II.
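To make the online constraints concrete, the sketch below implements the naive irreversible rule: when an unhit object arrives, commit to some point of ${\cal P}$ that hits it. This only illustrates the model (irreversible decisions, objects revealed one by one); it is not the $\Theta(\log n)$-competitive algorithm of the paper, and the function names and the 1D interval example are our own.

```python
def online_hitting_set(points, objects_stream, hits):
    """Naive online hitting set with irreversible decisions.

    points:          the known ground set P (Model-II: P known, S not)
    objects_stream:  geometric objects arriving one by one
    hits(p, obj):    True if point p lies inside obj

    Illustrative rule only: when an arriving object is not yet hit,
    add the first point of P that hits it; points are never removed.
    """
    chosen = []  # points committed so far; never deleted
    for obj in objects_stream:
        if any(hits(p, obj) for p in chosen):
            continue  # already hit; no decision needed
        for p in points:  # irreversible: commit to some hitting point
            if hits(p, obj):
                chosen.append(p)
                break
    return chosen

# 1D toy example: objects are closed intervals, points are numbers.
P = [1, 2, 3, 4, 5]
intervals = [(0.5, 1.5), (2.5, 4.5), (1.0, 5.0)]
hit = lambda p, iv: iv[0] <= p <= iv[1]
sol = online_hitting_set(P, intervals, hit)  # [1, 3]
```

Note that the choice of which hitting point to commit to is exactly where a competitive algorithm earns its ratio; the first-fit rule above can be forced into far more points than the offline optimum.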
We present the results of our survey searching for new white dwarf pulsators for observations by the TESS space telescope. We collected photometric time-series data on 14 white dwarf variable-candidates at Konkoly Observatory, and found two new bright ZZ Ceti stars, namely EGGR 120 and WD 1310+583. We performed a Fourier analysis of the datasets. In the case of EGGR 120, which was observed on one night only, we found one significant frequency at 1332 microHz with 2.3 mmag amplitude. We successfully observed WD 1310+583 on eight nights, and determined 17 significant frequencies from the whole dataset. Seven of them appear to be independent pulsation modes between 634 and 2740 microHz, and we performed preliminary asteroseismic investigations of the star utilizing six of these periods. We also identified three new light variables in the fields of the white dwarf candidates: an eclipsing binary, a candidate delta Scuti/beta Cephei star and a candidate W UMa-type star.
In this paper, we show that for sufficiently strong atomic interactions, there exist analytical solutions of current-carrying nonlinear Bloch states at the Brillouin zone edge to the model of spin-orbit-coupled Bose-Einstein condensates (BECs) with symmetric spin interaction loaded into optical lattices. These simple but generic exact solutions provide an analytical demonstration of some intriguing properties which have an analog neither in the regular BEC lattice systems nor in the uniform spin-orbit-coupled BEC systems, and they serve as an analytical example for understanding the superfluid and other related properties of spin-orbit-coupled BEC lattice systems.
Margot (1994) in his doctoral dissertation studied extended formulations of combinatorial polytopes that arise from "smaller" polytopes via some composition rule. He introduced the "projected faces property" of a polytope and showed that this property suffices to iteratively build extended formulations of composed polytopes. We show that, conversely, an extended formulation of the type studied in this paper is possible for the composed polytopes only if the smaller polytopes have the projected faces property; this yields a characterization of the projected faces property. Affinely generated polyhedral relations were introduced by Kaibel and Pashkovich (2011) to construct extended formulations for the convex hull of the images of a point under the action of some finite group of reflections. In this paper we prove that the projected faces property and the existence of an affinely generated polyhedral relation are equivalent conditions.
In Paper II [N. G. Phillips and B. L. Hu, previous abstract] we presented the details for the regularization of the noise kernel of a quantum scalar field in optical spacetimes by the modified point separation scheme, and a Gaussian approximation for the Green function. We worked out the regularized noise kernel for two examples: hot flat space and the optical Schwarzschild metric. In this paper we consider noise kernels for a scalar field in the Schwarzschild black hole. Much of the work in the point separation approach is to determine how the divergent piece conformally transforms. For the Schwarzschild metric we find that the fluctuations of the stress tensor of the Hawking flux in the far field region agree with the analytic results given by Campos and Hu earlier [A. Campos and B. L. Hu, Phys. Rev. D {\bf 58} (1998) 125021; Int. J. Theor. Phys. {\bf 38} (1999) 1253]. We also verify Page's result [D. N. Page, Phys. Rev. {\bf D25}, 1499 (1982)] for the stress tensor, which, though used often, still lacks a rigorous proof, as in his original work the direct use of the conformal transformation was circumvented. However, as in the optical case, we show that the Gaussian approximation applied to the Green function produces significant error in the noise kernel on the Schwarzschild horizon. As before we identify the failure as occurring at the fourth covariant derivative order.
In general, isolated integrable quantum systems have been found to relax to an apparent equilibrium state in which the expectation values of few-body observables are described by the generalized Gibbs ensemble. However, recent work has shown that relaxation to such a generalized statistical ensemble can be precluded by localization in a quasiperiodic lattice system. Here we undertake complementary single-particle and many-body analyses of noninteracting spinless fermions and hard-core bosons within the Aubry-Andre model to gain insight into this phenomenon. Our investigations span both the localized and delocalized regimes of the quasiperiodic system, as well as the critical point separating the two. Considering first the case of spinless fermions, we study the dynamics of the momentum distribution function and characterize the effects of real-space and momentum-space localization on the relevant single-particle wave functions and correlation functions. We show that although some observables do not relax in the delocalized and localized regimes, the observables that do relax in these regimes do so in a manner consistent with a recently proposed Gaussian equilibration scenario, whereas relaxation at the critical point has a more exotic character. We also construct various statistical ensembles from the many-body eigenstates of the fermionic and bosonic Hamiltonians and study the effect of localization on their properties.
Biological, linguistic, sociological and economic applications of statistical physics are reviewed here. They were carried out on a variety of computers over a dozen years, not only on the NIC computers. A longer description can be found in our new book; an emphasis on teaching appears in Eur. J. Phys. 26, S79 and AIP Conf. Proc. 779, 49, 56, 69 and 75.
Austrin showed that the approximation ratio $\beta\approx 0.94016567$ obtained by the MAX 2-SAT approximation algorithm of Lewin, Livnat and Zwick (LLZ) is optimal modulo the Unique Games Conjecture (UGC) and modulo a Simplicity Conjecture that states that the worst performance of the algorithm is obtained on so-called simple configurations. We prove Austrin's conjecture, thereby showing the optimality of the LLZ approximation algorithm, relying only on the Unique Games Conjecture. Our proof uses a combination of analytic and computational tools. We also present new approximation algorithms for two restrictions of the MAX 2-SAT problem. For MAX HORN-$\{1,2\}$-SAT, i.e., MAX CSP$(\{x\lor y,\bar{x}\lor y,x,\bar{x}\})$, in which clauses are not allowed to contain two negated literals, we obtain an approximation ratio of $0.94615981$. For MAX CSP$(\{x\lor y,x,\bar{x}\})$, i.e., when 2-clauses are not allowed to contain negated literals, we obtain an approximation ratio of $0.95397990$. By adapting Austrin's and our arguments for the MAX 2-SAT problem we show that these two approximation ratios are also tight, modulo only the UGC. This completes a full characterization of the approximability of the MAX 2-SAT problem and its restrictions.
Temporal analysis of INTEGRAL/IBIS data has revealed a $5.7195\pm0.0007$ day periodicity in the supergiant fast X-ray transient (SFXT) source AX J1845.0-0433, which we interpret as the orbital period of the system. The new-found knowledge of the orbital period is utilised to investigate the geometry of the system by means of estimating an upper limit for the size of the supergiant ($<27~R_{\odot}$) as well as the eccentricity of the orbit ($\epsilon<0.37$).
We examine a case study where classical evolution emerges when observing a quantum evolution. By using a single-mode quantum Kerr evolution interrupted by measurement of the double-homodyne kind (projecting the evolved field state into classical-like coherent states or quantum squeezed states), we show that irrespective of whether the measurement is classical or quantum there is no quantum Zeno effect and the evolution turns out to be classical.
A semi-relativistic density-functional theory that includes spin-orbit couplings and Zeeman fields on equal footing with the electromagnetic potentials, is an appealing framework to develop a unified first-principles computational approach for non-collinear magnetism, spintronics, orbitronics, and topological states. The basic variables of this theory include the paramagnetic current and the spin-current density, besides the particle and the spin density, and the corresponding exchange-correlation (xc) energy functional is invariant under local U(1)$\times$SU(2) gauge transformations. The xc-energy functional must be approximated to enable practical applications, but, contrary to the case of the standard density functional theory, finding simple approximations suited to deal with realistic atomistic inhomogeneities has been a long-standing challenge. Here, we propose a way out of this impasse by showing that approximate gauge-invariant functionals can be easily generated from existing approximate functionals of ordinary density-functional theory by applying a simple {\it minimal substitution} on the kinetic energy density, which controls the short-range behavior of the exchange hole. Our proposal opens the way to the construction of approximate, yet non-empirical functionals, which do not assume weak inhomogeneity and should therefore have a wide range of applicability in atomic, molecular and condensed matter physics.
The treatment of $\gamma_{5}$ in Dimensional Regularization leads to ambiguities in field-theoretic calculations, of which one example is the coefficient of a particular term in the four-loop gauge $\beta$-functions of the Standard Model. Using Weyl Consistency Conditions, we present a scheme-independent relation between the coefficient of this term and a corresponding term in the three-loop Yukawa $\beta$-functions, where a semi-na\"ive treatment of $\gamma_{5}$ is sufficient, thereby fixing this ambiguity. We briefly outline an argument by which the same method fixes similar ambiguities at higher orders.
Large-scale 2D vision-language models, such as CLIP, can be aligned with a 3D encoder to learn generalizable (open-vocabulary) 3D vision models. However, current methods require supervised pre-training for such alignment, and the performance of such 3D zero-shot models remains sub-optimal for real-world adaptation. In this work, we propose an optimization framework, Cross-MoST (Cross-Modal Self-Training), to improve the label-free classification performance of a zero-shot 3D vision model by simply leveraging unlabeled 3D data and their accompanying 2D views. We propose a student-teacher framework to simultaneously process 2D views and 3D point clouds and generate joint pseudo labels to train a classifier and guide cross-modal feature alignment. Thereby we demonstrate that 2D vision-language models such as CLIP can be used to complement 3D representation learning to improve classification performance without the need for expensive class annotations. Using synthetic and real-world 3D datasets, we further demonstrate that Cross-MoST enables efficient cross-modal knowledge exchange, with the image and point cloud modalities learning from each other's rich representations.
In this paper, stochastic verification theorems for stochastic control problems of reflected forward-backward stochastic differential equations are studied. We carry out this work within the frameworks of classical and viscosity solutions. Sufficient conditions for verifying that controls are optimal are given. We also construct feedback optimal control laws from the classical and viscosity solutions of the associated Hamilton-Jacobi-Bellman equations with obstacles. Finally, we apply the theoretical results to two concrete examples: one for the case of the classical solution, and the other for the case of the viscosity solution.
Graph neural networks have been successful in machine learning as well as in combinatorial and graph problems such as the Subgraph Isomorphism Problem and the Traveling Salesman Problem. We describe an approach for computing graph sparsifiers by combining a graph neural network with Monte Carlo Tree Search. We first train a graph neural network that takes a partial solution as input and proposes a new node to be added as output. This neural network is then used in a Monte Carlo search to compute a sparsifier. The proposed method consistently outperforms several standard approximation algorithms on different types of graphs and often finds the optimal solution.
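The propose-a-node-then-search loop described above can be sketched in miniature. This is not the paper's method: the trained graph neural network is replaced by a hand-written heuristic prior (`policy_score`), and sparsifier quality is replaced by a toy edge-coverage reward; all names and the UCT formula weighting are my own illustrative choices.

```python
import math, random

# Toy stand-in for the pipeline above: a heuristic "policy" scores candidate
# nodes (in place of the trained graph neural network), and a small UCT-style
# Monte Carlo Tree Search decides which node to add to the partial solution.
# The objective (cover as many edges as possible with k nodes) is only an
# illustrative proxy for true sparsifier quality.

def policy_score(graph, chosen, node):
    """Heuristic prior: prefer nodes incident to many uncovered neighbors."""
    return sum(1 for nb in graph[node] if nb not in chosen)

def reward(graph, chosen):
    """Fraction of edges touched by the chosen node set."""
    edges = {(u, v) for u in graph for v in graph[u] if u < v}
    covered = {(u, v) for (u, v) in edges if u in chosen or v in chosen}
    return len(covered) / len(edges)

def mcts_select(graph, chosen, k, iters=200, c=1.4, rng=random):
    """Pick the next node to add via UCT with random rollouts."""
    candidates = [n for n in graph if n not in chosen]
    visits = {n: 0 for n in candidates}
    value = {n: 0.0 for n in candidates}
    for _ in range(iters):
        total = sum(visits.values()) + 1
        def uct(n):
            if visits[n] == 0:
                return float("inf")        # explore unvisited moves first
            return (value[n] / visits[n]
                    + c * math.sqrt(math.log(total) / visits[n])
                    + 0.1 * policy_score(graph, chosen, n))  # prior bonus
        move = max(candidates, key=uct)
        # Rollout: complete the solution with random nodes, then evaluate.
        state = set(chosen) | {move}
        free = [n for n in graph if n not in state]
        rng.shuffle(free)
        state.update(free[: max(0, k - len(state))])
        visits[move] += 1
        value[move] += reward(graph, state)
    return max(candidates, key=lambda n: visits[n])

def build_sparsifier(graph, k, seed=0):
    rng = random.Random(seed)
    chosen = set()
    while len(chosen) < k:
        chosen.add(mcts_select(graph, chosen, k, rng=rng))
    return chosen

# A small star-plus-triangle graph: node 0 touches most edges.
g = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0], 4: [0]}
print(build_sparsifier(g, 2))  # node 0 (the hub) should be selected
```

The same skeleton accepts any learned scorer: swapping `policy_score` for a trained model's output is the only change needed.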
For chaotic systems there is a theory for the decay of the survival probability, and for the parametric dependence of the local density of states. This theory leads to the distinction between "perturbative" and "non-perturbative" regimes, and to the observation that semiclassical tools are useful in the latter case. We discuss what is "left" from this theory in the case of one-dimensional systems. We demonstrate that the remarkably accurate {\em uniform} semiclassical approximation captures the physics of {\em all} the different regimes, though it cannot take into account the effect of strong localization.
We systematically study the parity- and time-reversal (PT) symmetric non-Hermitian version of a quantum network proposed in the paper of Christandl et al. [Phys. Rev. Lett. 92, 187902 (2004)]. The nature of this model shows that it is a paradigm for demonstrating the complex relationship between a pseudo-Hermitian Hamiltonian and its Hermitian counterpart, as well as a candidate for the experimental simulation of PT-symmetry breaking. We also show that this model allows a conditional perfect state transfer within the unbroken PT-symmetry region, but not an arbitrary one. This is due to the fact that the evolution operator at a certain period is equivalent to the PT operator for the real-valued wave function in the elaborate PT-symmetric Hilbert space.
Deep neural networks (DNNs) are vulnerable to backdoor attacks, where the adversary manipulates a small portion of training data such that the victim model predicts normally on the benign samples but classifies the triggered samples as the target class. The backdoor attack is an emerging yet threatening training-phase threat, leading to serious risks in DNN-based applications. In this paper, we revisit the trigger patterns of existing backdoor attacks. We reveal that they are either visible or not sparse and therefore are not stealthy enough. More importantly, it is not feasible to simply combine existing methods to design an effective sparse and invisible backdoor attack. To address this problem, we formulate the trigger generation as a bi-level optimization problem with sparsity and invisibility constraints and propose an effective method to solve it. The proposed method is dubbed sparse and invisible backdoor attack (SIBA). We conduct extensive experiments on benchmark datasets under different settings, which verify the effectiveness of our attack and its resistance to existing backdoor defenses. The codes for reproducing main experiments are available at \url{https://github.com/YinghuaGao/SIBA}.
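The two trigger constraints named above can be made concrete with a tiny projection step. This is not the SIBA algorithm itself, only an illustrative sketch of the feasible set it optimizes over: an L0 (sparsity) projection keeping the k largest-magnitude entries, composed with an L-infinity (invisibility) clip to a budget eps; the function name and values are hypothetical.

```python
# Illustrative sketch of the two constraints in the bi-level formulation:
# project a flat trigger perturbation onto {||d||_0 <= k, ||d||_inf <= eps}.
# Not the paper's actual optimization, just the constraint set it uses.

def project_trigger(delta, k, eps):
    """Keep the k largest-magnitude entries, zero the rest, clip to [-eps, eps]."""
    keep = set(sorted(range(len(delta)), key=lambda i: abs(delta[i]),
                      reverse=True)[:k])
    sparse = [delta[i] if i in keep else 0.0 for i in range(len(delta))]
    return [max(-eps, min(eps, v)) for v in sparse]

delta = [0.9, -0.05, 0.3, -0.7, 0.01]          # hypothetical raw perturbation
print(project_trigger(delta, k=2, eps=0.5))    # -> [0.5, 0.0, 0.0, -0.5, 0.0]
```

In a real attack this projection would be applied after each gradient step on the trigger, which is what makes the resulting pattern both sparse (few modified pixels) and invisible (small per-pixel magnitude).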
We consider a simple rescattering mechanism to calculate a leading twist $T$-odd pion fragmentation function, a favored candidate for filtering the transversity properties of the nucleon. We evaluate the single spin azimuthal asymmetry for a transversely polarized target in semi-inclusive deep inelastic scattering (for HERMES kinematics). Additionally, we calculate the double $T$-odd $\cos2\phi$ asymmetry in this framework.
This is a comment on arXiv:1611.04445 (PRL, 118, 114801 (2017)). It is pointed out that the fundamental problems in light beam vortices and the relativistic electron vortices are not identical and have subtle differences. The significance of two length scales is underlined for the electron vortices. Local gauge transformation on the Gordon current admits vortex structure.
Electronic health records (EHRs) have become the foundation of machine learning applications in healthcare, while the utility of real patient records is often limited by privacy and security concerns. Synthetic EHR generation provides an additional perspective to compensate for this limitation. Most existing methods synthesize new records from real EHR data without distinguishing the different types of events they contain, and therefore cannot control whether the generated event combinations are consistent with medical common sense. In this paper, we propose MSIC, a Multi-visit health Status Inference model for Collaborative EHR synthesis, to address these limitations. First, we formulate the synthetic EHR generation process as a probabilistic graphical model and tightly connect different types of events by modeling the latent health states. Then, we derive a health state inference method tailored to the multi-visit scenario to effectively utilize previous records when synthesizing current and future records. Furthermore, we propose to generate medical reports that add textual descriptions to each medical event, providing broader applications for synthesized EHR data. For generating the different paragraphs in each visit, we incorporate a multi-generator deliberation framework to coordinate message passing among multiple generators and employ a two-phase decoding strategy to generate high-quality reports. Our extensive experiments on the widely used benchmarks MIMIC-III and MIMIC-IV demonstrate that MSIC advances state-of-the-art results on the quality of synthetic data while maintaining low privacy risks.
We prove the nonexistence of smooth stable solutions to the biharmonic problem $\Delta^2 u = u^p$, $u>0$ in $\mathbb{R}^N$ for $1 < p < \infty$ and $N < 2(1 + x_0)$, where $x_0$ is the largest root of the following equation: $$x^4 - \frac{32p(p+1)}{(p-1)^2}x^2 + \frac{32p(p+1)(p+3)}{(p-1)^3}x -\frac{64p(p+1)^2}{(p-1)^4} = 0.$$ In particular, since $x_0 > 5$ when $p > 1$, we obtain the nonexistence of smooth stable solutions for any $N \leq 12$ and $p > 1$. Moreover, we also consider the corresponding problem in the half space $\mathbb{R}^N_+$, as well as the elliptic problem $\Delta^2 u = \lambda(u+1)^p$ on a bounded smooth domain $\Omega$ with the Navier boundary conditions. We prove the regularity of the extremal solution in lower dimensions. Our results improve the previous works.
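The dimension condition above is easy to check numerically for a given exponent: find the largest real root $x_0$ of the stated quartic and evaluate $2(1+x_0)$. The sketch below does this by bisection for the sample value $p = 2$ (my choice); since the quartic is positive for large $x$, halving down from a large starting point brackets the last sign change.

```python
# Numerical check of the dimension condition: for a given p, find the largest
# real root x0 of the quartic in the abstract and verify x0 > 5 (which covers
# N <= 12). Pure-Python bisection sketch; p = 2 is an arbitrary sample value.

def quartic(x, p):
    a = 32 * p * (p + 1) / (p - 1) ** 2
    b = 32 * p * (p + 1) * (p + 3) / (p - 1) ** 3
    c = 64 * p * (p + 1) ** 2 / (p - 1) ** 4
    return x ** 4 - a * x ** 2 + b * x - c

def largest_root(p, start=1e6, tol=1e-10):
    # The quartic is positive for large x and negative at x = 0 (value -c),
    # so halving down from `start` brackets the last sign change.
    lo = start
    while quartic(lo, p) > 0:
        lo /= 2
    hi = 2 * lo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if quartic(mid, p) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x0 = largest_root(p=2.0)
print(x0, 2 * (1 + x0))  # x0 is about 10.55 for p = 2, comfortably above 5
```

For $p = 2$ this gives $x_0 \approx 10.55$ and hence nonexistence for $N < 2(1+x_0) \approx 23.1$, well beyond the universal bound $N \leq 12$.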
We consider in detail the calculation of the decay rate of high-energy superluminal neutrinos against (charged) lepton pair Cerenkov radiation (LPCR), and neutrino pair Cerenkov radiation (NPCR), i.e., against the decay channels nu -> nu e+ e- and nu -> nu nubar nu. Under the hypothesis of a tachyonic nature of neutrinos, these decay channels put constraints on the lifetime of high-energy neutrinos for terrestrial experiments as well as on cosmic scales. For the oncoming neutrino, we use the Lorentz-covariant tachyonic relation E_nu = (p^2 - m_nu^2)^(1/2), where m_nu is the tachyonic mass parameter. We derive both threshold conditions as well as decay and energy loss rates, using the plane-wave fundamental bispinor solutions of the tachyonic Dirac equation. Various intricacies of rest frame versus lab frame calculations are highlighted. The results are compared to the observations of high-energy IceCube neutrinos of cosmological origin.
In this note we discuss the invariance under general changes of reference frame of all the physical predictions of particle detector models in quantum field theory in general and, in particular, of those used in quantum optics to model atoms interacting with light. We find explicitly how the light-matter interaction Hamiltonians change under general coordinate transformations, and analyze the subtleties of the Hamiltonians commonly used to describe the light-matter interaction when relativistic motion is taken into account.
We discuss the dynamics of smooth diffeomorphisms of the disc with vanishing topological entropy which satisfy the mild dissipation property introduced in [CP]. In particular, this class contains the H\'enon maps with Jacobian up to 1/4. We prove that these systems are either (generalized) Morse-Smale or infinitely renormalizable. In particular, we prove for this class of diffeomorphisms a conjecture of Tresser: any diffeomorphism in the interface between the sets of systems with zero and positive entropy admits doubling cascades. This generalizes to these surface dynamics a well-known consequence of Sharkovskii's theorem for interval maps.
AI agents have drawn increasing attention, mostly for their ability to perceive environments, understand tasks, and autonomously achieve goals. To advance research on AI agents in mobile scenarios, we introduce the Android Multi-annotation EXpo (AMEX), a comprehensive, large-scale dataset designed for generalist mobile GUI-control agents, whose ability to complete complex tasks by directly interacting with the graphical user interface (GUI) on mobile devices can be trained and evaluated with the proposed dataset. AMEX comprises over 104K high-resolution screenshots from 110 popular mobile applications, which are annotated at multiple levels. Unlike existing mobile device-control datasets, e.g., MoTIF, AitW, etc., AMEX includes three levels of annotations: GUI interactive element grounding, GUI screen and element functionality descriptions, and complex natural language instructions, each averaging 13 steps with stepwise GUI-action chains. We develop this dataset from a more instructive and detailed perspective, complementing the general settings of existing datasets. Additionally, we develop a baseline model, SPHINX Agent, and compare its performance with state-of-the-art agents trained on other datasets. To facilitate further research, we open-source our dataset, models, and relevant evaluation tools. The project is available at https://yuxiangchai.github.io/AMEX/
In this work we relate the density of the first-passage time of a Wiener process to a moving boundary with the three dimensional Bessel bridge process and a solution of the heat equation with a moving boundary. We provide bounds.
Relativistic quantum chemistry has evolved into a fertile and large field and is now becoming an integrated part of mainstream chemistry. Yet, given the much-involved physics and mathematics (as compared with nonrelativistic quantum chemistry), it is still necessary to distill the essentials underlying relativistic electronic structure theories and methodologies (such that uninitiated readers can quickly pick up the right ideas and tools for further development or application) and meanwhile pinpoint future directions of the field. To this end, the three aspects of electronic structure calculations, i.e., relativity, correlation, and QED, will be highlighted.
The growth of the population of space debris in the geostationary ring and the resulting threat to active satellites require insight into the dynamics of uncontrolled objects in the region. A Monte Carlo simulation analyzed the sensitivity to initial conditions of the long-term evolution of geostationary spacecraft near an unstable point of the geopotential, where irregular behavior (e.g., transitions between long libration and continuous circulation) occurs. A statistical analysis unveiled sudden transitions from order to disorder, interspersed with intervals of smooth evolution. There is a periodicity of approximately half a century in the episodes of disorder, suggesting a connection with the precession of the orbital plane, due to Earth's oblateness and lunisolar perturbations. The third-degree harmonics of the geopotential also play a vital role. They introduce an asymmetry between the unstable equilibrium points, enabling the long libration mode. The unpredictability occurs just in a small fraction of the precession cycle, when the inclination is close to zero. A simplified model, including only gravity harmonics up to degree 3 and the Earth and Moon in circular coplanar orbits is capable of reproducing most features of the high-fidelity simulation.
A new discrete-velocity model is presented to solve the three-dimensional Euler equations. The velocities in the model are of an adaptive nature---both the origin of the discrete-velocity space and the magnitudes of the discrete velocities are dependent on the local flow---and are used in a finite volume context. The numerical implementation of the model follows the near-equilibrium flow method of Nadiga and Pullin [1] and results in a scheme which is second order in space (in the smooth regions, and between first and second order at discontinuities) and second order in time. (The three-dimensional code is included.) For one choice of the scaling between the magnitude of the discrete velocities and the local internal energy of the flow, the method reduces to a flux-splitting scheme based on characteristics. As a preliminary exercise, the result of the Sod shock-tube simulation is compared to the exact solution.
The ability to control the asymmetric propagation of light in nanophotonic waveguides is of fundamental importance for optical communications and on-chip signal processing. However, in most studies so far, the design of such structures has been based on asymmetric mode conversion involving multi-mode waveguides. Here we propose a hybrid plasmonic structure that exhibits optical diode behavior by breaking polarization symmetry in single-mode waveguides. The exploited physical mechanism is based on the combination of polarization rotation and polarization selection. The whole device is ultra-compact, with a footprint of 2.95 × 14.18 {\mu}m^2, much smaller than devices previously proposed for similar functions. The extinction ratio is greater than 11.8 dB for both forward and backward propagation at {\lambda} = 1550 nm (19.43 dB for forward propagation and 11.8 dB for the backward one). The operation bandwidth of the device is as large as 70 nm (from 1510 to 1580 nm) for an extinction ratio > 10 dB. These results may find important applications in integrated devices where polarization handling or unidirectional propagation is required.
Transfer learning between different language pairs has shown its effectiveness for Neural Machine Translation (NMT) in low-resource scenarios. However, existing transfer methods involving a common target language are far from successful in the extreme scenario of zero-shot translation, due to the language space mismatch problem between the transferor (the parent model) and the transferee (the child model) on the source side. To address this challenge, we propose an effective transfer learning approach based on cross-lingual pre-training. Our key idea is to make all source languages share the same feature space and thus enable a smooth transition for zero-shot translation. To this end, we introduce one monolingual pre-training method and two bilingual pre-training methods to obtain a universal encoder for different languages. Once the universal encoder is constructed, the parent model built on this encoder is trained with large-scale annotated data and then directly applied in the zero-shot translation scenario. Experiments on two public datasets show that our approach significantly outperforms a strong pivot-based baseline and various multilingual NMT approaches.
It is shown theoretically that the Luttinger liquid phase in quasi-one-dimensional conductors can exist in the presence of impurities in a form of a collection of bounded Luttinger liquids. The conclusion is based upon the observation by Kane and Fisher that a local impurity potential in Luttinger liquid acts, at low energies, as an infinite barrier. This leads to a discrete spectrum of collective charge and spin density fluctuations, so that interchain hopping can be considered as a small parameter at temperatures below the minimum excitation energy of the collective modes. The results are compared with recent experimental observation of a Luttinger-liquid-like behavior in thin NbSe$_3$ and TaS$_3$ wires.
Aims: Active Galactic Nuclei are known to be variable throughout the electromagnetic spectrum. An energy domain poorly studied in this respect is the hard X-ray range above 20 keV. Methods: The first 9 months of the Swift/BAT all-sky survey are used to study the 14 - 195 keV variability of the 44 brightest AGN, selected based on their detection significance of >10 sigma. We tested the variability using a maximum likelihood estimator and by analysing the structure function. Results: Probing different time scales, it appears that the absorbed AGN are more variable than the unabsorbed ones. The same applies to the comparison of Seyfert 2 and Seyfert 1 objects. As expected, the blazars show stronger variability. 15% of the non-blazar AGN show variability of >20% compared to the average flux on time scales of 20 days, and 30% show at least 10% flux variation. All the non-blazar AGN which show strong variability are low-luminosity objects with L(14-195 keV) < 1E44 erg/sec. Conclusions: Concerning the variability pattern, there is a tendency for unabsorbed or type 1 galaxies to be less variable than absorbed or type 2 objects at the hardest X-ray energies. A more solid anti-correlation is found between variability and luminosity, which has been previously observed in soft X-rays, in the UV, and in the optical domain.
Within smart manufacturing, data-driven techniques are commonly adopted for condition monitoring and fault diagnosis of rotating machinery. Classical approaches use supervised learning, where a classifier is trained on labeled data to predict or classify different operational states of the machine. However, in most industrial applications, labeled data is limited in terms of its size and type and hence cannot support supervised training. In this paper, this problem is tackled by addressing the classification task as a similarity measure to a reference sample rather than a supervised classification task. Similarity-based approaches require a limited amount of labeled data and hence meet the requirements of real-world industrial applications. Accordingly, the paper introduces a similarity-based framework for predictive maintenance (PdM) of rotating machinery. For each operational state of the machine, a reference vibration signal is generated and labeled according to the machine's operational condition. Subsequently, statistical time analysis, the fast Fourier transform (FFT), and the short-time Fourier transform (STFT) are used to extract features from the captured vibration signals. For each feature type, three similarity metrics, namely the structural similarity measure (SSM), cosine similarity, and Euclidean distance, are used to measure the similarity between test signals and reference signals in the feature space. Hence, nine settings in terms of feature type-similarity measure combinations are evaluated. Experimental results confirm the effectiveness of similarity-based approaches in achieving very high accuracy with moderate computational requirements compared to machine learning (ML)-based methods. Further, the results indicate that using FFT features with cosine similarity leads to better performance than the other settings.
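The classify-by-similarity idea above can be sketched in a few lines. This is a stand-in, not the paper's pipeline: only simple statistical time-domain features are used (the FFT and STFT feature types are omitted for brevity), the feature choices are mine, and the "vibration signals" are synthetic.

```python
import math

# Minimal sketch of similarity-based fault classification: extract a few
# statistical time-domain features from a signal and assign the label of the
# most cosine-similar labeled reference signal. Signals here are synthetic
# stand-ins for real vibration measurements.

def features(signal):
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)   # root mean square
    peak = max(abs(x) for x in signal)
    return [mean, rms, peak, peak / rms]              # crest factor last

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify(signal, references):
    """references: {state label: labeled reference signal}."""
    f = features(signal)
    return max(references, key=lambda s: cosine(f, features(references[s])))

# Synthetic stand-ins: a smooth "healthy" tone vs. a spiky "faulty" one.
healthy = [math.sin(0.1 * t) for t in range(400)]
faulty = [math.sin(0.1 * t) + (3.0 if t % 50 == 0 else 0.0) for t in range(400)]
refs = {"healthy": healthy, "faulty": faulty}

test_signal = [math.sin(0.1 * t + 0.2) for t in range(400)]  # shifted healthy
print(classify(test_signal, refs))  # should match the "healthy" reference
```

Swapping `features` for FFT-magnitude features, or `cosine` for Euclidean distance or SSM, reproduces the other feature-metric settings the paper evaluates.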
We investigate $ B_s^0\rightarrow l^+ l^-\gamma $ decays in a nonuniversal Z' model derived from an extension of the standard model (SM). Considering the Z'-mediated flavor changing neutral current (FCNC) effects, we calculate the branching ratio and forward-backward asymmetry for $ B_s^0\rightarrow l^+ l^-\gamma $ decay processes. We compare the obtained results with predictions of the SM and discuss the sensitivity of the observables to the Z' boson coupling parameters. We find that the branching ratios are enhanced by one order of magnitude over the SM predictions in the Z' model scenario. We also observe that the variation of the forward-backward asymmetry with the Z' boson coupling parameters discriminates between new physics (NP) effects and SM results.
Regular polygonal complexes in euclidean 3-space are discrete polyhedra-like structures with finite or infinite polygons as faces and with finite graphs as vertex-figures, such that their symmetry groups are transitive on the flags. The present paper and its predecessor describe a complete classification of regular polygonal complexes in 3-space. In Part I we established basic structural results for the symmetry groups, discussed operations on their generators, characterized the complexes with face mirrors as the 2-skeletons of the regular 4-apeirotopes in 3-space, and fully enumerated the simply flag-transitive complexes with mirror vector (1,2). In this paper, we complete the enumeration of all regular polygonal complexes and in particular describe the simply flag-transitive complexes for the remaining mirror vectors. It is found that, up to similarity, there are precisely 25 regular polygonal complexes which are not regular polyhedra, namely 21 simply flag-transitive complexes and 4 complexes which are 2-skeletons of regular 4-apeirotopes.
A multitude of explainability methods and associated fidelity performance metrics have been proposed to help better understand how modern AI systems make decisions. However, much of the current work has remained theoretical -- without much consideration for the human end-user. In particular, it is not yet known (1) how useful current explainability methods are in practice for more real-world scenarios and (2) how well associated performance metrics accurately predict how much knowledge individual explanations contribute to a human end-user trying to understand the inner workings of the system. To fill this gap, we conducted psychophysics experiments at scale to evaluate the ability of human participants to leverage representative attribution methods for understanding the behavior of different image classifiers representing three real-world scenarios: identifying bias in an AI system, characterizing the visual strategy it uses for tasks that are too difficult for an untrained non-expert human observer, and understanding its failure cases. Our results demonstrate that the degree to which individual attribution methods help human participants better understand an AI system varied widely across these scenarios. This suggests a critical need for the field to move past quantitative improvements of current attribution methods towards the development of complementary approaches that provide qualitatively different sources of information to human end-users.
We characterise, in the setting of the Kodaira-Spencer deformation theory, the twistor spaces of (co-)CR quaternionic manifolds. As an application, we prove that, locally, the leaf space of any nowhere zero quaternionic vector field on a quaternionic manifold is endowed with a natural co-CR quaternionic structure. Also, for any positive integers $k$ and $l$, with $kl$ even, we obtain the geometric objects whose twistorial counterparts are complex manifolds endowed with a conjugation without fixed points and which preserves an embedded Riemann sphere whose normal bundle is $l$ times the line bundle of Chern number $k$. We apply these results to prove the existence of natural classes of co-CR quaternionic manifolds.
In autonomous driving, 3D object detection based on multi-modal data has become an indispensable approach when facing complex environments around the vehicle. During multi-modal detection, LiDAR and camera are applied simultaneously for capturing and modeling. However, due to the intrinsic discrepancies between LiDAR points and camera images, the fusion of the data for object detection encounters a series of problems, and most multi-modal detection methods perform even worse than LiDAR-only methods. In this investigation, we propose a method named PTA-Det to improve the performance of multi-modal detection. Accompanying PTA-Det, a Pseudo Point Cloud Generation Network is proposed, which converts image information, including texture and semantic features, into pseudo points. Thereafter, through a transformer-based Point Fusion Transition (PFT) module, the features of LiDAR points and pseudo points from the image can be deeply fused under a unified point-based representation. The combination of these modules can overcome the major obstacle in feature fusion across modalities and realizes a complementary and discriminative representation for proposal generation. Extensive experiments on the KITTI dataset show that PTA-Det achieves competitive results, supporting its effectiveness.
The purpose of this paper is to complete the proof of the following result. Let $0 < \beta \leq \alpha < 1$ and $\kappa > 0$. Then, there exists $\eta > 0$ such that whenever $A,B \subset \mathbb{R}$ are Borel sets with $\dim_{\mathrm{H}} A = \alpha$ and $\dim_{\mathrm{H}} B = \beta$, then $$\dim_{\mathrm{H}} \{c \in \mathbb{R} : \dim_{\mathrm{H}} (A + cB) \leq \alpha + \eta\} \leq \tfrac{\alpha - \beta}{1 - \beta} + \kappa.$$ This extends a result of Bourgain from 2010, which contained the case $\alpha = \beta$. This paper is a sequel to the author's previous work from 2021 which, roughly speaking, established the same result with $\dim_{\mathrm{H}} (A + cB)$ replaced by $\dim_{\mathrm{B}}(A + cB)$, the box dimension of $A + cB$. It turns out that, at the level of $\delta$-discretised statements, the superficially weaker box dimension result formally implies the Hausdorff dimension result.
We present a method for computing the flux of energy through a closed surface containing a gravitating system. This method, which is based on the quasilocal formalism of Brown and York, is illustrated by two applications: a calculation of (i) the energy flux, via gravitational waves, through a surface near infinity and (ii) the tidal heating in the local asymptotic frame of a body interacting with an external tidal field. The second application represents the first use of the quasilocal formalism to study a non-stationary spacetime and shows how such methods can be used to study tidal effects in isolated gravitating systems.
Moore digraphs, that is, digraphs with out-degree $d$, diameter $k$ and order equal to the Moore bound $M(d,k) = 1 + d + d^2 + \dots + d^k$, arise in the study of optimal network topologies. In an attempt to find digraphs with a `Moore-like' structure, attention has recently been devoted to the study of small digraphs with minimum out-degree $d$ such that between any pair of vertices $u,v$ there is at most one directed path of length $\leq k$ from $u$ to $v$; such a digraph has order $M(d,k)+\epsilon$ for some small excess $\epsilon$. Sillasen et al. have shown that there are no digraphs with out-degree two and excess one. The present author has classified all digraphs with out-degree two and excess two. In this paper it is proven that there are no diregular digraphs with out-degree two and excess three for $k \geq 3$, thereby providing the first classification of digraphs with order three away from the Moore bound for a fixed out-degree.
Voice User Interfaces (VUIs), owing to recent developments in Artificial Intelligence (AI) and Natural Language Processing (NLP), are becoming increasingly intuitive and functional. They are especially promising for older adults, including those with special needs, as VUIs remove some barriers related to access to Information and Communications Technology (ICT) solutions. In this pilot study we examine interdisciplinary opportunities in the area of VUIs as assistive technologies, based on an exploratory study with older adults and a follow-up in-depth pilot study with two participants regarding the needs of people who are gradually losing their sight at a later age.
We consider the structure of anisotropic exchange interactions in ytterbium-based insulating rare-earth magnets built from edge-sharing octahedra. We argue that the features of trivalent ytterbium and this structural configuration allow for a qualitative determination of the different anisotropic exchange regimes that may manifest in such compounds. The validity of such super-exchange calculations is tested through comparison to the well-characterized breathing pyrochlore compound Ba$_3$Yb$_2$Zn$_5$O$_{11}$. With this in hand, we then consider application to three-dimensional pyrochlore spinels as well as two-dimensional honeycomb and triangular lattice systems built from such edge-sharing octahedra. We find an extended regime of robust emergent weak anisotropy with dominant antiferromagnetic Heisenberg interactions as well as smaller regions with strong anisotropy. We discuss the implications of our results for known compounds with the above structures, such as the spinels AYb$_2$X$_4$ (A = Cd, Mg; X = S, Se) and the triangular compound YbMgGaO$_4$, which have recently emerged as promising candidates for observing unconventional magnetic phenomena. Finally, we speculate on implications for the R$_2$M$_2$O$_7$ pyrochlore compounds and some little-studied honeycomb ytterbium magnets.
We give a proof of the H\"older continuity of weak solutions of certain degenerate doubly nonlinear parabolic equations in measure spaces. We only assume the measure to be a doubling non-trivial Borel measure which supports a Poincar\'e inequality. The proof discriminates between large scales, for which a Harnack inequality is used, and small scales, that require intrinsic scaling methods.
Pulsars are precision celestial clocks. When put in a binary, their ticking conveys the secrets of the underlying spacetime geometrodynamics. We use pulsars to test if the gravitational interaction possesses a tiny deviation from Einstein's General Relativity (GR). In the framework of the Standard-Model Extension (SME), we systematically search for Lorentz-violating operators cataloged by (a) the minimal couplings of mass dimension 4, (b) the CPT symmetry of mass dimension 5, and (c) the gravitational weak equivalence principle (GWEP) of mass dimension 8. No deviation from GR has been found yet.
The isolated horizon framework was introduced in order to provide a local description of black holes that are in equilibrium with their (possibly dynamic) environment. Over the past several years, the framework has been extended to include matter fields (dilaton, Yang-Mills etc) in D=4 dimensions and cosmological constant in $D\geq3$ dimensions. In this article we present a further extension of the framework that includes black holes in higher-dimensional Einstein-Gauss-Bonnet (EGB) gravity. In particular, we construct a covariant phase space for EGB gravity in arbitrary dimensions which allows us to derive the first law. We find that the entropy of a weakly isolated and non-rotating horizon is given by $\mathcal{S}=(1/4G_{D})\oint_{S^{D-2}}\bm{\tilde{\epsilon}}(1+2\alpha\mathcal{R})$. In this expression $S^{D-2}$ is the $(D-2)$-dimensional cross section of the horizon with area form $\bm{\tilde{\epsilon}}$ and Ricci scalar $\mathcal{R}$, $G_{D}$ is the $D$-dimensional Newton constant and $\alpha$ is the Gauss-Bonnet parameter. This expression for the horizon entropy is in agreement with those predicted by the Euclidean and Noether charge methods. Thus we extend the isolated horizon framework beyond Einstein gravity.
Quasi branch and bound is a recently introduced generalization of branch and bound, where lower bounds are replaced by a relaxed notion of quasi-lower bounds, required to be lower bounds only for sub-cubes containing a minimizer. This paper is devoted to studying the possible benefits of this approach for the problem of minimizing a smooth function over a cube. This is accomplished by suggesting two quasi branch and bound algorithms, qBnB(2) and qBnB(3), that compare favorably with alternative branch and bound algorithms. The first algorithm we propose, qBnB(2), achieves second order convergence based only on a bound on second derivatives, without requiring calculation of derivatives. As such, this algorithm is suitable for derivative-free optimization, for which typical algorithms such as Lipschitz optimization only have first order convergence and thus suffer from limited accuracy due to the clustering problem. Additionally, qBnB(2) is provably more efficient than the second order Lipschitz gradient algorithm, which does require exact calculation of gradients. The second algorithm we propose, qBnB(3), has third order convergence and finite termination. In contrast with BnB algorithms with similar guarantees, which typically compute lower bounds by solving relatively time-consuming convex optimization problems, the calculation of qBnB(3) bounds only requires a small number of Newton iterations. Our experiments verify the potential of both these methods in comparison with state-of-the-art branch and bound algorithms.
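The quasi-lower-bound idea can be sketched in one dimension (notation and code mine, not the paper's). If an interval with midpoint c and width w contains a minimizer x*, then f'(x*) = 0, and a Taylor bound with |f''| <= M gives f(c) <= f(x*) + (M/2)(w/2)^2. Hence q = f(c) - M w^2 / 8 is a lower bound on such intervals only, i.e. a quasi-lower bound: pruning with it never discards the global minimizer, yet it needs no derivative evaluations, in the spirit of qBnB(2).

```python
import heapq

# Derivative-free 1-D quasi branch and bound sketch: each interval gets the
# quasi-lower bound q = f(midpoint) - M*w^2/8, valid only for intervals that
# contain a minimizer -- which is exactly enough to prune safely.

def qbnb_minimize(f, a, b, M, tol=1e-8, max_iter=100000):
    c = 0.5 * (a + b)
    best_x, best_f = c, f(c)
    heap = [(best_f - M * (b - a) ** 2 / 8, a, b)]   # (quasi-lower bound, lo, hi)
    for _ in range(max_iter):
        q, lo, hi = heapq.heappop(heap)
        if q > best_f - tol:        # no interval can beat the incumbent
            return best_x, best_f
        mid = 0.5 * (lo + hi)
        for l, h in ((lo, mid), (mid, hi)):
            c = 0.5 * (l + h)
            fc = f(c)
            if fc < best_f:
                best_x, best_f = c, fc
            heapq.heappush(heap, (fc - M * (h - l) ** 2 / 8, l, h))
    return best_x, best_f

# Example: f(x) = (x - 0.3)^2 + 0.5 on [0, 1], with |f''| = 2 <= M.
x, v = qbnb_minimize(lambda x: (x - 0.3) ** 2 + 0.5, 0.0, 1.0, M=2.0)
print(x, v)  # close to the minimizer (0.3, 0.5)
```

If an interval containing x* were popped with q > best_f - tol, then f(c) - M w^2/8 > best_f >= f(x*) would contradict the Taylor bound, so the pruning test is safe; intervals not containing x* may be pruned freely.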
Using an approach that treats the Ricci scalar itself as a degree of freedom, we analyze the cosmological evolution within an f(R) model that has been proposed recently (exponential gravity) and that can be viable for explaining the accelerated expansion and other features of the Universe. This approach differs from the usual scalar-tensor method and, among other things, it spares us from dealing with unnecessary discussions about frames. It also leads to a simple system of equations which is particularly suited for a numerical analysis.
A method is presented for deriving the nonrelativistic quantum Hamiltonian of a free massive fermion from the relativistic Lagrangian of the Lorentz-violating standard-model extension. It permits the extraction of terms at arbitrary order in a Foldy-Wouthuysen expansion in inverse powers of the mass. The quantum particle Hamiltonian is obtained and its nonrelativistic limit is given explicitly to third order.
Recent work on entity coreference resolution (CR) follows current trends in Deep Learning applied to embeddings and relatively simple task-related features. SOTA models do not make use of hierarchical representations of discourse structure. In this work, we leverage automatically constructed discourse parse trees within a neural approach and demonstrate a significant improvement on two benchmark entity coreference resolution datasets. We explore how the impact varies depending upon the type of mention.
We present the analysis of the Chandra X-ray Observatory observations of the eccentric gamma-ray binary PSR B1259-63/LS 2883. The analysis shows that the extended X-ray feature seen in previous observations is still moving away from the binary with an average projected velocity of about 0.07c and shows a hint of acceleration. The spectrum of the feature appears to be hard (photon index of 0.8) with no sign of softening compared to previously measured values. We interpret it as a clump of plasma ejected from the binary through the interaction of the pulsar with the decretion disk of the O-star around periastron passage. We suggest that the clump is moving in the unshocked relativistic pulsar wind (PW), which can accelerate the clump. Its X-ray emission can be interpreted as synchrotron radiation of the PW shocked by the collision with the clump.
We introduce new methods for phylogenetic tree quartet construction by using machine learning to optimize the power of phylogenetic invariants. Phylogenetic invariants are polynomials in the joint probabilities which vanish under a model of evolution on a phylogenetic tree. We give algorithms for selecting a good set of invariants and for learning a metric on this set of invariants which optimally distinguishes the different models. Our learning algorithms involve linear and semidefinite programming on data simulated over a wide range of parameters. We provide extensive tests of the learned metrics on simulated data from phylogenetic trees with four leaves under the Jukes-Cantor and Kimura 3-parameter models of DNA evolution. Our method greatly improves on other uses of invariants and is competitive with or better than neighbor joining. In particular, we obtain metrics trained on trees with short internal branches which perform much better than neighbor joining on this region of parameter space.
In perturbative QCD with $N_c\to\infty$, equations for the amplitude of nucleus-nucleus scattering are derived by the effective field method. The asymptotic form of the solution is discussed. It is argued that in the high-energy limit the total nucleus-nucleus cross-sections become constant and purely geometrical.
Computation of collisional energy loss in a finite size QCD medium has become crucial to obtain reliable predictions for jet quenching in ultra-relativistic heavy ion collisions. We here compute this energy loss up to the zeroth order in opacity. Our approach consistently treats both soft and hard contributions to the collisional energy loss. Consequently, it removes the unphysical energy gain in a region of lower momenta obtained by previous computations. Most importantly, we show that for characteristic QCD medium scales, finite size effects on the collisional energy loss are not significant.
Strongly correlated layered 2D systems are of central importance in condensed matter physics, but their numerical study is very challenging. Motivated by the enormous successes of tensor networks for 1D and 2D systems, we develop an efficient tensor network approach based on infinite projected entangled-pair states (iPEPS) for layered 2D systems. Starting from an anisotropic 3D iPEPS ansatz, we propose a contraction scheme in which the weakly-interacting layers are effectively decoupled away from the center of the layers, such that they can be efficiently contracted using 2D contraction methods while keeping the center of the layers connected in order to capture the most relevant interlayer correlations. We present benchmark data for the anisotropic 3D Heisenberg model on a cubic lattice, which shows close agreement with quantum Monte Carlo (QMC) and full 3D contraction results. Finally, we study the dimer to N\'eel phase transition in the Shastry-Sutherland model with interlayer coupling, a frustrated spin model which is out of reach of QMC due to the negative sign problem.
We present examples of simple electromagnetic systems in which energy, linear momentum, and angular momentum exhibit interesting behavior. The systems are sufficiently simple to allow exact solutions of Maxwell's equations in conjunction with the electrodynamic laws of force, torque, energy, and momentum. In all the cases examined, conservation of energy and momentum is confirmed.
In a distributed storage system (DSS), regenerating codes are used to optimize bandwidth in the repair process of a failed node. To optimize other DSS parameters such as computation and disk I/O, Distributed Replication-based Simple Storage (DRESS) codes consisting of an inner Fractional Repetition (FR) code and an outer MDS code are commonly used. Thus, constructing FR codes is an important research problem, and several constructions using graphs and designs have been proposed. In this paper, we present an algorithm for constructing the node-packet distribution matrix of FR codes and thus enumerate some FR codes up to a given number of nodes n. We also present algorithms for constructing regular graphs which give rise to FR codes.
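As a minimal illustration of the graph-based constructions mentioned above (a sketch under our own conventions, not the paper's enumeration algorithm), an FR code with repetition degree $\rho = 2$ can be read off from any graph by taking vertices as storage nodes and edges as packets; the node-packet distribution matrix is then just the vertex-edge incidence matrix:

```python
import itertools

def fr_code_from_graph(n_nodes, edges):
    """Fractional repetition code from a graph: storage nodes are vertices,
    coded packets are edges, and each node stores the packets on its
    incident edges, so every packet is replicated on exactly rho = 2 nodes."""
    n_packets = len(edges)
    # node-packet distribution (incidence) matrix: rows = nodes, cols = packets
    M = [[0] * n_packets for _ in range(n_nodes)]
    for j, (u, v) in enumerate(edges):
        M[u][j] = 1
        M[v][j] = 1
    return M

# K4 is 3-regular: this yields an FR code with n = 4 nodes, 6 packets,
# per-node storage alpha = 3 and repetition degree rho = 2
edges = list(itertools.combinations(range(4), 2))
M = fr_code_from_graph(4, edges)
for row in M:
    print(row)
```

For a d-regular graph every row sums to d (the per-node storage) and every column sums to 2 (the repetition degree), which is the defining property of this FR family.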
Shock wave lithotripsy (SWL) has been widely used for non-invasive treatment of kidney stones. Cavitation plays an important role in stone fragmentation, yet may also contribute to renal injury during SWL. It is therefore crucial to determine the spatiotemporal distributions of cavitation activities to maximize stone fragmentation while minimizing tissue injury. Traditional cavitation detection methods include high-speed optical imaging, active cavitation mapping (ACM), and passive cavitation mapping (PCM). While each of the three methods provides unique information about the dynamics of the bubbles, PCM is the most practical for applications in biological tissues. To image the dynamics of cavitation bubble collapse, we previously developed a sliding-window PCM (SW-PCM) method to identify each bubble collapse with high temporal and spatial resolution. To further validate and optimize the SW-PCM method, in this work, we have developed tri-modality cavitation imaging that includes 3D high-speed optical imaging, ACM, and PCM seamlessly integrated in a single system. Using the tri-modality system, we imaged and analyzed laser-induced single cavitation bubbles in both free and constricted space and shockwave-induced cavitation clusters. Collectively, our results have demonstrated the high reliability and spatiotemporal accuracy of the SW-PCM approach, which paves the way for future in vivo applications on large animals and humans in SWL.
Fundamental solutions for the free Dirac electron and Einstein photon equations in position coordinates are constructed as matrix valued functionals on the space of bump functions. It is shown that these fundamental solutions are related by a unitary transform via the Cauchy distribution in imaginary time. We study the way the classical relativistic mechanics of the free particle comes from the quantum mechanics of the free Dirac electron.
We investigate a large class of linear boundary value problems for the general first-order one-dimensional hyperbolic systems in the strip $[0,1]\times\mathbb{R}$. We state rather broad natural conditions on the data under which the operators of the problems satisfy the Fredholm alternative in the spaces of continuous and time-periodic functions. A crucial ingredient of our analysis is a non-resonance condition, which is formulated in terms of the data responsible for the bijective part of the Fredholm operator. In the case of $2\times 2$ systems with reflection boundary conditions, we provide a criterion for the non-resonant behavior of the system.
Generative artificial intelligence (AI) is poised to reshape the way individuals communicate and interact. While this form of AI has the potential to efficiently make numerous human decisions, there is limited understanding of how individuals respond to its use in social interaction. In particular, it remains unclear how individuals engage with algorithms when the interaction entails consequences for other people. Here, we report the results of a large-scale pre-registered online experiment (N = 3,552) indicating diminished fairness, trust, trustworthiness, cooperation, and coordination by human players in economic two-player games, when the decision of the interaction partner is taken over by ChatGPT. By contrast, we observe no adverse welfare effects when individuals are uncertain about whether they are interacting with a human or generative AI. Therefore, the promotion of AI transparency, often suggested as a solution to mitigate the negative impacts of generative AI on society, shows a detrimental effect on welfare in our study. Concurrently, participants frequently delegate decisions to ChatGPT, particularly when the AI's involvement is undisclosed, and individuals struggle to discern between AI and human decisions.
We prove that for the $d$-regular tessellations of the hyperbolic plane by $k$-gons, there are exponentially more self-avoiding walks of length $n$ than there are self-avoiding polygons of length $n$. We then prove that this property implies that the self-avoiding walk is ballistic, even on an arbitrary vertex-transitive graph. Moreover, for every fixed $k$, we show that the connective constant for self-avoiding walks satisfies the asymptotic expansion $d-1-O(1/d)$ as $d\to \infty$; on the other hand, the connective constant for self-avoiding polygons remains bounded. Finally, we show for all but two tessellations that the number of self-avoiding walks of length $n$ is comparable to the $n$th power of their connective constant. Some of these results were previously obtained by Madras and Wu \cite{MaWuSAW} for all but finitely many regular tessellations of the hyperbolic plane.
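The quantities in play can be made concrete by brute-force enumeration. The sketch below counts self-avoiding walks on the Euclidean lattice $\mathbb{Z}^2$ (not a hyperbolic tessellation, which would require a different adjacency structure) and forms the crude finite-$n$ estimate $c_n^{1/n}$ of the connective constant:

```python
def count_saws(n):
    """Count self-avoiding walks of length n on Z^2 starting at the origin,
    by depth-first search that tracks the set of visited sites."""
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    def dfs(pos, visited, k):
        if k == 0:
            return 1
        total = 0
        for dx, dy in steps:
            nxt = (pos[0] + dx, pos[1] + dy)
            if nxt not in visited:   # self-avoidance constraint
                visited.add(nxt)
                total += dfs(nxt, visited, k - 1)
                visited.remove(nxt)
        return total
    return dfs((0, 0), {(0, 0)}, n)

print([count_saws(n) for n in range(1, 6)])  # [4, 12, 36, 100, 284]
# the finite-n estimate c_n^(1/n) overshoots the connective constant (~2.638)
print(count_saws(8) ** (1 / 8))
```

On the hyperbolic tessellations of the theorem, the same enumeration idea applies once the $d$-regular, $k$-gonal adjacency replaces the four lattice steps.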
As data collections become larger, exploratory regression analysis becomes more important but more challenging. When observations are hierarchically clustered the problem is even more challenging because model selection with mixed effect models can produce misleading results when nonlinear effects are not included into the model (Bauer and Cai, 2009). A machine learning method called boosted decision trees (Friedman, 2001) is a good approach for exploratory regression analysis in real data sets because it can detect predictors with nonlinear and interaction effects while also accounting for missing data. We propose an extension to boosted decision trees called metboost for hierarchically clustered data. It works by constraining the structure of each tree to be the same across groups, but allowing the terminal node means to differ. This allows predictors and split points to lead to different predictions within each group, and approximates nonlinear group specific effects. Importantly, metboost remains computationally feasible for thousands of observations and hundreds of predictors that may contain missing values. We apply the method to predict math performance for 15,240 students from 751 schools in data collected in the Educational Longitudinal Study 2002 (Ingels et al., 2007), allowing 76 predictors to have unique effects for each school. When comparing results to boosted decision trees, metboost has 15% improved prediction performance. Results of a large simulation study show that metboost has up to 70% improved variable selection performance and up to 30% improved prediction performance compared to boosted decision trees when group sizes are small.
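The core idea of metboost — tree structure shared across groups, terminal-node means estimated per group — can be sketched with depth-1 trees in plain NumPy. This is an illustrative toy under our own conventions (squared-error stumps, fixed learning rate), not the authors' implementation:

```python
import numpy as np

def best_split(x, r):
    """Shared split point: minimizes pooled squared error of residuals r."""
    best_sse, best_s = np.inf, None
    for s in np.unique(x)[:-1]:
        left, right = r[x <= s], r[x > s]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_s = sse, s
    return best_s

def metboost(x, y, g, lr=0.5, rounds=25):
    """Each round fits one depth-1 tree whose split is shared across groups
    while the terminal-node means are estimated separately per group g."""
    pred = np.full(len(y), y.mean())
    for _ in range(rounds):
        r = y - pred
        s = best_split(x, r)
        for grp in np.unique(g):
            for leaf in ((g == grp) & (x <= s), (g == grp) & (x > s)):
                if leaf.any():
                    pred[leaf] += lr * r[leaf].mean()
    return pred

# toy data: two groups share a breakpoint at x = 0.5 but jump by different amounts
x = np.tile(np.linspace(0.01, 0.99, 50), 2)
g = np.repeat([0, 1], 50)
y = (x > 0.5) * 1.0 + 2.0 * g * (x > 0.5)
pred = metboost(x, y, g)
print(round(float(np.mean((y - pred) ** 2)), 6))  # near 0: both jumps recovered
```

Because the split is fit on pooled residuals but the leaf means are group-specific, a single shared structure recovers different step heights in each group, which is the group-specific effect the abstract describes.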
For a certain class of asymmetric L\'evy processes where the origin is regular for itself, the renormalized zero resolvent is proved to be harmonic for the killed process upon hitting zero.
In this paper we propose a new efficient approach for the numerical calculation of equilibria in multistage transport problems. At the core of our approach lies the proper combination of the Universal Gradient Method proposed by Yu. Nesterov (2013) and the concept of an inexact oracle (Devolder--Glineur--Nesterov, 2011). In particular, our technique allows us to calculate the Wasserstein barycenter in a fast manner (this result generalizes that of M. Cuturi et al. (2014)).
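For a concrete point of comparison, the entropically regularized Wasserstein barycenter of Cuturi et al. can be computed by Sinkhorn-type iterative Bregman projections. The sketch below (our own variable names and parameters, not the paper's accelerated method) illustrates the computation for histograms on a shared 1D grid:

```python
import numpy as np

def wasserstein_barycenter(P, C, eps=0.05, iters=300):
    """Entropic Wasserstein barycenter of the rows of P (histograms on a
    common support with ground cost matrix C) via Sinkhorn-style
    iterative Bregman projections with equal weights."""
    K = np.exp(-C / eps)                     # Gibbs kernel
    m, n = P.shape
    U = np.ones((m, n))
    for _ in range(iters):
        V = P / (K.T @ U.T).T                # match the input marginals
        KV = (K @ V.T).T
        b = np.exp(np.log(KV).mean(axis=0))  # geometric mean across inputs
        U = b[None, :] / KV
    return b / b.sum()

# barycenter of two narrow bumps at 0.2 and 0.8 on a 1D grid
x = np.linspace(0.0, 1.0, 50)
C = (x[:, None] - x[None, :]) ** 2
p1 = np.exp(-((x - 0.2) ** 2) / 0.002); p1 /= p1.sum()
p2 = np.exp(-((x - 0.8) ** 2) / 0.002); p2 /= p2.sum()
bar = wasserstein_barycenter(np.vstack([p1, p2]), C)
print(x[np.argmax(bar)])  # mass concentrates near the midpoint 0.5
```

With the quadratic ground cost, the barycenter of two symmetric bumps places its mass around the displacement midpoint, broadened by the entropic regularization eps.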
In matrix theory, a well-established relation $(AB)^{T}=B^{T}A^{T}$ holds for any two matrices $A$ and $B$ for which the product $AB$ is defined. Here $T$ denotes the usual transposition. In this work, we explore the possibility of deriving the matrix equality $(AB)^{\Gamma}=A^{\Gamma}B^{\Gamma}$ for any $4 \times 4$ matrices $A$ and $B$, where $\Gamma$ denotes the partial transposition. We find that, in general, $(AB)^{\Gamma}\neq A^{\Gamma}B^{\Gamma}$ holds for $4 \times 4$ matrices $A$ and $B$, but there exist particular sets of $4 \times 4$ matrices for which $(AB)^{\Gamma}= A^{\Gamma}B^{\Gamma}$ holds. We exploit this matrix equality to investigate the separability problem. Since it is possible to decompose a density matrix $\rho$ into two positive semi-definite matrices $A$ and $B$, we are able to derive the separability condition for $\rho$ when $\rho^{\Gamma}=(AB)^{\Gamma}=A^{\Gamma}B^{\Gamma}$ holds. Due to the non-uniqueness of the decomposition of the density matrix into two positive semi-definite matrices $A$ and $B$, there is a possibility to generalize the matrix equality to density matrices living in higher dimensions. These results may help in studying the separability problem for higher-dimensional and multipartite systems.
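The two regimes described above are easy to verify numerically. The following sketch uses the block-wise definition of partial transposition over the second qubit of a $4 \times 4$ matrix viewed as an operator on $\mathbb{C}^2 \otimes \mathbb{C}^2$, and exhibits both a generic failure of the equality and a particular family (matrices of the form $X \otimes I$) where it holds:

```python
import numpy as np

def partial_transpose(M):
    """Partial transposition (on the second factor) of a 4x4 matrix viewed
    as an operator on C^2 (x) C^2: transpose each 2x2 block."""
    T = M.reshape(2, 2, 2, 2)       # axes (i, k, j, l): row = 2i + k, col = 2j + l
    return T.transpose(0, 3, 2, 1).reshape(4, 4)  # swap the k and l indices

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))

# generically the equality fails ...
print(np.allclose(partial_transpose(A @ B),
                  partial_transpose(A) @ partial_transpose(B)))  # False

# ... but it holds, e.g., when A = X (x) I and B = Y (x) I
A2 = np.kron(rng.standard_normal((2, 2)), np.eye(2))
B2 = np.kron(rng.standard_normal((2, 2)), np.eye(2))
print(np.allclose(partial_transpose(A2 @ B2),
                  partial_transpose(A2) @ partial_transpose(B2)))  # True
```

The second case works because $(X \otimes I)(Y \otimes I) = XY \otimes I$ and the identity block is invariant under transposition, which is one simple instance of the "particular sets" referred to in the abstract.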
The vertical shear instability (VSI) is a hydrodynamical instability that requires rapid gas cooling and has been suggested to operate in outer regions of protoplanetary disks. The VSI drives turbulence with strong vertical motions, which could regulate the dust growth and settling. However, dust growth and settling can regulate the VSI because dust depletion makes gas cooling inefficient in outer disk regions that are optically thin to their own thermal emission. In this study, we quantify these potentially stabilizing effects of dust evolution on the VSI based on linear analysis. We construct a model for calculating the cooling timescale, taking into account dust growth beyond micron sizes and size-dependent settling. Combining the model with the linear stability analysis, we map the region where the VSI operates, which we call the VSI zone, and estimate the maximum growth rate at each radial position. We find that dust growth as well as settling makes the VSI zone more confined around the midplane. This causes a decrease in the growth rate because the vertical shear of the rotation velocity, which is the source of the instability, is weaker at lower altitude. In our default disk model with 0.01 solar masses, dust growth from 10 micron to 1 mm causes a decrease in the growth rate by a factor of more than 10. The suppression of VSI-driven turbulence by dust evolution may promote further dust evolution in the outer regions and also explain a high degree of dust settling observed in the disk around HL Tau.
Recent research has shown that word embedding spaces learned from text corpora of different languages can be aligned without any parallel data supervision. Inspired by the success in unsupervised cross-lingual word embeddings, in this paper we target learning a cross-modal alignment between the embedding spaces of speech and text learned from corpora of their respective modalities in an unsupervised fashion. The proposed framework learns the individual speech and text embedding spaces, and attempts to align the two spaces via adversarial training, followed by a refinement procedure. We show how our framework could be used to perform spoken word classification and translation, and the results on these two tasks demonstrate that the performance of our unsupervised alignment approach is comparable to its supervised counterpart. Our framework is especially useful for developing automatic speech recognition (ASR) and speech-to-text translation systems for low- or zero-resource languages, which have little parallel audio-text data for training modern supervised ASR and speech-to-text translation models, but account for the majority of the languages spoken across the world.
We uncover the quantum fluctuation-response inequality, which, in the most general setting, establishes a bound for the mean difference of an observable at two different quantum states, in terms of the quantum relative entropy. When the spectrum of the observable is bounded, the sub-Gaussian property is used to further our result by explicitly linking the bound with the sub-Gaussian norm of the observable, based on which we derive a novel bound for the sum of statistical errors in quantum hypothesis testing. This error bound holds nonasymptotically and is stronger and more informative than that based on quantum Pinsker's inequality. We also show the versatility of our results by their applications in problems like thermodynamic inference and speed limit.
The emergence of Linked Data on the WWW has spawned research interest in an online execution of declarative queries over this data. A particularly interesting approach is traversal-based query execution which fetches data by traversing data links and, thus, is able to make use of up-to-date data from initially unknown data sources. The downside of this approach is the delay before the query engine completes a query execution. In this paper, we address this problem by proposing an approach to return as many elements of the result set as soon as possible. The basis of this approach is a traversal strategy that aims to fetch result-relevant data as early as possible. The challenge for such a strategy is that the query engine does not know a priori which of the data sources that will be discovered during the query execution contain result-relevant data. We introduce 16 different traversal approaches and experimentally study their impact on response times. Our experiments show that some of the approaches can achieve significant improvements over the baseline of looking up URIs on a first-come, first-served basis. Additionally, we verify the importance of these approaches by showing that typical query optimization techniques that focus solely on the process of constructing the query result cannot have any significant impact on the response times of traversal-based query executions.
New hard-scattering measurements from the LHC proton-lead run have the potential to provide important constraints on the nuclear parton distributions and thus contribute to a better understanding of the initial state in heavy ion collisions. In order to quantify these constraints, as well as to assess the compatibility with available nuclear data from fixed target experiments and from RHIC, the traditional strategy is to perform a global fit of nuclear PDFs. This procedure is, however, time-consuming and technically challenging, and moreover can only be performed by the PDF fitters themselves. In the case of proton PDFs, an alternative approach has been suggested that uses Bayesian inference to propagate the effects of new data into the PDFs without the need of refitting. In this work, we apply this reweighting procedure to study the impact on nuclear PDFs of low-mass Drell-Yan and single-inclusive hadroproduction pseudo-data from proton-lead collisions at the LHC as representative examples. In the hadroproduction case, in addition we assess the possibility of discriminating between the DGLAP and CGC production frameworks. We find that the LHC proton-lead data could lead to a substantial reduction of the uncertainties on nuclear PDFs, in particular for the small-x gluon PDF where uncertainties could decrease by up to a factor two. The Monte Carlo replicas of EPS09 used in the analysis are released as a public code for general use. It can be directly used, in particular, by the experimental collaborations to check, in a straightforward manner, the degree of compatibility of the new data with the global nPDF analyses.
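The reweighting step itself is simple to state. A sketch of Giele-Keller-style weights in the NNPDF convention is given below, with toy pseudo-replicas of our own invention (this is not the released EPS09 replica code, only an illustration of the weight formula):

```python
import numpy as np

def gk_weights(preds, data, cov):
    """Bayesian reweighting of N_rep Monte Carlo replicas against new data:
    replica k gets weight proportional to chi2_k^((n-1)/2) * exp(-chi2_k/2),
    normalized so that the weights sum to N_rep."""
    n = len(data)
    cinv = np.linalg.inv(cov)
    d = preds - data                                   # shape (N_rep, n)
    chi2 = np.einsum('ki,ij,kj->k', d, cinv, d)
    logw = 0.5 * (n - 1) * np.log(chi2) - 0.5 * chi2
    w = np.exp(logw - logw.max())                      # numerically stabilized
    w *= len(w) / w.sum()                              # sum(w) = N_rep
    wc = np.clip(w, 1e-12, None)
    n_eff = float(np.exp(np.mean(wc * np.log(len(w) / wc))))  # effective replicas
    return w, n_eff

# toy example: 100 pseudo-replicas scored against 5 new data points
rng = np.random.default_rng(2)
data, cov = np.zeros(5), 0.1 * np.eye(5)
preds = rng.normal(0.0, 1.0, size=(100, 5))
w, n_eff = gk_weights(preds, data, cov)
print(n_eff)  # far fewer effective replicas than 100 after reweighting
```

The effective number of replicas quantifies how much the new data constrains the set: when it drops well below N_rep, the reweighted ensemble carries reduced uncertainty but also signals that a refit may eventually be needed.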
Let $D$ be a digraph. We define the minimum semi-degree of $D$ as $\delta^{0}(D) := \min \{\delta^{+}(D), \delta^{-}(D)\}$. Let $k$ be a positive integer, and let $S = \{s\}$ and $T = \{t_{1}, \dots ,t_{k}\}$ be any two disjoint subsets of $V(D)$. A set of $k$ internally disjoint paths joining source set $S$ and sink set $T$ that covers all vertices of $D$ is called a one-to-many $k$-disjoint directed path cover ($k$-DDPC for short) of $D$. A digraph $D$ is semicomplete if for every pair $x,y$ of vertices of it, there is at least one arc between $x$ and $y$. In this paper, we prove that every semicomplete digraph $D$ of sufficiently large order $n$ with $\delta^{0}(D) \geq \lceil (n+k-1)/2\rceil$ has a one-to-many $k$-DDPC joining any disjoint source set $S$ and sink set $T$, where $S = \{s\}, T = \{t_{1}, \dots, t_{k}\}$.
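The degree hypothesis in the theorem is straightforward to check computationally. A small sketch (helper names are ours) computes $\delta^{0}(D)$ and tests semicompleteness on an arc list:

```python
from itertools import combinations

def min_semi_degree(vertices, arcs):
    """delta^0(D) = min over v of min(out-degree(v), in-degree(v))."""
    outd = {v: 0 for v in vertices}
    ind = {v: 0 for v in vertices}
    for u, v in arcs:
        outd[u] += 1
        ind[v] += 1
    return min(min(outd[v], ind[v]) for v in vertices)

def is_semicomplete(vertices, arcs):
    """At least one arc between every pair of distinct vertices."""
    A = set(arcs)
    return all((u, v) in A or (v, u) in A for u, v in combinations(vertices, 2))

# a directed triangle is semicomplete with delta^0 = 1
V, E = [0, 1, 2], [(0, 1), (1, 2), (2, 0)]
print(min_semi_degree(V, E), is_semicomplete(V, E))  # 1 True
```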