Let T be an involution of the finite dimensional complex reductive Lie algebra g and g=k+p be the associated Cartan decomposition. Denote by K the adjoint group of k. The K-module p is the union of the subsets p^{(m)}={x | dim K.x =m}, indexed by integers m, and the K-sheets of (g,T) are the irreducible components of the p^{(m)}. The sheets can be, in turn, written as a union of so-called Jordan K-classes. We introduce conditions in order to describe the sheets and Jordan K-classes in terms of Slodowy slices. When g is of classical type, the K-sheets are shown to be smooth; if g=gl_N a complete description of sheets and Jordan K-classes is then obtained.
We show how the semiclassical Langevin method can be extended to calculations of higher-than-second cumulants of noise. These cumulants are affected by indirect correlations between the fluctuations, which may be considered as "noise of noise." We formulate simple diagrammatic rules for calculating the higher cumulants and apply them to mesoscopic diffusive contacts and chaotic cavities. As one application of the method, we analyze the frequency dependence of the third cumulant of current in these systems and show that it contains additional peculiarities as compared to the second cumulant. The effects of environmental feedback in measurements of the third cumulant are also discussed in terms of this method.
This paper proposes high-order accurate well-balanced (WB) energy stable (ES) adaptive moving mesh finite difference schemes for the shallow water equations (SWEs) with non-flat bottom topography. To enable the construction of the ES schemes on moving meshes, a reformulation of the SWEs is introduced, with the bottom topography as an additional conservative variable that evolves in time. The corresponding energy inequality is derived based on a modified energy function, then the reformulated SWEs and energy inequality are transformed into curvilinear coordinates. A two-point energy conservative (EC) flux is constructed, and high-order EC schemes based on such a flux are proved to be WB in the sense that they preserve the lake-at-rest steady state. Then high-order ES schemes are derived by adding suitable dissipation terms to the EC schemes, which are newly designed to maintain the WB and ES properties simultaneously. The adaptive moving mesh strategy is performed by iteratively solving the Euler-Lagrange equations of a mesh adaptation functional. The fully-discrete schemes are obtained by using the explicit strong-stability preserving third-order Runge-Kutta method. Several numerical tests are conducted to validate the accuracy, WB and ES properties, shock-capturing ability, and high efficiency of the schemes.
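For reference, a standard one-dimensional form of the SWEs with bottom topography $b(x)$, together with the lake-at-rest steady state that a WB scheme must preserve exactly (this is the textbook formulation, not the reformulation with evolving topography used in the paper):

\[
\begin{aligned}
h_t + (hu)_x &= 0,\\
(hu)_t + \bigl(hu^2 + \tfrac12 g h^2\bigr)_x &= -\,g\,h\,b_x,
\end{aligned}
\qquad \text{lake at rest: } u = 0,\ \ h + b = \text{const}.
\]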
Deep reinforcement learning (DRL) has revolutionized learning and actuation in applications such as game playing and robotic control. The cost of data collection, i.e., generating transitions from agent-environment interactions, remains a major challenge for wider DRL adoption in complex real-world problems. Following a cloud-native paradigm to train DRL agents on a GPU cloud platform is a promising solution. In this paper, we present a scalable and elastic library, ElegantRL-podracer, for cloud-native deep reinforcement learning, which efficiently supports millions of GPU cores to carry out massively parallel training at multiple levels. At a high level, ElegantRL-podracer employs a tournament-based ensemble scheme to orchestrate the training process on hundreds or even thousands of GPUs, scheduling the interactions between a leaderboard and a training pool with hundreds of pods. At a low level, each pod simulates agent-environment interactions in parallel by fully utilizing nearly 7,000 GPU CUDA cores in a single GPU. Our ElegantRL-podracer library features high scalability, elasticity and accessibility by following the development principles of containerization, microservices and MLOps. Using an NVIDIA DGX SuperPOD cloud, we conduct extensive experiments on various tasks in locomotion and stock trading and show that ElegantRL-podracer substantially outperforms RLlib. Our codes are available on GitHub.
In this study, we reveal noncontact frictional forces between surfaces in the presence of peristaltic permittivity modulation. Our setup comprises a conducting medium, an air gap, and a dielectric substrate carrying a space-time-modulated grating that emits electromagnetic radiation. The radiation receives energy and momentum from the grating and is eventually absorbed by the conducting medium or propagates away from the grating on the dielectric side, resulting in electromagnetic power loss and lateral forces at the surfaces.
ROSAT observations of the Vela pulsar and its surroundings revealed a collimated X-ray feature almost 45' in length (Markwardt & Ogelman 1995), interpreted as the signature ``cocoon'' of a one-sided jet from the Vela pulsar. We report on a new ASCA observation of the Vela pulsar jet at its head, the point where the jet is believed to interact with the supernova remnant. The head is clearly detected, and its X-ray spectrum is remarkably similar to the surrounding supernova remnant spectrum, extending to X-ray energies of at least 7 keV. A ROSAT+ASCA spectrum can be fit by two-component emission models but not standard one-component models. The lower energy component is thermal and has a temperature of 0.29+/-0.03 keV (1 sigma); the higher energy component can be fit by either a thermal component of temperature ~4 keV or a power law with photon index ~2.0. Compared to the ROSAT-only results, the mechanical properties of the jet and its cocoon do not change much. If the observed spectrum is that of a hot jet cocoon, then the speed of the jet is at least 800 km s^-1, depending on the angle of inclination. The mechanical power driving the jet is >10^36 erg s^-1, and the mass flow rate at the head is > 10^-6 M_sun yr^-1. We conclude that the jet must be entraining material all along its length in order to generate such a large mass flow rate. We also explore the possibility that the cocoon emission is synchrotron radiation instead of thermal.
Conical functions appear in a large number of applications in physics and engineering. In this paper we describe an extension of our module CONICAL for the computation of conical functions. Specifically, the module includes now a routine for computing the function ${{\rm R}}^{m}_{-\frac{1}{2}+i\tau}(x)$, a real-valued numerically satisfactory companion of the function ${\rm P}^m_{-\tfrac12+i\tau}(x)$ for $x>1$. In this way, a natural basis for solving Dirichlet problems bounded by conical domains is provided.
The representation of the approximate posterior is a critical aspect of effective variational autoencoders (VAEs). Poor choices for the approximate posterior have a detrimental impact on the generative performance of VAEs due to the mismatch with the true posterior. We extend the class of posterior models that may be learned by using undirected graphical models. We develop an efficient method to train undirected approximate posteriors by showing that the gradient of the training objective with respect to the parameters of the undirected posterior can be computed by backpropagation through Markov chain Monte Carlo updates. We apply these gradient estimators for training discrete VAEs with Boltzmann machines as approximate posteriors and demonstrate that undirected models outperform previous results obtained using directed graphical models. Our implementation is available at https://github.com/QuadrantAI/dvaess .
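To illustrate the core idea of backpropagating through Monte Carlo updates, the toy sketch below unrolls relaxed Gibbs-style updates for a small Boltzmann machine so that ordinary autograd reaches the posterior parameters. This is a generic illustration, not the paper's exact estimator for discrete states; the relaxation, update schedule, and all names are assumptions made here for concreteness.

```python
import torch

def relaxed_gibbs(J, b, n_steps=10, temp=0.5):
    # Differentiable surrogate for Gibbs sampling in a Boltzmann-machine
    # posterior: block updates with logistic (Gumbel-sigmoid) noise keep
    # every step smooth, so gradients flow through the whole chain.
    z = torch.sigmoid(torch.randn_like(b))
    for _ in range(n_steps):
        logits = J @ z + b                        # conditional logits given current state
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        noise = torch.log(u) - torch.log1p(-u)    # logistic noise
        z = torch.sigmoid((logits + noise) / temp)
    return z

# A loss computed from the final sample backpropagates to J and b:
J = torch.randn(8, 8)
J = 0.5 * (J + J.T)          # symmetric couplings
J.fill_diagonal_(0.0)        # no self-coupling
J.requires_grad_(True)
b = torch.zeros(8, requires_grad=True)
relaxed_gibbs(J, b).sum().backward()
```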
Nowadays, news recommendation systems are a significant way for researchers and other individuals to pursue their interests, because they provide concise content matched to their demands. Since there is so much information on the internet, news recommendation systems allow us to filter content and deliver it to the user in proportion to his desires and interests. Recommendation systems (RSs) use three techniques: content-based filtering, collaborative filtering, and hybrid filtering. We use the MIND dataset, collected in 2019, which poses a big challenge because it contains a lot of ambiguity and requires complex text processing. In this paper, we present our proposed recommendation system. At its core, we use the GloVe algorithm for word embeddings and representation; a multi-head attention layer then computes the attention of words in order to generate the list of recommended news. Finally, we achieve results exceeding several related works: an AUC of 71.211, an MRR of 35.72, an nDCG@5 of 38.05, and an nDCG@10 of 44.45.
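A minimal sketch of the described pipeline: pretrained GloVe vectors feed a multi-head self-attention layer, and candidate news are scored against a user vector built from clicked articles. All dimensions, layer choices, and names here are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class NewsEncoder(nn.Module):
    """Toy news encoder: frozen GloVe embeddings followed by
    multi-head self-attention over title words and mean pooling."""
    def __init__(self, glove_weights, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(glove_weights, freeze=True)
        dim = glove_weights.shape[1]
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, token_ids):
        x = self.embed(token_ids)      # (batch, seq, dim)
        h, _ = self.attn(x, x, x)      # self-attention over title words
        return h.mean(dim=1)           # one vector per news article

# Rank candidates by dot product with a user vector from clicked titles:
glove = torch.randn(5000, 100)                    # stand-in for a real GloVe table
enc = NewsEncoder(glove)
clicked = enc(torch.randint(0, 5000, (8, 20)))    # 8 clicked titles, 20 tokens each
user = clicked.mean(dim=0, keepdim=True)
cands = enc(torch.randint(0, 5000, (50, 20)))     # 50 candidate titles
scores = cands @ user.T                           # higher = recommended first
```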
We give a bound on the number of isolated, essential singularities of determinantal quartic surfaces in 3-space. We also provide examples of different configurations of real singularities on quartic surfaces with a definite Hermitian determinantal representation, and conjecture an extension of a theorem by Degtyarev and Itenberg.
Forensic audio analysis for speaker verification offers unique challenges due to location/scenario uncertainty and diversity mismatch between reference and naturalistic field recordings. The lack of real naturalistic forensic audio corpora with ground-truth speaker identity represents a major challenge in this field. It is also difficult to directly employ small-scale domain-specific data to train complex neural network architectures due to domain mismatch and loss in performance. Alternatively, cross-domain speaker verification for multiple acoustic environments is a challenging task which could advance research in audio forensics. In this study, we introduce a CRSS-Forensics audio dataset collected in multiple acoustic environments. We pre-train a CNN-based network using the VoxCeleb data, followed by an approach which fine-tunes part of the high-level network layers with clean speech from CRSS-Forensics. Based on this fine-tuned model, we align domain-specific distributions in the embedding space with the discrepancy loss and maximum mean discrepancy (MMD). This maintains effective performance on the clean set, while simultaneously generalizing the model to other acoustic domains. From the results, we demonstrate that diverse acoustic environments affect the speaker verification performance, and that our proposed approach of cross-domain adaptation can significantly improve the results in this scenario.
Stacking interactions in single stranded nucleic acids give rise to configurations of an annealed rod-coil multiblock copolymer. Theoretical analysis identifies the resulting signatures for long homopolynucleotides: a non-monotonic dependence of size on temperature, corresponding effects on cyclization, and a plateau in the extension-force law. Explicit numerical results for poly(dA) and poly(rU) are presented.
The online optimization problem with non-convex loss functions over a closed convex set, coupled with a set of inequality (possibly non-convex) constraints, is a challenging online learning problem. A proximal method of multipliers with quadratic approximations (named OPMM) is presented to solve this online non-convex optimization with long-term constraints. Regrets for the violation of the Karush-Kuhn-Tucker conditions of OPMM for solving online non-convex optimization problems are analyzed. Under mild conditions, it is shown that this algorithm exhibits $\mathcal{O}(T^{-1/8})$ Lagrangian gradient violation regret, $\mathcal{O}(T^{-1/8})$ constraint violation regret and $\mathcal{O}(T^{-1/4})$ complementarity residual regret if parameters in the algorithm are properly chosen, where $T$ denotes the number of time periods. For the case where the objective is a convex quadratic function, we demonstrate that the regret of the objective reduction can be established even when the feasible set is non-convex. For the case where the constraint functions are convex, if the solution of the subproblem in OPMM is obtained by solving its dual, OPMM is proved to be an implementable projection method for solving the online non-convex optimization problem.
I will review the latest results for the presence of diffuse light in the nearby universe and at intermediate redshift, and then discuss the latest results from hydrodynamical cosmological simulations of cluster formation on the expected properties of diffuse light in clusters. I shall present how intracluster planetary nebulae (ICPNe) can be used as excellent tracers of the diffuse stellar population in nearby clusters, and how their number density profile and radial velocity distribution can provide an observational test for models of cluster formation. The preliminary comparison of available ICPN samples with predictions from cosmological simulations supports late infall as the most likely mechanism for the origin of diffuse stellar light in clusters.
This work was inspired by the article of Parkhomenko, who drew attention to the central role played in the work of Spindel, Sevrin, Troost and Van Proeyen by Manin triples. These authors have shown how to associate to a Manin triple an $N=2$ superconformal field theory (the work of Kazama-Suzuki is a special case of their results). In this paper, we construct a deformation of their theory, with continuously varying central charge, analogous to the Fock representations of the Virasoro algebra with stress-energy tensor $-(\phi')^2/2+\alpha\phi''$.
In Euclidean space of dimension 2 or 3, we study a minimum time problem associated with a system of real-analytic vector fields satisfying H\"ormander's bracket generating condition, where the target is a nonempty closed set. We show that, in dimension 2, the minimum time function is locally Lipschitz continuous while, in dimension 3, it is Lipschitz continuous in the complement of a set of measure zero. In particular, in both cases, the minimum time function is a.e. differentiable on the complement of the target. In dimension 3, in general, there is no hope to have the same regularity result as in dimension 2. Indeed, examples are known where the minimum time function fails to be locally Lipschitz continuous.
A long-term numerical integration of the classical Newtonian approximation to the planetary orbital motions of the full Solar System (sun + 8 planets), spanning 20 Gyr, was performed. The results showed no severe instability arising over this time interval. Subsequently, utilizing a bifurcation method described by Jacques Laskar, two numerical experiments were performed with the goal of determining dynamically allowed evolutions for the Solar System in which the planetary orbits become unstable. The experiments yielded one evolution in which Mercury falls onto the Sun at ~1.261 Gyr from now, and another in which Mercury and Venus collide in ~862 Myr. In the latter solution, as a result of Mercury's unstable behavior, Mars was ejected from the Solar System at ~822 Myr. We have performed a number of numerical tests that confirm these results, and indicate that they are not numerical artifacts. Using synthetic secular perturbation theory, we find that Mercury is destabilized via an entrance into a linear secular resonance with Jupiter in which their corresponding eigenfrequencies experience extended periods of commensurability. The effects of general relativity on the dynamical stability are discussed. An application of the bifurcation method to the outer Solar System (Jupiter, Saturn, Uranus, and Neptune) showed no sign of instability during the course of 24 Gyr of integrations, in keeping with an expected Uranian dynamical lifetime of 10^(18) years.
In a recent paper Petrov and Pohoata developed a new algebraic method which combines the Croot-Lev-Pach Lemma from additive combinatorics and Sylvester's Law of Inertia for real quadratic forms. As an application, they gave a simple proof of the Bannai-Bannai-Stanton bound on the size of $s$-distance sets (subsets $\mathcal{A}\subseteq \mathbb{R}^n$ which determine at most $s$ different distances). In this paper we extend their work and prove upper bounds for the size of $s$-distance sets in various real algebraic sets. This way we obtain a novel and short proof for the bound of Delsarte-Goethals-Seidel on spherical $s$-distance sets and a generalization of a bound by Bannai-Kawasaki-Nitamizu-Sato on $s$-distance sets on unions of spheres. In our arguments we use the method of Petrov and Pohoata together with some Gr\"obner basis techniques.
Most of the learning-based algorithms for bitrate adaptation are limited to offline learning, which inevitably suffers from the simulation-to-reality gap. Online learning can better adapt to dynamic real-time communication scenes but still faces the challenge of lengthy training convergence time. In this paper, we propose a novel online grouped federated transfer learning framework named Bamboo to accelerate training efficiency. The preliminary experiments validate that our method remarkably improves online training efficiency by up to 302% compared to other reinforcement learning algorithms in various network conditions while ensuring the quality of experience (QoE) of real-time video communication.
Recent SPS data on the rapidity distribution of protons in p+S, p+Au and S+S collisions at 200 AGeV and preliminary Pb+Pb collisions at 160 AGeV are compared to HIJING and VENUS calculations as well as to predictions based on the Multi-Chain Model (MCM). The preliminary Pb data suggest that a linear dependence of the proton rapidity shift as a function of the nuclear thickness, as first observed in p+A reactions, may apply up to Pb+Pb reactions. The observed rapidity dependence of produced hyperons in both p+A and A+A reactions, however, cannot be explained in terms of such models without introducing additional non-linear effects.
Polar codes are a class of channel capacity achieving codes that has been selected for the next generation of wireless communication standards. Successive-cancellation (SC) is the first proposed decoding algorithm, suffering from mediocre error-correction performance at moderate code length. In order to improve the error-correction performance of SC, two approaches are available: (i) SC-List decoding, which keeps a list of candidates by running a number of SC decoders in parallel, thus increasing the implementation complexity, and (ii) SC-Flip decoding, which relies on a single SC module and keeps the computational complexity close to SC. In this work, we propose the partitioned SC-Flip (PSCF) decoding algorithm, which outperforms SC-Flip in terms of error-correction performance and average computational complexity, leading to higher throughput and reduced energy consumption per codeword. We also introduce a partitioning scheme that best suits our PSCF decoder. Simulation results show that at an equivalent frame error rate, PSCF has up to $5 \times$ less computational complexity than the SC-Flip decoder. At an equivalent average number of iterations, the error-correction performance of PSCF outperforms SC-Flip by up to $0.15$ dB at a frame error rate of $10^{-3}$.
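A skeleton of the partitioned flip-and-retry idea, in the spirit of the abstract: decode partition by partition, and when a partition's CRC fails, flip only that partition's least reliable decisions. `sc_decode` and `crc_ok` are assumed placeholder helpers (an SC decoder returning bits plus per-bit decision LLRs, and a CRC check), not a reference implementation of the paper's decoder.

```python
import numpy as np

def pscf_decode(channel_llrs, partitions, sc_decode, crc_ok, max_flips=8):
    # Partitioned SC-Flip sketch: a failed CRC triggers re-decoding of the
    # current partition only, never of the whole codeword.
    decoded = []
    for part in partitions:
        bits, dec_llrs = sc_decode(channel_llrs, part, flip=None)
        if not crc_ok(bits):
            # retry, flipping the least reliable decisions one at a time
            for f in np.argsort(np.abs(dec_llrs))[:max_flips]:
                bits, _ = sc_decode(channel_llrs, part, flip=f)
                if crc_ok(bits):
                    break
        decoded.append(bits)
    return np.concatenate(decoded)
```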
We study, within the fluctuation exchange approximation, the spin-fluctuation-mediated superconductivity in Hubbard-type models possessing electron and hole bands, and compare them with a model on a square lattice with a large Fermi surface. In the square lattice model, superconductivity is more enhanced for better nesting for a fixed band filling. By contrast, in the models with electron and hole bands, superconductivity is optimized when the Fermi surface nesting is degraded to some extent, where finite energy spin fluctuation around the nesting vector develops. The difference lies in the robustness of the nesting vector, namely, in models with electron and hole bands, the wave vector at which the spin susceptibility is maximized is fixed even when the nesting is degraded, whereas when the Fermi surface is large, the nesting vector varies with the deformation of the Fermi surface. We also discuss the possibility of realizing in actual materials the bilayer Hubbard model, which is a simple model with electron and hole bands, and is expected to have a very high T_c.
The observed correlation between global low cloud amount and the flux of high energy cosmic rays supports the idea that ionization plays a crucial role in tropospheric cloud formation. We explore this idea quantitatively with a simple model linking the concentration of cloud condensation nuclei to the varying ionization rate due to cosmic rays. Among the predictions of the model is a variation in global cloud optical thickness, or opacity, with cosmic-ray rate. Using the International Satellite Cloud Climatology Project database (1983-1999), we search for variations in the yearly mean visible cloud opacity and visible cloud amount due to cosmic rays. After separating out temporal variations in the data due to the Mt. Pinatubo eruption and El Nino/Southern Oscillation, we identify systematic variations in opacity and cloud amount due to cosmic rays. We find that the fractional amplitude of the opacity variations due to cosmic rays increases with cloud altitude, becoming approximately zero or negative (inverse correlation) for low clouds. Conversely, the fractional changes in visible cloud amount due to cosmic rays are only positively-correlated for low clouds and become negative or zero for the higher clouds. The opacity trends suggest behavior contrary to the current predictions of ion-mediated nucleation (IMN) models, but more accurate temporal modeling of the ISCCP data is needed before definitive conclusions can be drawn.
Screened modified gravity (SMG) is a kind of scalar-tensor theories with screening mechanisms, which can generate screening effect to suppress the fifth force in high density environments and pass the solar system tests. Meanwhile, the potential of scalar field in the theories can drive the acceleration of the late universe. In this paper, we calculate the parameterized post-Newtonian (PPN) parameters $\gamma$ and $\beta$, the effective gravitational constant $G_{\rm eff}$ and the effective cosmological constant $\Lambda$ for SMG with a general potential $V$ and coupling function $A$. The dependence of these parameters on the model parameters of SMG and/or the physical properties of the source object are clearly presented. As an application of these results, we focus on three specific theories of SMG (chameleon, symmetron and dilaton models). Using the formulae to calculate their PPN parameters and cosmological constant, we derive the constraints on the model parameters by combining the observations on solar system and cosmological scales.
Several properties of bound states in the potential $V(x)= g^2\exp (|x|)$ are studied. Firstly, with the emphasis on the reliability of our arbitrary-precision construction, wave functions are considered in the two alternative (viz., asymptotically decreasing or regular) exact Bessel-function forms which obey the asymptotic or matching conditions, respectively. The merits of the resulting complementary transcendental secular equation approaches are compared and their applicability is discussed.
We analyze the neutrino conversions inside a supernova in the 3$\nu$ mixing scheme, and their effects on the neutrino spectra observed at the earth. We find that the observations of the energy spectra of neutrinos from a future galactic supernova may enable us to identify the solar neutrino solution, to determine the sign of $\Delta m^2_{32}$, and to probe the mixing matrix element |U_{e3}|^2 to values as low as 10^{-4}-10^{-3}.
The notion of Macaulayfication, which is analogous to desingularization, was introduced by Faltings in 1978, and he constructed a Macaulayfication of a quasi-projective scheme whose non-Cohen-Macaulay locus is of dimension 0 or 1 by a characteristic-free method. In this paper, we give a Macaulayfication of a quasi-projective scheme whose non-Cohen-Macaulay locus is of dimension 2. Of course, our method is independent of the characteristic.
We have designed, constructed and put into operation a large area CCD camera that covers a large fraction of the image plane of the 1 meter Schmidt telescope at Llano del Hato in Venezuela. The camera consists of 16 CCD devices arranged in a 4 x 4 mosaic covering 2.3 degrees x 3.5 degrees of sky. The CCDs are 2048 x 2048 LORAL devices with 15 micron pixels. The camera is optimized for drift scan photometry and objective prism spectroscopy. The design considerations, construction features and performance parameters are described in this article.
We present an overview of and preliminary results from an ongoing comprehensive program that has a goal of determining the Hubble constant to a systematic accuracy of 2%. As part of this program, we are currently obtaining 3.6 micron data using the Infrared Array Camera (IRAC) on Spitzer, and the program is designed to include JWST in the future. We demonstrate that the mid-infrared period-luminosity relation for Cepheids at 3.6 microns is the most accurate means of measuring Cepheid distances to date. At 3.6 microns, it is possible to minimize the known remaining systematic uncertainties in the Cepheid extragalactic distance scale. We discuss the advantages of 3.6 micron observations in minimizing systematic effects in the Cepheid calibration of the Hubble constant including the absolute zero point, extinction corrections, and the effects of metallicity on the colors and magnitudes of Cepheids. We are undertaking three independent tests of the sensitivity of the mid-IR Cepheid Leavitt Law to metallicity, which when combined will allow a robust constraint on the effect. Finally, we are providing a new mid-IR Tully-Fisher relation for spiral galaxies.
In this work, we study the exact solution of the Dirac equation in hyper-spherical coordinates under the influence of separable q-deformed quantum potentials. The q-deformed hyperbolic Rosen-Morse potential is perturbed by q-deformed non-central trigonometric Scarf potentials, all of which can be solved using the Asymptotic Iteration Method (AIM). This work is limited to the spin symmetry case. The relativistic energy equation and the orbital quantum number equation $l_{D-1}$ have been obtained using the Asymptotic Iteration Method. The upper radial wave function equations and angular wave function equations are also obtained by using this method. The relativistic energy levels are numerically calculated using MATLAB; the increase of the radial quantum number $n$ causes an increase of the bound state relativistic energy level in both dimension $D = 5$ and $D = 3$. The bound state relativistic energy level decreases with increasing of both the deformation parameter $q$ and the orbital quantum number $l$.
Giant cluster radio relics are thought to form at shock fronts in the course of collisions between galaxy clusters. Via processes that are still poorly understood, these shocks accelerate or re-accelerate cosmic-ray electrons and might amplify magnetic fields. The best object to study this phenomenon is the galaxy cluster CIZA J2242.8+5301 as it shows the most undisturbed relic. By means of Giant Metrewave Radio Telescope (GMRT) and Westerbork Synthesis Radio Telescope (WSRT) data at seven frequencies spanning from 153 MHz to 2272 MHz, we study the synchrotron emission in this cluster. We aim at distinguishing between theoretical injection and acceleration models proposed for the formation of radio relics. We also study the head-tail radio sources to reveal the interplay between the merger and the cluster galaxies. We produced spectral index, curvature maps and radio colour-colour plots and compared our data with predictions from models. We present one of the deepest 153 MHz maps of a cluster ever produced, reaching a noise level of 1.5 mJy/beam. We derive integrated spectra for four relics in the cluster, discovering extremely steep spectrum diffuse emission concentrated in multiple patches. We find a possible radio phoenix embedded in the relic to the south of the cluster. The spectral index of the northern relic retains signs of steepening from the front towards the back of the shock also at the radio frequencies below 600 MHz. The spectral curvature in the same relic also increases in the downstream area. The data is consistent with the Komissarov-Gubanov injection models, meaning that the emission we observe is produced by a single burst of spectrally-aged accelerated radio electrons.
We use information from higher order moments to achieve identification of non-Gaussian structural vector autoregressive moving average (SVARMA) models, possibly non-fundamental or non-causal, through a frequency domain criterion based on a new representation of the higher order spectral density arrays of vector linear processes. This allows us to identify the location of the roots of the determinantal lag matrix polynomials based on higher order cumulants dynamics and to identify the rotation of the model errors leading to the structural shocks up to sign and permutation. We describe sufficient conditions for global and local parameter identification that rely on simple rank assumptions on the linear dynamics and on finite order serial and component independence conditions for the structural innovations. We generalize previous univariate analysis to develop asymptotically normal and efficient estimates exploiting second and non-Gaussian higher order dynamics given a particular structural shocks ordering without assumptions on causality or invertibility. Bootstrap approximations to finite sample distributions and the properties of numerical methods are explored with real and simulated data.
We highlight the important role that canonical normalisation of kinetic terms in flavour models based on family symmetries can play in determining the Yukawa matrices. Even though the kinetic terms may be correctly canonically normalised to begin with, they will inevitably be driven into a non-canonical form by a similar operator expansion to that which determines the Yukawa operators. Therefore in models based on family symmetry canonical re-normalisation is mandatory before the physical Yukawa matrices can be extracted. In nearly all examples in the literature this is not done. As an example we perform an explicit calculation of such mixing associated with canonical normalisation of the Kahler metric in a supersymmetric model based on SU(3) family symmetry, where we show that such effects can significantly change the form of the Yukawa matrix. In principle quark mixing could originate entirely from canonical normalisation, with only diagonal Yukawa couplings before canonical normalisation.
Predictions state that graphene can spontaneously develop magnetism from the Coulomb repulsion of its $\pi$-electrons, but its experimental verification has been a challenge. Here, we report on the observation and manipulation of individual magnetic moments localized in graphene nanostructures on a Au(111) surface. Using scanning tunneling spectroscopy, we detected the presence of single electron spins localized around certain zigzag sites of the carbon backbone via the Kondo effect. Two near-by spins were found coupled into a singlet ground state, and the strength of their exchange interaction was measured via singlet-triplet inelastic tunnel electron excitations. Theoretical simulations demonstrate that electron correlations result in spin-polarized radical states with the experimentally observed spatial distributions. Hydrogen atoms bound to these radical sites quench their magnetic moment, permitting us to switch the spin of the nanostructure using the tip of the microscope.
In this paper, we propose a phase attention residual network (PA-ResSeg) to model multi-phase features for accurate liver tumor segmentation, in which a phase attention (PA) is newly proposed to additionally exploit the images of arterial (ART) phase to facilitate the segmentation of portal venous (PV) phase. The PA block consists of an intra-phase attention (Intra-PA) module and an inter-phase attention (Inter-PA) module to capture channel-wise self-dependencies and cross-phase interdependencies, respectively. Thus it enables the network to learn more representative multi-phase features by refining the PV features according to the channel dependencies and recalibrating the ART features based on the learned interdependencies between phases. We propose a PA-based multi-scale fusion (MSF) architecture to embed the PA blocks in the network at multiple levels along the encoding path to fuse multi-scale features from multi-phase images. Moreover, a 3D boundary-enhanced loss (BE-loss) is proposed for training to make the network more sensitive to boundaries. To evaluate the performance of our proposed PA-ResSeg, we conducted experiments on a multi-phase CT dataset of focal liver lesions (MPCT-FLLs). Experimental results show the effectiveness of the proposed method by achieving a dice per case (DPC) of 0.7787, a dice global (DG) of 0.8682, a volumetric overlap error (VOE) of 0.3328 and a relative volume difference (RVD) of 0.0443 on the MPCT-FLLs. Furthermore, to validate the effectiveness and robustness of PA-ResSeg, we conducted extra experiments on another multi-phase liver tumor dataset and obtained a DPC of 0.8290, a DG of 0.9132, a VOE of 0.2637 and an RVD of 0.0163. The proposed method shows its robustness and generalization capability in different datasets and different backbones.
The tractor behavior of a zero-order Bessel acoustic beam acting on a fluid sphere, and emanating from a finite circular aperture (as opposed to waves of infinite extent) is demonstrated theoretically. Conditions for an attractive force acting in opposite direction of the radiating waves, determined by the choice of the beam's half-cone angle, the size of the radiator, and its distance from a fluid sphere, are established and discussed. Numerical predictions for the radiation force function, which is the radiation force per unit energy density and cross-sectional surface, are provided using a partial-wave expansion method stemming from the acoustic scattering. The results suggest a simple and reliable analysis for the design of Bessel beam acoustical tweezers and tractor beam devices.
In statistical mechanics, what is generally called the Stirling approximation is actually an approximation of Stirling's formula. In this article, it is shown that the term that is dropped is in fact the one that takes fluctuations into account. The use of Stirling's exact formula forces us to reintroduce them into the already proposed solutions of well-known puzzles such as the extensivity paradox or the Gibbs paradox of joining two volumes of identical gas. This amendment clearly results in a gain in consistency and rigor of these solutions.
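Concretely, Stirling's exact (asymptotic) formula and the usual textbook truncation differ by exactly the term in question:

\[
\ln N! \;=\; N\ln N - N + \tfrac12 \ln\bigl(2\pi N\bigr) + O(1/N),
\qquad
\ln N! \;\approx\; N\ln N - N,
\]

so the dropped contribution $\tfrac12\ln(2\pi N)$ is of the order $\ln\sqrt{N}$, the scale set by relative fluctuations of size $1/\sqrt{N}$, which is what the article identifies as the fluctuation term.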
We propose a method to realize the Frenkel-Kontorova model using an array of Rydberg dressed atoms. Our platform can be used to study this model with a range of realistic interatomic potentials. In particular, we concentrate on two types of interaction potentials: a springlike potential and a repulsive long-range potential. We numerically calculate the phase diagram for such systems and characterize the Aubry-like and commensurate-incommensurate phase transitions. Experimental realizations of this system that are possible to achieve using current technology are discussed.
There has been tremendous progress in Domain Adaptation (DA) for visual recognition tasks. Particularly, open-set DA has gained considerable attention wherein the target domain contains additional unseen categories. Existing open-set DA approaches demand access to a labeled source dataset along with unlabeled target instances. However, this reliance on co-existing source and target data is highly impractical in scenarios where data-sharing is restricted due to its proprietary nature or privacy concerns. Addressing this, we introduce a practical DA paradigm where a source-trained model is used to facilitate adaptation in the absence of the source dataset in the future. To this end, we formalize knowledge inheritability as a novel concept and propose a simple yet effective solution to realize inheritable models suitable for the above practical paradigm. Further, we present an objective way to quantify inheritability to enable the selection of the most suitable source model for a given target domain, even in the absence of the source data. We provide theoretical insights followed by a thorough empirical evaluation demonstrating state-of-the-art open-set domain adaptation performance.
Regression discontinuity design (RDD) is a quasi-experimental approach used to estimate the causal effects of an intervention assigned based on a cutoff criterion. RDD exploits the idea that close to the cutoff units below and above are similar; hence, they can be meaningfully compared. Consequently, the causal effect can be estimated only locally at the cutoff point. This makes the cutoff point an essential element of RDD. However, especially in medical applications, the exact cutoff location may not always be disclosed to the researcher, and even when it is, the actual location may deviate from the official one. As we illustrate on the application of RDD to the HIV treatment eligibility data, estimating the causal effect at an incorrect cutoff point leads to meaningless results. Moreover, since the cutoff criterion often acts as a guideline rather than as a strict rule, the location of the cutoff may be unclear from the data. The method we present can be applied both as an estimation and validation tool in RDD. We use a Bayesian approach to incorporate prior knowledge and uncertainty about the cutoff location in the causal effect estimation. At the same time, our Bayesian model LoTTA is fitted globally to the whole data, whereas RDD is a local, boundary point estimation problem. In this work we address a natural question that arises: how to make Bayesian inference more local to render a meaningful and powerful estimate of the treatment effect?
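The toy sketch below illustrates the kind of joint inference the abstract describes: treating the cutoff as an unknown parameter and sampling it together with the treatment effect. This is a minimal random-walk Metropolis sketch on synthetic data under simplifying assumptions (known noise scale, linear trend), not the LoTTA model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: outcome jumps by tau at an unknown cutoff c.
x = rng.uniform(-1, 1, 400)
true_c, true_tau = 0.2, 1.5
y = 0.5 * x + true_tau * (x >= true_c) + rng.normal(0, 0.5, x.size)

def log_post(a, b, tau, c, sigma=0.5):
    # Flat priors on intercept a, slope b, effect tau;
    # uniform prior for the cutoff c on (-0.8, 0.8).
    if not (-0.8 < c < 0.8):
        return -np.inf
    mu = a + b * x + tau * (x >= c)
    return -0.5 * np.sum((y - mu) ** 2) / sigma**2

# Random-walk Metropolis over (a, b, tau, c) jointly.
theta = np.zeros(4)
lp = log_post(*theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.05, 0.05, 0.05, 0.02])
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples)[5000:]          # discard burn-in
print("posterior mean cutoff:", samples[:, 3].mean())
print("posterior mean effect:", samples[:, 2].mean())
```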
The article is devoted to the investigation of operators on a non-locally compact group algebra. Their isomorphisms are also studied.
Interactions growing slower than a certain exponential of the square of a scalar field are well behaved when evolved under the functional renormalization group linearised around the Gaussian fixed point. They satisfy properties usually taken for granted, and reproduce standard perturbative quantisation. However, increasingly severe effects appear the faster interactions grow beyond this bound. We show explicitly that, firstly, the flow no longer splits uniquely into operators of definite scaling dimension; then (linearised) flows to the infrared can end prematurely in a singularity; and finally new interactions can spontaneously appear at any scale.
The equation $x^2 + 1 \equiv 0 \pmod{p}$ has solutions whenever $p = 2$ or $p = 4n + 1$. A famous theorem of Fermat says that these primes are exactly the ones that can be described as a sum of two squares. That the roots of the former equation are equidistributed is a beautiful theorem of Duke, Friedlander and Iwaniec from 1995. We show that a subsequence of the roots of the equation remains equidistributed even when one adds a restriction on the primes which has to do with the angle in the plane formed by their corresponding representation as a sum of squares. Similar to Duke, Friedlander and Iwaniec, we reduce the problem to the study of certain Poincare series; however, while their Poincare series were functions on an arithmetic quotient of the upper half plane, our Poincare series are functions on arithmetic quotients of $SL_2(\mathbb{R})$, as they have a nontrivial dependence on their Iwasawa $\theta$-coordinate. Spectral analysis on these higher dimensional varieties involves the nonspherical spectrum, which posed a few new challenges. A couple of notable ones were obtaining pointwise bounds for nonspherical Eisenstein series and utilizing a non-spherical analogue of the Selberg inversion formula.
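For orientation, equidistribution of the roots in the sense of Duke, Friedlander and Iwaniec means that the normalized roots $\nu/p$ fill out the unit interval; by Weyl's criterion this reduces to cancellation in the sums

\[
\sum_{p \le X} \;\sum_{\substack{0 \le \nu < p \\ \nu^2 \equiv -1 \ (\mathrm{mod}\ p)}} e\!\left(\frac{h\nu}{p}\right) \;=\; o\bigl(\pi(X)\bigr)
\quad \text{for each fixed } h \ne 0, \qquad e(t) := e^{2\pi i t},
\]

and it is such Weyl sums, now twisted by the angular restriction on $p = a^2 + b^2$, that the Poincare series in this paper are designed to capture.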
We studied double superconducting (SC) domes in LaFeAsO1-xHx by using 75As- and 1H-nuclear magnetic resonance techniques, and unexpectedly discovered that a new antiferromagnetic (AF) phase follows the double SC domes on further H doping, forming a symmetric alignment of AF and SC phases in the electronic phase diagram. We demonstrated that the new AF ordering originates from the nesting between electron pockets, unlike the nesting between electron and hole pockets as seen in the majority of undoped pnictides. The new AF ordering is derived from the features common to high-Tc pnictides; however, it has not been reported so far for other high-Tc pnictides because of their poor electron doping capability.
Low-light image enhancement exhibits an ill-posed nature, as a given image may have many enhanced versions, yet recent studies focus on building a deterministic mapping from input to an enhanced version. In contrast, we propose a lightweight one-path conditional generative adversarial network (cGAN) to learn a one-to-many relation from low-light to normal-light image space, given only sets of low- and normal-light training images without any correspondence. By formulating this ill-posed problem as a modulation code learning task, our network learns to generate a collection of enhanced images from a given input conditioned on various reference images. Therefore our inference model easily adapts to various user preferences, provided with a few favorable photos from each user. Our model achieves competitive visual and quantitative results on par with fully supervised methods on both noisy and clean datasets, while being 6 to 10 times lighter than state-of-the-art generative adversarial networks (GANs) approaches.
A theoretical estimate of the explicit time dependence of the drainage water of shallow lakes is presented as an important contribution to understanding lake dynamics. This information can be obtained from a sum of functions widely used in the fitting of experimental data. These functions were chosen because their centre and weight yield a good description of the water basin behaviour. The coefficients of these functions are extracted here using measured and/or calculated data for the state variables describing the shallow West Lake, Hangzhou. This procedure can also be applied to other shallow lakes, generating geological information about their drainage basin, which is one of the most important parameters describing their micrometeorological behaviour. We conclude this work by emphasizing the relevance of the explicit time dependence of the drainage variables and the need for measured data to validate this approach.
Astrometry is one of the main science cases driving the requirements of the next multiconjugate adaptive optics (MCAO) systems for future extremely large telescopes. The small diffraction-limited point-spread function (PSF) and the high signal-to-noise ratio (SNR) of these instruments promise astrometric precision at the level of micro-arcseconds. However, optical distortions have to be as low as possible to achieve the highly demanding astrometry requirements. In addition to static distortions, opto-mechanical instabilities cause astrometric errors that can be major contributors to the astrometry error budget. The present article describes the analysis, at design level, of the effects of opto-mechanical instabilities when coupled with optical surface irregularities due to the manufacturing process. We analyse the notable example of the Multi-conjugate Adaptive Optics RelaY (MAORY) for the Extremely Large Telescope (ELT). Ray-tracing simulations combined with a Monte Carlo approach are used to estimate the geometrical structure and magnitude of the field distortion resulting from the optical design. We consider the effects of distortion on the MCAO correction, showing that it is possible to achieve micro-arcsecond astrometric precision once the corresponding accuracy is obtained by both optical design and manufacturing. We predict that for single-epoch observations, an astrometric error below 50$\mu$as can be achieved for exposure times up to 2 min, provided about 100 stars are available to remove fifth-order distortions. Such performance could be reproducible for multi-epoch observations despite the time-variable distortion induced by instrument instabilities.
In Automatic Speech Recognition (ASR) systems, a recurring obstacle is the generation of narrowly focused output distributions. This phenomenon emerges as a side effect of Connectionist Temporal Classification (CTC), a robust sequence learning tool that utilizes dynamic programming for sequence mapping. While earlier efforts have tried to combine the CTC loss with an entropy maximization regularization term to mitigate this issue, they employed a constant weighting term on the regularization during training, which we find may not be optimal. In this work, we introduce Adaptive Maximum Entropy Regularization (AdaMER), a technique that can modulate the impact of entropy regularization throughout the training process. This approach not only refines ASR model training but also ensures that, as training proceeds, predictions display the desired model confidence.
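The sketch below shows the general shape of a CTC loss combined with an entropy bonus whose weight is modulated over training. The linear annealing schedule here is purely illustrative; AdaMER's actual modulation rule is defined in the paper and may differ.

```python
import torch
import torch.nn.functional as F

def ctc_with_adaptive_entropy(log_probs, targets, in_lens, tgt_lens,
                              step, total_steps, lam0=0.1):
    # CTC loss minus a weighted entropy bonus. Early in training the bonus
    # discourages overly peaked output distributions; the weight decays so
    # that the model can become confident as training proceeds.
    ctc = F.ctc_loss(log_probs, targets, in_lens, tgt_lens, blank=0)
    ent = -(log_probs.exp() * log_probs).sum(dim=-1).mean()  # mean per-frame entropy
    lam = lam0 * (1.0 - step / total_steps)                  # illustrative schedule
    return ctc - lam * ent

# log_probs: (T, batch, vocab) log-softmax outputs, as ctc_loss expects
T, B, V = 50, 4, 30
log_probs = torch.randn(T, B, V).log_softmax(-1)
targets = torch.randint(1, V, (B, 12))                       # labels exclude blank=0
lens = torch.full((B,), T, dtype=torch.long)
tlens = torch.full((B,), 12, dtype=torch.long)
loss = ctc_with_adaptive_entropy(log_probs, targets, lens, tlens, 100, 10000)
```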
We propose a new approach to automated theorem proving where an AlphaZero-style agent is self-training to refine a generic high-level expert strategy expressed as a nondeterministic program. An analogous teacher agent is self-training to generate tasks of suitable relevance and difficulty for the learner. This allows leveraging minimal amounts of domain knowledge to tackle problems for which training data is unavailable or hard to synthesize. As a specific illustration, we consider loop invariant synthesis for imperative programs and use neural networks to refine both the teacher and solver strategies.
We determine Davenport's constant for all groups of the form $\mathbb{Z}_3\oplus \mathbb{Z}_3\oplus\mathbb{Z}_{3d}$.
We provide new values for the model parameters of the covariant constituent quark model (with built-in infrared confinement) in the meson sector by a fit to the leptonic decay constants and a number of electromagnetic decays. We then evaluate, in a parameter free way, the form factors of the $B(B_s)\to P(V)$-transitions in the full kinematical region of momentum transfer. As an application of our results we calculate the widths of the nonleptonic $B_s$-decays into $D_s^- D_s^+,$ $D_s^{\ast\,-} D_s^{+}+D_s^- D_s^{\ast\,+}$ and $D_s^{\ast\,-} D_s^{\ast\,+}$. These modes give the largest contribution to $\Delta\Gamma$ for the $B_s-\bar B_s$ system. We also treat the nonleptonic decay $B_{s}\to J/\psi\,\phi$. Although this mode is color suppressed, this decay has important implications for the search of possible CP-violating New Physics effects in $B_s-\bar B_s$ mixing.
Speech emotion analysis is an important task which further enables several application use cases. The non-verbal sounds within speech utterances also play a pivotal role in emotion analysis in speech. Due to the widespread use of smartphones, it becomes viable to analyze speech commands captured using microphones for emotion understanding by utilizing on-device machine learning models. The non-verbal information includes the environment background sounds describing the type of surroundings, current situation and activities being performed. In this work, we consider both verbal (speech commands) and non-verbal sounds (background noises) within an utterance for emotion analysis in real-life scenarios. We create an indigenous dataset for this task, namely the "Indian EmoSpeech Command Dataset". It contains keywords with diverse emotions and background sounds, presented to explore new challenges in audio analysis. We exhaustively compare against various baseline models for emotion analysis on speech commands on several performance metrics. We demonstrate that we achieve a significant average gain of 3.3% in top-one score over a subset of the speech command dataset for keyword spotting.
In the absence of acceleration, the velocity formula gives "distance travelled equals speed multiplied by time". For a broad class of Markov chains such as circulant Markov chains or random walk on complete graphs, we prove a probabilistic analogue of the velocity formula between entropy and hitting time, where distance is the entropy of the Markov trajectories from state $i$ to state $j$ in the sense of [L. Ekroot and T. M. Cover. The entropy of Markov trajectories. IEEE Trans. Inform. Theory 39(4): 1418-1421.], speed is the classical entropy rate of the chain, and the time variable is the expected hitting time between $i$ and $j$. This motivates us to define new entropic counterparts of various hitting time parameters such as average hitting time or commute time, and prove analogous velocity formulae and estimates between these quantities.
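In symbols, for the chains covered by our results the analogue reads

\[
H(T_{ij}) \;=\; \bar{H}\,\mathbb{E}_i[\tau_j],
\qquad
\bar{H} \;=\; -\sum_{k}\pi_k \sum_{l} P_{kl}\log P_{kl},
\]

where $H(T_{ij})$ is the Ekroot-Cover entropy of trajectories from $i$ to $j$, $\bar H$ is the entropy rate of the chain with transition matrix $P$ and stationary distribution $\pi$, and $\mathbb{E}_i[\tau_j]$ is the expected hitting time: distance $=$ speed $\times$ time.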
More than 100 excited states of the Calcium-40 nucleus, with isospin 0, are classified into rotational-vibrational bands of an intrinsic tetrahedral structure. Almost all observed states below 8 MeV can be accommodated, as well as many high-spin states above 8 MeV. The bands have some similarity to those classifying states of Oxygen-16, but the A-mode vibrational frequency is lower relative to the E-mode and F-mode frequencies than in Oxygen-16. Previously identified rotational bands up to spin 16 and energy above 20 MeV are unified here into a smaller number of tetrahedral bands.
Realizing direct-bandgap quantum dots working within the deep-ultraviolet frequency range is highly desired for electro-optical and biomedical applications while remaining challenging. In this work, we combine the first-principles many-body perturbation theory and effective Hamiltonian approximation to propose the realization of arrays of deep-ultraviolet excitonic quantum dots in twisted bilayer hexagonal boron nitride. The effective quantum confinement of excitons can reach ~400 meV within small twisting angles, which is about four times larger than those observed in twisted semiconducting transitional metal dichalcogenides. Especially because of enhanced electron-hole attraction, those excitons will accumulate via the so-called exciton funnel effect into the direct-bandgap regime, opening the possibility of better luminescence performance and of manipulating coherent arrays of deep-ultraviolet quantum dots.
Here we show, combining a simulation and theoretical study, that electrostatic correlations typical of multivalent ions can reverse the selectivity of a biological nanochannel. Our results provide a physical mechanism for a new, experimentally observed phenomenon, namely the inversion of the selectivity of a bacterial porin (the E. coli OmpF) in the presence of divalent and trivalent cations. Also, the differences and similarities between the driving force for this phenomenon and other similar nano- and micro-scale electrokinetic effects (e.g. inversion of streaming current in silica nanochannels) are explored.
The density-density correlation function is computed for the Bogoliubov pseudoparticles created in a Bose-Einstein condensate undergoing a black hole flow. On the basis of the gravitational analogy, the method used relies only on quantum field theory in curved spacetime techniques. A comparison with the results obtained by ab initio full condensed matter calculations is given, confirming the validity of the approximation used provided the profile of the flow varies smoothly on scales large compared to the condensate healing length.
This paper develops a Multiset Rewriting language with explicit time for the specification and analysis of Time-Sensitive Distributed Systems (TSDS). Goals are often specified using explicit time constraints. A good trace is an infinite trace in which the goals are satisfied perpetually despite possible interference from the environment. In our previous work (FORMATS 2016), we discussed two desirable properties of TSDSes, realizability (there exists a good trace) and survivability (where, in addition, all admissible traces are good). Here we consider two additional properties, recoverability (all compliant traces do not reach points-of-no-return) and reliability (the system can always continue functioning using a good trace). Following (FORMATS 2016), we focus on a class of systems called Progressing Timed Systems (PTS), where intuitively only a finite number of actions can be carried out in a bounded time period. We prove that for this class of systems the properties of recoverability and reliability coincide and are PSPACE-complete. Moreover, if we impose a bound on time (as in bounded model-checking), we show that for PTS the reliability property is in the $\Pi_2^p$ class of the polynomial hierarchy, a subclass of PSPACE. We also show that the bounded survivability is both NP-hard and coNP-hard.
The obstructions for an arbitrary fusion algebra to be a fusion algebra of some semisimple monoidal category are constructed. Those obstructions lie in groups which are closely related to the Hochschild cohomology of fusion algebras with coefficients in the K-theory of the ground (algebraically closed) field.
We study mappings satisfying an estimate on the distortion of the modulus of families of paths. Under some conditions on the domain of definition and the image domain, we prove that these mappings are logarithmically H\"{o}lder continuous at boundary points.
We investigate the dynamics of magnetization in the phase separated (PS) state after introducing the quenched disorder at the Mn-site of a manganite around half doping. The compound, Pr$_{0.5}$Sr$_{0.5}$Mn$_{0.925}$Ga$_{0.075}$O$_{3}$, exhibits PS with the coexistence of ferromagnetic (FM) and antiferromagnetic (AFM) clusters where the size of the FM clusters is substantially reduced due to the disorder introduced by nonmagnetic Ga substitution. At low temperature, the system develops a new magnetic anomaly, which is marked by a peak in the zero field cooled magnetization. Detailed study of linear as well as nonlinear ac susceptibilities coupled with dc magnetization indicates that this peak arises due to the thermal blocking of nanometer size FM clusters demonstrating superparamagnetic behavior. The system, however, exhibits slow magnetic relaxation, aging effect, memory effect in both field cooled and zero field cooled magnetization below the blocking temperature. These imply the presence of collective behavior induced by the interaction between the clusters. Moreover, the magnetic relaxation measured with positive and negative temperature excursions exhibits asymmetric response suggesting that the dynamics in this phase separated system is accounted for by the hierarchical model rather than the droplet model, which are commonly used to describe similar collective dynamics in glassy systems.
In this note we consider a smooth projective surface equipped with an involution and study the action of the involution on the Chow group of zero cycles.
Szegedy developed a generic method for quantizing classical algorithms based on random walks [Proceedings of FOCS, 2004, pp. 32-41]. A major contribution of his work was the construction of a walk unitary for any reversible random walk. Such a unitary possesses two crucial properties: its eigenvector with eigenphase $0$ is a quantum sample of the limiting distribution of the random walk, and its eigenphase gap is quadratically larger than the spectral gap of the random walk. It was an open question if it is possible to generalize Szegedy's quantization method for stochastic maps to quantum maps. We answer this in the affirmative by presenting an explicit construction of a Szegedy walk unitary for detailed balanced Lindbladians -- generators of quantum Markov semigroups -- and detailed balanced quantum channels. We prove that our Szegedy walk unitary has a purification of the fixed point of the Lindbladian as eigenvector with eigenphase $0$ and that its eigenphase gap is quadratically larger than the spectral gap of the Lindbladian. To construct the walk unitary we leverage a canonical form for detailed balanced Lindbladians showing that they are structurally related to Davies generators. We also explain how the quantization method for Lindbladians can be applied to quantum channels. We give an efficient quantum algorithm for quantizing Davies generators that describe many important open-system dynamics, for instance, the relaxation of a quantum system coupled to a bath. Our algorithm extends known techniques for simulating quantum systems on a quantum computer.
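The quadratic gap amplification can be illustrated by the mechanism from Szegedy's classical setting: an eigenvalue $\lambda = 1-\delta$ of the generator lifts to walk eigenvalues $e^{\pm i\theta}$ with $\cos\theta = \lambda$, so

\[
1-\cos\theta \;=\; \delta
\quad\text{and}\quad
1-\cos\theta \;\le\; \frac{\theta^2}{2}
\;\;\Longrightarrow\;\;
\theta \;\ge\; \sqrt{2\delta},
\]

i.e. a spectral gap of size $\delta$ becomes an eigenphase gap of order $\sqrt{\delta}$.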
Deep learning (DL) methods have been recently proposed for user equipment (UE) localization in wireless communication networks, based on the channel state information (CSI) between a UE and each base station (BS) in the uplink. With the CSI from the available BSs, UE localization can be performed in different ways. On the one hand, a single neural network (NN) can be trained for the UE localization by considering the CSI from all the available BSs as one overall fingerprint of the user's location. On the other hand, the CSI at each BS can be used to obtain an estimate of the UE's position with a separate NN at each BS, and then the position estimates of all BSs are combined to obtain an overall estimate of the UE position. In this work, we show that UE localization with the latter approach can achieve a higher positioning accuracy. We propose to consider the uncertainty in the UE localization at each BS, such that the overall UE position is determined by combining the position estimates of the different BSs based on the uncertainty at each BS. With this approach, a more reliable position estimate can be obtained in case of variations in the channel.
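A minimal sketch of uncertainty-based combining, assuming each BS's NN outputs a position estimate together with a scalar variance (the specific weighting rule, inverse-variance averaging, is an illustrative choice, not necessarily the paper's):

```python
import numpy as np

def fuse_positions(estimates, variances):
    """Combine per-BS position estimates by inverse-variance weighting:
    BSs that are less certain about the UE position contribute less.
    `estimates` is (n_bs, 2); `variances` is (n_bs,), e.g. predicted by
    each BS's NN alongside its estimate."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()
    return (w[:, None] * np.asarray(estimates)).sum(axis=0)

# Three BSs, the third one very uncertain (e.g. a blocked channel):
est = [(10.1, 4.9), (9.8, 5.2), (14.0, 1.0)]
var = [0.5, 0.4, 10.0]
print(fuse_positions(est, var))   # stays close to the two confident BSs
```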
Learning from raw high dimensional data via interaction with a given environment has been effectively achieved through the utilization of deep neural networks. Yet the observed degradation in policy performance caused by imperceptible, worst-case, policy-dependent translations along high-sensitivity directions (i.e. adversarial perturbations) raises concerns about the robustness of deep reinforcement learning policies. In our paper, we show that these high-sensitivity directions do not lie only along particular worst-case directions, but rather are more abundant in the deep neural policy landscape and can be found via more natural means in a black-box setting. Furthermore, we show that vanilla training techniques intriguingly result in learning more robust policies compared to the policies learnt via the state-of-the-art adversarial training techniques. We believe our work sheds light on intriguing properties of the deep reinforcement learning policy manifold, and our results can help to build robust and generalizable deep reinforcement learning policies.
Multitask Learning is a learning paradigm that deals with multiple different tasks in parallel and transfers knowledge among them. XOF, a Learning Classifier System using tree-based programs to encode building blocks (meta-features), constructs and collects features with rich discriminative information for classification tasks in an observed list. This paper seeks to facilitate the automation of feature transfer between tasks by utilising the observed list. We hypothesise that the best discriminative features of a classification task carry its characteristics. Therefore, the relatedness between any two tasks can be estimated by comparing their most appropriate patterns. We propose a multiple-XOF system, called mXOF, that can dynamically adapt feature transfer among XOFs. This system utilises the observed list to estimate task relatedness, enabling automated feature transfer. In terms of knowledge discovery, the resemblance estimation provides insightful relations among multiple data sets. We experiment with mXOF in various scenarios, e.g. representative Hierarchical Boolean problems, classification of distinct classes in the UCI Zoo dataset, and unrelated tasks, to validate its abilities of automatic knowledge transfer and task-relatedness estimation. Results show that mXOF can reasonably estimate the relatedness between multiple tasks, aiding learning performance through dynamic feature transfer.
We recall the generalization of the Feynman-Metropolis-Teller approximation for a compressed atom using a relativistic Fermi-Thomas model. These results within a Wigner-Seitz approximation lead to a new equation of state for white dwarfs and to a new value of their critical mass, smaller than the one obtained by Chandrasekhar. The possible observations of these effects in binary neutron stars are outlined.
For each $n$, we construct a separable metric space $\mathbb{U}_n$ that is universal in the coarse category of separable metric spaces with asymptotic dimension ($\mathop{asdim}$) at most $n$ and universal in the uniform category of separable metric spaces with uniform dimension ($\mathop{udim}$) at most $n$. Thus, $\mathbb{U}_n$ serves as a universal space for dimension $n$ in both the large-scale and infinitesimal topology. More precisely, we prove that \[ \mathop{asdim} \mathbb{U}_n = \mathop{udim} \mathbb{U}_n = n, \] and that for each separable metric space $X$: a) if $\mathop{asdim} X \leq n$, then $X$ is coarsely equivalent to a subset of $\mathbb{U}_n$; b) if $\mathop{udim} X \leq n$, then $X$ is uniformly homeomorphic to a subset of $\mathbb{U}_n$.
In this paper we develop a quantitative Harris theorem with effective control over the constants. A benefit of our methodology is the decoupling of the small set and Lyapunov-Foster drift conditions. Our methodology allows any small set and any set in the Lyapunov-Foster condition, as long as the latter satisfies a so-called ``quantitative petiteness'' condition. The theorem relies on a novel proof of a quantitative Kendall-type theorem, inspired by techniques from the theory of Markov chains on general state spaces. We give an application of the technique to the Markov chain approximation of mixing processes.
Information on the existence and properties of diffuse interstellar bands (DIBs) outside the optical domain is still limited. Additional infra-red (IR) measurements and IR-optical correlative studies are needed to constrain DIB carriers and locate the various absorbers in 3D maps of the interstellar matter. We extended our study of H-band DIBs in Apache Point Observatory Galactic Evolution Experiment (APOGEE) Telluric Standard Star (TSS) spectra. We used the strong 15273A band to select the most and least absorbed targets. We used individual spectra of the former subsample to extract weaker DIBs, and we searched the two stacked series for differences that could indicate additional bands. High-resolution NARVAL and SOPHIE optical spectra for a subsample of 55 TSS targets were additionally recorded for NIR/optical correlative studies. From the TSS spectra we extract a catalog of measurements of the poorly studied 15617, 15653, and 15673A DIBs in about 300 sightlines; we obtain a first accurate determination of their rest wavelengths and constrain their intrinsic widths and shapes. In addition, we studied the relationship between these weak bands and the strong 15273A DIB. We provide a first or second confirmation of several other weak DIBs that have been proposed based on different instruments, and we add new constraints on their widths and locations. We finally propose two new DIB candidates. We compared the strength of the 15273A absorptions with their optical counterparts at 5780, 5797, 6196, 6283, and 6614A. Using the 5797-5780 ratio as a tracer of shielding against the radiation field, we showed that the 15273A DIB carrier is significantly more abundant in unshielded (sigma-type) clouds, and that it responds even more strongly than the 5780A band carrier to the local ionizing field.
Two types of room temperature detectors of terahertz laser radiation have been developed which allow, in an all-electric manner, the determination of the plane of polarization of linearly polarized radiation and of the ellipticity of elliptically polarized radiation, respectively. The operation of the detectors is based on photogalvanic effects in semiconductor quantum well structures of low symmetry. The photogalvanic effects have sub-nanosecond time constants at room temperature, enabling a high time resolution of the polarization detectors.
A versatile and portable magnetically shielded room with a field of (700 \pm 200) pT within a central volume of 1 m x 1 m x 1 m and a field gradient of less than 300 pT/m is described. This performance represents more than a hundred-fold improvement over the state of the art for a two-layer magnetic shield and provides an environment suitable for a next generation of precision experiments in fundamental physics at low energies; in particular, searches for electric dipole moments of fundamental systems and tests of Lorentz invariance based on spin-precession experiments. Studies of the residual fields and their sources enable improved design of future ultra-low-gradient environments and experimental apparatus.
We analyze a task in which classical and quantum messages are simultaneously communicated via a noisy quantum channel, assisted by a limited amount of shared entanglement. We derive direct and converse bounds for the one-shot capacity region, represented in terms of the smooth conditional entropies and the error tolerance. The proof is based on the randomized partial decoupling theorem, which is a generalization of the decoupling theorem. The two bounds match in the asymptotic limit of infinitely many uses of a memoryless channel and coincide with the previous result obtained by Hsieh and Wilde. Direct and converse bounds for various communication tasks are obtained as corollaries, both for the one-shot and asymptotic scenarios.
As is known, external fields can have a noticeable effect on a number of processes, such as the recombination of radicals, processes occurring in biological systems, and some adsorption processes. However, there is no mention in the literature of any impact on chemical processes by non-magnetic, non-charged bodies that have no direct contact with the reagents. In the current study we examine the effect of a closely placed non-magnetic, non-charged steel disk on the hydration of YBa2Cu3O6.75 (YBCO). For this purpose we utilize a special method of material preparation developed and published earlier. We analyze the resulting material by the iodometric titration technique, X-ray diffraction, and X-ray photoelectron spectroscopy. Experiments on a large number of YBCO samples hydrated at different distances from the disk clearly indicate a significant inhibitory effect of this factor. We find that there is a certain distance from the surface of the metal body at which the hydration process achieves maximum sensitivity to the presence of the metal. At this stage of the research, it is assumed that the mediator through which the influence on the hydration process occurs might be electromagnetic radiation emanating from the YBCO samples during their hydration and reflected from the surface of the metal.
In turbulent flows, local velocity differences often obey a cascade-like hierarchical dynamics, in the sense that local velocity differences at a given scale k are driven by deterministic and random forces from the next-higher scale k-1. Here we consider such a hierarchically coupled model with periodic boundary conditions, and show that it leads to an N-th order initial value problem, where N is the number of cascade steps. We deal in detail with the case N=7 and introduce a non-polynomial spline method that solves the problem for arbitrary driving forces. Several examples of driving forces are considered, and estimates of the numerical precision of our method are given. We show how to optimize the numerical method to obtain a truncation error of order O(h^5) rather than O(h^2), where h is the discretization step.
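The non-polynomial spline scheme itself is the paper's contribution, but the problem class is easy to state in code: any N-th order IVP reduces to a first-order system that a standard integrator can handle, which also provides a reference solution against which truncation errors of order O(h^5) versus O(h^2) could be measured. The driving force below is a made-up example, not one of the paper's cascade forces.

```python
import numpy as np
from scipy.integrate import solve_ivp

# An N-th order IVP u^(N) = f(t, u, u', ..., u^(N-1)) with N = 7, rewritten as a
# first-order system y' = F(t, y) with y = (u, u', ..., u^(6)).
N = 7

def F(t, y):
    dydt = np.empty(N)
    dydt[:-1] = y[1:]                   # y_k' = y_{k+1} for k < N-1
    dydt[-1] = np.sin(t) - 0.1 * y[0]   # u^(7) = f(t, u): illustrative driving force
    return dydt

sol = solve_ivp(F, (0.0, 10.0), y0=np.zeros(N), rtol=1e-10, atol=1e-12)
print("u(10) ≈", sol.y[0, -1])          # reference value for convergence tests
```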
Rapid categorization paradigms have a long history in experimental psychology: Characterized by short presentation times and speedy behavioral responses, these tasks highlight the efficiency with which our visual system processes natural object categories. Previous studies have shown that feed-forward hierarchical models of the visual cortex provide a good fit to human visual decisions. At the same time, recent work in computer vision has demonstrated significant gains in object recognition accuracy with increasingly deep hierarchical architectures. But it is unclear how well these models account for human visual decisions and what they may reveal about the underlying brain processes. We have conducted a large-scale psychophysics study to assess the correlation between computational models and human participants on a rapid animal vs. non-animal categorization task. We considered visual representations of varying complexity by analyzing the output of different stages of processing in three state-of-the-art deep networks. We found that recognition accuracy increases with higher stages of visual processing (higher level stages indeed outperforming human participants on the same task) but that human decisions agree best with predictions from intermediate stages. Overall, these results suggest that human participants may rely on visual features of intermediate complexity and that the complexity of visual representations afforded by modern deep network models may exceed those used by human participants during rapid categorization.
We show that the set of real polynomials in two variables that are sums of three squares of rational functions is dense in the set of those that are positive semidefinite. We also prove that the set of real surfaces in P^3 whose function field has level 2 is dense in the set of those that have no real points.
We study the adsorption geometry of 3,4,9,10-perylene-tetracarboxylic-dianhydride (PTCDA) on Ag(111) and Cu(111) using X-ray standing waves. The element-specific analysis shows that the carbon core of the molecule adsorbs in a planar configuration, whereas the oxygen atoms experience a non-trivial and substrate-dependent distortion. On copper (silver) the carbon rings reside 2.66 A (2.86 A) above the substrate. In contrast to the conformation on Ag(111), where the carboxylic oxygen atoms are bent towards the surface, we find that on Cu(111) all oxygen atoms lie above the carbon plane, at 2.73 A and 2.89 A, respectively.
We present a new framework for multi-view geometry in computer vision. A camera is a mapping between $\mathbb{P}^3$ and a line congruence. This model, which ignores image planes and measurements, is a natural abstraction of traditional pinhole cameras. It includes two-slit cameras, pushbroom cameras, catadioptric cameras, and many more. We study the concurrent lines variety, which consists of $n$-tuples of lines in $\mathbb{P}^3$ that intersect at a point. Combining its equations with those of various congruences, we derive constraints for corresponding images in multiple views. We also study photographic cameras which use image measurements and are modeled as rational maps from $\mathbb{P}^3$ to $\mathbb{P}^2$ or $\mathbb{P}^1\times \mathbb{P}^1$.
We prove a generalization of the Hochster-Roberts-Boutot-Kawamata Theorem conjectured by Aschenbrenner and the author: let $R\to S$ be a pure homomorphism of equicharacteristic zero Noetherian local rings. If $S$ is regular, then $R$ is pseudo-rational, and if $R$ is moreover $\mathbb Q$-Gorenstein, then it is pseudo-log-terminal.
The usual way to interpret language models (LMs) is to test their performance on different benchmarks and subsequently infer their internal processes. In this paper, we present an alternative approach, concentrating on the quality of LM processing, with a focus on their language abilities. To this end, we construct 'linguistic task spaces' -- representations of an LM's language conceptualisation -- that shed light on the connections LMs draw between language phenomena. Task spaces are based on the interactions of the learning signals from different linguistic phenomena, which we assess via a method we call 'similarity probing'. To disentangle the learning signals of linguistic phenomena, we further introduce a method called 'fine-tuning via gradient differentials' (FTGD). We apply our methods to language models of three different scales and find that larger models generalise better to overarching general concepts for linguistic tasks, making better use of their shared structure. Further, the distributedness of linguistic processing increases with pre-training through increased parameter sharing between related linguistic tasks. The overall generalisation patterns are mostly stable throughout training and not marked by incisive stages, potentially explaining the lack of successful curriculum strategies for LMs.
A spiral wave is a macroscopic dynamical pattern of excitable media that plays an important role in several distinct systems, including the Belousov-Zhabotinsky reaction, seizures in the brain, and lethal arrhythmia in the heart. Because spiral wave dynamics can exhibit a wide spectrum of behaviors, its precise quantification can be challenging. Here we present a hybrid geometric and information-theoretic approach to quantifying spiral wave dynamics. We demonstrate the effectiveness of our approach by applying it to numerical simulations of a two-dimensional excitable medium with different numbers and spatial patterns of spiral waves. We show that, by defining information flow over the excitable medium, hidden coherent structures emerge that effectively quantify the information transport underlying spiral wave dynamics. Most importantly, we find that some coherent structures become more clearly defined over a longer observation period. These findings validate our approach to quantitatively characterizing spiral wave dynamics by focusing on information transport. Our approach is computationally efficient and applicable to many excitable media of interest in distinct physical, chemical and biological systems. It could ultimately contribute to improved therapy of clinical conditions such as seizures and cardiac arrhythmia by identifying potential targets of interventional therapies.
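As a concrete stand-in for the information-flow idea (the paper's exact measure may differ), here is a minimal estimator of discrete transfer entropy between two sites, applied to synthetic binary series where one series drives the other with a one-step delay.

```python
import numpy as np
from collections import Counter

# Discrete transfer entropy TE(X -> Y) with history length 1, for integer series.
def transfer_entropy(x, y):
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))         # (y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))          # (y_{t+1}, y_t)
    singles = Counter(y[:-1])                       # y_t
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_both = c / pairs_yx[(y0, x0)]        # p(y_{t+1} | y_t, x_t)
        p_cond_self = pairs_yy[(y1, y0)] / singles[y0]  # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_cond_both / p_cond_self)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10_000)
y = np.roll(x, 1)                      # y copies x with a one-step delay
print(transfer_entropy(x, y))          # ~1 bit: x drives y
print(transfer_entropy(y, x))          # ~0 bits: no flow in the other direction
```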
Crucial developments to the recently introduced signal-space approach for multiplexing multiple data symbols using a single-radio switched antenna are presented. First, we introduce a general framework for expressing the spatial multiplexing relation of the transmit signals only from the antenna scattering parameters and the modulating reactive loading. This not only avoids tedious far-field calculations, but more importantly provides an efficient and practical strategy for spatially multiplexing PSK signals of any modulation order. The proposed approach allows ensuring a constant impedance matching at the input of the driving antenna for all symbol combinations, and as importantly uses only passive reconfigurable loads. This obviates the use of reconfigurable matching networks and active loads, respectively, thereby overcoming stringent limitations of previous single-feed MIMO techniques in terms of complexity, efficiency, and power consumption. The proposed approach is illustrated by the design of a realistic very compact antenna system optimized for multiplexing QPSK signals. The results show that the proposed approach can bring the MIMO benefits to the low-end user terminals at a reduced RF complexity.
Multi-layer graphene with rhombohedral stacking is a promising carbon phase possibly displaying correlated states like magnetism or superconductivity due to the occurrence of a flat surface band at the Fermi level. Recently, flakes of thickness up to 17 layers were tentatively attributed ABC sequences although the Raman fingerprint of rhombohedral multilayer graphene is currently unknown and the 2D resonant Raman spectrum of Bernal graphite not understood. We provide a first principles description of the 2D Raman peak in three and four layers graphene (all stackings) as well as in Bernal, rhombohedral and an alternation of Bernal and rhombohedral graphite. We give practical prescriptions to identify long range sequences of ABC multi-layer graphene. Our work is a prerequisite to experimental non-destructive identification and synthesis of rhombohedral graphite.
Furigana are pronunciation notes used in Japanese writing. Being able to detect these can help improve optical character recognition (OCR) performance or make more accurate digital copies of Japanese written media by correctly displaying furigana. This project focuses on detecting furigana in Japanese books and comics. While there has been research into the detection of Japanese text in general, there are currently no proposed methods for detecting furigana. We construct a new dataset containing Japanese written media and annotations of furigana. We propose an evaluation metric for such data which is similar to the evaluation protocols used in object detection, except that it allows groups of objects to be labeled by one annotation. We propose a method for the detection of furigana that is based on mathematical morphology and connected component analysis. We evaluate detections on the dataset and compare different methods for text extraction. We also evaluate different types of images, such as books and comics, individually, and discuss the challenges of each type of image. The proposed method reaches an F1-score of 76\% on the dataset. The method performs well on regular books, but less so on comics and books of irregular format. Finally, we show that the proposed method can improve the performance of OCR by 5\% on the manga109 dataset. Source code is available via \texttt{\url{https://github.com/nikolajkb/FuriganaDetection}}
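A rough sketch of the morphology plus connected-component idea is given below, using OpenCV; the thresholds and the input filename are illustrative placeholders, not the tuned values of the paper.

```python
import cv2
import numpy as np

# Locate small text components (furigana candidates) via morphology + CCA.
img = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)        # hypothetical input scan
binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Close small gaps so each character forms a single connected component.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

n, _, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
candidates = []
for i in range(1, n):                                     # label 0 is the background
    x, y, w, h, area = stats[i]
    # Furigana glyphs are much smaller than main text; keep only small components
    # (the size limits here are made-up illustrative values).
    if 4 <= area <= 400 and w < 30 and h < 30:
        candidates.append((x, y, w, h))
print(f"{len(candidates)} small-component candidates")
```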
Large-scale arrays of quantum-dot spin qubits in Si/SiGe quantum wells require large or tunable energy splittings of the valley states associated with degenerate conduction band minima. Existing proposals to deterministically enhance the valley splitting rely on sharp interfaces or modifications in the quantum well barriers that can be difficult to grow. Here, we propose and demonstrate a new heterostructure, the "Wiggle Well," whose key feature is Ge concentration oscillations inside the quantum well. Experimentally, we show that placing Ge in the quantum well does not significantly impact our ability to form and manipulate single-electron quantum dots. We further observe large and widely tunable valley splittings, from 54 to 239 μeV. Tight-binding calculations, and the tunability of the valley splitting, indicate that these results can mainly be attributed to random concentration fluctuations that are amplified by the presence of Ge alloy in the heterostructure, as opposed to a deterministic enhancement due to the concentration oscillations. Quantitative predictions for several other heterostructures point to the Wiggle Well as a robust method for reliably enhancing the valley splitting in future qubit devices.
I. Raeburn and J. Taylor have constructed continuous-trace C*-algebras with a prescribed Dixmier-Douady class, which also depend on the choice of an open cover of the spectrum. We study the asymptotic behavior of these algebras with respect to certain refinements of the cover and appropriate extension of cocycles. This leads to the analysis of a limit groupoid G and a cocycle \sigma, and the algebra C*(G, \sigma) may be regarded as a generalized direct limit of the Raeburn-Taylor algebras. As a special case, all UHF C*-algebras arise from this limit construction.
Very faint X-ray binaries appear to be transient in many cases, with peak luminosities much fainter than those of the usual soft X-ray transients, but their nature still remains elusive. We investigate the possibility that this transient behaviour is due to the same thermal/viscous instability that is responsible for outbursts of bright soft X-ray transients, occurring in ultracompact binaries at sufficiently low mass-transfer rates. More generally, we investigate the observational consequences of this instability when it occurs in ultracompact binaries. We use our code for modelling the thermal-viscous instability of the accretion disc, assumed here to be hydrogen poor. We also take into account the effects of disc X-ray irradiation, and consider the impact of the mass-transfer rate on the outburst brightness. We find that one can reproduce the observed properties of both the very faint and the brighter short transients (peak luminosity, duration, recurrence times), provided that the viscosity parameter in quiescence is slightly smaller (typically by a factor of between two and four) than in bright soft X-ray transients and normal dwarf nova outbursts, the viscosity in outburst being unchanged. This possibly reflects the impact of chemical composition on non-ideal MHD effects affecting magnetically driven turbulence in poorly ionized discs.
Non-linear transport properties of single-layer metal-on-metal islands driven with strong static and time-dependent forces are studied. We apply a semi-empirical lattice model and use master equation and kinetic Monte Carlo simulation methods to compute observables such as the velocity and the diffusion coefficient. Two types of time-dependent driving are considered: a pulsed rotated field and an alternating field with zero net force (an electrophoretic ratchet). Small islands of up to 12 atoms were studied in detail with the master equation method and larger ones with simulations. Results are presented mainly for a parametrization of Cu on the Cu(001) surface, which has been the main system of interest in several previous studies. The main results are that the pulsed field can increase the current in both the diagonal and axis directions when compared to a static field, and that there exists a current inversion in the electrophoretic ratchet. Both of these phenomena are a consequence of the coupling of the internal dynamics of the island with its transport. In addition to the previously discovered "magic size" effect for islands in equilibrium, a strong odd-even effect was found for islands driven far out of equilibrium. Master equation computations revealed non-monotonic behavior of the leading relaxation constant and the effective Arrhenius parameters. Using cycle optimization methods, typical island transport mechanisms are identified for small islands.
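For readers unfamiliar with the simulation side, the generic kinetic Monte Carlo (residence-time) step underlying such island simulations can be sketched in a few lines; the rate catalogue here is a toy stand-in, not the semi-empirical Cu/Cu(001) parametrization used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmc_step(rates, t):
    """Pick one event with probability rate_i / sum(rates); advance the clock."""
    total = rates.sum()
    i = np.searchsorted(np.cumsum(rates), rng.random() * total)
    t += -np.log(rng.random()) / total      # exponentially distributed waiting time
    return i, t

rates = np.array([1.0, 0.2, 0.05])          # e.g. edge diffusion, corner rounding, detachment
t = 0.0
for _ in range(5):
    event, t = kmc_step(rates, t)
    print(f"event {event} at t = {t:.3f}")
```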
We develop algorithms and computer programs which verify criteria of properness of discrete group actions on semisimple homogeneous spaces. We apply these algorithms to find new examples of non-virtually abelian discontinuous group actions on homogeneous spaces which do not admit proper SL(2,R)-actions.
Often in mathematics it is useful to summarize a multivariate phenomenon with a single number, and the determinant, denoted det, is one of the simplest such cases. It is defined only for square matrices, and many of its properties are well known: for instance, the determinant is a multiplicative function, i.e. det(AB)=det(A)det(B), but it is not, in general, an additive function. Another interesting function in matrix analysis is the characteristic polynomial: given a matrix A, it is defined by $p_A(t)=\det(tI-A)$, where I is the identity matrix, and its coefficients are, up to sign, the elementary symmetric functions of the eigenvalues of A. In the present paper new expressions related to the determinant of a sum of matrices and the elementary symmetric functions are given. Moreover, the connection with the Möbius function and partially ordered sets (posets) is presented. Finally, a problem related to the determinant of a sum of matrices is solved.
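The relation between the characteristic polynomial and the elementary symmetric functions is easy to verify numerically; the following sketch checks, for a small example matrix, that the coefficient of $t^{n-k}$ in $p_A(t)$ equals $(-1)^k e_k$ of the eigenvalues.

```python
import numpy as np
from itertools import combinations

# Example matrix (symmetric, so its eigenvalues are real).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])
eig = np.linalg.eigvals(A)

def e(k, vals):
    # k-th elementary symmetric function: sum of products over k-element subsets
    return sum(np.prod(c) for c in combinations(vals, k))

coeffs = np.poly(A)   # coefficients of det(tI - A), monic (leading coefficient 1)
for k in range(1, len(eig) + 1):
    # coefficient of t^{n-k} equals (-1)^k e_k(eigenvalues)
    print(k, coeffs[k], (-1)**k * e(k, eig))
```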
In this paper the invariant subspace method is employed for solving linear and non-linear fractional partial differential equations involving the Caputo derivative. A variety of illustrative examples are solved to demonstrate the effectiveness and applicability of the method.
$B_s \to \rho(\omega) K^{\ast}$ decays are useful for determining the $B_s$ distribution amplitude, as well as for constraining the CKM phase angle $\alpha$. We study these decays within the Perturbative QCD (PQCD) picture. In this approach, we calculate factorizable, non-factorizable, and annihilation diagrams. We find that the branching ratio for $B_s \to \rho^+ K^{*-}$ is large, of order $10^{-5}$; we also find large direct CP violation in $B_s(\bar B_s) \to \rho^0(\omega) \bar K^{*0}(K^{*0})$. Our predictions are consistent with those from other methods and with current experiments.
We address the problem of unambiguous discrimination among a given set of quantum operations. The necessary and sufficient condition for them to be unambiguously distinguishable is derived for the cases of single use and multiple uses, respectively. For the latter case we explicitly construct the input states and corresponding measurements that accomplish the task. It is found that the introduction of entanglement can improve the discrimination.
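The single-use state-discrimination primitive on which such constructions build is the classic Ivanovic-Dieks-Peres measurement; the sketch below (for two pure qubit states, a simpler setting than the operations considered here) verifies that it never misidentifies a state and succeeds with probability $1-|\langle\psi_1|\psi_2\rangle|$.

```python
import numpy as np

# Unambiguous discrimination of two pure qubit states with overlap s.
psi1 = np.array([1.0, 0.0])
theta = 0.4
psi2 = np.array([np.cos(theta), np.sin(theta)])
s = abs(psi1 @ psi2)

def perp(v):                      # normalized vector orthogonal to v (2D)
    return np.array([-v[1], v[0]])

E1 = np.outer(perp(psi2), perp(psi2)) / (1 + s)   # "it was psi1" (never fires on psi2)
E2 = np.outer(perp(psi1), perp(psi1)) / (1 + s)   # "it was psi2" (never fires on psi1)
E0 = np.eye(2) - E1 - E2                          # inconclusive outcome

print("E0 psd:", np.all(np.linalg.eigvalsh(E0) > -1e-12))   # valid POVM
print("error on psi2 via E1:", psi2 @ E1 @ psi2)            # exactly 0: unambiguous
print("success prob:", psi1 @ E1 @ psi1, "= 1 - s =", 1 - s)
```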
While tunable filters are a recent development in night time astronomy, they have long been used in other physical sciences, e.g. solar physics, remote sensing and underwater communications. With their ability to tune precisely to a given wavelength using a bandpass optimized for the experiment, tunable filters are already producing some of the deepest narrowband images to date of astrophysical sources. Furthermore, some classes of tunable filters can be used in fast telescope beams and therefore allow for narrowband imaging over angular fields of more than a degree over the sky.
In a recent paper \cite{[Good1]} Good postulated new rules of quantization, one of the major features of which is that the quantum evolution of the wave function is always given by ordinary differential equations. In this paper we analyse the proposal in some detail and discuss its viability and its relationship with the standard quantum theory. As a byproduct, a simple derivation of the `mass spectrum' for the Klein-Gordon field is presented, but it is also shown that there is a complete additional spectrum of negative `masses'. Finally, two major reasons are presented against the viability of this alternative proposal: a) It does not lead to the correct energy spectrum for the hydrogen atom. b) For field models, the standard quantum theory cannot be recovered from this alternative description.
Fairness in AI is a growing concern for high-stakes decision making. Engaging stakeholders, especially lay users, in fair AI development is promising yet overlooked. Recent efforts explore enabling lay users to provide AI fairness-related feedback, but there is still a lack of understanding of how to integrate users' feedback into an AI model and of the impacts of doing so. To bridge this gap, we collected feedback from 58 lay users on the fairness of an XGBoost model trained on the Home Credit dataset, and conducted offline experiments to investigate the effects of retraining models on accuracy and on individual and group fairness. Our work contributes baseline results on integrating user fairness feedback in XGBoost, and a dataset and code framework to bootstrap research on engaging stakeholders in AI fairness. Our discussion highlights the challenges of employing user feedback in AI fairness and points the way to a future application area of interactive machine learning.
A method of temporal factor prognosis of TE (tick-borne encephalitis) infection has been developed. High precision of the prognosis results has been achieved for a number of geographical regions of Primorsky Krai. The method can be applied not only to epidemiological research but also to other fields.
This paper explores the combination of neural network quantization and entropy coding for memory footprint minimization. Edge deployment of quantized models is hampered by the harsh Pareto frontier of the accuracy-to-bitwidth tradeoff, causing dramatic accuracy loss below a certain bitwidth. This accuracy loss can be alleviated by mixed precision quantization, which allows for more flexible bitwidth allocation. However, the benefits of standard mixed precision remain limited due to the 1-bit frontier, which forces each parameter to be encoded on at least 1 bit of data. This paper introduces an approach that combines mixed precision, zero-point quantization and entropy coding to push the compression boundary of ResNets beyond the 1-bit frontier, with an accuracy drop below 1% on the ImageNet benchmark. From an implementation standpoint, a compact decoder architecture features reduced latency, thus allowing for inference-compatible decoding.
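To see why entropy coding can break the 1-bit frontier, note that zero-point quantization concentrates most parameters on a single symbol, so the empirical entropy, which lower-bounds the achievable average code length, drops well below 1 bit per parameter. The weights and quantization grid below are synthetic, not taken from the paper.

```python
import numpy as np

# Entropy of zero-point-quantized weights (synthetic stand-in for a layer).
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.01, size=100_000)                # narrow weight distribution
levels = np.array([-0.05, 0.0, 0.05])                  # 3-level grid including a zero point
q = levels[np.abs(w[:, None] - levels[None, :]).argmin(axis=1)]   # nearest-level quantization

_, counts = np.unique(q, return_counts=True)
p = counts / counts.sum()                              # symbol probabilities (peaked at 0)
entropy = -(p * np.log2(p)).sum()
print(f"entropy: {entropy:.3f} bits/parameter")        # well below 1 bit for a peaked p
```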
Non-invertible symmetries have recently been understood to provide interesting constraints on RG flows of QFTs. In this work, we show how non-invertible symmetries can also be used to generate entirely new RG flows, by means of so-called "non-invertible twisted compactification". We illustrate the idea in the example of twisted compactifications of 4d $\mathcal{N}=4$ super-Yang-Mills (SYM) to three dimensions. After giving a catalogue of non-invertible symmetries descending from Montonen-Olive duality transformations of 4d $\mathcal{N}=4$ SYM, we show that twisted compactification by non-invertible symmetries can be used to obtain 3d $\mathcal{N}=6$ theories which appear otherwise unreachable if one restricts to twists by invertible symmetries.