Quantum algorithms for unstructured search problems rely on the preparation of a uniform superposition, traditionally achieved through Hadamard gates. However, this incidentally creates an auxiliary search space of nonsensical answers that do not belong in the search space and reduce the efficiency of the algorithm, owing to the need to neglect, un-compute, or destructively interfere with them. Previous approaches to removing this auxiliary search space yielded large circuit depth and required ancillary qubits. We have developed an optimized general solver for a circuit that prepares a uniform superposition of any N states while minimizing depth and without the use of ancillary qubits. We show that this algorithm is efficient, especially in its use of two-qubit gates, and we verify it on an IonQ quantum computer and through application to a quantum unstructured search algorithm.
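A minimal numpy sketch of the target state and its payoff (illustration only, not the authors' depth-optimized circuit; the Grover step and the marked state are assumptions for the demo): Hadamards over n = ceil(log2 N) qubits spread amplitude over all 2^n basis states, whereas preparing the uniform superposition over exactly the N valid states leaves no auxiliary amplitude to neglect, un-compute, or cancel.

```python
import numpy as np

N = 6                                # number of valid search states (not a power of two)
n = int(np.ceil(np.log2(N)))
dim = 2 ** n

# Hadamard-only preparation: uniform over all 2^n states, including
# dim - N "nonsensical" auxiliary states outside the search space.
psi_had = np.full(dim, 1 / np.sqrt(dim))

# Target preparation: uniform over exactly the N valid states.
psi = np.zeros(dim)
psi[:N] = 1 / np.sqrt(N)

# One Grover iteration acting purely on the intended search space.
marked = 3
oracle = np.eye(dim); oracle[marked, marked] = -1.0
diffusion = 2.0 * np.outer(psi, psi) - np.eye(dim)   # reflection about |psi>
psi_1 = diffusion @ (oracle @ psi)

print("auxiliary weight, Hadamard prep:", psi_had[N:] @ psi_had[N:])
print("P(marked) after one iteration  :", psi_1[marked] ** 2)
```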
The color of galaxies is a fundamental property, easily measured, that constrains models of galaxies and their evolution. Dust attenuation and star formation history (SFH) are the dominant factors affecting the color of galaxies. Here we explore the empirical relation between SFH, attenuation, and color for a wide range of galaxies, including early types. These galaxies have been observed by GALEX, SDSS, and Spitzer, allowing the construction of measures of dust attenuation from the ratio of infrared (IR) to ultraviolet (UV) flux and measures of SFH from the strength of the 4000A break. The empirical relation between these three quantities is compared to models that separately predict the effects of dust and SFH on color. This comparison demonstrates the quantitative consistency of these simple models with the data and hints at the power of multiwavelength data for constraining these models. The UV color is a strong constraint; we find that a Milky Way extinction curve is disfavored, and that the UV emission of galaxies with large 4000A break strengths is likely to arise from evolved populations. We perform fits to the relation between SFH, attenuation, and color. This relation links the production of starlight and its absorption by dust to the subsequent reemission of the absorbed light in the IR. Galaxy models that self-consistently treat dust absorption and emission as well as stellar populations will need to reproduce these fitted relations in the low-redshift universe.
Magnetic skyrmions are topologically protected nanoscale objects, which are promising building blocks for novel magnetic and spintronic devices. Here, we investigate the dynamics of a skyrmion driven by a spin wave in a magnetic nanowire. It is found that (i) the skyrmion is first accelerated and then decelerated exponentially; (ii) it can turn L-corners with both right and left turns; and (iii) it always turns left (right) when the skyrmion number is positive (negative) in the T- and Y-junctions. Our results will be the basis of skyrmionic devices driven by a spin wave.
Blockchains have popularized automated market makers (AMMs). An AMM exchange is an application running on a blockchain which maintains a pool of crypto-assets and automatically trades assets with users governed by some pricing function that prices the assets based on their relative demand/supply. AMMs have created an important challenge commonly known as the Miner Extractable Value (MEV). In particular, the miners who control the contents and ordering of transactions in a block can extract value by front-running and back-running users' transactions, leading to arbitrage opportunities that guarantee them risk-free returns. In this paper, we consider how to design AMM mechanisms that eliminate MEV opportunities. Specifically, we propose a new AMM mechanism that processes all transactions contained within a block in a batch. We show that our new mechanism satisfies two tiers of guarantees. First, for legacy blockchains where each block is proposed by a single (possibly rotating) miner, we prove that our mechanism satisfies arbitrage resilience, i.e., a miner cannot gain risk-free profit. Moreover, we also guarantee fair treatment among all transactions within the same block, such that the miner is unable to sell off favorable positions in the block to users or arbitragers. Second, for blockchains where the block proposal process is decentralized and offers sequencing-fairness, we prove a stronger notion called incentive compatibility -- roughly speaking, we guarantee that any individual user's best response is to follow the honest strategy.
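A toy Python sketch of the batch idea under a constant-product pricing function (the pool rule, order format, and fee-free setting are illustrative assumptions, not the paper's mechanism): all swaps in a block are netted and settled against the pool at one uniform price, so intra-block ordering, and with it front-running and back-running, cannot change any user's execution.

```python
def batch_swap(x, y, orders):
    """Clear a block of swaps against an x*y=k pool at one uniform price.

    orders: signed amounts of asset X each user sends
            (positive = sell X for Y, negative = buy X with Y).
    """
    k = x * y
    dx = sum(orders)                        # net flow of X into the pool
    new_x = x + dx
    new_y = k / new_x                       # invariant fixes the Y reserve
    dy = y - new_y                          # net flow of Y out of the pool
    price = dy / dx if dx != 0 else y / x   # uniform clearing price
    fills = [o * price for o in orders]     # everyone trades at the same price
    return new_x, new_y, price, fills

# Order of submission within the block is irrelevant to the outcome:
print(batch_swap(1000.0, 1000.0, [+30.0, -10.0, +5.0])[2])
print(batch_swap(1000.0, 1000.0, [-10.0, +5.0, +30.0])[2])
```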
The spectral line datacubes obtained from the Square Kilometre Array (SKA) and its precursors, such as the Australian SKA Pathfinder (ASKAP), will be sufficiently large to necessitate automated detection and parametrisation of sources. Matched filtering is widely acknowledged as the best possible method for the automated detection of sources. This paper presents the Characterised Noise Hi (CNHI) source finder, which employs a novel implementation of matched filtering. This implementation is optimised for the 3-D nature of the planned Wide-field ASKAP Legacy L-band All-sky Blind surveY's (WALLABY) Hi spectral line observations. The CNHI source finder also employs a novel sparse representation of 3-D objects, with a high compression rate, to implement Lutz's one-pass algorithm on datacubes that are too large to process in a single pass. WALLABY will use ASKAP's phenomenal 30 square degree field of view to image \sim 70% of the sky. It is expected that WALLABY will find 500 000 Hi galaxies out to z \sim 0.2.
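A compact scipy sketch of the underlying operation (illustrative only; the CNHI finder's templates, normalization, and sparse object representation differ): the datacube is correlated with a unit-norm template matching the expected source shape, which maximizes the signal-to-noise ratio for sources of that shape in white noise.

```python
import numpy as np
from scipy.ndimage import correlate

rng = np.random.default_rng(0)
cube = rng.normal(0.0, 1.0, (64, 64, 128))     # (x, y, velocity) noise cube

# inject a faint source with a 3-D Gaussian profile at (32, 32, 60)
xs = np.arange(64)[:, None, None]
ys = np.arange(64)[None, :, None]
vs = np.arange(128)[None, None, :]
cube += 0.4 * np.exp(-((xs - 32) ** 2 + (ys - 32) ** 2) / 8.0
                     - (vs - 60) ** 2 / 50.0)

# matched filter: correlate with a unit-norm template of the expected shape
tx = np.arange(-3, 4)[:, None, None]
ty = np.arange(-3, 4)[None, :, None]
tv = np.arange(-10, 11)[None, None, :]
t = np.exp(-(tx ** 2 + ty ** 2) / 8.0 - tv ** 2 / 50.0)
snr_cube = correlate(cube, t / np.linalg.norm(t), mode="constant")

print("peak at:", np.unravel_index(np.argmax(snr_cube), snr_cube.shape))
```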
Load balancing is the process of improving the performance of a parallel and distributed system by distributing load among the processors [1-2]. Most previous work on load balancing, and on distributed decision making in general, does not effectively take into account the uncertainty and inconsistency in state information; fuzzy logic offers the advantage of handling such imprecision even when the inputs themselves are crisp. In this paper, we present a new approach for implementing a dynamic load-balancing algorithm with fuzzy logic, which can cope with the uncertainty and inconsistency that affect previous algorithms. Furthermore, our algorithm achieves better response times than the round-robin and randomized algorithms, by 30.84 percent and 45.45 percent, respectively.
Viscously damped particles driven past an evenly spaced array of potential energy wells or barriers may become kinetically locked in to the array, or else may escape from the array. The transition between locked-in and free-running states has been predicted to depend sensitively on the ratio between the particles' size and the separation between wells. This prediction is confirmed by measurements on monodisperse colloidal spheres driven through arrays of holographic optical traps.
In this paper we investigate explicit numerical approximations for stochastic differential delay equations (SDDEs) under a local Lipschitz condition by employing the adaptive Euler-Maruyama (EM) method. Working in both finite and infinite horizons, we achieve strong convergence results by showing the boundedness of the pth moments of the adaptive EM solution. We also obtain the order of convergence in the infinite horizon. In addition, we show almost sure exponential stability of the adaptive approximate solution for both SDEs and SDDEs.
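A minimal numpy sketch of an adaptive EM step for a scalar SDDE (the step-size rule and the test equation are assumptions for illustration, not the paper's scheme): the step shrinks where the superlinear drift is large, which is the mechanism that allows strong convergence under a merely local Lipschitz condition, and the delayed value is read back from the stored path.

```python
import numpy as np

def adaptive_em_sdde(T=5.0, tau=1.0, x0=1.0, hmax=1e-2, seed=1):
    """Adaptive EM for dX = (X(t - tau) - X(t)^3) dt + X(t) dW."""
    rng = np.random.default_rng(seed)
    ts, xs = [0.0], [x0]
    t, x = 0.0, x0
    while t < T:
        # step-size function: shrink when |x| is large (illustrative choice)
        h = min(hmax, hmax / (1.0 + x ** 4), T - t)
        # delayed value: constant history x0 for t - tau < 0,
        # else the first stored point at or past t - tau
        x_del = x0 if t - tau < 0.0 else xs[int(np.searchsorted(ts, t - tau))]
        x = x + (x_del - x ** 3) * h + x * np.sqrt(h) * rng.standard_normal()
        t += h
        ts.append(t); xs.append(x)
    return np.array(ts), np.array(xs)

ts, xs = adaptive_em_sdde()
print(f"{len(ts)} adaptive steps, X(T) = {xs[-1]:.4f}")
```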
We discuss the role of gluon poles and gauge invariance for the hadron tensors of Drell-Yan and direct photon production processes with a transversely polarized hadron. These hadron tensors are needed to construct the corresponding single spin asymmetries. For the Drell-Yan process, we perform our analysis within both the Feynman and axial-type (contour) gauges for gluons. In both gauges, we demonstrate that gauge invariance leads to the need for new (non-standard) diagrams. Moreover, in the Feynman gauge, we argue for the absence of gluon poles in the correlators $\langle\bar\psi\gamma_\perp A^+\psi\rangle$ related traditionally to $dT(x,x)/dx$. As a result, these terms disappear from the final QED gauge invariant Drell-Yan hadron tensor. For direct photon production, by using the contour gauge for gluon fields, we find that new twist-$3$ terms are present in the hadron tensor of the process under consideration, in addition to the standard twist-$3$ terms.
Vincular and covincular patterns are generalizations of classical patterns allowing restrictions on the indices and values of the occurrences in a permutation. In this paper we study the integer sequences arising as the enumerations of permutations simultaneously avoiding a vincular and a covincular pattern, both of length 3, with at most one restriction. We see familiar sequences, such as the Catalan and Motzkin numbers, but also some previously unknown sequences which have close links to other combinatorial objects such as lattice paths and integer partitions. Where possible we include a generating function for the enumeration. One of the cases considered settles a conjecture by Pudwell (2010) on the Wilf-equivalence of barred patterns. We also give an alternative proof of the classic result that permutations avoiding 123 are counted by the Catalan numbers.
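The closing classical result is easy to check by brute force; the sketch below counts 123-avoiding permutations directly and compares against the Catalan numbers (handling vincular or covincular patterns would add adjacency conditions on indices or values inside `avoids_123`).

```python
from itertools import permutations, combinations
from math import comb

def avoids_123(p):
    # classical pattern: no indices i < j < k with p[i] < p[j] < p[k]
    return not any(p[i] < p[j] < p[k]
                   for i, j, k in combinations(range(len(p)), 3))

for n in range(1, 8):
    count = sum(avoids_123(p) for p in permutations(range(n)))
    catalan = comb(2 * n, n) // (n + 1)
    print(n, count, catalan)   # the two counts agree for every n
```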
We calculate the contribution of graviton exchange to the running of gauge couplings at lowest non-trivial order in perturbation theory. Including this contribution in a theory that features coupling constant unification does not upset this unification, but rather shifts the unification scale. When extrapolated formally, the gravitational correction renders all gauge couplings asymptotically free.
The cross coproduct braided group $Aut(C) \rcocross B$ is obtained by Tannaka-Krein reconstruction from $C^B\to C$ for a braided group $B$ in braided category $C$. We apply this construction to obtain partial solutions to two problems in braided group theory, namely the tensor problem and the braided double. We obtain $Aut(C)\rcocross Aut(C)\isom Aut(C)\lcross Aut(C)$ and higher braided group `spin chains'. The example of the braided group $B(R)\lcross B(R)\lcross...\lcross B(R)$ is described explicitly by R-matrix relations. We also obtain $Aut(C)\rcocross Aut(C)^*$ as a dual quasitriangular `codouble' braided group by reconstruction from the dual category $C^\circ\to C$. General braided double crossproducts $B\dcross C$ are also considered.
To find out whether toroidal fields can stably exist in galaxies, the current-driven instability of toroidal magnetic fields is considered under the influence of an axial magnetic field component and under the influence of both rigid and differential rotation. The MHD equations are solved in a simplified model with cylindrical geometry. We assume the axial field to be uniform and the fluid to be incompressible. The stability of a toroidal magnetic field is strongly influenced by uniform axial magnetic fields. If both field components are of the same order of magnitude, the instability is slightly supported and modes with m>1 dominate. If the axial field dominates, the most unstable modes again have m>1 but the field is strongly stabilized. All modes are suppressed by fast rigid rotation, with the m=1 mode resisting the longest. Just this mode is most easily re-animated for \Omega > \Omega^A (\Omega^A the Alfven frequency) if the rotation has a negative shear. Strong indication has been found for a stabilization of the nonaxisymmetric modes for fluids with small magnetic Prandtl number if they are unstable for Pm=1. For rotating fluids the higher modes with m>1 do not play an important role in the linear theory. In the light of our results, galactic fields should be marginally unstable against perturbations with m<= 1. The corresponding growth rates are of the order of the rotation period of the inner part of the galaxy.
The spontaneous decay rates of an excited atom placed near a dielectric cylinder are investigated. Special attention is paid to the case when the cylinder radius is small in comparison with the radiation wavelength (nanofiber or photonic wire). In this case, analytical expressions for the transition rates for different orientations of the dipole are derived. It is shown that the main contribution to the decay rates is due to the quasistatic interaction of the atomic dipole moment with the nanofiber, and the contributions of guided modes are exponentially small. On the contrary, in the case when the radius of the fiber is only slightly less than the radiation wavelength, the influence of guided modes can be substantial. The results obtained are compared with the cases of a dielectric nanospheroid and an ideally conducting wire.
Quantitative stochastic homogenization of linear elliptic operators is by now well-understood. In this contribution we move forward to the nonlinear setting of monotone operators with $p$-growth. This work is dedicated to a quantitative two-scale expansion result. By treating the range of exponents $2\le p <\infty$ in dimensions $d\le 3$, we are able to consider genuinely nonlinear elliptic equations and systems such as $-\nabla \cdot A(x)(1+|\nabla u|^{p-2})\nabla u=f$ (with $A$ random, not necessarily symmetric) for the first time. When going from $p=2$ to $p>2$, the main difficulty is to analyze the associated linearized operator, whose coefficients are degenerate, unbounded, and depend on the random input $A$ via the solution of a nonlinear equation. One of our main achievements is the control of this intricate nonlinear dependence, leading to annealed Meyers' estimates for the linearized operator, which are key to the optimal quantitative two-scale expansion result we derive (this is also new in the periodic setting).
Understanding the feasible power flow region is of central importance to power system analysis. In this paper, we propose a geometric view of the power system loadability problem. By using rectangular coordinates for complex voltages, we provide an integrated geometric understanding of active and reactive power flow equations on loadability boundaries. Based on such an understanding, we develop a linear programming framework to 1) verify if an operating point is on the loadability boundary, 2) compute the margin of an operating point to the loadability boundary, and 3) calculate a loadability boundary point of any direction. The proposed method is computationally more efficient than existing methods since it does not require solving nonlinear optimization problems or calculating the eigenvalues of the power flow Jacobian. Standard IEEE test cases demonstrate the capability of the new method compared to the current state-of-the-art methods.
The goal of these notes is to provide an informal introduction to Gromov-Witten theory with an emphasis on its role in counting curves in surfaces. These notes are based on a talk given at the Fields Institute during a week-long conference aimed at introducing graduate students to the subject which took place during the thematic program on Calabi-Yau Varieties: Arithmetic, Geometry, and Physics.
The Proca theory of the real massive vector field admits non-equilibrium solutions, where the asymptotic dynamics of the electric field is dominated by the periodically oscillating Coulomb component. We discuss how such field configurations are seen in different reference frames, where we find an intriguing spatial pattern of the vector field and the electromagnetic field associated with it. Our studies are carried out in the framework of the classical Proca theory.
We study a class of three-point functions on the de Sitter universe and on the asymptotic cone. A blending of geometrical ideas and analytic methods is used to compute some remarkable integrals, on the basis of a generalized star-triangle identity living on the cone and on the complex de Sitter manifold. We discuss an application of the general results to the study of the stability of scalar particles on the de Sitter universe.
By using the direct coexistence method, we have calculated the melting points of ice Ih at normal pressure for three recently proposed water models, namely, TIP3P-FB, TIP4P-FB, and TIP4P-D. We obtained Tm = 216 K for TIP3P-FB, Tm = 242 K for TIP4P-FB, and Tm = 247 K for TIP4P-D. We revisited the melting point of TIP4P/2005 and TIP5P, obtaining Tm = 250 and 274 K, respectively. We summarize the current situation of the melting point of ice Ih for a number of water models and conclude that no model is yet able to simultaneously reproduce the melting temperature of ice Ih and the temperature of the maximum in density at room pressure. This probably points both toward our still-incomplete knowledge of the potential energy surface of water and toward the necessity of incorporating nuclear quantum effects to describe both properties simultaneously.
Tunneling of fractionally charged quasiparticles across a two-dimensional electron system on a fractional quantum Hall plateau is expected to be strongly enhanced at low temperatures. This theoretical prediction is at odds with recent experimental studies of samples with weakly-pinched quantum-point-contact constrictions, in which the opposite behavior is observed. We argue here that this unexpected finding is a consequence of electron-electron interactions near the point contact.
Detailed mobile sensing data from phones, watches, and fitness trackers offer an unparalleled opportunity to quantify and act upon previously unmeasurable behavioral changes in order to improve individual health and accelerate responses to emerging diseases. Unlike in natural language processing and computer vision, deep representation learning has yet to broadly impact this domain, in which the vast majority of research and clinical applications still rely on manually defined features and boosted tree models or even forgo predictive modeling altogether due to insufficient accuracy. This is due to unique challenges in the behavioral health domain, including very small datasets (~10^1 participants), which frequently contain missing data, consist of long time series with critical long-range dependencies (length>10^4), and extreme class imbalances (>10^3:1). Here, we introduce a neural architecture for multivariate time series classification designed to address these unique domain challenges. Our proposed behavioral representation learning approach combines novel tasks for self-supervised pretraining and transfer learning to address data scarcity, and captures long-range dependencies across long-history time series through transformer self-attention following convolutional neural network-based dimensionality reduction. We propose an evaluation framework aimed at reflecting expected real-world performance in plausible deployment scenarios. Concretely, we demonstrate (1) performance improvements over baselines of up to 0.15 ROC AUC across five prediction tasks, (2) transfer learning-induced performance improvements of 16% PR AUC in small data scenarios, and (3) the potential of transfer learning in novel disease scenarios through an exploratory case study of zero-shot COVID-19 prediction in an independent data set. Finally, we discuss potential implications for medical surveillance testing.
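A skeletal PyTorch rendering of the described architecture (layer sizes, depths, and the pooling head are assumptions; the self-supervised pretraining tasks are not reproduced): a strided 1-D CNN first shortens the very long multivariate series, and transformer self-attention then captures long-range dependencies over the reduced token sequence.

```python
import torch
import torch.nn as nn

class BehaviorClassifier(nn.Module):
    def __init__(self, n_channels=8, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        # convolutional dimensionality reduction: two stride-4 stages
        # shorten a length-L series to roughly L/16 tokens
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=8, stride=4, padding=2),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=8, stride=4, padding=2),
            nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=128,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        z = self.cnn(x).transpose(1, 2)    # (batch, tokens, d_model)
        z = self.encoder(z)                # self-attention over long history
        return self.head(z.mean(dim=1))    # mean-pool tokens, then classify

model = BehaviorClassifier()
logits = model(torch.randn(4, 8, 10_000))  # 4 participants, 10k time steps
print(logits.shape)                        # torch.Size([4, 2])
```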
The functional space of biquaternions is considered on Minkowski space. The scalar-vector representation of biquaternions, introduced by W. Hamilton for quaternions, is used. With the introduction of a differential operator, the mutual complex gradient (bigradient), which generalizes the notion of a gradient to the space of biquaternions, biquaternionic wave (biwave) equations are considered, their invariance under the group of Lorentz-Poincare transformations is shown, and their generalized solutions are obtained. A biquaternionic form of the generalized Maxwell-Dirac equation is constructed, and its solutions are studied on the basis of the differential algebra of biquaternions. Its generalized solutions are built with the use of scalar potentials. A new equation for these potentials is constructed, which unites known equations of quantum mechanics (the Klein-Gordon and Schrodinger equations). Nonstationary, steady-state, and time-harmonic scalar fields, together with the spinors and spinor fields they generate, are constructed in biquaternionic form.
Modeling and predicting extreme movements in GDP is notoriously difficult, and the selection of appropriate covariates and/or possible forms of nonlinearities is key to obtaining precise forecasts. In this paper, our focus is on using large datasets in quantile regression models to forecast the conditional distribution of US GDP growth. To capture possible non-linearities, we include several nonlinear specifications. The resulting models are extremely high-dimensional, and we thus rely on a set of shrinkage priors. Since Markov Chain Monte Carlo estimation becomes slow in these dimensions, we rely on fast variational Bayes approximations to the posterior distribution of the coefficients and the latent states. We find that our proposed set of models produces precise forecasts. These gains are especially pronounced in the tails. Using Gaussian processes to approximate the nonlinear component of the model further improves the good performance, in particular in the right tail.
We prove the rationality and irreducibility of the moduli space of---what we call---the endomorphism-general instanton vector bundles of arbitrary rank on the projective space. In particular, we deduce the rationality of the moduli spaces of rank-two mathematical instantons. This problem was first studied by Hartshorne, Hirschowitz-Narasimhan in the late 1970s, and it has been reiterated within the framework of the ICM 2018.
The applicability of Doppler radar for gait analysis is investigated by quantitatively comparing the measured biomechanical parameters to those obtained using motion capturing and ground reaction forces. Nineteen individuals walked on a treadmill at two different speeds, with a radar system positioned in front of or behind the subject. The right knee angle was confined by an adjustable orthosis to five different degrees. Eleven gait parameters are extracted from radar micro-Doppler signatures. Here, new methods for obtaining the velocities of individual lower limb joints are proposed. Further, a new method to extract individual leg flight times from radar data is introduced. Based on radar data, five spatiotemporal parameters related to rhythm and pace could reliably be extracted. Further, for most of the considered conditions, three kinematic parameters could accurately be measured. The radar-based stance and flight time measurements rely on the correct detection of the time instant of maximal knee velocity during the gait cycle. This time instant is reliably detected when the radar has a back view, but is underestimated when the radar is positioned in front of the subject. The results validate the applicability of Doppler radar to accurately measure a variety of medically relevant gait parameters. Radar has the potential to unobtrusively diagnose changes in gait, e.g., to design training in prevention and rehabilitation. As a contactless and privacy-preserving sensor, radar presents a viable technology to supplement existing gait analysis tools for long-term in-home examinations.
NASA should design missions to Mars for the purpose of generating "Aha!" discoveries to jolt scientists contemplating the molecular origins of life. These missions should be designed with an understanding of the privileged chemistry that likely created RNA prebiotically on Earth, and universal chemical principles that constrain the structure of Darwinian molecules generally.
We report the results of a visual inspection of images of the Rapid ASKAP Continuum Survey (RACS) in search of extended radio galaxies (ERG) that reach or exceed linear sizes on the order of one Megaparsec. We searched a contiguous area of 1059 deg$^2$ from RA$_{\rm J}$=20$^h$20$^m$ to 06$^h$20$^m$, and $-50^{\circ}<\rm{Dec}_J<-40^{\circ}$, which is covered by deep multi-band optical images of the Dark Energy Survey (DES), and in which previously only three ERGs larger than 1 Mpc had been reported. For over 1800 radio galaxy candidates inspected, our search in optical and infrared images resulted in hosts for 1440 ERG, for which spectroscopic and photometric redshifts from various references were used to convert their largest angular size (LAS) to projected linear size (LLS). This resulted in 178 newly discovered giant radio sources (GRS) with LLS$>$1 Mpc, of which 18 exceed 2 Mpc and the largest one is 3.4 Mpc. Their redshifts range from 0.02 to $\sim$2.0, but only 10 of the 178 new GRS have spectroscopic redshifts. For the 146 host galaxies the median $r$-band magnitude and redshift are 20.9 and 0.64, while for the 32 quasars or candidates these are 19.7 and 0.75. Merging the six most recent large compilations of GRS results in 458 GRS larger than 1 Mpc, so we were able to increase this number by $\sim39\%$ to now 636.
We consider the problem of controlling switched-mode power converters using model predictive control. Model predictive control requires solving optimization problems in real time, limiting its application to systems with small numbers of switches and a short horizon. We propose a technique for using off-line computation to approximate the model predictive controller. This is done by dividing the planning horizon into two segments, and using a quadratic function to approximate the optimal cost over the second segment. The approximate model predictive algorithm minimizes the true cost over the first segment, and the approximate cost over the second segment, allowing the user to adjust the computational requirements by changing the length of the first segment. We conclude with two simulated examples.
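A toy numpy sketch of the two-segment idea (the switched linear dynamics, costs, and the quadratic tail matrix P are placeholders, not a converter model): switch sequences are enumerated exactly over the first segment, the second segment's cost is replaced by the offline quadratic approximation x'Px, and the knob trading horizon against computation is just `H1`.

```python
import numpy as np
from itertools import product

# Toy switched linear system: x+ = A x + b_u, with switch position u in {0, 1}.
A = np.array([[0.9, 0.2], [0.0, 0.8]])
b = {0: np.array([0.0, 0.0]), 1: np.array([0.5, 1.0])}
Q = np.eye(2)                      # stage cost x'Qx over the first segment
P = 5.0 * np.eye(2)                # offline quadratic approx. of the tail cost

def approx_mpc(x0, H1=6):
    """Enumerate switch sequences over the first segment; quadratic tail."""
    best_cost, best_seq = np.inf, None
    for seq in product((0, 1), repeat=H1):
        x, cost = x0.copy(), 0.0
        for u in seq:
            cost += x @ Q @ x
            x = A @ x + b[u]
        cost += x @ P @ x          # approximate cost over the second segment
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0], best_cost  # apply first decision (receding horizon)

u0, cost = approx_mpc(np.array([1.0, -1.0]))
print("first control:", u0, " approx. total cost: %.3f" % cost)
```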
We present distance estimates to a set of high-latitude intermediate-velocity HI clouds. We explore some of the physical parameters that can be determined from these results, such as cloud mass, infall velocity and height above the Galactic plane. We also briefly describe some astrophysical applications of these data and explore future work.
Entanglement in multipartite systems can be achieved by the coherent superposition of product states, generated through a universal unitary transformation, followed by spontaneous parametric down-conversions and path identification.
There is a parallelism between growth in arithmetic combinatorics and growth in a geometric context. While, over $\mathbb{R}$ or $\mathbb{C}$, geometric statements on growth often have geometric proofs, what little is known over finite fields rests on arithmetic proofs. We discuss strategies for geometric proofs of growth over finite fields, and show that growth can be defined and proven in an abstract projective plane -- even one with weak axioms.
We present high-S/N optical spectra of 10 BL Lac objects detected at GeV energies by the Fermi satellite (3FGL catalog), for which previous observations suggested that they are at relatively high redshift. The new observations, obtained at the 10 m Gran Telescopio Canarias, allowed us to find the redshift of J0814.5+2943 (z = 0.703), and we can set spectroscopic lower limits for J0008.0+4713 (z>1.659) and J1107.7+0222 (z>1.0735) on the basis of Mg II intervening absorption features. In addition, we confirm the redshifts of J0505.5+0416 (z=0.423) and J1450+5200 (z>2.470). Finally, we contradict the previous z estimates for five objects (J0049.7+0237, J0243.5+7119, J0802.0+1005, J1109.4+2411, and J2116.1+3339).
We computationally study the Fermi arc states in a Dirac semimetal, both in a semi-infinite slab and in the thin-film limit. We use Cd$_3$As$_2$ as a model system, and include perturbations that break the $C_4$ symmetry and inversion symmetry. The surface states are protected by the mirror symmetries present in the bulk states and thus survive these perturbations. The Fermi arc states persist down to very thin films, thinner than presently measured experimentally, but are affected by breaking the symmetry of the Hamiltonian. Our findings are compatible with experimental observations of transport in Cd$_3$As$_2$, and also suggest that symmetry-breaking terms that preserve the Fermi arc states nevertheless can have a profound effect in the thin film limit.
We classify parallel and totally geodesic hypersurfaces of the relevant class of G\"odel-type spacetimes, with particular regard to the homogeneous examples.
We show that Nash-Williams' theorem asserting that the countable transfinite sequences of elements of a better-quasi-ordering ordered by embeddability form a better-quasi-ordering is provable in the subsystem of second order arithmetic Pi^1_1-CA_0 but is not equivalent to Pi^1_1-CA_0. We obtain some partial results towards the proof of this theorem in the weaker subsystem ATR_0 and we show that the minimality lemmas typical of wqo and bqo theory imply Pi^1_1-CA_0 and hence cannot be used in such a proof.
Consider a logistic partially linear model, in which the logit of the mean of a binary response is related to a linear function of some covariates and a nonparametric function of other covariates. We derive simple, doubly robust estimators of the coefficients for the covariates in the linear component of the partially linear model. Such estimators remain consistent if either a nuisance model is correctly specified for the nonparametric component, or another nuisance model is correctly specified for the means of the covariates of interest given the other covariates and the response at a fixed value. In previous works, conditional density models are needed for the latter purpose unless a scalar, binary covariate is handled. We also propose two specific doubly robust estimators: one is locally efficient within our class of doubly robust estimators, and the other is numerically and statistically simpler and can achieve reasonable efficiency, especially when the true coefficients are close to 0.
We study electron transport through a double quantum dot (DQD) coupled to a cavity with a single photon mode. The DQD is connected to two electron reservoirs, and the total system is placed in an external perpendicular magnetic field. The DQD system exhibits a complex multi-level energy spectrum. By varying the photon energy, several anti-crossings between photon-dressed electron states of the DQD-cavity system are found at low strength of the magnetic field. The anti-crossings are identified as multiple Rabi resonances arising from the photon exchange between these states. As a result, a dip in the current is seen, caused by the multiple Rabi resonances. By increasing the strength of the external magnetic field, a dislocation of the current dip to a lower photon energy is found, and the current dip can be diminished. The interplay of the strength of the magnetic field and the geometry of the states of the DQD system can weaken the multiple Rabi resonances, in which case the exchange of photons between the anti-crossing states is decreased. We can therefore confirm that the electron transport behavior in the DQD-cavity system can be controlled by manipulating the external magnetic field and the photon cavity parameters.
Gravitational lensing can provide pure geometric tests of the structure of space-time, for instance by determining empirically the angular diameter distance-redshift relation. This geometric test has been demonstrated several times using massive clusters which produce a large lensing signal. In this case, matter at a single redshift dominates the lensing signal, so the analysis is straightforward. It is less clear how weaker signals from multiple sources at different redshifts can be stacked to demonstrate the geometric dependence. We introduce a simple measure of relative shear which for flat cosmologies separates the effect of lens and source positions into multiplicative terms, allowing signals from many different source-lens pairs to be combined. Applying this technique to a sample of groups and low-mass clusters in the COSMOS survey, we detect a clear variation of shear with distance behind the lens. This represents the first detection of the geometric effect using weak lensing by multiple, low-mass systems. The variation of distance with redshift is measured with sufficient precision to constrain the equation of state of the universe under the assumption of flatness, equivalent to a detection of a dark energy component Omega_X at greater than 99% confidence for an equation-of-state parameter -2.5 < w < -0.1. For the case w = -1, we find a value for the cosmological constant density parameter Omega_Lambda = 0.85 +0.044/-0.19 (68% C.L.), and detect cosmic acceleration (q_0 < 0) at the 98% C.L. We consider the systematic uncertainties associated with this technique and discuss the prospects for applying it in forthcoming weak-lensing surveys.
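To make the multiplicative separation concrete (a sketch with assumed notation; the paper's estimator may be normalized differently): in a flat universe comoving distances add, so with $\chi_l$, $\chi_s$ the comoving distances of lens and source, the shear efficiency factorizes as

$$\gamma \;\propto\; \frac{D_l D_{ls}}{D_s} \;=\; \frac{\chi_l}{1+z_l}\left(1 - \frac{\chi_l}{\chi_s}\right), \qquad D_{ls} = \frac{\chi_s - \chi_l}{1+z_s}, \quad D_s = \frac{\chi_s}{1+z_s}.$$

The lens-only prefactor $\chi_l/(1+z_l)$ and the source-dependent factor $(1-\chi_l/\chi_s)$ enter multiplicatively, which is what allows shear ratios from many independent lens-source pairs to be stacked into a single geometric measurement.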
Context: R Coronae Borealis (RCB) variables and their non-variable counterparts, the dustless Hydrogen-Deficient Carbon (dLHdC) stars, have been known to exhibit enhanced s-processed material on their surfaces, especially Sr, Y, and Ba. No comprehensive work has been done to explore the s-process in these types of stars; however, one particular RCB star, U Aqr, has been under scrutiny for its extraordinary Sr enhancement.
Aims: We aim to identify RCB and dLHdC stars that have significantly enhanced Sr abundances, such as U Aqr, and use stellar evolution models to begin to estimate the type of neutron exposure that occurs in a typical HdC star.
Methods: We compare the strength of the Sr II 4077 $\AA$ spectral line to Ca II H to identify the new subclass of Sr-rich HdCs. We additionally use the structural and abundance information from existing RCB MESA models to calculate the neutron exposure parameter, $\tau$.
Results: We identify six stars in the Sr-rich class. Two are RCBs, and four are dLHdCs. We additionally find that the preferred RCB MESA model has a neutron exposure $\tau$ ~ 0.1 mb$^{-1}$, which is lower than the estimated $\tau$ between 0.15 and 0.6 mb$^{-1}$ for the Sr-rich star U Aqr found in the literature. We find trends in the neutron exposure corresponding to He-burning shell temperature, metallicity, and assumed s-processing site.
Conclusions: We have found a sub-class of 6 HdCs known as the Sr-rich class, which tend to lie in the halo, outside the typical distribution of RCBs and dLHdCs. We find that dLHdC stars are more likely to be Sr-rich than RCBs, with an occurrence rate of ~13\% for dLHdCs and ~2\% for RCBs. This is one of the first potential spectroscopic differences between RCBs and dLHdCs, along with dLHdCs having stronger surface abundances of $^{18}$O.
During the last two decades it has been discovered that the statistical properties of a number of microscopically rather different random systems at the macroscopic level are described by {\it the same} universal probability distribution function, which is called the Tracy-Widom (TW) distribution. Among these systems we find both purely mathematical problems, such as the longest increasing subsequences in random permutations, and quite physical ones, such as directed polymers in random media or polynuclear crystal growth. In the extensive Introduction we discuss in simple terms these various random systems and explain what the universal TW function is. Next, concentrating on the example of one-dimensional directed polymers in a random potential, we give the main lines of the formal proof that fluctuations of their free energy are described by the universal TW distribution. The second part of the review consists of detailed appendices which provide the necessary self-contained mathematical background for the first part.
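The TW law is easy to observe numerically; a minimal sketch (standard GUE conventions assumed) samples the largest eigenvalue of random Hermitian matrices and applies the N^{1/6} edge scaling, under which the histogram approaches the beta = 2 TW distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 200, 500
samples = np.empty(trials)
for i in range(trials):
    # GUE: Hermitian matrix built from complex Gaussian entries
    G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    H = (G + G.conj().T) / 2.0
    lam_max = np.linalg.eigvalsh(H)[-1]
    # TW scaling of the top-eigenvalue fluctuations about the edge 2*sqrt(N)
    samples[i] = (lam_max - 2.0 * np.sqrt(N)) * N ** (1.0 / 6.0)

print("mean %.3f, std %.3f" % (samples.mean(), samples.std()))
# for comparison, TW (beta = 2) has mean ~ -1.771 and std ~ 0.902
```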
High-fidelity 3D scene reconstruction has been substantially advanced by recent progress in neural fields. However, most existing methods train a separate network from scratch for each individual scene. This is not scalable, inefficient, and unable to yield good results given limited views. While learning-based multi-view stereo methods alleviate this issue to some extent, their multi-view setting makes it less flexible to scale up and to broad applications. Instead, we introduce training generalizable Neural Fields incorporating scene Priors (NFPs). The NFP network maps any single-view RGB-D image into signed distance and radiance values. A complete scene can be reconstructed by merging individual frames in the volumetric space WITHOUT a fusion module, which provides better flexibility. The scene priors can be trained on large-scale datasets, allowing for fast adaptation to the reconstruction of a new scene with fewer views. NFP not only demonstrates SOTA scene reconstruction performance and efficiency, but it also supports single-image novel-view synthesis, which is underexplored in neural fields. More qualitative results are available at: https://oasisyang.github.io/neural-prior
We explore theoretically the optical response properties in an optomechanical system under the electromagnetically induced transparency condition, but with the mechanical resonator being driven by an additional coherent field. In this configuration, more complex quantum coherent and interference phenomena occur. In particular, we find that the probe transmission spectra depend on the total phase of the applied fields. Our study also provides an efficient way to control the propagation of amplification.
We present an approach for recognizing all objects in a scene and estimating their full pose from an accurate 3D instance-aware semantic reconstruction using an RGB-D camera. Our framework couples convolutional neural networks (CNNs) and a state-of-the-art dense Simultaneous Localisation and Mapping (SLAM) system, ElasticFusion, to achieve both high-quality semantic reconstruction as well as robust 6D pose estimation for relevant objects. While the main trend in CNN-based 6D pose estimation has been to infer an object's position and orientation from single views of the scene, our approach explores performing pose estimation from multiple viewpoints, under the conjecture that combining multiple predictions can improve the robustness of an object detection system. The resulting system is capable of producing high-quality object-aware semantic reconstructions of room-sized environments, as well as accurately detecting objects and their 6D poses. The developed method has been verified through experimental validation on the YCB-Video dataset and a newly collected warehouse object dataset. Experimental results confirmed that the proposed system achieves improvements over state-of-the-art methods in terms of surface reconstruction and object pose prediction. Our code and video are available at https://sites.google.com/view/object-rpe.
We establish a formula for the SL(2,C) Casson invariant of spliced sums of homology spheres along knots. Along the way, we show that the SL(2,C) Casson invariant vanishes for spliced sums along knots in the 3-sphere.
Short-term hydro-generation management poses a non-convex or even non-continuous optimization problem. For this reason, the problem of systematically obtaining feasible and economically satisfying solutions has not yet been completely solved. Two decomposition methods, which, as far as we know, have not been applied in this field, are proposed here:
$\bullet$ the first is based on a decomposition by prediction method, where the coordination is a primal-dual relaxation algorithm;
$\bullet$ handling the dynamic constraints by duality, the second achieves a price decomposition by an Augmented Lagrangian technique.
Numerical tests show the efficiency of these algorithms. They will enable the process in use at Electricit{\'e} de France to be improved.
This report addresses state inference for hidden Markov models. These models rely on unobserved states, which often have a meaningful interpretation. This makes it necessary to develop diagnostic tools for the quantification of state uncertainty. The entropy of the state sequence that explains an observed sequence for a given hidden Markov chain model can be considered as the canonical measure of state sequence uncertainty. This canonical measure is not reflected by the classic multivariate state profiles computed by the smoothing algorithm, which summarize the possible state sequences. Here, we introduce a new type of profile with the following properties: (i) the profiles of conditional entropies decompose the canonical measure of state sequence uncertainty along the sequence, making it possible to localize this uncertainty; (ii) the profiles are univariate and thus remain easily interpretable on tree structures. We show how to extend the smoothing algorithms for hidden Markov chain and tree models to compute these entropy profiles efficiently.
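A self-contained numpy sketch of the decomposition behind such profiles (toy parameters; this chain-rule variant is one of several equivalent formulations): conditionally on the observations the state process is Markov, so the canonical measure H(S_{1:n} | O_{1:n}) splits exactly into H(S_1 | O) plus expected entropies of the posterior transitions, one local and interpretable term per position, as the brute-force check at the end confirms.

```python
import numpy as np
from itertools import product

def entropy_profile(pi, A, B, obs):
    """Forward-backward plus chain-rule entropy decomposition for an HMM."""
    n, K = len(obs), len(pi)
    alpha = np.zeros((n, K)); beta = np.ones((n, K))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, n):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(n - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)      # smoothed P(S_t | O)

    def H(p):
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    # profile[t] = H(S_t | S_{t-1}, O): a local contribution per position
    profile = [H(gamma[0])]
    for t in range(1, n):
        # posterior transition P(S_t = j | S_{t-1} = i, O), rows normalized
        T = A * (B[:, obs[t]] * beta[t])[None, :]
        T /= T.sum(axis=1, keepdims=True)
        profile.append(sum(gamma[t - 1, i] * H(T[i]) for i in range(K)))
    return np.array(profile)                       # sums to H(S_{1:n} | O)

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.3, 0.7]])
obs = [0, 0, 1, 0, 1, 1]
prof = entropy_profile(pi, A, B, obs)

# brute-force check of the total against all K^n state sequences
joint = np.array([pi[s[0]] * B[s[0], obs[0]] *
                  np.prod([A[s[t - 1], s[t]] * B[s[t], obs[t]]
                           for t in range(1, len(obs))])
                  for s in product(range(2), repeat=len(obs))])
post = joint / joint.sum()
print(prof.sum(), -(post * np.log(post)).sum())    # the two totals agree
```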
We initiate the recently proposed $\boldsymbol{w}$-ensemble one-particle reduced density matrix functional theory ($\boldsymbol{w}$-RDMFT) by deriving the first functional approximations and illustrate how excitation energies can be calculated in practice. For this endeavour, we first study the symmetric Hubbard dimer, constituting the building block of the Hubbard model, for which we execute the Levy-Lieb constrained search. Second, due to the particular suitability of $\boldsymbol{w}$-RDMFT for describing Bose-Einstein condensates, we demonstrate three conceptually different approaches for deriving the universal functional in a homogeneous Bose gas for arbitrary pair interaction in the Bogoliubov regime. Remarkably, in both systems the gradient of the functional is found to diverge repulsively at the boundary of the functional's domain, extending the recently discovered Bose-Einstein condensation force to excited states. Our findings highlight the physical relevance of the generalized exclusion principle for fermionic and bosonic mixed states and the curse of universality in functional theories.
The Shack-Hartmann wavefront sensor is widely used to measure aberrations induced by atmospheric turbulence in adaptive optics systems. However, if there is strong atmospheric turbulence or the brightness of guide stars is low, the accuracy of wavefront measurements will be affected. In this paper, we propose a compressive Shack-Hartmann wavefront sensing method. Instead of reconstructing wavefronts with slope measurements of all sub-apertures, our method reconstructs wavefronts with slope measurements of sub-apertures whose spot images have a high signal-to-noise ratio. Besides, we further propose to use a deep neural network to accelerate wavefront reconstruction. During the training stage of the deep neural network, we propose to add a drop-out layer to simulate the compressive sensing process, which speeds up the development of our method. After training, the compressive Shack-Hartmann wavefront sensing method can reconstruct wavefronts at high spatial resolution with slope measurements from only a small number of sub-apertures. We integrate the compressive Shack-Hartmann wavefront sensing method with an image deconvolution algorithm to develop a high-order image restoration method. We use images restored by the high-order image restoration method to test the performance of our compressive Shack-Hartmann wavefront sensing method. The results show that our method can improve the accuracy of wavefront measurements and is suitable for real-time applications.
Nonequilibrium processes of small systems such as molecular machines are ubiquitous in biology, chemistry and physics, but are often challenging to comprehend. In the past two decades, several exact thermodynamic relations of nonequilibrium processes, collectively known as fluctuation theorems, have been discovered and provided critical insights. These fluctuation theorems are generalizations of the second law, and can be unified by a differential fluctuation theorem. Here we perform the first experimental test of the differential fluctuation theorem, using an optically levitated nanosphere in both underdamped and overdamped regimes, and in both spatial and velocity spaces. We also test several theorems that can be obtained from it directly, including a generalized Jarzynski equality that is valid for arbitrary initial states, and the Hummer-Szabo relation. Our study experimentally verifies these fundamental theorems, and initiates the experimental study of stochastic energetics with the instantaneous velocity measurement.
Correlated outcomes are common in many practical problems. In some settings, one outcome is of particular interest, and others are auxiliary. To leverage information shared by all the outcomes, traditional multi-task learning (MTL) minimizes an averaged loss function over all the outcomes, which may lead to biased estimation for the target outcome, especially when the MTL model is mis-specified. In this work, based on a decomposition of estimation bias into two types, within-subspace and against-subspace, we develop a robust transfer learning approach to estimating a high-dimensional linear decision rule for the outcome of interest in the presence of auxiliary outcomes. The proposed method includes an MTL step using all outcomes to gain efficiency, and a subsequent calibration step using only the outcome of interest to correct both types of biases. We show that the final estimator can achieve a lower estimation error than the one using only the single outcome of interest. Simulations and real data analysis are conducted to justify the superiority of the proposed method.
OpenAI's ChatGPT initiated a wave of technical iterations in the space of Large Language Models (LLMs) by demonstrating the capability and disruptive power of LLMs. OpenAI has prompted large organizations to respond with their own advancements and models to push the LLM performance envelope. OpenAI's success in spotlighting AI can be partially attributed to decreased barriers to entry, enabling any individual with an internet-enabled device to interact with LLMs. What was previously relegated to a few researchers and developers with necessary computing resources is now available to all. A desire to customize LLMs to better accommodate individual needs prompted OpenAI's creation of the GPT Store, a central platform where users can create and share custom GPT models. Customization comes in the form of prompt-tuning, analysis of reference resources, browsing, and external API interactions, alongside a promise of revenue sharing for created custom GPTs. In this work, we peer into the window of the GPT Store and measure its impact. Our analysis constitutes a large-scale overview of the store exploring community perception, GPT details, and the GPT authors, in addition to a deep-dive into a 3rd party storefront indexing user-submitted GPTs, exploring if creators seek to monetize their creations in the absence of OpenAI's revenue sharing.
We introduce a switching mechanism in the asymptotic occupations of quantum states induced by the combined effects of a periodic driving and a weak coupling to a heat bath. It exploits one of the ubiquitous avoided crossings in driven systems and works even if both involved Floquet states have small occupations. It is independent of the initial state and the duration of the driving. As a specific example of this general switching mechanism we show how an asymmetric double well potential can be switched between the lower and the upper well by a periodic driving that is much weaker than the asymmetry.
In this paper a new method for information hiding in club music is introduced. The method, called StegIbiza, is based on using the music tempo as a carrier. The tempo is modulated by hidden messages with a 3-value coding scheme, which is an adaptation of Morse code for StegIbiza. The evaluation of the system was performed on several music samples (with and without StegIbiza enabled) with a selected group of testers who had a music background. Even in the worst-case scenario, none of them could identify any differences in the audio when the tempo was changed by as much as 1%.
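A schematic Python encoder/decoder for the tempo-as-carrier idea (the symbol alphabet, tempo step, and per-symbol segmentation are assumptions, not the published StegIbiza specification): each symbol of a Morse-like 3-value code nudges the nominal tempo up, down, or not at all, staying inside the roughly 1% band reported as imperceptible.

```python
NOMINAL_BPM = 125.0
STEP = 0.005                      # 0.5% tempo offset per symbol, under the 1% bound

# Morse-like 3-value scheme: dot, dash, separator (assumed toy alphabet)
MORSE = {'s': '...', 'o': '---'}  # real Morse covers all letters
SYMBOL_TO_OFFSET = {'.': -STEP, '-': +STEP, '/': 0.0}

def encode(message):
    """Return one tempo value (BPM) per transmitted symbol."""
    symbols = '/'.join(MORSE[c] for c in message)
    return [NOMINAL_BPM * (1.0 + SYMBOL_TO_OFFSET[s]) for s in symbols]

def decode(tempi, tol=1e-6):
    out = []
    for bpm in tempi:
        rel = bpm / NOMINAL_BPM - 1.0
        out.append('.' if rel < -tol else '-' if rel > tol else '/')
    return ''.join(out)

tempi = encode('sos')
print([round(t, 3) for t in tempi])
print(decode(tempi))              # ".../---/..."
```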
In the era of the fourth industrial revolution, it is essential to automate fault detection and diagnosis of machinery, so that a warning system can be developed that helps take appropriate action before any catastrophic damage. Some machine health monitoring systems are used globally, but they are expensive and require trained personnel to operate and analyse. Predictive maintenance and an occupational health and safety culture are not available in developing countries due to inadequate infrastructure, lack of skilled manpower, financial constraints, and other factors. Starting from the development of a cost-effective DAS for collecting fault data, this study investigates the effect of limited data and resources on automating the process. To solve this problem, a feature engineering and data reduction method has been developed, combining concepts from wavelets, differential calculus, and signal processing. Finally, to automate the whole process, all the necessary theoretical and practical considerations for developing a predictive model have been proposed. The DAS successfully collected the required data from the machine with 89% accuracy compared to the professional manual monitoring system. SVM and NN were proposed for prediction because of their high accuracy: greater than 95% during training and 100% when testing new samples. In this study, the combination of simple algorithms with a rule-based system, instead of a data-intensive system, proved to be an effective hybrid approach when validated with the collected data. The outcome of this research can be applied immediately in small and medium-sized industries for finding other issues and developing solutions accordingly. As one of the foundational studies in automatic FDD, the findings and procedure of this study can lead others to extend, generalize, or add other dimensions to FDD automation.
We analyze the ultimate quantum limit of resolving two identical sources in a noisy environment. We prove that in the presence of noise causing false excitation, such as thermal noise, the quantum Fisher information of arbitrary quantum states for the separation of the objects, which quantifies the resolution, always converges to zero as the separation goes to zero. Noisy cases contrast with the noiseless case, where it has been shown to be nonzero for small distances in various circumstances, revealing superresolution. In addition, we show that false excitation on an arbitrary measurement, such as dark counts, also makes the classical Fisher information of the measurement approach zero as the separation goes to zero. Finally, a practically relevant situation, resolving two identical thermal sources, is quantitatively investigated by using the quantum and classical Fisher information of finite spatial mode multiplexing, showing that the amount of noise poses a limit on the resolution in a noisy system.
Federated Learning (FL) has emerged as a potent framework for training models across distributed data sources while maintaining data privacy. Nevertheless, it faces challenges with limited high-quality labels and non-IID client data, particularly in applications like autonomous driving. To address these hurdles, we navigate the uncharted waters of Semi-Supervised Federated Object Detection (SSFOD). We present a pioneering SSFOD framework, designed for scenarios where labeled data reside only at the server while clients possess unlabeled data. Notably, our method represents the inaugural implementation of SSFOD for clients with 0% labeled non-IID data, a stark contrast to previous studies that maintain some subset of labels at each client. We propose FedSTO, a two-stage strategy encompassing Selective Training followed by Orthogonally enhanced full-parameter training, to effectively address data shift (e.g. weather conditions) between server and clients. Our contributions include selectively refining the backbone of the detector to avert overfitting, orthogonality regularization to boost representation divergence, and local EMA-driven pseudo label assignment to yield high-quality pseudo labels. Extensive validation on prominent autonomous driving datasets (BDD100K, Cityscapes, and SODA10M) attests to the efficacy of our approach, demonstrating state-of-the-art results. Remarkably, FedSTO, using just 20-30% of labels, performs nearly as well as fully-supervised centralized training methods.
In the last decade, there has been a growing realization that the current Internet Protocol is reaching the limits of its senescence. This has prompted several research efforts that aim to design potential next-generation Internet architectures. Named Data Networking (NDN), an instantiation of the content-centric approach to networking, is one such effort. In contrast with IP, NDN routers maintain a significant amount of user-driven state. In this paper we investigate how to use this state for covert ephemeral communication (CEC). CEC allows two or more parties to covertly exchange ephemeral messages, i.e., messages that become unavailable after a certain amount of time. Our techniques rely only on network-layer, rather than application-layer, services. This makes our protocols robust, and communication difficult to uncover. We show that users can build high-bandwidth CECs exploiting features unique to NDN: in-network caches, routers' forwarding state and name matching rules. We assess the feasibility and performance of the proposed covert channels using a local setup and the official NDN testbed.
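A toy simulation of the cache-based channel (pure Python; real NDN naming, forwarding state, and timing are far richer than this dict-based stand-in): the sender pre-fetches the content named for bit position i exactly when bit i is 1, and the receiver decodes by observing whether its own later requests are answered from the router's content store (fast) or from upstream (slow).

```python
UPSTREAM_RTT, CACHE_RTT = 50.0, 2.0   # illustrative latencies in ms

class Router:
    def __init__(self):
        self.cache = set()
    def fetch(self, name):
        if name in self.cache:
            return CACHE_RTT          # content store hit
        self.cache.add(name)          # fetched upstream, now cached
        return UPSTREAM_RTT

def send(router, bits, prefix="/covert/"):
    for i, b in enumerate(bits):      # sender warms the cache for 1-bits only
        if b == 1:
            router.fetch(f"{prefix}{i}")

def receive(router, n, prefix="/covert/", threshold=10.0):
    return [1 if router.fetch(f"{prefix}{i}") < threshold else 0
            for i in range(n)]

router = Router()
message = [1, 0, 1, 1, 0, 0, 1]
send(router, message)
print(receive(router, len(message)) == message)   # True
```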
In a recent paper we proposed the expansion of the space of variations in energy calculations by considering the approximate wave function $\psi$ to be a functional of functions $\chi$: $\psi = \psi[\chi]$, rather than a function. For the determination of such a wave function functional, a constrained search is first performed over the subspace of all functions $\chi$ such that $\psi[\chi]$ satisfies a physical constraint or leads to the known value of an observable. A rigorous upper bound to the energy is then obtained by application of the variational principle. To demonstrate the advantages of the expansion of variational space, we apply the constrained-search--variational method to the ground state of the negative ion of atomic hydrogen, the helium atom, and its isoelectronic sequence. The method is equally applicable to excited states, and its extension to such states in conjunction with the theorem of Theophilou is also described.
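For contrast with the functional expansion, the textbook single-parameter variational calculation for helium-like ions is sketched below (a standard classroom result, not the constrained-search--variational method itself): with a product of scaled 1s orbitals the energy in hartrees is E(zeta) = zeta^2 - 2 Z zeta + (5/8) zeta, every trial value is a rigorous upper bound, and the minimum sits at zeta = Z - 5/16.

```python
Z = 2.0                                # helium nuclear charge

def E(zeta, Z=Z):
    # expectation value of H for the trial wave function exp(-zeta r1) exp(-zeta r2)
    return zeta ** 2 - 2.0 * Z * zeta + 0.625 * zeta

zeta_opt = Z - 5.0 / 16.0              # from dE/dzeta = 0
print("optimal zeta :", zeta_opt)      # 1.6875
print("E upper bound: %.4f Ha" % E(zeta_opt))   # -2.8477, vs. exact -2.9037
```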
Measurements of the Hall effect and the resistivity in twinned YBa2Cu3O7-delta thin films in magnetic fields B oriented parallel to the crystallographic c-axis and to the twin boundaries reveal a double sign reversal of the Hall coefficient for B below 1 T. In high transport current densities, or with B tilted off the twin boundaries by 5 degrees, the second sign reversal vanishes. The power-law scaling of the Hall conductivity to the longitudinal conductivity in the mixed state is strongly modified in the regime of the second sign reversal. Our observations are interpreted as strong, disorder-type dependent vortex pinning and confirm that the Hall conductivity in high temperature superconductors is not independent of pinning.
We prove that the number of oscillating tableaux of length $n$ with at most $k$ columns, starting at $\emptyset$ and ending at the one-column shape $(1^m)$, is equal to the number of standard Young tableaux of size~$n$ with $m$ columns of odd length, all columns of length at most $2k$. This refines a conjecture of Burrill, which it thereby establishes. We prove as well a "Knuth-type" extension stating a similar equi-enumeration result between generalised oscillating tableaux and semistandard tableaux.
This paper concerns the physical behaviors of any solutions to the one dimensional compressible Navier-Stokes equations for viscous and heat conductive gases with constant viscosities and heat conductivity for fast decaying density at far fields only. First, it is shown that the specific entropy becomes not uniformly bounded immediately after the initial time, as long as the initial density decays to vacuum at the far field at the rate not slower than $O\left(\frac1{|x|^{\ell_\rho}}\right)$ with $\ell_\rho>2$. Furthermore, for faster decaying initial density, i.e., $\ell_\rho\geq4$, a sharper result is discovered that the absolute temperature becomes uniformly positive at each positive time, no matter whether it is uniformly positive or not initially, and consequently the corresponding entropy behaves as $O(-\log(\varrho_0(x)))$ at each positive time, independent of the boundedness of the initial entropy. Such phenomena are in sharp contrast to the case with slowly decaying initial density of the rate no faster than $O(\frac1{x^2})$, for which our previous works \cite{LIXINADV,LIXINCPAM,LIXIN3DK} show that the uniform boundedness of the entropy can be propagated for all positive time and thus the temperature decays to zero at the far field. These give a complete answer to the problem concerning the propagation of uniform boundedness of the entropy for the heat conductive ideal gases and, in particular, show that the algebraic decay rate $2$ of the initial density at the far field is sharp for the uniform boundedness of the entropy. The tools to prove our main results are based on some scaling transforms, including the Kelvin transform, and a Hopf type lemma for a class of degenerate equations with possible unbounded coefficients.
Ultrarelativistic heavy-ion collisions in the laboratory provide a unique opportunity to study quantum chromodynamics (QCD) under extreme temperature (${\approx}150\,\mathrm{MeV}$) and density (${\approx}1\,\mathrm{GeV}/\mathrm{fm}^3$) conditions. Over the past decade, experimental results from the LHC have shown further evidence for the formation of the quark-gluon plasma (QGP), a phase thought to have permeated the early Universe and to be formed in high-density neutron-star cores. Various QCD predictions that model the behavior of the low-$x$ nuclear gluon density, a poorly explored region, are also tested. Since the photon flux per ion scales as the square of the emitting electric charge, $Z^2$, cross sections of so-far-elusive photon-induced processes are greatly enhanced compared to nucleon-nucleon collisions. Here, we review recent progress on CMS measurements of particle production at large transverse momentum or mass, photon-initiated processes, jet-induced medium response, and heavy-quark production. These high-precision data, together with novel approaches, offer stringent constraints on the initial state, QGP formation and transport parameters, and even parametrizations beyond the standard model.
We calculate the spectrum of normal scalar waves in a planar waveguide with absolutely soft, randomly rough boundaries beyond perturbation theory in the roughness heights and slopes, based on the exact boundary scattering potential. The spectrum is proved to be a nearly real, non-analytic function of the dispersion $\zeta^2$ of the roughness heights (with a square-root singularity) as $\zeta^2 \to 0$. The opposite case of large boundary defects is also summarized.
Regular nutrient intake monitoring in hospitalised patients plays a critical role in reducing the risk of disease-related malnutrition (DRM). Although several methods to estimate nutrient intake have been developed, there is still a clear demand for a more reliable and fully automated technique, as this could improve data accuracy and reduce both the participant burden and health costs. In this paper, we propose a novel system based on artificial intelligence to accurately estimate nutrient intake, by simply processing RGB-depth image pairs captured before and after meal consumption. For the development and evaluation of the system, a dedicated new database of images and recipes of 322 meals was assembled, coupled with data annotation using innovative strategies. With this database, a system was developed that employs a novel multi-task neural network and an algorithm for 3D surface construction. This allows sequential semantic food segmentation and estimation of the volume of the consumed food, and permits fully automatic estimation of nutrient intake for each food type with a 15% estimation error.
Accurate QED calculation of transition probabilities for the low-lying two-electron configurations of multicharged ions is presented. The calculation is performed for the nondegenerate states $(1s2s)\,{}^3S_1$ and $(1s2p_{3/2})\,{}^3P_2$ ($M1$ and $M2$ transitions, respectively) and for the quasidegenerate states $(1s2p)\,{}^1P_1$ and $(1s2p)\,{}^3P_1$ ($E1$ transitions) decaying to the ground state $(1s1s)\,{}^1S_0$. Two-electron ions with nuclear charge numbers $Z=10$--$92$ are considered. The line profile approach is employed for the description of the process in multicharged ions within the framework of QED.
It is likely that electricity storage will play a significant role in balancing future energy systems. A major challenge is how to assess the contribution of storage to capacity adequacy, i.e., to the ability of such systems to meet demand. This requires an understanding of how to optimally schedule multiple storage facilities. The present paper studies this problem in the cases where the objective is the minimisation of expected energy unserved (EEU), and also of a form of weighted EEU in which the unit cost of unserved energy is higher at higher levels of unmet demand. We also study how the contributions of individual stores may be identified for the purposes of their inclusion in electricity capacity markets.
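For concreteness, EEU for a single store under a given dispatch policy can be estimated by Monte Carlo as below. This is a minimal sketch with an assumed greedy discharge policy, lossless charging, and an assumed initially full store; it is not the optimal multi-store schedule studied in the paper:

```python
import numpy as np

def energy_unserved(net_demand, capacity, power):
    """Unserved energy for one net-demand trace (demand minus generation),
    served by a single store under a greedy discharge-when-short policy."""
    level, shortfall = capacity, 0.0   # assumption: the store starts full
    for d in net_demand:
        if d > 0:                       # shortfall: discharge as much as possible
            discharge = min(d, power, level)
            level -= discharge
            shortfall += d - discharge
        else:                           # surplus: recharge (lossless, for simplicity)
            level += min(-d, power, capacity - level)
    return shortfall

# EEU as a Monte Carlo average over random net-demand scenarios
rng = np.random.default_rng(0)
scenarios = rng.normal(0.0, 1.0, size=(10_000, 24))
eeu = np.mean([energy_unserved(s, capacity=3.0, power=1.0) for s in scenarios])
print(f"estimated EEU: {eeu:.3f}")
```

For the weighted EEU variant, the linear accumulation of `shortfall` would be replaced by a convex per-period cost of the unmet demand.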
Many supermassive black holes (SMBH) of mass $10^{6\sim9}M_{\odot}$ are observed at the center of each galaxy even in the high redshift ($z\approx7$) Universe. To explain the early formation and the common existence of SMBH, we proposed previously the SMBH formation scenario by the gravitational collapse of the coherent dark matter (DM) composed from the Bose-Einstein Condensed (BEC) objects. A difficult problem in this scenario is the inevitable angular momentum which prevents the collapse of BEC. To overcome this difficulty, in this paper, we consider the very early Universe when the BEC-DM acquires its proper angular momentum by the tidal torque mechanism. The balance of the density evolution and the acquisition of the angular momentum determines the mass of the SMBH as well as the mass ratio of BH and the surrounding dark halo (DH). This ratio turns out to be $M_{BH}/M_{DH}\approx10^{-3\sim-5}(M_{tot}/10^{12}\mathrm{M}_{\odot})^{-1/2}$ assuming simple density profiles of the initial DM cloud. This estimate turns out to be consistent with the observations at $z\approx0$ and $z\approx6$, although the data scatter is large. Thus the angular momentum determines the separation of black and dark, \textsl{i.e. }SMBH and DH, in the original DM cloud.
We address the problem of joint sparsity pattern recovery based on low-dimensional multiple measurement vectors (MMVs) in resource-constrained distributed networks. We assume that distributed nodes observe sparse signals that share the same sparsity pattern, and that each node obtains measurements via a low-dimensional linear operator. When the measurements are collected at distributed nodes in a communication network, it is often required that joint sparse recovery be performed under inherent resource constraints such as communication bandwidth and transmit/processing power. We present two approaches that take the communication constraints into account while performing common sparsity pattern recovery. First, we explore the use of a shared multiple access channel (MAC) for forwarding observations residing at each node to a fusion center. With a MAC, while the bandwidth requirement does not depend on the number of nodes, the fusion center has access only to a linear combination of the observations. We discuss the conditions under which the common sparsity pattern can be estimated reliably. Second, we develop two collaborative algorithms based on Orthogonal Matching Pursuit (OMP) to jointly estimate the common sparsity pattern in a decentralized manner with low communication overhead. In the proposed algorithms, each node exploits collaboration among neighboring nodes by sharing a small amount of information for fusion at different stages of estimating the indices of the true support in a greedy manner. The efficiency and effectiveness of the proposed algorithms are demonstrated via simulations, along with a comparison to the most closely related existing algorithms, considering the trade-off between performance gain and communication overhead.
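As a point of reference for the greedy support estimation that such algorithms build on, here is a minimal centralized simultaneous-OMP sketch (our baseline implementation, not the paper's decentralized algorithms, which distribute the correlation and fusion steps across nodes):

```python
import numpy as np

def somp(Y, A, k):
    """Simultaneous OMP: estimate the common k-sparse support of the columns
    of X from Y = A @ X, where all columns of X share the same support."""
    residual = Y.copy()
    support = []
    for _ in range(k):
        # aggregate correlations across all measurement vectors
        scores = np.linalg.norm(A.T @ residual, axis=1)
        scores[support] = -np.inf            # do not pick an index twice
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        # project Y onto the span of the chosen atoms and update the residual
        X_hat, *_ = np.linalg.lstsq(As, Y, rcond=None)
        residual = Y - As @ X_hat
    return sorted(support)

# tiny demo: 3 nodes' signals with a common support
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 50))
true_support = [3, 17, 42]
X = np.zeros((50, 3))
X[true_support, :] = rng.normal(size=(3, 3))
print(somp(A @ X, A, k=3), "vs", true_support)
```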
In this review, we present a self-contained introduction to axion-like particles (ALPs) with a particular focus on their effects on photon polarization: both theoretical and phenomenological aspects are discussed. We derive the photon survival probability in the presence of photon--ALP interaction, the corresponding final photon degree of linear polarization, and the polarization angle over a wide energy interval. The presented results can be tested by current and planned missions such as IXPE (already operative), eXTP, XL-Calibur, NGXP, and XPP in the X-ray band, and COSI (approved for launch), e-ASTROGAM, and AMEGO in the high-energy range. Specifically, we describe ALP-induced polarization effects on several astrophysical sources, such as galaxy clusters, blazars, and gamma-ray bursts, and we discuss their actual detectability. In particular, galaxy clusters appear as very good observational targets in this respect. Moreover, in the very-high-energy (VHE) band, we discuss a peculiar ALP signature in photon polarization, in principle capable of proving the existence of ALPs. Unfortunately, present technologies cannot detect photon polarization up to such high energies, but the observation of the latter ALP signature in the VHE band could represent an interesting challenge for the future. Indeed, the aim of this review is to show new ways to make progress in the physics of ALPs through their effects on photon polarization, a topic that has attracted less interest in the past but is now timely with the advent of many new polarimetric missions.
In this study, we address the off-road traversability estimation problem, which predicts the areas where a robot can navigate in off-road environments. An off-road environment is an unstructured environment comprising a combination of traversable and non-traversable spaces, which presents a challenge for estimating traversability. This study highlights three primary factors that affect a robot's traversability in an off-road environment: surface slope, semantic information, and the robot platform. We present two strategies for estimating traversability, using a guide filter network (GFN) and a footprint supervision module (FSM). The first strategy involves building a novel GFN using a newly designed guide filter layer. The GFN interprets the surface and semantic information from the input data and integrates them to extract features optimized for traversability estimation. The second strategy involves developing an FSM, a self-supervision module that utilizes the path traversed by the robot in pre-driving, also known as a footprint. This enables the prediction of traversability that reflects the characteristics of the robot platform. Based on these two strategies, the proposed method overcomes the limitations of existing methods, which require laborious human supervision and lack scalability. Extensive experiments in diverse conditions, spanning platforms (automobiles and unmanned ground vehicles) and terrains (herbfields, woodlands, and farmlands), demonstrate that the proposed method is compatible with various robot platforms and adaptable to a range of terrains. Code is available at https://github.com/yurimjeon1892/FtFoot.
The interplay between band topology and magnetic order can generate a variety of time-reversal-breaking gapped topological phases with exotic topological quantization phenomena, such as quantum anomalous Hall (QAH) insulators and axion insulators (AxIs). Here, by combining analytic models and first-principles calculations, we predict that QAH and AxI phases can be realized in thin films of the intrinsic antiferromagnetic van der Waals material Mn$_2$Bi$_2$Te$_5$. The phase transition between the QAH and AxI phases is tuned by the layer magnetization, which provides a promising platform for chiral superconducting phases. We further present a simple and unified continuum model that captures the magnetic topological features and is generic for the Mn$_2$Bi$_2$Te$_5$ and MnBi$_2$Te$_4$ family of materials.
Let $\omega$ be an unbounded radial weight on $\mathbb{C}^d$, $d\ge 1$. Using results related to the approximation of $\omega$ by entire maps, we investigate Volterra-type and weighted composition operators defined on the growth space $\mathcal{A}^\omega(\mathbb{C}^d)$. Special attention is given to the operators defined on the growth Fock spaces.
We develop a fast and scalable computational framework to solve large-scale and high-dimensional Bayesian optimal experimental design problems. In particular, we consider the problem of optimal observation sensor placement for Bayesian inference of high-dimensional parameters governed by partial differential equations (PDEs), which is formulated as an optimization problem that seeks to maximize an expected information gain (EIG). Such optimization problems are particularly challenging due to the curse of dimensionality for high-dimensional parameters and the expensive solution of large-scale PDEs. To address these challenges, we exploit two essential properties of such problems: the low-rank structure of the Jacobian of the parameter-to-observable map, to extract the intrinsically low-dimensional data-informed subspace, and the high correlation of the approximate EIGs under a series of approximations, to reduce the number of PDE solves. We propose an efficient offline-online decomposition of the optimization problem: an offline stage that computes all the quantities requiring a limited number of PDE solves, independent of the parameter and data dimensions, and an online stage that optimizes the sensor placement without any PDE solves. For the online optimization, we propose a swapping greedy algorithm that first constructs an initial set of sensors using leverage scores and then swaps the chosen sensors with other candidates until certain convergence criteria are met. We demonstrate the efficiency and scalability of the proposed computational framework on a linear inverse problem of inferring the initial condition of an advection-diffusion equation, and on a nonlinear inverse problem of inferring the diffusion coefficient of a log-normal diffusion equation, with both the parameter and data dimensions ranging from a few tens to a few thousands.
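A minimal sketch of the online stage under a linearized Gaussian surrogate (our simplification: for a linear parameter-to-observable map with unit prior and noise covariances, the EIG reduces to $\frac12\log\det(I + J_S J_S^\top)$ for the selected rows $J_S$); the function names and the convergence rule are illustrative:

```python
import numpy as np

def gauss_eig(rows, J):
    """Linearized-Gaussian EIG surrogate for the sensor subset `rows`."""
    Js = J[rows, :]
    return 0.5 * np.linalg.slogdet(np.eye(len(rows)) + Js @ Js.T)[1]

def place_sensors(J, n_sensors, max_sweeps=5):
    """Leverage-score initialization followed by a swapping greedy search."""
    U, _, _ = np.linalg.svd(J, full_matrices=False)
    leverage = np.sum(U**2, axis=1)                  # row leverage scores of J
    chosen = list(np.argsort(leverage)[-n_sensors:])
    best = gauss_eig(chosen, J)
    for _ in range(max_sweeps):
        improved = False
        for i in range(n_sensors):
            for cand in range(J.shape[0]):
                if cand in chosen:
                    continue
                trial = chosen[:i] + [cand] + chosen[i + 1:]
                val = gauss_eig(trial, J)
                if val > best:
                    chosen, best, improved = trial, val, True
        if not improved:   # stop: a full sweep produced no beneficial swap
            break
    return sorted(int(c) for c in chosen), best

rng = np.random.default_rng(3)
J = rng.normal(size=(200, 20)) @ np.diag(1.0 / (1 + np.arange(20.0)))  # low-rank-ish Jacobian
print(place_sensors(J, n_sensors=5))
```

In the framework described above, the offline stage would precompute whatever makes `gauss_eig` cheap to evaluate, so that the swapping sweeps require no PDE solves.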
Rank-width is a width parameter of graphs describing whether it is possible to decompose a graph into a tree-like structure by `simple' cuts. This survey aims to summarize known algorithmic and structural results on rank-width of graphs.
Arrays of metallic particles patterned on a substrate have emerged as a promising design for on-chip plasmonic lasers. In past examples of such devices, the periodic particles provided feedback at a single resonance wavelength, and organic dye molecules were used as the gain material. Here, we introduce a flexible template-based fabrication method that allows a broader design space for Ag particle-array lasers. Instead of dye molecules, we integrate colloidal quantum dots (QDs), which offer better photostability and wavelength tunability. Our fabrication approach also allows us to easily adjust the refractive index of the substrate and the QD-film thickness. Exploiting these capabilities, we demonstrate not only single-wavelength lasing but dual-wavelength lasing via two distinct strategies. First, by using particle arrays with rectangular lattice symmetries, we obtain feedback from two orthogonal directions. The two output wavelengths from this laser can be selected individually using a linear polarizer. Second, by adjusting the QD-film thickness, we use higher-order transverse waveguide modes in the QD film to obtain dual-wavelength lasing at normal and off-normal angles from a symmetric square array. We thus show that our approach offers various design possibilities to tune the laser output.
The magnetic structure of the "nonmetallic metal" FeCrAs, a compound with the characters of both metals and insulators, was examined as a function of temperature using single-crystal neutron diffraction. The magnetic propagation vector was found to be $\mathit{k}$ = (1/3, 1/3, 0), and the magnetic reflections disappeared above $\mathit{T_{N}}$ = 116(1) K. In the ground state, the Cr sublattice shows an in-plane spiral antiferromagnetic order. The moment sizes of the Cr ions are small, owing to strong magnetic frustration in the distorted Kagome lattice or to the itinerant nature of the Cr magnetism, and vary between 0.8 and 1.4 $\mu_{B}$ on different sites, as expected for a spin-density-wave (SDW) type order. The upper limit of the moment on the Fe sublattice is estimated to be less than 0.1 $\mu_{B}$. With increasing temperature up to 95 K, the Cr moments gradually cant out of the $\mathit{ab}$ plane, with the in-plane components being suppressed and the out-of-plane components growing in contrast. This spin reorientation of the Cr moments explains the dip in the $\mathit{c}$-direction magnetic susceptibility and the kink in the magnetic order parameter at $\mathit{T_{O}}\sim100$ K, a second magnetic transition that was previously unexplained. We also discuss the similarity between FeCrAs and the model itinerant magnet Cr, which exhibits spin-flip transitions and SDW-type antiferromagnetism.
Data extraction and management are crucial components of research and clinical workflows in Radiation Oncology (RO), where accurate and comprehensive data are imperative to inform treatment planning and delivery. The advent of automated data mining scripts, particularly using the Python Environment for Scripting APIs (PyESAPI), has been a promising stride towards enhancing efficiency, accuracy, and reliability in extracting data from RO Information Systems (ROIS) and Treatment Planning Systems (TPS). This review dissects the role, efficiency, and challenges of implementing PyESAPI in RO data extraction and management, juxtaposing it with manual data extraction techniques and explicating future avenues.
We propose Stereo Direct Sparse Odometry (Stereo DSO) as a novel method for highly accurate real-time visual odometry estimation of large-scale environments from stereo cameras. It jointly optimizes all the model parameters within the active window, including the intrinsic/extrinsic camera parameters of all keyframes and the depth values of all selected pixels. In particular, we propose a novel approach to integrate constraints from static stereo into the bundle adjustment pipeline of temporal multi-view stereo. Real-time optimization is realized by sampling pixels uniformly from image regions with sufficient intensity gradient. Fixed-baseline stereo resolves scale drift. It also reduces the sensitivity to large optical flow and to the rolling shutter effect, which are known shortcomings of direct image alignment methods. Quantitative evaluation demonstrates that the proposed Stereo DSO outperforms existing state-of-the-art visual odometry methods both in terms of tracking accuracy and robustness. Moreover, our method delivers a more precise metric 3D reconstruction than previous dense/semi-dense direct approaches while providing a higher reconstruction density than feature-based methods.
Carbon nanostructures with zigzag edges exhibit unique properties with exciting potential applications. Such nanostructures are generally synthesized under vacuum because their zigzag edges are unstable under ambient conditions: a barrier that must be surmounted to achieve their scalable exploitation. Here, we prove the viability of chemical protection/deprotection strategies for this aim, demonstrated on labile chiral graphene nanoribbons (chGNRs). Upon hydrogenation, the chGNRs survive an exposure to air, after which they are easily converted back to their original structure via annealing. We also approach the problem from another angle by synthesizing a chemically stable oxidized form of the chGNRs that can be converted to the pristine hydrocarbon form via hydrogenation and annealing. These findings may represent an important step toward the integration of zigzag-edged nanostructures in devices.
The collective operation of robots, such as unmanned aerial vehicles (UAVs) operating as a team or swarm, is affected by their individual capabilities, which in turn depend on their physical design, i.e., their morphology. However, with the exception of a few (albeit ad hoc) evolutionary robotics methods, there has been very little work on understanding the interplay of morphology and collective behavior. In particular, there is a lack of computational frameworks to concurrently search for the robot morphology and the hyper-parameters of their behavior model that jointly optimize the collective (team) performance. To address this gap, this paper proposes a new co-design framework. Here, the exploding computational cost of an otherwise nested morphology/behavior co-design is effectively alleviated through the novel concept of ``talent'' metrics, while also allowing significantly better solutions compared to the typically sub-optimal sequential morphology$\to$behavior design approach. This framework comprises four major steps: talent metrics selection, talent Pareto exploration (a multi-objective morphology optimization process), behavior optimization, and morphology finalization. The co-design concept is demonstrated by applying it to design UAVs that operate as a team to localize signal sources, e.g., in victim search and hazard localization. Here, the collective behavior is driven by a recently reported batch Bayesian search algorithm called Bayes-Swarm. Our case studies show that the outcome of co-design provides significantly higher success rates in signal source localization compared to a baseline design, across a variety of signal environments and teams with 6 to 15 UAVs. Moreover, the co-design process provides a two-orders-of-magnitude reduction in computing time compared to a projected nested design approach.
In materials without spatial inversion symmetry, the spin degeneracy of the conduction electrons can be lifted by an antisymmetric spin-orbit coupling. We discuss the influence of this spin-orbit coupling on the spin susceptibility of such superconductors, with a particular emphasis on the recently discovered heavy-fermion superconductor CePt3Si. We find that, for this compound (with tetragonal crystal symmetry), irrespective of the pairing symmetry, the stable superconducting phases would give a very weak change of the spin susceptibility for fields along the c-axis and an intermediate reduction for fields in the basal plane. We also comment on the consequences for paramagnetic limiting in this material.
The validation of any database mining methodology goes through an evaluation process in which the availability of benchmarks is essential. In this paper, we aim to randomly generate relational database benchmarks that allow one to check probabilistic dependencies among the attributes. We are particularly interested in Probabilistic Relational Models (PRMs), which extend Bayesian Networks (BNs) to a relational data mining context and enable effective and robust reasoning over relational data. Even though a panoply of works have focused, separately, on the generation of random Bayesian networks and of random relational databases, no work has been identified that does so for PRMs. This paper provides an algorithmic approach for generating random PRMs from scratch to fill this gap. The proposed method allows one to generate PRMs as well as synthetic relational data from a randomly generated relational schema and a random set of probabilistic dependencies. This can be of interest not only for machine learning researchers, to evaluate their proposals in a common framework, but also for database designers, to evaluate the effectiveness of the components of a database management system.
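One generic building block for such a generator is a random directed acyclic graph over attributes, on which conditional probability tables can then be sampled. The sketch below is our own recipe, not necessarily the paper's exact algorithm; it guarantees acyclicity by sampling edges forward along a random topological order:

```python
import numpy as np

def random_dag(n_attrs, p_edge, seed=0):
    """Random DAG over attribute indices 0..n_attrs-1: draw a random
    topological order, then keep each forward edge with probability p_edge."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_attrs)
    return [(int(order[i]), int(order[j]))
            for i in range(n_attrs)
            for j in range(i + 1, n_attrs)
            if rng.random() < p_edge]

print(random_dag(6, p_edge=0.4))  # one random dependency structure
```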
In 2007, Terence Tao wrote on his blog an essay about soft analysis, hard analysis, and the finitization of soft analysis statements into hard analysis statements. One of his main examples was a quasi-finitization of the infinite pigeonhole principle IPP, arriving at the "finitary" infinite pigeonhole principle FIPP1. That turned out not to be the proper formulation, and so we proposed an alternative version FIPP2. Tao himself formulated yet another version, FIPP3, in a revised version of his essay. We give a counterexample to FIPP1 and discuss, for both FIPP2 and FIPP3, the faithfulness of their respective finitizations of IPP by studying the equivalences IPP <-> FIPP2 and IPP <-> FIPP3 in the context of reverse mathematics. In the process we also introduce a continuous uniform boundedness principle CUB as a formalization of Tao's notion of a correspondence principle, and study the strength of this principle and of various restrictions thereof in terms of reverse mathematics, i.e., in terms of the "big five" subsystems of second-order arithmetic.
"KPipe" is a proposed experiment which will study muon neutrino disappearance for a sensitive test of the $\Delta m^2\sim1 \mathrm{eV}^2$ anomalies, possibly indicative of one or more sterile neutrinos. The experiment is to be located at the J-PARC Materials and Life Science Facility's spallation neutron source, which represents the world's most intense source of charged kaon decay-at-rest monoenergetic (236 MeV) muon neutrinos. The detector vessel, designed to measure the charged current interactions of these neutrinos, will be 3 m in diameter and 120 m long, extending radially at a distance of 32 m to 152 m from the source. This design allows a sensitive search for $\nu_\mu$ disappearance associated with currently favored light sterile neutrino models and features the ability to reconstruct the neutrino oscillation wave within a single, extended detector. The required detector design, technology, and costs are modest. The KPipe measurements will be robust since they depend on a known energy neutrino source with low expected backgrounds. Further, since the measurements rely only on the measured rate of detected events as a function of distance, with no required knowledge of the initial flux and neutrino interaction cross section, the results will be largely free of systematic errors. The experimental sensitivity to oscillations, based on a shape-only analysis of the $L/E$ distribution, will extend an order of magnitude beyond present experimental limits in the relevant high-$\Delta m^2$ parameter space.
Recent experimental data on the $\Upsilon(4S)\to\Upsilon(1S)\eta$ and $\Upsilon(4S)\to h_{b}(1P)\eta$ processes seem to contradict the naive expectation that hadronic transitions with spin-flipping terms should be suppressed with respect to those without spin flip. We analyze these transitions using the QCD Multipole Expansion (QCDME) approach, within a constituent quark model framework that has been applied successfully to the heavy-quark sectors in recent years. The QCDME formalism requires the computation of hybrid intermediate states, which has been performed in a natural, parameter-free extension of our constituent quark model based on the Quark Confining String (QCS) scheme. We show that (i) the M1-M1 contribution to the decay rate of $\Upsilon(4S)\to\Upsilon(1S)\eta$ is important and its suppression until now is not justified; (ii) the role played by the $L=0$ hybrid states, which enter the calculation of the M1-M1 contribution, explains the enhancement of the $\Upsilon(4S)\to\Upsilon(1S)\eta$ decay rate; and (iii) the anomalously large decay rate of the $\Upsilon(4S)\to h_{b}(1P)\eta$ process has the same physical origin.
In this article we focus on a general model of random walk on random marked trees. We prove a recurrence criterion analogous to the one proved by R. Lyons and R. Pemantle (1992) in a slightly different model. In the critical case, we obtain a criterion for positive versus null recurrence. Several regimes appear, as proved (in a similar model) by Y. Hu and Z. Shi (2007). We focus on the "diffusive" regime and improve their result in this case by obtaining a functional central limit theorem. Our result also extends a result of Y. Peres and O. Zeitouni (2008), obtained in the setting of biased random walks on Galton-Watson trees.
We provide a fast method for computing constraints on impactor pre-impact orbits, applying this to the late giant impacts in the Solar System. These constraints can be used to make quick, broad comparisons of different collision scenarios, identifying some immediately as low-probability events, and narrowing the parameter space in which to target follow-up studies with expensive N-body simulations. We benchmark our parameter space predictions, finding good agreement with existing N-body studies for the Moon. We suggest that high-velocity impact scenarios in the inner Solar System, including all currently proposed single impact scenarios for the formation of Mercury, should be disfavoured. This leaves a multiple hit-and-run scenario as the most probable currently proposed for the formation of Mercury.
Ground-based laser interferometric gravitational-wave detectors consist of complex multiple optical cavity systems. An arm-length stabilization (ALS) system has played an important role in bringing such complex detectors into an operational state and enhancing their duty cycle. The sensitivity of these detectors can be improved if the thermal noise of their test-mass mirror coatings is reduced. Crystalline AlGaAs coatings are a promising candidate for this. However, the traditional ALS system with frequency-doubled 532 nm light is no longer an option with AlGaAs coatings due to the narrow bandgap of GaAs; thus alternative locking schemes must be developed. In this letter, we describe an experimental demonstration of a novel ALS scheme which is compatible with AlGaAs coatings. This scheme will enable the use of AlGaAs coatings and contribute to the improved sensitivity of future detectors.
In this paper, we study genus-$0$ equivariant relative Gromov-Witten invariants of $\mathbb{P}^1$ whose corresponding relative stable maps are totally ramified over one point. For a fixed number of marked points, we show that such invariants are piecewise polynomial in some parameter space. The parameter space can then be divided into polynomial domains, called chambers. We determine the difference of polynomials between two neighboring chambers. In a special chamber, which we call the totally negative chamber, we show that such a polynomial can be expressed in a simple way. The chamber structure here shares some similarities with that of double Hurwitz numbers.
We study the problem of the existence of arithmetic progressions of three cubes over quadratic number fields Q(sqrt(D)), where D is a squarefree integer. For this purpose, we give a characterization in terms of Q(sqrt(D))-rational points on the elliptic curve E: y^2 = x^3 - 27. We compute the torsion subgroup of the Mordell-Weil group of this elliptic curve over Q(sqrt(D)) and give partial answers concerning the finiteness of the free part of E(Q(sqrt(D))). This last task translates into determining whether or not the rank of the quadratic D-twist of the modular curve X_0(36) is zero.
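While the paper's precise characterization is not reproduced here, the shape of such a correspondence is classical: three cubes $a^3, b^3, c^3$ in arithmetic progression satisfy $a^3 + c^3 = 2b^3$, and (for $a + c \neq 0$) the substitution
\[
u = \frac{6b}{a+c}, \qquad v = \frac{9(a-c)}{a+c}
\]
lands on the stated curve, since
\[
v^2 - u^3 = \frac{81(a-c)^2(a+c) - 216\,b^3}{(a+c)^3}
= \frac{81(a-c)^2(a+c) - 108(a^3+c^3)}{(a+c)^3} = \frac{-27(a+c)^3}{(a+c)^3} = -27 .
\]
Thus $(u,v)$ is a $\mathbb{Q}(\sqrt{D})$-rational point on $E: y^2 = x^3 - 27$ whenever $a, b, c \in \mathbb{Q}(\sqrt{D})$; the paper's normalization may of course differ from this sketch.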
In this paper, which is the sequel to arXiv:1410.3742, we study the Frobenius pushforward of the structure sheaf on the adjoint varieties in types ${\bf A}_3$ and ${\bf A}_4$. We show that this pushforward sheaf decomposes into a direct sum of indecomposable bundles and explicitly determine this set, which does not depend on the characteristic. In accordance with the results of arXiv:0707.0913, this set forms a strong full exceptional collection in the derived category of coherent sheaves. These computations lead to a natural conjectural answer in the general case, which we state at the end.
In this paper we are concerned with numerical methods for nonhomogeneous Helmholtz equations in inhomogeneous media. We design a least-squares method for the discretization of the considered Helmholtz equations. In this method, an auxiliary unknown is introduced on the common interface of any two neighboring elements, and a quadratic objective functional is defined by the jumps of the traces of the solutions of local Helmholtz equations across all the common interfaces, where the local Helmholtz equations are defined on single elements and are subject to Robin-type boundary conditions given by the auxiliary unknowns. A minimization problem with this functional is proposed to determine the auxiliary unknowns. The resulting discrete system for the auxiliary unknowns is Hermitian positive definite, so it can be solved by the PCG method. Under some assumptions we show that the resulting approximate solutions possess nearly optimal error estimates with little "wave number pollution". Moreover, we construct a substructuring preconditioner for the discrete system of the auxiliary unknowns. Numerical experiments show that the proposed methods are very effective for the tested Helmholtz equations with large wave numbers.
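Schematically, and in our own notation rather than the paper's, such a construction pairs local impedance problems with an interface least-squares functional: on each element $K$, solve
\[
-\Delta u_K - \kappa^2 u_K = f \ \text{in } K, \qquad
\partial_{n_K} u_K + \mathrm{i}\kappa\, u_K = \lambda_e \ \text{on each edge } e \subset \partial K,
\]
and determine the auxiliary unknowns $\lambda = (\lambda_e)$ by minimizing the squared trace jumps,
\[
J(\lambda) = \sum_{e = \partial K \cap \partial K'} \Big( \|u_K - u_{K'}\|_{0,e}^2 + \kappa^{-2}\,\|\partial_{n_K} u_K + \partial_{n_{K'}} u_{K'}\|_{0,e}^2 \Big).
\]
The specific choice of Robin trace and jump weights shown here is an assumption of this sketch; the paper's precise functional may differ.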
Foundation models are rapidly being developed for computational pathology applications. However, it remains an open question which factors are most important for downstream performance, with data scale and diversity, model size, and training algorithm all playing a role. In this work, we present the results of scaling both data and model size, surpassing previous studies in both dimensions, and introduce two new models: Virchow 2, a 632M-parameter vision transformer, and Virchow 2G, a 1.85B-parameter vision transformer, each trained with 3.1M histopathology whole slide images. To support this scale, we propose domain-inspired adaptations to the DINOv2 training algorithm, which is quickly becoming the default method in self-supervised learning for computational pathology. We achieve state-of-the-art performance on twelve tile-level tasks, as compared to the top-performing competing models. Our results suggest that data diversity and domain-specific training can outperform models that only scale the number of parameters, but, on average, performance benefits from domain tailoring, data scale, and model scale.
We present a full-program induction technique for proving (a sub-class of) quantified as well as quantifier-free properties of programs manipulating arrays of parametric size N. Instead of inducting over individual loops, our technique inducts over the entire program (possibly containing multiple loops) directly via the program parameter N. Significantly, this does not require generation or use of loop-specific invariants. We have developed a prototype tool Vajra to assess the efficacy of our technique. We demonstrate the performance of Vajra vis-a-vis several state-of-the-art tools on a set of array manipulating benchmarks.
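To illustrate the idea on a toy example (ours, not one of the paper's benchmarks): for a program P(N) that sums an array of size N, full-program induction checks a base case and an inductive step that relates P(N) to P(N-1) followed by a small "difference program", with no loop invariant required:

```python
# Program P(N): for i in range(N): s += A[i]
# Property phi(N): after P(N), s equals the sum of A[0..N-1].
#
# Base case: P(0) does nothing and phi(0) holds (s == 0).
# Inductive step: P(N) behaves as P(N-1) followed by the difference program
# `s += A[N-1]`; assuming phi(N-1), executing the difference re-establishes phi(N).

def P(A, N):
    s = 0
    for i in range(N):
        s += A[i]
    return s

def difference(s, A, N):      # P(N) = P(N-1) ; difference(N)
    return s + A[N - 1]

A = [3, 1, 4, 1, 5]
for N in range(1, len(A) + 1):
    assert P(A, N) == difference(P(A, N - 1), A, N)   # the inductive step, tested
```

A verifier such as the one described would discharge the base case and the inductive step symbolically for all N, rather than testing them for particular inputs as this sketch does.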
This paper rediscovers a classical homogenization result for a prototypical linear elliptic boundary value problem with periodically oscillating diffusion coefficient. Unlike classical analytical approaches such as asymptotic analysis, oscillating test functions, or two-scale convergence, the result is based purely on the theory of domain decomposition methods and standard finite element techniques. The arguments naturally generalize to problems far beyond periodicity and scale separation, and we provide a brief overview of such applications.
The gravity model, inspired by Newton's law of universal gravitation, has long served as a primary tool for interpreting trade flows between countries, using a country's economic `mass' as a key determinant. Despite its wide application, the definition of `mass' within this model remains ambiguous. It is often approximated using indicators like GDP, which may not accurately reflect a country's true trade potential. Here, we introduce a data-driven, self-consistent numerical approach that redefines `mass' from a static proxy to a dynamic attribute inferred directly from flow data. We infer mass distribution and interaction nature through our method, mirroring Newton's approach to understanding gravity. Our methodology accurately identifies predefined embeddings and reconstructs system attributes when applied to synthetic flow data, demonstrating its strong predictive power and adaptability. Further application to real-world trade networks yields critical insights, revealing the spatial spectrum of trade flows and the economic mass of countries, two key features unexplored in depth by existing models. Our methodology not only enables accurate reconstruction of the original flow but also allows for a deep understanding of the unique capabilities of each node within the network. This study marks a significant shift in the understanding and application of the gravity model, providing a more comprehensive tool for analyzing complex systems and uncovering new insights into various fields, including global trade, traffic engineering, epidemic disease prevention, and infrastructure design.
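A minimal sketch of what such a self-consistent inference can look like, under assumptions of our own (a symmetric multiplicative model $F_{ij} \approx m_i m_j f(D_{ij})$ with a known exponential deterrence kernel, and a damped fixed-point update); the actual procedure described above also infers the nature of the interaction itself rather than fixing the kernel:

```python
import numpy as np

def infer_masses(F, D, decay=1.0, iters=1000, tol=1e-12):
    """Fit node 'masses' m in F_ij ~ m_i * m_j * f(D_ij) from observed flows F.
    The exponential kernel and the damped update are assumptions of this sketch."""
    f = np.exp(-decay * D)
    np.fill_diagonal(f, 0.0)                 # ignore self-flows
    m = np.ones(F.shape[0])
    for _ in range(iters):
        # fixed point of m_i = (sum_j F_ij) / (sum_j f_ij m_j), geometrically damped
        m_new = np.sqrt(m * (F.sum(axis=1) / (f @ m)))
        if np.max(np.abs(m_new - m)) < tol:
            return m_new
        m = m_new
    return m

# reconstruction check on synthetic flows generated by the same model
rng = np.random.default_rng(2)
true_m = rng.uniform(1.0, 5.0, size=30)
D = rng.uniform(0.0, 3.0, size=(30, 30)); D = (D + D.T) / 2
F = np.outer(true_m, true_m) * np.exp(-D)
np.fill_diagonal(F, 0.0)
print(np.max(np.abs(infer_masses(F, D) - true_m)))   # ~0 up to numerical tolerance
```

Recovering predefined masses from synthetic flows, as in this toy check, mirrors the reconstruction experiments described in the abstract.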
In this paper we obtain a representation as martingale transform operators for the rearrangement and shift operators introduced by T. Figiel. The martingale transforms and the underlying sigma algebras are obtained explicitly by combinatorial means. The known norm estimates for those operators are a direct consequence of our representation.
A conducting 1D line or 2D plane inside (or on the surface of) an insulator is considered. Impurities displace the charges inside the insulator. This results in a long-range fluctuating electric field acting on the conducting line (plane). This field can be modeled by that of randomly distributed electric dipoles. This model provides a random correlated potential with $<U(r)U(r+k)>$ decaying as $1/k$. In the 1D case such correlations give essential corrections to the localization length but do not destroy Anderson localization.
We investigate diagonal forms of degree $d$ over the function field $F$ of a smooth projective $p$-adic curve: if a form is isotropic over the completion of $F$ with respect to each discrete valuation of $F$, then it is isotropic over certain fields $F_U$, $F_P$ and $F_p$. These fields appear naturally when applying the methodology of patching; $F$ is the inverse limit of the finite inverse system of fields $\{F_U,F_P,F_p\}$. Our observations complement some known bounds on the higher $u$-invariant of diagonal forms of degree $d$. We only consider diagonal forms of degree $d$ over fields of characteristic not dividing $d!$.