Highly obscured active galactic nuclei (AGN) are common in nearby galaxies, but are difficult to observe beyond the local Universe, where they are expected to contribute significantly to the black hole accretion rate density. Furthermore, Compton-thick (CT) absorbers (NH>10^24 cm^-2) suppress even the hard X-ray (2-10 keV) AGN nuclear emission, and therefore the column density distribution above 10^24 cm^-2 is largely unknown. We present the identification and multi-wavelength properties of a heavily obscured (NH>~10^25 cm^-2), intrinsically luminous (L(2-10keV)>10^44 erg s^-1) AGN at z=0.353 in the COSMOS field. Several independent indicators, such as the shape of the X-ray spectrum, the decomposition of the spectral energy distribution, and the X-ray/[NeV] and X-ray/6 μm luminosity ratios, agree that the nuclear emission must be suppressed by a 10^25 cm^-2 column density. The host galaxy properties show that this highly obscured AGN is hosted in a massive star-forming galaxy with a barred morphology, which is known to correlate with the presence of CT absorbers. Finally, asymmetric and blueshifted components in several optical high-ionization emission lines indicate the presence of a galactic outflow, possibly driven by the intense AGN activity (L(Bol)/L(Edd) = 0.3-0.5). Such highly obscured, highly accreting AGN are intrinsically very rare at low redshift, whereas they are expected to be much more common at the peak of the star formation and BH accretion history, at z~2-3. We demonstrate that a fully multi-wavelength approach can recover a sizable sample of such peculiar sources in large and deep surveys such as COSMOS.
A simplified Bogoliubov transform reduces a fully-interacting many-fermion spin-1/2 system-plus-environment to a more tractable many-to-one variant. The transform additionally yields exact solutions for bosonic multi-particle interactions sans the approximation introduced by using discrete time steps to deal with quantum parallelism. The decohering effect of relatively general finite environments is therewith modeled and compared to the decohering effect of an infinite environmental "bath." The anti-symmetric singlet state formed by two maximally-entangled two-state particles is shown to be inherently decoherence-free. As a quantum bit ("qubit") it is thus potentially superior to any single-particle state.
Previously we calculated the binding energies of the triton and hypertriton, using an SU(6) quark-model interaction derived from a resonating-group method of two baryon clusters. In contrast to the previous calculations employing the energy-dependent interaction kernel, we present new results using a renormalized interaction, which is now energy independent and preserves all the two-baryon data. The new binding energies are slightly smaller than the previous values. In particular, the triton binding energy turns out to be 8.14 MeV, including a 190 keV charge-dependence correction of the two-nucleon force. This indicates that about 350 keV is left to be accounted for by three-body forces.
We present a first-principles study of 180-degree ferroelectric domain walls in tetragonal barium titanate. The theory is based on an effective Hamiltonian that has previously been determined from first-principles ultrasoft-pseudopotential calculations. Statistical properties are investigated using Monte Carlo simulations. We compute the domain-wall energy, free energy, and thickness, analyze the behavior of the ferroelectric order parameter in the interior of the domain wall, and study its spatial fluctuations. An abrupt reversal of the polarization is found, unlike the gradual rotation typical of the ferromagnetic case.
We investigate pores in fluid membranes by molecular dynamics simulations of an amphiphile-solvent mixture, using a coarse-grained molecular model. The amphiphilic membranes self-assemble into a lamellar stack of amphiphilic bilayers separated by solvent layers. We focus on the particular case of tensionless membranes, in which pores appear spontaneously because of thermal fluctuations. Their spatial distribution is similar to that of a random set of repulsive hard discs. The size and shape distribution of individual pores can be described satisfactorily by a simple mesoscopic model, which accounts only for a pore-independent core energy and a line tension penalty at the pore edges. In particular, the pores are not circular: their shapes are fractal and have the same characteristics as those of two-dimensional ring polymers. Finally, we study the size-fluctuation dynamics of the pores, and compare the time evolution of their contour length to a random walk in a linear potential.
In this work, we study optimal transmit strategies for minimizing the positioning error bound in a line-of-sight scenario, under different levels of prior knowledge of the channel parameters. For the case of perfect prior knowledge, we prove that two beams are optimal, and determine their beam directions and optimal power allocation. For the imperfect prior knowledge case, we compute the optimal power allocation among the beams of a codebook for two different robustness-related objectives, namely average or maximum squared position error bound minimization. Our numerical results show that our low-complexity approach can outperform existing methods that entail higher signaling and computational overhead.
We derive the asymptotic properties of the mMKG system (Maxwell coupled with a massive Klein-Gordon scalar field) in the exterior of the domain of influence of a compact set. This complements the previous well-known results, restricted to compactly supported initial conditions, based on the so-called hyperboloidal method. That method takes advantage of the commutation properties of the Maxwell and Klein-Gordon equations with the generators of the Poincar\'e group to resolve the difficulties caused by the fact that they have, separately, different asymptotic properties. Though the hyperboloidal method is very robust and applies well to other related systems, it has the well-known drawback that it requires compactly supported data. In this paper we remove this limitation based on a further extension of the vector-field method adapted to the exterior region. Our method applies, in particular, to nontrivial charges. The full problem could then be treated by patching together the new estimates in the exterior with the hyperboloidal ones in the interior. The purely physical space approach introduced here maintains the robust properties of the old method and can thus be applied to other situations such as the coupled Einstein-Klein-Gordon equation.
Direct CP violation in the b->sgamma process is a sensitive probe of physics beyond the Standard Model. We report a measurement of the CP asymmetry in B -> X_s gamma, where the hadronic recoil system Xs is reconstructed using a pseudo-reconstruction technique. In this approach there is negligible contamination from b->dgamma decays, which are expected to have a much larger CP asymmetry. We find ACP = 0.002 +- 0.050(stat.) +- 0.030(syst.) for B -> Xs gamma events having recoil mass smaller than 2.1 GeV/c^2. The analysis is based on a data sample of 140 /fb recorded at the Upsilon(4S) resonance with the Belle detector at the KEKB e+e- storage ring.
We propose a line list that may be useful for the abundance analysis of G-type stars in the wavelength range 4080 -- 6780 A. The line list is expected to be useful for surveys/libraries with overlapping spectral regions (e.g. the ELODIE/SOPHIE libraries, the UVES-580 setting of Gaia-ESO) and, more generally, for the analysis of F- and G-type stars. The atomic data are supplemented by detailed references to the sources. We estimated the Solar abundances using stellar lines and the high-resolution Kitt Peak National Observatory (KPNO) spectra of the Sun to determine the uncertainty in the log gf values. By undertaking a systematic search that makes use of the lower excitation potentials and gf-values, with the revised multiplet table as an initial guide, we identified 363 lines of 24 species that have accurate gf-values and are free of blends in the spectra of the Sun and of a Solar analogue star, HD 218209 (G6V), for which accurate and up-to-date abundances were obtained from both ELODIE and PolarBASE spectra of the star. For the lines in common with the Gaia-ESO line list v.6 provided by the Gaia-ESO collaboration, we found significant inconsistencies in the gf-values for certain lines of various species.
A high statistics sample of photoproduced charm particles from the FOCUS(E831) experiment at Fermilab has been used to measure the D0 and D+ lifetimes. Using about 210000 D0 and 110000 D+ events we obtained the following values: 409.6 +/- 1.1 (statistical) +/- 1.5 (systematic) fs for D0 and 1039.4 +/- 4.3 (statistical) +/- 7.0 (systematic) fs for D+.
We present a first 3D magnetohydrodynamic (MHD) simulation of convective oxygen and neon shell burning in a non-rotating $18\, M_\odot$ star shortly before core collapse to study the generation of magnetic fields in supernova progenitors. We also run a purely hydrodynamic control simulation to gauge the impact of the magnetic fields on the convective flow and on convective boundary mixing. After about 17 convective turnover times, the magnetic field is approaching saturation levels in the oxygen shell with an average field strength of $\mathord{\sim}10^{10}\, \mathrm{G}$, and does not reach kinetic equipartition. The field remains dominated by small to medium scales, and the dipole field strength at the base of the oxygen shell is only $10^{9}\, \mathrm{G}$. The angle-averaged diagonal components of the Maxwell stress tensor mirror those of the Reynolds stress tensor, but are about one order of magnitude smaller. The shear flow at the oxygen-neon shell interface creates relatively strong fields parallel to the convective boundary, which noticeably inhibit the turbulent entrainment of neon into the oxygen shell. The reduced ingestion of neon lowers the nuclear energy generation rate in the oxygen shell and thereby slightly slows down the convective flow. Aside from this indirect effect, we find that magnetic fields do not appreciably alter the flow inside the oxygen shell. We discuss the implications of our results for the subsequent core-collapse supernova and stress the need for longer simulations, resolution studies, and an investigation of non-ideal effects for a better understanding of magnetic fields in supernova progenitors.
Many sequential decision-making problems in communication networks can be modeled as contextual bandit problems, which are natural extensions of the well-known multi-armed bandit problem. In contextual bandit problems, at each time, an agent observes some side information or context, pulls one arm and receives the reward for that arm. We consider a stochastic formulation where the context-reward tuples are independently drawn from an unknown distribution in each trial. Motivated by networking applications, we analyze a setting where the reward is a known non-linear function of the context and the chosen arm's current state. We first consider the case of discrete and finite context-spaces and propose DCB($\epsilon$), an algorithm that we prove, through a careful analysis, yields regret (cumulative reward gap compared to a distribution-aware genie) scaling logarithmically in time and linearly in the number of arms that are not optimal for any context, improving over existing algorithms where the regret scales linearly in the total number of arms. We then study continuous context-spaces with Lipschitz reward functions and propose CCB($\epsilon, \delta$), an algorithm that uses DCB($\epsilon$) as a subroutine. CCB($\epsilon, \delta$) reveals a novel regret-storage trade-off that is parametrized by $\delta$. Tuning $\delta$ to the time horizon allows us to obtain sub-linear regret bounds, while requiring sub-linear storage. By exploiting joint learning for all contexts we get regret bounds for CCB($\epsilon, \delta$) that are unachievable by any existing contextual bandit algorithm for continuous context-spaces. We also show similar performance bounds for the unknown horizon case.
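To make the setting concrete, the following minimal sketch mimics the key structural features named above: discrete contexts, a known non-linear reward function of the context and the chosen arm's state, and joint learning of the arm states shared across all contexts. It is an illustrative epsilon-greedy toy under assumed names and parameters, not the paper's DCB($\epsilon$) algorithm.

```python
# A minimal, self-contained sketch of the setting described above (not the
# paper's DCB(epsilon) algorithm): discrete contexts, a KNOWN non-linear
# reward function f(context, arm_state), and UNKNOWN arm states that are
# learned jointly across all contexts. All names and parameters here are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_arms, n_contexts, horizon, eps = 5, 3, 20000, 0.05
theta = rng.uniform(0.2, 0.8, n_arms)          # hidden arm states

def f(x, s):                                    # known non-linear reward
    return np.sin(np.pi * s) * (1.0 + 0.3 * x)  # function of context x, state s

sums = np.zeros(n_arms)                         # sufficient statistics for
counts = np.zeros(n_arms)                       # the state estimates

regret = 0.0
for t in range(horizon):
    x = rng.integers(n_contexts)                # observe a context
    est = np.where(counts > 0, sums / np.maximum(counts, 1), 0.5)
    if rng.random() < eps:                      # epsilon-greedy exploration
        a = rng.integers(n_arms)
    else:                                       # exploit: best arm under the
        a = int(np.argmax(f(x, est)))           # current state estimates
    s_obs = theta[a] + 0.05 * rng.normal()      # noisy state observation
    sums[a] += s_obs                            # joint learning: one estimate
    counts[a] += 1                              # per arm, shared by contexts
    regret += f(x, theta).max() - f(x, theta[a])

print(f"average regret per round: {regret / horizon:.4f}")
```

Because each arm's state estimate is shared by every context, information gathered under one context improves decisions under all others, which is the intuition behind the joint-learning regret gains described above.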
Several strategies have been recently proposed in order to improve Monte Carlo sampling efficiency using machine learning tools. Here, we challenge these methods by considering a class of problems that are known to be exponentially hard to sample using conventional local Monte Carlo at low enough temperatures. In particular, we study the antiferromagnetic Potts model on a random graph, which reduces to the coloring of random graphs at zero temperature. We test several machine-learning-assisted Monte Carlo approaches, and we find that they all fail. Our work thus provides good benchmarks for future proposals for smart sampling algorithms.
This article publicly releases three-dimensional reconstructions of the local Universe gravitational field below z=0.8 that were computed using the Cosmicflows-4 catalog of 56,000 galaxy distances and its sub-sample of 1,008 type Ia supernovae distances. The article also provides measurements of the growth rate of structure using the pairwise correlation of radial peculiar velocities, f sigma8 = 0.38(+/-0.04) (ungrouped CF4), f sigma8 = 0.36(+/-0.05) (grouped CF4), f sigma8 = 0.30(+/-0.06) (SNIa), and of the bulk flow in the 3D reconstructed Local Universe, 230 +/- 136 km s-1 at a distance of 300 Mpc from the observer. The exploration of 10,000 reconstructions shows that the distances delivered by the Cosmicflows-4 catalog are compatible with a Hubble constant of H0 = 74.5 +/- 0.1 (grouped CF4), H0 = 75.0 +/- 0.35 (ungrouped CF4) and H0 = 75.5 +/- 0.95 (CF4 SNIa subsample).
We investigate the local existence, finite time blow-up and global existence of sign-changing solutions to the inhomogeneous parabolic system with space-time forcing terms $$ u_t-\Delta u =|v|^{p}+t^\sigma w_1(x),\,\, v_t-\Delta v =|u|^{q}+t^\gamma w_2(x),\,\, (u(0,x),v(0,x))=(u_0(x),v_0(x)), $$ where $t>0$, $x\in \mathbb{R}^N$, $N\geq 1$, $p,q>1$, $\sigma,\gamma>-1$, $\sigma,\gamma\neq0$, $w_1,w_2\not\equiv0$, and $u_0,v_0\in C_0(\mathbb{R}^N)$. For the finite time blow-up, two cases are discussed under the conditions $w_i\in L^1(\mathbb{R}^N)$ and $\int_{\mathbb{R}^N} w_i(x)\,dx>0$, $i=1,2$. Namely, if $\sigma>0$ or $\gamma>0$, we show that the (mild) solution $(u,v)$ to the considered system blows up in finite time, while if $\sigma,\gamma\in(-1,0)$, then a finite time blow-up occurs when $\frac{N}{2}< \max\left\{\frac{(\sigma+1)(pq-1)+p+1}{pq-1},\frac{(\gamma+1)(pq-1)+q+1}{pq-1}\right\}$. Moreover, if $\frac{N}{2}\geq \max\left\{\frac{(\sigma+1)(pq-1)+p+1}{pq-1},\frac{(\gamma+1)(pq-1)+q+1}{pq-1}\right\}$, $p>\frac{\sigma}{\gamma}$ and $q>\frac{\gamma}{\sigma}$, we show that the solution is global for suitable initial values and $w_i$, $i=1,2$.
Many researchers have explored ways to bring static typing to dynamic languages. However, to date, such systems are not precise enough when types depend on values, which often arises when using certain Ruby libraries. For example, the type safety of a database query in Ruby on Rails depends on the table and column names used in the query. To address this issue, we introduce CompRDL, a type system for Ruby that allows library method type signatures to include type-level computations (or comp types for short). Combined with singleton types for table and column names, comp types let us give database query methods type signatures that compute a table's schema to yield very precise type information. Comp types for hash, array, and string libraries can also increase precision and thereby reduce the need for type casts. We formalize CompRDL and prove its type system sound. Rather than type check the bodies of library methods with comp types---those methods may include native code or be complex---CompRDL inserts run-time checks to ensure library methods abide by their computed types. We evaluated CompRDL by writing annotations with type-level computations for several Ruby core libraries and database query APIs. We then used those annotations to type check two popular Ruby libraries and four Ruby on Rails web apps. We found the annotations were relatively compact and could successfully type check 132 methods across our subject programs. Moreover, the use of type-level computations allowed us to check more expressive properties, with fewer manually inserted casts, than was possible without type-level computations. In the process, we found two type errors and a documentation error that were confirmed by the developers. Thus, we believe CompRDL is an important step forward in bringing precise static type checking to dynamic languages.
Quantum technology is an emerging cutting-edge field which offers a new paradigm for computation and research in physics, mathematics and other scientific disciplines. The technology is of strategic importance to governments globally, and heavy investments and budgets are being sanctioned to gain a competitive advantage in military, space and education terms. It is therefore important to understand the educational and research needs required to implement this technology at a large scale. Here, we propose a novel universal quantum technology master's curriculum that balances quantum hardware and software skills to enhance the employability of professionals, thereby reducing the skill shortage faced by academic institutions and organizations today. The proposed curriculum holds the potential to revolutionize the quantum education ecosystem by easing the pressure on startups to hire PhDs and promoting the growth of a balanced scientific mindset in quantum research.
Multiplicity is a ubiquitous characteristic of massive stars. Multiple systems offer us a unique observational constraint on the formation of high-mass systems. Herschel 36 A is a massive triple system composed of a close binary (Ab1-Ab2) and an outer component (Aa). We measured the orbital motion of the outer component of Herschel 36 A using infrared interferometry with the AMBER and PIONIER instruments of ESO's Very Large Telescope Interferometer. Our immediate aims are to constrain the masses of all components of this system and to determine whether the outer orbit is coplanar with the inner one. Reported spectroscopic data for all three components of this system and our interferometric data allow us to derive full orbital solutions for the outer orbit Aa-Ab and the inner orbit Ab1-Ab2. For the first time, we derive the absolute masses mAa = 22.3 +/- 1.7 M_sun, mAb1 = 20.5 +/- 1.5 M_sun and mAb2 = 12.5 +/- 0.9 M_sun. Despite not being able to resolve the close binary components, we infer the inclination of their orbit by imposing the same parallax as for the outer orbit. Inclinations derived from the inner and outer orbits imply a modest difference of about 22 deg between the orbital planes. We discuss this result and the formation of Herschel 36 A in the context of Core Accretion and Competitive Accretion models, which make different predictions regarding the statistics of the relative orbital inclinations.
Although deep convolutional networks have achieved great performance in face recognition tasks, the challenge of domain discrepancy still exists in real-world applications. Lack of domain coverage in the training data (source domain) makes the learned models degenerate in a testing scenario (target domain). In face recognition tasks, the classes in the two domains are usually different, so classical domain adaptation approaches, which assume that the domains share classes, may not be reasonable solutions for this problem. In this paper, self-supervised learning is adopted to learn a better embedding space in which the subjects in the target domain are more distinguishable. The learning goal is to maximize the similarity between the embeddings of each image and its mirror image in both domains. Experiments show competitive results compared with prior work. To understand why this approach achieves such performance, we further discuss how it affects the learning of embeddings.
The leading and next-to-leading radiative corrections to deep inelastic events with tagged photons are calculated analytically. Comparisons with previous results and numerical estimations are presented for the experimental conditions at HERA.
It has been demonstrated that small plaquettes of quantum dot spin qubits are capable of simulating condensed matter phenomena which arise from the Hubbard model, such as the collective Coulomb blockade and Nagaoka ferromagnetism. Motivated by recent materials developments, we investigate a bilayer arrangement of quantum dots with four dots in each layer which exhibits a complex ground state behavior. We find using a generalized Hubbard model with long-range Coulomb interactions, several distinct magnetic phases occur as the Coulomb interaction strength is varied, with possible ground states that are ferromagnetic, antiferromagnetic, or having both one antiferromagnetic and one ferromagnetic layer. We map out the full phase diagram of the system as it depends on the inter- and intra-layer Coulomb interaction strengths, and find that for a single layer, a similar but simpler effect occurs. We also predict interesting contrasts among electron, hole, and electron-hole bilayer systems arising from complex correlation physics. Observing the predicted magnetic configuration in already-existing few-dot semiconductor bilayer structures could prove to be an important assessment of current experimental quantum dot devices, particularly in the context of spin-qubit-based analog quantum simulations.
Recently, the Einstein-Bardeen theory minimally coupled to a complex, massive, free scalar field was investigated in arXiv:2305.19057. The introduction of a scalar field disrupts the formation of an event horizon, leaving only a type of solution referred to as a Bardeen-boson star. When the magnetic charge $q$ exceeds a certain critical value, the frozen Bardeen-boson star can be obtained in the limit $\omega \rightarrow 0$. In this paper, we extend the investigation to the Einstein-Hayward-scalar model, and obtain frozen Hayward-boson star solutions, including the ground and excited states. Furthermore, under the same parameters, it is interesting to observe that both the ground-state and the excited-state frozen stars have the same critical horizon and mass.
As part of an ongoing search for hypervelocity stars (HVS), I found seventeen Two Micron All Sky Survey (2MASS) sources with Gaia G magnitudes less than 16.0 and radial velocities less than -600 km/s. All these stars are brighter in the K band than in their V and G magnitudes. Ten of them (including three carbon stars) are long period variable (LPV) stars of Mira type. One is a relatively nearby high proper motion star, and one is a very high galactic latitude, chemically peculiar, metal-poor star; it may be a galactic halo star. One star is a Kepler red giant, two stars may be cluster members, and two are in a star forming region (probably YSOs). It is not clear how these stars acquired such high radial velocities. Further study of these seventeen stars is needed.
Quasilinear turbulent transport models are a successful tool for prediction of core tokamak plasma profiles in many regimes. Their success hinges on the reproduction of local nonlinear gyrokinetic fluxes. We focus on significant progress in the quasilinear gyrokinetic transport model QuaLiKiz [C. Bourdelle et al. 2016 Plasma Phys. Control. Fusion 58 014036], which employs an approximated solution of the mode structures to significantly speed up computation time compared to full linear gyrokinetic solvers. Optimization of the dispersion relation solution algorithm within integrated modelling applications leads to flux calculations $\times10^{6-7}$ faster than local nonlinear simulations. This allows tractable simulation of flux-driven dynamic profile evolution including all transport channels: ion and electron heat, main particles, impurities, and momentum. Furthermore, QuaLiKiz now includes the impact of rotation and temperature anisotropy induced poloidal asymmetry on heavy impurity transport, important for W-transport applications. Application within the JETTO integrated modelling code results in 1s of JET plasma simulation within 10 hours using 10 CPUs. Simultaneous predictions of core density, temperature, and toroidal rotation profiles for both JET hybrid and baseline experiments are presented, covering both ion and electron turbulence scales. The simulations are successfully compared to measured profiles, with agreement mostly in the 5-25% range according to standard figures of merit.
We use a publicly available numerical wave-propagation simulation (Hartlep et al. 2011) to test the ability of helioseismic holography to detect signatures of a compact, fully submerged, 5% sound-speed perturbation placed at a depth of 50 Mm within a solar model. We find that helioseismic holography, as employed in a nominal "lateral-vantage" or "deep-focus" geometry employing quadrants of an annular pupil, is capable of detecting and characterizing the perturbation. A number of tests of the methodology, including the use of a plane-parallel approximation, the definition of travel-time shifts, the use of different phase-speed filters, and changes to the pupils, are also performed. It is found that travel-time shifts made using Gabor-wavelet fitting are essentially identical to those derived from the phase of the Fourier transform of the cross-covariance functions. The errors in travel-time shifts caused by the plane-parallel approximation can be kept below a second for the depths and fields of view considered here. Based on the measured strength of the mean travel-time signal of the perturbation, and in conformance with expectations, no substantial improvement in sensitivity is produced by varying the analysis procedure from the nominal methodology. The measured travel-time shifts are essentially unchanged by varying the profile of the phase-speed filter or omitting the filter entirely. The method remains maximally sensitive when applied with pupils that are wide quadrants, as opposed to narrower quadrants or pupils composed of smaller arcs. We discuss the significance of these results for the recent controversy regarding suspected pre-emergence signatures of active regions.
In this article several properties of the inverse along an element will be studied in the context of unitary rings. New characterizations of the existence of this inverse will be proved. Moreover, the set of all elements invertible along a fixed element will be fully described. Furthermore, commuting inverses along an element will be characterized. The special cases of the group inverse, the (generalized) Drazin inverse and the Moore-Penrose inverse (in rings with involution) will also be considered.
The analysis of the total and geometric phases generated by neutrino oscillations shows that these phases for Majorana neutrinos depend on the representation of the mixing matrix and differ from those of Dirac neutrinos.
Existing face hallucination methods based on convolutional neural networks (CNN) have achieved impressive performance on low-resolution (LR) faces in a normal illumination condition. However, their performance degrades dramatically when LR faces are captured in low or non-uniform illumination conditions. This paper proposes a Copy and Paste Generative Adversarial Network (CPGAN) to recover authentic high-resolution (HR) face images while compensating for low and non-uniform illumination. To this end, we develop two key components in our CPGAN: internal and external Copy and Paste nets (CPnets). Specifically, our internal CPnet exploits facial information residing in the input image to enhance facial details, while our external CPnet leverages an external HR face for illumination compensation. A new illumination compensation loss is thus developed to capture illumination from the external guided face image effectively. Furthermore, our method offsets illumination and upsamples facial details alternately in a coarse-to-fine fashion, thus alleviating the correspondence ambiguity between LR inputs and external HR inputs. Extensive experiments demonstrate that our method produces authentic HR face images in a uniform illumination condition and outperforms state-of-the-art methods qualitatively and quantitatively.
Malware constitutes a major global risk affecting millions of users each year. Standard algorithms in detection systems perform insufficiently when dealing with malware passed through obfuscation tools. We illustrate this by studying in detail an open source metamorphic software, making use of a hybrid framework to obtain the relevant features from binaries. We then provide an improved alternative solution based on adversarial risk analysis, which we describe with an example.
We provide a simple Lagrangian interpretation of the meaning of the $b_0^-$ semi-relative condition in closed string theory. Namely, we show how the semi-relative condition is equivalent to the requirement that physical operators be cohomology classes of the BRS operators acting on the space of local fields {\it covariant} under world-sheet reparametrizations. States trivial in the absolute BRS cohomology but not in the semi-relative one are explicitly seen to correspond to BRS variations of operators which are not globally defined world-sheet tensors. We derive the covariant expressions for the observables of topological gravity. We use them to prove a formula that equates the expectation value of the gravitational descendant of ghost number 4 to the integral over the moduli space of the Weil-Petersson K\"ahler form.
We analyse the STU sectors of the four-dimensional maximal gauged supergravities with gauge groups ${\rm SO(8)}$, ${\rm SO(6)}\ltimes\mathbb{R}^{12}$ and $[{\rm SO(6)}\times{\rm SO(2)}]\ltimes\mathbb{R}^{12}$, and construct new domain-wall black-hole solutions in $D=4$. The consistent Kaluza-Klein embedding of these theories is obtained using the techniques of Exceptional Field Theory combined with the 4$d$ tensor hierarchies, and their respective uplifts into $D=11$ and type IIB supergravities are connected through singular limits that relate the different gaugings.
A 20-minute portion of the GPS signals collected during the Bridge 2 experimental campaign, performed by ESA, has been processed. An innovative algorithm called Parfait, developed by Starlab and implemented within Starlab's GNSS-R software package STARLIGHT (STARLab Interferometric Gnss Toolkit), has been successfully used with this set of data. A comparison with independently collected tide values and with differential GPS processed data has been performed. We report a successful PARIS phase altimetric measurement of the Zeeland Brug over the sea surface with a rapidly changing tide, with a precision better than 2 cm.
We quantize Quantum Electrodynamics in $2+1$ dimensions coupled to a Chern-Simons (CS) term and a charged spinor field, in covariant gauges and in the Coulomb gauge. The resulting Maxwell-Chern-Simons (MCS) theory describes charged fermions interacting with each other and with topologically massive propagating photons. We impose Gauss's law and the gauge conditions and investigate their effect on the dynamics and on the statistics of $n$-particle states. We construct charged spinor states that obey Gauss's law and the gauge conditions, and transform the theory to representations in which these states constitute a Fock space. We demonstrate that, in these representations, the nonlocal interactions between charges and between charges and transverse currents, as well as the interactions between currents and massive propagating photons, are identical in the different gauges we analyze in this and in earlier work. We construct the generators of the Poincar\'e group, show that they implement the Poincar\'e algebra, and explicitly demonstrate the effect of rotations and Lorentz boosts on the particle states. We show that the imposition of Gauss's law does not produce any ``exotic'' fractional statistics. In the case of the covariant gauges, this demonstration makes use of unitary transformations that provide charged particles with the gauge fields required by Gauss's law, but that leave the anticommutator algebra of the spinor fields untransformed. In the Coulomb gauge, we show that the anticommutators of the spinor fields apply to the Dirac-Bergmann constraint surfaces, on which Gauss's law and the gauge conditions obtain. We examine MCS theory in the large CS coupling constant limit, and compare that limiting form with CS theory, in which the Maxwell kinetic energy term is not included in the Lagrangian.
If Nature is supersymmetric at the weak interaction scale, what can we hope to learn from experiments on supersymmetric particles? The most mysterious aspect of phenomenological supersymmetry is the mechanism of spontaneous supersymmetry breaking. This mechanism ties the observable pattern of supersymmetric particle masses to aspects of the underlying unified theory at very small distance scales. In this article, I will discuss a systematic experimental program to determine the mechanism of supersymmetry breaking. Both $pp$ and $e^+e^-$ colliders of the next generation play an essential role. [Lecture presented at the 1995 Yukawa International Symposium (YKIS`95), to appear in the proceedings.]
Studies of n-CdZnTe crystals (photoluminescence, extrinsic photoconductivity, Hall effect, time-of-flight technique) have shown that an excess concentration of cadmium vacancies (Vcd) is the main reason for the typically low values of the hole mobility-lifetime product ($\mu_h\tau_h$). Reducing the concentration of cadmium vacancies by annealing the crystals at 600 C, as seen in the decreasing intensity of the near-1 eV photoluminescence band and of the (0.9-1.3) eV extrinsic photoconductivity band, increases $\mu_h\tau_h$. The influence of Zn on the basic photoelectric properties of CdZnTe crystals is explained by "self-control" of the cadmium vacancy concentration: the addition of Zn leads to the formation of metal divacancies, which partly dissociate and supply the crystal with the monovacancies needed for complex formation. This makes obtaining semi-insulating CdZnTe crystals less dependent on the cadmium pressure Pcd than is the case for CdTe. However, to obtain CdZnTe crystals with a high $\mu_h\tau_h$ (i.e., with a small concentration of free cadmium vacancies), it is necessary to control Pcd above the crystal during its growth and annealing.
I review recently conducted measurements of $G_E^n$ as well as precision form factor experiments at high momentum transfer that will be performed with the 11 GeV electron beam at Jefferson Lab.
Order picking is the single most cost-intensive activity in picker-to-parts warehouses, and as such has garnered large interest from the scientific community, leading to multiple problem formulations and a plethora of published algorithms. Unfortunately, most of them are not applicable at the scale of really large warehouses like those operated by Zalando, a leading European online fashion retailer. Based on our experience in operating Zalando's batching system, we propose a novel batching problem formulation for mixed-shelves, large-scale warehouses with zoning. It brings the selection of orders to be batched into the scope of the problem, making it more realistic while at the same time increasing the optimization potential. We present two baseline algorithms and compare them on a set of generated instances. Our results show that, first, even a basic greedy algorithm requires significant runtime to solve real-world instances and, second, including order selection in the studied problem shows large potential for improved solution quality.
We investigate the multineutron excitations of neutron-rich $^8$He in the electric dipole strength using a $^4$He+$n$+$n$+$n$+$n$ five-body model. Many-body unbound states of $^8$He are obtained in the complex scaling method. We find that different excitation modes coexist in the dipole strength below the excitation energy of 20 MeV. The strength below 10 MeV comes from the $^7$He+$n$ channel, indicating the sequential breakup of $^8$He via the $^7$He resonance to $^6$He+$n$+$n$. Above 10 MeV, the strength is obtained from many-body continuum states with strong configuration mixings, suggesting a new collective motion of four neutrons with the dipole oscillation against $^4$He.
In this paper we study the homeomorphisms of the disk that are liftable with respect to a simple branched covering. Since any such homeomorphism maps the branch set of the covering onto itself and liftability is invariant up to isotopy fixing the branch set, we are dealing in fact with liftable braids. We prove that the group of liftable braids is finitely generated by liftable powers of half-twists around arcs joining branch points. A set of such generators is explicitly determined for the special case of branched coverings from the disk to the disk. As a preliminary result we also obtain the classification of all the simple branched coverings of the disk.
Existing object detection methods are bounded to a fixed-set vocabulary by costly labeled data. When dealing with novel categories, the model has to be retrained with more bounding box annotations. Natural language supervision is an attractive alternative for its annotation-free attributes and broader object concepts. However, learning open-vocabulary object detection from language is challenging since image-text pairs do not contain fine-grained object-language alignments. Previous solutions rely on either expensive grounding annotations or distilling classification-oriented vision models. In this paper, we propose a novel open-vocabulary object detection framework that learns directly from image-text pair data. We formulate object-language alignment as a set matching problem between a set of image region features and a set of word embeddings. It enables us to train an open-vocabulary object detector on image-text pairs in a much simpler and more effective way. Extensive experiments on two benchmark datasets, COCO and LVIS, demonstrate our superior performance over the competing approaches on novel categories, e.g. achieving 32.0% mAP on COCO and 21.7% mask mAP on LVIS. Code is available at: https://github.com/clin1223/VLDet.
We propose to combine cepstrum and nonlinear time-frequency (TF) analysis to study multiple-component oscillatory signals with time-varying frequency and amplitude and with time-varying non-sinusoidal oscillatory patterns. The concept of cepstrum is applied to eliminate the influence of the wave-shape function on the TF analysis, and we propose a new algorithm, named the de-shape synchrosqueezing transform (de-shape SST). The mathematical model, the adaptive non-harmonic model, is introduced and the de-shape SST algorithm is theoretically analyzed. In addition to simulated signals, several different physiological, musical and biological signals are analyzed to illustrate the proposed algorithm.
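As background for the role of the cepstrum, the minimal sketch below shows the classical computation on a synthetic non-sinusoidal oscillation: the harmonics generated by the wave-shape function collapse into a single quefrency peak at the fundamental period. This is only the cepstrum ingredient, not the de-shape SST itself; the signal and parameters are illustrative.

```python
# A minimal sketch of the cepstrum step only (not the full de-shape SST):
# for a non-sinusoidal oscillation, the harmonics of the wave-shape function
# line up as a single peak in quefrency, from which the fundamental period
# can be read off. Signal and parameters below are illustrative.
import numpy as np

fs, f0 = 1000.0, 7.0                       # sampling rate (Hz), fundamental
t = np.arange(0, 4.0, 1.0 / fs)
# non-sinusoidal oscillatory pattern: fundamental plus strong harmonics
x = np.sin(2*np.pi*f0*t) + 0.6*np.sin(2*np.pi*2*f0*t) + 0.4*np.sin(2*np.pi*3*f0*t)

spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
log_spec = np.log(spectrum + 1e-12)        # log compresses the harmonic comb
cepstrum = np.abs(np.fft.irfft(log_spec))  # back to "quefrency" (seconds)

quefrency = np.arange(len(cepstrum)) / fs
lo = int(0.5 * fs / (5 * f0))              # ignore the very-low-quefrency end
peak = lo + np.argmax(cepstrum[lo:len(cepstrum) // 2])
print(f"estimated fundamental: {1.0 / quefrency[peak]:.2f} Hz (true {f0} Hz)")
```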
Hydra is a full-scale industrial CFD application used for the design of turbomachinery at Rolls Royce plc. It consists of over 300 parallel loops with a code base exceeding 50K lines and is capable of performing complex simulations over highly detailed unstructured mesh geometries. Unlike simpler structured-mesh applications, which feature high speed-ups when accelerated by modern processor architectures, such as multi-core and many-core processor systems, Hydra presents major challenges in data organization and movement that need to be overcome for continued high performance on emerging platforms. We present research in achieving this goal through the OP2 domain-specific high-level framework. OP2 targets the domain of unstructured mesh problems and follows the design of an active library using source-to-source translation and compilation to generate multiple parallel implementations from a single high-level application source for execution on a range of back-end hardware platforms. We chart the conversion of Hydra from its original hand-tuned production version to one that utilizes OP2, and map out the key difficulties encountered in the process. To our knowledge this research presents the first application of such a high-level framework to a full scale production code. Specifically we show (1) how different parallel implementations can be achieved with an active library framework, even for a highly complicated industrial application such as Hydra, and (2) how different optimizations targeting contrasting parallel architectures can be applied to the whole application, seamlessly, reducing developer effort and increasing code longevity. Performance results demonstrate that not only the same runtime performance as that of the hand-tuned original production code could be achieved, but it can be significantly improved on conventional processor systems. Additionally, we achieve further...
Observations indicate that the fraction of potential binary star clusters in the Magellanic Clouds is about 10%. In contrast, it is widely accepted that the binary cluster frequency in the Galactic disk is much lower. Here we investigate the multiplicity of clusters in the Milky Way disk to either confirm or disprove this dearth of binaries. We quantify the open cluster multiplicity using complete, volume-limited samples from WEBDA and NCOVOCC. At the Solar Circle, at least 12% of all open clusters appear to be experiencing some type of interaction with another cluster; i.e., are possible binaries. As in the Magellanic Clouds, the pair separation histogram hints at a bimodal distribution. Nearly 40% of identified pairs are probably primordial. Most of the remaining pairs could be undergoing some type of close encounter, perhaps as a result of orbital resonances. Confirming early theoretical predictions, the characteristic time scale for destruction of bound pairs in the disk is 200 Myr, or one galactic orbit. Our results show that the fraction of possible binary clusters in the Galactic disk is comparable to that in the Magellanic Clouds.
A supersymmetric action functional describing the interaction of the fundamental superstring with the D=10, type IIB Dirichlet super-9-brane is presented. A set of supersymmetric equations for the coupled system is obtained from the action principle. It is found that the interaction of the string endpoints with the super-D9-brane gauge field requires some restrictions for the image of the gauge field strength. When those restrictions are not imposed, the equations imply the absence of the endpoints, and the equations coincide either with the ones of the free super-D9-brane or with the ones for the free closed type IIB superstring. Different phases of the coupled system are described. A generalization to an arbitrary system of intersecting branes is discussed.
The Sun Coronal Ejection Tracker (SunCET) is an extreme ultraviolet imager and spectrograph instrument concept for tracking coronal mass ejections through the region where they experience the majority of their acceleration: the difficult-to-observe middle corona. It contains a wide field of view (0-4 $R_\odot$) imager and a 1 \AA\ spectral-resolution irradiance spectrograph spanning 170-340 \AA. It leverages new detector technology to read out different areas of the detector with different integration times, resulting in what we call "simultaneous high dynamic range", as opposed to the traditional high dynamic range camera technique of subsequent full-frame images that are then combined in post-processing. This allows us to image the bright solar disk with a short integration time, the middle corona with a long integration time, and the spectra with their own, independent integration time. Thus, SunCET does not require the use of an opaque or filtered occulter. SunCET is also compact -- $\sim$15 $\times$ 15 $\times$ 10 cm in volume -- making it an ideal instrument for a CubeSat or a small, complementary addition to a larger mission. Indeed, SunCET is presently in a NASA-funded, competitive Phase A as a CubeSat and has also been proposed to NASA as an instrument onboard a 184 kg Mission of Opportunity.
Some judiciously chosen local curvature scalars can be used to invariantly characterize event horizons of black holes in $D > 3$ dimensions, but they fail for the three dimensional Ba\~nados-Teitelboim-Zanelli (BTZ) black hole since all curvature invariants are constant. Here we provide an invariant characterization of the event horizon of the BTZ black hole using the curvature invariants of codimension one hypersurfaces instead of the full spacetime. Our method is also applicable to black holes in generic dimensions but is most efficient in three, four, and five dimensions. We give four dimensional Kerr, five dimensional Myers-Perry and three dimensional warped-anti-de Sitter, and the three dimensional asymptotically flat black holes as examples.
Very fast magnetic avalanches in (La, Pr)-based manganites are the signature of a phase transition from an insulating blocked charge-ordered (CO-AFM) state to a charge-delocalized ferromagnetic (CD-FM) state. We report here the experimental observation that this transition occurs neither simultaneously nor randomly in the whole sample; instead there is a spatial propagation with a velocity of the order of tens of m/s. Our results show that avalanches originate in the interior of the sample, move outward, and occur at values of the applied magnetic field that depend on the CD-FM fraction in the sample. Moreover, a change in the gradient of the magnetic field along the sample shifts the point where the avalanches are ignited.
A powerful approach to solving the Coulombic quantum three-body problem is proposed. The approach is exponentially convergent and more efficient than the Hyperspherical Coordinate (HC) method and the Correlation Function Hyperspherical Harmonic (CFHH) method. It is numerically competitive with variational methods, such as those using Hylleraas-type basis functions. Numerical comparisons demonstrating this are made by calculating the non-relativistic, infinite-nuclear-mass limit of the ground state energy of the helium atom. The exponential convergence of this approach is due to the full matching between the analytical structure of the basis functions that I use and the true wave function. This full matching is not reached by almost any other method; for example, the variational method using the Hylleraas-type basis does not reflect the logarithmic singularity of the true wave function at the origin predicted by Bartlett and Fock. Two techniques are proposed in this work to reach this full matching: the coordinate transformation method and the asymptotic series method. Besides these, this work makes use of the least squares method to substitute for complicated numerical integrations in solving the Schr\"{o}dinger equation, without much loss of accuracy; this method is routinely used to fit a theoretical curve to discrete experimental data, but I use it here to simplify the computation.
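The least-squares substitution mentioned in the last sentence can be illustrated generically: rather than evaluating projection integrals, one samples a trial expansion on discrete collocation points and fixes the coefficients with the same least-squares machinery used for fitting experimental data. The following short sketch (with an arbitrary basis and target, both assumptions) shows the mechanics.

```python
# A minimal sketch of the idea mentioned above: instead of computing
# projection integrals <phi_k, f>, sample the residual on discrete
# collocation points and fix the expansion coefficients by least squares
# (the same machinery normally used to fit curves to experimental data).
# Basis and target below are illustrative choices.
import numpy as np

x = np.linspace(0.0, 1.0, 200)                   # collocation points
K = 8
basis = np.stack([np.sin((k + 1) * np.pi * x) for k in range(K)], axis=1)
target = x * (1.0 - x) * np.exp(x)               # stand-in for a wave function

coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
approx = basis @ coeffs
print(f"max pointwise error: {np.max(np.abs(approx - target)):.2e}")
```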
In this paper, we tackle a new and challenging problem of text-driven generation of 3D garments with high-quality textures. We propose "WordRobe", a novel framework for the generation of unposed & textured 3D garment meshes from user-friendly text prompts. We achieve this by first learning a latent representation of 3D garments using a novel coarse-to-fine training strategy and a loss for latent disentanglement, promoting better latent interpolation. Subsequently, we align the garment latent space to the CLIP embedding space in a weakly supervised manner, enabling text-driven 3D garment generation and editing. For appearance modeling, we leverage the zero-shot generation capability of ControlNet to synthesize view-consistent texture maps in a single feed-forward inference step, thereby drastically decreasing the generation time as compared to existing methods. We demonstrate superior performance over current SOTAs for learning 3D garment latent space, garment interpolation, and text-driven texture synthesis, supported by quantitative evaluation and qualitative user study. The unposed 3D garment meshes generated using WordRobe can be directly fed to standard cloth simulation & animation pipelines without any post-processing.
The Ring Loading Problem emerged in the 1990s to model an important special case of telecommunication networks (SONET rings) which gained attention from practitioners and theorists alike. Given an undirected cycle on $n$ nodes together with non-negative demands between any pair of nodes, the Ring Loading Problem asks for an unsplittable routing of the demands such that the maximum cumulated demand on any edge is minimized. Let $L$ be the value of such a solution. In the relaxed version of the problem, each demand can be split into two parts where the first part is routed clockwise while the second part is routed counter-clockwise. Denote with $L^*$ the maximum load of a minimum split routing solution. In a landmark paper, Schrijver, Seymour and Winkler [SSW98] showed that $L \leq L^* + 1.5D$, where $D$ is the maximum demand value. They also found (implicitly) an instance of the Ring Loading Problem with $L = L^* + 1.01D$. Recently, Skutella [Sku16] improved these bounds by showing that $L \leq L^* + \frac{19}{14}D$, and there exists an instance with $L = L^* + 1.1D$. We contribute to this line of research by showing that $L \leq L^* + 1.3D$. We also take a first step towards lower and upper bounds for small instances.
We introduce a method of verifying termination of logic programs with respect to concrete queries (instead of abstract query patterns). A necessary and sufficient condition is established and an algorithm for automatic verification is developed. In contrast to existing query pattern-based approaches, our method has the following features: (1) It applies to all general logic programs with non-floundering queries. (2) It is very easy to automate because it does not need to search for a level mapping or a model, nor does it need to compute an interargument relation based on additional mode or type information. (3) It bridges termination analysis with loop checking, the two problems that have been studied separately in the past despite their close technical relation with each other.
We report measurements of the isotope shifts of two transitions ($4f^46s\rightarrow [25044.7]^{\circ}_{7/2}$ and $4f^46s\rightarrow [25138.6]^{\circ}_{7/2}$) in neodymium ions (Nd$^+$) with hundredfold improved accuracy, using laser spectroscopy of a cryogenically-cooled neutral plasma. The isotope shifts were measured across a set of five spin-zero isotopes that spans a nuclear shape transition. We discuss the prospects for further improvements to the accuracy of Nd$^+$ isotope shifts using optical clock transitions, which could enable higher precision tests of King plot linearity for new physics searches.
Recursion formulae are derived for the calculation of two centre matrix elements of a radial function in relativistic quantum mechanics. The recursions are obtained between not necessarily diagonal radial eigenstates using arbitrary radial potentials and any radial functions. The only restriction is that the potentials have to share a common minimum. Among other things, the relations so obtained can help in evaluating relativistic corrections to transition probabilities between atomic Rydberg states.
We estimate the absolute magnitude distribution of galaxies which lie within about a Mpc of Mg II absorption systems. The absorption systems themselves lie along 1880 lines of sight to QSOs from the Sloan Digital Sky Survey Data Release 3, have rest equivalent widths greater than 0.88 Angstroms, and redshifts between 0.37 < z < 0.82. Our measurement is based on all galaxies which lie within a projected distance of about 900 kpc/h of each QSO demonstrating absorption. The redshifts of these projected neighbors are not available, so we use a background subtraction technique to estimate the absolute magnitude distribution of true neighbors. (Our method exploits the fact that, although we do not know the redshifts of the neighbors, we do know the redshift of the absorbers.) The absolute magnitude distribution we find is well described by a bell-shaped curve peaking at about rest-frame M_B = -20, corresponding to L/L* = 1.4. A comparison of this observed distribution to ones in the literature suggests that it is unlikely to be drawn from a population dominated by late-type galaxies. However, the strong equivalent width systems may be associated with later galaxy types. Finally we use the absolute magnitude distribution, along with the observed covering fraction of about 8 percent, to estimate the extent of the MgII absorbing gas around a galaxy. For an L* galaxy, this scale is about 70 kpc/h. We provide an analytic description of our method, which is generally applicable to any dataset in which redshifts are only available for a small sub-sample. Hence, we expect it to aid in the analysis of galaxy scaling relations from photometric redshift datasets.
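A schematic version of the background-subtraction step, with purely synthetic counts, is sketched below: apparent magnitudes in absorber fields and in reference fields are converted to absolute magnitudes using the known absorber redshift, and the reference histogram is subtracted to leave the statistical excess of true neighbours. The numbers here (distance modulus, field counts) are illustrative assumptions, not the paper's.

```python
# A schematic sketch of the background-subtraction estimate described above,
# with synthetic counts. Apparent magnitudes m are converted to absolute
# magnitudes M using the KNOWN absorber redshift (illustrative distance
# modulus), and the mean histogram of random reference fields is subtracted.
import numpy as np

rng = np.random.default_rng(1)
mu = 41.5                                   # assumed distance modulus at z~0.5
bins = np.arange(-23.0, -17.0, 0.5)         # absolute-magnitude bins

# synthetic apparent magnitudes: absorber fields = true neighbours + fore/background
m_true = rng.normal(-20.0 + mu, 1.0, 300)
m_back_absorber = rng.uniform(18.0, 24.0, 2000)
m_absorber_fields = np.concatenate([m_true, m_back_absorber])
m_reference_fields = rng.uniform(18.0, 24.0, 2000)   # no true neighbours

n_abs, _ = np.histogram(m_absorber_fields - mu, bins)
n_ref, _ = np.histogram(m_reference_fields - mu, bins)
n_neighbours = n_abs - n_ref                # statistical excess = true neighbours
for lo, n in zip(bins[:-1], n_neighbours):
    print(f"M_B in [{lo:5.1f},{lo+0.5:5.1f}): {n:4d}")
```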
We investigate the problem of quantifying contraction coefficients of Markov transition kernels in Kantorovich ($L^1$ Wasserstein) distances. For diffusion processes, relatively precise quantitative bounds on contraction rates have recently been derived by combining appropriate couplings with carefully designed Kantorovich distances. In this paper, we partially carry over this approach from diffusions to Markov chains. We derive quantitative lower bounds on contraction rates for Markov chains on general state spaces that are powerful if the dynamics is dominated by small local moves. For Markov chains on $\mathbb{R}^d$ with isotropic transition kernels, the general bounds can be used efficiently together with a coupling that combines maximal and reflection coupling. The results are applied to Euler discretizations of stochastic differential equations with non-globally contractive drifts, and to the Metropolis adjusted Langevin algorithm for sampling from a class of probability measures on high dimensional state spaces that are not globally log-concave.
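For orientation, the quantity being bounded can be written out explicitly; the block below states the textbook definition of a Wasserstein contraction rate, as background rather than as a quotation from the paper.

```latex
% Background definition (not quoted from the paper): a Markov transition
% kernel p on a metric space (E, d) contracts at rate c \in (0, 1] in the
% L^1 Wasserstein distance if, for all probability measures \mu, \nu on E,
\[
  \mathcal{W}^1(\mu p,\, \nu p) \;\le\; (1 - c)\,\mathcal{W}^1(\mu, \nu),
  \qquad
  \mathcal{W}^1(\mu, \nu) \;=\; \inf_{\gamma \in \Pi(\mu, \nu)}
  \int_{E \times E} d(x, y)\, \gamma(\mathrm{d}x\, \mathrm{d}y),
\]
% where \Pi(\mu, \nu) is the set of couplings of \mu and \nu, and
% (\mu p)(A) = \int p(x, A)\, \mu(\mathrm{d}x) is the action of the kernel.
```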
Recent calculations by Vorobev and Malyshenko (JETP Letters, 71, 39, 2000) show that molecular hydrogen may stay liquid and superfluid in strong electric fields of the order of $4\times 10^7 V/cm$. I demonstrate that strong local electric fields of similar magnitude exist beneath a two-dimensional layer of electrons localized in the image potential above the surface of solid hydrogen. Even stronger local fields exist around charged particles (ions or electrons) if surface or bulk of a solid hydrogen crystal is statically charged. Measurements of the frequency shift of the $1 \to 2$ photoresonance transition in the spectrum of two-dimensional layer of electrons above positively or negatively charged solid hydrogen surface performed in the temperature range 7 - 13.8 K support the prediction of electric field induced surface melting. The range of surface charge density necessary to stabilize the liquid phase of molecular hydrogen at the temperature of superfluid transition is estimated.
We construct infinite cubefree binary words containing exponentially many distinct squares of length n. We also show that for every positive integer n, there is a cubefree binary square of length 2n.
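For readers who want to experiment, a brute-force check of the two notions involved (cubes www and squares ww) on finite binary words might look as follows; this is purely illustrative and plays no role in the constructions of the paper.

```python
# A small sketch for checking the combinatorial notions used above on finite
# words: a cube is a factor www, a square a factor ww. Brute force only,
# for illustration.
def is_cubefree(w: str) -> bool:
    n = len(w)
    for i in range(n):
        for L in range(1, (n - i) // 3 + 1):
            if w[i:i+L] == w[i+L:i+2*L] == w[i+2*L:i+3*L]:
                return False
    return True

def distinct_squares(w: str) -> set[str]:
    n, found = len(w), set()
    for i in range(n):
        for L in range(1, (n - i) // 2 + 1):
            if w[i:i+L] == w[i+L:i+2*L]:
                found.add(w[i:i+2*L])
    return found

w = "001100110"                      # an illustrative binary word
print(is_cubefree(w), len(distinct_squares(w)))
```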
Spin-fermion systems, which obtain their magnetic properties from a system of localized magnetic moments coupled to conducting electrons, are considered. The dynamical degrees of freedom are spin-$s$ operators of localized spins and spin-1/2 Fermi operators of itinerant electrons. Renormalized spin-wave theory, which accounts for the magnon-magnon interaction, and its extension are developed to describe the two ferrimagnetic phases in the system: the low temperature phase $0<T<T^{*}$, where all electrons contribute to the ordered ferromagnetic moment, and the high temperature phase $T^{*}<T<T_C$, where only localized spins form the magnetic moment. The magnetization as a function of temperature is calculated. The theoretical predictions are utilized to interpret the experimentally measured magnetization-temperature curves of $UGe_2$.
We study some extension of a discrete Heisenberg group coming from the theory of loop-groups and find invariants of conjugacy classes in this group. In some cases including the case of the integer Heisenberg group we make these invariants more explicit.
We demonstrate that a spin current can be generated by an ac voltage in a one-channel quantum wire with strong repulsive electron interactions in the presence of a non-magnetic impurity and a uniform static magnetic field. In a certain range of voltages, the spin current can exhibit a power-law dependence on the ac voltage bias with a negative exponent. The spin current, expressed in units of $\hbar/2$ per second, can become much larger than the charge current in units of the electron charge per second. The spin current generation requires neither spin-polarized particle injection nor time-dependent magnetic fields.
Although Fourier series approximation is ubiquitous in computational physics owing to the Fast Fourier Transform (FFT) algorithm, efficient techniques for the fast evaluation of a three-dimensional truncated Fourier series at a set of \emph{arbitrary} points are quite rare, especially in the MATLAB language. Here we employ the Nonequispaced Fast Fourier Transform (NFFT, by J. Keiner, S. Kunis, and D. Potts), a C library designed for this purpose, and provide a MATLAB and GNU Octave interface that makes NFFT easily available to the Numerical Analysis community. We test the effectiveness of our package in the framework of quantum vortex reconnections, where pseudospectral Fourier methods are commonly used and local high resolution is required in the post-processing stage. We show that the efficient evaluation of a truncated Fourier series at arbitrary points provides excellent results at a computational cost much smaller than carrying out a numerical simulation of the problem on a sufficiently fine regular grid that can reproduce comparable details of the reconnecting vortices.
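For reference, the computation being accelerated is the direct evaluation of a truncated Fourier series at arbitrary points, which costs O(M N^3) for M points and an N x N x N coefficient grid, versus roughly O(N^3 log N + M) for NFFT-type algorithms. The sketch below (plain NumPy, not the NFFT interface itself; the array shapes and frequency ordering are our own conventions) makes the task concrete:

```python
import numpy as np

def eval_fourier_series_direct(c, pts):
    """Evaluate f(x) = sum_k c_k exp(2*pi*i k.x) at arbitrary points.
    c: (N, N, N) coefficients for frequencies -N//2 .. N - N//2 - 1 on
    each axis; pts: (M, 3) points in [0, 1)^3. Cost O(M*N^3) -- the
    slow reference computation that NFFT reduces dramatically."""
    N = c.shape[0]
    ks = np.arange(N) - N // 2
    out = np.empty(len(pts), dtype=complex)
    for m, (x, y, z) in enumerate(pts):
        ex = np.exp(2j * np.pi * ks * x)
        ey = np.exp(2j * np.pi * ks * y)
        ez = np.exp(2j * np.pi * ks * z)
        # Tensor-product structure: contract the three phase vectors with c.
        out[m] = np.einsum('i,j,l,ijl->', ex, ey, ez, c)
    return out

rng = np.random.default_rng(0)
c = rng.standard_normal((8, 8, 8)) + 1j * rng.standard_normal((8, 8, 8))
pts = rng.random((5, 3))
print(eval_fourier_series_direct(c, pts))
```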
We present a simultaneous, multi-wavelength campaign targeting the nearby (7.2 pc) L8/L9 (optical/near-infrared) dwarf WISEP J060738.65+242953.4 in the mid-infrared, radio, and optical. Spitzer Space Telescope observations show no variability at the 0.2% level over 10 hours each in the 3.6 and 4.5 micron bands. Kepler K2 monitoring over 36 days in Campaign 0 rules out stable periodic signals in the optical with amplitudes greater than 1.5% and periods between 1.5 hours and 2 days. Non-simultaneous Gemini optical spectroscopy detects lithium, constraining this L dwarf to be less than ~2 Gyr old, but no Balmer emission is observed. The low measured projected rotation velocity (v sin i < 6 km/s) and lack of variability are very unusual compared to other brown dwarfs, and we argue that this substellar object is likely viewed pole-on. We detect quiescent (non-bursting) radio emission with the VLA. Amongst radio-detected L and T dwarfs, it has the lowest observed L_nu and the lowest v sin i. We discuss the implications of a pole-on detection for various proposed radio emission scenarios.
The {\em topological symmetry group} of an embedding $\Gamma$ of an abstract graph $\gamma$ in $S^3$ is the group of automorphisms of $\gamma$ which can be realized by homeomorphisms of the pair $(S^3, \Gamma)$. These groups are motivated by questions about the symmetries of molecules in space. In this paper, we find all the groups which can be realized as topological symmetry groups for each of the graphs in the Heawood family. This is an important collection of spatial graphs, containing the only intrinsically knotted graphs with 21 or fewer edges. As a consequence, we discover that the graphs in this family are all intrinsically chiral.
Optimization of high-dimensional black-box functions is an extremely challenging problem. While Bayesian optimization (BO) has emerged as a popular approach for optimizing black-box functions, its applicability has been limited to low-dimensional problems due to the computational and statistical challenges arising in high-dimensional settings. In this paper, we propose to tackle these challenges by (1) assuming a latent additive structure in the function and inferring it properly for more efficient and effective BO, and (2) performing multiple evaluations in parallel to reduce the number of iterations required by the method. Our novel approach learns the latent structure with Gibbs sampling and constructs batched queries using determinantal point processes. Experimental validations on both synthetic and real-world functions demonstrate that the proposed method outperforms the existing state-of-the-art approaches.
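To see why a latent additive decomposition helps, note that once the objective splits into independent groups of coordinates, each group can be optimized separately, so the search cost grows with the largest group size rather than the full dimension. A toy sketch of just this observation (not the paper's Gibbs-sampling or DPP machinery; the objective and group structure below are invented for illustration):

```python
import numpy as np

# Toy objective that is additive over disjoint groups of coordinates
# (the kind of latent structure the paper infers with Gibbs sampling).
groups = [[0, 1], [2, 3], [4, 5]]

def f_group(g, xg):
    # Hypothetical per-group component, maximized at xg = 0.3 * (g + 1).
    return -np.sum((xg - 0.3 * (g + 1)) ** 2)

def f(x):
    return sum(f_group(g, x[idx]) for g, idx in enumerate(groups))

# With the decomposition known, optimize each group on its own grid:
# the cost is exponential only in the group size, not in d = 6.
grid = np.linspace(0.0, 1.0, 21)
x_opt = np.zeros(6)
for g, idx in enumerate(groups):
    cand = np.array(np.meshgrid(*([grid] * len(idx)))).reshape(len(idx), -1).T
    best = cand[np.argmax([f_group(g, xg) for xg in cand])]
    x_opt[idx] = best

print("optimum found:", np.round(x_opt, 2), "value: %.4f" % f(x_opt))
```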
We consider the classic 1-center problem: Given a set $P$ of $n$ points in a metric space, find the point in $P$ that minimizes the maximum distance to the other points of $P$. We study the complexity of this problem in $d$-dimensional $\ell_p$-metrics and in edit and Ulam metrics over strings of length $d$. Our results for the 1-center problem may be classified based on $d$ as follows. $\bullet$ Small $d$: Assuming the hitting set conjecture (HSC), we show that when $d=\omega(\log n)$, no subquadratic algorithm can solve the 1-center problem in any of the $\ell_p$-metrics, or in the edit or Ulam metrics. $\bullet$ Large $d$: When $d=\Omega(n)$, we extend our conditional lower bound to rule out subquartic algorithms for the 1-center problem in the edit metric (assuming Quantified SETH). On the other hand, we give a $(1+\epsilon)$-approximation for 1-center in the Ulam metric with running time $\tilde{O_{\varepsilon}}(nd+n^2\sqrt{d})$. We also strengthen some of the above lower bounds by allowing approximations or by reducing the dimension $d$, but only against a weaker class of algorithms which list all requisite solutions. Moreover, we extend one of our hardness results to rule out subquartic algorithms for the well-studied 1-median problem in the edit metric, where, given a set of $n$ strings each of length $n$, the goal is to find a string in the set that minimizes the sum of the edit distances to the rest of the strings in the set.
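The quadratic baseline that these lower bounds address is straightforward: compute all pairwise distances and take the point whose worst-case distance is smallest. A minimal sketch for the edit metric, using the standard Levenshtein dynamic program (the input strings are placeholders):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(|a||b|) Levenshtein dynamic program."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                            # deletion
                         cur[j - 1] + 1,                         # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))   # substitution
        prev = cur
    return prev[n]

def one_center(P):
    """Return the string in P minimizing the maximum edit distance to
    the rest of P -- Theta(n^2) distance computations."""
    return min(P, key=lambda s: max(edit_distance(s, t) for t in P))

P = ["abcde", "abde", "xbcde", "abcdx"]
print(one_center(P))  # "abcde": max distance 1 to every other string
```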
Data prefetching aims to improve access times to data storage systems by predicting data records that are likely to be accessed by subsequent requests and retrieving them into a memory cache before they are needed. In the case of Persistent Object Stores, previous approaches to prefetching have been based either on predictions made through analysis of the store's schema, which generates rigid predictions, or on monitoring access patterns to the store while applications are executed, which introduces memory and/or computation overhead. In this paper, we present CAPre, a novel prefetching system for Persistent Object Stores based on static code analysis of object-oriented applications. CAPre generates its predictions at compile-time and does not introduce any overhead to the application execution. Moreover, CAPre is able to predict large amounts of objects that will be accessed in the near future, thus enabling the object store to perform parallel prefetching if the objects are distributed, in a much more aggressive way than in schema-based prediction algorithms. We integrate CAPre into a distributed Persistent Object Store and run a series of experiments that show that it can reduce the execution time of applications by anywhere from 9% to over 50%, depending on the nature of the application and its persistent data model.
The ability to model and predict the ego-vehicle's surrounding traffic is crucial for autonomous pilots and intelligent driver-assistance systems. Acceleration prediction is important as one of the major components of traffic prediction. This paper proposes novel approaches to the acceleration prediction problem. By representing spatial relationships between vehicles with a graph model, we build a generalized acceleration prediction framework. This paper studies the effectiveness of the proposed Graph Convolution Networks, which operate on graphs to predict the acceleration distribution for vehicles driving on highways. We further investigate prediction improvement through the integration of Recurrent Neural Networks to disentangle the temporal complexity inherent in the traffic data. Results from simulation studies using comprehensive performance metrics support the conclusion that our proposed networks outperform state-of-the-art methods in generating realistic trajectories over a prediction horizon.
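A minimal sketch of the kind of graph-convolutional predictor described here, assuming a Kipf-Welling-style propagation rule and a Gaussian output head; the feature set, layer sizes, and class name are our own illustrative choices, not the paper's architecture:

```python
import torch
import torch.nn as nn

class AccelGCN(nn.Module):
    """Two graph-convolution layers over a vehicle-interaction graph,
    followed by a head emitting a Gaussian over each vehicle's
    acceleration: (mean, log-variance). Hypothetical sketch."""
    def __init__(self, in_dim=4, hid=32):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid)
        self.w2 = nn.Linear(hid, hid)
        self.head = nn.Linear(hid, 2)

    def forward(self, x, adj):
        # adj: (V, V) adjacency with self-loops; row-normalize it.
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        a_hat = adj / deg
        h = torch.relu(self.w1(a_hat @ x))   # aggregate neighbor features
        h = torch.relu(self.w2(a_hat @ h))
        mean, log_var = self.head(h).unbind(-1)
        return mean, log_var

# 5 vehicles; features = (x, y, speed, heading); fully connected toy graph.
x = torch.randn(5, 4)
adj = torch.ones(5, 5)
mean, log_var = AccelGCN()(x, adj)
print(mean.shape, log_var.shape)  # torch.Size([5]) torch.Size([5])
```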
We present a machine learning method to predict extreme hydrologic events from spatially and temporally varying hydrological and meteorological data. We used a timestep reduction technique to reduce the computational and memory requirements and trained a bidirectional LSTM network to predict soil water and stream flow from time series data observed and simulated over eighty years in the Wabash River Watershed. We show that our simple model can be trained much faster than complex attention networks such as GeoMAN without sacrificing accuracy. Based on the predicted values of soil water and stream flow, we predict the occurrence and severity of extreme hydrologic events such as droughts. We also demonstrate that extreme events can be predicted in geographical locations separate from the locations observed during the training process. This spatially-inductive setting enables us to predict extreme events in other areas of the US and other parts of the world using our model trained with the Wabash Basin data.
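A bidirectional LSTM of the kind described reads each (reduced) time window in both directions and regresses the two targets from the final hidden state. A minimal PyTorch sketch, with invented feature counts and window length:

```python
import torch
import torch.nn as nn

class HydroBiLSTM(nn.Module):
    """Bidirectional LSTM mapping a window of meteorological/hydrological
    drivers to soil-water and streamflow targets (illustrative shapes)."""
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)  # (soil water, streamflow)

    def forward(self, x):            # x: (batch, timesteps, n_features)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])    # regress from the last timestep

model = HydroBiLSTM()
x = torch.randn(16, 30, 8)           # 16 windows of 30 reduced timesteps
print(model(x).shape)                # torch.Size([16, 2])
```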
This paper presents the development of a Supervisory Control and Data Acquisition (SCADA) system testbed used for cybersecurity research. The testbed consists of a water storage tank's control system, which is a stage in the process of water treatment and distribution. Sophisticated cyber-attacks were conducted against the testbed. During the attacks, the network traffic was captured, and features were extracted from the traffic to build a dataset for training and testing different machine learning algorithms. Five traditional machine learning algorithms were trained to detect the attacks: Random Forest, Decision Tree, Logistic Regression, Naive Bayes, and KNN. Then, the trained machine learning models were built and deployed in the network, where new tests were made using online network traffic. The performance obtained during the training and testing of the machine learning models was compared to the performance obtained during the online deployment of these models in the network. The results show the efficiency of the machine learning models in detecting the attacks in real time. The testbed provides a good understanding of the effects and consequences of attacks on real SCADA environments.
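The training stage maps naturally onto a standard scikit-learn workflow. A compact sketch with the five named classifiers, using synthetic placeholder features in lieu of the testbed's extracted traffic features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in for features extracted from captured SCADA network traffic.
X, y = make_classification(n_samples=2000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(),
}
for name, clf in models.items():
    print(name, round(clf.fit(X_tr, y_tr).score(X_te, y_te), 3))
```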
Knowledge of the noise distribution in diffusion MRI is the centerpiece for quantifying uncertainties arising from the acquisition process. Accurate estimation beyond textbook distributions often requires information about the acquisition process, which is usually not available. We introduce two new automated methods using the moments and maximum likelihood equations of the Gamma distribution to estimate all unknown parameters using only the magnitude data. A rejection step is used to make the framework automatic and robust to artifacts. Simulations were created for two diffusion weightings with parallel imaging. Furthermore, MRI data of a water phantom with different combinations of parallel imaging were acquired. Finally, experiments on freely available datasets are used to assess reproducibility when limited information about the acquisition protocol is available. Additionally, we demonstrate the applicability of the proposed methods for a bias correction and denoising task on an in vivo dataset. A generalized version of the bias correction framework for non-integer degrees of freedom is also introduced. The proposed framework is compared with three other algorithms on datasets from three vendors, employing different reconstruction methods. Simulations showed that assuming a Rician distribution can lead to misestimation of the noise distribution in parallel imaging. Results showed that signal leakage in multiband can also lead to a misestimation of the noise distribution. Repeated acquisitions of in vivo datasets show that the estimated parameters are stable and have lower variability than compared methods. Results show that the proposed methods reduce the appearance of noise at high b-values. The proposed algorithms can estimate both parameters of the noise distribution automatically, are robust to signal leakage artifacts, and perform best when used on acquired noise maps.
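The moments-based estimator mentioned here follows from the Gamma identities mean = k*theta and variance = k*theta^2, giving k = mean^2/variance and theta = variance/mean. A minimal sketch of just this estimator (the paper's full framework additionally uses the maximum likelihood equations and a rejection step for artifact robustness):

```python
import numpy as np

def gamma_moments_fit(samples):
    """Method-of-moments estimate of Gamma(shape k, scale theta):
    mean = k*theta, var = k*theta^2  =>  k = mean^2/var, theta = var/mean."""
    m, v = samples.mean(), samples.var()
    return m * m / v, v / m

rng = np.random.default_rng(0)
x = rng.gamma(shape=3.0, scale=2.0, size=100_000)
k_hat, theta_hat = gamma_moments_fit(x)
print(round(k_hat, 2), round(theta_hat, 2))  # close to 3.0 and 2.0
```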
We consider the twistor description of classical self-dual Einstein gravity in the presence of a defect operator wrapping a certain $\mathbb{CP}^1$. The backreaction of this defect deforms the flat twistor space to that of Eguchi-Hanson space. We show that the celestial chiral algebra of self-dual gravity on the Eguchi-Hanson background is likewise deformed to become the loop algebra of a certain scaling limit of the family of $W(\mu)$-algebras, where the scaling limit is controlled by the radius of the Eguchi-Hanson core. We construct this algebra by computing the Poisson algebra of holomorphic functions on the deformed twistor space, and check this result with a space-time calculation of the leading contribution to the gravitational splitting function. The loop algebra of a general $W(\mu)$-algebra (away from the scaling limit) similarly arises as the celestial chiral algebra of Moyal-deformed self-dual gravity on Eguchi-Hanson space. We also obtain corresponding results for self-dual Yang-Mills.
We present a catalogue of non-nuclear regions containing Wolf-Rayet stars in the metal-rich spiral galaxy M83 (NGC 5236). From a total of 283 candidate regions identified using HeII 4686 imaging with VLT-FORS2, Multi Object Spectroscopy of 198 regions was carried out, confirming 132 WR sources. From this sub-sample, an exceptional content of 1035 +/- 300 WR stars is inferred, with N(WC)/N(WN) approx 1.2, continuing the trend to larger values at higher metallicity amongst Local Group galaxies, and greatly exceeding current evolutionary predictions at high metallicity. Late-type stars dominate the WC population of M83, with N(WC8-9)/N(WC4-7)=9 and WO subtypes absent, consistent with metallicity-dependent WC winds. Equal numbers of late to early WN stars are observed, again in contrast to current evolutionary predictions. Several sources contain large numbers of WR stars. In particular, #74 (alias region 35 from De Vaucouleurs et al.) contains 230 WR stars, and is identified as a Super Star Cluster from inspection of archival HST/ACS images. Omitting this starburst cluster would result in revised statistics of N(WC)/N(WN) approx 1 and N(WC8-9)/N(WC4-7) approx 6 for the `quiescent' disk population. Including recent results for the nucleus and accounting for incompleteness in our spectroscopic sample, we suspect the total WR population of M83 may exceed 3000 stars.
Social Overlays suffer from high message delivery delays due to insufficient routing strategies. Because they limit connections to device pairs whose owners have a mutual trust relationship in real life, they form topologies restricted to a subgraph of the social network of their users. While centralized, highly successful social networking services entail a complete privacy loss for their users, Social Overlays with better performance would represent an ideal private and censorship-resistant communication substrate for the same purpose. Routing in such restricted topologies is facilitated by embedding the social graph into a metric space. Decentralized routing algorithms have to date mainly been analyzed under the assumption of a perfect lattice structure. However, currently deployed embedding algorithms for privacy-preserving Social Overlays cannot achieve a sufficiently accurate embedding, and hence conventional routing algorithms fail. Developing Social Overlays with acceptable performance hence requires better models and enhanced algorithms that guarantee convergence in the presence of local optima with regard to the distance to the target. We suggest a model for Social Overlays that includes inaccurate embeddings and arbitrary degree distributions. We further propose NextBestOnce, a routing algorithm that can achieve polylog routing length despite local optima. We provide analytical bounds on the performance of NextBestOnce assuming a scale-free degree distribution, and furthermore show that its performance can be improved by more than a constant factor when including Neighbor-of-Neighbor information in the routing decisions.
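One way to picture routing that achieves progress despite local optima is greedy forwarding that never revisits a node: when the inaccurate embedding misleads, the message still moves on because visited nodes are excluded. The sketch below is our illustrative reading of that idea, not the paper's exact NextBestOnce algorithm; the graph and coordinates are toy data:

```python
def next_best_once(graph, dist, source, target):
    """Greedy routing in an embedded graph that never revisits a node:
    at each step, forward to the unvisited neighbor closest to the
    target. graph: node -> list of neighbors; dist(u, v): embedding
    distance. Illustrative sketch only."""
    route, visited, cur = [source], {source}, source
    while cur != target:
        cand = [v for v in graph[cur] if v not in visited]
        if not cand:
            return None                        # dead end in this sketch
        cur = min(cand, key=lambda v: dist(v, target))
        visited.add(cur)
        route.append(cur)
    return route

# Toy line graph whose embedding has a local optimum at node 1: both
# neighbors of 1 look farther from the target, so strict greedy descent
# would halt there, while the never-revisit rule pushes through.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
coord = {0: 0.0, 1: 2.5, 2: 5.0, 3: 2.0, 4: 3.0}  # inaccurate embedding
dist = lambda u, v: abs(coord[u] - coord[v])
print(next_best_once(graph, dist, 0, 4))  # [0, 1, 2, 3, 4]
```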
Deep convolutional neural networks (CNNs) for image denoising can effectively exploit rich hierarchical features and have achieved great success. However, many deep CNN-based denoising models utilize the hierarchical features of noisy images uniformly, without paying attention to the more important and useful features, leading to relatively low performance. To address this issue, we design a new Two-stage Progressive Residual Dense Attention Network (TSP-RDANet) for image denoising, which divides the whole denoising process into two sub-tasks to remove noise progressively. Two different attention mechanism-based denoising networks are designed for the two sequential sub-tasks: the residual dense attention module (RDAM) is designed for the first stage, and the hybrid dilated residual dense attention module (HDRDAM) is proposed for the second stage. The proposed attention modules are able to learn appropriate local features through dense connections between different convolutional layers, while irrelevant features are suppressed. The two sub-networks are then connected by a long skip connection to retain the shallow features and enhance the denoising performance. Experiments on seven benchmark datasets verify that, compared with many state-of-the-art methods, the proposed TSP-RDANet obtains favorable results on both synthetic and real noisy images. The code of TSP-RDANet is available at https://github.com/WenCongWu/TSP-RDANet.
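A drastically simplified PyTorch sketch of the two-stage residual design with a long skip connection; the dense connections and attention modules of RDAM/HDRDAM are omitted, and all layer sizes are invented:

```python
import torch
import torch.nn as nn

def block(dilation=1):
    return nn.Sequential(
        nn.Conv2d(32, 32, 3, padding=dilation, dilation=dilation),
        nn.ReLU(inplace=True))

class TwoStageDenoiser(nn.Module):
    """Schematic two-stage residual denoiser: stage one with plain
    convolutions, stage two with dilated convolutions, joined by a long
    skip connection (a strong simplification of TSP-RDANet)."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(1, 32, 3, padding=1)
        self.stage1 = nn.Sequential(block(), block())
        self.stage2 = nn.Sequential(block(2), block(2))
        self.out = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, x):
        f0 = self.stem(x)
        f1 = self.stage1(f0)
        f2 = self.stage2(f1) + f0    # long skip retains shallow features
        return x - self.out(f2)      # predict and subtract the noise

y = torch.randn(1, 1, 64, 64)        # a noisy input patch
print(TwoStageDenoiser()(y).shape)   # torch.Size([1, 1, 64, 64])
```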
Let $n$ be a non-negative integer and $A=\{a_1,\ldots,a_k\}$ be a multi-set with $k$ not necessarily distinct members, where $a_1\leqslant\ldots\leqslant a_k$. We denote by $\Delta(n,A)$ the number of ways to partition $n$ in the form $a_1x_1+\ldots+a_kx_k$, where the $x_i$'s are distinct positive integers and $x_i< x_{i+1}$ whenever $a_i=a_{i+1}$. We give a recursive formula for $\Delta(n,A)$ and some explicit formulas for special cases. Using this notion we solve the non-intersecting circles problem, which asks for the number of ways to draw $n$ non-intersecting circles in a plane regardless of their sizes. The latter also enumerates the number of unlabelled rooted trees with $n+1$ vertices.
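The quantity Delta(n, A) can be checked by direct enumeration straight from its definition. A brute-force Python sketch (the paper's recursive formula is more efficient; this is only a reference implementation):

```python
def Delta(n, A):
    """Count representations n = a_1*x_1 + ... + a_k*x_k with the x_i
    distinct positive integers and x_i < x_{i+1} whenever a_i = a_{i+1}."""
    A = sorted(A)
    k = len(A)

    def dfs(i, remaining, used, last_x):
        if i == k:
            return 1 if remaining == 0 else 0
        count = 0
        # Enforce strict ordering within a run of equal coefficients.
        lo = last_x + 1 if i > 0 and A[i] == A[i - 1] else 1
        for x in range(lo, remaining // A[i] + 1):
            if x not in used:
                count += dfs(i + 1, remaining - A[i] * x, used | {x}, x)
        return count

    return dfs(0, n, frozenset(), 0)

# e.g. representations of 10 as 1*x1 + 2*x2 with distinct positive x1, x2:
# (8,1), (6,2), (4,3), (2,4)  ->  4
print(Delta(10, [1, 2]))  # 4
```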
We discuss the scenario of light neutralino dark matter in the minimal supersymmetric standard model, which is motivated by the results of some of the direct detection experiments --- DAMA, CoGeNT, and CRESST. We update our previous analysis with the latest results of the LHC. We show that the new LHC constraints disfavour the parameter region that can reproduce the results of DAMA and CoGeNT.
We study the problem in which a firm sets prices for products based on transaction data, i.e., which product past customers chose from an assortment and the historical prices that they observed. Our approach does not impose a model on the distribution of the customers' valuations and only assumes, instead, that purchase choices satisfy incentive-compatibility constraints. The individual valuation of each past customer can then be encoded as a polyhedral set, and our approach maximizes the worst-case revenue assuming that new customers' valuations are drawn from the empirical distribution implied by the collection of such polyhedra. We show that the optimal prices in this setting can be approximated to any arbitrary precision by solving a compact mixed-integer linear program. Moreover, we study the single-product case and relate it to the traditional model-based approach. We also design three approximation strategies that are of low computational complexity and interpretable. Comprehensive numerical studies based on synthetic and real data suggest that our pricing approach is uniquely beneficial when the historical data has a limited size or is susceptible to model misspecification.
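In the single-product case the customer polyhedra collapse to intervals, which makes the worst-case logic easy to illustrate: a customer is only guaranteed to buy at price p if the lower end of their valuation interval is at least p. A toy sketch under that simplification (the intervals are invented data; the general multi-product case requires the mixed-integer program):

```python
def robust_price(intervals):
    """Single-product toy version: each past customer's valuation is only
    known to lie in [lo, hi] (the 1-D analogue of the valuation polyhedra).
    Worst case, a customer buys at price p only if lo >= p, so candidate
    prices are the lower bounds; pick the one maximizing p * #{lo_i >= p}."""
    candidates = sorted({lo for lo, _ in intervals})

    def worst_case_revenue(p):
        return p * sum(lo >= p for lo, _ in intervals)

    return max(candidates, key=worst_case_revenue)

intervals = [(4, 9), (6, 7), (6, 12), (10, 15)]
p = robust_price(intervals)
print(p, p * sum(lo >= p for lo, _ in intervals))  # 6 18
```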
Since the technological breakthrough prompted by the inception of light emitting diodes based on III-nitrides, these material systems have emerged as strategic semiconductors not only for the lighting of the future, but also for the new generation of high-power electronic and spintronic devices. While III-nitride optoelectronics in the visible and ultraviolet spectral range is widely established, all-nitride and In-free efficient devices in the near-infrared (NIR) are still wanted. Here, through a comprehensive protocol of design, modeling, epitaxial growth and in-depth characterization, we develop Al$_x$Ga$_{1-x}$N:Mn/GaN NIR distributed Bragg reflectors and we show their efficiency in combination with GaN:(Mn,Mg) layers containing Mn-Mg$_{k}$ complexes optically active in the telecommunication range of wavelengths.
We point out a unique mechanism to produce the relic abundance for glueball dark matter from a gauged $SU(N)_d$ hidden sector which is bridged to the standard model sector through heavy vectorlike quarks colored under gauge interactions from both sides. A necessary ingredient of our assumption is that the vectorlike quarks, produced either thermally or non-thermally, are abundant enough to dominate the universe for some time in the early universe. They later undergo dark color confinement and form unstable vectorlike-quarkonium states which annihilate and decay, reheating the visible and dark sectors. The ratio of entropy dumped into the two sectors and the final energy budget in the dark glueballs are determined only by low-energy parameters, including the intrinsic scale of the dark $SU(N)_d$, $\Lambda_d$, and the number of dark colors, $N_d$, but depend weakly on parameters in the ultraviolet such as the vectorlike quark mass or the initial condition. We call this a cosmic selection rule for the glueball dark matter relic density.
We establish a new representation of the infinite hierarchy of Poisson brackets (PB) for the open Toda lattice in terms of its spectral curve. For the classical PB we give a representation in the form of a contour integral of a special Abelian differential (meromorphic one-form) on the spectral curve. All higher brackets of the infinite hierarchy are obtained by multiplication of the one-form by a power of the spectral parameter.
In this talk I review how a non-zero cosmological constant $\Lambda$ affects the propagation of gravitational waves and their detection in pulsar timing arrays (PTA). If $\Lambda\neq 0$ it turns out that the waves are anharmonic in cosmological Friedmann-Robertson-Walker coordinates, and although the amount of anharmonicity is very small it leads to potentially measurable effects. The timing residuals induced by gravitational waves in PTA would show a peculiar angular dependence, with a marked enhancement around a particular value of the angle subtended by the source and the pulsars. This angle depends mainly on the actual value of the cosmological constant and the distance to the source. Preliminary estimates indicate that the enhancement can be rather pronounced for supermassive black hole mergers, and in fact it could facilitate the first direct detection of gravitational waves while at the same time representing a `local' measurement of $\Lambda$.
This is the second of three articles on the topic of truncation as an operation on divisible abelian lattice-ordered groups, or simply $\ell$-groups. This article uses the notation and terminology of the first article and assumes its results. In particular, we refer to an $\ell$-group with truncation as a truncated $\ell$-group, or simply a trunc, and denote the category of truncs with truncation morphisms by $\mathbf{AT}$. Here we develop the analog for $\mathbf{AT}$ of Madden's pointfree representation for $\mathbf{W}$, the category of archimedean $\ell$-groups with designated order unit. More explicitly, for every archimedean trunc $A$ there is a regular Lindel\"{o}f frame $L$ equipped with a designated point $\ast : L \rightarrow 2$, a subtrunc $\widehat{A}$ of $\mathcal{R}_{0}L$, the trunc of pointed frame maps $\mathcal{O}_{0}\mathbb{R}\rightarrow L$, and a trunc isomorphism $A\rightarrow\widehat{A}$. A pointed frame map is just a frame map between frames which commutes with their designated points, and $\mathcal{O}_{0}\mathbb{R}$ stands for the pointed frame which is the topology $\mathcal{O}\mathbb{R}$ of the real numbers equipped with the frame map of the insertion $0 \to \mathbb{R}$. $\left( L,\ast\right) $ is unique up to pointed frame isomorphism with respect to its properties. Finally, we reprove an important result from the first article, namely that $\mathbf{W}$ is a non-full monoreflective subcategory of $\mathbf{AT}$.
A bar framework (G,p) in dimension r is a graph G whose vertices are points p^1,...,p^n in R^r and whose edges are line segments between pairs of these points. Two frameworks (G,p) and (G,q) are equivalent if each edge of (G,p) has the same (Euclidean) length as the corresponding edge of (G,q). A pair of non-adjacent vertices i and j of (G,p) is universally linked if ||p^i-p^j||=||q^i-q^j|| in every framework (G,q) that is equivalent to (G,p). Framework (G,p) is universally rigid iff every pair of non-adjacent vertices of (G,p) is universally linked. In this paper, we present a unified treatment of the universal rigidity problem based on the geometry of spectrahedra. A spectrahedron is the intersection of the positive semidefinite cone with an affine space. This treatment makes it possible to tie together some known, yet scattered, results and to derive new ones. Among the new results presented in this paper are: (i) a sufficient condition for a given pair of non-adjacent vertices of (G,p) to be universally linked; (ii) a new, weaker, sufficient condition for a framework (G,p) to be universally rigid, thus strengthening the existing known condition. An interpretation of this new condition in terms of the Strong Arnold Property and transversal intersection is also presented.
For free boundary problems on Euclidean spaces, the monotonicity formulas of Alt-Caffarelli-Friedman and Caffarelli-Jerison-Kenig are cornerstones for the regularity theory as well as the existence theory. In this article we establish the analogs of these results for the Laplace-Beltrami operator on Riemannian manifolds. As an application we show that our monotonicity theorems can be employed to prove the Lipschitz continuity for the solutions of a general class of two-phase free boundary problems on Riemannian manifolds.
Carbon footprint quantification is key to well-informed decision making over carbon reduction potential, both for individuals and for companies. Many carbon footprint case studies for products and services have been circulated recently. Due to the complex relationships within each scenario, however, the underlying assumptions often are difficult to understand. Also, re-using and adapting a scenario to local or individual circumstances is not a straightforward task. To overcome these challenges, we propose an open and linked data model for carbon footprint scenarios which improves data quality and transparency by design. We demonstrate the implementation of our idea with a web-based data interpreter prototype.
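As a flavor of what such an open, linked scenario could look like, here is a small RDF graph built with Python's rdflib; the cf: vocabulary and all property names below are hypothetical illustrations, not the paper's actual data model:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, XSD

# Hypothetical vocabulary for carbon footprint scenarios.
CF = Namespace("http://example.org/carbon-footprint#")

g = Graph()
g.bind("cf", CF)

scenario = CF["commute-by-car"]
g.add((scenario, RDF.type, CF.Scenario))
g.add((scenario, RDFS.label, Literal("Daily commute by car")))
g.add((scenario, CF.distanceKm, Literal(30.0, datatype=XSD.double)))
g.add((scenario, CF.emissionFactorKgPerKm, Literal(0.18, datatype=XSD.double)))
# Making the assumption explicit and linkable is the point: another user
# can override cf:emissionFactorKgPerKm for their local circumstances.
g.add((scenario, CF.dailyFootprintKg, Literal(30.0 * 0.18, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```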
Considering that relational databases have powerful capabilities in handling security, user authentication, query optimization, etc., several commercial and academic frameworks reuse relational databases to store and query semi-structured data (e.g., XML, JSON) or graph data (e.g., RDF, property graphs). However, these works concentrate on managing one of the above data models with RDBMSs; that is, they do not exploit the underlying tools to automatically generate the relational schema for storing multi-model data. In this demonstration, we present a novel reinforcement learning-based tool called MORTAL. Specifically, given multi-model data containing different data models and a set of queries, it can automatically design a relational schema to store these data while achieving great query performance. To demonstrate this clearly, the demonstration is centered around the following modules: generating an initial state based on the loaded multi-model data, influencing the learning process by setting parameters, controlling the generated relational schema through semantic constraints, improving the query performance of the relational schema by specifying queries, and a highly interactive interface for showing query performance and storage consumption as users adjust the generated relational schema.
We study the preparation of entangled pure Gaussian states via reservoir engineering. In particular, we consider a chain consisting of $(2\aleph+1)$ quantum harmonic oscillators where the central oscillator of the chain is coupled to a single reservoir. We then completely parametrize the class of $(2\aleph+1)$-mode pure Gaussian states that can be prepared by this type of quantum harmonic oscillator chain. This parametrization allows us to determine the steady-state entanglement properties of such quantum harmonic oscillator chains.
We analyze the abstract representations of the groups of rational points of even-dimensional quasi-split special unitary groups associated with quadratic field extensions. We show that, under certain assumptions, such representations have a standard description, as predicted by a conjecture of Borel and Tits. Our method extends the approach introduced by the first author to study abstract representations of Chevalley groups and is based on the construction and analysis of a certain algebraic ring associated to a given abstract representation.
We investigate the smallest set of requirements for inducing non-Markovian dynamics in a collisional model of open quantum systems. This is done by introducing correlations in the state of the environment and analyzing the divisibility of the quantum maps from consecutive time steps. Our model and results serve as a platform for the microscopic study of non-Markovian behavior as well as an example of a simple scenario of non-Markovianity with purely contractive maps, i.e. with no back-flow of information between system and environment.
We propose a data-driven learned sky model, which we use for outdoor lighting estimation from a single image. As no large-scale dataset of images and their corresponding ground truth illumination is readily available, we use complementary datasets to train our approach, combining the vast diversity of illumination conditions of SUN360 with the radiometrically calibrated and physically accurate Laval HDR sky database. Our key contribution is to provide a holistic view of both lighting modeling and estimation, solving both problems end-to-end. From a test image, our method can directly estimate an HDR environment map of the lighting without relying on analytical lighting models. We demonstrate the versatility and expressivity of our learned sky model and show that it can be used to recover plausible illumination, leading to visually pleasant virtual object insertions. To further evaluate our method, we capture a dataset of HDR 360{\deg} panoramas and show through extensive validation that we significantly outperform previous state-of-the-art.
Efficiencies of organic solar cells have practically doubled since the development of non-fullerene acceptors (NFAs). However, generic chemical design rules for donor-NFA combinations are still needed. Such rules are proposed here by analyzing inhomogeneous electrostatic fields at the donor-acceptor interface. It is shown that an acceptor-donor-acceptor molecular architecture, together with molecular alignment parallel to the interface, results in energy-level bending that destabilizes the charge transfer state, thus promoting its dissociation into free charges. By analyzing a series of PCE10:NFA solar cells, with NFAs including Y6, IEICO, and ITIC, as well as their halogenated derivatives, it is suggested that a molecular quadrupole moment of ca. 75 Debye Å balances the losses in the open-circuit voltage and the gains in charge generation efficiency.
Federated Learning (FL), a distributed machine learning technique, has recently experienced tremendous growth in popularity due to its emphasis on user data privacy. However, the distributed computations of FL can result in constrained communication and drawn-out learning processes, necessitating optimization of the client-server communication cost. The fraction of clients chosen per round and the number of local training passes are two hyperparameters that have a significant impact on FL performance. Due to different training preferences across various applications, it can be difficult for FL practitioners to manually select such hyperparameters. In this paper, we introduce FedAVO, a novel FL algorithm that enhances communication effectiveness by selecting the best hyperparameters leveraging the African Vulture Optimizer (AVO). Our research demonstrates that the communication costs associated with FL operations can be substantially reduced by adopting AVO for FL hyperparameter adjustment. Through extensive evaluations of FedAVO on benchmark datasets, we show that FedAVO achieves significant improvements in model accuracy and communication rounds, particularly with realistic cases of non-IID datasets. Our extensive evaluation of the FedAVO algorithm identifies the optimal hyperparameters that are appropriately fitted for the benchmark datasets, eventually increasing global model accuracy by 6% in comparison to state-of-the-art FL algorithms (such as FedAvg, FedProx, FedPSO, etc.).
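Schematically, the tuning loop searches over the two named hyperparameters, the client fraction and the number of local epochs, scoring each candidate by an accuracy-versus-communication objective. The sketch below substitutes a generic population-based search and a toy closed-form objective for the actual AVO and FL training, so it only illustrates the shape of the procedure, not FedAVO itself:

```python
import random

random.seed(0)

def fl_cost(client_fraction, local_epochs):
    """Toy stand-in for the true objective (accuracy vs. communication);
    in the paper this would be measured by actually running FL rounds."""
    acc = 1 - 0.5 / (1 + 10 * client_fraction * local_epochs ** 0.5)
    comm = 0.02 * client_fraction * local_epochs
    return -(acc - comm)

# Generic population-based search as a stand-in for the African Vulture
# Optimizer: keep a population, perturb members toward the current best.
pop = [(random.uniform(0.05, 1.0), random.randint(1, 20)) for _ in range(12)]
for _ in range(30):
    best = min(pop, key=lambda h: fl_cost(*h))
    pop = [(min(1.0, max(0.05, c + 0.5 * (best[0] - c) + random.gauss(0, 0.05))),
            max(1, min(20, round(e + 0.5 * (best[1] - e) + random.gauss(0, 1)))))
           for c, e in pop]

best = min(pop, key=lambda h: fl_cost(*h))
print("client fraction = %.2f, local epochs = %d" % best)
```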
Inspired by human learning, researchers have proposed ordering examples during training based on their difficulty. Both curriculum learning, exposing a network to easier examples early in training, and anti-curriculum learning, showing the most difficult examples first, have been suggested as improvements to standard i.i.d. training. In this work, we set out to investigate the relative benefits of ordered learning. We first investigate the \emph{implicit curricula} resulting from architectural and optimization bias and find that samples are learned in a highly consistent order. Next, to quantify the benefit of \emph{explicit curricula}, we conduct extensive experiments over thousands of orderings spanning three kinds of learning: curriculum, anti-curriculum, and random-curriculum -- in which the size of the training dataset is dynamically increased over time, but the examples are randomly ordered. We find that for standard benchmark datasets, curricula have only marginal benefits, and that randomly ordered samples perform as well as or better than curricula and anti-curricula, suggesting that any benefit is entirely due to the dynamic training set size. Inspired by common use cases of curriculum learning in practice, we investigate the role of a limited training time budget and noisy data in the success of curriculum learning. Our experiments demonstrate that curriculum, but not anti-curriculum, learning can indeed improve performance either with a limited training time budget or in the presence of noisy data.
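The three orderings compared here differ only in how examples are sorted; the dynamic training-set-size schedule is shared by all of them. A small sketch making that separation explicit, with random stand-in difficulty scores:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
difficulty = rng.random(n)                         # stand-in per-example scores

orderings = {
    "curriculum": np.argsort(difficulty),          # easiest first
    "anti-curriculum": np.argsort(-difficulty),    # hardest first
    "random-curriculum": rng.permutation(n),       # random order
}

def batches(order, steps=50, batch_size=32):
    """All three orderings share the same dynamic dataset-size schedule:
    at step t, only the first pool(t) examples of the ordering are sampled."""
    for t in range(steps):
        pool = max(batch_size, int(n * (t + 1) / steps))
        yield rng.choice(order[:pool], size=batch_size)

for name, order in orderings.items():
    first = next(batches(order))
    print(name, "mean difficulty of first batch: %.2f" % difficulty[first].mean())
```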
This study investigates a high-Q-factor spiral inductor fabricated by the CMOS (complementary metal oxide semiconductor) process and a post-process. The spiral inductor is manufactured on a silicon substrate using the 0.35 micrometer CMOS process. In order to reduce the substrate loss and enhance the Q-factor of the inductor, the silicon substrate under the inductor is removed using a post-process. The post-process uses RIE (reactive ion etching) to etch the sacrificial layer of silicon dioxide, and then TMAH (tetramethyl ammonium hydroxide) is employed to remove the underlying silicon substrate and obtain the suspended spiral inductor. The advantage of the post-process is that it is compatible with the CMOS process. An Agilent 8510C network analyzer and a Cascade probe station are used to measure the performance of the spiral inductor. Experiments indicate that the spiral inductor has a Q-factor of 15 at 11 GHz, an inductance of 4 nH at 25.5 GHz, and a self-resonance frequency of about 27 GHz.
Distribution functions in hard processes can be described by quark-quark correlators, nonlocal matrix elements of quark fields. Color gauge invariance requires inclusion of appropriate gauge links in these correlators. For transverse momentum dependent distribution functions, in particular important for describing T-odd effects in hard processes, we find that new link structures containing loops can appear in abelian and non-abelian theories. In transverse moments, e.g. measured in azimuthal asymmetries, these loops may enhance the contribution of gluonic poles. Some explicit results for the link structure are given in high-energy leptoproduction and hadron-hadron scattering.
The ergodic decomposition of a family of Hua-Pickrell measures on the space of infinite Hermitian matrices is studied. Firstly, we show that the ergodic components of Hua-Pickrell probability measures have no Gaussian factors; this extends a result of Alexei Borodin and Grigori Olshanski. Secondly, we show that the sequence of asymptotic eigenvalues of Hua-Pickrell random matrices is balanced in a certain sense and has a "principal value" that coincides with the $\gamma_1$ parameter of the ergodic components. This allows us to complete the program of Borodin and Olshanski on the description of the ergodic decomposition of Hua-Pickrell probability measures. Finally, we extend the aforesaid results to the case of infinite Hua-Pickrell measures. By using the theory of $\sigma$-finite infinite determinantal measures recently introduced by A. I. Bufetov, we are able to identify the ergodic decomposition of Hua-Pickrell infinite measures with certain explicit $\sigma$-finite determinantal measures on the space of point configurations in $\mathbb{R}^*$. The paper resolves a problem of Borodin and Olshanski.
We consider a theory of scalar and spinor fields, interacting through Yukawa and phi^4 interactions, with Lorentz-violating operators included in the Lagrangian. We compute the leading quantum corrections in this theory. The renormalizability of the theory is explicitly shown up to one-loop order. In the pure scalar sector, the calculations can be generalized to higher orders and to include finite terms, because the theory can be solved in terms of its Lorentz-invariant version.
The purpose of this exposition is to compare the constructions of classical nonsymmetric operads (and their algebras) to that of the globular operads of Leinster and Batanin. It is hoped that, through this comparison, understanding algebras for globular operads can be made more intuitive and approachable. We begin by giving a description of the construction of the classical tautological, or endomorphism, operad $taut(X)$ on a set $X$. We then describe how globular operads are a strict generalization of classical operads. From this perspective a description is given of the construction for the tautological globular operad $Taut(\mathcal{X})$ on a globular set $\mathcal{X}$ by way of describing the internal hom functor for the monoidal category $\boldsymbol{Col}$, of collections and collection homomorphisms, with respect to the monoidal composition tensor product used to define globular operads, all the while emphasizing comparisons to the analogous construction in the category of graded sets.
We study the large deviations for Cox-Ingersoll-Ross (CIR) processes with small noise and state-dependent fast switching via associated Hamilton-Jacobi equations. Owing to the separation of time scales, as the noise goes to $0$ and the rate of switching goes to $\infty$, we obtain a limit equation characterized by the averaging principle. Moreover, we prove a large deviation principle (LDP) with an action-integral form rate function to describe the asymptotic behavior of such systems. The new ingredient is establishing the comparison principle in this singular context. The proof is carried out using the nonlinear semigroup method from Feng and Kurtz's book.
Extracting moving objects from a video sequence and estimating the background of each individual image are fundamental issues in many practical applications such as visual surveillance, intelligent vehicle navigation, and traffic monitoring. Recently, some methods have been proposed to detect moving objects in a video via low-rank approximation and sparse outliers, where the background is modeled with the computed low-rank component of the video and the foreground objects are detected as the sparse outliers in the low-rank approximation. All of these existing methods work in a batch manner, preventing them from being applied in real-time and long-duration tasks. In this paper, we present an online sequential framework, namely contiguous outliers representation via online low-rank approximation (COROLA), to detect moving objects and learn the background model at the same time. We also show that our model can detect moving objects with a moving camera. Our experimental evaluation uses simulated data and real public datasets and demonstrates the superior performance of COROLA in terms of both accuracy and execution time.
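To make the online setting concrete, here is a schematic per-frame background/foreground split that maintains a low-rank subspace over a sliding buffer of frames. This is a simplified stand-in, not COROLA's actual contiguous-outlier model, and all parameters are illustrative:

```python
import numpy as np

def online_background_subtraction(frames, rank=3, buffer_size=20, tau=3.0):
    """Schematic online low-rank background model: keep a sliding buffer
    of vectorized frames, track a rank-`rank` subspace via an SVD of the
    buffer, and flag pixels whose residual exceeds `tau` robust standard
    deviations as moving objects."""
    buf = []
    for f in frames:                      # f: 1-D vectorized frame
        buf.append(f)
        buf = buf[-buffer_size:]
        U, _, _ = np.linalg.svd(np.column_stack(buf), full_matrices=False)
        B = U[:, :rank]                   # current background subspace
        background = B @ (B.T @ f)        # project frame onto it
        resid = f - background
        # Robust scale estimate (median absolute deviation).
        sigma = 1.4826 * np.median(np.abs(resid - np.median(resid))) + 1e-9
        yield background, np.abs(resid) > tau * sigma

# Toy sequence: static ramp background plus a moving bright blob.
T, D = 40, 100
frames = []
for t in range(T):
    f = np.linspace(0, 1, D)
    f[(2 * t) % D:(2 * t) % D + 5] += 4.0  # the moving "object"
    frames.append(f)

for bg, fg in online_background_subtraction(frames):
    pass
print("foreground pixels in last frame:", np.flatnonzero(fg))
```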