A Majorana mass term for the $\tau$ neutrino would induce neutrino-antineutrino mixing and thereby a process which violates fermion number by two units. We study the possibility of distinguishing between a massive Majorana and a Dirac $\tau$ neutrino by measuring fermion number violating processes in a deep inelastic scattering experiment $\nu p \rightarrow \tau X$. We show that, if the neutrino beam is obtained from the decay of highly energetic pions, the probability of obtaining "wrong sign" $\tau$ leptons is suppressed by a factor ${\cal{O}}(m_{\nu_{\tau}}^2 \theta^2/m_{\mu}^2)$ instead of the naively expected suppression factor $\theta^2 m_{\nu_{\tau}}^2/E_{\nu}^2$, where $E_{\nu}$ is the $\tau$ neutrino energy, $m_{\nu_{\tau}}$ and $m_{\mu}$ are the $\tau$-neutrino and muon masses, respectively, and $\theta$ is the $\nu_{\mu}$-$\nu_{\tau}$ mixing angle. If $m_{\nu_{\tau}}$ is of the order of 10 MeV and $\theta$ is of the order of $0.01 - 0.04$ (the present bounds are $m_{\nu_{\tau}} < 35$ MeV and $\theta < 0.04$), the next round of experiments may be able to distinguish between Majorana and Dirac $\tau$-neutrinos.
In this paper, we propose and analyze a two-point gradient method for solving inverse problems in Banach spaces which is based on the Landweber iteration and an extrapolation strategy. The method allows the use of non-smooth penalty terms, including the $L^1$ and total variation-like penalty functionals, which are significant for reconstructing special features of solutions, such as sparsity and piecewise constancy, in practical applications. The design of the method involves the choice of the step sizes and the combination parameters, which is carefully discussed. Numerical simulations are presented to illustrate the effectiveness of the proposed method.
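As a concrete illustration of the kind of iteration involved, the following is a minimal numerical sketch of a two-point gradient step: a Landweber update combined with a Nesterov-style extrapolation, with a soft-thresholding step standing in for an $L^1$ penalty. It is written in a simple Hilbert-space setting rather than the Banach-space setting of the paper, and all problem data, step sizes, and combination parameters are illustrative choices, not those analyzed here.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal map of the l1 penalty (stand-in for non-smooth penalties)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def two_point_gradient(A, y, beta=1e-3, n_iter=500):
    """Two-point gradient sketch for Ax = y with an l1 penalty."""
    x_prev = x = np.zeros(A.shape[1])
    mu = 1.0 / np.linalg.norm(A, 2) ** 2                 # step size from ||A||^2
    for k in range(n_iter):
        alpha = (k - 1.0) / (k + 2.0) if k > 0 else 0.0  # combination parameter
        z = x + alpha * (x - x_prev)                     # two-point extrapolation
        grad = A.T @ (A @ z - y)                         # Landweber direction
        x_prev, x = x, soft_threshold(z - mu * grad, mu * beta)
    return x

# Toy ill-conditioned problem with a sparse ground truth
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30)) @ np.diag(np.linspace(1.0, 1e-3, 30))
x_true = np.zeros(30); x_true[:5] = 1.0
print(np.linalg.norm(two_point_gradient(A, A @ x_true) - x_true))
```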
The present work proposes an inflow turbulence generation strategy using deep learning methods. This is achieved with the help of an autoencoder architecture with two different types of operational layers in the latent space: a fully connected multi-layer perceptron and convolutional long short-term memory layers. A wall-resolved large eddy simulation of a turbulent channel flow at a high Reynolds number is performed to create a large database of instantaneous snapshots of turbulent flow-fields used to train the neural networks. The training is performed with sequences of instantaneous snapshots so as to learn the snapshot at the next time instant. In the present work, velocity and pressure fluctuations are used for training in order to produce the same quantities at the next time instant. Within the autoencoders, different methods such as average pooling and strided convolutions are tested for the reduction of spatial dimensions. For the convolutional neural networks, the physical boundary conditions are implemented in the form of symmetric as well as periodic paddings. A priori simulations are performed with the trained deep learning models to check the accuracy of the turbulence statistics produced. It is found that the use of convolutional long short-term memory layers in the latent space improves the quality of the statistics, although stability issues at longer times are observed. Though instantaneous snapshots of the target flow are required for training, these a priori simulations suggest that deep learning methods for generating inflow turbulence are a viable alternative to existing methods.
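To make the architecture concrete, here is a minimal Keras sketch of an autoencoder with a convolutional LSTM latent space that maps a sequence of snapshots to the next snapshot. The field dimensions, filter counts, sequence length, and the use of standard "same" padding (in place of the symmetric/periodic paddings described above) are illustrative assumptions, not the configuration used in this work.

```python
import tensorflow as tf
from tensorflow.keras import layers

SEQ, H, W, C = 8, 64, 64, 4   # hypothetical: 8 snapshots of (u, v, w, p) fields

inp = layers.Input(shape=(SEQ, H, W, C))
# Encoder: strided convolutions reduce spatial dimensions (per snapshot)
x = layers.TimeDistributed(layers.Conv2D(16, 3, strides=2, padding="same",
                                         activation="relu"))(inp)
x = layers.TimeDistributed(layers.Conv2D(32, 3, strides=2, padding="same",
                                         activation="relu"))(x)
# Latent space: ConvLSTM learns the temporal evolution of the encoded fields
x = layers.ConvLSTM2D(32, 3, padding="same", return_sequences=False)(x)
# Decoder: transposed convolutions restore the spatial dimensions
x = layers.Conv2DTranspose(16, 3, strides=2, padding="same",
                           activation="relu")(x)
out = layers.Conv2DTranspose(C, 3, strides=2, padding="same")(x)

model = tf.keras.Model(inp, out)   # maps a sequence to the next snapshot
model.compile(optimizer="adam", loss="mse")
model.summary()
```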
In the paper we introduce the notion of a key relation, which is similar to the notion of a critical relation introduced by Keith A. Kearnes and \'Agnes Szendrei. All clones on finite sets can be defined by key relations alone. In addition, there is a nice description of all key relations on two elements: these are exactly the relations that can be defined as a disjunction of linear equations. We show that, in general, key relations do not have such a nice description. Nevertheless, we obtain a nice characterization of all key relations preserved by a weak near-unanimity function, which is presented in the paper.
Physics-informed neural networks (PINNs) are a new tool for solving boundary value problems by defining loss functions of neural networks based on governing equations, boundary conditions, and initial conditions. Recent investigations have shown that when designing loss functions for many engineering problems, using first-order derivatives and combining equations from both strong and weak forms can lead to much better accuracy, especially in the presence of heterogeneity and variable jumps in the domain. This new approach is called the mixed formulation for PINNs, which takes ideas from the mixed finite element method. In this method, the PDE is reformulated as a system of equations where the primary unknowns are the fluxes or gradients of the solution, and the secondary unknown is the solution itself. In this work, we propose applying the mixed formulation to solve multi-physical problems, specifically a stationary thermo-mechanically coupled system of equations. Additionally, we discuss both sequential and fully coupled unsupervised training and compare their accuracy and computational cost. To improve the accuracy of the network, we incorporate hard boundary constraints to ensure valid predictions. We then investigate how different optimizers and architectures affect accuracy and efficiency. Finally, we introduce a simple approach for parametric learning that is similar to transfer learning. This approach combines data and physics to address the limitations of PINNs regarding computational cost and improves the network's ability to predict the response of the system for unseen cases. The outcomes of this work will be useful for many other engineering applications where deep learning is employed on multiple coupled systems of equations for fast and reliable computations.
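As a sketch of the mixed idea, one can train two networks using only first-order derivatives, with the constitutive law and the balance law each contributing a residual. The example below is a hypothetical one-dimensional steady heat-conduction stand-in, not the thermo-mechanically coupled system studied here, and it omits the hard boundary constraints discussed above.

```python
import torch

# Hypothetical 1D steady heat conduction -d/dx(k dT/dx) = f as a stand-in
# for the coupled multi-physical system described above.
net_T = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))   # primary field T(x)
net_q = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))   # flux field q(x)

k, f = 1.0, 1.0
x = torch.rand(128, 1, requires_grad=True)

def mixed_loss():
    T, q = net_T(x), net_q(x)
    # Only first-order derivatives appear, as in the mixed formulation
    dT = torch.autograd.grad(T.sum(), x, create_graph=True)[0]
    dq = torch.autograd.grad(q.sum(), x, create_graph=True)[0]
    res_flux = q + k * dT          # constitutive law: q = -k dT/dx
    res_bal = dq - f               # balance law: dq/dx = f
    return (res_flux ** 2).mean() + (res_bal ** 2).mean()

opt = torch.optim.Adam(list(net_T.parameters()) + list(net_q.parameters()),
                       lr=1e-3)
for _ in range(500):
    opt.zero_grad(); loss = mixed_loss(); loss.backward(); opt.step()
```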
We prove Conjecture F from [VW12] which states that the complements of closures of certain strata of the symmetric power of a smooth irreducible complex variety exhibit rational homological stability. Moreover, we generalize this conjecture to the case of connected manifolds of dimension at least 2 and give an explicit homological stability range.
The main aim of our study is to understand the nature of some conventional and non-conventional mesonic states by applying effective QFT models. We start from relativistic Lagrangians containing a unique $q\bar{q}$ seed state which is strongly coupled to the low-mass decay products of the original state. We find that some states may appear as dynamically generated companion poles of the heavier $q\bar{q}$ mesons. In particular, we show that $K^*_0(700)$ is a companion pole of the well-known $K^*_0(1430)$ resonance, $X(3872)$ emerges as a (virtual) companion pole of $\chi_{c1}(2P)$, and the puzzling $Y(4008)$ is not a real state, but a spurious enhancement which appears when studying the state $\psi(4040)$.
The Ripper algorithm is designed to generate rule sets for large datasets with many features. However, the algorithm has been shown to struggle with classification performance when the quality of the data deteriorates as a result of increasing amounts of missing data. In this paper, feature selection is used to help improve the classification performance of the Ripper model. Principal component analysis and evidence automatic relevance determination techniques are used to improve the performance, and a comparison is made to see which technique helps the algorithm improve the most. Training datasets with completely observable data were used to construct the model, and testing datasets with missing values were used for measuring accuracy. The results showed that principal component analysis is the better feature selection technique for improving the classification performance of Ripper.
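A minimal sketch of this experimental setup using scikit-learn: PCA as the feature-selection front end, a decision tree as a stand-in classifier (scikit-learn ships no RIPPER implementation), training on fully observed data, and testing on data degraded with imputed missing values. The dataset and missingness parameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier  # stand-in for RIPPER

# Fully observed training data; test data degraded with missing values
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, y_train = X[:800], y[:800]
X_test, y_test = X[800:].copy(), y[800:]

rng = np.random.default_rng(0)
mask = rng.random(X_test.shape) < 0.2          # 20% missing entries
X_test[mask] = np.nan
X_test = np.where(np.isnan(X_test), X_train.mean(axis=0), X_test)  # impute

model = make_pipeline(PCA(n_components=5),
                      DecisionTreeClassifier(random_state=0))
model.fit(X_train, y_train)
print("accuracy with PCA front end:", model.score(X_test, y_test))
```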
The collision of a quantum Gaussian wave packet with a square barrier is solved explicitly in terms of known functions. The obtained formula is suitable for performing fast calculations or asymptotic analysis. It also provides physical insight since the description of different regimes and collision phenomena typically requires only some of the terms.
We show that every gravitational instanton is an SU(2) Yang-Mills instanton on a Ricci-flat four-manifold, although the reverse is not necessarily true. It is shown that gravitational instantons satisfy exactly the same self-duality equation as SU(2) Yang-Mills instantons on the Ricci-flat manifold determined by the gravitational instantons themselves. We explicitly check the correspondence with several examples and discuss their topological properties.
This paper shows that structured transmission schemes are a good choice for secret communication over interference networks with an eavesdropper. Structured transmission is shown to exploit channel asymmetries and thus perform better than randomly generated codebooks for such channels. For a class of interference channels, we show that an equivocation sum-rate that is within two bits of the maximum possible legitimate communication sum-rate is achievable using lattice codes.
Using the EVEREST photometry pipeline, we have identified 74 candidate ultra-short-period planets (orbital period P<1 d) in the first half of the K2 data (Campaigns 0-8 and 10). Of these, 33 candidates have not previously been reported. A systematic search for additional transiting planets found 13 new multi-planet systems, doubling the number known and representing a third (32%) of USPs. We also identified 30 companions, which have periods from 1.4 to 31 days (median 5.5 d). A third (36 of 104) of the candidate USPs and companions have been statistically validated or confirmed, 10 for the first time, including 7 USPs. Almost all candidates, and all validated planets, are small (radii R_p<=3 R_E) with a median radius of R_p=1.1 R_E; the validated and confirmed candidates have radii between 0.4 R_E and 2.4 R_E and periods from P=0.18 to 0.96 d. The lack of candidate (a) ultra-hot Jupiters (R_p>10 R_E) and (b) short-period desert (3<=R_p<=10 R_E) planets suggests that both populations are rare, although our survey may have missed some of the very deepest transits. These results also provide strong evidence that we have not reached a lower limit on the distribution of planetary radius values for planets in close proximity to a star, and suggest that additional improvements in photometry techniques would yield yet more ultra-short-period planets. The large fraction of USPs in known multi-planet systems supports origins models that involve dynamical interactions with exterior planets coupled to tidal decay of the USP orbits.
It is confirmed rigorously that the Killing-Cauchy horizons, which sometimes occur in space-times representing the collision and subsequent interaction of plane gravitational waves in a Minkowski background, are unstable with respect to bounded perturbations of the initial waves, at least for the case in which the initial waves have constant aligned polarizations.
We suggest that the Higgs could be discovered at the Tevatron or the LHC (perhaps at the LHCb detector) through decays with one or more substantially displaced vertices from the decay of new neutral particles. This signal may occur with a small but measurable branching fraction in the recently-described ``hidden valley'' models, hep-ph/0604261; weakly-coupled models with multiple scalars, including those of hep-ph/0511250, can also provide such signals, potentially with a much larger branching fraction. This decay channel may extend the Higgs mass reach for the Tevatron. Unusual combinations of b jets, lepton pairs and/or missing energy may accompany this signal.
In recent years, machine learning models have rapidly become better at generating clinical consultation notes; yet, there is little work on how to properly evaluate the generated consultation notes to understand the impact they may have on both the clinician using them and the patient's clinical safety. To address this, we present an extensive human evaluation study of consultation notes where 5 clinicians (i) listen to 57 mock consultations, (ii) write their own notes, (iii) post-edit a number of automatically generated notes, and (iv) extract all the errors, both quantitative and qualitative. We then carry out a correlation study between 18 automatic quality metrics and the human judgements. We find that a simple, character-based Levenshtein distance metric performs on par with, if not better than, common model-based metrics like BERTScore. All our findings and annotations are open-sourced.
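For reference, the character-based Levenshtein metric is straightforward to compute with dynamic programming; the normalisation below is one common choice, not necessarily the exact variant used in the study.

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def levenshtein_similarity(hyp: str, ref: str) -> float:
    """Normalised similarity, so longer notes are not penalised automatically."""
    return 1.0 - levenshtein(hyp, ref) / max(len(hyp), len(ref), 1)

print(levenshtein_similarity("patient reports mild headache",
                             "patient reports a mild headache"))
```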
This work studies the environments and star formation relationships of local luminous infrared galaxies (LIRGs) in comparison to other types of local and distant (z~1) galaxies. The infrared (IR) galaxies are drawn from the IRAS sample. The density of the environment is quantified using 6dF and Point Source Catalogue redshift survey (PSCz) galaxies in a cylinder of 2h^-1 Mpc radius and 10h^-1 Mpc length. Our most important result shows the existence of a dramatic density difference between local LIRGs and local non-LIRG IR galaxies. LIRGs live in denser environments than non-LIRG IR galaxies, implying that L_IR=10^11 h^-2 L_sun marks an important transition point among IR-selected local galaxies. We also find that there is a strong correlation between the densities around LIRGs and their L_IR luminosity, while the IR-activity of non-LIRG IR galaxies does not show any dependence on environment. This trend is independent of mass-bin selection. The SF-density trend in local LIRGs is similar to that found in some studies of blue cloud galaxies at z~1, which show a correlation between star formation and local density (the reversal of the relation seen for local galaxies). This, together with the rapid decline in the number counts of LIRGs since z~1, could mean that local LIRGs are survivors of whatever process transformed blue cloud galaxies at z~1 into present-day galaxies, or that local LIRGs came into existence through a process similar to that of high-redshift LIRGs, but at a later stage.
We conjecture a simple set of "Feynman rules" for constructing $n$-point global conformal blocks in any channel in $d$ spacetime dimensions, for external and exchanged scalar operators for arbitrary $n$ and $d$. The vertex factors are given in terms of Lauricella hypergeometric functions of one, two or three variables, and the Feynman rules furnish an explicit power-series expansion in powers of cross-ratios. These rules are conjectured based on previously known results in the literature, which include four-, five- and six-point examples as well as the $n$-point comb channel blocks. We prove these rules for all previously known cases, as well as for a seven-point block in a new topology and the even-point blocks in the "OPE channel." The proof relies on holographic methods, notably the Feynman rules for Mellin amplitudes of tree-level AdS diagrams in a scalar effective field theory, and is easily applicable to any particular choice of a conformal block.
The exploration of a two-dimensional wind-driven ocean model with no-slip boundaries reveals the existence of a turbulent asymptotic regime where energy dissipation becomes independent of fluid viscosity. This asymptotic flow represents an out-of-equilibrium state, characterized by a vigorous two-dimensional vortex gas superimposed onto a western-intensified gyre. The properties of the vortex gas are elucidated through scaling analysis for detached Prandtl boundary layers, providing a rationalization for the observed anomalous dissipation. The asymptotic regime demonstrates that boundary instabilities alone can be strong enough to evacuate wind-injected energy from the large-scale oceanic circulation.
The Bethe-Salpeter (BS) equation in the ladder approximation is studied within a scalar theory: two scalar fields (constituents) with mass $m$ interacting via an exchange of a scalar field (tieon) with mass $\mu$. The BS equation is written in the form of an integral equation in the configuration Euclidean $x$-space with a kernel which, for stable bound states $M<2m$, is a self-adjoint positive operator. The solution of the BS equation is formulated as a variational problem. The nonrelativistic limit of the BS equation is considered. The role of so-called abnormal states is discussed. The analytical form of test functions for which the accuracy of the calculated bound state masses is better than 1% is determined (a comparison with available numerical calculations is made). These test functions make it possible to calculate analytically the vertex functions describing the interaction of bound states with constituents. As a by-product, a simple solution of the Wick-Cutkosky model for the case of massless bound states is demonstrated.
Transformation equations are presented to convert colors and magnitudes measured in the AAO, ARNICA, CIT, DENIS, ESO, LCO (Persson standards), MSSSO, SAAO, and UKIRT photometric systems to the photometric system inherent to the 2MASS Second Incremental Data Release. The transformations have been derived by comparing 2MASS photometry with published magnitudes and colors for stars observed in these systems. Transformation equations have also been derived indirectly for the Bessell & Brett (1988) and Koornneef (1983) homogenized photometric systems.
Deep learning is an established framework for learning hierarchical data representations. While compute power is in abundance, one of the main challenges in applying this framework to robotic grasping has been obtaining the amount of data needed to learn these representations, and structuring the data to the task at hand. Among contemporary approaches in the literature, we highlight key properties that have encouraged the use of deep learning techniques, and in this paper, detail our experience in developing a simulator for collecting cylindrical precision grasps of a multi-fingered dexterous robotic hand.
We study quantum antiferromagnetism on the highly frustrated checkerboard lattice, also known as the square lattice with crossings. The quantum Heisenberg antiferromagnet on this lattice is of interest as a two-dimensional analog of the pyrochlore lattice magnet. By combining several approaches we conclude that this system is most likely ordered for all values of spin, $S$, with a Neel state for large $S$ giving way to a two-fold degenerate valence-bond solid for smaller $S$. We show next that the Ising antiferromagnet with a weak four-spin exchange, equivalent to square ice with the leading quantum dynamics, exhibits long range ``anti-ferroelectric'' order. As a byproduct of this analysis we obtain, in the system of weakly coupled ice planes, a sliding phase with XY symmetry.
Since close WR+O binaries are the result of a strong interaction of both stars in massive close binary systems, they can be used to constrain the highly uncertain mass and angular momentum budget during the major mass transfer phase. We explore the progenitor evolution of the three best suited WR+O binaries HD 90657, HD 186943 and HD 211853, which are characterized by a WR/O mass ratio of $\sim$0.5 and periods of 6-10 days. We do so at three different levels of approximation: predicting the massive binary evolution through simple mass loss and angular momentum loss estimates, through full binary evolution models with parametrized mass transfer efficiency, and through binary evolution models including the rotation of both components and a physical model which allows us to compute mass and angular momentum loss from the binary system as a function of time during the mass transfer process. All three methods consistently give the same answer. Our results show that, if these systems formed through stable mass transfer, their initial periods were smaller than their current ones, which implies that mass transfer started during the core hydrogen burning phase of the initially more massive star. Furthermore, the mass transfer in all three cases must have been highly non-conservative, with on average only $\sim$10% of the transferred mass being retained by the mass receiving star. This result gives support to our model of system mass and angular momentum loss, which predicts that, in the considered systems, about 90% of the overflowing matter is expelled by the rapid rotation of the mass receiver close to the $\Omega$-limit, which is reached through the accretion of the remaining 10%.
We use a model of hard-core bosons to describe a SQUID built with two crystals of $d_{x^2-y^2}$-superconductors with orientations (100) and (110). Across the two faceted (100)/(110) interfaces, the structure of the superconducting order parameter leads to an alternating sign of the local Josephson coupling, and the possibility of quartet formation. Using a mapping of the boson model to an $XXZ$ model, we calculate numerically the energy of the system as a function of the applied magnetic flux, finding signals of $hc/4e$ oscillations in a certain region of parameters. This region has a large overlap with the one in which binding of bosons exists.
We investigate the celebrated mathematical SICA model, but using fractional differential equations in order to better describe the dynamics of HIV-AIDS infection. The infection process is modelled by a general functional response, and the memory effect is described by the Caputo fractional derivative. Stability and instability of equilibrium points are determined in terms of the basic reproduction number. Furthermore, a fractional optimal control system is formulated, and the best strategy for minimizing the spread of the disease in the population is determined through numerical simulations based on the derived necessary optimality conditions.
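To illustrate how a Caputo-type memory term changes the numerics, here is a minimal sketch of an explicit Grünwald-Letnikov scheme for a scalar fractional relaxation equation. It is a toy stand-in for, not an implementation of, the fractional SICA system and its optimal control.

```python
import numpy as np

def caputo_gl_solve(f, y0, alpha, t_end, n_steps):
    """Explicit Grunwald-Letnikov scheme for the Caputo ODE D^alpha y = f(y).

    The memory effect enters through the full history sum, so each step
    depends on all previous states, not just the last one.
    """
    h = t_end / n_steps
    y = np.empty(n_steps + 1); y[0] = y0
    w = np.empty(n_steps + 1); w[0] = 1.0
    for j in range(1, n_steps + 1):            # GL binomial weights
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    for n in range(1, n_steps + 1):
        history = np.dot(w[1:n + 1], y[n - 1::-1] - y0)   # memory term
        y[n] = y0 + h ** alpha * f(y[n - 1]) - history
    return y

# D^alpha y = -y with y(0) = 1: Mittag-Leffler decay, slower than exp(-t)
y = caputo_gl_solve(lambda v: -v, 1.0, alpha=0.9, t_end=5.0, n_steps=500)
print(y[-1])
```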
The influence of sinusoidal density modulation on the stimulated Raman scattering (SRS) reflectivity in inhomogeneous plasmas is studied by three-wave coupling equations, fully kinetic Vlasov simulations, and particle-in-cell (PIC) simulations. Through the numerical solution of the three-wave coupling equations, we find that sinusoidal density modulation is capable of inducing absolute SRS even though the Rosenbluth gain is smaller than $\pi$, and we give a region of modulational wavelengths and amplitudes in which the absolute SRS can be induced, which agrees with earlier studies. The average reflectivity obtained by Vlasov simulations follows the same trend as the growth rate of absolute SRS obtained by the three-wave equations. Instead of causing absolute instability, a modulational wavelength shorter than the basic gain length is able to suppress the inflation of SRS through harmonic waves. The PIC simulations qualitatively agree with our Vlasov simulations. Our results offer an alternative explanation of high reflectivity in underdense plasma in experiments, namely long-wavelength modulation, and a potential method to suppress SRS by using short-wavelength modulation.
We propose a robust and accurate method for reconstructing 3D hand meshes from monocular images. This is a very challenging problem, as hands are often severely occluded by objects. Previous works have often disregarded 2D hand pose information, which contains hand prior knowledge that is strongly correlated with occluded regions. Thus, in this work, we propose a novel 3D hand mesh reconstruction network, HandGCAT, that can fully exploit the hand prior as compensation information to enhance the features of occluded regions. Specifically, we design the Knowledge-Guided Graph Convolution (KGC) module and the Cross-Attention Transformer (CAT) module. KGC extracts hand prior information from the 2D hand pose by graph convolution. CAT fuses the hand prior into occluded regions by considering their high correlation. Extensive experiments on popular datasets with challenging hand-object occlusions, such as HO3D v2, HO3D v3, and DexYCB, demonstrate that our HandGCAT achieves state-of-the-art performance. The code is available at https://github.com/heartStrive/HandGCAT.
A tensor invariant is defined on a paraquaternionic contact manifold in terms of the curvature and torsion of the canonical paraquaternionic connection involving derivatives up to third order of the contact form. This tensor, called paraquaternionic contact conformal curvature, is similar to the Weyl conformal curvature in Riemannian geometry, the Chern-Moser tensor in CR geometry, the para contact curvature in para CR geometry and to the quaternionic contact conformal curvature in quaternionic contact geometry. It is shown that a paraquaternionic contact manifold is locally paraquaternionic contact conformal to the standard flat paraquaternionic contact structure on the paraquaternionic Heisenberg group, or equivalently, to the standard para 3-Sasakian structure on the paraquaternionic pseudo-sphere if and only if the paraquaternionic contact conformal curvature vanishes.
We consider the problem of exploration of networks, some of whose edges are faulty. A mobile agent, situated at a starting node and unaware of which edges are faulty, has to explore the connected fault-free component of this node by visiting all of its nodes. The cost of the exploration is the number of edge traversals. For a given network and given starting node, the overhead of an exploration algorithm is the worst-case ratio (taken over all fault configurations) of its cost to the cost of an optimal algorithm which knows where faults are situated. An exploration algorithm, for a given network and given starting node, is called perfectly competitive if its overhead is the smallest among all exploration algorithms not knowing the location of faults. We design a perfectly competitive exploration algorithm for any ring, and show that, for networks modeled by hamiltonian graphs, the overhead of any DFS exploration is at most 10/9 times larger than that of a perfectly competitive algorithm. Moreover, for hamiltonian graphs of size at least 24, this overhead is less than 6% larger than that of a perfectly competitive algorithm.
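For concreteness, the sketch below computes the cost of a plain DFS exploration of the fault-free component on a small ring. It illustrates the cost model (each tree edge is traversed twice, once forward and once while backtracking), not the perfectly competitive algorithm designed in the paper; the graph and fault set are illustrative.

```python
def dfs_exploration_cost(adj, faulty, start):
    """Cost (edge traversals) of a DFS exploring the fault-free component.

    `adj` maps node -> list of neighbours; `faulty` is a set of edges
    (frozensets). The agent backtracks over tree edges, so each tree
    edge counts twice.
    """
    visited, cost = {start}, 0

    def dfs(u):
        nonlocal cost
        for v in adj[u]:
            if frozenset((u, v)) in faulty or v in visited:
                continue
            visited.add(v)
            cost += 2          # go to v and eventually backtrack
            dfs(v)

    dfs(start)
    return cost

# 6-cycle with one faulty edge: DFS from node 0 explores the remaining path
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(dfs_exploration_cost(ring, {frozenset((2, 3))}, 0))
```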
Thanks to properties such as interpolation, smoothness, and spline connections, Hermite subdivision schemes provide fast iterative algorithms for geometrically modeling curves/surfaces in CAGD and for building Hermite wavelets in numerical PDEs. In this paper we introduce the notion of generalized Hermite (dyadic) subdivision schemes and then characterize their convergence, smoothness, and underlying matrix masks with or without interpolation properties. We also introduce the notion of linear-phase moments for achieving the polynomial-interpolation property. For any given positive integer $m$, we constructively prove that there always exist convergent smooth generalized Hermite subdivision schemes with linear-phase moments such that their basis vector functions are spline functions in $C^m$ and have linearly independent integer shifts. As byproducts, our results resolve the convergence, smoothness and existence of Lagrange, Hermite, or Birkhoff subdivision schemes. Even in dimension one, our results significantly generalize and extend many known results on the extensively studied univariate Hermite subdivision schemes. To illustrate the theoretical results in this paper, we provide examples of convergent generalized Hermite subdivision schemes with symmetric matrix masks having short support and smooth basis vector functions with or without the interpolation property.
This study introduces an innovative 3D printed dry electrode tailored for biosensing in postoperative recovery scenarios. Fabricated through a drop coating process, the electrode incorporates a novel 2D material.
We investigate whether varying the dust composition (described by the optical constants) can solve a persistent problem in debris disk modeling--the inability to fit the thermal emission without over-predicting the scattered light. We model five images of the beta Pictoris disk: two in scattered light from HST/STIS at 0.58 microns and HST/WFC3 at 1.16 microns, and three in thermal emission from Spitzer/MIPS at 24 microns, Herschel/PACS at 70 microns, and ALMA at 870 microns. The WFC3 and MIPS data are published here for the first time. We focus our modeling on the outer part of this disk, consisting of a parent body ring and a halo of small grains. First, we confirm that a model using astronomical silicates cannot simultaneously fit the thermal and scattered light data. Next, we use a simple, generic function for the optical constants to show that varying the dust composition can improve the fit substantially. Finally, we model the dust as a mixture of the most plausible debris constituents: astronomical silicates, water ice, organic refractory material, and vacuum. We achieve a good fit to all datasets with grains composed predominantly of silicates and organics, while ice and vacuum are, at most, present in small amounts. This composition is similar to one derived from previous work on the HR 4796A disk. Our model also fits the thermal SED, scattered light colors, and high-resolution mid-IR data from T-ReCS for this disk. Additionally, we show that sub-blowout grains are a necessary component of the halo.
Quantum error-correcting codes are analyzed from an information-theoretic perspective centered on quantum conditional and mutual entropies. This approach parallels the description of classical error correction in Shannon theory, while clarifying the differences between classical and quantum codes. More specifically, it is shown how quantum information theory accounts for the fact that "redundant" information can be distributed over quantum bits even though this does not violate the quantum "no-cloning" theorem. Such a remarkable feature, which has no counterpart for classical codes, is related to the property that the ternary mutual entropy vanishes for a tripartite system in a pure state. This information-theoretic description of quantum coding is used to derive the quantum analogue of the Singleton bound on the number of logical bits that can be preserved by a code of fixed length which can recover a given number of errors.
For $G$ a finite group, we prove in dimension 2 that there is a monoidal equivalence between the category of $G$-equivariant topological quantum field theories and the category of $G$-Frobenius algebras; this was proved by G. Moore and G. Segal. This work gives, in more detail, a proof of this result.
In this paper, a hierarchical distributed method consisting of two iterative procedures is proposed for optimal electric vehicle charging scheduling (EVCS) in distribution grids. In the proposed method, the distribution system operator (DSO) aims at reducing the grid loss while satisfying the power flow constraints. This is achieved by a consensus-based iterative procedure with the EV aggregators (Aggs) located at the grid buses. The goal of the aggregators, which are equipped with battery energy storage (BES), is to reduce their electricity cost by optimal control of the BES and EVs. As the dimension of an Agg's optimization problem grows with the number of EVs, the Aggs solve their problems through another iterative procedure with their customers. This procedure is implementable by exploiting the mathematical properties of the problem and rewriting each Agg's optimization problem as the \textit{sharing problem}, which is solved efficiently by the alternating direction method of multipliers (ADMM). To validate the performance, the proposed method is applied to the IEEE 13-bus system.
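A minimal sketch of the sharing-problem ADMM loop referred to above, with hypothetical quadratic stand-ins for the customers' costs and the aggregate objective and with no power-flow constraints; it shows only how the x-updates decouple across EVs.

```python
import numpy as np

# Sharing problem: minimize sum_i f_i(x_i) + g(sum_i x_i), solved by ADMM.
# Hypothetical stand-ins: f_i(x) = (x - d_i)^2 tracks each EV's desired
# charge d_i, and g(s) = gamma * s^2 penalises the aggregate load s.
rng = np.random.default_rng(1)
n, rho, gamma = 20, 1.0, 0.05
d = rng.uniform(1.0, 3.0, n)          # desired charging levels
x = np.zeros(n); zbar = 0.0; u = 0.0  # primal, averaged, scaled dual

for _ in range(200):
    # x_i-update decouples across EVs (each customer solves its own problem)
    x = (2 * d + rho * (x - x.mean() + zbar - u)) / (2 + rho)
    # zbar-update: minimize g(n*zbar) + (n*rho/2)*(zbar - xbar - u)^2
    zbar = n * rho * (x.mean() + u) / (2 * gamma * n ** 2 + n * rho)
    u += x.mean() - zbar              # dual update on the consensus gap

print("aggregate load:", x.sum(), "vs unconstrained:", d.sum())
```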
We propose to create vortices in spin-1 condensates via the magnetic dipole-dipole interaction. Starting with a polarized condensate prepared under a large axial magnetic field, we show that by gradually inverting the field, population transfer among different spin states can be realized in a controlled manner. Under optimal conditions, we generate a doubly quantized vortex state containing nearly all atoms in the condensate. The resulting vortex state is a direct manifestation of the dipole-dipole interaction and spin textures in spinor condensates. We also point out that the whole process can be qualitatively described by a simple rapid adiabatic passage model.
The highly conductive two-dimensional electron gas formed at the interface between insulating SrTiO$_3$ and LaAlO$_3$ shows low-temperature superconductivity coexisting with inhomogeneous ferromagnetism. The Rashba spin-orbit interaction with in-plane Zeeman field of the system favors $p_x \pm i p_y$-wave superconductivity at finite momentum. Owing to the intrinsic disorder at the interface, the role of spatial inhomogeneity on the superconducting and ferromagnetic states becomes important. We find that for strong disorder, the system breaks up into mutually excluded regions of superconductivity and ferromagnetism. This inhomogeneity-driven electronic phase separation accounts for the unusual coexistence of superconductivity and ferromagnetism observed at the interface.
We present an investigation of magneto-optic rotation in both the Faraday and Voigt geometries. We show that more physical insight can be gained in a comparison of the Faraday and Voigt effects by visualising optical rotation trajectories on the Poincar\'e sphere. This insight is applied to design and experimentally demonstrate an improved ultra-narrow optical bandpass filter based on combining optical rotation from two cascaded cells: one in the Faraday geometry and one in the Voigt geometry. Our optical filter has an equivalent noise bandwidth of 0.56 GHz, and a figure-of-merit value of 1.22(2) GHz$^{-1}$, which is higher than any previously demonstrated filter on the Rb D2 line.
In this paper we investigate the effects of gravitational backreaction for the late time Hawking radiation of evaporating near-extremal black holes. This problem can be studied within the framework of an effective one-loop solvable model on AdS_2. We find that the Hawking flux goes down exponentially and it is proportional to a parameter which depends on details of the collapsing matter. This result seems to suggest that the information of the initial state is not lost and that the boundary of AdS_2 acts, at least at late times, as a sort of stretched horizon in the Reissner-Nordstrom spacetime.
An unfolding measurement of the atmospheric neutrino flux at the South Pole is performed using the IceCube/DeepCore detector. The main result presented is the true neutrino interaction rate per unit volume of ice, as a function of energy or zenith angle. The method used is D'Agostini iterative unfolding. While the detector response estimate is based on Monte Carlo simulation, the iterative approach compensates for this inherent bias and draws the unfolded result closer to the unbiased estimator as the number of iterations is optimized. This is done using an ensemble test with a blind re-smearing approach. Thus this measurement is minimally biased with respect to atmospheric flux, oscillation, and neutrino interaction models, allowing model builders to test their predictions against this measurement. As an aside, using the same data set and methodology, we also present an unfolding measurement of the atmospheric neutrino flux itself.
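For reference, the core of the D'Agostini iterative unfolding is compact. The sketch below uses a toy two-bin response matrix; the number of iterations appears explicitly as the regularisation parameter that the ensemble test described above is meant to optimize.

```python
import numpy as np

def dagostini_unfold(R, data, n_iter=4, prior=None):
    """D'Agostini (iterative Bayesian) unfolding.

    R[j, i] = P(observed bin j | true bin i) is the detector response
    from Monte Carlo; `data` is the observed spectrum. Each iteration
    re-weights the prior, trading bias against variance.
    """
    n_true = R.shape[1]
    phi = np.full(n_true, data.sum() / n_true) if prior is None else prior
    eff = R.sum(axis=0)                      # efficiency per true bin
    for _ in range(n_iter):
        folded = R @ phi                     # expected observed spectrum
        # "Bayes matrix": P(true i | observed j) under the current prior
        M = (R * phi).T / folded
        phi = (M @ data) / eff               # updated truth estimate
    return phi

# Toy example: two true bins smeared by 20% bin-to-bin migration
R = np.array([[0.8, 0.2],
              [0.2, 0.8]])
truth = np.array([1000.0, 400.0])
print(dagostini_unfold(R, R @ truth))        # recovers ~[1000, 400]
```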
The skin effect and topological edge states in non-Hermitian systems have been well studied, and the second-order skin effect and corner modes have also been proposed in non-Hermitian systems recently. In this paper, we construct a nested tight-binding formalism to study the second-order corner modes analytically; it is a direct description of the generic non-Hermitian tight-binding model without further assumptions. Within this formalism, we obtain the exact solutions of the second-order topological zero-energy corner modes for the non-Hermitian four-band model. We validate the nested tight-binding formalism on the hybrid skin-topological corner modes for the four-band model and a non-Hermitian two-dimensional (2D) extrinsic model. In addition, we exactly illustrate the corner modes induced by the second-order skin effect for the simplest 2D non-Hermitian model using the nested tight-binding formalism.
The properties of the two-flavored Gross-Neveu model with nonzero current quark mass are investigated in the (1+1)-dimensional spacetime at finite isospin ($\mu_I$) and quark number ($\mu$) chemical potentials and zero temperature. The consideration is performed in the limit $N_c\to\infty$, i.e. in the case of an infinite number of quark colors. In the plane of parameters $\mu_I,\mu$ a rather rich phase structure is found, which contains phases with and without pion condensation. We have found a great variety of one-quark excitations of these phases, including gapless and gapped quasiparticles. Moreover, the mesonic mass spectrum in each phase is also investigated.
By borrowing methods from complex system analysis, in this paper we analyze the features of the complex relationship that links the development and the industrialization of a country to economic inequality. In order to do this, we identify industrialization as a combination of a monetary index, the GDP per capita, and a recently introduced measure of the complexity of an economy, the Fitness. At first we explore these relations on a global scale over the time period 1990--2008 focusing on two different dimensions of inequality: the capital share of income and a Theil measure of wage inequality. In both cases, the movement of inequality follows a pattern similar to the one theorized by Kuznets in the fifties. We then narrow down the object of study ad we concentrate on wage inequality within the United States. By employing data on wages and employment on the approximately 3100 US counties for the time interval 1990--2014, we generalize the Fitness-Complexity algorithm for counties and NAICS sectors, and we investigate wage inequality between industrial sectors within counties. At this scale, in the early nineties we recover a behavior similar to the global one. While, in more recent years, we uncover a trend reversal: wage inequality monotonically increases as industrialization levels grow. Hence at a county level, at net of the social and institutional factors that differ among countries, we not only observe an upturn in inequality but also a change in the structure of the relation between wage inequality and development.
We consider the possibility that the lightest sterile neutrino, which has a dipole interaction with heavier sterile neutrinos, constitutes the dark matter. Its lifetime can be long enough for it to be a dark matter candidate without violating other constraints, and the correct relic abundance can be produced in the early Universe. We find that a sterile neutrino with a mass of order MeV and a dimension-five non-renormalisable dipole interaction suppressed by $\Lambda_5 \gtrsim 10^{15}$ GeV can be a good dark matter candidate, while heavier sterile neutrinos with masses of the order of a GeV can explain the active neutrino oscillations.
The creep behaviour and microstructural evolution of a Sn-3Ag-0.5Cu wt.% sample with a columnar microstructure have been investigated through in-situ creep testing under a constant stress of 30 MPa at 298 K. This is important, as 298 K is a high temperature relative to the melting point of the solder system, and in-situ observations of microstructural evolution confirm the mechanisms involved in the deformation and, ultimately, the failure of the material. The sample has been observed in-situ using repeated, automatic forescatter diode imaging and automatic electron backscatter diffraction imaging. During deformation, polygonisation and recrystallisation are observed heterogeneously with increasing strain, and these correlate with local lattice rotations near matrix-intermetallic compound interfaces. Recrystallised grains have either twin or special boundary relationships to their parent grains. The combination of these two imaging methods reveals that grain 1 (loading direction, LD, 10.4$^\circ$ from [100]) deforms less than the neighbouring grain 2 (LD 18.8$^\circ$ from [110]), with slip traces in the strain-localised regions. In grain 1, the (1-10)[001] slip system is observed, and in grain 2, the (1-10)[-1-11]/2 and (110)[-111]/2 slip systems are observed. Lattice orientation gradients build up with increasing plastic strain and, near the fracture, recrystallisation is observed concurrent with fracture.
In this paper we estimate the tracking error of a fixed-gain stochastic approximation scheme. The underlying process is not assumed to be Markovian; a mixing condition is required instead. Furthermore, the updating function may be discontinuous in the parameter.
In this paper we present a perturbative procedure that allows one to numerically solve diffusive non-Markovian stochastic Schr\"odinger equations for a wide range of memory functions. To illustrate this procedure, numerical results are presented for a classically driven two-level atom immersed in an environment with a simple memory function. It is observed that, as the order of the perturbation is increased, the numerical results for the ensemble-averaged state $\rho_{\rm red}(t)$ approach the exact reduced state found via Imamo\=glu's enlarged system method [Phys. Rev. A 50, 3650 (1994)].
In recent years, a rigorous quantum mechanical model for the interaction between light and macroscopic dispersive, lossy dielectrics has emerged (macroscopic QED), allowing the application of the usual methods of quantum field theory. Here, we apply time-dependent perturbation theory to a general class of problems involving time-dependent lossy, dispersive dielectrics. The model is used to derive polariton excitation rates in three illustrative cases, including that of a travelling Gaussian perturbation to the susceptibility of an otherwise infinite homogeneous dielectric, motivated by recent experiments on analogue Hawking radiation. We find that the excitation rate is increased when the wave-vector and frequency of each polariton in the pair either satisfies (or nearly satisfies) the dispersion relation for electromagnetic waves, or is close to a material resonance.
Let $\psi_\K$ be the Chebyshev function of a number field $\K$. Under GRH we prove an explicit upper bound for $|\psi_\K(x)-x|$ in terms of the degree and the discriminant of $\K$. The new bound improves significantly on previously known results.
We investigate graphs that can be represented as vertex intersections of horizontal and vertical paths in a grid, the so-called $B_0$-VPG graphs. Recognizing this class is an NP-complete problem, although there exists a polynomial-time algorithm for recognizing chordal $B_0$-VPG graphs. In this paper, we present a minimal forbidden induced subgraph characterization of $B_0$-VPG graphs restricted to block graphs. As a byproduct, the proof of the main theorem provides an alternative certifying recognition and representation algorithm for $B_0$-VPG graphs within the class of block graphs.
We study the survival probability and the first-passage time distribution for a Brownian motion in a planar wedge with infinite absorbing edges. We generalize existing results obtained for wedge angles of the form $\pi/n$ with $n$ a positive integer to arbitrary angles, which in particular cover the case of obtuse angles. We give explicit and simple expressions of the survival probability and the first-passage time distribution in which the difference between an arbitrary angle and a submultiple of $\pi$ is contained in three additional terms. As an application, we obtain the short time development of the survival probability in a wedge of arbitrary angle.
The low-temperature ($4.2<T<12.5$ K) magnetotransport ($B<2$ T) of two-dimensional electrons occupying two subbands (with energies $E_1$ and $E_2$) is investigated in a GaAs single quantum well with AlAs/GaAs superlattice barriers. Two series of Shubnikov-de Haas oscillations are found to be accompanied by magnetointersubband (MIS) oscillations, periodic in the inverse magnetic field. The period of the MIS oscillations obeys the condition $\Delta_{12}=(E_2-E_1)=k \cdot \hbar \omega_c$, where $\Delta_{12}$ is the subband energy separation, $\omega_c$ is the cyclotron frequency, and $k$ is a positive integer. At $T$=4.2 K the oscillations manifest themselves up to $k$=100. Strong temperature suppression of the magnetointersubband oscillations is observed. We show that the suppression is a result of electron-electron scattering. Our results are in good agreement with recent experiments, indicating that sensitivity to electron-electron interaction is a fundamental property of magnetoresistance oscillations originating from the second-order Dingle factor.
Today's cities face many challenges due to population growth, an aging population, pedestrian and vehicular traffic congestion, increasing water usage, increased electricity demands, the crumbling physical infrastructure of buildings, roads, water and sewage systems, and the power grid, and declining health care services. Moreover, major trends indicate that the global urbanization of society, and the associated pressures it brings, will continue to accelerate. One approach to assist in solving some of these challenges is to deploy extensive IT technology. It has been recognized that cyber-technology plays a key role in improving the quality of people's lives, strengthening business, and helping government agencies serve citizens better. In this white paper, we discuss the benefits and challenges of cyber-technologies within "Smart Cities", especially the IoT (Internet of Things) for smart communities, which means considering the benefits and challenges of IoT cyber-technologies for smart cities' physical infrastructures and their human stakeholders. To point out the IoT challenges, we first present the framework within which the IoT lives, and then proceed with the challenges, conclusions, and recommendations.
We introduce a method for egocentric videoconferencing that enables hands-free video calls, for instance by people wearing smart glasses or other mixed-reality devices. Videoconferencing conveys valuable non-verbal communication and facial expression cues, but usually requires a front-facing camera. Using a frontal camera in a hands-free setting when a person is on the move is impractical. Even holding a mobile phone camera in front of the face while sitting for a long duration is not convenient. To overcome these issues, we propose a low-cost wearable egocentric camera setup that can be integrated into smart glasses. Our goal is to mimic a classical video call, and therefore, we transform the egocentric perspective of this camera into a front-facing video. To this end, we employ a conditional generative adversarial neural network that learns a transition from the highly distorted egocentric views to frontal views common in videoconferencing. Our approach learns to transfer expression details directly from the egocentric view without using a complex intermediate parametric expression model, as is used by related face reenactment methods. We successfully handle subtle expressions not easily captured by parametric blendshape-based solutions, e.g., tongue movement, eye movements, eye blinking, strong expressions, and depth-varying movements. To gain control over the rigid head movements in the target view, we condition the generator on synthetic renderings of a moving neutral face. This allows us to synthesize results at different head poses. Our technique produces temporally smooth video-realistic renderings in real time using a video-to-video translation network in conjunction with a temporal discriminator. We demonstrate the improved capabilities of our technique by comparing against related state-of-the-art approaches.
Sub-8-bit representation of DNNs incurs some discernible loss of accuracy despite rigorous (re)training at low precision. Such loss of accuracy essentially makes them equivalent to a much shallower counterpart, diminishing the power of being deep networks. To address this problem of accuracy drop, we introduce the notion of \textit{residual networks}, where we add more low-precision edges to sensitive branches of the sub-8-bit network to compensate for the lost accuracy. Further, we present a perturbation theory to identify such sensitive edges. Aided by such an elegant trade-off between accuracy and compute, the 8-2 model (8-bit activations, ternary weights), enhanced by ternary residual edges, turns out to be sophisticated enough to achieve very high accuracy ($\sim 1\%$ drop from our FP-32 baseline), despite $\sim 1.6\times$ reduction in model size, $\sim 26\times$ reduction in number of multiplications, and a potential $\sim 2\times$ power-performance gain compared to the 8-8 representation, on the state-of-the-art deep network ResNet-101 pre-trained on the ImageNet dataset. Moreover, depending on the varying accuracy requirements in a dynamic environment, the deployed low-precision model can be upgraded/downgraded on the fly by partially enabling/disabling residual connections. For example, disabling the least important residual connections in the above enhanced network, the accuracy drop is $\sim 2\%$ (from FP-32), despite $\sim 1.9\times$ reduction in model size, $\sim 32\times$ reduction in number of multiplications, and a potential $\sim 2.3\times$ power-performance gain compared to the 8-8 representation. Finally, all the ternary connections are sparse in nature, and the ternary residual conversion can be done in a resource-constrained setting with no low-precision (re)training.
We calculate the temperature, density, and parallel magnetic field dependence of low temperature electronic resistivity in 2D high-mobility Si/SiGe quantum structures, assuming the conductivity limiting mechanism to be carrier scattering by screened random charged Coulombic impurity centers. We obtain comprehensive agreement with existing experimental transport data, compellingly establishing that the observed 2D metallic behavior in low-density Si/SiGe systems arises from the peculiar nature of 2D screening of long-range impurity disorder. In particular, our theory correctly predicts the experimentally observed metallic temperature dependence of 2D resistivity in the fully spin-polarized system.
This paper is a top-down historical perspective on the several phases in the development of probability, from its prehistoric origins to its modern-day evolution as one of the key methodologies in artificial intelligence, data science, and machine learning. It is written in honor of Barry Arnold's birthday, for his many contributions to statistical theory and methodology. Despite the fact that much of Barry's work is technical, a descriptive document to mark his achievements should not be viewed as being out of line. Barry's dissertation adviser at Stanford (he received a Ph.D. in Statistics there) was a philosopher of science who dug deep into the foundations and roots of probability, and it is this breadth of perspective that Barry has inherited. The paper is based on lecture materials compiled by the first author from various published sources over a long period of time. The material below gives a limited list of references, because the cast of characters is large, and their contributions are a part of the historical heritage of those of us who are interested in probability, statistics, and the many topics they have spawned.
The ability to process time series at low energy cost is critical for many applications. Recurrent neural networks, which can perform such tasks, are computationally expensive when implemented in software on conventional computers. Here we propose to implement a recurrent neural network in hardware using spintronic oscillators as dynamical neurons. Using numerical simulations, we build a multi-layer network and demonstrate that we can use backpropagation through time (BPTT) and standard machine learning tools to train this network. Leveraging the transient dynamics of the spintronic oscillators, we solve the sequential digits classification task with $89.83\pm2.91~\%$ accuracy, as good as the equivalent software network. We devise guidelines on how to choose the time constant of the oscillators as well as the hyper-parameters of the network to adapt to different input time scales.
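As a sketch of the approach, the snippet below trains, with ordinary BPTT, a two-layer network of leaky dynamical neurons that caricature an oscillator's transient dynamics. The dynamics, layer sizes, and data are illustrative stand-ins, not the spintronic oscillator model simulated in this work.

```python
import torch

class OscillatorLayer(torch.nn.Module):
    """Leaky dynamical neurons as a stand-in for spintronic oscillators.

    Each neuron's state relaxes with time constant tau toward a nonlinear
    function of its input, a simplified caricature of transient dynamics
    (hypothetical model, not the paper's oscillator equations).
    """
    def __init__(self, n_in, n_out, tau=5.0):
        super().__init__()
        self.W = torch.nn.Linear(n_in, n_out)
        self.decay = torch.exp(torch.tensor(-1.0 / tau))

    def forward(self, seq):                      # seq: (time, batch, n_in)
        h = torch.zeros(seq.shape[1], self.W.out_features)
        states = []
        for x_t in seq:                          # explicit unrolling: BPTT
            h = self.decay * h + (1 - self.decay) * torch.tanh(self.W(x_t))
            states.append(h)
        return torch.stack(states)

layer1, layer2 = OscillatorLayer(8, 32), OscillatorLayer(32, 16)
readout = torch.nn.Linear(16, 10)                # 10 digit classes
opt = torch.optim.Adam([*layer1.parameters(), *layer2.parameters(),
                        *readout.parameters()], lr=1e-3)

seq = torch.randn(28, 4, 8)                      # toy input sequences
labels = torch.randint(0, 10, (4,))
logits = readout(layer2(layer1(seq))[-1])        # classify from final state
loss = torch.nn.functional.cross_entropy(logits, labels)
opt.zero_grad(); loss.backward(); opt.step()     # gradients flow through time
```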
Device-to-Device (D2D) communication can support the operation of cellular systems by reducing the traffic in the network infrastructure. In this paper, the benefits of D2D communication are investigated in the context of a Fog-Radio Access Network (F-RAN) that leverages edge caching and fronthaul connectivity for the purpose of content delivery. Assuming offline caching, out-of-band D2D communication, and an F-RAN with two edge nodes and two user equipments, an information-theoretically optimal caching and delivery strategy is presented that minimizes the delivery time in the high signal-to-noise ratio regime. The delivery time accounts for the latency caused by fronthaul, downlink, and D2D transmissions. The proposed optimal strategy is based on a novel scheme for an X-channel with receiver cooperation that leverages tools from real interference alignment. Insights are provided on the regimes in which D2D communication is beneficial.
Let K be any field, and let G be a semisimple group over K. Suppose the characteristic of K is positive and is very good for G. We describe all group scheme homomorphisms phi:SL(2) --> G whose image is geometrically G-completely reducible -- or G-cr -- in the sense of Serre; the description resembles that of irreducible modules given by Steinberg's tensor product theorem. In case K is algebraically closed and G is simple, the result proved here was previously obtained by Liebeck and Seitz using different methods. A recent result shows the Lie algebra of the image of phi to be geometrically G-cr; this plays an important role in our proof.
Forecasting indoor temperatures is important for achieving efficient control of HVAC systems. In this task, the limited data availability presents a challenge, as most of the available data is acquired during standard operation, where extreme scenarios and transitory regimes such as major temperature increases or decreases are de facto excluded. Acquisition of such data requires significant energy consumption and a dedicated facility, hindering the quantity and diversity of available data; cost-related constraints do not allow for continuous year-round acquisition. To address this, we investigate the efficacy of data augmentation techniques leveraging SoTA AI-based methods for synthetic data generation. Inspired by practical and experimental motivations, we explore fusion strategies of real and synthetic data to improve forecasting models. This approach alleviates the need for continuously acquiring extensive time series data, especially in contexts involving repetitive heating and cooling cycles in buildings. In our evaluation, (1) we assess the performance of synthetic data generators independently, particularly focusing on SoTA AI-based methods, and (2) we measure the utility of incorporating synthetically augmented data in a subsequent forecasting task, where we employ a simple model in two distinct scenarios: (a) an augmentation technique that combines real and synthetically generated data to expand the training dataset, and (b) the use of synthetic data to tackle dataset imbalances. Our results highlight the potential of synthetic data augmentation in enhancing forecasting accuracy while mitigating training variance. Through empirical experiments, we show the significant improvements achievable by integrating synthetic data, thereby paving the way for more robust forecasting models in the low-data regime.
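A minimal sketch of the first fusion scenario, expanding the training set with a controlled fraction of synthetic windows; the mixing ratio, data shapes, and helper name are illustrative assumptions, not the strategies evaluated in the paper.

```python
import numpy as np

def fuse_training_sets(real, synthetic, ratio=0.5, rng=None):
    """Mix real and synthetic series into one training set.

    `ratio` is the fraction of synthetic windows in the final set, a
    hypothetical knob for the fusion strategies discussed above.
    `real` and `synthetic` are arrays of shape (n, window).
    """
    rng = rng or np.random.default_rng(0)
    n_syn = int(len(real) * ratio / (1 - ratio))
    idx = rng.choice(len(synthetic), size=min(n_syn, len(synthetic)),
                     replace=False)
    fused = np.concatenate([real, synthetic[idx]])
    rng.shuffle(fused)                 # shuffle windows before training
    return fused

real = np.random.default_rng(1).normal(21.0, 0.5, (100, 24))       # 24 h windows
synthetic = np.random.default_rng(2).normal(21.0, 2.0, (500, 24))  # wider range
train = fuse_training_sets(real, synthetic, ratio=0.5)
print(train.shape)   # (200, 24): equal parts real and synthetic
```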
We construct self/anti-self charge conjugate (Majorana-like) states for the (1/2, 0)+(0, 1/2) representation of the Lorentz group, and their analogs for higher spins within quantum field theory. The problem of the basis rotations and that of the selection of phases in the Dirac-like and Majorana-like field operators are considered. The discrete symmetry properties (P, C, T) are studied. The corresponding dynamical equations are presented. In the (1/2, 0)+(0, 1/2) representation they obey the Dirac-like equation with eight components, which was first introduced by Markov. Thus, the Fock space for the corresponding quantum fields is doubled (as shown by Ziino). Particular attention has been paid to the questions of chirality and helicity (two concepts which are frequently confused in the literature) for Dirac and Majorana states. We further review several experimental consequences which follow from the previous works of M. Kirchbach et al. on neutrinoless double beta decay, and G. J. Ni et al. on meson lifetimes.
We theoretically investigate an electric-field-driven system of charged spheres as a primitive model of concentrated electrolytes under an applied electric field. First, we provide a unified formulation for the stochastic charge and density dynamics of the electric-field-driven primitive model using the stochastic density functional theory (DFT). The stochastic DFT integrates various frameworks of the equilibrium and dynamic DFTs, the liquid state theory, and the field-theoretic approach, which allows us to justify in a unified manner various modifications previously made for the Poisson-Nernst-Planck model. Next, we consider stationary density-density and charge-charge correlation functions of the primitive model with a static electric field. We focus on an electric-field-induced synchronization between the emergence of density and charge oscillations, or the crossover from monotonic to oscillatory decay of density-density and charge-charge correlations. The correlation function analysis demonstrates the appearance of stripe states formed by segregation bands perpendicular to the external field. We also predict the following: (i) the electric-field-induced crossover occurs prior to the conventional Kirkwood crossover without an applied electric field, and (ii) the ion concentration dependence of the decay lengths at the electric-field-induced crossovers bears a similarity to the underscreening behavior found by simulation and theoretical studies on the oscillatory decay length in equilibrium.
I discuss the theoretical interpretation of the top-quark mass, which is extracted in standard and alternative measurements at the LHC. In particular, I point out that the top mass extracted in analyses relying on the use of Monte Carlo event generators must be close to the pole mass and review recent work aiming at estimating the theoretical uncertainty.
We study a porous medium equation with fractional potential pressure: $$ \partial_t u= \nabla \cdot (u^{m-1} \nabla p), \quad p=(-\Delta)^{-s}u, $$ for $m>1$, $0<s<1$ and $u(x,t)\ge 0$. To be specific, the problem is posed for $x\in \mathbb{R}^N$, $N\geq 1$, and $t>0$. The initial data $u(x,0)$ is assumed to be a bounded function with compact support or fast decay at infinity. We establish the existence of a class of weak solutions and determine whether, depending on the parameter $m$, the property of compact support is conserved in time, starting from the result of finite propagation known for $m=2$. We find that for $m\in [1,2)$ the problem has infinite speed of propagation, while for $m\in [2,\infty)$ it has finite speed of propagation. Comparison is made with other nonlinear diffusion models, where the results are widely different.
We have identified 82 short-period variable stars in Sextans A from deep WFPC2 observations. All of the periodic variables appear to be short-period Cepheids, with periods as small as 0.8 days for fundamental-mode Cepheids and 0.5 days for first-overtone Cepheids. These objects have been used, along with measurements of the RGB tip and red clump, to measure a true distance modulus to Sextans A of (m-M)_0 = 25.61 +/- 0.07, corresponding to a distance of d = 1.32 +/- 0.04 Mpc. Comparing distances calculated by these techniques, we find that short-period Cepheids (P < 2 days) are accurate distance indicators for objects at or below the metallicity of the SMC. As these objects are quite numerous in low-metallicity star-forming galaxies, they have the potential for providing extremely precise distances throughout the Local Group. We have also compared the relative distances produced by other distance indicators. We conclude that calibrations of RR Lyraes, the RGB tip, and the red clump are self-consistent, but that there appears to be a small dependence of long-period Cepheid distances on metallicity. Finally, we present relative distances of Sextans A, Leo A, IC 1613, and the Magellanic Clouds.
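As a quick consistency check of the quoted numbers, the true distance modulus converts to a physical distance via $d\,[{\rm pc}] = 10^{(m-M)_0/5 + 1}$:

```python
# Distance from the distance modulus: d [pc] = 10 ** (mu / 5 + 1).
mu = 25.61
d_pc = 10 ** (mu / 5 + 1)
print(f"d = {d_pc / 1e6:.2f} Mpc")   # -> d = 1.32 Mpc, as quoted above
```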
This paper presents the first quantum entanglement establishment scheme for strangers who neither pre-share any secret nor have any authenticated classical channel between them. The proposed protocol requires only the help of two almost dishonest third parties (TPs) to achieve the goal. The security analyses show that the proposed protocol is secure against not only an external eavesdropper's attack, but also the TP's attack.
Similarities between quantum systems and analogous systems for classical waves have been used to great effect in the physics community, be it to gain an intuition for quantum systems or to anticipate novel phenomena in classical waves. This proceeding reviews recent advances in putting these quantum-wave analogies on a mathematically rigorous foundation for classical electromagnetism. Not only has this Schr\"odinger formalism of electromagnetism led to new, interesting mathematical problems for so-called Maxwell-type operators, it has also improved the understanding of the physics of topological phenomena in electromagnetic media. For example, it enabled us to classify electromagnetic media by their material symmetries, and explained why "fermionic time-reversal symmetries" --- that were conjectured to exist in the physics literature --- are in fact forbidden.
Injection moulding is an increasingly automated industrial process, particularly when used for the production of high-value precision components such as polymeric medical devices. In such applications, achieving stringent product quality demands whilst also ensuring a highly efficient process can be challenging. Cycle time is one of the most critical factors directly affecting the throughput rate of the process and hence is a key indicator of process efficiency. In this work, we examine a production data set from a real industrial injection moulding process for the manufacture of a high-precision medical device. The relationship between the process input variables and the resulting cycle time is mapped with an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). The predictive performance of different training methods and numbers of neurons in the ANN, and the impact of the model type and the number of membership functions in the ANFIS, are investigated. The strengths and limitations of the approaches are presented, and the further research and development needed to ensure practical on-line use of these methods for dynamic process optimisation in the industrial process is discussed.
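A minimal sketch of the ANN half of such a surrogate model, assuming synthetic stand-in data (the actual production data set and input variables are proprietary):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Stand-in process data: 6 hypothetical input variables (e.g. melt temperature,
# injection speed, hold pressure, ...) and a synthetic cycle-time response.
rng = np.random.default_rng(0)
X = rng.random((500, 6))
y = X @ np.array([2.0, -1.0, 0.5, 0.3, 1.5, -0.3]) + 0.05 * rng.standard_normal(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print(f"R^2 on held-out cycles: {ann.score(X_te, y_te):.2f}")
```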
Some components of the graviton two-point function have been recently computed in the context of loop quantum gravity, using the spinfoam Barrett-Crane vertex. We complete the calculation of the remaining components. We find that, under our assumptions, the Barrett-Crane vertex does not yield the correct long distance limit. We argue that the problem is general and can be traced to the intertwiner-independence of the Barrett-Crane vertex, and therefore to the well-known mismatch between the Barrett-Crane formalism and the standard canonical spin networks. In a companion paper we illustrate the asymptotic behavior of a vertex amplitude that can correct this difficulty.
We propose QPALM, a nonconvex quadratic programming (QP) solver based on the proximal augmented Lagrangian method. This method solves a sequence of inner subproblems which can be enforced to be strongly convex and which therefore admit a unique solution. The resulting steps are shown to be equivalent to inexact proximal point iterations on the extended-real-valued cost function, which allows for a fairly simple analysis where convergence to a stationary point at an $R$-linear rate is shown. The QPALM algorithm solves the subproblems iteratively using semismooth Newton directions and an exact linesearch. The former can be computed efficiently in most iterations by making use of suitable factorization update routines, while the latter requires the zero of a monotone, one-dimensional, piecewise affine function. QPALM is implemented in open-source C code, with tailored linear algebra routines for the factorization in a self-written package LADEL. The resulting implementation is shown to be extremely robust in numerical simulations, solving all of the Maros-Meszaros problems and finding a stationary point for most of the nonconvex QPs in the CUTEst test set. Furthermore, it is shown to be competitive against state-of-the-art convex QP solvers in typical QPs arising from application domains such as portfolio optimization and model predictive control. As such, QPALM strikes a unique balance between solving both easy and hard problems efficiently.
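A minimal sketch of the proximal augmented Lagrangian idea for an equality-constrained convex QP; QPALM itself handles two-sided inequality constraints and solves each subproblem with semismooth Newton directions, factorization updates, and an exact linesearch rather than the plain dense solves used here:

```python
import numpy as np

def prox_alm_qp(Q, q, A, b, rho=10.0, gamma=10.0, iters=100):
    """Proximal ALM for: min 1/2 x'Qx + q'x  s.t.  Ax = b (convex case)."""
    n = Q.shape[0]
    x, y = np.zeros(n), np.zeros(A.shape[0])
    for _ in range(iters):
        # Subproblem: min_x L_rho(x, y) + ||x - x_k||^2 / (2*gamma).
        # It is strongly convex, so one linear solve gives its unique minimizer.
        H = Q + rho * A.T @ A + np.eye(n) / gamma
        g = -q - A.T @ y + rho * A.T @ b + x / gamma
        x = np.linalg.solve(H, g)
        y = y + rho * (A @ x - b)        # multiplier (dual) update
    return x, y

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, 1.0])
A = np.array([[1.0, 1.0]]); b = np.array([1.0])
x, y = prox_alm_qp(Q, q, A, b)
print(x, A @ x - b)                      # constraint residual should be ~0
```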
We demonstrate the existence of Gaussian multipartite bound information which is a classical analog of Gaussian multipartite bound entanglement. We construct a tripartite Gaussian distribution from which no secret key can be distilled, but which cannot be created by local operations and public communication. Further, we show that the presence of bound information is conditional on the presence of a part of the adversary's information creatable only by private communication. Existence of this part of the adversary's information is found to be a more generic feature of classical analogs of quantum phenomena obtained by mapping of non-classically correlated separable quantum states.
We give a brief status report on our on-going investigation of the prospects to discover QCD instantons in deep inelastic scattering (DIS) at HERA. A recent high-quality lattice study of the topological structure of the QCD vacuum is exploited to provide crucial support of our predictions for DIS, based on instanton perturbation theory.
We attempt to treat the very early Universe according to quantum mechanics. Identifying the scale factor of the Universe with the width of the wave packet associated with it, we show that there cannot be an initial singularity and that the Universe expands. Invoking the correspondence principle, we obtain the scale factor of the Universe and demonstrate that the causality problem of the standard model is solved.
The surface of an insulating material irradiated by a beam of low-energy electrons charges positively if the yield of secondary electrons is greater than unity. For such a dynamical equilibrium, the thermodynamic properties have been investigated by measuring the surface potential in response to a temperature oscillation of the material. It is shown that an oscillation amplitude of 0.4 K at 530 K induces an oscillation of the surface potential of about 0.5 volts. The frequency dependence indicates a monotonic decrease in the response with decreasing frequency, extrapolating to zero at zero frequency. We propose that this modification of the surface charge is driven by the temperature dependence of a gas of charged particles in equilibrium with the vacuum level.
We perform a consistent comparison of the mass and mass profiles of massive ($M_\star > 10^{11.4}M_{\odot}$) central galaxies at z~0.4 from deep Hyper Suprime-Cam (HSC) observations and from the Illustris, TNG100, and Ponos simulations. Weak lensing measurements from HSC enable measurements at fixed halo mass and provide constraints on the strength and impact of feedback at different halo mass scales. We compare the stellar mass function (SMF) and the Stellar-to-Halo Mass Relation (SHMR) at various radii and show that the radius at which the comparison is performed is important. In general, Illustris and TNG100 display steeper values of $\alpha$ where $M_{\star}\propto M_{\rm vir}^{\alpha}$. These differences are more pronounced for Illustris than for TNG100 and in the inner rather than outer regions of galaxies. Differences in the inner regions may suggest that TNG100 is too efficient at quenching in-situ star formation at $M_{\rm vir}\simeq10^{13} M_{\odot}$ but not efficient enough at $M_{\rm vir}\simeq10^{14} M_{\odot}$. The outer stellar masses are in excellent agreement with our observations at $M_{\rm vir}\simeq10^{13} M_{\odot}$, but both Illustris and TNG100 display excess outer mass at $M_{\rm vir}\simeq10^{14} M_{\odot}$ (by ~0.25 and ~0.12 dex, respectively). We argue that reducing stellar growth at early times in $M_\star \sim 10^{9-10} M_{\odot}$ galaxies would help to prevent excess ex-situ growth at this mass scale. The Ponos simulations do not implement AGN feedback and display an excess mass of ~0.5 dex at $r<30$ kpc compared to HSC, which is indicative of over-cooling and excess star formation in the central regions. Joint comparisons between weak lensing and galaxy stellar profiles are a direct test of whether simulations build and deposit galaxy mass in the correct dark matter halos and thereby provide powerful constraints on the physics of feedback and galaxy growth.
In recent years, autonomous robots have become ubiquitous in research and daily life. Among many factors, public datasets play an important role in the progress of this field, as they spare researchers the substantial initial investment in hardware and manpower. However, for research on autonomous aerial systems, there appears to be a relative lack of public datasets on par with those used for autonomous driving and ground robots. To fill this gap, we conduct a data collection exercise on an aerial platform equipped with an extensive and unique set of sensors: two 3D lidars, two hardware-synchronized global-shutter cameras, multiple Inertial Measurement Units (IMUs), and, especially, multiple Ultra-wideband (UWB) ranging units. The comprehensive sensor suite resembles that of an autonomous driving car, but features the distinct and challenging characteristics of aerial operations. We record multiple datasets in several challenging indoor and outdoor conditions. Calibration results and ground truth from a high-accuracy laser tracker are also included in each package. All resources can be accessed via our webpage https://ntu-aris.github.io/ntu_viral_dataset.
Future collider applications and present high-gradient laser plasma wakefield accelerators operating with picosecond bunch durations place a higher demand on the time resolution of bunch distribution diagnostics. This demand has led to significant advancements in the field of electro-optic sampling over the past ten years. These methods allow the probing of diagnostic light such as coherent transition radiation or the bunch wakefields with sub-picosecond time resolution. Potential applications in shot-to-shot, non-interceptive diagnostics continue to be pursued for live beam monitoring of collider and pump-probe experiments. Related to our developing work with electro-optic imaging, we present results on single-shot electro-optic sampling of the coherent transition radiation from bunches generated at the A0 photoinjector.
Human communication is multimodal in nature; it is through multiple modalities such as language, voice, and facial expressions, that opinions and emotions are expressed. Data in this domain exhibits complex multi-relational and temporal interactions. Learning from this data is a fundamentally challenging research problem. In this paper, we propose Modal-Temporal Attention Graph (MTAG). MTAG is an interpretable graph-based neural model that provides a suitable framework for analyzing multimodal sequential data. We first introduce a procedure to convert unaligned multimodal sequence data into a graph with heterogeneous nodes and edges that captures the rich interactions across modalities and through time. Then, a novel graph fusion operation, called MTAG fusion, along with a dynamic pruning and read-out technique, is designed to efficiently process this modal-temporal graph and capture various interactions. By learning to focus only on the important interactions within the graph, MTAG achieves state-of-the-art performance on multimodal sentiment analysis and emotion recognition benchmarks, while utilizing significantly fewer model parameters.
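A minimal sketch of converting unaligned multimodal streams into a heterogeneous graph; the node and edge construction below is illustrative and not the paper's exact procedure:

```python
# Toy unaligned streams: (timestamp, feature id) pairs per modality.
streams = {
    "text":  [(0.0, "t0"), (1.2, "t1")],
    "audio": [(0.1, "a0"), (0.6, "a1"), (1.3, "a2")],
    "video": [(0.0, "v0"), (0.9, "v1")],
}

nodes = [{"id": f, "modality": m, "time": t}
         for m, items in streams.items() for t, f in items]

edges = []
# Temporal edges within each modality (past -> future neighbors).
for m, items in streams.items():
    for (_, f1), (_, f2) in zip(items, items[1:]):
        edges.append((f1, f2, f"{m}->{m}"))
# Cross-modal edges between nodes that are close in time; edge types record
# the (source modality, target modality) pair, giving a heterogeneous graph.
WINDOW = 0.5
for a in nodes:
    for b in nodes:
        if a["modality"] != b["modality"] and abs(a["time"] - b["time"]) <= WINDOW:
            edges.append((a["id"], b["id"], f'{a["modality"]}->{b["modality"]}'))
print(len(nodes), "nodes,", len(edges), "typed edges")
```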
The recent LHCb angular analysis of the exclusive decay B --> K^* mu^+ mu^- has indicated significant deviations from the Standard Model expectations. In order to give precise theory predictions, it is crucial that uncertainties from non-perturbative QCD are under control and properly included. The dominant QCD uncertainties originate from the hadronic B --> K^* form factors and from charm loops. We present a systematic method to include factorisable power corrections to the form factors in the framework of QCD factorisation and study the impact of the scheme chosen to define the soft form factors. We also discuss charm-loop effects.
The price of electricity is far more volatile than that of other commodities normally noted for extreme volatility. The possibility of extreme price movements increases the risk of trading in electricity markets. However, underlying the process of price returns is a strong mean-reverting mechanism. We study this feature of electricity returns by means of Hurst R/S analysis, Detrended Fluctuation Analysis and periodogram regression.
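A minimal sketch of the rescaled-range (R/S) estimator of the Hurst exponent used in such analyses; an estimate H < 0.5 indicates the anti-persistent, mean-reverting behavior described above:

```python
import numpy as np

def hurst_rs(x, window_sizes):
    """Estimate the Hurst exponent of a return series x by R/S analysis."""
    rs = []
    for n in window_sizes:
        ratios = []
        for i in range(0, len(x) - n + 1, n):
            c = x[i:i + n]
            z = np.cumsum(c - c.mean())      # cumulative deviations from the mean
            s = c.std(ddof=1)
            if s > 0:
                ratios.append((z.max() - z.min()) / s)   # rescaled range R/S
        rs.append(np.mean(ratios))
    # H is the slope of log(R/S) against log(n).
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return slope

returns = np.random.default_rng(0).standard_normal(4096)   # white noise: H ~ 0.5
print(hurst_rs(returns, [8, 16, 32, 64, 128, 256]))
```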
We propose a new method of generating gamma rays with orbital angular momentum (OAM). Accelerated, partially stripped ions are used as an energy up-converter. By irradiating ultrarelativistic ions with an optical laser beam carrying OAM, the ions are excited to a state of large angular momentum, and gamma rays with OAM are emitted in the subsequent deexcitation process. We examine the excitation cross section and the deexcitation rate.
Previous approaches to modeling and analyzing facial expressions use three different techniques: facial action units, geometric features, and graph-based modelling. However, these techniques have been treated separately, even though they are interrelated. Facial expression analysis can be significantly improved by exploiting the mappings between the major geometric features involved in facial expressions and the subset of facial action units whose presence or absence is unique to a facial expression. This paper combines dimension reduction techniques and image classification with search-space pruning achieved by this unique subset of facial action units, significantly reducing the search space. Results on a publicly available facial expression database show a 70% improvement in runtime while maintaining emotion recognition accuracy.
We address the question of explicitly constructing quasi-uniform codes from groups. We determine the size of the codebook, the alphabet, and the minimum distance as functions of the corresponding group, both for abelian and some nonabelian groups. Potential applications include the design of almost affine codes and non-linear network codes.
This article describes the fluid dynamics video: "Effect of bubble deformability on the vertical channel bubbly flow". The effect of bubble deformability on the flow rate of bubbly upflow in a turbulent vertical channel is examined using direct numerical simulations. A series of simulations with bubbles of decreasing deformability reveals a sharp transition from a flow with deformable bubbles uniformly distributed in the middle of the channel to a flow with nearly spherical bubbles with a wall-peak bubble distribution and a much lower flow rate.
Depositing disordered Al on top of SrTiO$_3$ is a cheap and easy way to create a two-dimensional electron system in the SrTiO$_3$ surface layers. To facilitate future device applications we passivate the heterostructure by a disordered LaAlO$_3$ capping layer to study the electronic properties by complementary x-ray photoemission spectroscopy and transport measurements on the very same samples. We also tune the electronic interface properties by adjusting the oxygen pressure during film growth.
ABRIDGED. The analysis of spectral energy distributions (SEDs) of protoplanetary disks to determine their physical properties is known to be highly degenerate. Hence, a Bayesian analysis is required to obtain parameter uncertainties and degeneracies. The challenge here is computational speed, as one radiative transfer model requires a couple of minutes to compute. We performed a Bayesian analysis for 30 well-known protoplanetary disks to determine their physical disk properties, including uncertainties and degeneracies. To circumvent the computational cost problem, we created neural networks (NNs) to emulate the SED generation process. We created two sets of radiative transfer disk models to train and test two NNs that predict SEDs for continuous and discontinuous disks. A Bayesian analysis was then performed on 30 protoplanetary disks with SED data collected by the DIANA project to determine the posterior distributions of all parameters. We ran this analysis twice, (i) with old distances and additional parameter constraints as used in a previous study, to compare results, and (ii) with updated distances and free choice of parameters to obtain homogeneous and unbiased model parameters. We evaluated the uncertainties in the determination of physical disk parameters from SED analysis, and detected and quantified the strongest degeneracies. The NNs are able to predict SEDs within 1ms with uncertainties of about 5% compared to the true SEDs obtained by the radiative transfer code. We find parameter values and uncertainties that are significantly different from previous values obtained by $\chi^2$ fitting. Comparing the global evidence for continuous and discontinuous disks, we find that 26 out of 30 objects are better described by disks that have two distinct radial zones. Also, we created an interactive tool that instantly returns the SED predicted by our NNs for any parameter combination.
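A minimal sketch of the emulator idea, assuming stand-in training data; in the actual study the NNs are trained on radiative transfer models and then plugged into the Bayesian likelihood:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in training set: disk parameters -> log-flux SEDs. In the actual study
# these targets come from full radiative transfer models (shapes illustrative).
rng = np.random.default_rng(0)
theta_train = rng.random((2000, 8))      # 8 hypothetical disk parameters
sed_train = rng.random((2000, 64))       # log10(flux) at 64 wavelengths

emulator = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=300,
                        random_state=0)
emulator.fit(theta_train, sed_train)     # afterwards, predictions take ~ms

def log_likelihood(theta, obs_flux, obs_err):
    """Gaussian likelihood with the emulator replacing radiative transfer."""
    model = emulator.predict(theta.reshape(1, -1))[0]
    return -0.5 * np.sum(((model - obs_flux) / obs_err) ** 2)
```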
We show that the Platinum gamma-ray burst (GRB) data compilation, probing the redshift range $0.553 \leq z \leq 5.0$, obeys a cosmological-model-independent three-parameter fundamental plane (Dainotti) correlation and so is standardizable. While they probe the largely unexplored $z \sim 2.3-5$ part of cosmological redshift space, the GRB cosmological parameter constraints are consistent with, but less precise than, those from a combination of baryon acoustic oscillation (BAO) and Hubble parameter [$H(z)$] data. In order to increase the precision of GRB-only cosmological constraints, we exclude common GRBs from the larger Amati-correlated A118 data set composed of 118 GRBs and jointly analyze the remaining 101 Amati-correlated GRBs with the 50 Platinum GRBs. This joint 151 GRB data set probes the largely unexplored $z \sim 2.3-8.2$ region; the resulting GRB-only cosmological constraints are more restrictive, and consistent with, but less precise than, those from $H(z)$ + BAO data.
We revisit the sample average approximation (SAA) approach for non-convex stochastic programming. We show that applying the SAA approach to problems with expected value equality constraints does not necessarily result in asymptotic optimality guarantees as the sample size increases. To address this issue, we relax the equality constraints. Then, we prove the asymptotic optimality of the modified SAA approach under mild smoothness and boundedness conditions on the equality constraint functions. Our analysis uses random set theory and concentration inequalities to characterize the approximation error from the sampling procedure. We apply our approach and analysis to the problem of stochastic optimal control for nonlinear dynamical systems under external disturbances modeled by a Wiener process. Numerical results on relevant stochastic programs show the reliability of the proposed approach. Results on a rocket-powered descent problem show that our computed solutions allow for significant uncertainty reduction compared to a deterministic baseline.
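A minimal sketch of the relaxed SAA formulation on a toy problem: the sample-average equality constraint $E_N[h(x,\xi)]=0$ is replaced by $|E_N[h(x,\xi)]|\le\epsilon$ (the objective and constraint functions below are illustrative, not the paper's):

```python
import numpy as np
from scipy.optimize import NonlinearConstraint, minimize

rng = np.random.default_rng(0)
xi = rng.normal(size=(1000, 2))          # N i.i.d. samples of the uncertainty

def f_hat(x):                            # sample-average objective
    return np.mean((x[0] - xi[:, 0]) ** 2 + (x[1] - xi[:, 1]) ** 2)

def h_hat(x):                            # sample average of the equality constraint
    return np.mean(x[0] * xi[:, 0] + x[1] - xi[:, 1])

eps = 1e-2                               # relaxation: |E_N[h]| <= eps instead of = 0
con = NonlinearConstraint(h_hat, -eps, eps)
res = minimize(f_hat, x0=np.zeros(2), method="trust-constr", constraints=[con])
print(res.x, h_hat(res.x))
```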
We will utilize the sensitivity of SIRTF through the Legacy Science Program to carry out spectrophotometric observations of solar-type stars aimed at (1) defining the timescales over which terrestrial and gas giant planets are built, from measurements diagnostic of dust/gas masses and radial distributions; and (2) establishing the diversity of planetary architectures and the frequency of planetesimal collisions as a function of time through observations of circumstellar debris disks. Together, these observations will provide an astronomical context for understanding whether our solar system - and its habitable planet - is a common or a rare circumstance. Achieving our science goals requires measuring precise spectral energy distributions for a statistically robust sample capable of revealing evolutionary trends and the diversity of system outcomes. Our targets have been selected from two carefully assembled databases of solar-like stars: (1) a sample located within 50 pc of the Sun spanning an age range from 100-3000 Myr for which a rich set of ancillary measurements (e.g. metallicity, stellar activity, kinematics) are available; and (2) a selection located between 15 and 180 pc and spanning ages from 3 to 100 Myr. For stars at these distances SIRTF is capable of detecting stellar photospheres with SNR >30 at lambda < 24 microns for our entire sample, as well as achieving SNR >5 at the photospheric limit for over 50% of our sample at lambda=70 microns. Thus we will provide a complete census of stars with excess emission down to the level produced by the dust in our present-day solar system. More information concerning our program can be found at: http://gould.as.arizona.edu/feps
In recent years, time-domain speech separation has excelled over frequency-domain separation in single-channel scenarios and noise-free environments. In this paper, we dissect the gains of the time-domain audio separation network (TasNet) approach by gradually replacing components of an utterance-level permutation invariant training (u-PIT) based separation system in the frequency domain until the TasNet system is reached, thus blending components of frequency-domain approaches with those of time-domain approaches. Some of the intermediate variants achieve comparable signal-to-distortion ratio (SDR) gains to TasNet, but retain the advantage of frequency-domain processing: compatibility with classic signal processing tools such as frequency-domain beamforming and the human interpretability of the masks. Furthermore, we show that the scale-invariant signal-to-distortion ratio (si-SDR) criterion used as the loss function in TasNet is related to a logarithmic mean square error criterion, and that it is this criterion which contributes most reliably to the performance advantage of TasNet. Finally, we critically assess which gains in a noise-free single-channel environment generalize to more realistic reverberant conditions.
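For reference, a minimal implementation of the si-SDR criterion discussed above, in its standard scale-invariant form:

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant SDR in dB: the estimate is compared against an optimally
    scaled reference, so rescaling the estimate leaves the value unchanged."""
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    noise = estimate - target
    return 10 * np.log10(np.sum(target ** 2) / np.sum(noise ** 2))

ref = np.random.default_rng(0).standard_normal(16000)
est = ref + 0.1 * np.random.default_rng(1).standard_normal(16000)
print(f"si-SDR: {si_sdr(est, ref):.1f} dB")
print(f"after rescaling: {si_sdr(3.0 * est, ref):.1f} dB")   # same value
```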
Byzantine agreement allows n processes to decide on a common value, in spite of arbitrary failures. The seminal Dolev-Reischuk bound states that any deterministic solution to Byzantine agreement exchanges Omega(n^2) bits. In synchronous networks, solutions with optimal O(n^2) bit complexity, optimal fault tolerance, and no cryptography have been established for over three decades. However, these solutions lack robustness under adverse network conditions. Therefore, research has increasingly focused on Byzantine agreement for partially synchronous networks. Numerous solutions have been proposed for the partially synchronous setting. However, these solutions are notoriously hard to prove correct, and the most efficient cryptography-free algorithms still require O(n^3) exchanged bits in the worst case. In this paper, we introduce Oper, the first generic transformation of deterministic Byzantine agreement algorithms from synchrony to partial synchrony. Oper requires no cryptography, is optimally resilient (n >= 3t+1, where t is the maximum number of failures), and preserves the worst-case per-process bit complexity of the transformed synchronous algorithm. Leveraging Oper, we present the first partially synchronous Byzantine agreement algorithm that (1) achieves optimal O(n^2) bit complexity, (2) requires no cryptography, and (3) is optimally resilient (n >= 3t+1), thus showing that the Dolev-Reischuk bound is tight even in partial synchrony. Moreover, we adapt Oper for long values and obtain several new partially synchronous algorithms with improved complexity and weaker (or completely absent) cryptographic assumptions.
We compute the exterior powers, with respect to the additive convolution on the general linear Lie algebra, of a parabolic Springer sheaf corresponding to a maximal parabolic subgroup of type (1, n-1). They turn out to be isomorphic to the semisimple perverse sheaves attached by the Springer correspondence to the exterior powers of the permutation representation of the symmetric group.
We introduce MMIS, a novel dataset designed to advance MultiModal Interior Scene generation and recognition. MMIS consists of nearly 160,000 images, each accompanied by its corresponding textual description and an audio recording of that description, providing rich and diverse sources of information for scene generation and recognition. MMIS encompasses a wide range of interior spaces, capturing various styles, layouts, and furnishings. To construct this dataset, we employed a careful process involving the collection of images, the generation of textual descriptions, and the recording of corresponding speech annotations. The presented dataset contributes to research in multi-modal representation learning tasks such as image generation, retrieval, captioning, and classification.
We study orbits for rational equivalence of zero-cycles on very general abelian varieties by adapting a method of Voisin to powers of abelian varieties. We deduce that, for $k$ at least $3$, a very general abelian variety of dimension at least $2k-2$ has covering gonality greater than $k$. This settles a conjecture of Voisin. We also discuss how upper bounds for the dimension of orbits for rational equivalence can be used to provide new lower bounds on other measures of irrationality. In particular, we obtain a strengthening of the Alzati-Pirola bound on the degree of irrationality of abelian varieties.
In this paper, we propose a solution for improving the quality of temporal sound localization. We employ a multimodal fusion approach to combine visual and audio features. High-quality visual features are extracted using a state-of-the-art self-supervised pre-training network, resulting in efficient video feature representations. At the same time, audio features serve as complementary information to help the model better localize the start and end of sounds. The fused features are then processed by a multi-scale Transformer during training. On the final test dataset, we achieved a mean average precision (mAP) of 0.33, the second-best performance in this track.
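A minimal sketch of the fuse-then-encode idea described above; dimensions and architecture are hypothetical, and the actual system uses stronger pre-trained features and a multi-scale design rather than this single-scale encoder:

```python
import torch
import torch.nn as nn

class FusionLocalizer(nn.Module):
    """Concatenate per-frame visual and audio embeddings, then let a
    Transformer encoder model temporal context for per-frame event logits."""
    def __init__(self, d_vis=768, d_aud=128, d_model=256, n_out=2):
        super().__init__()
        self.proj = nn.Linear(d_vis + d_aud, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_out)

    def forward(self, vis, aud):                   # (B, T, d_vis), (B, T, d_aud)
        x = self.proj(torch.cat([vis, aud], dim=-1))
        return self.head(self.encoder(x))          # (B, T, n_out)

model = FusionLocalizer()
out = model(torch.randn(2, 50, 768), torch.randn(2, 50, 128))
print(out.shape)   # torch.Size([2, 50, 2])
```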
We compute S-wave and P-wave electromagnetic quarkonium decays at order v^7 in the heavy-quark velocity expansion. In the S-wave case, our calculation confirms and completes previous findings. In the P-wave case, our results disagree with previous ones; in particular, we find that two fewer matrix elements are needed. The cancellation of infrared singularities in the matching procedure is discussed.
We show that a peripheral meson model can explain the large deep inelastic electron-proton scattering rapidity gap events observed at HERA.
I propose a model for the formation of slow-massive-wide (SMW) jets by accretion disks around compact objects. This study is motivated by claims for the existence of SMW jets in some astrophysical objects such as in planetary nebulae (PNs) and in some active galactic nuclei in galaxies and in cooling flow clusters. In this model the energy still comes from accretion onto a compact object. The accretion disk launches two opposite jets with velocity of the order of the escape velocity from the accreting object and with mass outflow rate of ~1-20% of the accretion rate as in most popular models for jet launching; in the present model these are termed fast-first-stage (FFS) jets. However, the FFS jets encounter surrounding gas that originates in the mass accretion process, and are terminated by strong shocks close to their origin. Two hot bubbles are formed. These bubbles accelerate the surrounding gas to form two SMW jets that are more massive and slower than the FFS jets. There are two conditions for this mechanism to work. Firstly, the surrounding gas should be massive enough to block the free expansion of the FFS jets. Most efficiently this condition is achieved when the surrounding gas is replenished. Secondly, the radiative energy losses must be small.
Recent observations of Type Ia supernovae provide evidence for the acceleration of our universe, which leads to the possibility that the universe is entering an inflationary epoch. We simulate it under a ``big bounce'' model, which contains a time-variable cosmological ``constant'' that is derived from a higher dimension and manifests itself in 4D spacetime as dark energy. By properly choosing the two arbitrary functions contained in the model, we obtain a simple exact solution in which the evolution of the universe is divided into several stages. Before the big bounce, the universe contracts from a $\Lambda $-dominated vacuum, and after the bounce, the universe expands. In the early time after the bounce, the expansion of the universe is decelerating. In the late time after the bounce, dark energy (i.e., the variable cosmological ``constant'') overtakes dark matter and baryons, and the expansion enters an accelerating stage. When time tends to infinity, the contribution of dark energy tends to two thirds of the total energy density of the universe, qualitatively in agreement with observations.