Minimisation of discrete energies defined over factors is an important problem in computer vision, and a vast number of MAP inference algorithms have been proposed. Different inference algorithms perform better on factor graph models (GMs) from different underlying problem classes, and in general it is difficult to know which algorithm will yield the lowest energy for a given GM. To mitigate this difficulty, survey papers advise the practitioner on what algorithms perform well on what classes of models. We take the next step forward, and present a technique to automatically select the best inference algorithm for an input GM. We validate our method experimentally on an extended version of the OpenGM2 benchmark, containing a diverse set of vision problems. On average, our method selects an inference algorithm yielding labellings with 96% of variables the same as the best available algorithm.
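For illustration, the selection step can be cast as plain supervised classification over summary features of the input GM. The sketch below is a minimal example of that framing, assuming a hypothetical feature set and an off-the-shelf random forest; it is not the feature set or learner used in the paper.

# Minimal sketch: algorithm selection as classification over GM features.
# The feature set and the random-forest learner are illustrative
# assumptions, not the method from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gm_features(num_vars, num_factors, max_label_count, factor_arities):
    """Hand-crafted summary statistics of a factor graph (hypothetical)."""
    arities = np.asarray(factor_arities)
    return np.array([
        num_vars,
        num_factors,
        max_label_count,
        arities.mean(),          # average factor arity
        arities.max(),           # highest-order factor
        num_factors / num_vars,  # density of the graph
    ])

# X: one feature row per training GM; y: index of the solver that achieved
# the lowest energy on that GM (e.g. 0 = TRWS, 1 = alpha-expansion, ...).
X = np.stack([gm_features(1000, 2000, 5, [2] * 2000),
              gm_features(500, 1500, 21, [2] * 1400 + [3] * 100)])
y = np.array([0, 1])

selector = RandomForestClassifier(n_estimators=100).fit(X, y)
best_solver = selector.predict(gm_features(800, 1700, 8, [2] * 1700).reshape(1, -1))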
In this paper we consider the problem of maximizing the Area under the ROC curve (AUC), which is a widely used performance metric in imbalanced classification and anomaly detection. Due to the pairwise nonlinearity of the objective function, classical SGD algorithms do not apply to the task of AUC maximization. We propose a novel stochastic proximal algorithm for AUC maximization which is scalable to large-scale streaming data. Our algorithm can accommodate general penalty terms and is easy to implement, with favorable $O(d)$ space and per-iteration time complexities. We establish a high-probability convergence rate $O(1/\sqrt{T})$ for the general convex setting, and improve it to a fast convergence rate $O(1/T)$ for the cases of strongly convex regularizers and of no regularization term (without strong convexity). Our proof does not need the uniform boundedness assumption on the loss function or the iterates, which makes our analysis more faithful to practice. Finally, we perform extensive experiments over various benchmark data sets from real-world application domains which show the superior performance of our algorithm over the existing AUC maximization algorithms.
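To make the O(d) cost concrete, here is a minimal sketch of a stochastic proximal update on a pairwise AUC surrogate with an l1 penalty; it illustrates the gradient-then-prox pattern only and is not the algorithm proposed in the paper.

# Illustrative stochastic proximal step for AUC maximization (a generic
# pairwise-surrogate sketch, not the paper's algorithm).
import numpy as np

def soft_threshold(w, tau):
    """Proximal operator of tau * ||w||_1."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def prox_step(w, x_pos, x_neg, eta, lam):
    """One O(d) update on a sampled positive/negative pair.
    Surrogate: squared hinge on the score margin w . (x_pos - x_neg)."""
    diff = x_pos - x_neg
    margin = 1.0 - w @ diff
    grad = -2.0 * max(margin, 0.0) * diff   # gradient of the pair loss
    w = w - eta * grad                      # gradient step
    return soft_threshold(w, eta * lam)     # prox step for the l1 penalty

rng = np.random.default_rng(0)
d, w = 20, np.zeros(20)
for t in range(1, 1001):
    xp = rng.normal(0.5, 1.0, d)    # toy positive example
    xn = rng.normal(-0.5, 1.0, d)   # toy negative example
    w = prox_step(w, xp, xn, eta=1.0 / np.sqrt(t), lam=1e-3)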
In this note, we study a special case of the $4$-pt non-vacuum classical block associated with the $\mathcal{W}_3$ algebra. We formulate the monodromy problem for the block and derive monodromy equations within the heavy-light approximation. Fixing the remaining functional arbitrariness using parameters of the $4$-pt vacuum $\mathcal{W}_3$ block, we compute the $4$-pt non-vacuum $\mathcal{W}_3$ block function.
We study low-temperature non-Gaussian thermal fluctuations of a system of classical particles around a (hypothetical) crystalline ground state. These thermal fluctuations are described by the behaviour of a system of long-range interacting charged dipoles at high temperature and high density. For the case of uniformly bounded fluctuations, the low-temperature linked cluster expansion describing the contribution to the free energy is derived and analysed. Finally, some nonperturbative results on the existence of the Gibbs states for the associated dipole systems, and on their independence of boundary conditions, are obtained.
The availability of large-scale facial databases, together with the remarkable progress of deep learning technologies, in particular Generative Adversarial Networks (GANs), has led to the generation of extremely realistic fake facial content, raising obvious concerns about the potential for misuse. Such concerns have fostered research on manipulation detection methods that, contrary to humans, have already achieved astonishing results in various scenarios. In this study, we focus on the synthesis of entire facial images, which is a specific type of facial manipulation. The main contributions of this study are four-fold: i) a novel strategy to remove GAN "fingerprints" from synthetic fake images based on autoencoders is described, in order to spoof facial manipulation detection systems while keeping the visual quality of the resulting images; ii) an in-depth analysis of the recent literature in facial manipulation detection; iii) a complete experimental assessment of this type of facial manipulation, considering the state-of-the-art fake detection systems (based on holistic deep networks, steganalysis, and local artifacts), highlighting how challenging this task is in unconstrained scenarios; and finally iv) we announce a novel public database, named iFakeFaceDB, resulting from the application of our proposed GAN-fingerprint Removal approach (GANprintR) to already very realistic synthetic fake images. The results obtained in our empirical evaluation show that additional efforts are required to develop robust facial manipulation detection systems against unseen conditions and spoof techniques, such as the one proposed in this study.
Photonic entangled states lie at the heart of quantum science, both for demonstrations of the foundations of quantum mechanics and as a key resource for various quantum technologies. An integrated realization of such states guarantees a high degree of entanglement and improves portability, stability, and miniaturization, and has therefore become an inevitable trend in integrated quantum optics. Here, we report the compact realization of steerable photonic path-entangled states from a monolithic quadratic nonlinear photonic crystal. The crystal acts as an inherent beam splitter that distributes photons into coherent spatial modes, producing heralded single photons and, even more appealing, beamlike two-photon path entanglement, wherein the entanglement is characterized by quantum spatial beatings. Such a multifunctional entangled source can be further extended to a high-dimensional fashion and to the multi-photon level, as well as to other degrees of freedom, which paves a desirable way to engineer miniaturized quantum light sources.
Metaverse, a burgeoning technological trend that combines virtual and augmented reality, provides users with a fully digital environment where they can assume a virtual identity through a digital avatar and interact with others as if they were in the real world. Its applications span diverse domains such as economy (with its entry into the cryptocurrency field), finance, social life, working environment, healthcare, real estate, and education. During the COVID-19 and post-COVID-19 era, universities have rapidly adopted e-learning technologies to provide students with online access to learning content and platforms, rendering previous considerations on integrating such technologies or preparing institutional infrastructures virtually obsolete. In light of this context, the present study proposes a framework for analyzing university students' acceptance and intention to use metaverse technologies in education, drawing upon the Technology Acceptance Model (TAM). The study aims to investigate the relationship between students' intention to use metaverse technologies in education, hereafter referred to as MetaEducation, and selected TAM constructs, including Attitude, Perceived Usefulness, Perceived Ease of Use, Self-efficacy of metaverse technologies in education, and Subjective Norm. Notably, Self-efficacy and Subjective Norm have a positive influence on Attitude and Perceived Usefulness, whereas Perceived Ease of Use does not exhibit a strong correlation with Attitude or Perceived Usefulness. The authors postulate that the weak associations between the study's constructs may be attributed to limited knowledge regarding MetaEducation and its potential benefits. Further investigation and analysis of the study's proposed model are warranted to comprehensively understand the complex dynamics involved in the acceptance and utilization of MetaEducation technologies in the realm of higher education.
We have studied the body-centered cubic (bcc), face-centered cubic (fcc) and hexagonal close-packed (hcp) phases of Fe alloyed with 25 at. % of Ni at Earth's core conditions using an ab initio local density approximation + dynamical mean-field theory (LDA+DMFT) approach. The alloys have been modeled by ordered crystal structures based on the bcc, fcc, and hcp unit cells with minimum possible cell size allowing for the proper composition. Our calculations demonstrate that the strength of electronic correlations on the Fe 3d shell is highly sensitive to the phase and local environment. In the bcc phase the 3d electrons at the Fe site with Fe only nearest neighbors remain rather strongly correlated even at extreme pressure-temperature conditions, with the local and uniform magnetic susceptibility exhibiting a Curie-Weiss-like temperature evolution and the quasi-particle lifetime {\Gamma} featuring a non-Fermi-liquid temperature dependence. In contrast, for the corresponding Fe site in the hcp phase we predict a weakly-correlated Fermi-liquid state with a temperature-independent local susceptibility and a quadratic temperature dependence of {\Gamma}. The iron sites with nickel atoms in the local environment exhibit behavior in the range between those two extreme cases, with the strength of correlations gradually increasing along the hcp-fcc-bcc sequence. Further, the inter-site magnetic interactions in the bcc and hcp phases are also strongly affected by the presence of Ni nearest neighbors. The sensitivity to the local environment is related to modifications of the Fe partial density of states due to mixing with Ni 3d-states.
We show that (i) any constrained polynomial optimization problem (POP) has an equivalent formulation on a variety contained in a Euclidean sphere and (ii) the resulting semidefinite relaxations in the moment-SOS hierarchy have the constant trace property (CTP) for the involved matrices. We then exploit the CTP to avoid solving the semidefinite relaxations via interior-point methods and instead use ad hoc spectral methods that minimize the largest eigenvalue of a matrix pencil. Convergence to the optimal value of the semidefinite relaxation is guaranteed. As a result, we obtain a hierarchy of nonsmooth "spectral relaxations" of the initial POP. The efficiency and robustness of this spectral hierarchy are tested against several equality-constrained POPs on a sphere as well as on a sample of randomly generated quadratically constrained quadratic problems (QCQPs).
We develop the embedded gradient vector field method, introduced in [8] and [9], for the case of the special unitary group $\mathcal{SU}(N)$ regarded as a constraint submanifold of the unitary group $\mathcal{U}(N)$. The optimization problem associated with the trace fidelity cost function defined on $\mathcal{SU}(N)$, which appears in the context of $\mathcal{SU}(N)$ quantum control landscapes, is completely solved using the embedded gradient vector field method. We prove that for $N\geq 5$ the landscape is not $\mathcal{SU}(N)$-trap free: there are always kinematic local extrema that are not global extrema.
We prove a Lieb-Thirring type inequality for a complex perturbation of a d-dimensional massive Dirac operator $D_m$, $m\geq 0$, whose spectrum is $(-\infty, -m]\cup[m, +\infty)$. The difficulty of the study is that the unperturbed operator is not bounded from below in this case, and, to overcome it, we use the methods of complex function theory. The methods of the article also give similar results for complex perturbations of the Klein-Gordon operator.
In this study, we forecast the population of the Philippines using a discrete age-structured compartmental model. We estimate the future population structure of the Philippines if the government imposes an n-child policy on top of the country's current declining birth and death rate trends.
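A standard way to run such a discrete age-structured projection is with a Leslie matrix. The sketch below, with made-up fertility and survival rates and a crude fertility cap standing in for an n-child policy, illustrates the mechanics only; it uses none of the paper's calibrated Philippine data.

# Toy discrete age-structured projection (Leslie matrix). The rates and
# the n-child cap are illustrative, not the paper's calibrated values.
import numpy as np

fertility = np.array([0.0, 0.9, 1.2, 0.4, 0.0])   # births per person per step
survival  = np.array([0.98, 0.97, 0.95, 0.80])    # survival to next age class

def leslie(fertility, survival, n_child_cap=None):
    if n_child_cap is not None:
        # crude proxy for an n-child policy: cap lifetime fertility
        scale = min(1.0, n_child_cap / fertility.sum())
        fertility = fertility * scale
    L = np.zeros((len(fertility), len(fertility)))
    L[0, :] = fertility                            # first row: births
    L[np.arange(1, len(fertility)), np.arange(len(fertility) - 1)] = survival
    return L

pop = np.array([10.0, 9.0, 8.0, 6.0, 4.0])  # millions per age class
L = leslie(fertility, survival, n_child_cap=2)
for _ in range(10):                          # project 10 periods ahead
    pop = L @ pop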
A set $E \subset \mathbb{R}^{n}$ of locally finite perimeter is called an anisotropic minimal surface in an open set $A$ if $\Phi(E;A) \le \Phi(F;A)$ for some surface energy $\Phi(E;A) = \int_{\partial^{*}E \cap A} \| \nu_{E}\| d \mathcal{H}^{n-1}$ and all sets $F$ of locally finite perimeter such that $E \Delta F \subset \subset A$. In this short note we provide the details of a geometric proof verifying that all anisotropic surface minimizers in $\mathbb{R}^{2}$ whose corresponding integrand $\| \cdot \|$ is strictly convex are locally disjoint unions of line segments. This demonstrates that, in the plane, strict convexity of $\| \cdot \|$ is both necessary and sufficient for regularity. The corresponding Bernstein theorem is also proven: global anisotropic minimizers $E \subset \mathbb{R}^{2}$ are half-spaces.
In this article, we present a thorough analysis of the cross-plane and in-plane thermal conductivity of thin-film materials based on the 3$\omega$ method. The analysis accommodates a 2D mathematical heat transfer model of a semi-infinite body, and covers the details of sample preparation and the measurement process. The mathematical model considers a two-dimensional space for its solution. It enables the calculation of the cross-plane thermal conductivity with a single-frequency measurement, and the derived equation opens new opportunities for frequency-based and penetration-depth-dependent thermal conductivity analysis. The derived equation for the in-plane thermal conductivity depends on the cross-plane thermal conductivity. Both the in-plane and cross-plane thermal conductivities are obtained in two measurement steps: a resistance-temperature slope measurement and a second set of measurements that extracts the third harmonic of the voltage signal. We evaluated the methodology on two sets of samples, silicon nitride and boron nitride, both on silicon wafers. We observed anisotropic thermal conductivity between the cross-plane and in-plane directions, despite the isotropic nature of the thin films, which we attribute to the total anisotropy of the thin film-substrate system. The technique is conducive to the thermal analysis of next-generation nanoelectronic devices.
Composite Higgs models predict the existence of resonances. We study in detail the collider phenomenology of both the vector and fermionic resonances, including the possibility of both of them being light and within the reach of the LHC. We present current constraints from di-boson, di-lepton resonance searches and top partner pair searches on a set of simplified benchmark models based on the minimal coset $SO(5)/SO(4)$, and make projections for the reach of the HL-LHC. We find that the cascade decay channels for the vector resonances into top partners, or vice versa, can play an important role in the phenomenology of the models. We present a conservative estimate for their reach by using the same-sign di-lepton final states. As a simple extrapolation of our work, we also present the projected reach at the 27 TeV HE-LHC and a 100 TeV $pp$ collider.
A new mechanism of bilinear magnetoresistance (BMR) is studied theoretically within a minimal model describing surface electronic states in topological insulators (TIs). The BMR appears as a consequence of the second-order response to an electric field, and depends linearly on both the electric field (current) and the magnetic field. The mechanism is based on the interplay of current-induced spin polarization and scattering processes due to peculiar spin-orbit defects. The proposed mechanism is compared to that based on Fermi surface warping, and is shown to be dominant at lower Fermi energies. We provide a consistent theoretical approach based on the Green function formalism and show that magnetic-field-dependent relaxation processes in the presence of a non-equilibrium current-induced spin polarization give rise to the BMR.
We study the intermediate statistics of the spectrum of quasi-energies and of the eigenfunctions in the kicked rotator, in the case when the corresponding system is fully chaotic while quantally localized. As for the eigenphases, we find clear evidence that the spectral statistics is well described by the Brody distribution, notably better than by Izrailev's distribution, which has been proposed and broadly used to describe such cases. We also study the eigenfunctions of the Floquet operator and their localization. We show the existence of a scaling law between the repulsion parameter and the relative localization length, but only as a first-order approximation, since another parameter also plays a role. We believe, and have evidence, that a similar analysis applies in time-independent Hamiltonian systems.
In this paper, we study a wireless networked control system (WNCS) with $N \ge 2$ sub-systems sharing a common wireless channel. Each sub-system consists of a plant and a controller and the control message must be delivered from the controller to the plant through the shared wireless channel. The wireless channel is unreliable due to interference and fading. As a result, a packet can be successfully delivered in a slot with a certain probability. A network scheduling policy determines how to transmit those control messages generated by such $N$ sub-systems and directly influences the transmission delay of control messages. We first consider the case that all sub-systems have the same sampling period. We characterize the stability condition of such a WNCS under the joint design of the control policy and the network scheduling policy by means of $2^N$ linear inequalities. We further simplify the stability condition into only one linear inequality for two special cases: the perfect-channel case where the wireless channel can successfully deliver a control message with certainty in each slot, and the symmetric-structure case where all sub-systems have identical system parameters. We then consider the case that different sub-systems can have different sampling periods, where we characterize a sufficient condition for stability.
Static spherically symmetric solutions of the Einstein-Maxwell gravity with the dilaton field are described. The solutions correspond to black holes and are generalizations of the previously known dilaton black hole solution. In addition to mass and electric charge these solutions are labeled by a new parameter, the dilaton charge of the black hole. Different effects of the dilaton charge on the geometry of space-time of such black holes are studied. It is shown that in most cases the scalar curvature is divergent at the horizons. Another feature of the dilaton black hole is that there is a finite interval of values of electric charge for which no black hole can exist.
We present a method to compute the number of particles occupying spherical single-particle (SSP) levels within the energy density functional (EDF) framework. These SSP levels are defined for each nucleus by performing self-consistent mean-field calculations. The nuclear many-body states, in which the occupation numbers are evaluated, are obtained with a symmetry conserving configuration mixing (SCCM) method based on the Gogny EDF. The method allows a closer comparison between EDF results and those of the shell model with configuration mixing in large valence spaces (SM-CI), and can serve as guidance to define physically sound valence spaces for SM-CI calculations. As a first application of the method, we analyze the onset of deformation in neutron-rich $N=40$ isotones and the role of the SSP levels around this harmonic oscillator magic number, with particular emphasis on the structure of $^{64}$Cr.
Using Monte Carlo dynamics and the Monte Carlo Histogram Method, the simple three-dimensional 27-monomer lattice copolymer is examined in depth. The thermodynamic properties of various sequences are examined, contrasting the behavior of good and poor folding sequences. The good (fast-folding) sequences have sharp, well-defined thermodynamic transitions, while the slow-folding sequences have broad ones. We find two independent transitions: a collapse transition to compact states and a folding transition from compact states to the native state. The collapse transition is second-order-like, while folding is first order. The system is also studied as a function of the energy parameters. In particular, as the average energetic drive toward compactness is reduced, the two transitions approach each other. At zero average drive, collapse and folding occur almost simultaneously; i.e., the chain collapses directly into the native state. At a specific value of this energy drive the folding temperature falls below the glass point, indicating that the chain is now trapped in a local minimum. By varying one parameter in this simple model, we obtain a diverse array of behaviors which may be useful in understanding the different folding properties of various proteins.
Long Short-Term Memory (LSTM) infers long-term dependencies through a cell state maintained by the input and forget gate structures, which model a gate output as a value in [0,1] through a sigmoid function. However, due to the gradual nature of the sigmoid function, sigmoid gates are not flexible in representing multi-modality or skewness. Moreover, previous models do not capture the correlation between the gates, which would be a new way to adopt an inductive bias for the relationship between previous and current inputs. This paper proposes a new gate structure based on the bivariate Beta distribution. The proposed gate structure enables probabilistic modeling of the gates within the LSTM cell, so that modelers can customize the cell state flow with priors and distributions. Moreover, we theoretically show a higher upper bound on the gradient compared to the sigmoid function, and we empirically observe that the bivariate Beta gate structure provides higher gradient values in training. We demonstrate the effectiveness of the bivariate Beta gate structure on sentence classification, image classification, polyphonic music modeling, and image caption generation.
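One common way to obtain a correlated pair of Beta-distributed gates is the Gamma construction, in which a shared Gamma component induces the correlation. The sketch below uses that construction inside a toy LSTM-style step; the parameterization is illustrative and may differ from the paper's exact model.

# Sketch of an LSTM-style cell whose input and forget gates are drawn
# from a correlated bivariate Beta (Gamma construction). Illustrative
# parameterization, not the paper's exact model.
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    return np.log1p(np.exp(x))

def beta_gates(h, x, W):
    """Map [h, x] to three Gamma shape parameters, then form
    i = G1/(G1+G3), f = G2/(G2+G3): each marginal is Beta, and the
    shared G3 correlates the two gates."""
    a1, a2, a3 = softplus(W @ np.concatenate([h, x]))
    g1, g2, g3 = rng.gamma([a1, a2, a3])
    return g1 / (g1 + g3), g2 / (g2 + g3)

def cell_step(h, c, x, W_gate, W_cand):
    i, f = beta_gates(h, x, W_gate)
    c_tilde = np.tanh(W_cand @ np.concatenate([h, x]))
    c = f * c + i * c_tilde          # stochastic-gated cell state
    return np.tanh(c), c

h = c = np.zeros(1)                  # hidden size 1 for brevity
W_gate, W_cand = rng.normal(size=(3, 4)), rng.normal(size=(1, 4))
for x in rng.normal(size=(5, 3)):    # five toy inputs of dimension 3
    h, c = cell_step(h, c, x, W_gate, W_cand)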
Two-species condensing zero range processes (ZRPs) are interacting particle systems with two species of particles and zero range interaction exhibiting phase separation outside a domain of sub-critical densities. We prove the hydrodynamic limit of nearest neighbour mean zero two-species condensing zero range processes with bounded local jump rate for sub-critical initial profiles, i.e., for initial profiles whose image is contained in the region of sub-critical densities. The proof is based on H. T. Yau's relative entropy method, which relies on the existence of sufficiently regular solutions to the hydrodynamic equation. In the particular case of the species-blind ZRP, we prove that the solutions of the hydrodynamic equation exist globally in time and thus the hydrodynamic limit is valid for all times.
Large-volume optical coherence tomography (OCT) setups employ scanning mirrors and suffer from non-linear geometric distortion artifacts, in which the degree of distortion is determined by the maximum angles over which the mirrors rotate. In this chapter, we describe a straightforward approach to correct for these distortion artifacts, creating an alternative to previously reported ray-tracing schemes that are unable to apply these corrections in real-time. By implementing the proposed 3D recalibration algorithm on the graphics card of a standard computer, this feature can be applied in real-time. We validate the accuracy of the technique using OCT measurements of a highly curved object within a large imaging volume of 12.35 x 10.13 x 2.36 mm^3. The resulting 3D object shape measurements are compared against high-resolution and aberration-free optical profilometry measurements. Maintaining an optical resolution of <10 micron within the sample, both axially and transversally, we realized a real-time, high-resolution, large-volume OCT imaging system, capable of producing distortion corrected wide-field OCT data with a geometric surface shape accuracy of <15 micron.
We report on the recent development of a versatile analog front-end compatible with a negative-ion $\mu$-TPC for a directional dark matter search as well as a dual-phase, next-generation $\mathcal{O}$(10~kt) liquid argon TPC to study neutrino oscillations, nucleon decay, and astrophysical neutrinos. Although the operating conditions for negative-ion and liquid argon TPCs are quite different (room temperature \textit{vs.} $\sim$88~K operation, respectively), the readout electronics requirements are similar. Both require a wide dynamic range up to 1600 fC, and less than 2000--5000 e$^-$ noise for a typical signal of 80 fC with a detector capacitance of $C_{\rm det} \approx 300$~pF. In order to fulfill such challenging requirements, a prototype ASIC was newly designed using 180-nm CMOS technology. Here, we report on the performance of this ASIC, including measurements of shaping time, dynamic range, and equivalent noise charge (ENC). We also demonstrate the first operation of this ASIC on a low-pressure negative-ion $\mu$-TPC.
The nature of the interface in lateral heterostructures of 2D monolayer semiconductors, including its composition, size, and heterogeneity, critically impacts the functionalities it engenders on the 2D system for next-generation optoelectronics. Here, we use tip-enhanced Raman scattering (TERS) to characterize the interface in a single-layer MoS2/WS2 lateral heterostructure with a spatial resolution of 50 nm. Resonant and non-resonant TERS spectroscopies reveal that the interface is alloyed, with a size that varies over an order of magnitude, from 50 nm to 600 nm, within a single crystallite. Nanoscale imaging of the continuous interfacial evolution of the resonant and non-resonant Raman spectra enables the deconvolution of defect activation, resonant enhancement, and material composition for several vibrational modes in single-layer MoS2, MoxW1-xS2, and WS2. The results demonstrate the capabilities of nanoscale TERS spectroscopy to elucidate macroscopic structure-property relationships in 2D materials and to characterize lateral interfaces of 2D systems on length scales that are imperative for devices.
The presumed Wolf-Rayet star progenitors of Type Ib/c supernovae have fast, low density winds and the shock waves generated by the supernova interaction with the wind are not expected to be radiative at typical times of observation. The injected energy spectrum of radio emitting electrons typically has an observed index p=3, which is suggestive of acceleration in cosmic ray dominated shocks. The early, absorbed part of the radio light curves can be attributed to synchrotron self-absorption, which leads to constraints on the magnetic field in the emitting region and on the circumstellar density. The range of circumstellar densities inferred from the radio emission is somewhat broader than that for Galactic Wolf-Rayet stars, if similar efficiencies of synchrotron emission are assumed in the extragalactic supernovae. For the observed and expected ranges of circumstellar densities to roughly overlap, a high efficiency of magnetic field production in the shocked region is required (epsilon_B ~ 0.1). For the expected densities around a Wolf-Rayet star, a nonthermal mechanism is generally required to explain the observed X-ray luminosities of Type Ib/c supernovae. Although the inverse Compton mechanism can explain the observed X-ray emission from SN 2002ap if the wind parameters are taken from the radio model, the mechanism is not promising for other supernovae unless the postshock magnetic energy density is much smaller than the electron energy density. In some cases another mechanism is definitely needed and we suggest that it is X-ray synchrotron emission in a case where the shock wave is cosmic ray dominated so that the electron energy spectrum flattens at high energy. More comprehensive X-ray observations of a Type Ib/c supernova are needed to determine whether this suggestion is correct.
Discovering successful coordinated behaviors is a central challenge in Multi-Agent Reinforcement Learning (MARL) since it requires exploring a joint action space that grows exponentially with the number of agents. In this paper, we propose a mechanism for achieving sufficient exploration and coordination in a team of agents. Specifically, agents are rewarded for contributing to a more diversified team behavior by employing proper intrinsic motivation functions. To learn meaningful coordination protocols, we structure agents' interactions by introducing a novel framework, where at each timestep, an agent simulates counterfactual rollouts of its policy and, through a sequence of computations, assesses the gap between other agents' current behaviors and their targets. Actions that minimize the gap are considered highly influential and are rewarded. We evaluate our approach on a set of challenging tasks with sparse rewards and partial observability that require learning complex cooperative strategies under a proper exploration scheme, such as the StarCraft Multi-Agent Challenge. Our methods show significantly improved performance over different baselines across all tasks.
We study the Polonyi problem in the framework of no-scale type supergravity models. We show that the lightest superparticle (LSP) produced in the decay of the Polonyi field may contribute too much to the present density of the universe. By requiring that the LSP should not overclose the universe, we obtain a stringent constraint on the reheating temperature after the decay of the Polonyi field. We calculate the LSP density with physical parameters obtained by solving renormalization group equations in the minimal supersymmetric SU(5) model and find that the reheating temperature should be greater than about 100 MeV, which corresponds to a Polonyi mass of $O(100)$ TeV.
Time-series data is being increasingly collected and studied in several areas such as neuroscience, climate science, transportation, and social media. Discovery of complex patterns of relationships between individual time series using data-driven approaches can improve our understanding of real-world systems. While traditional approaches typically study relationships between two entire time series, many interesting relationships in real-world applications exist in small sub-intervals of time while remaining absent or feeble during other sub-intervals. In this paper, we define the notion of a sub-interval relationship (SIR) to capture interactions between two time series that are prominent only in certain sub-intervals of time. We propose a novel and efficient approach to find the most interesting SIR in a pair of time series. We evaluate our proposed approach on two real-world datasets from the climate science and neuroscience domains and demonstrate its scalability and computational efficiency. We further evaluate the discovered SIRs using a randomization-based procedure. Our results indicate the existence of several such relationships that are statistically significant, some of which also have a physical interpretation.
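As a point of reference, the definition can be illustrated by a brute-force baseline that scans all sub-intervals above a minimum length and keeps the one with the strongest correlation; the paper's contribution is a far more efficient search, which this sketch does not reproduce.

# Brute-force baseline for finding a strong sub-interval relationship
# (SIR) between two series. Illustrates the definition only.
import numpy as np

def best_sir(x, y, min_len=20):
    n, best = len(x), (0.0, None)
    for s in range(n - min_len + 1):
        for e in range(s + min_len, n + 1):
            r = np.corrcoef(x[s:e], y[s:e])[0, 1]
            if abs(r) > best[0]:
                best = (abs(r), (s, e))
    return best  # (strength, (start, end))

rng = np.random.default_rng(1)
x, y = rng.normal(size=200), rng.normal(size=200)
y[80:140] = x[80:140] + 0.2 * rng.normal(size=60)  # planted relationship
strength, (s, e) = best_sir(x, y)                  # recovers roughly [80, 140)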
We apply the method of asymptotic homogenization to metamaterials with microscopically bianisotropic inclusions to calculate a full set of constitutive parameters in the long wavelength limit. Two different implementations of electromagnetic asymptotic homogenization are presented. We test the homogenization procedure on two different metamaterial examples. Finally, the analytical solution for long wavelength homogenization of a one dimensional metamaterial with microscopically bi-isotropic inclusions is derived.
We present spatially resolved maps of six individually-detected Lyman alpha haloes (LAHs) as well as a first statistical analysis of the Lyman alpha (Lya) spectral signature in the circum-galactic medium of high-redshift star-forming galaxies using MUSE. Our resolved spectroscopic analysis of the LAHs reveals significant intrahalo variations of the Lya line profile. Using a three-dimensional two-component model for the Lya emission, we measure the full width at half maximum (FWHM), the peak velocity shift and the asymmetry of the Lya line in the core and in the halo of 19 galaxies. We find that the Lya line shape is statistically different in the halo compared to the core for ~40% of our galaxies. Similarly to object-by-object based studies and a recent resolved study using lensing, we find a correlation between the peak velocity shift and the width of the Lya line both at the interstellar and circum-galactic scales. While there is a lack of correlation between the spectral properties and the spatial scale lengths of our LAHs, we find a correlation between the width of the line in the LAH and the halo flux fraction. Interestingly, UV bright galaxies show broader, more redshifted and less asymmetric Lya lines in their haloes. The most significant correlation found is for the FWHM of the line and the UV continuum slope of the galaxy, suggesting that the redder galaxies have broader Lya lines. The generally broad and red line shapes found in the halo component suggests that the Lya haloes are powered either by scattering processes through an outflowing medium, fluorescent emission from outflowing cold clumps of gas, or a mix of both. Considering the large diversity of the Lya line profiles observed in our sample and the lack of strong correlation, the interpretation of our results is still broadly open and underlines the need for realistic spatially resolved models of the LAHs.
We present a practical method for evaluating the scattering amplitude $f_s(\theta,\phi)$ that arises in the context of the scattering of scalar, electromagnetic and gravitational planar waves by a rotating black hole. The partial-wave representation of $f_s$ is a divergent series, but $f_s$ itself diverges only at a single point on the sphere. Here we show that $f_s$ can be expressed as the product of a reduced series and a pre-factor that diverges only at this point. The coefficients of the reduced series are found iteratively as linear combinations of those in the original series, and the reduced series is shown to have amenable convergence properties. This series-reduction method has its origins in an approach originally used in electron scattering calculations in the 1950s, which we have extended to the axisymmetric context for all bosonic fields.
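In the axisymmetric (Legendre-series) case, one reduction step can be written down explicitly: multiplying f by (1 - cos theta) maps coefficients a_l to b_m = a_m - m/(2m-1) a_{m-1} - (m+1)/(2m+3) a_{m+1}, and dividing the reduced sum by (1 - cos theta) recovers f with faster convergence. The sketch below implements this single step as an illustration; the paper's method extends the idea to the rotating (non-axisymmetric) setting.

# One step of series reduction for f(theta) = sum_l a_l P_l(cos theta):
# the coefficients of (1 - cos theta) f follow from the Legendre
# recurrence x P_l = ((l+1) P_{l+1} + l P_{l-1}) / (2l+1).
import numpy as np
from numpy.polynomial.legendre import legval

def reduce_once(a):
    """Coefficients of (1 - cos(theta)) * sum_l a_l P_l(cos theta)."""
    b = np.zeros_like(a)
    for m in range(len(a)):
        b[m] = a[m]
        if m >= 1:
            b[m] -= m / (2 * m - 1) * a[m - 1]
        if m + 1 < len(a):
            b[m] -= (m + 1) / (2 * m + 3) * a[m + 1]
    return b

a = 1.0 / (1.0 + np.arange(400.0))    # slowly decaying toy coefficients
b = reduce_once(a)                    # reduced series decays faster
theta = 2.0
x = np.cos(theta)
f_reduced = legval(x, b) / (1.0 - x)  # equals legval(x, a) up to truncation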
We study the NMSSM with universal SUSY breaking terms (besides the Higgs sector) at the GUT scale. Within this constrained parameter space, it is not difficult to find a Higgs boson with a mass of about 125 GeV and an enhanced cross section in the diphoton channel. An additional lighter Higgs boson with reduced couplings and a mass <123 GeV is potentially observable at the LHC. The NMSSM-specific Yukawa couplings lambda and kappa are relatively large and tan(beta) is small, such that lambda, kappa and the top Yukawa coupling are of order 1 at the GUT scale. The lightest stop can be as light as 105 GeV, and the fine-tuning is modest. WMAP constraints can be satisfied by a dominantly higgsino-like LSP with substantial bino, wino and singlino admixtures and a mass of ~60-90 GeV, which would potentially be detectable by XENON100.
Game theory is an established branch of mathematics that offers a rich set of mathematical tools for multi-person strategic decision making that can be used to model the interactions of decision makers in security problems who compete for limited and shared resources. This article presents a review of the literature in the area of game theoretical modelling of network/cybersecurity.
We present results of the first systematic search for submillimetre continuum emission from z=2, radio-quiet, optically-luminous quasars, using SCUBA on the JCMT. We have observed a homogeneous sample of 57 quasars in the redshift range 1.5<z<3.0 - the epoch during which the comoving density of luminous AGN peaks - to make a systematic comparison with an equivalent sample at high (z>4) redshift. The target sensitivity of the survey (3sigma=10mJy at 850um) was chosen to enable efficient identification of bright submm sources suitable for detailed follow-up. Nine targets are detected, with fluxes in the range 7-17mJy. Although there is a suggestion of variation in submm detectability between z=2 and z=4, this is consistent with the K-correction of a characteristic far-infrared spectrum. Additionally, the weighted mean fluxes of non-detections at z=2 and z>4 are comparable.
Substantial changes in the generation portfolio take place due to the fast growth of renewable energy generation, of which the major types such as wind and solar power have significant forecast uncertainty. Reducing the impacts of uncertainty requires the cooperation of system participants, which are supported by proper market rules and incentives. In this paper, we propose a bilateral reserve market for variable generation (VG) producers and capacity resource providers. In this market, VG producers purchase bilateral reserve services (BRSs) to reduce potential imbalance penalties, and BRS providers earn profits on their available capacity for re-dispatch. We show in this paper that by introducing this product, the VG producers' overall imbalance costs are linked to both their forecast quality and the available system capacity, which follows the cost-causation principle. Case studies demonstrate how the proposed BRS mechanism works and its effectiveness.
Examining limitations is a crucial step in the scholarly research reviewing process, revealing aspects where a study might lack decisiveness or require enhancement. This aids readers in considering broader implications for further research. In this article, we present a novel and challenging task of Suggestive Limitation Generation (SLG) for research papers. We compile a dataset called \textbf{\textit{LimGen}}, encompassing 4068 research papers and their associated limitations from the ACL anthology. We investigate several approaches to harness large language models (LLMs) for producing suggestive limitations, by thoroughly examining the related challenges, practical insights, and potential opportunities. Our LimGen dataset and code can be accessed at \url{https://github.com/arbmf/LimGen}.
Offline reinforcement learning is important in many settings with available observational data but the inability to deploy new policies online due to safety, cost, and other concerns. Many recent advances in causal inference and machine learning target estimation of causal contrast functions such as CATE, which is sufficient for optimizing decisions and can adapt to potentially smoother structure. We develop a dynamic generalization of the R-learner (Nie and Wager 2021, Lewis and Syrgkanis 2021) for estimating and optimizing the difference of $Q^\pi$-functions, $Q^\pi(s,1)-Q^\pi(s,0)$ (which can be used to optimize multiple-valued actions). We leverage orthogonal estimation to improve convergence rates in the presence of slower nuisance estimation rates and prove consistency of policy optimization under a margin condition. The method can leverage black-box nuisance estimators of the $Q$-function and behavior policy to target estimation of a more structured $Q$-function contrast.
We propose a new framework, Translation between Augmented Natural Languages (TANL), to solve many structured prediction language tasks including joint entity and relation extraction, nested named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, and dialogue state tracking. Instead of tackling the problem by training task-specific discriminative classifiers, we frame it as a translation task between augmented natural languages, from which the task-relevant information can be easily extracted. Our approach can match or outperform task-specific models on all tasks, and in particular, achieves new state-of-the-art results on joint entity and relation extraction (CoNLL04, ADE, NYT, and ACE2005 datasets), relation classification (FewRel and TACRED), and semantic role labeling (CoNLL-2005 and CoNLL-2012). We accomplish this while using the same architecture and hyperparameters for all tasks and even when training a single model to solve all tasks at the same time (multi-task learning). Finally, we show that our framework can also significantly improve the performance in a low-resource regime, thanks to better use of label semantics.
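Schematically, the augmented output is a copy of the input sentence decorated with bracketed annotations that can be decoded back into entities and relations. The snippet below illustrates the idea with an approximate bracket syntax; the exact format used by TANL may differ in detail.

# Schematic illustration of "augmented natural language" output for joint
# entity/relation extraction. The bracket syntax is an approximation of
# the TANL format, shown only to convey the idea.
import re

sentence = "Tolkien wrote The Lord of the Rings."
augmented = ("[ Tolkien | person ] wrote "
             "[ The Lord of the Rings | book | author = Tolkien ].")

# Decode entities and relations back out of the augmented string.
for mention, annotation in re.findall(r"\[ (.+?) \| (.+?) \]", augmented):
    parts = [p.strip() for p in annotation.split("|")]
    entity_type, relations = parts[0], parts[1:]
    print(mention, entity_type, relations)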
The fully heavy axial-vector diquark-antidiquark structures $bb\overline{c} \overline{c}$ are explored by means of the QCD sum rule method. They are modeled as four-quark mesons $T_{\mathrm{1}}$ and $T_{\mathrm{2}}$ composed of $b^{T}C\sigma _{\mu \nu }\gamma _{5}b$, $\overline{c}\gamma ^{\nu }C \overline{c}^{T}$ and $b^{T}C\gamma _{\mu }\gamma _{5}b$, $\overline{c}C \overline{c}^{T}$ diquarks, respectively. The spectroscopic parameters of the tetraquarks $T_{\mathrm{1}}$ and $T_{\mathrm{2}}$ are determined in the context of the QCD two-point sum rule method. Results obtained for masses of these states $m_{1} =(12715\pm 86)~\mathrm{MeV}$ and $m_{2}=(13383\pm 92)~ \mathrm{MeV}$ are used to fix their strong decay channels. The full width $ \Gamma (T_{\mathrm{1}})$ of the diquark-antidiquark state $T_{\mathrm{1}}$ is estimated by considering the processes $T_{\mathrm{1}} \to B_{c}^{-}B_{c}^{\ast -}$ and $T_{\mathrm{1}} \to B_{c}^{\ast -}B_{c}^{\ast -} $. The decays to mesons $B_{c}^{-}B_{c}^{\ast -}$, $B_{c}^{-}(2S)B_{c}^{ \ast -}$ and $B_{c}^{\ast -}B_{c}^{\ast -}$ are employed to evaluate $\Gamma (T_{\mathrm{2}})$. Results obtained for the widths $\Gamma (T_{\mathrm{1} })=(44.3\pm 8.8)~\mathrm{MeV}$ and $\Gamma (T_{\mathrm{2}})=(82.5\pm 13.7)~ \mathrm{MeV}$ of these tetraquarks in conjunction with their masses are useful for future experimental studies of fully heavy resonances.
We give a complete analytic and geometric description of the horofunction boundary for polygonal sub-Finsler metrics---that is, those that arise as asymptotic cones of word metrics---on the Heisenberg group. We develop theory for the more general case of horofunction boundaries in homogeneous groups by connecting horofunctions to Pansu derivatives of the distance function.
The article is devoted to the simulation of viscous incompressible turbulent fluid flow based on solving the Reynolds averaged Navier-Stokes (RANS) equations with different k-omega models. The isogeometrical approach is used for the discretization based on the Galerkin method. Primary goal of using isogeometric analysis is to be always geometrically exact, independent of the discretization, and to avoid a time-consuming generation of meshes of computational domains. For higher Reynolds numbers, we use stabilization SUPG technique in equations for k and omega. The solutions are compared with the standard benchmark example of turbulent flow over a backward facing step.
Concept-based interpretations of black-box models are often more intuitive for humans to understand. The most widely adopted approach for concept-based interpretation is Concept Activation Vector (CAV). CAV relies on learning a linear relation between some latent representation of a given model and concepts. The linear separability is usually implicitly assumed but does not hold true in general. In this work, we start from the original intent of concept-based interpretation and propose Concept Gradient (CG), extending concept-based interpretation beyond linear concept functions. We show that for a general (potentially non-linear) concept, we can mathematically evaluate how a small change in a concept affects the model's prediction, which leads to an extension of gradient-based interpretation to the concept space. We demonstrate empirically that CG outperforms CAV in both toy examples and real-world datasets.
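One way to read the chain-rule idea behind CG: pull the model's input-space gradient back through the (pseudo-inverted) Jacobian of the concept map. The sketch below shows that computation with automatic differentiation; it is a simplified reading, not the authors' implementation.

# Minimal sketch of the chain-rule idea: attribute the model's
# sensitivity to (possibly non-linear) concepts by pulling the gradient
# back through the pseudo-inverse of the concept Jacobian.
import jax
import jax.numpy as jnp

def model(x):               # toy black-box score f: R^3 -> R
    return jnp.tanh(x[0] * x[1]) + x[2] ** 2

def concepts(x):            # toy non-linear concept map c: R^3 -> R^2
    return jnp.array([x[0] * x[1], x[2]])

x = jnp.array([0.5, -1.0, 2.0])
grad_f = jax.grad(model)(x)           # df/dx, shape (3,)
J_c = jax.jacobian(concepts)(x)       # dc/dx, shape (2, 3)
# Concept-space attribution: least-squares solution of J_c^T g = df/dx,
# i.e. g ~ "df/dc".
cg = jnp.linalg.pinv(J_c).T @ grad_f  # shape (2,)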
Quantum chaotic systems are conjectured to display a spectrum whose fine-grained features (gaps and correlations) are well described by Random Matrix Theory (RMT). We propose and develop a complementary version of this conjecture: quantum chaotic systems display a Lanczos spectrum whose local means and covariances are well described by RMT. To support this proposal, we first demonstrate its validity in examples of chaotic and integrable systems. We then show that for Haar-random initial states in RMTs the mean and covariance of the Lanczos spectrum suffices to produce the full long time behavior of general survival probabilities including the spectral form factor, as well as the spread complexity. In addition, for initial states with continuous overlap with energy eigenstates, we analytically find the long time averages of the probabilities of Krylov basis elements in terms of the mean Lanczos spectrum. This analysis suggests a notion of eigenstate complexity, the statistics of which differentiate integrable systems and classes of quantum chaos. Finally, we clarify the relation between spread complexity and the universality classes of RMT by exploring various values of the Dyson index and Poisson distributed spectra.
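The Lanczos spectrum in question can be generated numerically by running the Lanczos recursion on a random matrix from the appropriate ensemble. A minimal sketch for a GOE matrix and a Haar-random initial state follows; full reorthogonalization is included for numerical stability, and the ensemble size and step count are arbitrary choices.

# Numerical sketch: Lanczos coefficients (a_n, b_n) for a random state in
# a GOE random matrix -- the data whose local means and covariances the
# conjecture addresses.
import numpy as np

rng = np.random.default_rng(0)
N = 500
A = rng.normal(size=(N, N))
H = (A + A.T) / np.sqrt(2 * N)          # GOE, semicircle support ~[-2, 2]

def lanczos(H, v0, steps):
    a, b, V = [], [], [v0 / np.linalg.norm(v0)]
    for n in range(steps):
        w = H @ V[-1]
        a.append(V[-1] @ w)
        w -= a[-1] * V[-1] + (b[-1] * V[-2] if b else 0.0)
        w -= np.column_stack(V) @ (np.column_stack(V).T @ w)  # reorthogonalize
        b.append(np.linalg.norm(w))
        V.append(w / b[-1])
    return np.array(a), np.array(b)

v0 = rng.normal(size=N)                  # Haar-random initial state
a_n, b_n = lanczos(H, v0, steps=60)      # compare local means/covariances to RMT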
Massive black holes (MBHs) in galactic nuclei are believed to be surrounded by a high density stellar cluster, whose mass is mostly in hard-to-detect faint stars and compact remnants. Such dark cusps dominate the dynamics near the MBH: a dark cusp in the Galactic center (GC) of the Milky Way would strongly affect orbital tests of General Relativity there; on cosmic scales, dark cusps set the rates of gravitational wave emission events from compact remnants that spiral into MBHs, and they modify the rates of tidal disruption events, to list only some implications. A recently discovered long-period massive young binary (P_12 <~ 1 yr, M_12 ~ O(100 M_sun), T_12 ~ 6x10^6 yr), only ~0.1 pc from the Galactic MBH (Pfuhl et al 2013), sets a lower bound on the 2-body relaxation timescale there, min t_rlx ~ (P_12/M_12)^(2/3)T_12 ~ 10^7 yr, and correspondingly, an upper bound on the stellar number density, max n ~ few x 10^8/<M_star^2> 1/pc^3, based on the binary's survival against evaporation by the dark cusp. However, a conservative dynamical estimate, the drain limit, implies t_rlx > O(10^8) yr. Such massive binaries are thus too short-lived and tightly bound to constrain a dense relaxed dark cusp. We explore here in detail the use of longer-period, less massive and longer-lived binaries (P_12 ~ few yr, M_12 ~ 2-4 M_sun, T_12 ~ 10^8-10^10 yr), presently just below the detection threshold, for probing the dark cusp, and develop the framework for translating their future detections among the giants in the GC into dynamical constraints.
We introduce a one-parameter family of random infinite quadrangulations of the half-plane, which we call the uniform infinite half-planar quadrangulations with skewness (UIHPQ$_p$ for short, with $p\in[0,1/2]$ measuring the skewness). They interpolate between Kesten's tree corresponding to $p=0$ and the usual UIHPQ with a general boundary corresponding to $p=1/2$. As we make precise, these models arise as local limits of uniform quadrangulations with a boundary when their volume and perimeter grow in a properly fine-tuned way, and they represent all local limits of (sub)critical Boltzmann quadrangulations whose perimeter tend to infinity. Our main result shows that the family (UIHPQ$_p$)$_p$ approximates the Brownian half-planes BHP$_\theta$, $\theta\geq 0$, recently introduced in Baur, Miermont, and Ray (2016). For $p<1/2$, we give a description of the UIHPQ$_p$ in terms of a looptree associated to a critical two-type Galton-Watson tree conditioned to survive.
Maximal arcs in small projective Hjelmslev geometries are classified up to isomorphism, and the parameters of the associated codes are determined.
Machine learning (ML) models can underperform on certain population groups due to choices made during model development and bias inherent in the data. We categorize sources of discrimination in the ML pipeline into two classes: aleatoric discrimination, which is inherent in the data distribution, and epistemic discrimination, which is due to decisions made during model development. We quantify aleatoric discrimination by determining the performance limits of a model under fairness constraints, assuming perfect knowledge of the data distribution. We demonstrate how to characterize aleatoric discrimination by applying Blackwell's results on comparing statistical experiments. We then quantify epistemic discrimination as the gap between a model's accuracy when fairness constraints are applied and the limit posed by aleatoric discrimination. We apply this approach to benchmark existing fairness interventions and investigate fairness risks in data with missing values. Our results indicate that state-of-the-art fairness interventions are effective at removing epistemic discrimination on standard (overused) tabular datasets. However, when data has missing values, there is still significant room for improvement in handling aleatoric discrimination.
For a continuous complex-valued function g on the real line without zeros, several notions of a mean winding number are introduced. We give necessary conditions for a Toeplitz operator with matrix-valued symbol G to be semi-Fredholm in terms of mean winding numbers of det G. The matrix function G is assumed to be continuous on the real line, and no other a priori assumptions on it are made.
An additional spheroidal integral of motion and a group of dynamic symmetry in a model quantum-mechanical problem of two centres $eZ_{1}Z_{2}\omega$ with Coulomb and oscillator interactions are obtained, and the group properties of its solutions are studied. The groups P(3) \otimes P(2,1), P(5,1) and P(4,2) are considered as the dynamic symmetry groups of the problem, among which P(3) \otimes P(2,1) possesses the smallest number of parameters. The obtained results may prove useful in calculations of the energy spectra of QQq baryons and QQg mesons.
We demonstrate radio-frequency tuning of the energy of individual CdTe/ZnTe quantum dots (QDs) by Surface Acoustic Waves (SAWs). Despite the very weak piezoelectric coefficient of ZnTe, SAW in the GHz range can be launched on a ZnTe surface using interdigitated transducers deposited on a c-axis oriented ZnO layer grown on ZnTe containing CdTe QDs. The photoluminescence (PL) of individual QDs is used as a nanometer-scale sensor of the acoustic strain field. The energy of QDs is modulated by SAW in the GHz range and leads to characteristic broadening of time-integrated PL spectra. The dynamic modulation of the QD PL energy can also be detected in the time domain using phase-locked time domain spectroscopy. This technique is in particular used for monitoring complex local acoustic fields resulting from the superposition of two or more SAW pulses in a cavity. Under magnetic field, the dynamic spectral tuning of a single QD by SAW can be used to generate single photons with alternating circular polarization controlled in the GHz range.
Using a new approach to quantum mechanics we revisit Hardy's proof for Bell's theorem and point out a loophole in it. We also demonstrate on this example that quantum mechanics is a local realistic theory.
In this article, we consider a class of bi-stable reaction-diffusion equations in two components on the real line. We assume that the system is singularly perturbed, i.e. that the ratio of the diffusion coefficients is (asymptotically) small. This class admits front solutions that are asymptotically close to the (stable) front solution of the `trivial' scalar bi-stable limit system $u_t = u_{xx} + u(1-u^2)$. However, in the system these fronts can become unstable by varying parameters. This destabilization is either caused by the essential spectrum associated to the linearized stability problem, or by an eigenvalue that exists near the essential spectrum. We use the Evans function to study the various bifurcation mechanisms and establish an explicit connection between the character of the destabilization and the possible appearance of saddle-node bifurcations of heteroclinic orbits in the existence problem.
Given a user's historical interaction sequence, online novel recommendation suggests the next novel the user may be interested in. Online novel recommendation is important but underexplored. In this paper, we concentrate on recommending online novels to new users of an online novel reading platform, whose first visits to the platform occurred in the last seven days. We have two observations about online novel recommendation for new users. First, repeat novel consumption of new users is a common phenomenon. Second, interactions between users and novels are informative. To accurately predict whether a user will reconsume a novel, it is crucial to characterize each interaction at a fine-grained level. Based on these two observations, we propose a neural network for online novel recommendation, called NovelNet. NovelNet can recommend the next novel from both the user's consumed novels and new novels simultaneously. Specifically, an interaction encoder is used to obtain accurate interaction representation considering fine-grained attributes of interaction, and a pointer network with a pointwise loss is incorporated into NovelNet to recommend previously-consumed novels. Moreover, an online novel recommendation dataset is built from a well-known online novel reading platform and is released for public use as a benchmark. Experimental results on the dataset demonstrate the effectiveness of NovelNet.
We have built a CsI(Tl) gamma-ray detector array for the NPDGamma experiment to search for a small parity-violating directional asymmetry in the angular distribution of 2.2 MeV gamma-rays from the capture of polarized cold neutrons by protons with a sensitivity of several ppb. The weak pion-nucleon coupling constant can be determined from this asymmetry. The small size of the asymmetry requires a high cold neutron flux, control of systematic errors at the ppb level, and the use of current mode gamma-ray detection with vacuum photodiodes and low-noise solid-state preamplifiers. The average detector photoelectron yield was determined to be 1300 photoelectrons per MeV. The RMS width seen in the measurement is therefore dominated by the fluctuations in the number of gamma rays absorbed in the detector (counting statistics) rather than the intrinsic detector noise. The detectors were tested for noise performance, sensitivity to magnetic fields, pedestal stability and cosmic background. False asymmetries due to gain changes and electronic pickup in the detector system were measured to be consistent with zero to an accuracy of $10^{-9}$ in a few hours. We report on the design, operating criteria, and the results of measurements performed to test the detector array.
We study the inflow-outflow boundary value problem on an interval, the analog of the 1D shock tube problem for gas dynamics, for general systems of hyperbolic-parabolic conservation laws. In a first set of investigations, we study existence, uniqueness, and stability, showing in particular local existence, uniqueness, and stability of small amplitude solutions for general symmetrizable systems. In a second set of investigations, we study structure and behavior in the small- and large-viscosity limits. A phenomenon of particular interest is the generic appearance of characteristic boundary layers in the inviscid limit, arising from noncharacteristic data for the viscous problem, even of arbitrarily small amplitude. This induces an interesting new type of ``transcharacteristic'' hyperbolic boundary condition governing the formal inviscid limit.
In this note we prove the Weinstein conjecture for a class of symplectic manifolds including the uniruled manifolds based on Liu-Tian's result.
Molecular absorption lines measured along the lines of sight to distant quasars are important probes of the gas evolution in galaxies as a function of redshift. A review is made of the handful of molecular absorbing systems studied so far with the present sensitivity of mm instruments. They provide information on the chemistry of the ISM at z \sim 1 and on the physical state of the gas in terms of clumpiness, density and temperature. The CMB temperature can be derived as a function of z, and possible variations of fundamental constants can also be constrained. With the sensitivity of ALMA, many more absorbing systems can be studied, for which some predictions and perspectives are described.
Sliced inverse regression (SIR, Li 1991) is a pioneering work and the most recognized method in sufficient dimension reduction. While promising progress has been made in theory and methods of high-dimensional SIR, two remaining challenges are still nagging high-dimensional multivariate applications. First, choosing the number of slices in SIR is a difficult problem, and it depends on the sample size, the distribution of variables, and other practical considerations. Second, the extension of SIR from univariate response to multivariate is not trivial. Targeting at the same dimension reduction subspace as SIR, we propose a new slicing-free method that provides a unified solution to sufficient dimension reduction with high-dimensional covariates and univariate or multivariate response. We achieve this by adopting the recently developed martingale difference divergence matrix (MDDM, Lee & Shao 2018) and penalized eigen-decomposition algorithms. To establish the consistency of our method with a high-dimensional predictor and a multivariate response, we develop a new concentration inequality for sample MDDM around its population counterpart using theories for U-statistics, which may be of independent interest. Simulations and real data analysis demonstrate the favorable finite sample performance of the proposed method.
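The sample MDDM admits a simple closed form. The sketch below computes it and takes the leading eigenvector as the estimated dimension-reduction direction; the sign convention follows our reading of Lee & Shao (2018), and the paper's actual estimator additionally penalizes the eigen-decomposition for sparsity in high dimensions.

# Sketch of the sample martingale difference divergence matrix (MDDM):
# MDDM_n = -(1/n^2) sum_{ij} (X_i - Xbar)(X_j - Xbar)^T ||Y_i - Y_j||,
# followed by a plain (unpenalized) eigen-step.
import numpy as np

def sample_mddm(X, Y):
    n = len(X)
    Xc = X - X.mean(axis=0)
    D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)  # ||Y_i - Y_j||
    return -(Xc.T @ D @ Xc) / n ** 2

rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[0] = 1.0                 # true direction: e_1
Y = np.tanh(X @ beta)[:, None] + 0.1 * rng.normal(size=(n, 1))

M = sample_mddm(X, Y)
evals, evecs = np.linalg.eigh(M)
direction = evecs[:, -1]    # leading eigenvector estimates the SDR direction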
We propose the action for the nonrelativistic string invariant under general coordinate transformations on the string worldsheet. The Hamiltonian formulation for the nonrelativistic string is given. Particular solutions of the Euler-Lagrange equations are found in the time gauge.
We propose a new dark-state cooling method for trapped-ion systems in the Lamb-Dicke limit. By dressing the ion with a microwave field, we obtain two electromagnetically induced transparency (EIT) structures. The heating effects caused by the carrier and blue-sideband transitions vanish due to the EIT effects, and the final mean phonon numbers can be much less than the recoil limit. Our scheme is robust to fluctuations of microwave power and laser intensities, which provides a broad cooling bandwidth for cooling the motional modes of a linear ion chain. Moreover, it is well suited to cooling four-level ions on a large-scale ion chip.
We propose that the Matrix Profile data structure, conventionally applied to large-scale time-series data mining, is applicable to the analysis and suppression of cyclical error in electromechanical systems, paving the way for an intelligent family of adaptable control systems that respond to environmental error at a computational cost low enough to be practical in embedded applications. We construct and evaluate the efficacy of a control algorithm utilizing the Matrix Profile, which we call the Cyclical Electromechanical Error Denial System (CEEDS).
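The abstract does not spell out the CEEDS internals, but the Matrix Profile primitive itself is easy to demonstrate. The sketch below, using the open-source stumpy library and a synthetic sensor trace (both our choices, not the paper's), locates a recurring error motif as the subsequence with the smallest profile distance.

import numpy as np
import stumpy

# Synthetic sensor trace: a repeating (cyclical) error pattern buried in noise.
rng = np.random.default_rng(1)
t = np.arange(5000)
signal = 0.3 * rng.normal(size=t.size)
signal += np.where((t % 500) < 80, np.sin(2 * np.pi * (t % 500) / 80), 0.0)

m = 80                                   # subsequence (window) length
mp = stumpy.stump(signal, m)             # columns: [profile distance, nn index, left, right]
motif_idx = int(np.argmin(mp[:, 0].astype(float)))   # most self-similar subsequence
match_idx = int(mp[motif_idx, 1])        # its nearest repeat elsewhere in the trace
# A controller can now subtract, or gain-schedule against, the recurring pattern.

Once the motif and its recurrence period are known, a correction can be applied cheaply online, which is the low-computational-cost property emphasized for embedded use.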
Inspired by the works of L. Carlitz and Z.-W. Sun on cyclotomic matrices, in this paper, we investigate certain cyclotomic matrices involving Gauss sums over finite fields, which can be viewed as finite field analogues of certain matrices related to the Gamma function. For example, let $q=p^n$ be an odd prime power with $p$ prime and $n\in\mathbb{Z}^+$. Let $\zeta_p=e^{2\pi{\bf i}/p}$ and let $\chi$ be a generator of the group of all multiplicative characters of the finite field $\mathbb{F}_q$. For the Gauss sum $$G_q(\chi^{r})=\sum_{x\in\mathbb{F}_q}\chi^{r}(x)\zeta_p^{{\rm Tr}_{\mathbb{F}_q/\mathbb{F}_p}(x)},$$ we prove that $$\det \left[G_q(\chi^{2i+2j})\right]_{0\le i,j\le (q-3)/2}=(-1)^{\alpha_p}\left(\frac{q-1}{2}\right)^{\frac{q-1}{2}}2^{\frac{p^{n-1}-1}{2}},$$ where $$\alpha_p= \begin{cases} 1 & \mbox{if}\ n\equiv 1\pmod 2,\\ (p^2+7)/8 & \mbox{if}\ n\equiv 0\pmod 2. \end{cases}$$
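The identity is easy to sanity-check numerically for $q = p$ prime (so $n = 1$ and the predicted determinant is $-((p-1)/2)^{(p-1)/2}$). The sketch below, with one concrete choice of multiplicative generator and the convention $\chi^r(0)=0$, is ours and only verifies small cases.

import numpy as np
from sympy.ntheory import primitive_root

def check_gauss_det(p):
    g = primitive_root(p)                          # generator of F_p^*
    ind = {pow(g, k, p): k for k in range(p - 1)}  # discrete logarithm table
    zeta = np.exp(2j * np.pi / p)
    def G(r):                                      # Gauss sum G_p(chi^r), chi(g^k) = e^{2 pi i k/(p-1)}
        return sum(np.exp(2j * np.pi * r * ind[x] / (p - 1)) * zeta**x
                   for x in range(1, p))           # chi^r(0) = 0 by convention
    m = (p - 1) // 2                               # indices 0 <= i, j <= (p-3)/2
    M = np.array([[G(2 * i + 2 * j) for j in range(m)] for i in range(m)])
    return np.linalg.det(M).real, -((p - 1) / 2) ** ((p - 1) / 2)

print(check_gauss_det(5))   # both values are approximately -4
print(check_gauss_det(7))   # both values are approximately -27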
Let $\Sigma$ be a surface with a symplectic form, let $\phi$ be a symplectomorphism of $\Sigma$, and let $Y$ be the mapping torus of $\phi$. We show that the dimensions of moduli spaces of embedded pseudoholomorphic curves in $\mathbb{R}\times Y$, with cylindrical ends asymptotic to periodic orbits of $\phi$ or multiple covers thereof, are bounded from above by an additive relative index. We deduce some compactness results for these moduli spaces. This paper establishes some of the foundations for a program with Michael Thaddeus, to understand the Seiberg-Witten Floer homology of $Y$ in terms of such pseudoholomorphic curves. Analogues of our results should also hold in three-dimensional contact homology.
In this analytical study, we present a new solution procedure aimed at obtaining the coordinates of a small mass m that moves around the primary M_Sun, referred to the non-inertial frame of the restricted two-body problem (R2BP) with a modified potential function (taking into account the components of the variable velocity of the central body M_Sun) in place of the classical potential function of the Kepler formulation of the R2BP. The system of equations of motion is explored with respect to the existence of an analytical presentation of the solution in polar coordinates with radial distance r = r(t). We obtain an analytical formula for the function t = t(r) via an appropriate elliptic integral; inverting this dependence to r = r(t), we can obtain the time dependence of the polar angle as well. We also point out how to express the components of the solution (including the initial conditions) in polar rather than Cartesian coordinates.
The microscopic modeling of spin-orbit entangled $j=1/2$ Mott insulators such as the layered hexagonal iridates Na$_2$IrO$_3$ and Li$_2$IrO$_3$ has spurred an interest in the physics of Heisenberg-Kitaev models. Here we explore the effect of lattice distortions on the formation of the collective spin-orbital states, which include not only conventionally ordered phases but also gapped and gapless spin-orbital liquids. In particular, we demonstrate that in the presence of spatial anisotropies of the exchange couplings, conventionally ordered states are formed through an order-by-disorder selection which is sensitive not only to the type of exchange anisotropy but also to the relative strength of the Heisenberg and Kitaev couplings. The spin-orbital liquid phases of the Kitaev limit -- a gapless phase in the vicinity of spatially isotropic couplings and a gapped Z$_2$ phase for a dominant spatial anisotropy of the exchange couplings -- show vastly different sensitivities to the inclusion of a Heisenberg exchange. While the gapless phase is remarkably stable, the gapped Z$_2$ phase quickly breaks down in what might be a rather unconventional phase transition driven by the simultaneous condensation of its elementary excitations.
Scores of ongoing microlensing events are now announced yearly by the microlensing discovery teams OGLE, MACHO and EROS. These early warning systems have allowed other international microlensing networks to focus considerable resources on intense photometric - and occasionally spectroscopic - monitoring of microlensing events. Early results include: metallicity measurements of main-sequence Galactic bulge stars; limb-darkening determinations for stars in the Bulge and Small Magellanic Cloud; proper motion measurements that constrain microlens identity; and constraints on Jovian-mass planets orbiting (presumably stellar) lenses. These results and auxiliary science such as variable star studies and optical identification of gamma-ray bursts are reviewed.
We statistically analyse a recent sample of data points measuring the fine-structure constant $\alpha$ (relative to the terrestrial value) in quasar absorption systems. Using different statistical techniques, we find general agreement with previous authors that a dipole model is a well-justified fit to the data. We determine the significance of the dipole fit relative to that of a simple monopole fit, discuss the consistency of the interpretation, and test alternate models for potential variation of $\alpha$ against the data. Using a simple analysis we find that the monopole term (the constant offset in $\Delta\alpha/\alpha$) may be caused by non-terrestrial magnesium isotope abundances in the absorbers. Finally we test the domain-wall model against the data.
In goal-oriented reinforcement learning, relabeling the raw goals in past experience to provide agents with hindsight ability is a major solution to the reward-sparsity problem. In this paper, to enhance the diversity of relabeled goals, we develop FGI (Foresight Goal Inference), a new relabeling strategy that relabels goals by looking into the future with a learned dynamics model. Besides, to improve sample efficiency, we propose to use the dynamics model to generate simulated trajectories for policy training. By integrating these two improvements, we introduce the MapGo framework (Model-Assisted Policy Optimization for Goal-oriented tasks). In our experiments, we first show the effectiveness of the FGI strategy compared with the hindsight one, and then show that the MapGo framework achieves higher sample efficiency when compared to model-free baselines on a set of complicated tasks.
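The abstract does not give FGI's implementation details; the following is a minimal, hypothetical sketch of the foresight idea (function and argument names are ours): instead of relabeling with a goal already achieved in the stored trajectory, roll the learned dynamics model forward and relabel with a goal the agent is predicted to reach.

def foresight_relabel(trajectory, dynamics_model, policy, achieved_goal, horizon=5):
    """Trajectory: list of dicts with keys "state", "action", "next_state", "goal"."""
    state = trajectory[-1]["next_state"]
    for _ in range(horizon):                       # model-based foresight rollout
        state = dynamics_model(state, policy(state))
    g = achieved_goal(state)                       # project predicted state to goal space
    return [{**tr, "goal": g} for tr in trajectory]

The same learned model can then generate simulated trajectories for policy training, which is the second, sample-efficiency half of MapGo.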
We study heterogeneity in the effect of a mindset intervention on student-level performance through an observational dataset from the National Study of Learning Mindsets (NSLM). Our analysis uses machine learning (ML) to address the following associated problems: assessing treatment group overlap and covariate balance, imputing conditional average treatment effects, and interpreting imputed effects. By comparing several different model families we illustrate the flexibility of both off-the-shelf and purpose-built estimators. We find that the mindset intervention has a positive average effect of 0.26, 95%-CI [0.22, 0.30], and that heterogeneity in the range of [0.1, 0.4] is moderated by school-level achievement level, poverty concentration, urbanicity, and student prior expectations.
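As one example of the off-the-shelf estimators being compared in such analyses, here is a minimal "T-learner" sketch for imputing conditional average treatment effects; the model family (gradient boosting) and the construction are generic illustrations, not the study's specific estimators.

from sklearn.ensemble import GradientBoostingRegressor

def t_learner_cate(X, t, y):
    """X, t, y: numpy arrays of covariates, binary treatment, and outcome.
    Fit separate outcome models for treated (t == 1) and control (t == 0)
    units; impute CATE as the difference of their predictions."""
    mu1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
    mu0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
    return mu1.predict(X) - mu0.predict(X)

Overlap and covariate-balance checks would precede such imputation in the workflow the abstract describes.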
In recent years, deep neural network (DNN) based approaches have achieved state-of-the-art performance for music source separation (MSS). Although previous methods have addressed large receptive field modeling in various ways, the temporal and frequency correlations of the music spectrogram with repeated patterns have not been explicitly explored for the MSS task. In this paper, a temporal-frequency attention module is proposed to model the spectrogram correlations along both the temporal and frequency dimensions. Moreover, a multi-scale attention is proposed to effectively capture the correlations for music signals. The experimental results on the MUSDB18 dataset show that the proposed method outperforms the existing state-of-the-art systems with 9.51 dB signal-to-distortion ratio (SDR) on separating the vocal stems, which is the primary practical application of MSS.
Many panel studies collect refreshment samples---new, randomly sampled respondents who complete the questionnaire at the same time as a subsequent wave of the panel. With appropriate modeling, these samples can be leveraged to correct inferences for biases caused by non-ignorable attrition. We present such a model when the panel includes many categorical survey variables. The model relies on a Bayesian latent pattern mixture model, in which an indicator for attrition and the survey variables are modeled jointly via a latent class model. We allow the multinomial probabilities within classes to depend on the attrition indicator, which offers additional flexibility over standard applications of latent class models. We present results of simulation studies that illustrate the benefits of this flexibility. We apply the model to correct attrition bias in an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
The Einstein equations are solved algebraically to model a hybrid astrophysical compact object consisting of a preon gas core, a mantle of electrically charged hot quark-gluon plasma, and an outer envelope of charged hadronic matter which is matched to an exterior Reissner-Nordstrom vacuum. The piecewise-continuous metric and the pressure and density functions consist of polynomials that are everywhere well-behaved. Boundary conditions at each interface yield estimates for physical parameters applicable to each layer, and to the star as a whole.
In this paper, we study the local exact controllability to special trajectories of the micropolar fluid system in dimensions d = 2 and d = 3. We show that controllability is possible when acting only on one velocity.
In this paper, an analytical method that solves the forward displacement problem of several common spherical parallel manipulators (SPMs) is presented. The method uses quaternion algebra to restate the problem as a system of four quadrics in four variables, and applies an algebraic geometry result by Dixon from 1908 to solve it. In addition, a case study is presented for a specific SPM.
The study of Galactic cosmic-ray electrons (CREs) saw important developments in recent years, with the assumption that positrons are produced only in interactions of hadronic cosmic rays with interstellar matter challenged by new measurements of the CRE spectrum and related quantities. Indeed, all recent experiments seem to confirm a hardening in the positron spectrum, a feature that is in clear contrast with the all-secondaries hypothesis, even if significant disagreements remain about the CRE spectral behavior and the possible presence of spectral features. Together with the insufficient precision of current measurements, these disagreements prevent the identification of the primary positron source, with models involving dark matter or astrophysical sources such as supernova remnants and pulsar wind nebulae all able to explain current data. The Fermi-LAT contribution to CRE studies was fundamental, with the 2009 measurement of the positron-plus-electron spectrum extended to the 7 GeV - 1 TeV range with statistics already exceeding previous results by many orders of magnitude; since then, the statistics have increased further, while the LAT event reconstruction has been significantly improved. In this article the reader will find an extensive historical review of CRE science, a summary of the history of gamma-ray astronomy before Fermi, an accurate description of the LAT and of its data analysis, a review of the present knowledge of the CRE spectrum and of the theories that try to explain it, and finally a description of the changes and improvements introduced in the LAT event-reconstruction process at the beginning of 2015.
We discuss possible non-standard contributions to the top-quark width, particularly the virtual effects on the standard decay $t\rightarrow W^+\,b$ within the context of the MSSM. We also place a renewed emphasis on the unconventional mode $t\rightarrow H^+\,b$ in the light of recent analyses of $Z$-boson observables. It turns out that in the region of parameter space highlighted by $Z$-boson physics, the charged Higgs mode should exhibit an appreciable branching fraction as compared to the standard decay of the top quark. Remarkably enough, the corresponding quantum effects in this region are also rather large, slowly decoupling, and most likely resolvable in the next generation of experiments at the Tevatron and at the LHC.
Contemporary quantum devices are reaching new limits in size and complexity, allowing for the experimental exploration of emergent quantum modes. However, this increased complexity introduces significant challenges in device tuning and control. Here, we demonstrate autonomous tuning of emergent Majorana zero modes in a minimal realization of a Kitaev chain. We achieve this task using cross-platform transfer learning. First, we train a tuning model on a theory model. Next, we retrain it using a Kitaev chain realization in a two-dimensional electron gas. Finally, we apply this model to tune a Kitaev chain realized in quantum dots coupled through a semiconductor-superconductor section in a one-dimensional nanowire. Utilizing a convolutional neural network, we predict the tunneling and Cooper pair splitting rates from differential conductance measurements, employing these predictions to adjust the electrochemical potential to a Majorana sweet spot. The algorithm successfully converges to the immediate vicinity of a sweet spot (within 1.5 mV in 67.6% of attempts and within 4.5 mV in 80.9% of cases), typically finding a sweet spot in 45 minutes or less. This advancement is a stepping stone towards autonomous tuning of emergent modes in interacting systems, and towards foundational tuning machine learning models that can be deployed across a range of experimental platforms.
A nitrogen gas Raman cell system has been constructed to shift a 70 J, 527 nm laser beam to 600 nm with 20 J of energy. The 600 nm probe and a 200 J, 527 nm pump beam were optically mixed in a laser-produced (gas jet) plasma. The beating of the two laser beams formed a ponderomotive force that can drive Kinetic Electrostatic Electron Nonlinear (KEEN) waves, discovered in Vlasov-Poisson simulations by Afeyan et al [1,2]. KEEN waves were detected in these experiments where traditional plasma theory would declare there to be a spectral gap (i.e., no linear waves possible). The detection was done using Thomson scattering with probe wavelengths of both 351 nm and 263.5 nm.
Regular incidence complexes are combinatorial incidence structures generalizing regular convex polytopes, regular complex polytopes, various types of incidence geometries, and many other highly symmetric objects. The special case of abstract regular polytopes has been well-studied. The paper describes the combinatorial structure of a regular incidence complex in terms of a system of distinguished generating subgroups of its automorphism group or a flag-transitive subgroup. Then the groups admitting a flag-transitive action on an incidence complex are characterized as generalized string C-groups. Further, extensions of regular incidence complexes are studied, and certain incidence complexes particularly close to abstract polytopes, called abstract polytope complexes, are investigated.
In estimating the causal effect of a continuous exposure or treatment, it is important to control for all confounding factors. However, most existing methods require a parametric specification for how control variables influence the outcome or generalized propensity score, and inference on treatment effects is usually sensitive to this choice. Additionally, it is often the goal to estimate how the treatment effect varies across observed units. To address this gap, we propose a semiparametric model using Bayesian tree ensembles for estimating the causal effect of a continuous treatment or exposure which (i) does not require a priori parametric specification of the influence of control variables, and (ii) allows for identification of effect modification by pre-specified moderators. The main parametric assumption we make is that the effect of the exposure on the outcome is linear, with the steepness of this relationship determined by a nonparametric function of the moderators, and we provide heuristics to diagnose the validity of this assumption. We apply our methods to revisit a 2001 study of how abortion rates affect the incidence of crime.
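To make the identifying assumption concrete, here is a small simulation of the assumed outcome form, y = f(X) + tau(M) * T + noise, with the outcome linear in the exposure T and the slope tau a function of the moderators; the toy functions below stand in for the Bayesian tree ensembles (tree-based priors on f and tau) that the paper actually uses.

import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))             # control variables (confounders)
M = rng.normal(size=(n, 2))             # pre-specified effect moderators
T = X[:, 0] + rng.normal(size=n)        # continuous exposure, confounded by X

f = np.sin(X[:, 0]) + X[:, 1] ** 2      # nonparametric control surface f(X)
tau = 0.5 + 0.3 * M[:, 0]               # moderator-dependent slope tau(M)
y = f + tau * T + rng.normal(scale=0.5, size=n)   # outcome, linear in T given X and M

The heuristics the authors propose diagnose precisely whether this linear-in-T form is adequate for the data at hand.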
We report tentative evidence for a cold stellar stream in the ultra-diffuse galaxy NGC1052-DF2. If confirmed, this stream (which we refer to as "The Maybe Stream") would be the first cold stellar stream detected outside of the Local Group. The candidate stream is very narrow and has an unusual and highly curved shape.
Contrary to a previously published claim it is found that the spheroidal galaxies NGC 147 and NGC 185 probably form a stable binary system. Distance estimates place this pair on the near side of the Andromeda subgroup of the Local Group. The fact that this system has probably remained stable over a Hubble time suggests that it does not have a plunging orbit that brings it very close to M 31. It is noted that the only two Local Group galaxy pairs, in which the components have comparable masses, also have similar morphological types. NGC 147 and NGC 185 are both spheroidals, while the LMC and SMC are both irregulars. This suggests that protogalaxies of similar mass that are spawned in similar environments evolve into objects having similar morphologies.
In this paper we construct massive supermultiplets out of an appropriate set of massless ones, in the same way as a massive spin-s particle can be constructed out of massless spin s, s-1, ... ones, leading to a gauge-invariant description of the massive particle. Mainly we consider massive spin-3/2 supermultiplets in flat d=4 Minkowski space, both without central charge for N=1,2,3 and with central charge for N=2,4. Besides, we give two examples of massive N=1 supermultiplets with spins 3/2 and 2 in AdS_4 space.
The FIND algorithm (also called Quickselect) is a fundamental algorithm to select ranks or quantiles within a set of data. It was shown by Gr\"ubel and R\"osler that the number of key comparisons required by Find as a process of the quantiles $\alpha\in[0,1]$ in a natural probabilistic model converges after normalization in distribution within the c\`adl\`ag space $D[0,1]$ endowed with the Skorokhod metric. We show that the process of the residuals in the latter convergence after normalization converges in distribution to a mixture of Gaussian processes in $D[0,1]$ and identify the limit's conditional covariance functions. A similar result holds for the related algorithm QuickVal. Our method extends to other cost measures such as the number of swaps (key exchanges) required by Find or cost measures which are based on key comparisons but take into account that the cost of a comparison between two keys may depend on their values, an example being the number of bit comparisons needed to compare keys given by their bit expansions.
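For readers unfamiliar with the algorithm, a short Quickselect sketch with the classical comparison count (one three-way comparison of the pivot against each other key) is given below; the test harness is ours, and the constant quoted in the comment is the well-known mean 2 + 2 ln 2 for median selection.

import random

def find(data, k, stats):
    """Return the k-th smallest key (0-indexed), counting key comparisons."""
    pivot = random.choice(data)
    stats["comparisons"] += len(data) - 1      # pivot vs. every other key
    lt = [x for x in data if x < pivot]
    gt = [x for x in data if x > pivot]
    if k < len(lt):
        return find(lt, k, stats)
    if k >= len(data) - len(gt):
        return find(gt, k - (len(data) - len(gt)), stats)
    return pivot                               # k falls among keys equal to the pivot

n, alpha = 10**5, 0.5
stats = {"comparisons": 0}
find([random.random() for _ in range(n)], int(alpha * (n - 1)), stats)
print(stats["comparisons"] / n)   # fluctuates around 2 + 2*ln(2) ≈ 3.39 for the median

The residual process studied in the text measures the second-order fluctuations of such normalized costs around their Grübel-Rösler limit.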
Graph-based clustering plays an important role in the clustering area. Recent studies on graph convolutional neural networks have achieved impressive success on graph-structured data. However, in general clustering tasks, no graph structure is given, so the strategy used to construct a graph is crucial for performance. How to extend graph convolutional networks to general clustering tasks is therefore an attractive problem. In this paper, we propose a graph auto-encoder for general data clustering, which constructs the graph adaptively according to the generative perspective of graphs. The adaptive process is designed to induce the model to exploit the high-level information behind the data and to utilize the non-Euclidean structure sufficiently. We further design a novel mechanism, with rigorous analysis, to avoid the collapse caused by the adaptive construction. By combining a generative model for network embedding with graph-based clustering, we develop a graph auto-encoder with a novel decoder that performs well in scenarios where weighted graphs are used. Extensive experiments demonstrate the superiority of our model.
Utilizing multiple trajectories of a dynamical-system model provides several benefits for the approximation of time series. For short-term prediction, high accuracy can be achieved by switching to a new trajectory at any time. Different long-term trends (tendencies toward different stationary points) of the phase portrait characterize different scenarios of the process realization as influenced by externalities. Analysis of the dynamical system's phase portrait helps to see whether the equations properly describe reality. We also extend the dynamical-systems approach (discussed in \cite{R5}) to dynamical systems with external control. We illustrate these ideas with new examples drawn from data of the HOMES.mil rental-property platform. We also compare the qualitative properties of the phase portraits of the HOMES.mil and Wikipedia.org platforms and the corresponding differences between the two platforms' users. In a final example with COVID-19 data, we discuss the high accuracy of short-term predictions of confirmed infection, recovery, and death cases in various countries.
Assuming an effective quadratic Hamiltonian, we derive an approximate, linear stochastic equation of motion for the density fluctuations in liquids composed of overdamped Brownian particles. From this approach, time-dependent two-point correlation functions (such as the intermediate scattering function) are derived. We show that this correlation function is exact at short times, for any interaction and, in particular, for arbitrary external potentials, so that it applies to confined systems. Furthermore, we discuss the relation of this approach to previous ones, such as dynamical density functional theory as well as the formally exact treatment. This approach, inspired by the well-known Landau-Ginzburg Hamiltonians and the corresponding "Model B" equation of motion, may be seen as their microscopic version, containing information about the details at the particle level.
Electromagnetic metasurfaces have attracted significant interest recently due to their low profile and advantageous applications. Practically, many metasurface designs start with a set of constraints for the radiated far-field, such as main-beam direction(s) and side lobe levels, and end with a non-uniform physical structure for the surface. This problem is quite challenging, since the required tangential field transformations are not completely known when only constraints are placed on the scattered fields. Hence, the required surface properties cannot be solved for analytically. Moreover, the translation of the desired surface properties to the physical unit cells can be time-consuming and difficult, as it is often a one-to-many mapping in a large solution space. Here, we divide the inverse design process into two steps: a macroscopic and microscopic design step. In the former, we use an iterative optimization process to find the surface properties that radiate a far-field pattern that complies with specified constraints. This iterative process exploits non-radiating currents to ensure a passive and lossless design. In the microscopic step, these optimized surface properties are realized with physical unit cells using machine learning surrogate models. The effectiveness of this end-to-end synthesis process is demonstrated through measurement results of a beam-splitting prototype.
For a symplectic isotopy on the two-dimensional disc we show that the classical spectral invariants of Viterbo [20] can be extended in a meaningful way to {\it non-compactly} supported Hamiltonians. We establish some basic properties of these extended invariants and as an application we show that Hutchings' inequality in [8] between the Calabi invariant and the mean action spectrum holds without any assumptions on the isotopy; in [8] it is assumed that the Calabi invariant is less than the rotation number (or action) on the boundary.
Recent spectropolarimetric surveys (MiMeS, BOB) have revealed that approximately 7% of massive stars host stable, surface dipolar magnetic fields with strengths on the order of kG. These fields channel the dense radiatively driven stellar wind into a circumstellar magnetosphere. Wind-sensitive UV spectral lines can probe the density and velocity structure of massive star magnetospheres, providing insight into wind-field interactions. To date, large-scale magnetohydrodynamic modeling of this phenomenon has been limited by the associated computational cost. Our analysis, using the Analytic Dynamical Magnetosphere model, solves this problem by applying a simple analytic prescription to efficiently calculate synthetic UV spectral lines. It can therefore be applied in the context of a larger parameter study to derive the wind properties for the population of known magnetic O stars. We also present the latest UV spectra of the magnetic O star NGC 1624-2 obtained with HST/COS, which test the limits of our models and suggest a particularly complex magnetospheric structure for this archetypal object.
Motivated to measure the QED birefringence and to detect pseudoscalar-photon interaction, we started to build up the Q & A experiment (QED [Quantum Electrodynamics] and Axion experiment) in 1994. In this talk, we first review our 3.5 m Fabry-Perot interferometer together with our results of measuring the Cotton-Mouton effects of gases. We are upgrading our interferometer to 7 m arm length with a new 1.8 m, 2.3 T permanent magnet capable of rotation up to 13 cycles per second. We will use a 532 nm Nd:YAG laser as the light source with a cavity finesse around 100,000, and aim at 10 nrad/Hz^{1/2} optical sensitivity. With all these achieved and the vacuum upgraded, the QED birefringence would be measured to 28% in about 50 days. Along the way, we should be able to improve on the dichroism detection significantly.
We prove a necessary condition for a dynamic integro-differential equation to be an Euler-Lagrange equation. New and interesting results for the discrete and quantum calculus are obtained as particular cases. An example of a second order dynamic equation, which is not an Euler-Lagrange equation on an arbitrary time scale, is given.
Following the success of the so-called algebraic approach to the study of decision constraint satisfaction problems (CSPs), exact optimization of valued CSPs, and most recently promise CSPs, we propose an algebraic framework for valued promise CSPs. To every valued promise CSP we associate an algebraic object, its so-called valued minion. Our main result shows that the existence of a homomorphism between the associated valued minions implies a polynomial-time reduction between the original CSPs. We also show that this general reduction theorem includes important inapproximability results, for instance, the inapproximability of almost solvable systems of linear equations beyond the random assignment threshold.
Message-passing methods provide a powerful approach for calculating the expected size of cascades either on random networks (e.g., drawn from a configuration-model ensemble or its generalizations) asymptotically as the number $N$ of nodes becomes infinite or on specific finite-size networks. We review the message-passing approach and show how to derive it for configuration-model networks using the methods of (Dhar et al., 1997) and (Gleeson, 2008). Using this approach, we explain for such networks how to determine an analytical expression for a "cascade condition", which determines whether a global cascade will occur. We extend this approach to the message-passing methods for specific finite-size networks (Shrestha and Moore, 2014; Lokhov et al., 2015), and we derive a generalized cascade condition. Throughout this chapter, we illustrate these ideas using the Watts threshold model.
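As a concrete instance, here is a minimal message-passing sketch for the Watts threshold model on a configuration-model network with Poisson(z) degrees and a uniform threshold phi (parameter values and function names are ours): iterate the edge-message probability q to its fixed point, then read off the expected final cascade size rho; the derivative-at-zero check in the last lines of the function is the cascade condition discussed in the text.

import numpy as np
from math import comb
from scipy.stats import poisson

def watts_cascade(z, phi, kmax=60, seed=1e-4, iters=400):
    k = np.arange(kmax + 1)
    pk = poisson.pmf(k, z)                                         # degree distribution
    F = lambda m, deg: 1.0 if deg > 0 and m >= phi * deg else 0.0  # threshold response
    q = seed
    for _ in range(iters):                                         # edge-message fixed point
        q = seed + (1 - seed) * sum(
            (d * pk[d] / z) * sum(comb(d - 1, m) * q**m * (1 - q)**(d - 1 - m) * F(m, d)
                                  for m in range(d))
            for d in range(1, kmax + 1))
    rho = seed + (1 - seed) * sum(                                 # expected final cascade size
        pk[d] * sum(comb(d, m) * q**m * (1 - q)**(d - m) * F(m, d)
                    for m in range(d + 1))
        for d in range(kmax + 1))
    cascade_condition = sum((d * pk[d] / z) * (d - 1) * F(1, d)
                            for d in range(1, kmax + 1)) > 1
    return q, rho, cascade_condition

print(watts_cascade(z=4.0, phi=0.18))   # parameters inside the cascade window: rho near 1

The same fixed-point structure carries over to the finite-size, single-network message-passing equations, with one message per directed edge instead of one scalar q.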
Attributed network embedding has attracted plenty of interest in recent years. It aims to learn task-independent, low-dimensional, and continuous vectors for nodes that preserve both topology and attribute information. Most of the existing methods, such as random-walk based methods and GCNs, mainly focus on local information, i.e., the attributes of the neighbours. Thus, they have been well studied for assortative networks (i.e., networks with communities) but neglect disassortative networks (i.e., networks with multipartite, hub, and hybrid structures), which are common in the real world. To model both assortative and disassortative networks, we propose a block-based generative model for attributed network embedding from a probability perspective. Specifically, the nodes are assigned to several blocks wherein the nodes in the same block share similar linkage patterns. These patterns can define assortative networks containing communities or disassortative networks with multipartite, hub, or any hybrid structures. To preserve the attribute information, we assume that each node has a hidden embedding related to its assigned block. We use a neural network to characterize the nonlinearity between node embeddings and node attributes. We perform extensive experiments on real-world and synthetic attributed networks. The results show that our proposed method consistently outperforms state-of-the-art embedding methods for both clustering and classification tasks, especially on disassortative networks.
During the shock-wave propagation in a core-collapse supernova (SN), matter turbulence may affect neutrino flavor conversion probabilities. Such effects have been usually studied by adding parametrized small-scale random fluctuations (with arbitrary amplitude) on top of coarse, spherically symmetric matter density profiles. Recently, however, two-dimensional (2D) SN models have reached a space resolution high enough to directly trace anisotropic density profiles, down to scales smaller than the typical neutrino oscillation length. In this context, we analyze the statistical properties of a large set of SN matter density profiles obtained in a high-resolution 2D simulation, focusing on a post-bounce time (2 s) suited to study shock-wave effects on neutrino propagation on scales as small as O(100) km and possibly below. We clearly find the imprint of a broken (Kolmogorov-Kraichnan) power-law structure, as generically expected in 2D turbulence spectra. We then compute the flavor evolution of SN neutrinos along representative realizations of the turbulent matter density profiles, and observe no or modest damping of the neutrino crossing probabilities on their way through the shock wave. In order to check the effect of possibly unresolved fluctuations at scales below O(100) km, we also apply a randomization procedure anchored to the power spectrum calculated from the simulation, and find consistent results within $\pm 1\sigma$ fluctuations. These results show the importance of anchoring turbulence effects on SN neutrinos to realistic, fine-grained SN models.
We are concerned with spherically symmetric solutions of the Euler equations for multidimensional compressible fluids, which are motivated by many important physical situations. Various lines of evidence indicate that spherically symmetric solutions of the compressible Euler equations may blow up near the origin at certain time under certain circumstances. The central feature is the strengthening of waves as they move radially inward. A longstanding open, fundamental question is whether concentration could form at the origin. In this paper, we develop a method of vanishing viscosity and related estimate techniques for viscosity approximate solutions, and establish the convergence of the approximate solutions to a global finite-energy entropy solution of the compressible Euler equations with spherical symmetry and large initial data. This indicates that concentration does not form in the vanishing viscosity limit, even though the density may blow up at certain time. To achieve this, we first construct global smooth solutions of appropriate initial-boundary value problems for the Euler equations with designed viscosity terms, an approximate pressure function, and boundary conditions, and then we establish the strong convergence of the viscosity approximate solutions to a finite-energy entropy solution of the Euler equations.
Multistate Markov models are a canonical parametric approach for data modeling of observed or latent stochastic processes supported on a finite state space. Continuous-time Markov processes describe data that are observed irregularly over time, as is often the case in longitudinal medical data, for example. Assuming that a continuous-time Markov process is time-homogeneous, a closed-form likelihood function can be derived from the Kolmogorov forward equations -- a system of differential equations with a well-known matrix-exponential solution. Unfortunately, however, the forward equations do not admit an analytical solution for continuous-time, time-inhomogeneous Markov processes, and so researchers and practitioners often make the simplifying assumption that the process is piecewise time-homogeneous. In this paper, we provide intuitions and illustrations of the potential biases in parameter estimation that may ensue in the more realistic scenario that the piecewise-homogeneous assumption is violated, and we advocate for a solution for likelihood computation in a truly time-inhomogeneous fashion. Particular focus is afforded to the context of multistate Markov models that allow for state-label misclassification, which applies more broadly to hidden Markov models (HMMs); Bayesian computation bypasses the need for the computationally demanding numerical gradient approximations otherwise required to obtain maximum likelihood estimates (MLEs). Supplemental materials are available online.
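For the time-homogeneous building block, a minimal sketch of the matrix-exponential solution and of the piecewise-homogeneous approximation it is chained into is given below; the 3-state generator and the time-varying Q(t) are toy choices of ours, not the paper's models.

import numpy as np
from scipy.linalg import expm

# Generator (intensity) matrix of a toy 3-state model; rows sum to zero,
# and the third state is absorbing.
Q = np.array([[-0.30,  0.25,  0.05],
              [ 0.10, -0.30,  0.20],
              [ 0.00,  0.00,  0.00]])

# Time-homogeneous case: the forward equations P'(t) = P(t) Q have the
# closed-form solution P(t) = expm(Q t).
P = expm(Q * 2.0)                       # transition probabilities over elapsed time 2

def piecewise_P(Q_of_t, grid):
    """Piecewise time-homogeneous approximation: hold Q fixed on each
    interval of the grid and chain the matrix exponentials in time order."""
    P = np.eye(Q_of_t(grid[0]).shape[0])
    for a, b in zip(grid[:-1], grid[1:]):
        P = P @ expm(Q_of_t(a) * (b - a))
    return P

Q_of_t = lambda t: Q * (1.0 + 0.5 * t)  # hypothetical truly time-varying generator
P_coarse = piecewise_P(Q_of_t, np.linspace(0, 2, 3))    # 2 coarse intervals
P_fine = piecewise_P(Q_of_t, np.linspace(0, 2, 200))    # near the true product integral
print(np.abs(P_coarse - P_fine).max())  # the gap illustrates coarse-grid bias

The discrepancy between coarse and fine grids is exactly the kind of bias the paper illustrates when the piecewise-homogeneous assumption is violated.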