text | summary
---|---|
We give a direct approach to recover some of the results of Wiles and Taylor on modularity of certain 2-dimensional p-adic representations of the absolute Galois group of Q.
|
Modularity of p-adic Galois representations via p-adic approximations
|
Confidential computing is a key technology for isolating high-assurance applications from the large amounts of untrusted code typical in modern systems. Existing confidential computing systems cannot be certified for use in critical applications, like systems controlling critical infrastructure, hardware security modules, or aircraft, as they lack formal verification. This paper presents an approach to formally modeling and proving the security of a security monitor. It introduces a canonical architecture for virtual machine (VM)-based confidential computing systems. It abstracts processor-specific components and identifies a minimal set of hardware primitives required by a trusted security monitor to enforce security guarantees. We demonstrate our methodology and proposed approach with an example from our Rust implementation of the security monitor for RISC-V.
|
Towards a Formally Verified Security Monitor for VM-based Confidential Computing
|
We consider the one-variable fragment of first-order logic extended with Presburger constraints. The logic is designed in such a way that it subsumes the previously-known fragments extended with counting, modulo counting or cardinality comparison and combines their expressive powers. We prove NP-completeness of the logic by presenting an optimal algorithm for solving its finite satisfiability problem.
|
One-Variable Logic Meets Presburger Arithmetic
|
We present a multi--wavelength study of a supergiant shell within the violent interstellar medium of the nearby dwarf galaxy IC 2574, a member of the M81 group of galaxies. Neutral hydrogen (HI) observations obtained with the Very Large Array (VLA) reveal a prominent expanding supergiant HI shell which is thought to be produced by the combined effects of stellar winds and supernova explosions. It measures roughly 1000 x 500 pc in size and is expanding at about 25 km/s. The HI data suggest an age of about 1.4 x 10^6 yrs; the energy input must have been of order (2.6\pm 1) x 10^53 ergs. Massive star forming regions, as traced by H$\alpha$ emission, are situated predominantly on the rim of this HI shell. VLA radio continuum observations at 6 cm show that these star-forming regions are the main sources of radio continuum emission in this galaxy. Soft X-ray emission from within the HI hole is detected by a pointed ROSAT PSPC observation. The emission is resolved, coinciding in size and orientation with the HI shell. These spatial properties suggest that the emission is generated by an X-ray emitting plasma located within the HI shell although a contribution from X-ray binaries cannot be completely ruled out. The X-ray data are compatible with emission coming from a Raymond & Smith plasma at a temperature of about log(T[K]) = 6.8 and a density of 0.03 cm^-3.
|
X-ray Emission from an Expanding Supergiant Shell in IC 2574
|
We describe a spin-resolved electron spectrometer capable of uniquely efficient and high energy resolution measurements. Spin analysis is obtained through polarimetry based on low-energy exchange scattering from a ferromagnetic thin-film target. This approach can achieve a similar analyzing power (Sherman function) as state-of-the-art Mott scattering polarimeters, but with as much as 100 times improved efficiency due to increased reflectivity. Performance is further enhanced by integrating the polarimeter into a time-of-flight (TOF) based energy analysis scheme with a precise and flexible electrostatic lens system. The parallel acquisition of a range of electron kinetic energies afforded by the TOF approach results in an order of magnitude (or more) increase in efficiency compared to hemispherical analyzers. The lens system additionally features a 90{\deg} bandpass filter, which by removing unwanted parts of the photoelectron distribution allows the TOF technique to be performed at low electron drift energy and high energy resolution within a wide range of experimental parameters. The spectrometer is ideally suited for high-resolution spin- and angle-resolved photoemission spectroscopy (spin-ARPES), and initial results are shown. The TOF approach makes the spectrometer especially ideal for time-resolved spin-ARPES experiments.
|
A high-efficiency spin-resolved photoemission spectrometer combining time-of-flight spectroscopy with exchange-scattering polarimetry
|
Stellar-mass binary black holes (BBHs) embedded in active galactic nucleus (AGN) discs offer a distinct dynamical channel to produce black hole mergers detected in gravitational waves by LIGO/Virgo. To understand their orbital evolution through interactions with the disc gas, we perform a suite of 2D high-resolution, local shearing box, viscous hydrodynamical simulations of equal-mass binaries. We find that viscosity not only smooths the flow structure around prograde circular binaries, but also greatly raises their accretion rates. The overwhelming positive torque associated with the accretion dominates over the gravitational torque, and drives binary orbital expansion. However, retrograde binaries still experience rapid orbital decay, and prograde eccentric binaries still experience eccentricity damping, despite undergoing outspiral. Our numerical experiments further show that prograde binaries may experience inspiral if the physical sizes of the accretors are sufficiently small, such that the net binary accretion is reduced. Such a dependence of the binary accretion rate on the accretor size can be weakened through boosted accretion either due to a high viscosity or a more isothermal-like equation of state (EOS). Our results widen the explored parameter space for the hydrodynamics of embedded BBHs and demonstrate that their orbital evolution in AGN discs is a complex, multifaceted problem.
|
Hydrodynamical Evolution of Black-Hole Binaries Embedded in AGN Discs: III. The Effects of Viscosity
|
Major depressive disorder (MDD) is one of the most common mental health conditions that has been intensively investigated for its association with brain atrophy and mortality. Recent studies reveal that the deviation between the predicted and the chronological age can be a marker of accelerated brain aging to characterize MDD. However, current conclusions are usually drawn based on structural MRI information collected from Caucasian participants. The universality of this biomarker needs to be further validated by subjects with different ethnic/racial backgrounds and by different types of data. Here we make use of the REST-meta-MDD, a large scale resting-state fMRI dataset collected from multiple cohort participants in China. We develop a stacking machine learning model based on 1101 healthy controls, which estimates a subject's chronological age from fMRI with promising accuracy. The trained model is then applied to 1276 MDD patients from 24 sites. We observe that MDD patients exhibit a $+4.43$ years ($\text{$p$} < 0.0001$, $\text{Cohen's $d$} = 0.35$, $\text{95\% CI}:1.86 - 3.91$) higher brain-predicted age difference (brain-PAD) compared to controls. In the MDD subgroup, we observe a statistically significant $+2.09$ years ($\text{$p$} < 0.05$, $\text{Cohen's $d$} = 0.134483$) brain-PAD in antidepressant users compared to medication-free patients. The statistical relationship observed is further checked by three different machine learning algorithms. The positive brain-PAD observed in participants in China confirms the presence of accelerated brain aging in MDD patients. The utilization of functional brain connectivity for age estimation verifies existing findings from a new dimension.
|
Accelerated functional brain aging in major depressive disorder: evidence from a large scale fMRI analysis of Chinese participants
|
Wireless capsule endoscopy (WCE) systems are used to capture images of the human digestive tract for medical applications. The antenna is one of the most important components in a WCE system. In this paper, we provide novel small antenna solutions for a WCE system operating at the 433 MHz ISM band. The in-body capsule transmitter uses an ultrawideband outer-wall conformal loop antenna, whereas the on-body receiver uses a printed monopole antenna with a partial ground plane. A colon-equivalent tissue phantom and CST Gustav voxel human body model were used for the numerical studies of the capsule antenna. The simulation results in the colon-tissue phantom were validated through in-vitro measurements using a liquid phantom. According to the phantom simulations, the capsule antenna has -10 dB impedance matching from 309 to 1104 MHz. The ultrawideband characteristic enables the capsule antenna to tolerate the detuning effects due to electronic modules in the capsule and due to the proximity of various different tissues in gastrointestinal tracts. The on-body antenna was numerically evaluated on the colon-tissue phantom and the CST Gustav voxel human body model, followed by in-vitro and ex-vivo measurements for validation. The on-body antenna exceeds -10 dB impedance matching from 390 MHz to 500 MHz both in simulations and measurements. Finally, this paper reports numerical and experimental studies of the path loss for the radio link between an in-body capsule transmitter and an on-body receiver using our antenna solutions. The path loss both in simulations and measurements is less than 50 dB for any capsule orientation and location.
|
Antenna Systems for Wireless Capsule Endoscope: Design, Analysis and Experimental Validation
|
Let $G$ be a bridgeless graph and let $C$ be a circuit of $G$. Fan proposed a conjecture that if $G/C$ admits a nowhere-zero 4-flow, then $G$ admits a 4-flow $(D,f)$ such that $E(G)-E(C)\subseteq$ supp$(f)$ and $|\textrm{supp}(f)\cap E(C)|>\frac{3}{4}|E(C)|$. The purpose of this conjecture is to find shorter circuit covers in bridgeless graphs. Fan showed that the conjecture holds for $|E(C)|\le19.$ Wang, Lu and Zhang showed that the conjecture holds for $|E(C)|\le 27$. In this paper, we prove that the conjecture holds for $|E(C)|\le 35.$
|
On Fan's conjecture about $4$-flow
|
A new fitting methodology is presented which is equally well suited for the estimation of low-, medium-, and high-degree mode parameters from $m$-averaged solar oscillation power spectra of widely differing spectral resolution. This method, which we call the "Windowed, MuLTiple-Peak, averaged spectrum", or WMLTP Method, constructs a theoretical profile by convolving the weighted sum of the profiles of the modes appearing in the fitting box with the power spectrum of the window function of the observing run using weights from a leakage matrix that takes into account both observational and physical effects, such as the distortion of modes by solar latitudinal differential rotation. We demonstrate that the WMLTP Method makes substantial improvements in the inferences of the properties of the solar oscillations in comparison with a previous method that employed a single profile to represent each spectral peak. We also present an inversion for the internal solar structure which is based upon 6,366 modes that we have computed using the WMLTP method on the 66-day long 2010 SOHO/MDI Dynamics Run. To improve both the numerical stability and reliability of the inversion we developed a new procedure for the identification and correction of outliers in a frequency data set. We present evidence for a pronounced departure of the sound speed in the outer half of the solar convection zone and in the subsurface shear layer from the radial sound speed profile contained in Model~S of Christensen-Dalsgaard and his collaborators that existed in the rising phase of Solar Cycle~24 during mid-2010.
|
A method for the estimation of p-mode parameters from averaged solar oscillation power spectra
|
We give a brief review of the experimental status of neutrino mixing. The model of neutrino oscillations has now been established with high confidence, with many of the model parameters measured to an accuracy of a few per cent. However, some parameters still remain unknown, notably the mixing angle $\theta_{13}$ and the amount of CP violation. Recently, new questions have come to light, highlighting possibilities to search for new physics in the neutrino sector.
|
The current status of neutrino mixing
|
We report the use of the Australia Telescope Compact Array (ATCA) to conduct polarimetric observations of the sky at 5 GHz. The ATCA is normally operated as an interferometer array, but these observations were conducted in a split array mode in which the antenna elements were used as single-dishes with their beams staggered to simultaneously cover a wide area of sky with a resolution of 10 arcmin. The linearly polarized sky radiation was fully characterized from measurements, made over a range of parallactic angles, of the cross correlated signals from the orthogonal linear feeds. We describe the technique and present a polarimetric image of the Vela supernova remnant made as a test of the method. The development of the techniques was motivated by the need for wide-field imaging of the foreground contamination of the polarized component of the cosmic microwave background signal.
|
A novel technique for wide-field polarimetry with a radiotelescope array
|
In the framework of a systematic study of proton induced nuclear reactions on lanthanides we have measured the excitation functions on natural cerium for the production of 142,139,138m,137Pr, 141,139,137m,137g,135Ce and 133La up to 65 MeV proton energy using the activation method with stacked-foil irradiation technique and high-resolution gamma-ray spectrometry. The cross-sections of the investigated reactions were compared with the data retrieved from the TENDL-2014 and TENDL-2015 libraries, based on the latest version of the TALYS code system. No earlier experimental data were found in the literature. The measured cross-section data are important for further improvement of nuclear reaction models and for practical applications in nuclear medicine, other labeling and activation studies.
|
Activation cross-section measurement of proton induced reactions on cerium
|
In this study we present a metric of consensus for Likert scales. The measure gives the level of agreement as the percentage of consensus among respondents. The proposed framework allows us to design a positional indicator that gives the degree of agreement for each item independently of the number of reply options. In order to assess the performance of the proposed metric of consensus, in an iterated one-period ahead forecasting experiment we test whether the inclusion of the degree of agreement in expectations regarding the evolution of unemployment improves out-of-sample forecast accuracy in eight European countries. We find evidence that the degree of agreement among consumers contains useful information to predict unemployment rates. These results show the usefulness of consensus-based metrics to track the evolution of economic variables.
|
A new metric of consensus for Likert scales
|
The atmosphere affects humans in a multitude of ways, from loss of life due to adverse weather effects to long-term social and economic impacts on societies. Computer simulations of atmospheric dynamics are, therefore, of great importance for the well-being of our and future generations. Here, we propose AtmoRep, a novel, task-independent stochastic computer model of atmospheric dynamics that can provide skillful results for a wide range of applications. AtmoRep uses large-scale representation learning from artificial intelligence to determine a general description of the highly complex, stochastic dynamics of the atmosphere from the best available estimate of the system's historical trajectory as constrained by observations. This is enabled by a novel self-supervised learning objective and a unique ensemble that samples from the stochastic model with a variability informed by the one in the historical record. The task-independent nature of AtmoRep enables skillful results for a diverse set of applications without specifically training for them and we demonstrate this for nowcasting, temporal interpolation, model correction, and counterfactuals. We also show that AtmoRep can be improved with additional data, for example radar observations, and that it can be extended to tasks such as downscaling. Our work establishes that large-scale neural networks can provide skillful, task-independent models of atmospheric dynamics. With this, they provide a novel means to make the large record of atmospheric observations accessible for applications and for scientific inquiry, complementing existing simulations based on first principles.
|
AtmoRep: A stochastic model of atmosphere dynamics using large scale representation learning
|
We propose a subspace-accelerated Bregman method for the linearly constrained minimization of functions of the form $f(\mathbf{u})+\tau_1 \|\mathbf{u}\|_1 + \tau_2 \|D\,\mathbf{u}\|_1$, where $f$ is a smooth convex function and $D$ represents a linear operator, e.g. a finite difference operator, as in anisotropic Total Variation and fused-lasso regularizations. Problems of this type arise in a wide variety of applications, including portfolio optimization and learning of predictive models from functional Magnetic Resonance Imaging (fMRI) data, and source detection problems in electroencephalography. The use of $\|D\,\mathbf{u}\|_1$ is aimed at encouraging structured sparsity in the solution. The subspaces where the acceleration is performed are selected so that the restriction of the objective function is a smooth function in a neighborhood of the current iterate. Numerical experiments on multi-period portfolio selection problems using real datasets show the effectiveness of the proposed method.
|
A subspace-accelerated split Bregman method for sparse data recovery with joint l1-type regularizers
|
Suppose A and B are unital C*-algebras and A is separable. Let Rep(A,B) denote the set of all unital *-homomorphisms from A to B with the topology of pointwise convergence. We consider the problem of when the closure of the unitary orbit of a single representation in Rep(A,B) is path-connected. An affirmative answer was given by the first author when A is singly generated and B is the algebra of all operators on a separable Hilbert space. We extend this result for all separable A. We also give an affirmative answer when A is AF or homogeneous and B is a von Neumann algebra or when A is ASH and B is a finite von Neumann algebra.
|
Path-connected Closures of Unitary Orbits
|
The relationship between star formation and super-massive black hole growth is central to our understanding of galaxy formation and evolution. Hyper-Luminous Infrared Galaxies (HLIRGs) are unique laboratories to investigate the connection between starburst (SB) and Active Galactic Nuclei (AGN), since they exhibit extreme star formation rates, and most of them show evidence of harbouring powerful AGN. Our previous X-ray study of a sample of 14 HLIRGs shows that the X-ray emission of most HLIRGs is dominated by AGN activity. To improve our estimate of the relative contribution of the AGN and SB emission to the total bolometric output, we have built broad band spectral energy distributions (SEDs) for these HLIRGs, and we have fitted empirical AGN and SB templates to these SEDs. In broad terms, most sources are well fitted using this method, and we found AGN and SB contributions similar to those obtained by previous studies of HLIRGs. We have classified the HLIRG SEDs in two groups, named class A and class B. Class A HLIRGs show a flat SED from the optical to the infrared energy range. Three out of seven class A sources can be modelled with a pure luminosity-dependent QSO template, while the rest of them require a type 1 AGN template and a SB template. The SB component is dominant in three out of four class A objects. Class B HLIRGs show SEDs with a prominent and broad IR bump. These sources cannot trivially be modelled with a combination of pure AGN and pure SB; they require templates of composite objects, suggesting that >50% of their emission comes from stellar formation processes. We propose that our sample is actually composed of three different populations: very luminous QSO, young galaxies going through their maximal star formation period, and the high luminosity tail of the ULIRG population distribution.
|
Spectral Energy Distribution of Hyper-Luminous Infrared Galaxies
|
In this paper, first we study a complete smooth metric measure space $(M^n,g, e^{-f}dv)$ with the ($\infty$)-Bakry-\'Emery Ricci curvature $\textrm{Ric}_f\ge \frac a2g$ for some positive constant $a$. It is known that the spectrum of the drifted Laplacian $\Delta_f$ for $M$ is discrete and the first nonzero eigenvalue of $\Delta_f$ has lower bound $\frac a2$. We prove that if the lower bound $\frac a2$ is achieved with multiplicity $k\geq 1$, then $k\leq n$, $M$ is isometric to $\Sigma^{n-k}\times \mathbb{R}^k$ for some complete $(n-k)$-dimensional manifold $\Sigma$, and, up to an isometry, $(M^n,g, e^{-f}dv)$ must split off a gradient shrinking Ricci soliton $(\mathbb{R}^k, g_{can}, \frac{a}{4}|t|^2)$, $t\in \mathbb{R}^k$. This result has an application to gradient shrinking Ricci solitons. Secondly, we study the drifted Laplacian $\mathcal{L}$ for properly immersed self-shrinkers in the Euclidean space $\mathbb{R}^{n+p}$, $p\geq1$, and show the discreteness of the spectrum of $\mathcal{L}$ and a logarithmic Sobolev inequality.
|
Eigenvalues of the drifted Laplacian on complete metric measure spaces
|
The probability of emission of photons in the non-dipole case by a channeled particle is calculated. The emission of hard photons with an energy comparable to the energy of the incident channeled particle is examined. The calculation of the quasi-Bloch energy spectrum of the oriented fast charged particle entering the crystal at an angle substantially greater than the Lindhard angle is performed. The initial and final spectra of the channeled particle belonging to a different set of band wave functions corresponding to different energies are used. The processes of photon generation by the quantum crystal-oriented particle entering the crystal at an angle both greater and smaller than the Lindhard angle are considered on an equal footing. It is shown that the spectrum of hard photon emission consists of a set of well-observed emission lines. The probability of nondipole photon radiation by the channeled particle into a small solid angle oriented along the propagation direction of the channeling particle is calculated. It has been shown that the nondipole processes of the radiation of hard photons by the channeled particles are well observed experimentally.
|
The manifestation of the band structure in the photon emission spectrum of the fast above-barrier oriented particle
|
The smart grid is a self-sufficient system that tracks how energy is used from its source to its final destination. The smart grid can increase service quality while reducing the consumption of electricity. However, the safety and confidentiality of information data are the major challenges in the smart grid environment. To overcome this, numerous authentication procedures have been documented. A mutual authentication scheme for the smart grid based on elliptic curve cryptography and biometrics was introduced by A. A. Khan et al. This protocol is secure against various attacks, but we found that it lacks a password and biometric updating phase. We therefore provide a password and biometric updating phase for this protocol.
|
Modification in Elliptic Curve Cryptography based Mutual authentication scheme for smart grid communication using biometric approach
|
Due to the current severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, there is an urgent need for novel therapies and drugs. We conducted a large-scale virtual screening for small molecules that are potential CoV-2 inhibitors. To this end, we utilized "ChemAI", a deep neural network trained on more than 220M data points across 3.6M molecules from three public drug-discovery databases. With ChemAI, we screened and ranked one billion molecules from the ZINC database for favourable effects against CoV-2. We then reduced the result to the 30,000 top-ranked compounds, which are readily accessible and purchasable via the ZINC database. Additionally, we screened the DrugBank using ChemAI to allow for drug repurposing, which would be a fast way towards a therapy. We provide these top-ranked compounds of ZINC and DrugBank as a library for further screening with bioassays at https://github.com/ml-jku/sars-cov-inhibitors-chemai.
|
Large-scale ligand-based virtual screening for SARS-CoV-2 inhibitors using deep neural networks
|
Acausal features of quantum electrodynamic processes are discussed. While these processes are not present for the classical electrodynamic theory, in the quantum electrodynamic theory, acausal processes are well known to exist. For example, any Feynman diagram with a ``loop'' in space-time describes a ``particle'' which may move forward in time or backward in time or in space-like directions. The engineering problems involved in experimentally testing such causality violations on a macroscopic scale are explored.
|
Acausal Behavior in Quantum Electrodynamics
|
We use observed rotation velocity-luminosity (VL) and size-luminosity (RL) relations to single out a specific scenario for disk galaxy formation in the LCDM cosmology. Our model involves four independent log-normal random variables: dark-halo concentration c, disk spin lam_gal, disk mass fraction m_gal, and stellar mass-to-light ratio M/L_I. A simultaneous match of the VL and RL zero points with adiabatic contraction requires low-c halos, but this model has V_2.2~1.8 V_vir (where V_2.2 and V_vir are the circular velocity at 2.2 disk scale lengths and the virial radius, respectively) which will be unable to match the luminosity function (LF). Similarly models without adiabatic contraction but standard c also predict high values of V_2.2/V_vir. Models in which disk formation induces an expansion rather than the commonly assumed contraction of the dark-matter halos have V_2.2~1.2 V_vir which allows a simultaneous fit of the LF. This may result from non-spherical, clumpy gas accretion, where dynamical friction transfers energy from the gas to the dark matter. This model requires low lam_gal and m_gal values, contrary to naive expectations. However, the low lam_gal is consistent with the notion that disk galaxies predominantly survive in halos with a quiet merger history, while a low m_gal is also indicated by galaxy-galaxy lensing. The smaller than expected scatter in the RL relation, and the lack of correlation between the residuals of the VL and RL relations, respectively, imply that the scatter in lam_gal and in c need to be smaller than predicted for LCDM halos, again consistent with the idea that disk galaxies preferentially reside in halos with a quiet merger history.
|
A Revised Model for the Formation of Disk Galaxies: Low Spin and Dark-Halo Expansion
|
Star formation is inefficient. Only a few percent of the available gas in molecular clouds forms stars, leading to the observed low star formation rate (SFR). The same holds when averaged over many molecular clouds, such that the SFR of whole galaxies is again surprisingly low. Indeed, considering the low temperatures, molecular clouds should be highly gravitationally unstable and collapse on their global mean freefall timescale. And yet, they are observed to live about 10-100 times longer, i.e., the SFR per freefall time (SFR_ff) is only a few percent. Thus, other physical mechanisms must counteract the quick global collapse. Turbulence, magnetic fields and stellar feedback have been proposed as regulating agents, but it is still unclear which of these processes is the most important and what their relative contributions are. Here we run high-resolution simulations including gravity, turbulence, magnetic fields, and jet/outflow feedback. We confirm that clouds collapse on a mean freefall time, if only gravity is considered, producing stars at an unrealistic rate. In contrast, if turbulence, magnetic fields, and feedback are included step-by-step, the SFR is reduced by a factor of 2-3 with each additional physical ingredient. When they all act in concert, we find a constant SFR_ff = 0.04, currently the closest match to observations, but still about a factor of 2-4 higher than the average. A detailed comparison with other simulations and with observations leads us to conclude that only models with turbulence producing large virial parameters, and including magnetic fields and feedback can produce realistic SFRs.
|
Inefficient star formation through turbulence, magnetic fields and feedback
|
We introduce a price impact model which accounts for finite market depth, tightness and resilience. Its coupled bid- and ask-price dynamics induce convex liquidity costs. We provide existence of an optimal solution to the classical problem of maximizing expected utility from terminal liquidation wealth at a finite planning horizon. In the specific case when market uncertainty is generated by an arithmetic Brownian motion with drift and the investor exhibits constant absolute risk aversion, we show that the resulting singular optimal stochastic control problem readily reduces to a deterministic optimal tracking problem of the optimal frictionless constant Merton portfolio in the presence of convex costs. Rather than studying the associated Hamilton-Jacobi-Bellman PDE, we exploit convex analytic and calculus of variations techniques allowing us to construct the solution explicitly and to describe the free boundaries of the action- and non-action regions in the underlying state space. As expected, it is optimal to trade towards the frictionless Merton position, taking into account the initial bid-ask spread as well as the optimal liquidation of the accrued position when approaching terminal time. It turns out that this leads to a surprisingly rich phenomenology of possible trajectories for the optimal share holdings.
|
Optimal investment with transient price impact
|
In a narrow composition range centered at Zn74.5Au10.5Yb15.0, a Tsai-type icosahedral quasicrystal is formed in alloys quenched from 880 C. This quasicrystal belongs to the primitive type with the 6-dimensional lattice parameter a6D=7.378 A. The quasicrystal was not formed in the slowly cooled specimen, and is considered a metastable phase. The stable phase is a 2/1 approximant with the lattice parameter a2/1=23.271 A. This approximant forms exclusively in the Zn76.0Au9.0Yb15.0 alloy annealed at 530 C. In addition, the Zn70.5Au15.5Tb14.0 alloy annealed at 505 C forms a Tsai-type 1/1 approximant (a1/1=14.343 A). These new Zn-based phases observed in this study correspond to the quasicrystal-related phases in binary Cd-lanthanoid systems, and show the possibility of isostructural substitution of Cd by Zn/Au.
|
Icosahedral quasicrystal, 1/1 and 2/1 approximants in Zn-based ternary alloys containing Au and Yb/Tb
|
Decayless kink oscillations of plasma loops in the solar corona may contain an answer to the enigmatic problem of solar and stellar coronal heating. The polarisation of the oscillations gives us unique information about their excitation mechanisms and energy supply. However, unambiguous determination of the polarisation has remained elusive. Here, we show simultaneous detection of a 4-min decayless kink oscillation from two non-parallel lines-of-sight, separated by about 104\textdegree, provided by a unique combination of the High Resolution Imager on Solar Orbiter and the Atmospheric Imaging Assembly on the Solar Dynamics Observatory. The observations reveal a horizontal or weakly oblique linear polarisation of the oscillation. This conclusion is based on the comparison of observational results with forward modelling of the observational manifestation of various kinds of polarisation of kink oscillations. The revealed polarisation favours the sustainability of these oscillations by quasi-steady flows which may hence supply the energy for coronal heating.
|
Polarisation of decayless kink oscillations of solar coronal loops
|
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions, which is essential for understanding physical, social, and team-play systems. However, most existing works on modeling multi-agent interactions typically assume that agents make independent decisions based on their observations, ignoring the complex dependence among agents. In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems. Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents. Extensive experiments on synthetic and real-world datasets show that our model outperforms state-of-the-art baselines across various scenarios in the action prediction task, and is able to generate new trajectories close to expert demonstrations.
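The separation described above — marginals for individual behaviour, a copula for the dependence — can be illustrated with a minimal, hypothetical sketch using a Gaussian copula. The marginal distributions and the correlation strength below are assumptions chosen for illustration, not the paper's model:

```python
import numpy as np
from scipy.stats import norm, expon, gamma

rng = np.random.default_rng(0)
rho = 0.8                                  # assumed dependence strength
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1) Draw from the copula: correlated Gaussians mapped to uniforms on [0,1]^2.
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)
u = norm.cdf(z)                            # copula sample with uniform marginals

# 2) Push through each agent's (assumed) marginal via the inverse CDF.
a1 = expon(scale=1.0).ppf(u[:, 0])         # agent 1's local action distribution
a2 = gamma(a=2.0).ppf(u[:, 1])             # agent 2's local action distribution

# The joint sample (a1, a2) keeps the chosen marginals while the dependence
# structure comes solely from the copula.
print(np.corrcoef(a1, a2)[0, 1])           # strong positive correlation survives
```

Swapping either marginal leaves the dependence structure untouched, which is exactly the modularity the copula factorization provides.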
|
Multi-Agent Imitation Learning with Copulas
|
We present results of the comparative analysis of the two wide binary systems -- 16 Cyg, with a giant gas planet orbiting around 16 Cyg B, and HD 219542 without a detected planet. Atmospheric parameters of the binary components and the Sun were determined using their high-resolution spectra and the SME tools for automatic spectral analysis. By applying the synthetic spectrum method, we derived abundances of 29 and 23 chemical elements in 16 Cyg and HD 219542, respectively. For 19 of these elements, our results are based on the non-local thermodynamic equilibrium (NLTE) line formation. For both 16 Cyg and HD 219542, we obtained a small abundance difference between the A and B components: +0.019$\pm$0.012 and -0.014$\pm$0.019, respectively, suggesting only a weak influence of the giant gas planet formation on the chemical composition of the host star atmosphere. For HD 219542 A and B, trends of the relative-to-solar abundances with the dust condensation temperature are similar to the literature data for the solar analogues without detected planets. The components of 16 Cyg reveal very similar behaviour of [X/H] with the condensation temperature; however, it differs from that for HD 219542. This indicates a specific chemical composition of the cloud from which the 16 Cyg binary system formed.
|
Detailed abundances of the wide pairs of stars with and without planets: the binary systems 16 Cyg and HD 219542
|
ACO2163 is one of the hottest (mean $kT=12-15.5$ keV) and most X-ray overluminous merging galaxy clusters, located at $z=0.203$. The cluster hosts one of the largest giant radio halos, of the kind observed in most merging clusters, and a candidate radio relic. Recently, three merger shock fronts were detected in this cluster, explaining its extreme temperature and complex structure. Furthermore, previous XMM-Newton and Chandra observations hinted at the presence of a shock front associated with the gas 'bullet' crossing the main cluster in the westward direction, which heated the intra-cluster medium, leading to adiabatic compression of the gas behind the 'bullet'. The goal of this paper is to report on the detection of this shock front as revealed by the temperature discontinuity in the X-ray XMM-Newton image, and the edge in the Very Large Array (VLA) radio image. We also report on the detection of a relic source in the north-eastern region of the radio halo in the KAT-7 data, confirming the presence of an extended relic in this cluster. The brightness edge in the X-rays corresponds to a shock front with a Mach number $M= 2.2\pm0.3$, at a distance of 0.2 Mpc from the cluster centre. An estimate from the luminosity jump gives $M=1.9\pm0.4$. We consider a simple explanation for the electrons at the shock front, and for the observed discrepancy between the average spectral index of the radio halo emission and that predicted by the $M=2.2$ shock which precedes the 'bullet'.
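The step from a measured density jump to a Mach number follows the standard Rankine-Hugoniot relation for a monatomic gas (gamma = 5/3). A minimal sketch of that inversion; the compression ratio below is an illustrative value, not the measured A2163 jump:

```python
import math

def mach_from_compression(r, gamma=5.0 / 3.0):
    """Invert the Rankine-Hugoniot compression ratio
    r = (gamma + 1) M^2 / ((gamma - 1) M^2 + 2) for the Mach number M."""
    return math.sqrt(2.0 * r / ((gamma + 1.0) - r * (gamma - 1.0)))

r = 2.47                      # assumed density compression across the front
M = mach_from_compression(r)
print(f"M = {M:.2f}")         # ~2.2 for this compression ratio
```

Note that r = 4 is the maximum compression for gamma = 5/3, at which point the inversion diverges (a strong shock).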
|
Discovery of a shock front in the merging cluster of galaxies A2163
|
We propose a state from the two-dimensional conformal field theory on the orbifold $(T^4)^N/S(N)$ as a dual description for a pulsating string moving in $AdS_3$. We show that, up to first order in the deforming parameter, the energy in both descriptions has the same dependence on the mode number, but with a non-trivial function of the coupling.
|
Pulsating strings from two dimensional CFT on $(T^4)^N/S(N)$
|
We describe a minimal separating set for the algebra of $O(F_q)$-invariant polynomial functions of $m$-tuples of two-dimensional vectors over a finite field $F_q$.
|
Separating invariants for two-dimensional orthogonal groups over finite fields
|
The powder-in-tube method has been used to fabricate Ag- and Cu-clad MgB2 wires using an in-situ reaction method. The effects of short-time sintering on the critical current densities of Ag- and Cu-clad MgB2 wires were studied. All the samples were examined using XRD, SEM, and magnetization measurements. For the Ag-clad wire, Jc is improved by more than a factor of two after the short-time sintering process. Jc values of 1.2x10^5 A/cm2 in zero field and above 10^4 A/cm2 in 2 T at 20 K have been achieved for Ag-clad MgB2 wire sintered for only 6 minutes at 800 C. However, a remarkable degree of reaction has been found between the superconducting cores and the sheath materials, leading to the formation of Cu2Mg and Ag3Mg for the copper- and silver-clad wires, respectively. The results for Tc, Jc and Hirr convincingly show that the short sintering causes less reaction between the magnesium and the sheath materials and markedly improves the critical current density. Our results show that iron is still the best sheath material because of the lack of reaction between Fe and the superconducting MgB2 material.
|
Improvement of critical current density in the Cu/MgB2 and Ag/MgB2 superconducting wires using the fast formation method
|
Given a Hodge manifold, we introduce a self-adjoint operator on the space of endomorphisms of the global holomorphic sections of the polarization line bundle. This operator is shown to approximate the Laplace operator on functions when composed with the Berezin-Toeplitz quantization map and its adjoint, up to an error which tends to zero when taking higher powers of the polarization line bundle.
|
A note on Berezin-Toeplitz quantization of the Laplace operator
|
We present some plausible definitions for the tangent groupoid of a manifold M, as well as some of the known applications of the structure. This is intended as an introductory note.
|
Introduction to the Tangent Groupoid
|
We present measurements of the branching fractions for the charmless two-body decays $\Bz\to\pip\pim$ and $\Bz\to\Kp\pim$, and a search for the decay $\Bz\to\Kp\Km$. We include the effects of final-state radiation from the daughter mesons for the first time, and quote branching fractions for the inclusive processes $\Bz\to h^+ h^{\prime -} n\gamma$, where $h$ and $h^\prime$ are pions or kaons. The maximum value of the sum of the energies of the $n$ undetected photons, $E_\gamma^{\rm max}$, is mode-dependent. Using a data sample of approximately 227 million \upsbb decays collected with the \babar\ detector at the \pep2 asymmetric-energy \epem collider at SLAC, we measure: \begin{eqnarray*} {\cal B} (\Bz \to \pip\pim n\gamma;\: E_{\gamma}^{\rm max}=150\mev) & = & (5.1\pm 0.4\pm 0.2)\times 10^{-6}, \\ {\cal B} (\Bz \to \Kp\pim n\gamma;\: E_{\gamma}^{\rm max}=105\mev) & = & (18.1\pm 0.6\pm 0.6)\times 10^{-6}, \\ {\cal B} (\Bz \to \Kp\Km n\gamma;\: E_{\gamma}^{\rm max}=59\mev) & < & 0.5 \times 10^{-6}\ (90\%\ {\rm confidence\ level}), \end{eqnarray*} where the first uncertainty is statistical and the second is systematic. Theoretical calculations can be used to extrapolate from the above measurements the non-radiative branching fractions, ${\cal B}^0$. Using one such calculation, we find: \begin{eqnarray*} {\cal B}^0 (\Bz \to \pip\pim) & = & (5.5\pm 0.4\pm 0.3)\times 10^{-6}, \\ {\cal B}^0 (\Bz \to \Kp\pim) & = & (19.1\pm 0.6\pm 0.6)\times 10^{-6}, \\ {\cal B}^0 (\Bz \to \Kp\Km) & < & 0.5 \times 10^{-6}\ (90\%\ {\rm confidence\ level}). \end{eqnarray*} Meaningful comparison between theory and experiment, as well as combination of measurements from different experiments, can be performed only in terms of these non-radiative quantities.
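Each quoted result carries a statistical and a systematic uncertainty; when the two sources are treated as independent, a single total uncertainty is conventionally formed by adding them in quadrature. A minimal sketch of that convention (not part of the analysis itself), applied to the first quoted branching fraction:

```python
import math

def total_unc(stat, syst):
    # Quadrature sum of independent statistical and systematic uncertainties.
    return math.sqrt(stat**2 + syst**2)

# B(B0 -> pi+ pi- n gamma) = (5.1 +/- 0.4 +/- 0.2) x 10^-6 from the text:
print(total_unc(0.4, 0.2))    # total uncertainty in units of 10^-6
```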
|
Improved Measurements of the Branching Fractions for B0 --> pi+pi- and B0 --> K+pi-, and a Search for B0 --> K+K-
|
We have detected narrow HI 21cm and CI absorption at $z \sim 1.4 - 1.6$ towards Q0458$-$020 and Q2337$-$011, and use these lines to test for possible changes in the fine structure constant $\alpha$, the proton-electron mass ratio $\mu$, and the proton gyromagnetic ratio $g_p$. A comparison between the HI 21cm and CI line redshifts yields $\Delta X/X = [+6.8 \pm 1.0] \times 10^{-6}$ over $0 < \langle z \rangle \le 1.46$, where $X = g_p \alpha^2/\mu$, and the errors are purely statistical, from the Gaussian fits. The simple line profiles and the high sensitivity of the spectra imply that statistical errors in this comparison are an order of magnitude lower than in previous studies. Further, the CI lines arise in cold neutral gas that also gives rise to HI 21cm absorption, and both background quasars are core-dominated, reducing the likelihood of systematic errors due to local velocity offsets between the hyperfine and resonance lines. The dominant source of systematic error lies in the absolute wavelength calibration of the optical spectra, which appears uncertain to $\sim 2$ km/s, yielding a maximum error in $\Delta X/X$ of $\sim 6.7 \times 10^{-6}$. Including this, we obtain $\Delta X/X = [+6.8 \pm 1.0 (statistical) \pm 6.7 (max. systematic)] \times 10^{-6}$ over $0 < \langle z \rangle \le 1.46$. Using literature constraints on $\Delta \mu/\mu$, this is inconsistent with claims of a smaller value of $\alpha$ from the many-multiplet method, unless fractional changes in $g_p$ are larger than those in $\alpha$ and $\mu$.
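The comparison described above reduces to a fractional offset between the hyperfine (21cm) and resonance (CI) line redshifts. A minimal sketch, under the sign convention assumed here (conventions differ between papers) and with illustrative redshift values rather than the measured ones:

```python
def delta_x_over_x(z_21cm, z_ci):
    # Fractional velocity offset between the two redshifts; under the sign
    # convention assumed here this equals Delta X / X with X = g_p alpha^2 / mu.
    return (z_ci - z_21cm) / (1.0 + z_21cm)

# Illustrative redshift pair (not the measured values for either quasar):
print(delta_x_over_x(1.560000, 1.560017))   # a few parts in 10^6
```

A useful rule of thumb: a redshift offset of $\Delta z$ at $z \sim 1.5$ corresponds to $\Delta X/X \approx \Delta z / 2.5$, so parts-per-million constraints require sub-km/s wavelength calibration, consistent with the systematic budget quoted above.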
|
Probing fundamental constant evolution with neutral atomic gas lines
|
We investigate the sharp asymptotic behavior at criticality of the large fluctuations of extensive observables in renewal models of statistical mechanics, such as the Poland-Scheraga model of DNA denaturation, the Fisher-Felderhof model of fluids, the Wako-Sait\^o-Mu\~noz-Eaton model of protein folding, and the Tokar-Dreyss\'e model of strained epitaxy. These models amount to Gibbs changes of measure of a classical renewal process and can be identified with a constrained pinning model of polymers. The extensive observables that enter the thermodynamic description turn out to be cumulative rewards corresponding to deterministic rewards that are uniquely determined by the waiting time and grow no faster than it. The probability decay with the system size of their fluctuations switches from exponential to subexponential at criticality, which is a regime corresponding to a discontinuous pinning-depinning phase transition. We describe such decay by proposing a precise large deviation principle under the assumption that the subexponential correction term to the waiting time distribution is regularly varying. This principle is in particular used to characterize the fluctuations of the number of renewals, which measures the DNA-bound monomers in the Poland-Scheraga model, the particles in the Fisher-Felderhof model and the Tokar-Dreyss\'e model, and the native peptide bonds in the Wako-Sait\^o-Mu\~noz-Eaton model.
|
Critical Fluctuations in Renewal Models of Statistical Mechanics
|
For a simple-cubic optical lattice with lattice spacing d, occupied by two species of fermionic atoms of mass m that interact repulsively, we ask what conditions maximize the Neel temperature in the Mott insulating phase at density one atom per site, with equal numbers of the two species. This maximum occurs near the edge of the regime where the system is well-approximated by the usual Hubbard model. The correction to the Hubbard-model approximation that produces a "direct" ferromagnetic interaction between atoms in nearest-neighbor Wannier orbitals is the leading term that limits how high the Neel temperature can be made.
|
Maximizing the Neel temperature of fermions in a simple-cubic optical lattice
|
Competition indices are models frequently used in ecology to account for the impact of density and resource distribution on the growth of a plant population. They make it possible to define simple individual-based models, by integrating information that is relatively easy to collect at the population scale, which are then generalized to a macroscopic scale by mean-field limit arguments. Nevertheless, to our knowledge, few works have studied under which conditions on the competition index or on the initial configuration of the population the passage from the individual scale to the population scale is mathematically guaranteed. We consider in this paper a competition index commonly used in the literature, expressed as an average over the population of a pairwise potential depending on a measure of plants' sizes and their respective distances. In line with the literature on mixed-effect models, the population is assumed to be heterogeneous, with inter-individual variability of growth parameters. Sufficient conditions on the initial configuration are given so that the population dynamics, taking the form of a system of non-linear differential equations, is well defined. The mean-field distribution associated with an infinitely crowded population is then characterized by the characteristic flow, and the convergence towards this distribution for an increasing population size is also proved. The dynamics of the heterogeneous population is illustrated by numerical simulations, using a Lagrangian scheme to visualize the mean-field dynamics.
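A minimal sketch of the kind of individual-based system described above: each plant grows logistically with a heterogeneous rate, damped by a pairwise, distance-weighted competition index averaged over the population. The kernel, parameters, and population layout are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
pos = rng.uniform(0.0, 10.0, size=(n, 2))   # plant locations in a 10x10 plot
r = rng.uniform(0.8, 1.2, size=n)           # heterogeneous growth rates
x = np.full(n, 0.1)                         # initial (rescaled) sizes
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)

def competition(x):
    # Population average of a pairwise potential phi(size, distance):
    # here an exponentially decaying kernel times the neighbour's size.
    w = np.exp(-dist) * (1.0 - np.eye(n))   # exclude self-competition
    return (w * x[None, :]).sum(axis=1) / n

# Explicit Euler integration of dx_i/dt = r_i x_i (1 - x_i) - 0.5 x_i C_i(x).
dt = 0.01
for _ in range(2000):
    x = x + dt * (r * x * (1.0 - x) - 0.5 * x * competition(x))

print(x.mean())   # settles below the competition-free carrying capacity 1
```

Increasing n while holding the spatial density fixed is the regime where the mean-field limit arguments mentioned above apply.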
|
Analysis of the dynamics induced by a competition index in a heterogeneous population of plants: from an individual-based model to a macroscopic model
|
Absorption lines in the Lyman-alpha forest can be thought of as peaks in neutral hydrogen density along lines of sight. The column density distribution (the number density of absorption lines as a function of column density) is then a statistic of density peaks, which contains information about the underlying power spectrum. In particular, we show that the slope of the distribution provides a measure of power on scales smaller than those probed by studies of present-day large scale structure.
|
The Column Density Distribution of the Lyman-Alpha Forest: A Measure of Small Scale Power
|
In the framework of the kt-factorization approach, the production of prompt $\chi_c$ mesons in pp collisions at the LHC energies is studied. Our consideration is based on the off-shell amplitudes for the hard partonic subprocesses $g^*g^*\to\chi_{cJ}$ and the non-relativistic QCD formalism for bound states. The transverse momentum dependent (unintegrated) gluon densities in a proton were derived from the Ciafaloni-Catani-Fiorani-Marchesini evolution equation or, alternatively, were chosen in accordance with the Kimber-Martin-Ryskin prescription. Taking into account both color singlet and color octet contributions, we deduce the corresponding non-perturbative long-distance matrix elements from fits to the latest ATLAS data on $\chi_{c1}$ and $\chi_{c2}$ transverse momentum distributions at $\sqrt s = 7$ TeV. We find that these distributions at small and moderate pt are formed mainly by the color singlet components. We successfully describe the data on the differential cross sections and relative production rates $\sigma(\chi_{c2})/\sigma(\chi_{c1})$ presented by the ATLAS, CMS and LHCb Collaborations. We find that the fit points to unequal wave functions of the $\chi_{c1}$ and $\chi_{c2}$ states.
|
Prompt charmonia production and polarization at LHC in the NRQCD with kt-factorization. Part II: $\chi_c$ mesons
|
We report exact expressions for atomic forces in the diffusion Monte Carlo (DMC) method when using nonlocal pseudopotentials. We present approximate schemes for estimating these expressions in both mixed and pure DMC calculations, including the pseudopotential Pulay term which has not previously been calculated and the Pulay nodal term which has not been calculated for real systems in pure DMC simulations. Harmonic vibrational frequencies and equilibrium bond lengths are derived from the DMC forces and compared with those obtained from DMC potential energy curves. Results for four small molecules show that the equilibrium bond lengths obtained from our best force and energy calculations differ by less than 0.002 Angstrom.
|
Total forces in the diffusion Monte Carlo method with nonlocal pseudopotentials
|
We confirm the Halperin-Carlsson Conjecture for free $Z_p$-torus actions (p is a prime) on 2-dimensional finite CW-complexes and free $Z_2$-torus actions on compact 3-manifolds.
|
On free $Z_p$-torus actions in dimension two and three
|
The tri-layered perovskite Sr4Rh3O10 is reported for the first time. High-pressure and high-temperature heating (6 GPa and 1500 C) brought about the successful preparation of a polycrystalline sample of the expected n=3 member of Srn+1RhnO3n+1. Neutron-diffraction studies revealed the orthorhombic crystal structure (Pbam) at room temperature and 3.4 K. Local structural distortions rotationally tilt the RhO6 octahedra by ~12 degrees in the perovskite-based blocks along the c-axis, and approximately 20% disorder was found in the sequence of the alternating rotational tilt. The sample was also investigated by measurements of specific heat, thermopower, magnetic susceptibility, and electrical resistivity. The data clearly revealed enhanced paramagnetism and an electrically conducting character, reflecting the nature of the correlated 4d5 electrons of Rh4+. However, no clear signs of magnetic or electrical transitions were observed above 2 K and below 70 kOe, providing a remarkable contrast to the rich electronic phenomena of the closely related ruthenate, Sr4Ru3O10.
|
Crystal Structure and Magnetic Properties of the Tri-layered Perovskite Sr4Rh3O10: A New Member of the Strontium Rhodates Family
|
Deep learning based channel state information (CSI) feedback in frequency division duplex systems has drawn much attention in both academia and industry. In this paper, we focus on integrating the Type-II codebook in the beyond fifth-generation (B5G) wireless systems with deep learning to enhance the performance of CSI feedback. In contrast to its counterpart in Release 16, the Type-II codebook in Release 17 (R17) exploits the angular-delay-domain partial reciprocity between uplink and downlink channels and selects part of angular-delay-domain ports for measuring and feeding back the downlink CSI, where the performance of the conventional deep learning methods is limited due to the deficiency of sparse structures. To address this issue, we propose the new paradigm of adopting deep learning to improve the performance of R17 Type-II codebook. Firstly, considering the relatively low signal-to-noise ratio of uplink channels, deep learning is utilized to refine the selection of the dominant angular-delay-domain ports, where the focal loss is harnessed to solve the class imbalance problem. Secondly, we propose to reconstruct the downlink CSI by way of deep learning based on the feedback of R17 Type-II codebook at the base station, where the information of sparse structures can be effectively leveraged. Finally, a weighted shortcut module is designed to facilitate the accurate reconstruction, and a two-stage loss function with the combination of the mean squared error and sum rate is proposed for adapting to actual multi-user scenarios. Simulation results demonstrate that our proposed angular-delay-domain port selection and CSI reconstruction paradigm can improve the sum rate performance by more than 10% compared with the traditional R17 Type-II codebook and deep learning benchmarks.
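The focal loss mentioned above addresses the class imbalance in port selection by down-weighting easy, well-classified examples so that the rare dominant angular-delay ports drive training. A minimal NumPy sketch of the binary form; the hyperparameters shown are common defaults, not necessarily the paper's settings:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-12):
    """Binary focal loss FL = -alpha_t (1 - p_t)^gamma log(p_t).

    p: predicted probability of the positive class (a dominant port);
    y: ground-truth labels in {0, 1}."""
    p_t = np.where(y == 1, p, 1.0 - p)           # prob. assigned to true class
    a_t = np.where(y == 1, alpha, 1.0 - alpha)   # class-balancing weight
    return -a_t * (1.0 - p_t) ** gamma * np.log(p_t + eps)

# An easy, confident example contributes far less than a hard one:
easy = focal_loss(np.array([0.95]), np.array([1]))[0]
hard = focal_loss(np.array([0.30]), np.array([1]))[0]
print(easy, hard)   # the hard example dominates the total loss
```

Setting gamma = 0 and alpha = 0.5 recovers (half of) the ordinary cross-entropy, which makes the down-weighting effect of gamma > 0 easy to verify.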
|
Deep Learning Empowered Type-II Codebook: New Paradigm for Enhancing CSI Feedback
|
Microfluidic mixing is a fundamental functionality in most lab-on-a-chip (LOC) systems, whereas realization of efficient mixing is challenging in microfluidic channels due to the small Reynolds numbers. Here, we design and fabricate a compact three-dimensional (3D) micromixer to enable efficient mixing at various flow rates. The performance of the fabricated micromixer was examined using blue and red inks. The extreme flexibility of femtosecond laser micromachining in fabricating microfluidic structures of arbitrary 3D geometries allows us to tackle the major adverse effects and thereby optimize the mixing efficiency.
|
A compact and efficient three-dimensional microfluidic mixer
|
Using the 'cut and paste' technique, we develop a thin-shell wormhole in heterotic string theory. We determine the surface stresses, which are localized in the shell, using the Darmois-Israel formalism. The linearized stability of this thin-shell wormhole is also analyzed.
|
Thin Shell Wormhole in Heterotic String Theory
|
In recent papers we have proposed that the dark matter of the Universe could be of scalar field origin. In this letter, we find that if the scale of renormalization of the model is of the order of the Planck mass, then a scalar field $\Phi$ endowed with the scalar potential $V=V_{0}[\cosh(\lambda \sqrt{\kappa_{0}}\Phi)-1]$ can be a reliable model for dark matter in galaxies. The predicted scattering cross section fits the value required for self-interacting dark matter, and the additional degree of freedom of the theory is of the order of hundreds of TeV.
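As a small numerical check of the potential's shape, the sketch below evaluates the cosh potential and its small-field quadratic (free massive field) limit, V ~ (1/2) V0 lam^2 kappa0 Phi^2; all parameter values are placeholders, not the paper's fit:

```python
import math

# Placeholder parameters (illustrative only, not the fitted values):
V0, lam, kappa0 = 1.0, 20.0, 1.0

def V(phi):
    # The cosh potential V = V0 [cosh(lam sqrt(kappa0) phi) - 1].
    return V0 * (math.cosh(lam * math.sqrt(kappa0) * phi) - 1.0)

# For small phi the potential reduces to a quadratic mass term:
phi = 1e-3
quad = 0.5 * V0 * lam**2 * kappa0 * phi**2
print(V(phi), quad)   # the two agree to leading order in phi
```

The quadratic limit is what makes the field behave as ordinary cold dark matter near the potential minimum, while the exponential wings dominate at large field values.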
|
Scalar Field Dark Matter, Cross Section and Planck-Scale Physics
|
Quantum error correction is of crucial importance for fault-tolerant quantum computers. As an essential step towards the implementation of quantum error-correcting codes, quantum non-demolition (QND) measurements are needed to efficiently detect the state of a logical qubit without destroying it. Here we implement QND measurements in a Si/SiGe two-qubit system, with one qubit serving as the logical qubit and the other serving as the ancilla. Making use of a two-qubit controlled-rotation gate, the state of the logical qubit is mapped onto the ancilla, followed by a destructive readout of the ancilla. Repeating this procedure enhances the logical readout fidelity from $75.5\pm 0.3\%$ to $94.5 \pm 0.2\%$ after 15 ancilla readouts. In addition, we compare the conventional thresholding method with an improved signal processing method called soft decoding that makes use of analog information in the readout signal to better estimate the state of the logical qubit. We demonstrate that soft decoding leads to a significant reduction in the required number of repetitions when the readout errors become limited by Gaussian noise, for instance in the case of readouts with a low signal-to-noise ratio. These results pave the way for the implementation of quantum error correction with spin qubits in silicon.
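A minimal sketch contrasting the two decoding strategies described above: hard thresholding of each repetition followed by majority voting, versus soft decoding that accumulates per-shot Gaussian log-likelihood ratios. The signal levels, noise, and repetition count below are assumed values chosen to mimic a low-SNR readout, not the experimental parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

mu0, mu1, sigma = 0.0, 1.0, 1.5            # assumed levels: low-SNR regime
n_rep, n_trials = 15, 2000
signals = rng.normal(mu1, sigma, size=(n_trials, n_rep))   # true state is 1

# Hard decoding: threshold each repetition at the midpoint, majority vote.
hard = (signals > 0.5 * (mu0 + mu1)).mean(axis=1) > 0.5

# Soft decoding: sum per-shot log-likelihood ratios log[P(s|1)/P(s|0)]
# for equal-variance Gaussians, then decide on the sign of the sum.
llr = ((signals - mu0) ** 2 - (signals - mu1) ** 2) / (2.0 * sigma**2)
soft = llr.sum(axis=1) > 0.0

print(hard.mean(), soft.mean())   # soft decoding typically wins at low SNR
```

The gap closes as the per-shot SNR grows, matching the observation above that soft decoding pays off mainly when readout errors become limited by Gaussian noise.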
|
Repetitive quantum non-demolition measurement and soft decoding of a silicon spin qubit
|
An emerging insight is that ground states of symmetry-protected topological orders (SPTO's) possess latent computational complexity in terms of their many-body entanglement. By introducing a fractional symmetry of SPTO, which requires the invariance under 3-colorable symmetries of a lattice, we prove that every renormalization fixed-point state of 2D $(\mathbb{Z}_2)^m$ SPTO with fractional symmetry can be utilized for universal quantum computation using only Pauli measurements, as long as it belongs to a nontrivial 2D SPTO phase. Our infinite family of fixed-point states may serve as a base model to demonstrate the idea of a "quantum computational phase" of matter, whose states share universal computational complexity ubiquitously.
|
Latent Computational Complexity of Symmetry-Protected Topological Order with Fractional Symmetry
|
Since the first observations by Kaufmann et al. (1970), special attention has been paid to static pressure-balanced structures in the form of magnetic holes or humps observed in regions of the solar wind and of planetary magnetosheaths where the $\beta$ parameter is relatively large and the ion perpendicular temperature exceeds the parallel one. Although alternative interpretations have been proposed, these structures are usually viewed as associated with the mirror instability discovered in 1957 by Vedenov and Sagdeev. After reviewing observational results provided by satellite missions, high-resolution numerical simulations of the Vlasov--Maxwell equations together with asymptotic and phenomenological models of the nonlinear dynamics near the instability threshold are discussed. The constraining effect of the mirror instability on the temperature anisotropy associated with a dominant perpendicular ion heating observed in the solar wind is reported, and recent simulations of this phenomenon based on an elaborated fluid model including low-frequency kinetic effects are briefly mentioned.
|
Nonlinear Mirror Modes in Space Plasmas
|
We prove the following continuous analogue of Vaught's Two-Cardinal Theorem: if for some $\kappa>\lambda\geq \aleph_0$, a continuous theory $T$ has a model with density character $\kappa$ which has a definable subset of density character $\lambda$, then $T$ has a model with density character $\aleph_1$ which has a separable definable subset. We also show that if we assume that $T$ is $\omega$-stable, then if $T$ has a model of density character $\aleph_1$ with a separable definable set, then for any uncountable $\kappa$ we can find a model of $T$ with density character $\kappa$ which has a separable definable subset. In order to prove this, we develop an approximate notion of quasi-minimality for the continuous setting. We apply these results to show a continuous version of the forward direction of the Baldwin-Lachlan characterization of uncountable categoricity: if a continuous theory $T$ is uncountably categorical, then $T$ is $\omega$-stable and has no Vaughtian pairs.
|
Vaught's Two-Cardinal Theorem and Quasi-Minimality in Continuous Logic
|
(Abridged). We present numerical simulations of isothermal, MHD, supersonic turbulence, designed to test various hypotheses frequently assumed in star formation (SF) theories. We consider three simulations, each with a different combination of physical size, rms sonic Mach number, and Jeans parameter, but chosen as to give the same value of the virial parameter and to conform with Larson's scaling relations. As in the non-magnetic case: we find no simultaneously subsonic and super-Jeans structures in our MHD simulations. We find that the fraction of small-scale super-Jeans structures increases when self-gravity is turned on, and that the production of gravitationally unstable dense cores by turbulence alone is very low. This implies that self-gravity is in general necessary not only to induce the collapse of Jeans-unstable cores, but also to form them. We find that denser regions tend to have more negative values of the velocity divergence, implying a net inwards flow towards the regions' centers. We compare the results from our simulations with the predictions from the recent SF theories by Krumholz & McKee, Padoan & Nordlund, and Hennebelle & Chabrier, using the expressions recently provided by Federrath & Klessen. We find that none of these theories reproduces the dependence of the SFEff on Ms observed in our simulations in the MHD case. The SFEff predicted by the theories ranges between half and one order of magnitude larger than what we observe in the simulations in both the HD and the MHD cases. We conclude that the type of flow used in simulations like the ones presented here, and assumed in recent SF theories, may not correctly represent the flow within actual clouds, and that theories that assume it does may be missing a fundamental aspect of the flow. We suggest that a more realistic regime may be that of hierarchical gravitational collapse.
|
Testing assumptions and predictions of star-formation theories
|
Active visual exploration aims to assist an agent with a limited field of view to understand its environment based on partial observations made by choosing the best viewing directions in the scene. Recent methods have tried to address this problem either by using reinforcement learning, which is difficult to train, or by uncertainty maps, which are task-specific and can only be implemented for dense prediction tasks. In this paper, we propose the Glimpse-Attend-and-Explore model which: (a) employs self-attention to guide the visual exploration instead of task-specific uncertainty maps; (b) can be used for both dense and sparse prediction tasks; and (c) uses a contrastive stream to further improve the representations learned. Unlike previous works, we show the application of our model on multiple tasks like reconstruction, segmentation and classification. Our model provides encouraging results while being less dependent on dataset bias in driving the exploration. We further perform an ablation study to investigate the features and attention learned by our model. Finally, we show that our self-attention module learns to attend different regions of the scene by minimizing the loss on the downstream task. Code: https://github.com/soroushseifi/glimpse-attend-explore.
|
Glimpse-Attend-and-Explore: Self-Attention for Active Visual Exploration
|
The main purpose of this work is to generalize the $S^3_\bfw$ Sasaki join construction $M\star_\bfl S^3_\bfw$ described in \cite{BoTo14a} when the Sasakian structure on $M$ is regular, to the general case where the Sasakian structure is only quasi-regular. This gives one of the main results, Theorem 3.2, which describes an inductive procedure for constructing Sasakian metrics of constant scalar curvature. In the Gorenstein case ($c_1(\cald)=0$) we construct a polynomial whose coefficients are linear in the components of $\bfw$ and whose unique root in the interval $(1,\infty)$ completely determines the Sasaki-Einstein metric. In the more general case we apply our results to prove that there exist infinitely many smooth 7-manifolds, each of which admits infinitely many inequivalent contact structures of Sasaki type admitting constant scalar curvature Sasaki metrics (see Corollary 6.15). We also discuss the relationship with a recent paper \cite{ApCa18} of Apostolov and Calderbank as well as the relation with K-stability.
|
The $S^3_\bfw$ Sasaki Join Construction
|
Microstructure and crystallography of {\delta} phase hydrides in as-received fine grain and 'blocky' alpha large grain Zircaloy-4 (average grain size ~11 {\mu}m and >200 {\mu}m, respectively) were examined using electron backscatter diffraction. Results suggest that the matrix-hydride orientation relationship is {0001}{\alpha}||{111}{\delta};<11-20>{\alpha}||<110>{\delta} for all the cases studied. The habit plane of intragranular hydrides and some intergranular hydrides has been found to be {10-17} of the surrounding matrix. The morphology of intergranular hydrides can vary depending upon the angle between the grain boundary and the hydride habit plane. The misfit strain between {\alpha}-Zr and {\delta}-hydride is accommodated by a high density of dislocations and twin structures in the hydrides, and a mechanism of twin formation in the hydrides has been proposed. The growth of hydrides across grain boundaries is achieved in an auto-catalytic manner similar to the growth pattern of intragranular hydrides. Easy collective shear along <1-100> makes it possible for hydride nucleation at any grain boundaries, while the process seems to favour grain boundaries with low (<40{\deg}) and high (>80{\deg}) c-axis misorientation angles. Moreover, the angle between the grain boundary and the adjacent basal planes does not influence the propensity for hydride nucleation.
|
Microstructure and formation mechanisms of {\delta}-hydrides in variable grain size Zircaloy-4 studied by electron backscatter diffraction
|
Current measurements of planet populations as a function of stellar mass show three seemingly contradictory signatures: close-in super-Earths are more prevalent around M dwarfs than FGK dwarfs; inner super-Earths are correlated with outer giants; and outer giants are less common around M dwarfs than FGK dwarfs. Here, we build a simple framework that combines the theory of pebble accretion with the measurements of dust masses in protoplanetary disks to reconcile all three observations. First, we show that cooler stars are more efficient at converting pebbles into planetary cores at short orbital periods. Second, when disks are massive enough to nucleate a heavy core at 5 AU, more than enough dust can drift in to assemble inner planets, establishing the correlation between inner planets and outer giants. Finally, while stars of varying masses are similarly capable of converting pebbles into cores at long orbital periods, hotter stars are much more likely to harbor more massive dust disks so that the giant planet occurrence rate rises around hotter stars. Our results are valid over a wide range of parameter space for a disk accretion rate that follows $\dot{M}_\star \sim 10^{-8}\,M_\odot\,{\rm yr}^{-1}(M_\star/M_\odot)^2$. We predict a decline in the mini-Neptune population (but not necessarily terrestrial planets) around stars lighter than $\sim 0.3-0.5 M_\odot$. Cold giants ($\gtrsim$5 AU), if they exist, should remain correlated with inner planets even around lower mass stars.
|
Small Planets Around Cool Dwarfs: Enhanced Formation Efficiency of Super-Earths around M dwarfs
|
Identifying the right tools to express the stochastic aspects of neural activity has proven to be one of the biggest challenges in computational neuroscience. Even if there is no definitive answer to this issue, the most common procedure to express this randomness is the use of stochastic models. In accordance with the origin of variability, the sources of randomness are classified as intrinsic or extrinsic and give rise to distinct mathematical frameworks to track down the dynamics of the cell. While the external variability is generally treated by the use of a Wiener process in models such as the Integrate-and-Fire model, the internal variability is mostly expressed via a random firing process. In this paper, we investigate how those distinct expressions of variability can be related. To do so, we examine the probability density functions of the corresponding stochastic models and investigate in what way they can be mapped onto one another via integral transforms. Our theoretical findings offer new insight into these categories of variability and confirm that, despite their contrasting nature, the mathematical formalizations of internal and external variability are strikingly similar.
|
Theoretical connections between mathematical neuronal models corresponding to different expressions of noise
|
Active turbulent advection is considered in the context of magneto-hydrodynamics. In this case, an auxiliary passive field bears no apparent connection to the active field. The scaling properties of the two fields are different. In the framework of a shell model, we show that the two-point structure function of the passive field has a unique zero mode, characterizing the scaling of this field only. In other words, the existence of statistical invariants for the decaying passive field carries no information on the scaling properties of the active field.
|
Statistics of active vs. passive advections in magnetohydrodynamic turbulence
|
Using the AdS/CFT correspondence and the eikonal approximation, we evaluate the elastic parton-parton scattering amplitude at large $N$ and strong coupling $g^2N$ in N=4 SYM. We obtain a scattering amplitude with a Regge behavior that unitarizes at large $\sqrt{s}$.
|
Elastic Parton-Parton Scattering from AdS/CFT
|
Processes of heavy quark production at TEVATRON energies are considered using the semihard (k_T factorization) QCD approach with emphasis on the BFKL dynamics of gluon distributions. We investigate the dependence of the p_T distribution of heavy quark production (presented in the form of integrated cross-sections) on different forms of the unintegrated gluon distribution. The theoretical results are compared with recent D0 and CDF experimental data on beauty production.
|
Heavy Quark Production at the TEVATRON in the Semihard QCD Approach and the Unintegrated Gluon Distribution
|
By means of a counter-example we show that the multilinear fractional operator is not bounded from a product of Hardy spaces into a Hardy space.
|
Multilinear Fractional Integral Operators: A counter-example
|
In this paper we analyze the approximation of multivariate integrals over the Euclidean plane for functions which are analytic. We show explicit upper bounds which attain the exponential rate of convergence. We use an infinite grid with different mesh sizes and lengths in each direction to sample the function, and then truncate it. In our analysis, the mesh sizes and the truncated domain are chosen by optimally balancing the truncation error and the discretization error. This paper derives results, in comparable function space settings extended to $\R^s$, analogous to those recently obtained in the unit cube by Dick, Larcher, Pillichshammer and Wo{\'z}niakowski (2011). They showed that both lattice rules and regular grids, with different mesh sizes in each direction, attain exponential rates, hence motivating us to analyze only cubature formulas based on regular meshes. We also amend the analyses of older publications using lattice rules on $\R^s$, e.g., Sloan and Osborn (1987) and Sugihara (1987), by taking the truncation error into account and extending them to take the anisotropy of the function space into account.
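To make the construction concrete, the following sketch evaluates a cubature rule on a truncated anisotropic regular grid, with a different mesh size and truncation length in each direction. The function name and the simple rectangle-rule weighting are illustrative choices, not the paper's exact scheme:

```python
import numpy as np
from itertools import product

def truncated_grid_quadrature(f, mesh, half_length):
    """Cubature on a truncated anisotropic regular grid over R^s: direction j
    uses mesh size mesh[j] on [-half_length[j], half_length[j]]; each node
    carries the cell volume prod(mesh) as its weight (rectangle rule)."""
    axes = [np.arange(-L, L + h / 2, h) for h, L in zip(mesh, half_length)]
    weight = float(np.prod(mesh))
    return weight * sum(f(np.array(p)) for p in product(*axes))
```

For analytic, rapidly decaying integrands such as a Gaussian, both the discretization error (which decays exponentially in the inverse mesh size) and the truncation error are tiny, which is the balancing act described above.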
|
Multivariate integration over $\R^s$ with exponential rate of convergence
|
In 1994, J.Cobb constructed a tame Cantor set in $\mathbb R^3$ each of whose projections into $2$-planes is one-dimensional. We show that an Antoine's necklace can serve as an example of a Cantor set all of whose projections are one-dimensional and connected. We prove that each Cantor set in $\mathbb R^n$, $n\geqslant 3$, can be moved by a small ambient isotopy so that the projection of the resulting Cantor set into each $(n-1)$-plane is $(n-2)$-dimensional. We show that if $X\subset \mathbb R^n$, $n\geqslant 2$, is a zero-dimensional compactum whose projection into some plane $\Pi\subset \mathbb R^n$ with $\dim \Pi \in \{1, 2, n-2, n-1\}$ is zero-dimensional, then $X$ is tame; this extends some particular cases of the results of D.R.McMillan, Jr. (1964) and D.G.Wright, J.J.Walsh (1982). We use the technique of defining sequences, which goes back to Louis Antoine.
|
Cantor sets with high-dimensional projections
|
This paper deals with magnetic equations of the type dH=J where the current J is a delta-function on a brane worldvolume and H a p-form field strength. In many situations in M-theory this equation needs to be solved for H in terms of a potential. A standard universality class of solutions, involving Dirac-branes, gives rise to strong intermediate singularities in H which in many physically relevant cases lead to inconsistencies. In this paper we present an alternative universality class of solutions for magnetic equations in terms of Chern-kernels, and provide relevant applications, among which the anomaly-free effective action for open M2-branes ending on M5-branes. The unobservability of the Dirac-brane requires a Dirac quantization condition; we show that the requirement of ``unobservability'' of the Chern-kernel leads in M-theory to classical gravitational anomalies which cancel precisely their quantum counterparts.
|
Chern-kernels and anomaly cancellation in M-theory
|
We show that a three-dimensional steady gradient Ricci soliton which is asymptotic to the Bryant soliton in a suitable sense must be isometric to the Bryant soliton.
|
Uniqueness of gradient Ricci solitons
|
Matrix Product Operators (MPOs) are tensor networks representing operators acting on 1D systems. They model a wide variety of situations, including communication channels with memory effects, quantum cellular automata, mixed states in 1D quantum systems, or holographic boundary models associated to 2D quantum systems. A scenario where MPOs have proven particularly useful is to represent algebras of non-trivial symmetries. Concretely, the boundaries of both symmetry protected and topologically ordered phases in 2D quantum systems exhibit symmetries in the form of MPOs. In this paper, we develop a theory of MPOs as representations of algebraic structures. We establish a dictionary between algebra and MPO properties which allows one to transfer results between both setups, covering the cases of pre-bialgebras, weak bialgebras, and weak Hopf algebras. We define the notion of pulling-through algebras, which abstracts the minimal requirements needed to define topologically ordered 2D tensor networks from MPO algebras. We show, as one of our main results, that any semisimple pivotal weak Hopf algebra is a pulling-through algebra. We demonstrate the power of this framework by showing that it can be used to construct Kitaev's quantum double models for Hopf algebras solely from an MPO representation of the Hopf algebra, in the exact same way as MPO symmetries obtained from fusion categories can be used to construct Levin-Wen string-net models, and to explain all their topological features; it thus allows us to describe both Kitaev and string-net models on the same formal footing.
|
Matrix product operator algebras I: representations of weak Hopf algebras and projected entangled pair states
|
Online platforms play an increasingly important role in shaping democracy by influencing the distribution of political information to the electorate. In recent years, political campaigns have spent heavily on the platforms' algorithmic tools to target voters with online advertising. While the public interest in understanding how platforms perform the task of shaping the political discourse has never been higher, the efforts of the major platforms to make the necessary disclosures to understand their practices fall woefully short. In this study, we collect and analyze a dataset containing over 800,000 ads and 2.5 million videos about the 2020 U.S. presidential election from Facebook, Google, and TikTok. We conduct the first large scale data analysis of public data to critically evaluate how these platforms amplified or moderated the distribution of political advertisements. We conclude with recommendations for how to improve the disclosures so that the public can hold the platforms and political advertisers accountable.
|
How Algorithms Shape the Distribution of Political Advertising: Case Studies of Facebook, Google, and TikTok
|
Anomalies are a powerful way to gain insight into possible lattice regularizations of a quantum field theory. In this work, we argue that the continuum anomaly for a given symmetry can be matched by a manifestly-symmetric, local, lattice regularization in the same spacetime dimensionality only if (i) the symmetry action is offsite, or (ii) if the continuum anomaly is reproduced exactly on the lattice. We consider lattice regularizations of a class of prototype models of QCD: the (1+1)-dimensional asymptotically-free Grassmannian nonlinear sigma models (NLSMs) with a $\theta$ term. Using the Grassmannian NLSMs as a case study, we provide examples of lattice regularizations in which both possibilities are realized. For possibility (i), we argue that Grassmannian NLSMs can be obtained from $\mathrm{SU}(N)$ antiferromagnets with a well-defined continuum limit, reproducing both the infrared physics of $\theta$ vacua and the ultraviolet physics of asymptotic freedom. These results enable the application of new classical algorithms to lattice Monte Carlo studies of these quantum field theories, and provide a viable realization suited for their quantum simulation. On the other hand, we show that, perhaps surprisingly, the conventional lattice regularization of $\theta$ vacua due to Berg and L\"uscher reproduces the anomaly exactly on the lattice, providing a realization of the second possibility.
|
Lattice regularizations of $\theta$ vacua: Anomalies and qubit models
|
Motivated by the problem of effectively executing clustering algorithms on very large data sets, we address a model for large scale distributed clustering methods. To this end, we briefly recall some standard results on the quantization problem and some results on the almost sure convergence of the Competitive Learning Vector Quantization (CLVQ) procedure. A general model for linear distributed asynchronous algorithms well adapted to several parallel computing architectures is also discussed. Our approach brings together this scalable model and the CLVQ algorithm, and we call the resulting technique the Distributed Asynchronous Learning Vector Quantization algorithm (DALVQ). An in-depth analysis of the almost sure convergence of the DALVQ algorithm is performed. A striking result is that we prove that the multiple versions of the quantizers distributed among the processors in the parallel architecture asymptotically reach a consensus almost surely. Furthermore, we also show that these versions converge almost surely towards the same nearly optimal value for the quantization criterion.
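The CLVQ procedure underlying DALVQ can be sketched as a stochastic gradient descent on the quadratic distortion: at each step one data point is drawn and only the winning (nearest) centroid is moved toward it with a decreasing step size. This is a minimal single-processor version; the function name, the `c/(c+t)` step-size schedule, and the explicit initial centroids are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def clvq(data, init_centroids, n_steps, rng, c=1.0):
    """Competitive Learning Vector Quantization: stochastic gradient descent
    on the quadratic distortion, moving only the winning centroid at each
    step with a decreasing step size."""
    centroids = np.array(init_centroids, dtype=float)
    for t in range(1, n_steps + 1):
        x = data[rng.integers(len(data))]      # draw one sample
        step = c / (c + t)                     # decreasing step-size schedule
        winner = np.argmin(((centroids - x) ** 2).sum(axis=1))
        centroids[winner] += step * (x - centroids[winner])
    return centroids
```

In the distributed asynchronous setting, each processor runs updates of this form on its own copy of the centroids, and the consensus result above says those copies asymptotically agree.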
|
Convergence of distributed asynchronous learning vector quantization algorithms
|
As a consequence of the rugged landscape of RNA molecules their folding is described by the kinetic partitioning mechanism according to which only a small fraction ($\phi_F$) reaches the folded state while the remaining fraction of molecules is kinetically trapped in misfolded intermediates. The transition from the misfolded states to the native state can far exceed biologically relevant time scales. Thus, RNA folding in vivo is often aided by protein cofactors, called RNA chaperones, that can rescue RNAs from a multitude of misfolded structures. We consider two models, based on chemical kinetics and the chemical master equation, for describing assisted folding. In the passive model, applicable for class I substrates, transient interactions of misfolded structures with RNA chaperones alone are sufficient to destabilize the misfolded structures, thus entropically lowering the barrier to folding. For this mechanism to be efficient the intermediate ribonucleoprotein (RNP) complex between collapsed RNA and protein cofactor should have optimal stability. We also introduce an active model (suitable for stringent substrates with small $\phi_F$), which accounts for the recent experimental findings on the action of CYT-19 on the group I intron ribozyme, showing that RNA chaperones do not discriminate between the misfolded and the native states. In the active model, the RNA chaperone system utilizes chemical energy of ATP hydrolysis to repeatedly bind and release misfolded and folded RNAs, resulting in a substantial increase in the yield of the native state. The theory outlined here shows, in accord with experiments, that in the steady state the native state does not form with unit probability.
|
Generalized Iterative Annealing Model for the action of RNA chaperones
|
We have used recent surveys of the composition of exoplanet host stars to investigate the expected composition of condensed material in planetesimals formed beyond the snow line in the circumstellar nebulae of these systems. Of the major solid forming elements, we find that, as for the Sun, the C and O abundances (and particularly the C/O abundance ratio) have the most significant effect on the composition of icy planetesimals formed in these systems. The calculations use a self-consistent model for the condensation sequence of volatile ices from the nebula gas after refractory (silicate and metal) phases have condensed. The resultant proportions of refractory phases and ices were calculated for a range of nebular temperature structure and redox conditions. Planetesimals in systems with sub-solar C/O should be water ice-rich, with lower than solar mass fractions of refractory materials, while in super-solar C/O systems planetesimals should have significantly higher fractions of refractories, in some cases having little or no water ice. C-bearing volatile ices and clathrates also become increasingly important with increasing C/O depending on the assumed nebular temperatures. These compositional variations in early condensates in the outer portions of the nebula will be significant for the equivalent of the Kuiper Belt in these systems, icy satellites of giant planets and the enrichment (over stellar values) of volatiles and heavy elements in giant planet atmospheres.
|
Planetesimal Compositions in Exoplanet Systems
|
Two-impurity Kondo models are paradigmatic for correlated spin-fermion systems. Working with Mn atoms on Au(111) covered by a monolayer of MoS$_2$, we tune the inter-adatom exchange via the adatom distance and the adatom-substrate exchange via the location relative to a moir\'e structure of the substrate. Differential-conductance measurements on isolated adatoms exhibit Kondo peaks with heights depending on the adatom location relative to the moir\'e structure. Mn dimers spaced by a few atomic lattice sites exhibit split Kondo resonances. In contrast, adatoms in closely spaced dimers couple antiferromagnetically, resulting in a molecular-singlet ground state. Exciting the singlet-triplet transition by tunneling electrons, we find that the singlet-triplet splitting is surprisingly sensitive to the moir\'e structure. We interpret our results theoretically by relating the variations in the singlet-triplet splitting to the heights of the Kondo peaks of single adatoms, finding evidence for coupling of the adatom spin to multiple conduction electron channels.
|
Tuning a two-impurity Kondo system by a moir\'e superstructure
|
Video Question Answering (VideoQA) is a task that requires a model to analyze and understand both the visual content given by the input video and the textual part given by the question, and the interaction between them in order to produce a meaningful answer. In our work we focus on the Egocentric VideoQA task, which exploits first-person videos, because of the importance of this task, which can have an impact on many different fields, such as social assistance and industrial training. Recently, an Egocentric VideoQA dataset, called EgoVQA, has been released. Given its small size, models tend to overfit quickly. To alleviate this problem, we propose several augmentation techniques which give us a +5.5% improvement on the final accuracy over the considered baseline.
|
Data augmentation techniques for the Video Question Answering task
|
We consider the effect of weak disorder on eigenstates in a special class of tight-binding models. Models in this class have short-range hopping on periodic lattices; their defining feature is that the clean systems have some energy bands that are dispersionless throughout the Brillouin zone. We show that states derived from these flat bands are generically critical in the presence of weak disorder, being neither Anderson localised nor spatially extended. Further, we establish a mapping between this localisation problem and the one of resonances in random impedance networks, which previous work has suggested are also critical. Our conclusions are illustrated using numerical results for a two-dimensional lattice, known as the square lattice with crossings or the planar pyrochlore lattice.
|
Anderson localisation in tight-binding models with flat bands
|
Stochastic Gradient Langevin Dynamics (SGLD) is widely used to approximate Bayesian posterior distributions in statistical learning procedures with large-scale data. As opposed to many usual Markov chain Monte Carlo (MCMC) algorithms, SGLD is not stationary with respect to the posterior distribution; two sources of error appear: The first error is introduced by an Euler--Maruyama discretisation of a Langevin diffusion process, the second error comes from the data subsampling that enables its use in large-scale data settings. In this work, we consider an idealised version of SGLD to analyse the method's pure subsampling error that we then see as a best-case error for diffusion-based subsampling MCMC methods. Indeed, we introduce and study the Stochastic Gradient Langevin Diffusion (SGLDiff), a continuous-time Markov process that follows the Langevin diffusion corresponding to a data subset and switches this data subset after exponential waiting times. There, we show that the Wasserstein distance between the posterior and the limiting distribution of SGLDiff is bounded above by a fractional power of the mean waiting time. Importantly, this fractional power does not depend on the dimension of the state space. We bring our results into context with other analyses of SGLD.
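A minimal sketch of one SGLD update makes both error sources visible: the Euler--Maruyama discretisation of the Langevin diffusion and the data subsampling in the gradient estimate. The function name and the step-size convention are illustrative assumptions, not the paper's notation:

```python
import numpy as np

def sgld_step(theta, data, grad_log_prior, grad_log_lik, step, batch_size, rng):
    """One SGLD update: an Euler--Maruyama step of the Langevin diffusion,
    with the full-data gradient replaced by an unbiased subsampled estimate."""
    n = len(data)
    idx = rng.choice(n, size=batch_size, replace=False)   # data subsampling
    grad = grad_log_prior(theta) + (n / batch_size) * sum(
        grad_log_lik(theta, data[i]) for i in idx
    )
    noise = rng.standard_normal(np.shape(theta))          # injected Gaussian noise
    return theta + 0.5 * step * grad + np.sqrt(step) * noise
```

For a Gaussian likelihood with a Gaussian prior, iterating this step yields samples whose empirical mean approaches the posterior mean, up to the discretisation and subsampling errors discussed above.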
|
Subsampling Error in Stochastic Gradient Langevin Diffusions
|
The Lyman-$\alpha$ absorption spectrum associated with photons traversing the intergalactic medium allows us to probe the linear matter power spectrum down to relatively small distance scales. Finding ways of accurately evaluating Lyman-$\alpha$ constraints across large classes of candidate models of dark-matter physics is thus of paramount importance. While such constraints have been evaluated for dark-matter models with relatively simple dark-matter velocity distributions, more complex models -- particularly those with dark-matter velocity distributions stretching across multiple scales -- are receiving increasing attention. In this paper, we undertake a study of the Lyman-$\alpha$ constraints associated with general dark-matter velocity distributions. Although these constraints are difficult to evaluate in principle, in practice there exist two ways of recasting them into forms which are easier to evaluate and which therefore allow a more rapid determination of whether a given dark-matter model is ruled in or out. We utilize both of these recasts in order to determine the Lyman-$\alpha$ bounds on different classes of dark-matter velocity distributions. We also develop a general method by which the results of these different recasts can be compared. For relatively simple dark-matter velocity distributions, we find that these two classes of recasts tend to align and give similar results. However, the situation is far more complex for distributions involving multiple velocity scales: while these two recasts continue to yield similar results within certain regions of parameter space, they nevertheless yield dramatically different results within precisely those regions of parameter space which are likely to be phenomenologically relevant. This, then, serves as a cautionary tale regarding the use of such recasts for complex dark-matter velocity distributions.
|
Evaluating Lyman-$\alpha$ Constraints for General Dark-Matter Velocity Distributions: Multiple Scales and Cautionary Tales
|
Form factors are derived for a model describing the coherent Josephson tunneling between two coupled Bose-Einstein condensates. This is achieved by studying the exact solution of the model in the framework of the algebraic Bethe ansatz. In this approach the form factors are expressed through determinant representations which are functions of the roots of the Bethe ansatz equations.
|
Exact form factors for the Josephson tunneling current and relative particle number fluctuations in a model of two coupled Bose-Einstein condensates
|
A class of exact solutions of the geodesic equations in (anti-)de Sitter AdS$_4$ and dS$_4$ spacetimes is presented. The geodesics of test particles in AdS$_4$ and dS$_4$ spacetimes are sinusoidal and hyperbolic sine world lines, respectively. The world lines of light rays are straight lines, as is well known. In particular, the world lines of test particles do not depend on their energy. Spontaneous symmetry breaking of AdS$_4$ spacetime provides a physical explanation for the emergence of virtual particle-antiparticle pairs in vacuum. Interestingly, the energy of a pair and the time its particles move along their geodesics can be related by a relation similar to the Heisenberg uncertainty relation pertaining to quantum vacuum fluctuations. The sinusoidal geodesics of AdS$_4$ spacetime can describe the world lines of the virtual particles and antiparticles. The hyperbolic sine geodesics of dS$_4$ spacetime can explain why galaxies move apart with positive accelerations.
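A hedged sketch of the harmonic versus hyperbolic character of these geodesics, assuming global static coordinates and curvature radius $\ell$ (notation not fixed by the abstract):

```latex
% Radial timelike geodesics in (A)dS_4 with curvature radius \ell.
% AdS_4 acts like an attractive harmonic potential:
\ddot{r} + \frac{r}{\ell^{2}} = 0
\quad\Longrightarrow\quad
r(\tau) = A \sin\!\Big(\frac{\tau}{\ell} + \varphi\Big),
% while dS_4 acts like a repulsive one:
\ddot{r} - \frac{r}{\ell^{2}} = 0
\quad\Longrightarrow\quad
r(\tau) = A \sinh\!\Big(\frac{\tau}{\ell} + \varphi\Big).
% The oscillation period 2\pi\ell in AdS_4 is independent of the particle's
% energy, consistent with the energy-independence of the world lines above.
```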
|
Geodesics in the (anti-)de Sitter spacetime
|
The question of whether or not any zero torsion linear map on a non abelian real Lie algebra g is necessarily an extension of some CR-structure is considered and answered in the negative. Two examples are provided, one in the negative and one in the positive. In both cases, the computation up to equivalence of all zero torsion linear maps on g is used for an explicit description of the equivalence classes of integrable complex structures on the direct product g x g.
|
Two examples about zero torsion linear maps on Lie algebras
|
The paper addresses the problem of motion saliency in videos, that is, identifying regions whose motion departs from that of their context. We propose a new unsupervised paradigm to compute motion saliency maps. The key ingredient is the flow inpainting stage. Candidate regions are determined from the optical flow boundaries. The residual flow in these regions is given by the difference between the optical flow and the flow inpainted from the surrounding areas. It provides the cue for motion saliency. The method is flexible and general by relying on motion information only. Experimental results on the DAVIS 2016 benchmark demonstrate that the method compares favourably with state-of-the-art video saliency methods.
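The residual-flow cue can be sketched as follows; the mean-of-surroundings inpainting here is a crude stand-in for the paper's actual flow-inpainting stage, and the function name is illustrative:

```python
import numpy as np

def residual_flow(flow, region_mask):
    """Motion-saliency cue: observed optical flow minus the flow 'inpainted'
    into the candidate region from its surroundings.  Here the inpainting is
    a crude stand-in: the mean flow of the surrounding pixels."""
    inpainted = flow.copy()
    for c in range(flow.shape[2]):                  # u and v flow components
        inpainted[region_mask, c] = flow[~region_mask, c].mean()
    return flow - inpainted                         # large residual => salient
```

A region moving with the background yields a near-zero residual, while a region with independent motion yields a large one, which is exactly the saliency signal described above.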
|
Unsupervised motion saliency map estimation based on optical flow inpainting
|
This paper studies the decoding capabilities of maximum distance profile (MDP) convolutional codes over the erasure channel and compares them with the decoding capabilities of MDS block codes over the same channel. The erasure channel involving large alphabets is an important practical channel model when studying packet transmissions over a network, e.g., the Internet.
|
Decoding of MDP Convolutional Codes over the Erasure Channel
|
We provide a Quantum Field Theory derivation of Lifshitz formula for the Casimir force due to a fluctuating real scalar field in $d+1$ dimensions. The field is coupled to two imperfect, thick, plane mirrors, which are modeled by background potentials localized on their positions. The derivation proceeds from the calculation of the vacuum energy in the Euclidean version of the system, reducing the problem to the evaluation of a functional determinant. The latter is written, via Gelfand-Yaglom's formula, in terms of functions depending on the structure of the potential describing each mirror; those functions encode the properties which are relevant to the Casimir force and are the reflection coefficients evaluated at imaginary frequencies.
|
Lifshitz formula for the Casimir force and the Gelfand-Yaglom theorem
|
The qBounce experiment offers a new way of looking at gravitation based on quantum interference. An ultracold neutron is reflected in well-defined quantum states in the gravity potential of the Earth by a mirror, which makes it possible to apply the concept of gravity resonance spectroscopy (GRS). This experiment with neutrons gives access to all gravity parameters, such as the dependences on distance, mass, curvature, energy-momentum, and torsion. Here, we concentrate on torsion.
|
Gravity Resonance Spectroscopy and Einstein-Cartan Gravity
|
Deep Convolutional Neural Networks~(CNNs) offer remarkable performance in classification and regression for many high-dimensional problems and have been widely utilized in real-world cognitive applications. However, the high computational cost of CNNs greatly hinders their deployment in resource-constrained applications, real-time systems and edge computing platforms. To overcome this challenge, we propose a novel filter-pruning framework, two-phase filter pruning based on conditional entropy, namely \textit{2PFPCE}, to compress CNN models and reduce the inference time with marginal performance degradation. In our proposed method, we formulate the filter pruning process as an optimization problem and propose a novel filter selection criterion measured by conditional entropy. Based on the assumption that the representation of neurons shall be evenly distributed, we also develop a maximum-entropy filter freeze technique that can reduce overfitting. Two filter pruning strategies -- global and layer-wise strategies -- are compared. Our experiment results show that combining these two strategies can achieve a higher neural network compression ratio than applying only one of them under the same accuracy drop threshold. Two-phase pruning, that is, combining both global and layer-wise strategies, achieves a 10X FLOPs reduction and a 46% inference time reduction on VGG-16, with a 2% accuracy drop.
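The filter-selection step can be sketched as follows. The paper ranks filters by conditional entropy, whereas this illustration uses a pluggable `score_fn` with an L1-norm default as a stand-in; names and shapes are assumptions:

```python
import numpy as np

def prune_filters(weights, keep_ratio, score_fn=None):
    """Rank the filters of one conv layer and zero out the lowest-scoring ones.
    `weights` has shape (out_channels, in_channels, kH, kW).  The paper scores
    filters by conditional entropy; the default L1 norm here is only a proxy."""
    if score_fn is None:
        score_fn = lambda f: np.abs(f).sum()               # L1-norm proxy score
    scores = np.array([score_fn(f) for f in weights])
    n_keep = max(1, int(round(keep_ratio * len(weights))))
    keep = np.argsort(scores)[::-1][:n_keep]               # highest-scoring filters
    mask = np.zeros(len(weights), dtype=bool)
    mask[keep] = True
    pruned = weights.copy()
    pruned[~mask] = 0.0                                    # prune the rest
    return pruned, mask
```

Applying such a step globally (one ranking over all layers) versus layer-wise (a fixed `keep_ratio` per layer) corresponds to the two strategies compared above.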
|
2PFPCE: Two-Phase Filter Pruning Based on Conditional Entropy
|
The analogue of the charge-conjugation modular invariant for rational logarithmic conformal field theories is constructed. This is done by reconstructing the bulk spectrum from a simple boundary condition (the analogue of the Cardy `identity brane'). We apply the general method to the c_1,p triplet models and reproduce the previously known bulk theory for p=2 at c=-2. For general p we verify that the resulting partition functions are modular invariant. We also construct the complete set of 2p boundary states, and confirm that the identity brane from which we started indeed exists. As a by-product we obtain a logarithmic version of the Verlinde formula for the c_1,p triplet models.
|
From boundary to bulk in logarithmic CFT
|
Since ChatGPT has emerged as a major AIGC model, providing high-quality responses across a wide range of applications (including software development and maintenance), it has attracted much interest from many individuals. ChatGPT has great promise, but there are serious problems that might arise from its misuse, especially in the realms of education and public safety. Several AIGC detectors are available, and they have all been tested on genuine text. However, more study is needed to see how effective they are for multi-domain ChatGPT material. This study aims to fill this need by creating a multi-domain dataset for testing the state-of-the-art APIs and tools for detecting artificially generated information used by universities and other research institutions. A large dataset consisting of articles, abstracts, stories, news, and product reviews was created for this study. The second step is to use the newly created dataset to put six tools through their paces. Six different artificial intelligence (AI) text identification systems, including "GPTkit," "GPTZero," "Originality," "Sapling," "Writer," and "Zylalab," have accuracy rates between 55.29% and 97.0%. Although all the tools fared well in the evaluations, Originality was particularly effective across the board.
|
An Empirical Study of AI Generated Text Detection Tools
|
This paper presents a dynamic model to study the impact on the economic outcomes in different societies during the Malthusian Era of individualism (time spent working alone) and collectivism (complementary time spent working with others). The model is driven by opposing forces: a greater degree of collectivism provides a higher safety net for low quality workers but a greater degree of individualism allows high quality workers to leave larger bequests. The model suggests that more individualistic societies display smaller populations, greater per capita income and greater income inequality. Some (limited) historical evidence is consistent with these predictions.
|
A Theory of Individualism, Collectivism and Economic Outcomes
|
We report the detection of luminous extended X-ray emission in NGC 6240 on the basis of ROSAT HRI observations of this ultraluminous IR galaxy. The spatial structure and temporal behavior of the X-ray source were analyzed. We find that >= 70% of the soft X-ray emission is extended beyond a radius of 5 arcsec. Strong emission can be traced out to a radius of 20 arcsec and weaker emission extends out to 50 arcsec. With a luminosity of at least 10^{42} erg/s this makes NGC 6240 one of the most luminous X-ray emitters in extended emission known. Evidence for a nuclear compact variable component is indicated by a drop of 32% in the HRI count rate as compared to the PSPC data taken one year earlier. No short-timescale variability is detected. The HRI data, which represent the first high-resolution study of the X-ray emission from NGC 6240, complement previous spectral fits to ROSAT PSPC data (astro-ph/9710098; A&A 330, 823) that suggested a two-component model consisting of thermal emission from shocked gas immersed in a starburst wind plus a powerlaw source attributed to scattered light from an obscured AGN. We discuss several models to account for the extended and compact emission. Although pushed to its limits the starburst outflow model is tenable for the essential part of the extended emission. For the AGN-type component we propose a model consisting of a near-nuclear `warm scatterer' that explains the apparent fading of the X-ray flux within a year as well as the strong FeKalpha complex seen in an ASCA spectrum.
|
ROSAT HRI discovery of luminous extended X-ray emission in NGC 6240
|
We present the results of a linear optics photonic implementation of a quantum circuit that simulates a phase covariant cloner, by using two different degrees of freedom of a single photon. We experimentally simulate the action of two mirrored $1\rightarrow 2$ cloners, each of them biasing the cloned states into opposite regions of the Bloch sphere. We show that by applying a random sequence of these two cloners, an eavesdropper can mitigate the amount of noise added to the original input state and therefore prepare clones with no bias but with the same individual fidelity, masking its presence in a quantum key distribution protocol. Input polarization qubit states are cloned into path qubit states of the same photon, which is identified as a potential eavesdropper in a quantum key distribution protocol. The device has the flexibility to produce mirrored versions that optimally clone states on either the northern or southern hemispheres of the Bloch sphere, as well as to simulate optimal and non-optimal cloning machines by tuning the asymmetry on each of the cloning machines.
|
Photonic quantum simulator for unbiased phase covariant cloning
|
The interplay between disorder and quantum interference leads to a wide variety of physical phenomena, including the celebrated Anderson localization: the complete absence of diffusive transport due to quantum interference between different particle trajectories. In two dimensions, any amount of disorder is thought to induce localization of all states at long enough length scales, though this may be prevented if bands are topological or have strong spin-orbit coupling. In this note, we present a simple argument providing another mechanism for disrupting localization: tuning the underlying curvature of the manifold on which diffusion takes place. We show that negative-curvature manifolds contain a natural infrared cutoff for the probability of self-returning paths. We provide explicit calculations of the Cooperon, directly related to the weak-localization corrections to the conductivity, in hyperbolic space. It is shown that constant negative curvature leads to a rapid growth in the number of available trajectories a particle can coherently traverse in a given time, reducing the importance of interference effects and restoring classical diffusive behavior even in the absence of inelastic collisions. We conclude by arguing that this result may be amenable to experimental verification through the use of quantum simulators.
|
Absence of Weak Localization on Negative Curvature Surfaces
|
Prostate cancer patients who undergo prostatectomy are closely monitored for recurrence and metastasis using routine prostate-specific antigen (PSA) measurements. When PSA levels rise, salvage therapies are recommended to decrease the risk of metastasis. However, due to the side effects of these therapies and to avoid over-treatment, it is important to understand which patients should initiate salvage therapy, and when. In this work, we use the University of Michigan Prostatectomy Registry data to tackle this question. Due to the observational nature of these data, we face the challenge that PSA is simultaneously a time-varying confounder and an intermediate variable for salvage therapy. We define different causal salvage therapy effects, conditional on different specifications of the longitudinal PSA history. We then illustrate how these effects can be estimated using the framework of joint models for longitudinal and time-to-event data. All proposed methodology is implemented in the freely available R package JMbayes2.
|
Using Joint Models for Longitudinal and Time-to-Event Data to Investigate the Causal Effect of Salvage Therapy after Prostatectomy
|
Assuming that AXPs and SGRs accrete matter from a fallback disk, we attempt to explain both the soft and the hard X-ray emission as the result of the accretion process. We also attempt to explain their radio emission or the lack of it. We test the hypothesis that the power-law, hard X-ray spectra are produced in the accretion flow mainly by bulk-motion Comptonization of soft photons emitted at the neutron star surface. Fallback disk models invoke surface dipole magnetic fields of $10^{12} - 10^{13}$ G, which is what we assume here. Unlike normal X-ray pulsars, for which the accretion rate is highly super-Eddington, the accretion rate is approximately Eddington in AXPs and SGRs and thus the bulk-motion Comptonization operates efficiently. As an illustrative example we reproduce both the hard and the soft X-ray spectra of AXP 4U 0142+61 well using the XSPEC package compTB. Our model seems to explain both the hard and the soft X-ray spectra of AXPs and SGRs, as well as their radio emission or the lack of it, in a natural way. It might also explain the short bursts observed in these sources. On the other hand, it cannot explain the giant X-ray outbursts observed in SGRs, which may result from the conversion of magnetic energy in local multipole fields.
|
The Energy Spectrum of Anomalous X-ray Pulsars and Soft Gamma-ray Repeaters
|
Hot, compact, hydrogen-deficient pre-white dwarfs (pre-WDs) with effective temperatures of Teff > 70,000 K and a surface gravity of 5.0 < log g < 7.0 are rather rare objects despite recent and ongoing surveys. It is believed that they are the outcome of either single star evolution (late helium-shell flash or late helium-core flash) or binary star evolution (double WD merger). Their study is interesting because the surface elemental abundances reflect the physics of thermonuclear flashes and merger events. Spectroscopically they are divided in three different classes, namely PG1159, O(He), or He-sdO. We present a spectroscopic analysis of five such stars that turned out to have atmospheric parameters in the range Teff = 70,000-80,000 K and log g = 5.2-6.3. The three investigated He-sdOs have a relatively high hydrogen mass fraction (10%) that is unexplained by both single (He core flash) and binary evolution (He-WD merger) scenarios. The O(He) star JL9 is probably a binary helium-WD merger, but its hydrogen content (6%) is also at odds with merger models. We found that RL 104 is the 'coolest' (Teff = 80,000 K) member of the PG1159 class in a pre-WD stage. Its optical spectrum is remarkable because it exhibits C IV lines involving Rydberg states with principal quantum numbers up to n = 22. Its rather low mass (0.48 +0.03/-0.02 Msun) is difficult to reconcile with the common evolutionary scenario for PG1159 stars due to it being the outcome of a (very) late He-shell flash. The same mass-problem faces a merger model of a close He-sdO plus CO WD binary that predicts PG1159-like abundances. Perhaps RL 104 originates from a very late He-shell flash in a CO/He WD formed by a merger of two low-mass He-WDs.
|
Non-local thermodynamic equilibrium spectral analysis of five hot, hydrogen-deficient pre-white dwarfs
|
We consider the Krein realization of the Hilbert space for a massless scalar field in 1+1 dimensions. We find convergence criteria and the completion of the space of test functions ${\cal S}$ with the topology induced by the Krein scalar product. Finally, we show that the interpretation for the Fourier components as probability amplitudes for the momentum operator is lost in this case.
|
Massless scalar fields in 1+1 dimensions and Krein spaces
|
We use a model of an accretion flow coupled with an emergent flare to interpret the latest 1.3mm VLBI measurements for Sagittarius A*. The visibility data constrained the distances from the flare center to the black hole center as $d_{\rm EW}\lesssim20{\rm R_g}$ and $d_{\rm NS}\lesssim80{\rm R_g}$ in the East-West and North-South directions, respectively. If interpreted by the hot-spot model, the flare was preferred to pass in front of the black hole at a radius much larger than $d_{\rm EW}$. If interpreted by the episodic jet launched from a nearly edge-on hot accretion flow, the flare was preferred to be ejected with $\theta_{\rm j}\gtrsim40^\circ$ off the black hole rotating axis. This method can be generalized to help us understand future sub-millimeter VLBI observations, and study the millimeter/sub-millimeter variabilities in the vicinity of the Galactic Center supermassive black hole.
|
Constraining the Flaring Region of Sagittarius A* By 1.3mm VLBI Measurements
|
Rovibrational quantum states in the $X^1\Sigma_g^+$ electronic ground state of H$_2$ are prepared in the $v=13$ vibrational level up to its highest bound rotational level $J=7$, and in the highest bound vibrational level $v=14$ (for $J=1$) by two-photon photolysis of H$_2$S. These states are laser-excited in a subsequent two-photon scheme into $F^1\Sigma_g^+$ outer well states, where the assignment of the highest ($v,J$) states is derived from a comparison of experimentally known levels in $F^1\Sigma_g^+$, combined with ab initio calculations of $X^1\Sigma_g^+$ levels. The assignments are further verified by excitation of $F^1\Sigma_g^+$ population into autoionizing continuum resonances which are compared with multi-channel quantum defect calculations. Precision spectroscopic measurements of the $F-X$ intervals form a test for the ab initio calculations of ground state levels at high vibrational quantum numbers and large internuclear separations, for which agreement is found.
|
Photolysis production and spectroscopic investigation of the highest vibrational states in H$_2$ (X$^1\Sigma_g^+$ $v=13,14$)
|
Fairness has become a central issue for our research community as classification algorithms are adopted in societally critical domains such as recidivism prediction and loan approval. In this work, we consider the potential bias based on protected attributes (e.g., race and gender), and tackle this problem by learning latent representations of individuals that are statistically indistinguishable between protected groups while sufficiently preserving other information for classification. To do that, we develop a minimax adversarial framework with a generator to capture the data distribution and generate latent representations, and a critic to ensure that the distributions across different protected groups are similar. Our framework provides a theoretical guarantee with respect to statistical parity and individual fairness. Empirical results on four real-world datasets also show that the learned representation can effectively be used for classification tasks such as credit risk prediction while obstructing information related to protected groups, especially when removing protected attributes is not sufficient for fair classification.
|
Learning Fair Representations via an Adversarial Framework
|