The backaction of quantum measurement on the Kondo effect in a quantum dot system is investigated by considering continuous projective measurement of the singly occupied states of a quantum dot. We elucidate the qualitative features of the Kondo effect under quantum measurement and determine the effective Kondo temperature affected by the measurement. The Kondo resonance in the spectral function is suppressed when the measurement strength reaches the energy scale of the Kondo temperature without measurement. Through the spin susceptibility, we identify a generalized Kondo temperature under continuous quantum measurement. The measurement backaction changes the singularity in the spin susceptibility into a highly non-monotonic temperature dependence around the generalized Kondo temperature. The dependence of the generalized Kondo temperature on the measurement strength is discussed quantitatively.
Kondo Effect in a Quantum Dot under Continuous Quantum Measurement
The HH80-81 system is one of the most powerful jets driven by a massive protostar. We present new near-infrared (NIR) line imaging observations of the HH80-81 jet in the H$_2$ (2.122 $\mu$m) and [Fe II] (1.644 $\mu$m) lines. These lines trace not only the jet close to the exciting source but also the knots located farther away. We have detected nine groups of knot-like structures in the jet, including HH80 and HH81, spaced $0.2-0.9$ pc apart. The knots in the northern arm of the jet show only [Fe II] emission closer to the exciting source, a combination of [Fe II] and H$_2$ at intermediate distances, and solely H$_2$ emission farther outwards. Towards the southern arm, all the knots exhibit both H$_2$ and [Fe II] emission. The nature of the shocks is inferred by combining the NIR observations with radio and X-ray observations from the literature. In the northern arm, we infer the presence of strong dissociative shocks in the knots located close to the exciting source. The knots in the southern arm that include HH80 and HH81 are explicable as a combination of strong and weak shocks. The mass-loss rates of the knots determined from [Fe II] luminosities are in the range $\sim 3.0\times 10^{-7}-5.2\times 10^{-5}$ M$_{\odot}$ yr$^{-1}$, consistent with those from massive protostars. Towards the central region, close to the driving source of the jet, we have observed various arcs in H$_2$ emission which resemble bow shocks, and strings of H$_2$ knots which reveal traces of multiple outflows.
Imaging of HH80-81 jet in the NIR shock tracers H$_2$ and [Fe II]
Most digital cameras use sensors coated with a Color Filter Array (CFA) to capture channel components at every pixel location, resulting in a mosaic image that does not contain pixel values in all channels. Current research on reconstructing these missing channels, also known as demosaicing, introduces many artifacts, such as zipper effect and false color. Many deep learning demosaicing techniques outperform classical techniques in reducing the impact of artifacts. However, most of these models tend to be over-parametrized. Consequently, edge implementation of state-of-the-art deep learning-based demosaicing algorithms on low-end edge devices is a major challenge. We perform an exhaustive search over deep neural network architectures and obtain a Pareto front of Color Peak Signal to Noise Ratio (CPSNR), as the performance criterion, versus the number of parameters, as the model complexity, that beats the state of the art. Architectures on the Pareto front can then be used to choose the best architecture for a variety of resource constraints. Simple architecture search methods such as exhaustive search and grid search require certain conditions on the loss function to converge to the optimum. We clarify these conditions in a brief theoretical study.
Deep Demosaicing for Edge Implementation
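As a rough illustration of the Pareto-front idea in the abstract above (not the authors' code; the candidate numbers below are hypothetical), the non-dominated architectures in a quality-versus-size trade-off can be extracted as follows:

```python
# Illustrative sketch: extracting the Pareto front from (num_params, cpsnr)
# pairs produced by an exhaustive architecture search. Candidates are
# hypothetical (parameter count, CPSNR in dB) tuples.

def pareto_front(candidates):
    """Return candidates not dominated by any other: one candidate dominates
    another if it has fewer (or equal) parameters AND strictly higher CPSNR."""
    # Sort by model size ascending, breaking ties by CPSNR descending.
    ordered = sorted(candidates, key=lambda c: (c[0], -c[1]))
    front, best_cpsnr = [], float("-inf")
    for params, cpsnr in ordered:
        if cpsnr > best_cpsnr:      # strictly better quality than any smaller model
            front.append((params, cpsnr))
            best_cpsnr = cpsnr
    return front

# Hypothetical search results.
results = [(5_000, 38.1), (12_000, 39.4), (12_500, 39.0), (40_000, 40.2)]
print(pareto_front(results))   # [(5000, 38.1), (12000, 39.4), (40000, 40.2)]
```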
Recent surveys have identified a seemingly ubiquitous population of galaxies with elevated [OIII]/H$\beta$ emission line ratios at $z > 1$, though the nature of this phenomenon continues to be debated. The [OIII]/H$\beta$ line ratio is of interest because it is a main component of the standard diagnostic tools used to differentiate between active galactic nuclei (AGN) and star-forming galaxies, as well as of the gas-phase metallicity indicators $O_{23}$ and $R_{23}$. Here, we investigate the primary driver of increased [OIII]/H$\beta$ ratios by median-stacking rest-frame optical spectra for a sample of star-forming galaxies in the 3D-HST survey in the redshift range $z\sim1.4-2.2$. Using $N = 4220$ star-forming galaxies, we stack the data in bins of stellar mass and of specific star formation rate (sSFR). After accounting for stellar Balmer absorption, we measure [OIII]$\lambda5007$\AA/H$\beta$ down to $\mathrm{M} \sim 10^{9.2} \ \mathrm{M_\odot}$ and sSFR $\sim 10^{-9.6} \ \mathrm{yr}^{-1}$, more than an order of magnitude lower than previous work at similar redshifts. We find an offset of $0.59\pm0.05$ dex between the median ratios at $z\sim2$ and $z\sim0$ at fixed stellar mass, in agreement with existing studies. However, with respect to sSFR, the $z \sim 2$ stacks all lie within 1$\sigma$ of the median SDSS ratios, with an average offset of only $-0.06\pm 0.05$ dex. We find that the excitation properties of galaxies are tightly correlated with their sSFR at both $z\sim2$ and $z\sim0$, with a relation that appears to be roughly constant over the last 10 Gyr of cosmic time.
The Relation Between [OIII]/H$\beta$ and Specific Star Formation Rate in Galaxies at $z \sim 2$
In the study of order estimates for the Riemann zeta-function $ \zeta(s) = \sum_{n=1}^\infty n^{-s} $, the Lindel\"{o}f hypothesis is an important theme. In this connection, the asymptotic behavior of mean values has been studied. The theory of mean values has also attracted attention for double zeta-functions, and the mean values of the Euler-Zagier and Mordell-Tornheim double zeta-functions have been studied. In this paper, we prove asymptotic formulas for the mean square values of the Barnes double zeta-function $ \zeta_2 (s, \alpha ; v, w ) = \sum_{m=0}^\infty \sum_{n=0}^\infty (\alpha+vm+wn)^{-s} $ with respect to $ \mathrm{Im}(s) $ as $ \mathrm{Im}(s) \rightarrow + \infty $.
Mean values of the Barnes double zeta-function
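For orientation, the type of mean value discussed in the abstract above can be written schematically as follows (a restatement of the setup only; the precise range of $\sigma$ and the explicit main and error terms are the content of the paper's theorems):

```latex
% Schematic shape of a mean square asymptotic for the Barnes double
% zeta-function, with sigma = Re(s) fixed in a suitable range; the explicit
% constant C and the error terms are given in the paper, not reproduced here.
\[
  \frac{1}{T}\int_{1}^{T}\bigl|\zeta_2(\sigma+it,\alpha;v,w)\bigr|^{2}\,dt
  \;\longrightarrow\; C(\sigma,\alpha;v,w)
  \qquad (T\to+\infty).
\]
```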
Traditional network interdiction problems focus on removing vertices or edges from a network so as to disconnect or lengthen paths in the network; network diversion problems seek to remove vertices or edges to reroute flow through a designated critical vertex or edge. We introduce the all-pairs vitality maximization problem (VIMAX), in which vertex deletion attempts to maximize the amount of flow passing through a critical vertex, measured as the all-pairs vitality of the vertex. The assumption in this problem is that in a network whose structure is known but whose vertices' physical locations may not be (e.g., a social network), locating a person or asset of interest might require the ability to detect a sufficient amount of flow (e.g., communications or financial transactions) passing through the corresponding vertex in the network. We formulate VIMAX as a mixed integer program (MIP) and show that it is NP-hard. We compare the performance of the MIP and a simulated annealing heuristic on both real and simulated data sets, and highlight the potential increase in vitality of key vertices that can be attained by subset removal. We also present graph theoretic results that can be used to narrow the set of vertices to consider for removal.
The All-Pairs Vitality-Maximization (VIMAX) Problem
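A brute-force rendering of the vitality objective may help fix ideas (an illustrative sketch, not the paper's MIP formulation; the toy graph is hypothetical):

```python
# Illustrative sketch of all-pairs vitality: the vitality of v is the total
# drop in pairwise max-flow value when v is removed, i.e. the sum over ordered
# pairs (s, t), s,t != v, of maxflow(G; s, t) - maxflow(G - v; s, t).
import itertools
import networkx as nx

def all_pairs_vitality(G, v, capacity="capacity"):
    H = G.copy()
    H.remove_node(v)
    total = 0.0
    for s, t in itertools.permutations(H.nodes, 2):
        full = nx.maximum_flow_value(G, s, t, capacity=capacity)
        without_v = nx.maximum_flow_value(H, s, t, capacity=capacity)
        total += full - without_v
    return total

# Hypothetical toy network with unit capacities.
G = nx.DiGraph()
G.add_edges_from([("a", "c"), ("b", "c"), ("c", "d"), ("c", "e")], capacity=1.0)
print(all_pairs_vitality(G, "c"))  # 4.0: all a/b -> d/e flow passes through c
```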
Many-body localization (MBL) describes a quantum phase in which an isolated interacting system subject to sufficient disorder displays non-ergodic behavior, evading thermal equilibrium under its own dynamics. Previously, the thermalization-MBL transition has been characterized largely as a function of disorder strength. Here, we explore a new axis, reporting on an energy-resolved MBL transition using a 19-qubit programmable superconducting processor, which enables precise control and flexibility of both the disorder strength and the initial state preparation. By measuring time-evolved observables and quantities related to many-body wavefunctions, we observe that the onset of localization occurs at different disorder strengths, with distinguishable energy scales. Our results open avenues for the experimental exploration of many-body mobility edges in MBL systems, whose existence is widely debated due to finite system sizes, and where exact simulations on classical computers become infeasible.
Observation of energy resolved many-body localization
Decisions in a shareholder meeting or a legislative committee are often modeled as a weighted game. The influence of a member is then measured by a power index. A large variety of different indices has been introduced in the literature. This paper analyzes how power indices differ with respect to the largest possible power of a non-dictatorial player. It turns out that the considered set of power indices can be partitioned into two classes. This may serve as another indication of which index to use in a given application.
The power of the largest player
We introduce strong distributivity, a strengthening of distributivity which implies preservation of cc-ness and stationarity, and then prove a stronger version of the Easton Lemma. We also introduce a new framework for working with arbitrary orders on products of sets. Both concepts are applied together to answer two questions of Krueger using a new version of Mitchell's forcing.
Disjoint Stationary Sequences on an Interval of Cardinals
We make some remarks on reconnection in plasmas and present some calculations related to the problem of finding velocity fields which conserve magnetic flux, or at least magnetic field lines. We start from views and definitions of ideal and non-ideal flows on the one hand, and of reconnective and non-reconnective plasma dynamics on the other. Our considerations give additional insight into the discussion of violations of the frozen-in field concept which started recently with the papers by Baranov & Fahr (2003a; 2003b). We find a correlation between the non-idealness, which is given by a generalized form of Ohm's law, and a general transporting velocity which is field line conserving.
Flux and field line conservation in 3-D nonideal MHD flows: Remarks about criteria for 3-D reconnection without magnetic neutral points
The Ribosome Flow Model (RFM) describes the unidirectional movement of interacting particles along a one-dimensional chain of sites. As a site becomes fuller, the effective entry rate into this site decreases. The RFM has been used to model and analyze mRNA translation, a biological process in which ribosomes (the particles) move along the mRNA molecule (the chain) and decode the genetic information into proteins. Here we propose the RFM as an analytical framework for modeling and analyzing linear communication networks. In this context, the moving particles are data packets, the chain of sites is a one-dimensional set of ordered buffers, and the decreasing entry rate to a fuller buffer represents a kind of decentralized backpressure flow control. For an RFM with homogeneous link capacities, we provide closed-form expressions for important network metrics including the throughput and end-to-end delay. We use these results to analyze the hop length and the transmission probability (in a contention access mode) that minimize the end-to-end delay in a multihop linear network, and provide closed-form expressions for the optimal parameter values.
Analyzing Linear Communication Networks using the Ribosome Flow Model
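The homogeneous-rate setting of the abstract above can be illustrated with a minimal simulation of the standard RFM equations (a sketch under assumed conventions, not the paper's derivation; the closed form checked at the end follows the indexing used in this sketch):

```python
# Minimal sketch (assumed standard RFM conventions): occupancies x_1..x_n
# evolve as
#   dx_i/dt = lam*x_{i-1}*(1 - x_i) - lam*x_i*(1 - x_{i+1}),
# with boundary conventions x_0 := 1 (entry) and x_{n+1} := 0 (exit).
import math

def rfm_throughput(n, lam=1.0, dt=1e-2, steps=200_000):
    x = [0.5] * n
    for _ in range(steps):                       # forward-Euler to steady state
        xe = [1.0] + x + [0.0]                   # pad with source and sink
        flow = [lam * xe[i] * (1 - xe[i + 1]) for i in range(n + 1)]
        x = [x[i] + dt * (flow[i] - flow[i + 1]) for i in range(n)]
    return lam * x[-1]                           # steady-state exit rate

n, lam = 3, 1.0
print(rfm_throughput(n, lam))                    # ~1/3 for n = 3
# With this indexing, the homogeneous steady-state throughput matches
# lam / (4*cos(pi/(n+3))^2):
print(lam / (4 * math.cos(math.pi / (n + 3)) ** 2))
```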
In this paper, we discuss the atomistic structure of two conducting-bridge computer memory materials: Cu-doped alumina and silver-doped GeSe$_3$. We show that the Ag is rather uniformly distributed through the chalcogenide glass, whereas the Cu clusters strongly in the alumina material. The copper-oxide system conducts via extended-state conduction through Cu atoms once the concentration becomes high enough to form connected Cu channels. What is more, the addition of Cu leads to extended states throughout the large alumina (host) optical gap. By contrast, Ag in the selenide host is not strongly conducting even if one imposes a very narrow nanowire geometry. All of these results are discussed using novel techniques for computing the conduction-active parts of the network.
Electronic conduction mechanisms in GeSe$_3$:Ag and Al$_2$O$_3$:Cu
Feature reduction is an important concept used to reduce dimensionality, thereby decreasing the computational complexity and time of classification. Many approaches have been proposed for this problem, but almost all of them produce a fixed output for each input dataset, and in some cases that output is not well suited for classification. Here we propose an approach that preprocesses the input dataset to increase the accuracy of any feature reduction method. First, a new concept called dispelling classes gradually (DCG) is introduced to increase the separability of classes based on their labels. Next, this method is used to preprocess the input of feature reduction approaches so that the misclassification error rate of their outputs is lower than when the output is obtained without any preprocessing. In addition, our method copes well with noise, because it adapts the dataset to the feature reduction approach. In the results section, the two conditions (with and without preprocessing) are compared on several UCI datasets to support our idea.
Dispelling Classes Gradually to Improve Quality of Feature Reduction Approaches
Pressure effects are overviewed for the cuprate and carbon-based superconductors, with an emphasis on how their orbital characters are modified by pressure. For the high-$T_c$ cuprates, we start from the observation at ambient pressure that, on top of the main orbital ($d_{x^2-y^2}$), hybridization with the second ($d_{z^2}$) orbital around the Fermi energy significantly affects $T_c$ in spin-fluctuation-mediated pairing, where the hybridization is dominated by material parameters. We then show that applying pressure along the $a$ and $b$ axes enhances $T_c$, while a $c$-axis pressure suppresses $T_c$, where not only the $d_{z^2}$ hybridization but also the Cu($4s$) hybridization exerts an effect. For the multi-layer cuprates, inter-layer pair hopping is suggested to be important, which may contribute to the pressure effect. Pressure effects are also interesting in the recently discovered aromatic family of superconductors (picene, etc.). There, we again have multi-band systems, which in this case derive from different molecular orbitals. The Fermi surface is an intriguing composite of pockets/sheets of different dimensionalities arising from anisotropic transfers between the molecular orbitals, and pressure effects should be an important probe of these.
Pressure effects and orbital characters in cuprate and carbon-based superconductors
The Jacobian conjecture in dimension $n$ asserts that any polynomial endomorphism of $n$-dimensional affine space over a field of characteristic zero, with Jacobian equal to 1, is invertible. The Dixmier conjecture in rank $n$ asserts that any endomorphism of the $n$-th Weyl algebra (the algebra of polynomial differential operators in $n$ variables) is invertible. We prove that the Jacobian conjecture in dimension $2n$ implies the Dixmier conjecture in rank $n$. Together with a well-known implication in the opposite direction, this shows that the stable Jacobian and Dixmier conjectures are equivalent. The main tool of the proof is reduction to finite characteristic. After the paper was finished we learned that the main result had already been published by Y. Tsuchimoto in Osaka Journal of Mathematics, Volume 42, Number 2 (June 2005). His proof is different.
The Jacobian Conjecture is stably equivalent to the Dixmier Conjecture
A scalable system for real-time analysis of electron temperature and density based on signals from the Thomson scattering diagnostic, initially developed for and installed on the NSTX-U experiment, was recently adapted for the Large Helical Device (LHD) and operated for the first time during plasma discharges. During its initial operation run, it routinely recorded and processed signals for four spatial points at the laser repetition rate of 30 Hz, well within the system's rated capability of 60 Hz. We present examples of data collected from this initial run and describe subsequent adaptations to the analysis code to improve the fidelity of the temperature calculations.
Initial operation and data processing on a system for real-time evaluation of Thomson scattering signals on the Large Helical Device
Under infrared ultrashort-pulse laser excitation, we investigated temperature-dependent second-harmonic generation (SHG) from bulk diamond containing nitrogen-vacancy (NV) centers. The SHG intensity decreases over the temperature range of 20-300 $^{\circ}$C due to phase mismatching caused by refractive index modification. By fitting the temperature dependence of the SHG intensity with a model based on the bandgap change via the deformation potential interaction, we find that optical phonon scattering dominates over acoustic phonon scattering in NV diamond. This study presents an efficient and viable route toward diamond-based nonlinear optical temperature sensing.
Temperature-dependent second-harmonic generation from color centers in diamond
We observe plasma flows in cool loops using the Slit-Jaw Imager (SJI) onboard the Interface Region Imaging Spectrometer (IRIS). Huang et al. (2015) observed unusually broadened Si IV 1403 \AA\ line profiles at the footpoints of such loops, which were attributed to signatures of explosive events (EEs). We have chosen one such unidirectionally flowing cool loop system observed by IRIS in which one of the footpoints is associated with significantly broadened Si IV line profiles. The line profile broadening indirectly indicates the occurrence of numerous EEs below the transition region (TR), while it directly indicates a large velocity enhancement/perturbation that further causes the plasma flows in the observed loop system. The observed features are implemented in a model atmosphere in which a low-lying bipolar magnetic field system is perturbed in the chromosphere by a velocity pulse with a maximum amplitude of 200 km/s. The data-driven 2-D numerical simulation shows that the plasma motions evolve in a manner similar to that observed by IRIS, in the form of flowing plasma filling the skeleton of a cool loop system. We compare the spatio-temporal evolution of the cool loop system in the framework of our model with the observations, and conclude that its formation is mostly associated with the velocity response to the transient energy release above the footpoints in the chromosphere/TR. Our observations and modeling results suggest that the velocity responses, most likely associated with the EEs, could be one of the main drivers of the dynamics and energetics of flowing cool loop systems in the lower solar atmosphere.
Velocity Response of the Observed Explosive Events in the Lower Solar Atmosphere: I. Formation of the Flowing Cool Loop System
Global pooling is one of the most significant operations in many machine learning models and tasks, whose implementation, however, is often empirical in practice. In this study, we develop a novel and solid global pooling framework through the lens of optimal transport. We demonstrate that most existing global pooling methods are equivalent to solving some specializations of an unbalanced optimal transport (UOT) problem. Making the parameters of the UOT problem learnable, we unify various global pooling methods in the same framework, and accordingly, propose a generalized global pooling layer called UOT-Pooling (UOTP) for neural networks. Besides implementing the UOTP layer based on the classic Sinkhorn-scaling algorithm, we design a new model architecture based on the Bregman ADMM algorithm, which has better numerical stability and can reproduce existing pooling layers more effectively. We test our UOTP layers in several application scenarios, including multi-instance learning, graph classification, and image classification. Our UOTP layers can either imitate conventional global pooling layers or learn some new pooling mechanisms leading to better performance.
Revisiting Global Pooling through the Lens of Optimal Transport
We investigate the gauge-invariant observables constructed by smearing the graviton and inflaton fields by compactly supported tensors at linear order in general single-field inflation. These observables correspond to gauge-invariant quantities that can be measured locally. In particular, we show that these observables are equivalent to (smeared) local gauge-invariant observables such as the linearised Weyl tensor, which have better infrared properties than the graviton and inflaton fields. Special cases include the equivalence between the compactly supported gauge-invariant graviton observable and the smeared linearised Weyl tensor in Minkowski and de Sitter spaces. Our results indicate that the infrared divergences in the tensor and scalar perturbations in single-field inflation have the same status as in de Sitter space and are both a gauge artefact, in a certain technical sense, at tree level.
Compactly supported linearised observables in single-field inflation
The Maunakea Spectroscopic Explorer (MSE) will obtain millions of spectra each year in the optical to near-infrared, at low (R~3000) to high (R~40000) spectral resolution, by observing >3000 spectra per pointing via a highly multiplexed fiber-fed system. Key science programs for MSE include black hole reverberation mapping, stellar population analysis of faint galaxies at high redshift, and sub-km/s velocity accuracy for stellar astrophysics. The architecture of MSE, an assembly of subsystems designed to meet the science requirements, describes what MSE will look like. In this paper we focus on the operations concept of MSE, which describes how to operate a fiber-fed, highly multiplexed, dedicated observatory given its architecture and the science requirements. The operations concept details the phases of operations, from selecting proposals within the science community to distributing millions of spectra back to this community. For each phase, the operations concept describes the tools required to support the science community in their analyses and the operations staff in their work. It also highlights the specific needs related to the complexity of MSE, with millions of targets to observe, thousands of fibers to position, and different spectral resolutions to use. Finally, the operations concept shows how the science requirements on calibration and observing efficiency can be met.
Optimal scheduling and science delivery of spectra for millions of targets in thousands of fields: the operational concept of the Maunakea Spectroscopic Explorer (MSE)
We examine the effect of multiple levels on the decoherence and dephasing properties of a quantum system consisting of a non-ideal two-level subspace, identified as the qubit, and a finite set of higher energy levels above this qubit subspace. The whole system interacts with an environmental bath through a Caldeira-Leggett type coupling. The model interaction we use can generate non-negligible couplings between the qubit states and the higher levels up to $N\sim 10$. In contrast to a pure two-level system, in a multilevel system the quantum information may leak out of the qubit subspace through nonresonant as well as resonant excitations induced by the environment. The decoherence properties of the qubit subspace are examined numerically using the master equation formalism for the system's reduced density matrix. We numerically examine the relaxation and dephasing (RD) times as the environmental frequency spectrum, the environmental temperature, and the multilevel system parameters are varied. We observe the influence of all energy scales in the noise spectrum on the short-time dynamics, implying the dominance of nonresonant transitions at short times. The calculated relaxation and dephasing times depend strongly on $N$ for $4 < N < 10$ and saturate for $N > 10$. We also examine doubly degenerate systems with $N \ge 4$ and observe a strong suppression (by almost two orders of magnitude) of the low-temperature RD rates relative to singly degenerate ones. These results are also compared qualitatively with the relaxation rates found from the Fermi golden rule.
Decoherence and dephasing in multilevel systems interacting with thermal environment
The opening rate of voltage-gated potassium ion channels exhibits a characteristic, knee-like turnover where the common exponential voltage-dependence changes suddenly into a linear one. An explanation of this puzzling crossover is put forward in terms of a stochastic first passage time analysis. The theory predicts that the exponential voltage-dependence correlates with the exponential distribution of closed residence times. This feature occurs at large negative voltages when the channel is predominantly closed. In contrast, the linear part of voltage-dependence emerges together with a non-exponential distribution of closed dwelling times with increasing voltage, yielding a large opening rate. Depending on the parameter set, the closed-time distribution displays a power law behavior which extends over several decades.
Ion channel gating: a first passage time analysis of the Kramers type
We study, to all orders in perturbative QCD, the universal behavior of the saturation momentum $Q_s(L)$ controlling the transverse momentum distribution of a fast parton propagating through a dense QCD medium with large size $L$. Due to the double logarithmic nature of the quantum evolution of the saturation momentum, its large $L$ asymptotics is obtained by slightly departing from the double logarithmic limit of either next-to-leading log (NLL) BFKL or leading order DGLAP evolution equations. At fixed coupling, or in conformal $\mathcal{N}=4$ SYM theory, we derive the large $L$ expansion of $Q_s(L)$ up to order $\alpha_s^{3/2}$. In QCD with massless quarks, where conformal symmetry is broken by the running of the strong coupling constant, the one-loop QCD $\beta$-function fully accounts for the universal terms in the $Q_s(L)$ expansion. Therefore, the universal coefficients of this series are known exactly to all orders in $\alpha_s$.
Transverse momentum broadening from NLL BFKL to all orders in pQCD
In the framework of (vector valued) quantized holomorphic functions defined on non-commutative spaces, ``quantized hermitian symmetric spaces'', we analyze what the algebras of quantized differential operators with variable coefficients should be. It is an immediate point that even $0$th order operators, given as multiplications by polynomials, have to be specified as, e.g., left or right multiplication operators, since the polynomial algebras are replaced by quadratic, non-commutative algebras. In the settings we are interested in, there are bilinear pairings which allow us to define differential operators as duals of multiplication operators. Indeed, there are different choices of pairings which lead to quite different results. We consider three different pairings. The pairings are between quantized generalized Verma modules and quantized holomorphically induced modules. It is a natural demand that the corresponding representations can be expressed by (matrix valued) differential operators. We show that a quantum Weyl algebra ${\mathcal W}eyl_q(n,n)$ introduced by T. Hayashi (Comm. Math. Phys. 1990) plays a fundamental role. In fact, for one pairing, the algebra of differential operators, though inherently depending on a choice of basis, is precisely matrices over ${\mathcal W}eyl_q(n,n)$. We determine explicitly the form of the (quantum) holomorphically induced representations and determine, for the different pairings, whether they can be expressed by differential operators.
Algebras of Variable Coefficient Quantized Differential Operators
We have performed evolutionary calculations of very-low-mass stars from 0.08 to 0.8 $M_\odot$ for different metallicities from [M/H] = -2.0 to -1.0, and we have tabulated the mechanical, thermal and photometric characteristics of these models. The calculations include the most recent interior physics and improved non-grey atmosphere models. The models reproduce the entire main sequences of the globular clusters observed with the Hubble Space Telescope over the aforementioned range of metallicity. Comparisons are made in the WFPC2 Flight system, including the F555, F606 and F814 filters, and in the standard Johnson-Cousins system. We examine the effects of different physical parameters, mixing length, $\alpha$-enriched elements, helium fraction, as well as the accuracy of the photometric transformations of the HST data into standard systems. We derive mass-effective temperature and mass-magnitude relationships, and we compare the results with those obtained with different grey-like approximations. The latter are shown to yield inaccurate relations, in particular near the hydrogen-burning limit. We derive new hydrogen-burning minimum masses, and the corresponding absolute magnitudes, for the different metallicities. We predict color-magnitude diagrams in the infrared NICMOS filters, to be used for the next generation of HST observations, providing mass-magnitude relationships in these colors down to the brown-dwarf limit. We show that the expected signature of the stellar-to-substellar transition in color-magnitude diagrams is a severe blueshift in the infrared colors, due to the increasing collision-induced absorption of molecular hydrogen with increasing density and decreasing temperature.
Evolutionary models for metal-poor low-mass stars. Lower main sequence of globular clusters and halo field stars
Thermal machines exploit interactions with multiple heat baths to perform useful tasks, such as work production and refrigeration. In the quantum regime, tasks with no classical counterpart become possible. Here, we consider the minimal setting for quantum thermal machines, namely two-qubit autonomous thermal machines that use only incoherent interactions with their environment, and investigate the fundamental resources needed to generate entanglement. Our investigation is systematic, covering different types of interactions, bosonic and fermionic environments, and different resources that can be supplied to the machine. We adopt an operational perspective in which we assess the nonclassicality of the generated entanglement through its ability to perform useful tasks such as Einstein-Podolsky-Rosen steering, quantum teleportation and Bell nonlocality. We provide both constructive examples of nonclassical effects and general no-go results that demarcate the fundamental limits in autonomous entanglement generation. Our results open up a path toward understanding nonclassical phenomena in thermal processes.
Operational nonclassicality in minimal autonomous thermal machines
In this paper we prove a conjecture by De Concini, Kac and Procesi \cite{CP} (Corollary \ref{conj}): the dimension of any $M\in U_q\textrm{-mod}^\chi$ is divisible by $l^{\operatorname{codim}_\mathcal{B}\mathcal{B}_\chi}$.
Proof of the De Concini-Kac-Procesi conjecture
In the framework of the large extra dimensions (LED) model, we investigate the effects induced by Kaluza-Klein (KK) gravitons, up to QCD next-to-leading order (NLO), on $W$-pair production followed by subsequent $W$ decay at the CERN LHC. We depict the regions in the $\mathcal{L}-M_{S}$ parameter space where the LED effect can and cannot be observed from analyses of the $pp \to W^+W^- + X$ and $pp \to W^+W^- \to W^{\pm}l^{\mp}\stackrel{(-)}{\nu} + X$ processes. We find that the ability to probe the LED effects can be improved by imposing cuts on the invariant mass of the $W$ pair and the transverse momentum of the final lepton. Our results demonstrate that the NLO QCD corrections to the observables are significant, but show no improvement in the renormalization/factorization scale uncertainty of the NLO QCD corrected cross section, because the LO result underestimates the scale dependence.
Revisiting the large extra dimension effects on $W$-pair production at the LHC in NLO QCD
We study the existence of a periodic solution for a differential equation with distributed delay. It is shown that, for a class of distributed delay differential equations, a symmetric period-two solution, where the period is twice the maximum delay, is given as a periodic solution of a Hamiltonian system of ordinary differential equations. The proof idea is based on Kaplan & Yorke (1974, J. Math. Anal. Appl.) for a discrete delay differential equation with an odd nonlinear function. To illustrate the results, we present distributed delay differential equations that have periodic solutions expressed in terms of the Jacobi elliptic functions.
Period two solution for a class of distributed delay differential equations
This paper introduces the model, numerical methods, algorithms and parallel implementation of a thermal reservoir simulator designed for numerical simulations of thermal reservoirs with multiple components in three-dimensional domains using distributed-memory parallel computers. Its full mathematical model is introduced, with correlations for important properties and well modeling. Various well constraints, such as fixed bottom-hole pressure, fixed oil, water, gas and liquid rates, constant heat transfer model, convective heat transfer model, heater model (temperature control, rate control, dual rate/temperature control), and subcool (steam trap), are introduced in detail, including their mathematical models and methods. Efficient numerical methods and parallel computing technologies are presented. The simulator is designed for giant models with billions or even trillions of grid blocks using hundreds of thousands of CPUs. Numerical experiments show that our results match those of commercial simulators, which confirms the correctness of our methods and implementations. A SAGD simulation with 15106 well pairs is also presented to study the effectiveness of our numerical methods. Scalability tests demonstrate that our simulator can handle giant models with over 200 billion grid blocks using 100,800 CPU cores, and that the simulator has good scalability.
A Scalable Thermal Reservoir Simulator for Giant Models on Parallel Computers
Contextual information plays a core role in semantic segmentation. For video semantic segmentation, the contexts include static contexts and motional contexts, corresponding to static and moving content in a video clip, respectively. Static contexts are well exploited in image semantic segmentation by learning multi-scale and global/long-range features. Motional contexts have been studied in previous work on video semantic segmentation. However, there has been no research on how to simultaneously learn static and motional contexts, which are highly correlated and complementary to each other. To address this problem, we propose a Coarse-to-Fine Feature Mining (CFFM) technique to learn a unified representation of static and motional contexts. This technique consists of two parts: coarse-to-fine feature assembling and cross-frame feature mining. The former operation prepares data for further processing, enabling the subsequent joint learning of static and motional contexts. The latter operation mines useful information/contexts from the sequential frames to enhance the video contexts of the features of the target frame. The enhanced features can be directly applied to the final prediction. Experimental results on popular benchmarks demonstrate that the proposed CFFM performs favorably against state-of-the-art methods for video semantic segmentation. Our implementation is available at https://github.com/GuoleiSun/VSS-CFFM
Coarse-to-Fine Feature Mining for Video Semantic Segmentation
Several possible odd-parity states are enumerated group-theoretically and examined in light of recent experiments on Sr$_2$RuO$_4$. These include some of the $f$-wave pairing states, $\mathbf{d}(\mathbf{k})\propto\hat{\mathbf{z}} k_xk_y(k_x + {\rm i}k_y)$ and $\hat{\mathbf{z}} (k_x^2-k_y^2)(k_x + {\rm i}k_y)$, as well as $\hat{\mathbf{z}} (k_x + {\rm i}k_y)\cos ck_z$ ($c$ is the $c$-axis lattice constant), as the most plausible candidates. These are time-reversal symmetry broken states and have line nodes running either vertically (the former two) or horizontally (the latter), consistent with experiments. Characterizations of these states and other possibilities are given.
Spin triplet superconductivity with line nodes in Sr$_2$RuO$_4$
Symmetry and reciprocity constraints on polarization state of the field diffracted by gratings of quasi-planar particles are considered. It is shown that the optical activity effects observed recently in arrays of quasi-planar plasmonic particles on a dielectric substrate are due to the reflection of the field at the air-dielectric slab interface and are proportional to this reflection coefficient.
Symmetry and reciprocity constraints on diffraction by gratings of quasi-planar particles
Orthogonal frequency division multiplexing with index modulation (OFDM-IM) is a novel multicarrier scheme which uses the indices of the active subcarriers to transmit data. OFDM-IM also inherits the high peak-to-average power ratio (PAPR) problem, which induces in-band distortion and out-of-band radiation when the OFDM-IM signal passes through a high power amplifier (HPA). In this letter, dither signals are added to the idle subcarriers to reduce the PAPR. Unlike previous work using single-level dither signals, multilevel dither signals are used by exploiting the fact that the amplitudes of the active subcarriers are distributed differently for different subblocks. As a result, the proposed scheme gives the dither signals more freedom (a larger dithering radius) on average. Simulation results show that the proposed scheme achieves better PAPR reduction performance than the previous work.
PAPR Reduction in OFDM-IM Using Multilevel Dither Signals
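For readers unfamiliar with the metric, a minimal sketch of how PAPR is computed for one OFDM-IM symbol follows (the index pattern and QPSK loading are hypothetical choices; the letter's multilevel dither optimization itself is not reproduced here):

```python
# Minimal sketch of the PAPR metric for one OFDM-IM symbol. Half the
# subcarriers are activated at random (index modulation) and loaded with
# unit-power QPSK; idle subcarriers, which could carry dither signals in the
# letter's scheme, are left at zero here.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                               # subcarriers
active = rng.choice(N, size=N // 2, replace=False)   # IM: half the carriers on

X = np.zeros(N, dtype=complex)
X[active] = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=active.size) / np.sqrt(2)

x = np.fft.ifft(X) * np.sqrt(N)                      # time-domain symbol
papr_db = 10 * np.log10(np.max(np.abs(x)**2) / np.mean(np.abs(x)**2))
print(f"PAPR = {papr_db:.2f} dB")
```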
We report the experimental realization of a topological Creutz ladder for ultracold fermionic atoms in a resonantly driven 1D optical lattice. The two-leg ladder consists of the two lowest orbital states of the optical lattice and the cross inter-leg links are generated via two-photon resonant coupling between the orbitals by periodic lattice shaking. The characteristic pseudo-spin winding in the topologically non-trivial bands of the ladder system is demonstrated using momentum-resolved Ramsey-type interferometric measurements. We discuss a two-tone driving method to extend the inter-leg link control and propose a topological charge pumping scheme for the Creutz ladder system.
Topological Creutz Ladder in a Resonantly Shaken 1D Optical Lattice
The Density Conjecture of Katz and Sarnak associates a classical compact group to each reasonable family of $L$-functions. Under the assumption of the Generalized Riemann Hypothesis, Rubinstein computed the $n$-level density of low-lying zeros for the family of quadratic Dirichlet $L$-functions in the case that the Fourier transform $\hat{f}(u)$ of any test function $f$ is supported in the region $\sum^n_{j=1}u_j < 1$ and showed that the result agrees with the Density Conjecture. In this paper, we improve Rubinstein's result on computing the $n$-level density for the Fourier transform $\hat{f}(u)$ being supported in the region $\sum^n_{j=1}u_j < 2$.
$n$-level density of the low-lying zeros of quadratic Dirichlet $L$-functions
Supersymmetry is under pressure from LHC searches requiring colored superpartners to be heavy. We demonstrate R-parity violating spectra for which the dominant signatures are not currently well searched for at the LHC. In such cases, the bounds can be as low as 800 GeV on both squarks and gluinos. We demonstrate that there are nontrivial constraints on squark and gluino masses with baryonic RPV (UDD operators) and show that in fact leptonic RPV can allow comparable or even lighter superpartners. The constraints from many searches are weakened if the LSP is significantly lighter than the colored superpartners, such that it is produced with high boost. The LSP decay products will then be collimated, leading to the miscounting of leptons or jets and causing such models to be missed even with large production cross-sections. Other leptonic RPV scenarios that evade current searches include the highly motivated case of a higgsino LSP decaying to a tau and two quarks, and the case of a long-lived LSP with a displaced decay to electrons and jets. The least constrained models can have SUSY production cross-sections of ~pb or larger, implying tens of thousands of SUSY events in the 8 TeV data. We suggest novel searches for these signatures of RPV, which would also improve the search for general new physics at the LHC.
Supersymmetric Crevices: Missing Signatures of RPV at the LHC
We introduce a new method for proving the nonexistence of positive supersolutions of elliptic inequalities in unbounded domains of $\mathbb{R}^n$. The simplicity and robustness of our maximum principle-based argument provides for its applicability to many elliptic inequalities and systems, including quasilinear operators such as the $p$-Laplacian, and nondivergence form fully nonlinear operators such as Bellman-Isaacs operators. Our method gives new and optimal results in terms of the nonlinear functions appearing in the inequalities, and applies to inequalities holding in the whole space as well as exterior domains and cone-like domains.
Nonexistence of positive supersolutions of elliptic equations via the maximum principle
The mechanism of irreversible dynamics in systems with mixing is analyzed. The approach to analyzing the dynamics of nonequilibrium systems is based on splitting the system into equilibrium subsystems and studying the dynamics of one of them under the condition that it interacts with the other subsystems. The problem of "coarse-graining" the phase space is thereby eliminated in this method. A formula expressing the entropy through the work of the forces between subsystems is presented. An essential link between thermodynamics and classical mechanics is found.
Mixing and irreversibility in classical mechanics
We consider how to learn multi-step predictions efficiently. Conventional algorithms wait until observing actual outcomes before performing the computations to update their predictions. If predictions are made at a high rate or span over a large amount of time, substantial computation can be required to store all relevant observations and to update all predictions when the outcome is finally observed. We show that the exact same predictions can be learned in a much more computationally congenial way, with uniform per-step computation that does not depend on the span of the predictions. We apply this idea to various settings of increasing generality, repeatedly adding desired properties and each time deriving an equivalent span-independent algorithm for the conventional algorithm that satisfies these desiderata. Interestingly, along the way several known algorithmic constructs emerge spontaneously from our derivations, including dutch eligibility traces, temporal difference errors, and averaging. This allows us to link these constructs one-to-one to the corresponding desiderata, unambiguously connecting the `how' to the `why'. Each step, we make sure that the derived algorithm subsumes the previous algorithms, thereby retaining their properties. Ultimately we arrive at a single general temporal-difference algorithm that is applicable to the full setting of reinforcement learning.
Learning to Predict Independent of Span
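The span-independence at the heart of the abstract above can be illustrated with the simplest member of this family, linear TD(lambda) with accumulating traces (a textbook variant shown for illustration; the paper's derived algorithm uses dutch traces and further refinements): the per-step cost is O(d) in the number of features d, no matter how far into the future the predictions span.

```python
# Generic linear TD(lambda) with accumulating traces: per-step computation is
# O(len(w)) and independent of the prediction span. A textbook variant for
# illustration, not the specific span-independent algorithm derived in the
# paper (which is based on dutch traces).
import numpy as np

def td_lambda_step(w, e, phi, phi_next, r, alpha=0.1, gamma=0.9, lam=0.8):
    delta = r + gamma * w @ phi_next - w @ phi   # temporal-difference error
    e = gamma * lam * e + phi                    # accumulating eligibility trace
    w = w + alpha * delta * e                    # uniform per-step update
    return w, e

d = 8
w, e = np.zeros(d), np.zeros(d)
rng = np.random.default_rng(1)
phi = rng.random(d)
for _ in range(100):                             # a stream of transitions
    phi_next, r = rng.random(d), rng.random()
    w, e = td_lambda_step(w, e, phi, phi_next, r)
    phi = phi_next
print(w)
```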
We propose a novel method for enantio-selective electron paramagnetic resonance spectroscopy based on magneto-chiral anisotropy. We calculate the strength of this effect and propose a dedicated interferometer setup for its observation.
Traveling wave enantio-selective electron paramagnetic resonance
An interpretation of halo formation in accelerators, based on a quantum-like theory via a diffraction model, is given in terms of the transversal beam motion. Physical implications of the longitudinal dynamics are also examined.
Quantum-like approach to the transversal and longitudinal beam dynamics. The halo problem
Research in unpaired video translation has mainly focused on short-term temporal consistency by conditioning on neighboring frames. However for transfer from simulated to photorealistic sequences, available information on the underlying geometry offers potential for achieving global consistency across views. We propose a novel approach which combines unpaired image translation with neural rendering to transfer simulated to photorealistic surgical abdominal scenes. By introducing global learnable textures and a lighting-invariant view-consistency loss, our method produces consistent translations of arbitrary views and thus enables long-term consistent video synthesis. We design and test our model to generate video sequences from minimally-invasive surgical abdominal scenes. Because labeled data is often limited in this domain, photorealistic data where ground truth information from the simulated domain is preserved is especially relevant. By extending existing image-based methods to view-consistent videos, we aim to impact the applicability of simulated training and evaluation environments for surgical applications. Code and data: http://opencas.dkfz.de/video-sim2real.
Long-Term Temporally Consistent Unpaired Video Translation from Simulated Surgical 3D Data
Twisted $U$- and twisted $U/K$-hierarchies are soliton hierarchies introduced by Terng to find higher flows of the generalized sine-Gordon equation. Twisted $\frac {O(J,J)}{O(J)\times O(J)}$-hierarchies are among the most important classes of twisted hierarchies. In this paper, interesting first and higher flows of twisted $\frac {O(J,J)}{O(J)\times O(J)}$-hierarchies are explicitly derived, the associated submanifold geometry is investigated and a unified treatment of the inverse scattering theory is provided.
Twisted hierarchies associated with the generalized sine-Gordon equation
Photonic crystals (PCs) are periodic dielectric structures that serve as an excellent platform to manipulate light. A conventional way to guide/trap light via PCs is to introduce a line or point defect by removing or modifying several unit cells. Here we show that light can be effectively guided and trapped at glided photonic crystal interfaces (GPCIs). The projected band gap of GPCIs, which depends on the glide parameter, is characterized by a Dirac mass. Interestingly, a GPCI with zero Dirac mass is a glide-symmetric waveguide featuring excellent transmission performance even in the presence of sharp corners and disorder. Moreover, placing two GPCIs with opposite Dirac masses together results in a photonic bound state, in line with the Jackiw-Rebbi theory. Our work provides an alternative route towards the design of ultracompact photonic devices such as GPCI-induced coupled cavity-waveguide systems and waveguide splitters.
Molding light via glided photonic crystal interfaces
[Abridged] We investigate the nature of the relations between black hole (BH) mass ($M_{\rm BH}$) and the central velocity dispersion ($\sigma$) and, for core-S\'ersic galaxies, the size of the depleted core ($R_{\rm b}$). Our sample of 144 galaxies with dynamically determined $M_{\rm BH}$ encompasses 24 core-S\'ersic galaxies, thought to be products of gas-poor mergers, reliably identified based on high-resolution HST imaging. For core-S\'ersic galaxies -- i.e., combining normal-core ($R_{\rm b} < 0.5 $ kpc) and large-core galaxies ($R_{\rm b} \gtrsim 0.5$ kpc) -- we find that $M_{\rm BH}$ correlates remarkably well with $R_{\rm b}$ such that $M_{\rm BH} \propto R_{\rm b}^{1.20 \pm 0.14}$ (rms scatter in log $M_{\rm BH}$ of $\Delta_{\rm rms} \sim 0.29$ dex), confirming previous work on the same galaxies, except for three new ones. Separating the sample into S\'ersic, normal-core and large-core galaxies, we find that S\'ersic and normal-core galaxies jointly define a single log-linear $M_{\rm BH}-\sigma$ relation $M_{\rm BH} \propto \sigma^{ 4.88 \pm 0.29}$ with $\Delta_{\rm rms} \sim 0.47$ dex; however, at the high-mass end, large-core galaxies (four with measured $M_{\rm BH}$) are offset upward from this relation by $(2.5-4) \times \sigma_{\rm s}$, explaining the previously reported steepening of the $M_{\rm BH}-\sigma$ relation for massive galaxies. Large-core spheroids have magnitudes $M_{V} \le -23.50$ mag, half-light radii $R_{\rm e} > 10$ kpc and are extremely massive ($M_{*} \ge 10^{12}M_{\odot}$). Furthermore, these spheroids tend to host ultramassive BHs ($M_{\rm BH} \ge 10^{10}M_{\odot}$) tightly connected with their $R_{\rm b}$ rather than $\sigma$. The less popular $M_{\rm BH}-R_{\rm b}$ relation exhibits $\sim$ 62% less scatter in log $M_{\rm BH}$ than the $M_{\rm BH}- \sigma$ relations.
Ultramassive black holes in the most massive galaxies: $M_{\rm BH}-\sigma$ versus $M_{\rm BH}-R_{\rm b}$
This paper presents a structured power- and energy-flow-based qualitative modelling approach that is applicable to a variety of system types, including electrical and fluid flow systems. The modelling is split into two parts. Power flow is a global phenomenon and is therefore naturally represented and analysed by a network comprised of the relevant structural elements from the components of a system. The power flow analysis is a platform for higher-level behaviour prediction of energy-related aspects, using local component behaviour models to capture a state-based representation with a global time. The primary application is Failure Modes and Effects Analysis (FMEA), and a form of exaggeration reasoning is used, combined with an order-of-magnitude representation, to derive the worst-case failure modes. The novel aspects of the work are, first, an order-of-magnitude (OM) qualitative network analyser that can represent any power domain and topology, including multiple power sources, a feature that was not required for earlier specialised electrical versions of the approach. Second, the representation of generalised energy-related behaviour as state-based local models is presented as a modelling strategy that can be more vivid and intuitive for a range of topologically complex applications than qualitative equation-based representations. The two-level modelling strategy allows the broad system behaviour coverage of qualitative simulation to be exploited for the FMEA task, while limiting the difficulties of qualitative ambiguity that can arise from abstracted numerical models. We have used the method to support an automated FMEA system, with examples of an aircraft fuel system and a domestic heating system discussed in this paper.
Qualitative Order of Magnitude Energy-Flow-Based Failure Modes and Effects Analysis
This survey is organized around three main topics: models, econometrics, and empirical applications. Section 2 presents the theoretical framework, introduces the concept of Markov Perfect Nash Equilibrium, discusses existence and multiplicity, and describes the representation of this equilibrium in terms of conditional choice probabilities. We also discuss extensions of the basic framework, including models in continuous time, the concepts of oblivious equilibrium and experience-based equilibrium, and dynamic games where firms have non-equilibrium beliefs. In section 3, we first provide an overview of the types of data used in this literature before turning to a discussion of identification issues and results, and estimation methods. We review different methods to deal with multiple equilibria and large state spaces. We also describe recent developments for estimating games in continuous time and incorporating serially correlated unobservables, and discuss the use of machine learning methods for solving and estimating dynamic games. Section 4 discusses empirical applications of dynamic games in IO. We start by describing the first empirical applications in this literature during the early 2000s. Then, we review recent applications dealing with innovation, antitrust and mergers, dynamic pricing, regulation, product repositioning, advertising, uncertainty and investment, airline network competition, dynamic matching, and natural resources. We conclude with our view of the progress made in this literature and the remaining challenges.
Dynamic Games in Empirical Industrial Organization
We use femtosecond x-ray diffraction to study the structural response of charge and orbitally ordered Pr$_{1-x}$Ca$_x$MnO$_3$ thin films across a phase transition induced by 800 nm laser pulses. By investigating the dynamics of both superlattice reflections and regular Bragg peaks, we disentangle the different structural contributions and analyze their relevant time-scales. The dynamics of the structural and charge order response are qualitatively different when excited above and below a critical fluence $f_c$. For excitations below $f_c$, the charge order and the superlattice are only partially suppressed, and the ground state recovers within a few tens of nanoseconds via diffusive cooling. When exciting above the critical fluence, the superlattice vanishes within approximately half a picosecond, followed by a change of the unit cell parameters on a 10 picosecond time-scale. At this point all memory of the symmetry breaking is lost and the recovery time increases by many orders of magnitude due to the first-order character of the structural phase transition.
The photoinduced transition in magnetoresistive manganites: a comprehensive view
In this paper, we collect and study Twitter communications to understand the societal impact of COVID-19 in the United States during the early days of the pandemic. With infections soaring rapidly, users took to Twitter asking people to self-isolate and quarantine themselves. Users also demanded closure of schools, bars, and restaurants as well as lockdown of cities and states. We methodically collect tweets by identifying and tracking trending COVID-related hashtags. We first manually group the hashtags into six main categories, namely, 1) General COVID, 2) Quarantine, 3) Panic Buying, 4) School Closures, 5) Lockdowns, and 6) Frustration and Hope, and study the temporal evolution of tweets in these hashtags. We conduct a linguistic analysis of words common to all hashtag groups and specific to each hashtag group, and identify the chief concerns of people as the pandemic gripped the nation (e.g., exploring bidets as an alternative to toilet paper). We conduct sentiment analysis, and our investigation reveals that people reacted positively to school closures and negatively to the lack of availability of essential goods due to panic buying. We adopt a state-of-the-art semantic role labeling approach to identify the action words and then leverage an LSTM-based dependency parsing model to analyze the context of action words (e.g., the verb "deal" is accompanied by nouns such as "anxiety", "stress", and "crisis"). Finally, we develop a scalable seeded topic modeling approach to automatically categorize and isolate tweets into hashtag groups, and experimentally validate that our topic model provides a grouping similar to our manual grouping. Our study presents a systematic way to construct an aggregated picture of people's response to the pandemic and lays the groundwork for future fine-grained linguistic and behavioral analysis.
Analyzing Societal Impact of COVID-19: A Study During the Early Days of the Pandemic
We prove that every connected affine scheme of positive characteristic is a K(pi, 1) space for the etale topology. The main ingredient is the special case of the affine space over a field k. This is dealt with by induction on n, using a key "Bertini-type" statement regarding the wild ramification of l-adic local systems on affine spaces, which might be of independent interest. Its proof uses in an essential way recent advances in higher ramification theory due to T. Saito. We also give rigid analytic and mixed characteristic versions of the main result.
Wild ramification and K(pi, 1) spaces
In this article, the operator approach to modelling numeral systems introduced in [25] is generalized to a certain case. An example of such a numeral system is introduced and considered.
One example of singular representations of real numbers from the unit interval
We present a general class of geometric network growth mechanisms by homogeneous attachment in which the links created at a given time $t$ are distributed homogeneously between a new node and the existing nodes selected uniformly. This is achieved by creating links between nodes uniformly distributed in a homogeneous metric space according to a Fermi-Dirac connection probability with inverse temperature $\beta$ and general time-dependent chemical potential $\mu(t)$. The chemical potential limits the spatial extent of newly created links. Using a hidden variable framework, we obtain an analytical expression for the degree sequence and show that $\mu(t)$ can be fixed to yield any given degree distribution, including a scale-free degree distribution. Additionally, we find that depending on the order in which nodes appear in the network---its $\textit{history}$---the degree-degree correlation can be tuned to be assortative or disassortative. The effect of the geometry on the structure is investigated through the average clustering coefficient $\langle c \rangle$. In the thermodynamic limit, we identify a phase transition between a random regime where $\langle c \rangle \rightarrow 0$ when $\beta < \beta_\mathrm{c}$ and a geometric regime where $\langle c \rangle > 0$ when $\beta > \beta_\mathrm{c}$.
Geometric evolution of complex networks
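The growth rule above is easy to prototype. Below is a minimal sketch, assuming the simplest homogeneous metric space (a circle) and an illustrative chemical potential $\mu(t)$; the function names and parameter values are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def grow_network(n_nodes, beta, mu_of_t):
    """Toy homogeneous-attachment growth on a circle: node t connects to
    each existing node with a Fermi-Dirac probability in their distance."""
    angles = rng.uniform(0, 2 * np.pi, n_nodes)
    edges = []
    for t in range(1, n_nodes):
        # angular (shortest-arc) distance from node t to all earlier nodes
        d = np.pi - np.abs(np.pi - np.abs(angles[t] - angles[:t]))
        # Fermi-Dirac connection probability with chemical potential mu(t)
        p = 1.0 / (1.0 + np.exp(beta * (d - mu_of_t(t))))
        for j in np.flatnonzero(rng.uniform(size=t) < p):
            edges.append((t, j))
    return edges

# illustrative choice: a slowly shrinking chemical potential
edges = grow_network(500, beta=5.0, mu_of_t=lambda t: 1.0 / np.sqrt(t))
print(len(edges))
```

Sweeping beta through the critical value should move the generated networks between the random (vanishing clustering) and geometric (finite clustering) regimes described in the abstract.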
Bipartite temporal Bell inequalities are similar to the usual Bell inequalities except that, instead of changing the direction of the polariser at each measurement, one changes the time at which the measurement is being performed. By doing so, one is able to test for realism and locality while relying on position measurements only. This is particularly useful in experimental setups where the momentum direction cannot be probed (such as in cosmology for instance). We study these bipartite temporal Bell inequalities for continuous systems placed in two-mode squeezed states, and find some regions in parameter space where they are indeed violated. We highlight the role played by the rotation angle, which is one of the three parameters characterising a two-mode squeezed state (the other two being the squeezing amplitude and the squeezing angle). In single-time measurements, it only determines the overall phase of the wavefunction and can therefore be discarded, but in multiple-time measurements, its time dynamics becomes relevant and crucially determines when bipartite temporal Bell inequalities can be violated. Our study opens up the possibility of new experimental designs for the observation of Bell inequality violations.
Bipartite temporal Bell inequalities for two-mode squeezed states
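For concreteness, the inequalities in question have the CHSH form, with measurement times playing the role usually played by polariser settings; the dichotomic variables are assumed to be built from position measurements on the two modes (the precise construction is specified in the paper):

```latex
% CHSH-type temporal Bell inequality: the settings are measurement times.
% S_1, S_2 are dichotomic (+/-1) quantities obtained from position
% measurements on modes 1 and 2, at times t_a and t_b respectively.
\left| E(t_1, t_2) + E(t_1, t_2') + E(t_1', t_2) - E(t_1', t_2') \right| \le 2,
\qquad
E(t_a, t_b) = \langle S_1(t_a)\, S_2(t_b) \rangle .
```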
When non-adsorbing polymers are added to an isotropic suspension of rod-like colloids, the colloids effectively attract each other via depletion forces. We performed Monte Carlo simulations to study the phase diagram of such a rod-polymer mixture. The colloidal rods were modelled as hard spherocylinders; the polymers were described as spheres of the same diameter as the rods. The polymers may overlap with no energy cost, while overlap of polymers and rods is forbidden. Large amounts of depletant cause phase separation of the mixture. We estimated the phase boundaries of isotropic-isotropic coexistence both in the bulk and in confinement. To determine the phase boundaries we applied the grand canonical ensemble using successive umbrella sampling [J. Chem. Phys. 120, 10925 (2004)], and we performed a finite-size scaling analysis to estimate the location of the critical point. The results are compared with predictions of the free volume theory developed by Lekkerkerker and Stroobants [Nuovo Cimento D 16, 949 (1994)]. We also give estimates for the interfacial tension between the coexisting isotropic phases and analyse its power-law behaviour on approach to the critical point.
Depletion induced isotropic-isotropic phase separation in suspensions of rod-like colloids
We report a systematic scaling study of the finite-temperature chiral phase transition of two-flavor QCD with the Kogut-Susskind quark action, based on simulations on $L^3\times4$ ($L$=8, 12 and 16) lattices at quark masses $m_q=0.075$, 0.0375, 0.02 and 0.01. Our finite-size data show that a phase transition is absent for $m_q\geq 0.02$, and quite likely also at $m_q=0.01$. The scaling behavior of susceptibilities as a function of $m_q$ is consistent with a second-order transition at $m_q=0$. However, the exponents deviate from the O(2) or O(4) values theoretically expected.
Scaling Analysis of Chiral Phase Transition for Two Flavors of Kogut-Susskind Quarks
Using a time-resolved detection scheme in scanning transmission X-ray microscopy (STXM) we measured element resolved ferromagnetic resonance (FMR) at microwave frequencies up to 10\,GHz and a spatial resolution down to 20\,nm at two different synchrotrons. We present different methods to separate the contribution of the background from the dynamic magnetic contrast based on the X-ray magnetic circular dichroism (XMCD) effect. The relative phase between the GHz microwave excitation and the X-ray pulses generated by the synchrotron, as well as the opening angle of the precession at FMR can be quantified. A detailed analysis for homogeneous and inhomogeneous magnetic excitations demonstrates that the dynamic contrast indeed behaves as the usual XMCD effect. The dynamic magnetic contrast in time-resolved STXM has the potential to be a powerful tool to study the linear and non-linear magnetic excitations in magnetic micro- and nano-structures with unique spatio-temporal resolution in combination with element selectivity.
Extracting the Dynamic Magnetic Contrast in Time-Resolved X-ray Transmission Microscopy
We establish lower bounds for the first nonzero eigenvalue of the Laplacian on a closed K\"ahler manifold in terms of dimension, diameter, and lower bounds of holomorphic sectional curvature and orthogonal Ricci curvature. On compact K\"ahler manifolds with boundary, we prove lower bounds for the first nonzero Neumann or Dirichlet eigenvalue in terms of geometric data. Our results are K\"ahler analogues of well-known results for Riemannian manifolds.
Lower bounds for the first eigenvalue of the Laplacian on K\"ahler manifolds
In a recent paper the author derived a formula for calculating common denominators for the homogeneous components of the Baker-Campbell-Hausdorff (BCH) series. In the present work it is proved that this formula actually yields the smallest such common denominators. In an appendix a new efficient algorithm for computing coefficients of the BCH series is presented, which is based on these common denominators, and requires only integer arithmetic rather than less efficient rational arithmetic.
Smallest common denominators for the homogeneous components of the Baker-Campbell-Hausdorff series
People spend a substantial portion of their lives engaged in conversation, and yet our scientific understanding of conversation is still in its infancy. In this report we advance an interdisciplinary science of conversation, with findings from a large, novel, multimodal corpus of 1,656 recorded conversations in spoken English. This 7+ million word, 850 hour corpus totals over 1TB of audio, video, and transcripts, with moment-to-moment measures of vocal, facial, and semantic expression, along with an extensive survey of speakers' post-conversation reflections. We leverage the considerable scope of the corpus to (1) extend key findings from the literature, such as the cooperativeness of human turn-taking; (2) define novel algorithmic procedures for the segmentation of speech into conversational turns; (3) apply machine learning insights across various textual, auditory, and visual features to analyze what makes conversations succeed or fail; and (4) explore how conversations are related to well-being across the lifespan. We also present (5) a comprehensive mixed-method report, based on quantitative analysis and qualitative review of each recording, that showcases how individuals from diverse backgrounds alter their communication patterns and find ways to connect. We conclude with a discussion of how this large-scale public dataset may offer new directions for future research, especially across disciplinary boundaries, as scholars from a variety of fields appear increasingly interested in the study of conversation.
Advancing an Interdisciplinary Science of Conversation: Insights from a Large Multimodal Corpus of Human Speech
Avian influenza outbreaks cause millions of dollars in damage each year globally, especially in Asian countries such as China and South Korea. The impact magnitude of an outbreak correlates directly with the time required to fully understand the influenza virus, particularly its interspecies pathogenicity. The procedure requires laboratory tests that demand resources typically lacking in an outbreak emergency. In this study, we propose new quantitative methods utilizing machine learning and deep learning to correctly classify host species given raw DNA sequence data of the influenza virus, and to provide probabilities for each classification. The best deep learning models achieve a top-1 classification accuracy of 47% and a top-3 classification accuracy of 82% on a dataset of 11 host species classes.
AI4AI: Quantitative Methods for Classifying Host Species from Avian Influenza DNA Sequence
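As a sketch of how such a classifier can be set up, here is a minimal baseline that featurizes raw sequences with k-mer counts and outputs per-class probabilities; the featurization, model, and toy data are our illustrative assumptions, not the paper's pipeline:

```python
from collections import Counter
from itertools import product

from sklearn.linear_model import LogisticRegression

def kmer_counts(seq, k=3):
    """Vector of overlapping k-mer counts for a DNA sequence."""
    vocab = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts.get(km, 0) for km in vocab]

# toy (sequence, host-species) pairs standing in for real influenza data
data = [("ACGTACGTGGAC", "chicken"), ("TTGACCAGTACC", "duck"),
        ("ACGTTCGTGGAA", "chicken"), ("TTGACCTGTACG", "duck")]
X = [kmer_counts(s) for s, _ in data]
y = [label for _, label in data]

clf = LogisticRegression(max_iter=1000).fit(X, y)
# per-class probabilities support top-1/top-3 reporting as in the abstract
print(clf.classes_, clf.predict_proba(X[:1]))
```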
Symplectic eigenvalues are conventionally defined for symmetric positive-definite matrices via Williamson's diagonal form. Many properties of standard eigenvalues, including the trace minimization theorem, are extended to the case of symplectic eigenvalues. In this note, we will generalize Williamson's diagonal form for symmetric positive-definite matrices to the case of symmetric positive-semidefinite matrices, which allows us to define symplectic eigenvalues, and prove the trace minimization theorem in the new setting.
Symplectic eigenvalues of positive-semidefinite matrices and the trace minimization theorem
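For reference, the positive-definite statement being generalized reads as follows (standard form of Williamson's theorem; here $J$ denotes the standard symplectic form):

```latex
% Williamson's diagonal form: for a symmetric positive-definite
% 2n x 2n matrix M there exists a symplectic S (S^T J S = J, with
% J = [[0, I_n], [-I_n, 0]]) such that
S^T M S = \begin{pmatrix} D & 0 \\ 0 & D \end{pmatrix},
\qquad D = \operatorname{diag}(d_1, \dots, d_n), \quad d_j > 0,
% and d_1, ..., d_n are the symplectic eigenvalues of M.
```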
The general aim of multi-focus image fusion is to gather focused regions of different images to generate a unique all-in-focus fused image. Deep learning based methods have become the mainstream of image fusion by virtue of their powerful feature representation ability. However, most existing deep learning structures fail to balance fusion quality and end-to-end implementation convenience. End-to-end decoder design often leads to unrealistic results because of its non-linear mapping mechanism. On the other hand, generating an intermediate decision map achieves better quality for the fused image, but relies on rectification with empirical post-processing parameter choices. In this work, to handle the requirements of both output image quality and comprehensive simplicity of structure implementation, we propose a cascade network to simultaneously generate the decision map and the fused result with an end-to-end training procedure, avoiding dependence on empirical post-processing methods in the inference stage. To improve the fusion quality, we introduce a gradient aware loss function to preserve gradient information in the output fused image. In addition, we design a decision calibration strategy to decrease the time consumption in the application of multiple images fusion. Extensive experiments are conducted to compare with 19 different state-of-the-art multi-focus image fusion structures with 6 assessment metrics. The results prove that our designed structure can generally ameliorate the output fused image quality, while implementation efficiency increases by over 30\% for multiple images fusion.
End-to-End Learning for Simultaneously Generating Decision Map and Multi-Focus Image Fusion Result
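A gradient-aware term of the kind mentioned above can be sketched as follows; the Sobel-based formulation and the L1 comparison are our assumptions for illustration, not necessarily the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def gradient_aware_loss(fused, reference):
    """Penalize discrepancies between the image gradients of the fused
    output and a reference, so edge/focus information is preserved."""
    sobel_x = torch.tensor([[-1., 0., 1.],
                            [-2., 0., 2.],
                            [-1., 0., 1.]]).view(1, 1, 3, 3)
    sobel_y = sobel_x.transpose(2, 3)

    def grad_mag(img):
        gx = F.conv2d(img, sobel_x, padding=1)
        gy = F.conv2d(img, sobel_y, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

    return F.l1_loss(grad_mag(fused), grad_mag(reference))

fused = torch.rand(1, 1, 64, 64, requires_grad=True)  # network output
reference = torch.rand(1, 1, 64, 64)                  # gradient target
gradient_aware_loss(fused, reference).backward()
```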
Penalized regression methods, such as lasso and elastic net, are used in many biomedical applications when simultaneous regression coefficient estimation and variable selection is desired. However, missing data complicates the implementation of these methods, particularly when missingness is handled using multiple imputation. Applying a variable selection algorithm on each imputed dataset will likely lead to different sets of selected predictors, making it difficult to ascertain a final active set without resorting to ad hoc combination rules. In this paper we consider a general class of penalized objective functions which, by construction, force selection of the same variables across multiply-imputed datasets. By pooling objective functions across imputations, optimization is then performed jointly over all imputed datasets rather than separately for each dataset. We consider two objective function formulations that exist in the literature, which we will refer to as "stacked" and "grouped" objective functions. Building on existing work, we (a) derive and implement efficient cyclic coordinate descent and majorization-minimization optimization algorithms for both continuous and binary outcome data, (b) incorporate adaptive shrinkage penalties, (c) compare these methods through simulation, and (d) develop an R package miselect for easy implementation. Simulations demonstrate that the "stacked" objective function approaches tend to be more computationally efficient and have better estimation and selection properties. We apply these methods to data from the University of Michigan ALS Patients Repository (UMAPR) which aims to identify the association between persistent organic pollutants and ALS risk.
Variable selection with multiply-imputed datasets: choosing between stacked and grouped methods
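To make the "stacked" idea concrete: concatenating the M imputed datasets and fitting a single penalized model forces one common active set, since only one coefficient vector exists. A minimal sketch with a lasso and 1/M observation weights (a weighting convention assumed for illustration; the authors' miselect R package implements the full methods):

```python
import numpy as np
from sklearn.linear_model import Lasso

def stacked_lasso(imputed_datasets, y, alpha=0.1):
    """Fit one lasso on the vertically stacked imputations, weighting
    each copy by 1/M so every subject contributes unit total weight."""
    M = len(imputed_datasets)
    X = np.vstack(imputed_datasets)
    yy = np.tile(y, M)
    weights = np.full(len(yy), 1.0 / M)
    model = Lasso(alpha=alpha).fit(X, yy, sample_weight=weights)
    return model.coef_   # one coefficient vector => one selected set

rng = np.random.default_rng(1)
y = rng.normal(size=50)
imputations = [rng.normal(size=(50, 10)) for _ in range(5)]
print(np.flatnonzero(stacked_lasso(imputations, y)))  # selected variables
```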
The rigorous description of correlated quantum many-body systems constitutes one of the most challenging tasks in contemporary physics and related disciplines. In this context, a particularly useful tool is the concept of effective pair potentials that take into account the effects of the complex many-body medium consistently. In this work, we present extensive, highly accurate \emph{ab initio} path integral Monte Carlo (PIMC) results for the effective interaction and the effective force between two electrons in the presence of the uniform electron gas (UEG). This gives us a direct insight into finite-size effects, thereby opening up the possibility for novel domain decompositions and methodological advances. In addition, we present unassailable numerical proof for an effective attraction between two electrons under moderate coupling conditions, without the mediation of an underlying ionic structure. Finally, we compare our exact PIMC results to effective potentials from linear-response theory, and we demonstrate their usefulness for the description of the dynamic structure factor. All PIMC results are made freely available online and can be used as a thorough benchmark for new developments and approximations.
Effective electronic forces and potentials from ab initio path integral Monte Carlo simulations
The COVID-19 pandemic has drastically changed accepted norms globally. Within the past year, masks have been used as a public health response to limit the spread of the virus. This sudden change has rendered many face recognition based access control, authentication and surveillance systems ineffective. Official documents such as passports, driving license and national identity cards are enrolled with fully uncovered face images. However, in the current global situation, face matching systems should be able to match these reference images with masked face images. As an example, in an airport or security checkpoint it is safer to match the unmasked image of the identifying document to the masked person rather than asking them to remove the mask. We find that current facial recognition techniques are not robust to this form of occlusion. To address this unique requirement presented due to the current circumstance, we propose a set of re-purposed datasets and a benchmark for researchers to use. We also propose a contrastive visual representation learning based pre-training workflow which is specialized to masked vs unmasked face matching. We ensure that our method learns robust features to differentiate people across varying data collection scenarios. We achieve this by training over many different datasets and validating our result by testing on various holdout datasets. The specialized weights trained by our method outperform standard face recognition features for masked to unmasked face matching. We believe the provided synthetic mask generating code, our novel training approach and the trained weights from the masked face models will help in adopting existing face recognition systems to operate in the current global environment. We open-source all contributions for broader use by the research community.
Multi-Dataset Benchmarks for Masked Identification using Contrastive Representation Learning
In an equiangular spiral, "the whorls continually increase in breadth and do so in a steady and unchanging ratio... It follows that the sectors cut off by successive radii, at equal vectorial angles, are similar to one another in every respect and that the figure may be conceived as growing continuously without ever changing its shape the while" as stated by Sir D'Arcy W. Thompson. Their mathematical modeling is a very attractive topic of study and research, more specifically the geometrical conditions under which any quadrangle or triangle can be fitted into similar copies of itself to form an equiangular spiral. This formation gives the impression of a digital form of spiral, where every digit is a triangle or quadrangle following similarity laws, which allows a multiplicity of design capabilities. The study of these capabilities is presented in the present article and is related to the geometry and the design characteristics of equiangular spirals.
Geometry and Design of Equiangular Spirals
We review applications of Dyson-Schwinger equations at nonzero temperature, T, and chemical potential, mu, touching topics such as: deconfinement and chiral symmetry restoration; the behaviour of bulk thermodynamic quantities; the (T,mu)-dependence of hadron properties; and the possibility of diquark condensation.
Dyson-Schwinger Equations and the Quark-Gluon Plasma
The energy asymmetry in top-antitop-jet production is an observable of the top charge asymmetry designed for the LHC. We perform a realistic analysis in the boosted kinematic regime, including effects of the parton shower, hadronization and expected experimental uncertainties. Our predictions at particle level show that the energy asymmetry in the Standard Model can be measured with a significance of $3\sigma$ during Run 3, and with more than $5\sigma$ significance at the HL-LHC. Beyond the Standard Model the energy asymmetry is a sensitive probe of new physics with couplings to top quarks. In the framework of the Standard Model Effective Field Theory, we show that the sensitivity of the energy asymmetry to effective four-quark interactions is higher or comparable to other top observables and resolves blind directions in current LHC fits. We suggest to include the energy asymmetry as an important observable in global searches for new physics in the top sector.
Measuring the top energy asymmetry at the LHC: QCD and SMEFT interpretations
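Schematically, the observable is built from the energy difference $\Delta E = E_t - E_{\bar t}$ of the top and antitop in a given kinematic region; our notation follows the usual convention in the $t\bar t j$ literature, where the asymmetry is studied as a function of the jet angle $\theta_j$:

```latex
% Energy asymmetry in t tbar j production (schematic definition),
% with Delta E = E_t - E_tbar and theta_j the jet scattering angle:
A_E(\theta_j) =
\frac{\sigma(\Delta E > 0 \mid \theta_j) - \sigma(\Delta E < 0 \mid \theta_j)}
     {\sigma(\Delta E > 0 \mid \theta_j) + \sigma(\Delta E < 0 \mid \theta_j)} .
```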
Well curated, large-scale corpora of social media posts containing broad public opinion offer an alternative data source to complement traditional surveys. While surveys are effective at collecting representative samples and are capable of achieving high accuracy, they can be both expensive to run and lag public opinion by days or weeks. Both of these drawbacks could be overcome with a real-time, high volume data stream and fast analysis pipeline. A central challenge in orchestrating such a data pipeline is devising an effective method for rapidly selecting the best corpus of relevant documents for analysis. Querying with keywords alone often includes irrelevant documents that are not easily disambiguated with bag-of-words natural language processing methods. Here, we explore methods of corpus curation to filter irrelevant tweets using pre-trained transformer-based models, fine-tuned for our binary classification task on hand-labeled tweets. We are able to achieve F1 scores of up to 0.95. The low cost and high performance of fine-tuning such a model suggests that our approach could be of broad benefit as a pre-processing step for social media datasets with uncertain corpus boundaries.
Curating corpora with classifiers: A case study of clean energy sentiment online
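A minimal version of the fine-tuning step can be sketched with the transformers library; the backbone, hyperparameters, and toy tweets are illustrative assumptions, not the paper's configuration:

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# hand-labeled tweets: 1 = relevant to the topic, 0 = off-topic
data = Dataset.from_dict({
    "text": ["solar power is getting so cheap", "cleaning my energy drink spill"],
    "label": [1, 0],
})

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
data = data.map(lambda b: tok(b["text"], truncation=True,
                              padding="max_length", max_length=64),
                batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3),
    train_dataset=data,
)
trainer.train()  # the fine-tuned classifier then filters the keyword-queried stream
```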
We examine a class of exact solutions for the eigenvalues and eigenfunctions of a doubly anharmonic oscillator defined by the potential $V(x)=\omega^2/2 x^2+\lambda x^4/4+\eta x^6/6$, $\eta>0$. These solutions hold provided certain constraints on the coupling parameters $\omega^2$, $\lambda$ and $\eta$ are satisfied.
A note on a class of exact solutions for a doubly anharmonic oscillator
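The claimed exact eigenvalues are easy to cross-check numerically. A finite-difference diagonalization of $H = -\tfrac{1}{2}\,d^2/dx^2 + V(x)$ (units $\hbar = m = 1$ assumed) is sketched below; the grid size and box length are illustrative:

```python
import numpy as np

def spectrum(omega2, lam, eta, n=1000, L=10.0, k=4):
    """Lowest k eigenvalues of H = -(1/2) d^2/dx^2 + V(x) on a uniform
    grid with Dirichlet boundaries, V the doubly anharmonic potential."""
    x = np.linspace(-L, L, n)
    h = x[1] - x[0]
    V = omega2 / 2 * x**2 + lam / 4 * x**4 + eta / 6 * x**6
    H = (np.diag(1.0 / h**2 + V)
         - np.diag(np.full(n - 1, 0.5 / h**2), 1)
         - np.diag(np.full(n - 1, 0.5 / h**2), -1))
    return np.linalg.eigvalsh(H)[:k]

# sanity check against the harmonic limit: 0.5, 1.5, 2.5, 3.5
print(spectrum(omega2=1.0, lam=0.0, eta=0.0))
```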
With the development of machine learning, it is difficult for a single server to process all the data, so machine learning tasks need to be spread across multiple servers, turning centralized machine learning into distributed machine learning. However, privacy remains an unsolved problem in distributed machine learning. Multi-key homomorphic encryption over torus (MKTFHE) is one of the suitable candidates for solving this problem. However, there may be security risks in the decryption of MKTFHE, and the most recent results on MKTFHE only support Boolean and linear operations. MKTFHE therefore cannot compute non-linear functions such as the sigmoid directly, and it is still hard to perform common machine learning tasks such as logistic regression and neural networks with high performance. This paper first introduces secret sharing to propose a new distributed decryption protocol for MKTFHE, then designs an MKTFHE-friendly activation function, and finally utilizes them to implement logistic regression and neural network training in MKTFHE. We prove the correctness and security of our decryption protocol and compare the efficiency and accuracy of using Taylor polynomials of the sigmoid versus our proposed function as the activation function. The experiments show that the efficiency of our function is 10 times higher than directly using 7th-order Taylor polynomials, and the accuracy of the training model is similar to that of a scheme using a high-order polynomial as the activation function.
Securer and Faster Privacy-Preserving Distributed Machine Learning
Motivated by vision tasks such as robust face and object recognition, we consider the following general problem: given a collection of low-dimensional linear subspaces in a high-dimensional ambient (image) space and a query point (image), efficiently determine the nearest subspace to the query in $\ell^1$ distance. We show in theory that Cauchy random embedding of the objects into significantly-lower-dimensional spaces helps preserve the identity of the nearest subspace with constant probability. This offers the possibility of efficiently selecting several candidates for accurate search. We sketch preliminary experiments on robust face and digit recognition to corroborate our theory.
Efficient Point-to-Subspace Query in $\ell^1$: Theory and Applications in Computer Vision
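The scheme can be sketched end to end: the $\ell^1$ point-to-subspace distance is a small linear program, and the embedding is multiplication by a matrix with i.i.d. standard Cauchy entries. The dimensions and the LP formulation below are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def l1_dist_to_subspace(x, B):
    """min_c ||x - B c||_1 via the standard LP reformulation with
    slack variables t: minimize sum(t) s.t. -t <= x - Bc <= t."""
    D, d = B.shape
    obj = np.concatenate([np.zeros(d), np.ones(D)])
    A_ub = np.block([[ B, -np.eye(D)],
                     [-B, -np.eye(D)]])
    b_ub = np.concatenate([x, -x])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + D))
    return res.fun

D, d, m = 200, 5, 30
subspaces = [np.linalg.qr(rng.normal(size=(D, d)))[0] for _ in range(10)]
x = subspaces[3] @ rng.normal(size=d) + 0.01 * rng.normal(size=D)

P = rng.standard_cauchy(size=(m, D))   # heavy-tailed random embedding
near_full = min(range(10), key=lambda i: l1_dist_to_subspace(x, subspaces[i]))
near_proj = min(range(10), key=lambda i: l1_dist_to_subspace(P @ x, P @ subspaces[i]))
print(near_full, near_proj)  # agree with constant probability, as in the theory
```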
A Green functions approach is used to study superconductivity in nanofilms and nanowires. We show that the superconducting condensate results from the multimodal entanglement, or internal Josephson coupling, of the subcondensates associated with the manifold of Fermi surface subparts resulting from size-quantisation. This entanglement is of critical importance in these systems, since without it superconductivity would be extremely weak, if not completely negligible. Further, the multimodal character of the condensate generally results in multigap superconductivity, with great quantitative consequences for the values of the critical parameters. Our approach suggests that these are universal characteristics of confined superconductors.
Condensate entanglement and multigap superconductivity in nanoscale superconductors
Let k be a field and A the n-Kronecker algebra, that is, the path algebra of the quiver with 2 vertices, a source and a sink, and n arrows from the source to the sink. It is well-known that the dimension vectors of the indecomposable A-modules are the positive roots of the corresponding Kac-Moody algebra. Thorsten Weist has shown that for every positive root there are even tree modules with this dimension vector and that for every positive imaginary root there are at least n tree modules. Here, we present a short proof of this result. The considerations used also provide a calculation-free proof that all exceptional modules over the path algebra of a finite quiver are tree modules.
Indecomposable representations of the Kronecker quivers
In this paper, we introduce a new framework called the sentiment-aspect attribution module (SAAM). SAAM works on top of traditional neural networks and is designed to address the problem of multi-aspect sentiment classification and sentiment regression. The framework works by exploiting the correlations between sentence-level embedding features and variations of document-level aspect rating scores. We demonstrate several variations of our framework on top of CNN and RNN based models. Experiments on a hotel review dataset and a beer review dataset have shown SAAM can improve sentiment analysis performance over corresponding base models. Moreover, because of the way our framework intuitively combines sentence-level scores into document-level scores, it is able to provide deeper insight into the data (e.g., semi-supervised sentence aspect labeling). Hence, we end the paper with a detailed analysis that shows the potential of our models for other applications such as sentiment snippet extraction.
Multi-Aspect Sentiment Analysis with Latent Sentiment-Aspect Attribution
A microscopic scaling relation linking the normal and superconducting states of the cuprates in the presence of a pseudogap is presented using angle-resolved photoemission spectroscopy. This scaling relation, complementary to the bulk universal scaling relation embodied by Homes' law, explicitly connects the momentum-dependent amplitude of the d-wave superconducting order parameter at $T \sim 0$ to quasiparticle scattering mechanisms operative at $T \gtrsim T_c$. The form of the scaling is proposed to be a consequence of the marginal Fermi-liquid phenomenology and the inherently strong dissipation of the normal pseudogap state of the cuprates.
Universal scaling of length, time, and energy for cuprate superconductors based on photoemission measurements of Bi2Sr2CaCu2O8+{\delta}
We address the question of how to compute the probability distribution of the time at which a detector clicks, in the situation of $n$ non-relativistic quantum particles in a volume $\Omega\subset \mathbb{R}^3$ in physical space and detectors placed along the boundary $\partial \Omega$ of $\Omega$. We have previously [arXiv:1601.03715] argued in favor of a rule for the 1-particle case that involves a Schr\"odinger equation with an absorbing boundary condition on $\partial \Omega$ introduced by Werner; we call this rule the "absorbing boundary rule." Here, we describe the natural extension of the absorbing boundary rule to the $n$-particle case. A key element of this extension is that, upon a detection event, the wave function gets collapsed by inserting the detected position, at the time of detection, into the wave function, thus yielding a wave function of $n-1$ particles. We also describe an extension of the absorbing boundary rule to the case of moving detectors.
Detection Time Distribution for Several Quantum Particles
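For orientation, the one-particle rule being extended can be stated compactly (a sketch following the cited prior work; $\kappa > 0$ encodes the detector and $\mathbf{n}$ is the outward normal):

```latex
% One-particle absorbing boundary rule: Schroedinger evolution in Omega
% with an absorbing boundary condition on the detecting surface,
i\hbar\, \partial_t \psi = -\frac{\hbar^2}{2m} \nabla^2 \psi
  \quad \text{in } \Omega,
\qquad
\mathbf{n} \cdot \nabla \psi = i \kappa\, \psi
  \quad \text{on } \partial\Omega ,
% and the joint distribution of detection time and place is the
% outward probability flux across the boundary:
\mathrm{Prob}\big(t \in dt,\ \mathbf{x} \in d\mathbf{x}\big)
  = \mathbf{n} \cdot \mathbf{j}^{\psi_t}(\mathbf{x})\, dt\, d\mathbf{x},
\qquad
\mathbf{j}^{\psi} = \frac{\hbar}{m}\, \mathrm{Im}\big(\psi^* \nabla \psi\big).
```

In the $n$-particle extension described above, a detection at $\mathbf{x}$ collapses $\psi$ by inserting the detected position, leaving a wave function of $n-1$ particles.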
In this paper, we outline several conceptual and methodological issues related to modeling individual and group processes embedded in clustered/hierarchical data structures. We position multilevel modeling techniques within a broader set of univariate and multivariate methods commonly used to study different types of data structures. We then illustrate how the choice of analysis method affects how best to examine the data. This overview sets the stage for the further development of these themes and models in the present study.
Introduction to Multilevel Modeling Techniques
In this paper an alternative theory about space-time is given. First, some preliminaries about 3-dimensional time and the reasons for its introduction are presented. Alongside the 3-dimensional space (S), the 3-dimensional space of spatial rotations (SR) is considered independently from the 3-dimensional space. Then a model of the universe is given, based on the Lie groups of real and complex orthogonal 3x3 matrices in this 3+3+3-dimensional space. Special attention is devoted to the introduction and study of the space SxSR, which appears to be isomorphic to SO(3,R)xSO(3,R) or S^3xS^3. The influence of gravitational acceleration on spinning bodies is considered. Some important applications of these results about spinning bodies are given, which naturally lead to a violation of Newton's third law in its classical formulation. The precession of the spinning axis is also considered.
The geometry of the space-time and motion of the spinning bodies
We present the large-scale correlation function measured from a spectroscopic sample of 46,748 luminous red galaxies from the Sloan Digital Sky Survey. The survey region covers 0.72 h^{-3} Gpc^3 over 3816 square degrees and 0.16<z<0.47, making it the best sample yet for the study of large-scale structure. We find a well-detected peak in the correlation function at 100h^{-1} Mpc separation that is an excellent match to the predicted shape and location of the imprint of the recombination-epoch acoustic oscillations on the low-redshift clustering of matter. This detection demonstrates the linear growth of structure by gravitational instability between z=1000 and the present and confirms a firm prediction of the standard cosmological theory. The acoustic peak provides a standard ruler by which we can measure the ratio of the distances to z=0.35 and z=1089 to 4% fractional accuracy and the absolute distance to z=0.35 to 5% accuracy. From the overall shape of the correlation function, we measure the matter density Omega_mh^2 to 8% and find agreement with the value from cosmic microwave background (CMB) anisotropies. Independent of the constraints provided by the CMB acoustic scale, we find Omega_m = 0.273 +- 0.025 + 0.123 (1+w_0) + 0.137 Omega_K. Including the CMB acoustic scale, we find that the spatial curvature is Omega_K=-0.010+-0.009 if the dark energy is a cosmological constant. More generally, our results provide a measurement of cosmological distance, and hence an argument for dark energy, based on a geometric method with the same simple physics as the microwave background anisotropies. The standard cosmological model convincingly passes these new and robust tests of its fundamental properties.
Detection of the Baryon Acoustic Peak in the Large-Scale Correlation Function of SDSS Luminous Red Galaxies
A ternary message passing (TMP) decoding algorithm for low-density parity-check codes is developed. All messages exchanged between variable and check nodes have a ternary alphabet, and the variable nodes exploit soft information from the channel. A density evolution analysis is developed for unstructured and protograph-based ensembles. For unstructured ensembles the stability condition is derived. Optimized ensembles for TMP decoding show asymptotic gains of up to 0.6 dB with respect to ensembles optimized for binary message passing decoding. Finite length simulations of codes from TMP-optimized ensembles show gains of up to 0.5 dB under TMP compared to protograph-based codes designed for unquantized belief propagation decoding.
Protograph-Based LDPC Code Design for Ternary Message Passing Decoding
The Hubbard-Holstein model is a simple model including both electron-phonon interaction and electron-electron correlations. We review a body of theoretical work investigating the effects of strong correlations on the electron-phonon interaction. We focus on the regime, relevant to high-T_c superconductors, in which the electron correlations are dominant. We find that the electron-phonon interaction can still have important signatures, even if many anomalies appear, and the overall effect is far from conventional. In particular in the paramagnetic phase the effects of phonons are much reduced in the low-energy properties, while the high-energy physics can be strongly affected by phonons. Moreover, the electron-phonon interaction can still give rise to important effects, like phase separation and charge-ordering, and it assumes a predominance of forward scattering even if the bare interaction is assumed to be local (momentum independent). Antiferromagnetic correlations reduce the screening effects due to electron-electron interactions and revive the electron-phonon effects.
Electron-phonon interaction in Strongly Correlated Systems
Aerial filming is constantly gaining importance due to the recent advances in drone technology. It invites many intriguing, unsolved problems at the intersection of aesthetical and scientific challenges. In this work, we propose a deep reinforcement learning agent which supervises motion planning of a filming drone by making desirable shot mode selections based on aesthetical values of video shots. Unlike most of the current state-of-the-art approaches that require explicit guidance by a human expert, our drone learns how to make favorable viewpoint selections by experience. We propose a learning scheme that exploits aesthetical features of retrospective shots in order to extract a desirable policy for better prospective shots. We train our agent in realistic AirSim simulations using both a hand-crafted reward function as well as reward from direct human input. We then deploy the same agent on a real DJI M210 drone in order to test the generalization capability of our approach to real world conditions. To evaluate the success of our approach in the end, we conduct a comprehensive user study in which participants rate the shot quality of our methods. Videos of the system in action can be seen at https://youtu.be/qmVw6mfyEmw.
Can a Robot Become a Movie Director? Learning Artistic Principles for Aerial Cinematography
Encoding a qubit in a high quality superconducting microwave cavity offers the opportunity to perform the first layer of error correction in a single device, but presents a challenge: how can quantum oscillators be controlled while introducing a minimal number of additional error channels? We focus on the two-qubit portion of this control problem by using a 3-wave mixing coupling element to engineer a programmable beamsplitter interaction between two bosonic modes separated by more than an octave in frequency, without introducing major additional sources of decoherence. Combining this with single-oscillator control provided by a dispersively coupled transmon provides a framework for quantum control of multiple encoded qubits. The beamsplitter interaction $g_\text{bs}$ is fast relative to the timescale of oscillator decoherence, enabling over $10^3$ beamsplitter operations per coherence time, and approaching the typical rate of the dispersive coupling $\chi$ used for individual oscillator control. Further, the programmable coupling is engineered without adding unwanted interactions between the oscillators, as evidenced by the high on-off ratio of the operations, which can exceed $10^5$. We then introduce a new protocol to realize a hybrid controlled-SWAP operation in the regime $g_{bs}\approx\chi$, in which a transmon provides the control bit for the SWAP of two bosonic modes. Finally, we use this gate in a SWAP test to project a pair of bosonic qubits into a Bell state with measurement-corrected fidelity of $95.5\% \pm 0.2\%$.
A high on-off ratio beamsplitter interaction for gates on bosonically encoded qubits
We comment on mathematical results about the statistical behavior of the Lorenz equations and their attractor, and more generally on the class of singular hyperbolic systems. The mathematical theory of such systems turned out to be surprisingly difficult. It is remarkable that a rigorous proof of the existence of the Lorenz attractor was presented only around the year 2000, with a computer-assisted proof together with an extension of the hyperbolic theory developed to encompass attractors robustly containing equilibria. We present some of the main results on the statistical behavior of such systems. We show that for attractors of three-dimensional flows, robust chaotic behavior is equivalent to the existence of certain hyperbolic structures, known as singular-hyperbolicity. These structures, in turn, are associated to the existence of physical measures: \emph{in low dimensions, robust chaotic behavior for flows ensures the existence of a physical measure}. We then give more details on recent results on the dynamics of singular-hyperbolic (Lorenz-like) attractors.
Statistical properties of Lorenz like flows, recent developments and perspectives
Vacuum polarization of the quantized massive fields in Bianchi type I spacetime is investigated from the point of view of the adiabatic approximation and the Schwinger-DeWitt method. It is shown that both approaches give the same results, which can be used in the construction of the trace of the stress-energy tensor of the conformally coupled fields. The stress-energy tensor is calculated in the Bianchi type I spacetime and the back reaction of the quantized fields upon the Kasner geometry is studied. Special emphasis is put on the problem of isotropization, studied with the aid of the directional Hubble parameters. Similarities with the quantum corrected interior of the Schwarzschild black hole are briefly discussed.
Quantum fields in Bianchi type I spacetimes. The Kasner metric
This paper is a discussion of the physics of magnification in telescopes. Special attention is given to the question of whether telescopes magnify stars. Telescopes do magnify star images, although opinions to the contrary abound.
Is Magnification Consistent? Why people from amateur astronomers to science's worst enemy have some basic physics wrong, and why
Using the examples of pion-nucleon scattering and the nucleon mass we analyze the convergence of perturbative series in the framework of baryon chiral perturbation theory. For both cases we sum up sets of an infinite number of diagrams by solving equations exactly and compare the solutions with the perturbative contributions.
Probing the convergence of perturbative series in baryon chiral perturbation theory
Protostellar jets are known to emit in a wide range of bands, from radio to IR to optical bands, and to date also about ten X-ray emitting jets have been detected, with a rate of discovery of about one per year. We aim at investigating the mechanism leading to the X-ray emission detected in protostellar jets and at constraining the physical parameters that describe the jet/ambient interaction by comparing our model predictions with observations. We perform 2D axisymmetric hydrodynamic simulations of the interaction between a supersonic jet and the ambient. The jet is described as a train of plasma blobs randomly ejected by the stellar source along the jet axis. We explore the parameter space by varying the ejection rate, the initial jet Mach number, and the initial density contrast between the ambient and the jet. We synthesized from the model the X-ray emission as it would be observed with the current X-ray telescopes. The mutual interactions among the ejected blobs and of the blobs with the ambient medium lead to complex X-ray emitting structures within the jet: irregular chains of knots; isolated knots with measurable proper motion; apparently stationary knots; reverse shocks. The predicted X-ray luminosity strongly depends on the ejection rate and on the initial density contrast between the ambient and the jet, with a weaker dependence on the jet Mach number. Our model represents the first attempt to describe the X-ray properties of all the X-ray emitting protostellar jets. The comparison between our model predictions and the observations can provide a useful diagnostic tool necessary for a proper interpretation of the observations. In particular, we suggest that the observable quantities derived from the spectral analysis of X-ray observations can be used to constrain the ejection rate, a parameter explored in our model that is not measurable by current observations.
Generation of radiative knots in a randomly pulsed protostellar jet II. X-ray emission
We construct solutions of type IIB supergravity dual to N=2 super Yang-Mills theories. By considering a probe moving in a background with constant coupling and an AdS_{5} component in its geometry, we are able to reproduce the exact low energy effective action for the theory with gauge group SU(2) and N_{f}=4 massless flavors. After turning on a mass for the flavors we find corrections to the AdS_{5} geometry. In addition, the coupling shows a power law dependence on the energy scale of the theory. The origin of the power law behaviour of the coupling is traced back to instanton corrections. Instanton corrections to the four derivative terms in the low energy effective action are correctly obtained from a probe analysis. By considering a Wilson loop in this geometry we are also able to compute the instanton effects on the quark-antiquark potential. Finally we consider a solution corresponding to an asymptotically free field theory. Again, the leading form of the four derivative terms in the low energy effective action are in complete agreement with field theory expectations.
Non-holomorphic Corrections from Threebranes in F Theory
Currently, pre-trained language models (PLMs) do not cope well with the distribution shift problem, resulting in models trained on the training set failing in real test scenarios. To address this problem, test-time adaptation (TTA) shows great potential, updating model parameters to suit the test data at testing time. Existing TTA methods rely on well-designed auxiliary tasks or on self-training strategies based on pseudo-labels. However, these methods do not achieve a good trade-off between performance gains and computational costs. To obtain some insights into this dilemma, we take two representative TTA methods, i.e., Tent and OIL, for exploration and find that stable prediction is the key to achieving a good balance. Accordingly, in this paper, we propose perturbation consistency learning (PCL), a simple test-time adaptation method that promotes stable predictions for samples with distribution shifts. Extensive experiments on adversarial robustness and cross-lingual transfer demonstrate that our method can achieve higher or comparable performance with less inference time over strong PLM backbones and previous state-of-the-art TTA methods.
Test-Time Adaptation with Perturbation Consistency Learning
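A consistency regularizer of the kind described can be sketched as follows; the embedding-noise perturbation and the symmetric-KL form are assumptions for illustration and may differ from the paper's exact PCL objective:

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, input_ids, attention_mask, noise_std=1e-3):
    """Encourage stable predictions: compare the model's output on clean
    input embeddings with its output under a small random perturbation."""
    emb = model.get_input_embeddings()(input_ids)
    clean = model(inputs_embeds=emb, attention_mask=attention_mask).logits
    noisy_emb = emb + noise_std * torch.randn_like(emb)
    noisy = model(inputs_embeds=noisy_emb, attention_mask=attention_mask).logits

    log_p, log_q = F.log_softmax(clean, -1), F.log_softmax(noisy, -1)
    # symmetric KL between clean and perturbed predictive distributions
    return 0.5 * (F.kl_div(log_q, log_p.exp(), reduction="batchmean")
                  + F.kl_div(log_p, log_q.exp(), reduction="batchmean"))
```

At test time, one gradient step on such a loss per batch (updating, say, only normalization parameters as in Tent) adapts the model without labels.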
Superbursts from accreting neutron stars probe nuclear reactions at extreme densities ($\rho \approx 10^{9}~g\,cm^{-3}$) and temperatures ($T>10^9~K$). These bursts ($\sim$1000 times more energetic than type I X-ray bursts) are most likely triggered by unstable ignition of carbon in a sea of heavy nuclei made during the rp-process of regular type I X-ray bursts (where the accumulated hydrogen and helium are burned). An open question is the origin of sufficient amounts of carbon, which is largely destroyed during the rp-process in X-ray bursts. We explore carbon production in steady-state burning via the rp-process, which might occur together with unstable burning in systems showing superbursts. We find that for a wide range of accretion rates and accreted helium mass fractions large amounts of carbon are produced, even for systems that accrete solar composition. This makes stable hydrogen and helium burning a viable source of carbon to trigger superbursts. We also investigate the sensitivity of the results to nuclear reactions. We find that the $^{14}$O($\alpha$,p)$^{17}$F reaction rate introduces by far the largest uncertainties in the $^{12}$C yield.
Carbon Synthesis in Steady-State Hydrogen and Helium Burning On Accreting Neutron Stars
We study the development of mean structures in a nonlinear model of large scale ocean dynamics with bottom topography and dissipation, forced with a noise term. We show that the presence of noise in this nonlinear model leads to persistent average currents directed along isobaths. At variance with previous works we use a scale unselective dissipation, so that the phenomenon cannot be explained in terms of minimum enstrophy states. The effect requires the presence of both the nonlinear and the random terms, and can be thought of as an ordering of the stochastic energy input by the combined effect of nonlinearity and topography. The statistically steady state is well described by a generalized canonical equilibrium with mean energy and enstrophy determined by a balance between random forcing and dissipation. This result allows predicting the strength of the noise-sustained currents. Finally we discuss the relevance that these noise-induced currents could have for real ocean circulation.
Noise-Sustained currents in quasigeostrophic turbulence over topography
This study examines the theoretical and empirical perspectives of the symmetric Hawkes model of the price tick structure. Combined with maximum likelihood estimation, the model provides a proper method of volatility estimation specialized in ultra-high-frequency analysis. Empirical studies based on the model using the ultra-high-frequency data of stocks in the S\&P 500 are performed. The performance of the volatility measure, intraday estimation, and the dynamics of the parameters are discussed. A new approach of diffusion analogy to the symmetric Hawkes model is proposed, with distributional properties very close to those of the Hawkes model. As a diffusion process, the model provides more analytical simplicity when computing the variance formula, incorporating skewness, and examining probabilistic properties. An estimation of the diffusion model is performed using the simulated maximum likelihood method and shows patterns similar to the Hawkes model.
Modeling microstructure price dynamics with symmetric Hawkes and diffusion model using ultra-high-frequency stock data
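The MLE machinery can be illustrated in the univariate case with an exponential kernel (the paper's symmetric model couples up-tick and down-tick intensities analogously); the Ogata-style recursion keeps the likelihood evaluation linear in the number of ticks:

```python
import numpy as np
from scipy.optimize import minimize

def hawkes_neg_loglik(params, t, T):
    """Negative log-likelihood of a univariate Hawkes process with
    intensity mu + sum_i alpha * exp(-beta * (t - t_i))."""
    mu, alpha, beta = params
    if mu <= 0 or alpha < 0 or beta <= 0 or alpha >= beta:
        return np.inf                     # enforce stationarity
    r, loglik = 0.0, 0.0
    for i in range(len(t)):
        if i > 0:                         # recursive excitation term
            r = np.exp(-beta * (t[i] - t[i - 1])) * (r + alpha)
        loglik += np.log(mu + r)
    compensator = mu * T + (alpha / beta) * np.sum(1 - np.exp(-beta * (T - t)))
    return -(loglik - compensator)

# placeholder event times standing in for real tick data
t = np.sort(np.random.default_rng(0).uniform(0, 100, 400))
fit = minimize(hawkes_neg_loglik, x0=[1.0, 0.5, 2.0], args=(t, 100.0),
               method="Nelder-Mead")
print(fit.x)   # (mu, alpha, beta); alpha/beta is the branching ratio
```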
Learning with noisy labels has gained enormous interest in the robust deep learning area. Recent studies have empirically disclosed that utilizing dual networks can enhance the performance of a single network, but without theoretical proof. In this paper, we propose the Cooperative Learning (CooL) framework for noisy supervision that analytically explains the effects of leveraging dual or multiple networks. Specifically, the simple but efficient combination in CooL yields a more reliable risk minimization for unseen clean data. A range of experiments have been conducted on several benchmarks with both synthetic and real-world settings. Extensive results indicate that CooL outperforms several state-of-the-art methods.
Cooperative Learning for Noisy Supervision
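Since the abstract does not spell out the combination rule, here is one deliberately simple reading, sketched for concreteness: take the loss on the averaged prediction of the two networks. All names and the averaging choice are our assumptions, not the paper's definition:

```python
import torch
import torch.nn.functional as F

def cool_step(net_a, net_b, x, noisy_y, optimizer, eps=1e-12):
    """One training step on the *combined* prediction of two networks
    (an assumed, simplified reading of 'cooperative combination')."""
    p = 0.5 * (F.softmax(net_a(x), dim=-1) + F.softmax(net_b(x), dim=-1))
    loss = F.nll_loss(torch.log(p + eps), noisy_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```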
The detection of gravitational waves has offered us the opportunity to explore the dynamical and strong-field regime of gravity. Because matched filtering is more sensitive to variations in the gravitational waveform phase than the amplitude, many tests of gravity with gravitational waves have been carried out using only the former. Such studies cannot probe the non-Einsteinian effects that may enter only in the amplitude. Moreover, if not accommodated in the waveform template, a non-Einsteinian effect in the amplitude may induce systematic errors on other parameters such as the luminosity distance. In this paper, we derive constraints on a few modified theories of gravity (Einstein-dilaton-Gauss-Bonnet gravity, scalar-tensor theories, and varying-$G$ theories), incorporating both phase and amplitude corrections. We follow the model-independent approach of the parametrized post-Einsteinian formalism. We perform Fisher analyses with Monte-Carlo simulations using the LIGO/Virgo posterior samples. We find that the contributions from amplitude corrections can be comparable to those from the phase corrections in the case of massive binaries like GW150914. Also, constraints derived by incorporating both phase and amplitude corrections differ from the ones with phase corrections only by 4% at most, which supports many of the previous studies that only considered corrections in the phase. We further derive reliable constraints on the time-evolution of a scalar field in a scalar-tensor theory for the first time with gravitational waves.
Testing Gravity with Gravitational Waves from Binary Black Hole Mergers: Contributions from Amplitude Corrections
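The waveform model underlying such analyses is standard: the parametrized post-Einsteinian form multiplies the general-relativity waveform by amplitude and phase deformations,

```latex
% ppE waveform with both amplitude and phase corrections, where
% u = (pi * Mc * f)^{1/3} and Mc is the chirp mass; (alpha, a) and
% (beta, b) parametrize the amplitude and phase deformations:
\tilde h(f) = \tilde h_{\mathrm{GR}}(f)\,
              \bigl(1 + \alpha\, u^{a}\bigr)\, e^{\, i \beta\, u^{b}} ,
\qquad u = (\pi \mathcal{M} f)^{1/3} .
```

Setting $\alpha = 0$ recovers the phase-only analyses the abstract compares against.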
Logical connectives and their implications on the meaning of a natural language sentence are a fundamental aspect of understanding. In this paper, we investigate whether visual question answering (VQA) systems trained to answer a question about an image are able to answer the logical composition of multiple such questions. When put under this \textit{Lens of Logic}, state-of-the-art VQA models have difficulty in correctly answering these logically composed questions. We construct an augmentation of the VQA dataset as a benchmark, with questions containing logical compositions and linguistic transformations (negation, disjunction, conjunction, and antonyms). We propose our Lens of Logic (LOL) model, which uses question-attention and logic-attention to understand logical connectives in the question, and a novel Fr\'echet-Compatibility Loss, which ensures that the answers of the component questions and the composed question are consistent with the inferred logical operation. Our model shows substantial improvement in learning logical compositions while retaining performance on VQA. We suggest this work as a move towards robustness by embedding logical connectives in visual understanding.
VQA-LOL: Visual Question Answering under the Lens of Logic
We give the first (ZFC) dividing line in Keisler's order among the unstable theories, specifically among the simple unstable theories. That is, for any infinite cardinal $\lambda$ for which there is $\mu < \lambda \leq 2^\mu$, we construct a regular ultrafilter D on $\lambda$ such that (i) for any model $M$ of a stable theory or of the random graph, $M^\lambda/D$ is $\lambda^+$-saturated but (ii) if $Th(N)$ is not simple or not low then $N^\lambda/D$ is not $\lambda^+$-saturated. The non-saturation result relies on the notion of flexible ultrafilters. To prove the saturation result we develop a property of a class of simple theories, called Qr1, generalizing the fact that whenever $B$ is a set of parameters in some sufficiently saturated model of the random graph, $|B| = \lambda$ and $\mu < \lambda \leq 2^\mu$, then there is a set $A$ with $|A| = \mu$ such that any non-algebraic $p \in S(B)$ is finitely realized in $A$. In addition to giving information about simple unstable theories, our proof reframes the problem of saturation of ultrapowers in several key ways. We give a new characterization of good filters in terms of "excellence," a measure of the accuracy of the quotient Boolean algebra. We introduce and develop the notion of moral ultrafilters on Boolean algebras. We prove a so-called "separation of variables" result which shows how the problem of constructing ultrafilters to have a precise degree of saturation may be profitably separated into a more set-theoretic stage, building an excellent filter, followed by a more model-theoretic stage: building moral ultrafilters on the quotient Boolean algebra, a process which highlights the complexity of certain patterns, arising from first-order formulas, in certain Boolean algebras.
A Dividing Line Within Simple Unstable Theories