An algorithm is described that can generate random variants of a time series or image while preserving the probability distribution of original values and the pointwise Hölder regularity. Thus, it preserves the multifractal properties of the data. Our algorithm is similar in principle to well-known algorithms based on the preservation of the Fourier amplitude spectrum and original values of a time series. However, it is underpinned by a dual-tree complex wavelet transform rather than a Fourier transform. Our method, which we term the Iterated Amplitude Adjusted Wavelet Transform (IAAWT) method, can be used to generate bootstrapped versions of multifractal data and, because it preserves the pointwise Hölder regularity but not the local Hölder regularity, it can be used to test hypotheses concerning the presence of oscillating singularities in a time series, an important feature of turbulence and econophysics data. Because the locations of the data values are randomized with respect to the multifractal structure, hypotheses about their mutual coupling can be tested, which is important for the velocity-intermittency structure of turbulence and self-regulating processes.
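Since the abstract positions IAAWT as a wavelet-domain analogue of the well-known Fourier-based surrogate methods, the following minimal sketch shows the classic IAAFT iteration those methods share; the paper's IAAWT would replace the FFT with a dual-tree complex wavelet transform, and the function and parameter names here are illustrative, not the authors' code.

```python
import numpy as np

def iaaft(x, n_iter=100, seed=0):
    """Iterated Amplitude Adjusted Fourier Transform surrogate.

    Alternates between imposing the original Fourier amplitude
    spectrum and the original value distribution. The IAAWT method
    described above swaps the FFT for a dual-tree complex wavelet
    transform so that pointwise Holder regularity is preserved.
    """
    rng = np.random.default_rng(seed)
    amplitudes = np.abs(np.fft.rfft(x))   # target amplitude spectrum
    sorted_vals = np.sort(x)              # target value distribution
    y = rng.permutation(x)                # random initial surrogate
    for _ in range(n_iter):
        # Step 1: impose the target amplitude spectrum, keep the phases
        phases = np.angle(np.fft.rfft(y))
        y = np.fft.irfft(amplitudes * np.exp(1j * phases), n=len(x))
        # Step 2: impose the target value distribution by rank mapping
        ranks = np.argsort(np.argsort(y))
        y = sorted_vals[ranks]
    return y
```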
A pretorsion theory for the category of all categories is presented. The associated prekernels and precokernels are calculated for every functor.
This paper considers the reconstruction of a sparse coefficient vector {\theta} for a rational transfer function, under a pair of FIR and Takenaka-Malmquist (TM) bases and from a limited number of linear frequency-domain measurements. We propose to concatenate a limited number of FIR and TM basis functions in the representation of the transfer function, and prove the uniqueness of the sparse representation defined in the infinite-dimensional function space with pairs of FIR and TM bases. A sufficient condition is given for replacing the l_0 optimal solution by the l_1 optimal solution using FIR and TM bases with random samples on the upper unit circle, as the foundation of the reconstruction. Simulations verify that l_1 minimization can reconstruct the coefficient vector {\theta} with high probability. It is shown that the concatenated FIR and TM bases give a much sparser representation, with a much lower reconstruction order than using only FIR basis functions and less dependency on knowledge of the true system poles than using only TM basis functions.
The paper is concerned with the completeness property of the root functions of the Dirac operator with a summable complex-valued potential and non-regular boundary conditions. We also obtain an explicit form for the fundamental system of solutions of the considered operator.
We investigate the electronic structure of Ca1-xSrxVO3 using careful state-of-the-art experiments and calculations. Photoemission spectra using synchrotron radiation reveal a hitherto unnoticed polarization dependence of the photoemission matrix elements for the surface component leading to a substantial suppression of its intensity. Bulk spectra extracted with the help of experimentally determined electron escape depth and estimated suppression of surface contributions resolve outstanding puzzles concerning the electronic structure in Ca1-xSrxVO3.
We present observational evidence that dust in the circumnuclear region of AGNs has different properties from dust in the Galactic diffuse interstellar medium. By comparing the reddening of optical and infrared broad lines with the X-ray absorbing column density, we find that the E(B-V)/N_H ratio is nearly always lower than Galactic by a factor ranging from ~3 up to ~100. Other observational results indicate that the Av/N_H ratio is significantly lower than Galactic in various classes of AGNs, including intermediate type 1.8-1.9 Seyferts, hard X-ray selected and radio selected quasars, broad absorption line QSOs and grism selected QSOs. The lack of prominent absorption features at 9.7um (silicates) and at 2175A (carbon dip) in the spectra of Seyfert 2s and of reddened Seyfert 1s, respectively, adds further evidence that dust in the circumnuclear region of AGNs is different from Galactic dust. These observational results indicate that the dust composition in the circumnuclear region of AGNs could be dominated by large grains, which make the extinction curve flatter and featureless and are responsible for the reduction of the E(B-V)/N_H and Av/N_H ratios. Regardless of the physical origin of these phenomena, the reduced dust absorption with respect to what is expected from the gaseous column density warns of a possible mismatch between the optical and X-ray classifications of active galactic nuclei in terms of their obscuration.
Exploring the nucleon's sea quark and gluon structure is a prime objective of a future electron-ion collider (EIC). Many of the key questions require accurate differential semi-inclusive (spin/flavor decomposition, orbital motion) and exclusive (spatial distributions of quarks/gluons) DIS measurements in the region 0.01 < x < 0.3 and Q^2 ~ few 10 GeV^2. Such measurements could ideally be performed with a high-luminosity collider of moderate CM energy, s ~ 10^3 GeV^2, and relatively symmetric configuration, e.g. E_e/E_p = 5/30-60 GeV. Specific examples are presented, showing the advantages of this setup (angular/energy distribution of final-state particles, large-x coverage) compared to typical high-energy colliders.
The need to characterize ices coating dust grains in dense interstellar clouds arises from the importance of ice morphology in facilitating the diffusion and storage of radicals and reaction products in ices, a well-known site for the formation of complex molecules. Yet, there is considerable uncertainty about the structure of ISM ices, their ability to store volatiles, and the conditions under which they do so. We measured the infrared absorption spectra of CO on the pore surface of porous amorphous solid water (ASW), and quantified the effective pore surface area of ASW. Additionally, we present results obtained from a Monte Carlo model of ASW in which the morphology of the ice is directly visualized and quantified. We found that 200 ML of ASW annealed to 20 K has a total pore surface area that is equivalent to 46 ML. This surface area decreases linearly with temperature up to about 120 K. We also found that (1) dangling OH bonds only exist on the surface of pores; (2) almost all of the pores in the ASW are connected to the vacuum--ice interface, and are accessible for adsorption of volatiles from the gas phase; there are few closed cavities inside ASW, at least up to a thickness of 200 ML; (3) the total pore surface area is proportional to the total number of 3-coordinated water molecules in the ASW in the temperature range 60--120 K. We also discuss the implications for the structure of ASW and for surface reactions in the ice mantle in dense clouds.
The flux of photons with energies above 1 TeV from the direction of the centre and a cloud in the western part of the nearby southern supernova remnant (SNR) RX J1713.7-3946 is calculated in the ``hadronic scenario'' that aims to explain the intense VHE radiation from this remnant with the decay of \pi^0 pions produced in nuclear collisions. The expected flux from its centre is found to fall short of the one observed by the HESS collaboration by about a factor of 40. This discrepancy presents a serious obstacle to the ``hadronic scenario''. The theoretically expected flux from the molecular cloud exceeds the one observed by HESS by at least a factor of 3. While the size of this discrepancy might still seem acceptable in the face of various theoretical uncertainties, the result strongly suggests a strict spatial correlation of the cloud with an excess of TeV \gamma radiation. The observational lack of such correlations in the remnant reported by HESS is another counter-argument against the hadronic scenario. In combination, these arguments cannot be refuted by choosing particular parameters for the total energy or acceleration efficiency of the SNR.
We describe a simple experimental method to detect electron paramagnetic resonance (EPR) in a polycrystalline 2,2-diphenyl-1-picrylhydrazyl (DPPH) sample, the standard g-marker for EPR spectroscopy, without using a cavity resonator or a prefabricated waveguide. It is shown that microwave (MW) current injected into a layer of silver paint coated on an insulating DPPH sample is able to excite the paramagnetic resonance in DPPH. As the applied dc magnetic field H is swept, the high-frequency resistance of the Ag-paint layer, measured at room temperature with a single-port impedance analyzer in the MW frequency range 1 to 2.5 GHz, exhibits a sharp peak at a critical value of the dc field (H = Hres), while the reactance exhibits a dispersion-like behavior around the same field value for a given frequency. Hres increases linearly with the frequency of the MW current. We attribute the observed features in the impedance to EPR in DPPH driven by the Oersted magnetic field arising from the MW current in the Ag-paint layer. We also confirm the occurrence of EPR in DPPH independently using a coplanar waveguide-based broadband technique. This technique has the potential to investigate other EPR-active inorganic and organic compounds.
This paper develops a control approach with correctness guarantees for the simultaneous operation of lane keeping and adaptive cruise control. The safety specifications for these driver assistance modules are expressed in terms of set invariance. Control barrier functions are used to design a family of control solutions that guarantee the forward invariance of a set, which implies satisfaction of the safety specifications. The control barrier functions are synthesized through a combination of sum-of-squares program and physics-based modeling and optimization. A real-time quadratic program is posed to combine the control barrier functions with the performance-based controllers, which can be either expressed as control Lyapunov function conditions or as black-box legacy controllers. In both cases, the resulting feedback control guarantees the safety of the composed driver assistance modules in a formally correct manner. Importantly, the quadratic program admits a closed-form solution that can be easily implemented. The effectiveness of the control approach is demonstrated by simulations in the industry-standard vehicle simulator Carsim.
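As a rough illustration of why such a quadratic program admits a closed-form solution, consider the single-constraint case: minimizing the deviation from a desired input subject to one affine control barrier function condition reduces to a projection. The sketch below is a generic single-constraint safety filter under that assumed affine form a + b.u >= 0, not the paper's lane-keeping/cruise-control design.

```python
import numpy as np

def cbf_safety_filter(u_des, a, b):
    """Closed-form solution of the safety-filter QP
        min_u ||u - u_des||^2   s.t.   a + b @ u >= 0,
    where the affine inequality encodes a control barrier function
    condition (forward invariance of the safe set).
    """
    slack = a + b @ u_des
    if slack >= 0.0:
        return u_des                      # desired input already safe
    # Otherwise project u_des onto the constraint boundary a + b @ u = 0
    return u_des - (slack / (b @ b)) * b
```

With several barrier constraints active at once the projection is no longer a one-liner, but the QP remains small enough to solve (or precompute piecewise) in real time, which is the property the abstract highlights.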
We develop the technique of reduced word manipulation to give a range of results concerning reduced words and permutations more generally. We prove a broad connection between pattern containment and reduced words, which specializes to our previous work for vexillary permutations. We also analyze general tilings of Elnitsky's polygon, and demonstrate that these are closely related to the patterns in a permutation. Building on previous work for commutation classes, we show that reduced word enumeration is monotonically increasing with respect to pattern containment. Finally, we give several applications of this work. We show that a permutation and a pattern have equally many reduced words if and only if they have the same length (equivalently, the same number of 21-patterns), and that they have equally many commutation classes if and only if they have the same number of 321-patterns. We also apply our techniques to enumeration problems of pattern avoidance, and give a bijection between 132-avoiding permutations of a given length and partitions of that same size, as well as refinements of this data and a connection to the Catalan numbers.
We analyze low-energy e-N2 collisions within the framework of the Modified Effective Range Theory (MERT) for long-range potentials, developed by O'Malley, Spruch and Rosenberg [Journal of Math. Phys. 2, 491 (1961)]. In contrast to traditional MERT, we do not expand the total cross section in a series in the incident momentum \hbar k, but instead apply the exact analytical solutions of the Schrödinger equation for the long-range polarization potential, as proposed in the original formulation of O'Malley et al. This extends the applicability of MERT up to the few-eV regime, as we confirm using a simplified model potential of the electron-molecule interaction. The parameters of the effective-range expansion (i.e. the scattering length and the effective range) are determined from experimental integral elastic cross sections in the 0.1 - 1.0 eV energy range by a fitting procedure. Surprisingly, our treatment predicts a shape resonance that appears slightly higher in energy than the experimentally well-known resonance in the total cross section. Agreement with the experimentally observed shape resonance can be improved by fixing the position of the resonance in a given partial wave. The influence of the quadrupole potential on resonances is also discussed: we show that it can be disregarded for N2. In conclusion, the modified effective-range formalism, by treating the long-range part of the potential in an exact way, reproduces well both the very low-energy behavior of the integral cross section and the presence of resonances in the few-eV range.
On social media platforms, hateful and offensive language negatively impact the mental well-being of users and the participation of people from diverse backgrounds. Automatic methods to detect offensive language have largely relied on datasets with categorical labels. However, comments can vary in their degree of offensiveness. We create the first dataset of English language Reddit comments that has fine-grained, real-valued scores between -1 (maximally supportive) and 1 (maximally offensive). The dataset was annotated using Best--Worst Scaling, a form of comparative annotation that has been shown to alleviate known biases of using rating scales. We show that the method produces highly reliable offensiveness scores. Finally, we evaluate the ability of widely-used neural models to predict offensiveness scores on this new dataset.
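For readers unfamiliar with Best--Worst Scaling, the real-valued scores are typically obtained with a simple counting estimator. The sketch below assumes each annotation records the items shown together with the ones judged "best" and "worst" on the annotation question; this is the standard BWS setup, not necessarily the paper's exact schema.

```python
from collections import Counter

def bws_scores(annotations):
    """Convert Best-Worst Scaling annotations into real-valued scores.

    Each annotation is (items_shown, best_item, worst_item).
    The standard counting estimator scores each item as
    (fraction chosen best - fraction chosen worst),
    yielding values in [-1, 1] like the dataset described above.
    """
    best, worst, seen = Counter(), Counter(), Counter()
    for items, b, w in annotations:
        best[b] += 1
        worst[w] += 1
        for item in items:
            seen[item] += 1
    return {item: (best[item] - worst[item]) / seen[item] for item in seen}
```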
The paper establishes a relationship between finite separable extensions and norm groups of strictly quasilocal fields with Henselian discrete valuations, which yields a generally nonabelian one-dimensional local class field theory.
We review our understanding of the prototype ``Propeller'' system AE Aqr and we examine its flaring behaviour in detail. The flares are thought to arise from collisions between high density regions in the material expelled from the system after interaction with the rapidly rotating magnetosphere of the white dwarf. We show calculations of the time-dependent emergent optical spectra from the resulting hot, expanding ball of gas and derive values for the mass, lengthscale and temperature of the material involved. We see that the fits suggest that the secondary star in this system has reduced metal abundances and that, counter-intuitively, the evolution of the fireballs is best modelled as isothermal.
Bots are automated social media users that can be used to amplify (mis)information and sow harmful discourse. In order to effectively influence users, bots can be generated to reproduce human user behavior. Indeed, people tend to trust information coming from users with profiles that fit roles they expect to exist, such as users with gender role stereotypes. In this work, we examine differences in the types of identities in profiles of human and bot accounts, with a focus on combinations of identities that represent gender role stereotypes. We find that some types of identities differentiate between human and bot profiles, confirming that this approach can be useful in distinguishing between human and bot accounts on social media. However, contrary to our expectations, we reveal that gender bias is expressed more in human accounts than in bot accounts overall. Despite this lower overall gender bias, we provide examples of identities with strong associations with gender identities in bot profiles, such as those related to technology, finance, sports, and horoscopes. Finally, we discuss implications for designing constructive social media bot detection training materials.
Recent experimental developments towards obtaining a very precise value of the third neutrino mixing angle, $\theta_{13}$, are summarized. Various implications of the measured value of this angle are briefly discussed.
We characterize pairs of bounded Reinhardt domains in $\mathbb{C}^2$ between which there exists a proper holomorphic map and find all proper maps that are not elementary algebraic.
We study general Delaunay-graphs, which are natural generalizations of Delaunay triangulations to arbitrary families, in particular to pseudo-disks. We prove that for any finite pseudo-disk family and point set, there is a plane drawing of their Delaunay-graph such that every edge lies inside every pseudo-disk that contains its endpoints.
This paper presents an integrated perception and control approach to accomplish safe autonomous navigation in unknown environments. This is achieved by numerical optimization with constraint learning for instantaneous local control barrier functions (IL-CBFs) and goal-driven control Lyapunov functions (GD-CLFs). In particular, the constraints reflecting safety and task requirements are first online learned from perceptual signals, wherein IL-CBFs are learned to characterize potential collisions, and GD-CLFs are constructed to reflect incrementally discovered subgoals. Then, the learned IL-CBFs are united with GD-CLFs in the context of a quadratic programming optimization, whose feasibility is improved by enlarging the shared control space. Numerical simulations are conducted to reveal the effectiveness of our proposed safe feedback control strategy that could drive the mobile robot to safely reach the destination incrementally in an uncertain environment.
We consider a quantum emitter ("atom") radiating in a one-dimensional (1D) photonic waveguide in the presence of a single mirror, resulting in a delay differential equation for the atomic amplitude. We carry out a systematic analysis of the non-Markovian (NM) character of the atomic dynamics in terms of refined, recently developed notions of quantum non-Markovianity such as indivisibility and information back-flow. NM effects are quantified as a function of the round-trip time and phase shift associated with the atom-mirror optical path. We find, in particular, that unless an atom-photon bound state is formed a finite time delay is always required in order for NM effects to be exhibited. This identifies a finite threshold in the parameter space, which separates the Markovian and non-Markovian regimes.
Traditional cosmological inference using Type Ia supernovae (SNeIa) has used stretch- and color-corrected fits of SN Ia light curves and assumed a fiducial mean and symmetric intrinsic dispersion for the resulting relative luminosity. As systematics become the main contributors to the error budget, it has become imperative to expand supernova cosmology analyses to include a more general likelihood that models systematics, removing biases at the cost of some precision. To illustrate an example likelihood analysis, we use a simple model of two populations with a relative luminosity shift, independent intrinsic dispersions, and linear redshift evolution of the relative fraction of each population. Treating observationally viable two-population mock data with a one-population model results in an inferred dark energy equation-of-state parameter $w$ that is biased by roughly 2 times its statistical error for a sample of N $\gtrsim$ 2500 SNeIa. Modeling the two-population data with a two-population model removes this bias at the cost of a $\sim20\%$ increase in the statistical uncertainty on $w$. These significant biases can be realized even if the support for two underlying SNeIa populations, in the form of model selection criteria, is inconclusive. With the current observationally-estimated difference between the two proposed populations, a sample of N $\gtrsim$ 10,000 SNeIa is necessary to yield conclusive evidence of two populations.
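A minimal sketch of the kind of two-population likelihood described here, assuming Gaussian Hubble residuals, a luminosity shift between the populations, and a population fraction linear in redshift; the parameter names are illustrative, not the paper's.

```python
import numpy as np

def two_pop_loglike(resid, z, delta, sigma1, sigma2, f0, f1):
    """Log-likelihood of Hubble residuals under a two-population toy
    model: two Gaussian populations separated by a luminosity shift
    `delta`, with independent intrinsic dispersions sigma1, sigma2,
    and a population-1 fraction evolving linearly with redshift.
    """
    f = np.clip(f0 + f1 * z, 0.0, 1.0)    # fraction in population 1
    g1 = np.exp(-0.5 * (resid / sigma1) ** 2) / (sigma1 * np.sqrt(2 * np.pi))
    g2 = np.exp(-0.5 * ((resid - delta) / sigma2) ** 2) / (sigma2 * np.sqrt(2 * np.pi))
    return np.sum(np.log(f * g1 + (1.0 - f) * g2))
```

Setting delta = f1 = 0 and sigma1 = sigma2 recovers the one-population likelihood, which is what makes the bias comparison in the abstract well posed.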
In this paper, we consider the problem of incentive mechanism design for smartphone crowd-sourcing. Each user participating in crowd-sourcing submits a set of tasks it can accomplish and its corresponding bid. The platform then selects the users and their payments to maximize its utility while ensuring truthfulness, individual rationality, profitability, and polynomial algorithm complexity. Both the offline and the online scenarios are considered: in the offline case, all users submit their profiles simultaneously, while in the online case they do so sequentially, and the decision whether to accept or reject each user is made instantaneously with no revocation. The proposed algorithms for both the offline and the online case are shown to satisfy all four desired properties of an efficient auction. Through extensive simulation, the performance of the offline and online algorithms is also compared.
We derive a simple rule to determine surface plasmon energies, based on the geometrical properties of the surface of the metal. We apply this concept to obtain the surface plasmon energies in wedges, corners and conical tips. The results presented here provide simple and straightforward rules for designing the energy of surface plasmons in several situations of experimental interest, such as plasmon wave guiding and tip-enhanced spectroscopies.
We present a way of exciting surface plasmon polaritons along non-patterned metallic surfaces by means of a flat squeezing slab designed with transformation optics. The slab changes the dispersion relation of incident light, enabling evanescent coupling to propagating surface plasmons. Unlike prism couplers, the proposed device does not introduce reflections at its input interface. Moreover, its compact geometry is suitable for integration. A feasible dielectric implementation of the coupler is suggested. Finally, we show that the angular response of the device can be engineered by using a non-uniform compression factor. As an example, we design a coupler with a half-power angular bandwidth 2.5 times higher than that of a conventional dielectric coupler.
Fine-tuning large-scale pretrained models is prohibitively expensive in terms of computational and memory costs. LoRA, one of the most popular Parameter-Efficient Fine-Tuning (PEFT) methods, offers a cost-effective alternative by fine-tuning an auxiliary low-rank model that has significantly fewer parameters. Although LoRA significantly reduces the computational and memory requirements at each iteration, extensive empirical evidence indicates that it converges at a considerably slower rate than full fine-tuning, ultimately leading to increased overall compute and often worse test performance. In our paper, we perform an in-depth investigation of the initialization method of LoRA and show that careful initialization (without any change to the architecture or the training algorithm) can significantly enhance both efficiency and performance. In particular, we introduce a novel initialization method, LoRA-GA (Low Rank Adaptation with Gradient Approximation), which aligns the gradients of the low-rank matrix product with those of full fine-tuning at the first step. Our extensive experiments demonstrate that LoRA-GA achieves a convergence rate comparable to that of full fine-tuning (hence being significantly faster than vanilla LoRA as well as various recent improvements) while simultaneously attaining comparable or even better performance. For example, on a subset of the GLUE dataset with T5-Base, LoRA-GA outperforms LoRA by 5.69% on average. On larger models such as Llama 2-7B, LoRA-GA shows performance improvements of 0.34, 11.52%, and 5.05% on MT-bench, GSM8K, and Human-eval, respectively. Additionally, we observe up to a 2-4x improvement in convergence speed compared to vanilla LoRA, validating its effectiveness in accelerating convergence and enhancing model performance. Code is available at https://github.com/Outsider565/LoRA-GA.
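The gradient-alignment idea can be sketched as follows: take the full fine-tuning gradient of a weight at the first step, initialize the LoRA factors from its leading singular directions, and offset the frozen weight so the initial output is unchanged. This is a sketch of the idea only; the exact slicing and scaling of singular vectors used by LoRA-GA may differ (see the linked repository for the authors' implementation).

```python
import torch

def gradient_aligned_init(W, G, r, alpha=1.0):
    """Sketch of gradient-aligned LoRA initialization.

    W : (m, n) pretrained weight
    G : (m, n) full fine-tuning gradient of W at the first step
    r : LoRA rank

    Initializes B (m, r) and A (r, n) from the leading singular
    directions of G, so the first low-rank update aligns with the
    full-fine-tuning gradient, then offsets W so that the initial
    output of (W_offset + B @ A) matches the pretrained model.
    """
    U, S, Vh = torch.linalg.svd(G, full_matrices=False)
    B = alpha * U[:, :r] * torch.sqrt(S[:r])            # (m, r)
    A = alpha * torch.sqrt(S[:r])[:, None] * Vh[:r]     # (r, n)
    W_offset = W - B @ A          # keep the initial output unchanged
    return W_offset, B, A
```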
In this work we generalize and combine Gibbs and von Neumann approaches to build, for the first time, a rigorous definition of entropy for hybrid quantum-classical systems. The resulting function coincides with the two cases above when the suitable limits are considered. Then, we apply the MaxEnt principle for this hybrid entropy function and obtain the natural candidate for the Hybrid Canonical Ensemble (HCE). We prove that the suitable classical and quantum limits of the HCE coincide with the usual classical and quantum canonical ensembles since the whole scheme admits both limits, thus showing that the MaxEnt principle is applicable and consistent for hybrid systems.
Abridged: ATLASGAL is an unbiased 870 micron submillimetre survey of the inner Galactic plane. It provides a large and systematic inventory of all massive, dense clumps in the Galaxy (>1000 Msun) and includes representative samples of all embedded stages of high-mass star formation. Here we present the first detailed census of the properties (velocities, distances, luminosities and masses) and spatial distribution of a complete sample of ~8000 dense clumps located in the Galactic disk. We derive highly reliable velocities and distances for ~97% of the sample and use mid- and far-infrared survey data to develop an evolutionary classification scheme that we apply to the whole sample. Comparing the evolutionary subsamples reveals trends of increasing dust temperatures, luminosities and line-widths as a function of evolution, indicating that the feedback from the embedded proto-clusters is having a significant impact on the structure and dynamics of their natal clumps. We find that 88 per cent are already associated with star formation at some level. We also find the clump mass to be independent of evolution, suggesting that the clumps form with the majority of their mass in-situ. We estimate the statistical lifetime of the quiescent stage to be ~5 x 10^4 yr for clump masses ~1000 Msun, decreasing to ~1 x 10^4 yr for clump masses >10000 Msun. We find a strong correlation between the fraction of clumps associated with massive stars and peak column density. The fraction is initially small at low column densities but reaches 100 per cent for column densities above 10^{23} cm^{-2}; there are no clumps with column densities above this value that are not already associated with massive star formation. All of the evidence is consistent with a dynamic view of star formation wherein the clumps form rapidly and are initially very unstable, so that star formation quickly ensues.
Continuous tracking of boxers across multiple training sessions helps quantify traits required for the well-known ten-point-must system. However, continuous tracking of multiple athletes across multiple training sessions remains a challenge, because it is difficult to precisely segment bout boundaries in a recorded video stream. Furthermore, re-identification of the same athlete over different periods, or even within the same bout, remains a challenge. Difficulties are further compounded when a single fixed-view video is captured from the top view. This work summarizes our progress in creating such a system using an economical single fixed top-view camera. Specifically, we describe improved algorithms for bout transition detection and in-bout continuous player identification without erroneous ID updates or ID switches. On our custom collected data of ~11 hours (athlete count: 45, bouts: 189), our transition detection algorithm achieves 90% accuracy and continuous ID tracking achieves IDU = 0, IDS = 0.
Pruning has emerged as a promising approach for compressing large-scale models, yet its effectiveness in recovering the sparsest of models has not yet been explored. We conducted an extensive series of 485,838 experiments, applying a range of state-of-the-art pruning algorithms to a synthetic dataset we created, named the Cubist Spiral. Our findings reveal a significant gap in performance compared to ideal sparse networks, which we identified through a novel combinatorial search algorithm. We attribute this performance gap to current pruning algorithms' poor behaviour under overparameterization, their tendency to induce disconnected paths throughout the network, and their propensity to get stuck at suboptimal solutions, even when given the optimal width and initialization. This gap is concerning, given the simplicity of the network architectures and datasets used in our study. We hope that our research encourages further investigation into new pruning techniques that strive for true network sparsity.
We study the ergodic properties of excited states in a model of interacting fermions in quasi-one-dimensional chains subjected to a random vector potential. In the noninteracting limit, we show that arbitrarily small values of this complex off-diagonal disorder trigger localization for the whole spectrum; the divergence of the localization length in the single-particle basis is characterized by a critical exponent $\nu$ which depends on the energy density being investigated. When short-range interactions are included, the localization is lost, and the system is ergodic regardless of the magnitude of disorder in finite chains. Our numerical results suggest a delocalization scheme for arbitrary small values of interactions. This finding indicates that the standard scenario of the many-body localization cannot be obtained in a model with random gauge fields.
The lasso has become an important practical tool for high dimensional regression as well as the object of intense theoretical investigation. But despite the availability of efficient algorithms, the lasso remains computationally demanding in regression problems where the number of variables vastly exceeds the number of data points. A much older method, marginal regression, largely displaced by the lasso, offers a promising alternative in this case. Computation for marginal regression is practical even when the dimension is very high. In this paper, we study the relative performance of the lasso and marginal regression for regression problems in three different regimes: (a) exact reconstruction in the noise-free and noisy cases when design and coefficients are fixed, (b) exact reconstruction in the noise-free case when the design is fixed but the coefficients are random, and (c) reconstruction in the noisy case where performance is measured by the number of coefficients whose sign is incorrect. In the first regime, we compare the conditions for exact reconstruction of the two procedures, find examples where each procedure succeeds while the other fails, and characterize the advantages and disadvantages of each. In the second regime, we derive conditions under which marginal regression will provide exact reconstruction with high probability. And in the third regime, we derive rates of convergence for the procedures and offer a new partitioning of the ``phase diagram'' that shows when exact or Hamming reconstruction is effective.
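To make the computational contrast concrete, here is a minimal sketch of both procedures: marginal regression needs a single matrix-vector product, while the lasso solves a convex program over all columns. The support size k and regularization strength alpha are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

def marginal_regression_support(X, y, k):
    """Marginal regression: rank variables by |X_j^T y| and keep the
    top k. One matrix-vector product, practical even when the number
    of variables vastly exceeds the number of observations.
    """
    scores = np.abs(X.T @ y)
    return np.argsort(scores)[::-1][:k]

def lasso_support(X, y, alpha=0.1):
    """For comparison: the lasso solves a convex optimization over
    all columns jointly and returns the nonzero coefficients.
    """
    coef = Lasso(alpha=alpha).fit(X, y).coef_
    return np.nonzero(coef)[0]
```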
The technique of muon spin rotation ({\mu}SR) has emerged in the last few decades as one of the most powerful methods of obtaining local magnetic information. To make the technique fully quantitative, it is necessary to have an accurate estimate of where inside the crystal structure the muon implants. This can be provided by density functional theory calculations using an approach that is termed DFT+{\mu}, density functional theory with the implanted muon included. This article reviews this approach, describes some recent successes in particular {\mu}SR experiments, and suggests some avenues for future exploration.
Exponential random graph models (ERGMs) are flexible probability models allowing edge dependency. However, it is known that, to a first-order approximation, many ERGMs behave like Erd\H{o}s-R\'enyi random graphs, where edges are independent. In this paper, to distinguish ERGMs from Erd\H{o}s-R\'enyi random graphs, we consider second-order approximations of ERGMs using two-stars and triangles. We prove that the second-order approximation indeed achieves second-order accuracy in the triangle-free case. The new approximation is formally obtained by Hoeffding decomposition and rigorously justified using Stein's method.
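The two statistics entering the second-order approximation are simple subgraph counts; a minimal sketch for a simple undirected graph follows, using standard adjacency-matrix identities (this illustrates the statistics, not the paper's approximation itself).

```python
import numpy as np

def ergm_statistics(A):
    """Subgraph counts used in the second-order ERGM approximation:
    edges, two-stars and triangles of a simple undirected graph.

    A : symmetric 0/1 adjacency matrix with zero diagonal.
    """
    deg = A.sum(axis=1)
    edges = deg.sum() / 2
    two_stars = np.sum(deg * (deg - 1)) / 2   # paths of length 2
    triangles = np.trace(A @ A @ A) / 6       # each triangle counted 6x
    return edges, two_stars, triangles
```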
Axions play a central role in many realizations of large field models of inflation and in recent alternative mechanisms for generating primordial tensor modes in small field models. If these axions couple to gauge fields, the coupling produces a tachyonic instability that leads to an exponential enhancement of the gauge fields, which in turn can decay into observable scalar or tensor curvature perturbations. Thus, a fully self-consistent treatment of axions during inflation is important, and in this work we discuss the perturbative constraints on axions coupled to gauge fields. We show how the recent proposal of generating tensor modes through these alternative mechanisms is in tension with perturbation theory in the in-in formalism. Interestingly, we point out that the constraints are parametrically weaker than one would estimate based on naive power counting of propagators of the gauge field. In the case of non-Abelian gauge fields, we derive new constraints on the size of the gauge coupling, which apply also in certain models of natural large field inflation, such as alignment mechanisms.
We study the twisted bosonization of the massive Thirring model in Moyal spacetime, relating it to the sine-Gordon model using twisted commutation relations. We obtain the relevant twisted bosonization rules. We show that there exists a dual relationship between twisted bosonic and fermionic operators. The strong-weak duality is also observed to be preserved, as in its commutative counterpart.
We establish the gravitational detectability of a Dirac monopole using a weak-field limit of general relativity, which can be developed from the Newtonian gravitational potential by including energy as a source. The resulting potential matches (by construction) the weak-field limit of two different solutions to Einstein's equations of general relativity: one associated with the magnetically monopolar spray of field lines emerging from the half-infinite solenoid that makes up the Dirac monopole, the other associated with the field-energetic source of the solenoid itself (the Dirac string). The string's gravitational effect dominates, and we suggest that the primary strong-field contribution of the Dirac configuration is that of a half-infinite line of energy, whose GR solution is known.
Washing hands, social distancing and staying at home are the preventive measures set in place to contain the spread of COVID-19, a disease caused by SARS-CoV-2. These measures, although straightforward to follow, highlight the tip of an imbalanced socio-economic and socio-technological iceberg. Here, a System Dynamics (SD) model of COVID-19 preventive measures and their correlation with the 17 Sustainable Development Goals (SDGs) is presented. The result provides a better-informed view of the COVID-19 vulnerability landscape. This novel qualitative approach refreshes debates on the future of the SDGs amid the crisis and provides a powerful mental representation for decision makers to find leverage points that aid in preventing long-term disruptive impacts of this health crisis on people, planet and economy. There is a need for further tailor-made, real-time qualitative and quantitative scientific research to calibrate the criticality of meeting the SDG targets in different countries according to ongoing lessons learned from this health crisis.
We solve the O(n) model, defined in terms of self- and mutually avoiding loops coexisting with voids, on a 3-simplex fractal lattice, using an exact real space renormalization group technique. As the density of voids is decreased, the model shows a critical point, and for even lower densities of voids, there is a dense phase showing power-law correlations, with critical exponents that depend on n, but are independent of density. At n=-2 on the dilute branch, a trivalent vertex defect acts as a marginal perturbation. We define a model of biconnected clusters which allows for a finite density of such vertices. As n is varied, we get a line of critical points of this generalized model, emanating from the point of marginality in the original loop model. We also study another perturbation of adding local bending rigidity to the loop model, and find that it does not affect the universality class.
Starting from the effective torsion space-time model, we study its effects on the top-pair production cross section at hadron colliders. We also study the effect of this model on top-pair asymmetries at the Tevatron and the LHC. We find that space-time torsion can explain the anomalous forward-backward asymmetry measured at the Tevatron. We find an allowed region in parameter space which simultaneously satisfies all $t\bar{t}$ observables measured at the Tevatron and the LHC.
Longitudinal fMRI datasets hold great promise for the study of neurodegenerative diseases, but realizing their potential depends on extracting accurate fMRI-based brain measures in individuals over time. This is especially true for rare, heterogeneous and/or rapidly progressing diseases, which often involve small samples whose functional features may vary dramatically across subjects and over time, making traditional group-difference analyses of limited utility. One such disease is ALS, which results in extreme motor function loss and eventual death. Here, we analyze a rich longitudinal dataset containing 190 motor task fMRI scans from 16 ALS patients and 22 age-matched HCs. We propose a novel longitudinal extension to our cortical surface-based spatial Bayesian GLM, which has high power and precision to detect activations in individuals. Using a series of longitudinal mixed-effects models to subsequently study the relationship between activation and disease progression, we observe an inverted U-shaped trajectory: at relatively mild disability we observe enlarging activations, while at higher disability we observe severely diminished activation, reflecting progression toward complete motor function loss. We observe distinct trajectories depending on clinical progression rate, with faster progressors exhibiting more extreme hyper-activation and subsequent hypo-activation. These differential trajectories suggest that initial hyper-activation is likely attributable to loss of inhibitory neurons. By contrast, earlier studies employing more limited sampling designs and using traditional group-difference analysis approaches were only able to observe the initial hyper-activation, which was assumed to be due to a compensatory process. This study provides a first example of how surface-based spatial Bayesian modeling furthers scientific understanding of neurodegenerative disease.
Dynamic affinity scheduling has been an open problem for nearly three decades. The problem is to dynamically schedule multi-type tasks to multi-skilled servers such that the resulting queueing system is both stable in the capacity region (throughput optimality) and the mean delay of tasks is minimized at high loads near the boundary of the capacity region (heavy-traffic optimality). As for applications, data-intensive analytics frameworks like MapReduce, Hadoop, and Dryad fit into this setting, where the set of servers is heterogeneous for different task types, so the pair of task type and server determines the processing rate of the task. The load balancing algorithm used in such frameworks is an example of affinity scheduling, which is desired to be both robust and delay optimal at high loads when hot-spots occur. Fluid model planning, the MaxWeight algorithm, and the generalized $c\mu$-rule are among the first algorithms proposed for affinity scheduling that have theoretical guarantees on being optimal in different senses, which will be discussed in the related work section. None of these algorithms is practical for data center applications because of their unrealistic assumptions. The join-the-shortest-queue-MaxWeight (JSQ-MaxWeight), JSQ-Priority, and weighted-workload algorithms are examples of load balancing policies for systems with two and three levels of data locality with a rack structure. In this work, we propose the Generalized-Balanced-Pandas algorithm (GB-PANDAS) for a system with multiple levels of data locality and prove its throughput optimality. We prove this result under an arbitrary distribution for service times, whereas most previous theoretical work assumes a geometric distribution for service times. Extensive simulation results show that the GB-PANDAS algorithm alleviates the mean delay and outperforms the JSQ-MaxWeight algorithm by a factor of two.
It is shown that the two-step excitation scheme typically used to create an ultracold Rydberg gas can be described with an effective two-level rate equation, greatly reducing the complexity of the optical Bloch equations. This allows us to solve the many-body problem of interacting cold atoms with a Monte Carlo technique. Our results reproduce the Rydberg blockade effect. However, we demonstrate that an Autler-Townes double peak structure in the two-step excitation scheme, which occurs for moderate pulse lengths as used in the experiment, can give rise to an antiblockade effect. It is observable in a lattice gas with regularly spaced atoms. Since the antiblockade effect is robust against a large number of lattice defects it should be experimentally realizable with an optical lattice created by CO$_{2}$ lasers.
Human social dilemmas are often shaped by actions involving uncertain goals and returns that may only be achieved in the future. Climate action, voluntary vaccination and other prospective choices stand as paramount examples of this setting. In this context, as well as in many other social dilemmas, uncertainty may produce non-trivial effects. Whereas uncertainty about collective targets and their impact were shown to negatively affect group coordination and success, no information is available about timing uncertainty, i.e. how uncertainty about when the target needs to be reached affects the outcome as well as the decision-making. Here we show experimentally, through a collective dilemma wherein groups of participants need to avoid a tipping point under the risk of collective loss, that timing uncertainty prompts not only early generosity but also polarized contributions, in which participants' total contributions are distributed more unfairly than when there is no uncertainty. Analyzing participant behavior reveals, under uncertainty, an increase in reciprocal strategies wherein contributions are conditional on the previous donations of the other participants, a group analogue of the well-known Tit-for-Tat strategy. Although large timing uncertainty appears to reduce collective success, groups that successfully collect the required amount show strong reciprocal coordination. This conclusion is supported by a game theoretic model examining the dominance of behaviors in case of timing uncertainty. In general, timing uncertainty casts a shadow on the future that leads participants to respond early, encouraging reciprocal behaviors, and unequal contributions.
We develop a framework for deriving Dyson-Schwinger Equations (DSEs) and Bethe-Salpeter Equation (BSE) in QCD at large $N_c$ limit. The starting point is a modified form (with auxiliary fields) of QCD generating functional. This framework provides a natural order-by-order truncation scheme for DSEs and BSE, and the kernels of the equations up to any order are explicitly given. Chiral symmetry (at chiral limit) is preserved in any order truncation, so it exemplifies the symmetry preserving truncation scheme. It provides a method to study DSEs and BSE beyond the Rainbow-Ladder truncation, and is especially useful to study contributions from non-Abelian dynamics (those arise from gluon self-interactions). We also derive the equation for the quark-ghost scattering kernel, and discuss the Slavnov-Taylor identity connecting the quark-gluon vertex, the quark propagator and the quark-ghost scattering kernel.
In inertial microfluidics, lift forces cause a particle to migrate across streamlines to specific positions in the cross section of a microchannel. We control the rotational motion of a particle and demonstrate that this allows one to manipulate the lift-force profile and thereby the particle's equilibrium positions. We perform two-dimensional simulation studies using the method of multi-particle collision dynamics. Particles with unconstrained rotational motion occupy stable equilibrium positions in both halves of the channel, while the center is unstable. When an external torque is applied to the particle, two equilibrium positions annihilate in a saddle-node bifurcation and only one stable fixed point remains, so that all particles move to one side of the channel. In contrast, non-rotating particles accumulate in the center and are pushed into one half of the channel when the angular velocity is fixed to a non-zero value.
We discuss the dynamical effects of bulk viscosity and particle creation on the early evolution of the Friedmann-Robertson-Walker model in the framework of open thermodynamical systems. We consider bulk viscosity and particle creation as separate irreversible processes. Exact solutions of the Einstein field equations are obtained by using the "gamma-law" equation of state $p=(\gamma -1)\rho$, where the adiabatic parameter $\gamma$ varies with the scale factor of the metric. We use the cosmological model to study the early phases of the evolution of the universe as it goes from an inflationary phase to a radiation-dominated era in the presence of bulk viscosity and particle creation. Analytical solutions are obtained for particle number density and entropy for all models. It is seen that, by choosing appropriate functions for the particle creation rate and the bulk viscous coefficient, the models exhibit singular and non-singular beginnings.
Shock tubes are commonly employed to test candidate armor materials, validate numerical models, and conduct simulated blast experiments in animal models. As DoD interests desire to field wearable sensors as blast dosimeters, shock tubes may also serve for calibration and testing of these devices. The high blast pressures needed for experimental testing of candidate armors are unnecessary to test these sensors, so an inexpensive, efficient, and easily available way of testing these pressure sensors is desirable. It is known that suddenly releasing compressed gas can create a repeatable shock front, and the pressures can be finely tuned by changing the pressure to which the gas is compressed. A Crosman 0.177 caliber air pistol was used (without loading any pellets) to compress and release air at one end of a 24 inch long, 3/4 inch diameter standard pipe nipple to simulate a blast wave at the other end of the tube. A variable number of pumps was used to vary the peak blast pressure. As expected, the trials where 10 pumps were used to compress the air produced the largest average peak pressure, 101.99 kPa (+/- 2.63 kPa). The design with 7 pumps had the second-largest average peak pressure, 89.11 kPa (+/- 1.77 kPa), and the design with 5 pumps the third-largest, 78.80 kPa (+/- 1.74 kPa). Three pumps produced an average peak pressure of 61.37 kPa (+/- 2.20 kPa), and two pumps an average peak pressure of 48.11 kPa (+/- 1.57 kPa). The design with just 1 pump had the smallest average peak pressure, 30.13 kPa (+/- 0.79 kPa). This inexpensive shock tube design had a shot-to-shot cycle time of between 30 and 45 seconds.
Background: A critical step in effective care and treatment planning for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the cause of the COVID-19 pandemic, is the assessment of the severity of disease progression. Chest x-rays (CXRs) are often used to assess SARS-CoV-2 severity, with two important assessment metrics being extent of lung involvement and degree of opacity. In this proof-of-concept study, we assess the feasibility of computer-aided scoring of CXRs of SARS-CoV-2 lung disease severity using a deep learning system. Materials and Methods: Data consisted of 396 CXRs from SARS-CoV-2 positive patient cases. Geographic extent and opacity extent were scored by two board-certified expert chest radiologists (with 20+ years of experience) and a 2nd-year radiology resident. The deep neural networks used in this study, which we name COVID-Net S, are based on a COVID-Net network architecture. 100 versions of the network were independently learned (50 to perform geographic extent scoring and 50 to perform opacity extent scoring) using random subsets of CXRs from the study, and we evaluated the networks using stratified Monte Carlo cross-validation experiments. Findings: The COVID-Net S deep neural networks yielded R$^2$ of 0.664 $\pm$ 0.032 and 0.635 $\pm$ 0.044 between predicted scores and radiologist scores for geographic extent and opacity extent, respectively, in stratified Monte Carlo cross-validation experiments. The best performing networks achieved R$^2$ of 0.739 and 0.741 between predicted scores and radiologist scores for geographic extent and opacity extent, respectively. Interpretation: The results are promising and suggest that the use of deep neural networks on CXRs could be an effective tool for computer-aided assessment of SARS-CoV-2 lung disease severity, although additional studies are needed before adoption for routine clinical use.
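Stratified Monte Carlo cross-validation of the kind described can be sketched as repeated random stratified splits with a fresh model trained per split; the `model_factory` callable and the binning of severity scores below are illustrative stand-ins, not the paper's COVID-Net S pipeline.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import r2_score

def stratified_monte_carlo_cv(model_factory, X, scores, bins, n_splits=50):
    """Repeatedly draw random train/test splits stratified on binned
    severity scores, train a fresh model per split, and report the
    mean and spread of R^2 between predicted and expert scores.
    """
    splitter = StratifiedShuffleSplit(n_splits=n_splits, test_size=0.2)
    r2s = []
    for train_idx, test_idx in splitter.split(X, bins):
        model = model_factory()                       # fresh network per split
        model.fit(X[train_idx], scores[train_idx])
        r2s.append(r2_score(scores[test_idx], model.predict(X[test_idx])))
    return np.mean(r2s), np.std(r2s)
```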
In a bicategory of spans (an example of a 'generic bicategory') the factorization of a span (s,t) as the span (s,1) followed by (1,t) satisfies a simple universal property with respect to all factorizations in terms of the generic bicategory structure. Here we show that this universal property can in fact be used to characterize bicategories of spans. This characterization of spans is very different from the others in that it does not mention any adjointness conditions within the bicategory.
In this paper we quantify the trade-off between setups optimized to be ancillary to Phase II Superbeams or Neutrino Factories and experiments tuned for maximal sensitivity to the subdominant terms of the neutrino transition probability at the atmospheric scale (``maximum discovery potential''). In particular, the $\theta_{13}$ sensitivity is computed for both Phase I superbeams (JHF-SK and NuMI Off-Axis) and next generation long baseline experiments (ICARUS, OPERA and MINOS). It is shown that Phase I experiments cannot reach a sensitivity able to ground (or discourage in a definitive manner) the building of Phase II projects and that, in case of a null result and without a dedicated $\bar{\nu}$ run, this capability is almost saturated by high energy beams like CNGS, especially for high values of the ratio $\Delta m^2_{21}/|\Delta m^2_{31}|$.
In this short article we obtain non-asymptotic exact estimates for the norm of the (generalized) weighted Hardy-Littlewood average integral operator in the so-called Bilateral Grand Lebesgue Spaces. We also give examples showing the sharpness of these inequalities.
Astronomical data take on a multitude of forms -- catalogs, data cubes, images, and simulations. The availability of software for rendering high-quality three-dimensional graphics lends itself to the paradigm of exploring the incredible parameter space afforded by the astronomical sciences. The software program Blender gives astronomers a useful tool for displaying data in a manner used by three-dimensional (3D) graphics specialists and animators. The interface to this popular software package is introduced with attention to features of interest in astronomy. An overview of the steps for generating models, textures, animations, camera work, and renders is outlined. An introduction is presented on the methodology for producing animations and graphics with a variety of astronomical data. Examples from sub-fields of astronomy with different kinds of data are shown with resources provided to members of the astronomical community. An example video showcasing the outlined principles and features is provided along with scripts and files for sample visualizations.
A binary constraint system game is a two-player one-round non-local game defined by a system of Boolean constraints. The game has a perfect quantum strategy if and only if the constraint system has a quantum satisfying assignment [R. Cleve and R. Mittal, arXiv:1209.2729]. We show that several concepts including the quantum chromatic number and the Kochen-Specker sets that arose from different contexts fit naturally in the binary constraint system framework. The structure and complexity of the quantum satisfiability problems for these constraint systems are investigated. Combined with a new construct called the commutativity gadget for each problem, several classic NP-hardness reductions are lifted to their corresponding quantum versions. We also provide a simple parity constraint game that requires $\Omega(\sqrt{n})$ EPR pairs in perfect strategies where $n$ is the number of variables in the constraint system.
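For parity (XOR) constraint systems like the parity game mentioned, classical satisfiability can be checked by brute force over deterministic assignments; the sketch below uses a generic (indices, parity) encoding, which is an assumed schema rather than the paper's notation.

```python
from itertools import product

def classically_satisfiable(constraints, n):
    """Brute-force check for a classical (deterministic) satisfying
    assignment of a binary constraint system over n {0,1}-variables.

    Each constraint is (indices, parity): the XOR of the listed
    variables must equal `parity`. The Mermin-Peres magic square is
    a parity system of this kind with no classical solution yet a
    perfect quantum strategy, illustrating the gap the paper studies.
    """
    for assignment in product((0, 1), repeat=n):
        if all(sum(assignment[i] for i in idx) % 2 == parity
               for idx, parity in constraints):
            return True
    return False
```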
We report drastically different onset temperatures of the reentrant integer quantum Hall states in the second and third Landau level. This finding is in quantitative disagreement with the Hartree-Fock theory of the bubble phases which is thought to describe these reentrant states. Our results indicate that the number of electrons per bubble in either the second or the third Landau level is likely different than predicted.
Direct imaging observations of planets revealed that wide-orbit ($>10$ au) giant planets exist even around subsolar-metallicity host stars and do not require metal-rich environments for their formation. A possible formation mechanism of wide-orbit giant planets in subsolar-metallicity environments is the gravitational fragmentation of massive protoplanetary discs. Here, we follow the long-term evolution of the disc for 1 Myr after its formation, which is comparable to disc lifetime, by way of a two-dimensional thin-disc hydrodynamic simulation with the metallicity of 0.1 ${\rm Z}_{\odot}$. We find a giant protoplanet that survives until the end of the simulation. The protoplanet is formed by the merger of two gaseous clumps at $\sim$0.5 Myr after disc formation, and then it orbits $\sim$200 au from the host star for $\sim$0.5 Myr. The protoplanet's mass is $\sim$10 ${\rm M}_{\rm J}$ at birth and gradually decreases to 1 ${\rm M}_{\rm J}$ due to the tidal effect from the host star. The result provides the minimum mass of 1 ${\rm M}_{\rm J}$ for protoplanets formed by gravitational instability in a subsolar-metallicity disc. We anticipate that the mass of a protoplanet experiencing reduced mass loss thanks to the protoplanetary contraction in higher resolution simulations can increase to $\sim$10 ${\rm M}_{\rm J}$. We argue that the disc gravitational fragmentation would be a promising pathway to form wide-orbit giant planets with masses of $\ge1$ ${\rm M}_{\rm J}$ in subsolar-metallicity environments.
The main theme of the paper is the dynamics of Hamiltonian diffeomorphisms of ${\mathbb C}{\mathbb P}^n$ with the minimal possible number of periodic points (equal to $n+1$ by Arnold's conjecture), called here Hamiltonian pseudo-rotations. We prove several results on the dynamics of pseudo-rotations going beyond periodic orbits, using Floer theoretical methods. One of these results is the existence of invariant sets in arbitrarily small punctured neighborhoods of the fixed points, partially extending a theorem of Le Calvez and Yoccoz and Franks to higher dimensions. The other is a strong variant of the Lagrangian Poincar\'e recurrence conjecture for pseudo-rotations. We also prove the $C^0$-rigidity of pseudo-rotations with exponentially Liouville mean index vector. This is a higher-dimensional counterpart of a theorem of Bramham establishing such rigidity for pseudo-rotations of the disk.
S.C. Zhang has put forward the idea that high-temperature superconductors can be described in the framework of an SO(5)-symmetric theory in which the three components of the antiferromagnetic order-parameter and the two components of the two-particle condensate form a five-component order-parameter with SO(5) symmetry. Interactions small in comparison to this strong interaction introduce anisotropies into the SO(5)-space and determine whether it is favorable for the system to be superconducting or antiferromagnetic. Here the view is expressed that Zhang's derivation of the effective interaction V_{eff} based on his Hamiltonian H_a is not correct. However, the orthogonality constraints introduced several pages after this 'derivation' give the key to an effective interaction very similar to that given by Zhang. It is shown that the orthogonality constraints are not rigorous constraints, but that they maximize the entropy at finite temperature. If the interaction drives the ground-state to the largest possible eigenvalues of the operators under consideration (antiferromagnetic ordering, superconducting condensate, etc.), then the orthogonality constraints are obeyed by the ground-state, too.
We perform preliminary studies on a large longitudinal face database, MORPH-II, which is a benchmark dataset in the field of computer vision and pattern recognition. First, we summarize the inconsistencies in the dataset and introduce the steps and strategy taken for cleaning; the potential implications of these inconsistencies for prior research are also discussed. Next, we propose a new automatic subsetting scheme for the evaluation protocol. It is intended to overcome the unbalanced racial and gender distributions of MORPH-II, while ensuring independence between training and testing sets. Finally, we contribute a novel global framework for age estimation that utilizes posterior probabilities from the race classification step to compute a race-composite age estimate. Preliminary experimental results on MORPH-II are presented.
Calculations of the expected energy spectra of inverse $\beta$-decay events in large-volume liquid scintillation detectors, for antineutrinos produced by a $^{144}$Ce -- $^{144}$Pr artificial source, have been performed. The calculations were carried out with the Monte Carlo method within the GEANT4.10 framework and were aimed at the search for neutrino oscillation to a sterile eigenstate with a mass of about 1 eV. An analysis of the relative sensitivity to the oscillation parameters for different detector shapes has been performed.
We give a criterion of classicality for mixed states in terms of expectation values of a quantum observable. Using group representation theory we identify all cases when the criterion can be computed exactly in terms of the spectrum of a single operator.
The dynamic nature of resource allocation and runtime conditions in the cloud can result in high variability in a job's runtime across multiple iterations, leading to a poor user experience. Identifying the sources of such variation and being able to predict and adjust for them is crucial for cloud service providers to design reliable data processing pipelines, provision and allocate resources, adjust service pricing, meet SLOs, and debug performance hazards. In this paper, we analyze the runtime variation of millions of production SCOPE jobs on Cosmos, an exabyte-scale internal analytics platform at Microsoft. We propose an innovative 2-step approach to predict job runtime distributions by characterizing typical distribution shapes combined with a classification model with an average accuracy of >96%, outperforming traditional regression models and better capturing long tails. We examine factors such as job plan characteristics and inputs, resource allocation, physical cluster heterogeneity and utilization, and scheduling policies. To the best of our knowledge, this is the first study on predicting categories of runtime distributions for enterprise analytics workloads at scale. Furthermore, we examine how our methods can be used to analyze what-if scenarios, focusing on the impact of resource allocation, scheduling, and physical cluster provisioning decisions on a job's runtime consistency and predictability.
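The 2-step idea lends itself to a compact illustration: first cluster normalized runtime quantile vectors into a handful of typical distribution shapes, then train a classifier to map job features to a shape category. The sketch below is a minimal, hypothetical rendering in scikit-learn; the synthetic data, the stand-in features, and the choice of five shape clusters are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Assume runtimes[i] holds historical runtimes of a recurring job i.
runtimes = [rng.lognormal(mean=rng.uniform(3, 5), sigma=rng.uniform(0.1, 1.0), size=200)
            for _ in range(500)]
# Median-normalized quantile vectors capture distribution *shape*, not scale.
quants = np.array([np.quantile(r, np.linspace(0.05, 0.95, 19)) / np.median(r)
                   for r in runtimes])

# Step 1: identify a small number of typical shapes (cluster count is a guess).
shapes = KMeans(n_clusters=5, n_init=10, random_state=0).fit(quants)

# Step 2: predict the shape category from job features (stand-ins here,
# e.g. plan size, input bytes, requested tokens).
X = rng.normal(size=(500, 8))
clf = GradientBoostingClassifier().fit(X, shapes.labels_)
print(clf.predict(X[:3]))   # predicted runtime-distribution categories
```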
Robust machine learning is currently one of the most prominent topics, one that could potentially help shape a future of advanced AI platforms that perform well not only in average cases but also in worst cases or adverse situations. Despite this long-term vision, however, existing studies on black-box adversarial attacks are still restricted to very specific threat-model settings (e.g., a single distortion metric and restrictive assumptions on the target model's feedback to queries) and/or suffer from prohibitively high query complexity. To push for further advances in this field, we introduce a general framework based on an operator splitting method, the alternating direction method of multipliers (ADMM), to devise efficient, robust black-box attacks that work with various distortion metrics and feedback settings without incurring high query complexity. Due to the black-box nature of the threat model, the proposed ADMM solution framework is integrated with zeroth-order (ZO) optimization and Bayesian optimization (BO), and is thus applicable to the gradient-free regime. This results in two new black-box adversarial attack generation methods, ZO-ADMM and BO-ADMM. Our empirical evaluations on image classification datasets show that our proposed approaches have much lower function query complexities than state-of-the-art attack methods, yet achieve very competitive attack success rates.
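The zeroth-order ingredient can be illustrated in isolation: when only loss values are available, a two-point random-direction estimator approximates the gradient from function queries alone. The sketch below shows this estimator driving plain gradient descent on a toy quadratic; in ZO-ADMM it would instead be invoked inside the ADMM update steps. The loss `f` and all parameter values are hypothetical stand-ins.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_dirs=20, rng=np.random.default_rng(0)):
    """Estimate grad f(x) from 2 * n_dirs black-box function queries."""
    d = x.size
    g = np.zeros(d)
    for _ in range(n_dirs):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                       # random unit direction
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g * d / n_dirs

# Toy black-box loss (a stand-in for an attack loss on one image).
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(100):                 # plain ZO gradient descent for brevity;
    x -= 0.05 * zo_gradient(f, x)    # ZO-ADMM would use this inside ADMM steps
print(np.round(x, 2))                # approaches the minimizer at 1.0
```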
We show that the space R^n x gl(n,R) with a certain antisymmetric bracket operation contains all n-dimensional Lie algebras. The bracket does not satisfy the Jacobi identity, but it does satisfy it for subalgebras which are isotropic under a certain symmetric bilinear form with values in R^n. We ask what the corresponding "group-like" object should be. The bracket may be obtained by linearizing at a point the bracket on TM + T*M introduced by T. Courant for the definition of Dirac structures, a notion which encompasses Poisson structures, closed 2-forms, and foliations.
The ubiquitous use of face recognition has sparked increasing privacy concerns, as unauthorized access to sensitive face images could compromise individuals' information. This paper presents an in-depth study of protecting the visual information of face images and guarding against its recovery. Drawing on the perceptual disparity between humans and models, we propose to conceal visual information by pruning human-perceivable low-frequency components. For impeding recovery, we first elucidate the seeming paradox between reducing model-exploitable information and retaining high recognition accuracy. Based on recent theoretical insights and our observations of model attention, we propose a solution to the dilemma by advocating for the training and inference of recognition models on randomly selected frequency components. We distill our findings into a novel privacy-preserving face recognition method, PartialFace. Extensive experiments demonstrate that PartialFace effectively balances privacy protection goals and recognition accuracy. Code is available at: https://github.com/Tencent/TFace.
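The two operations described, pruning low frequencies to conceal appearance and randomly selecting the remaining components at training/inference time, can be sketched with a 2D DCT. This is a minimal illustration under assumed parameters (a global DCT, an 8x8 low-frequency block, a 50% keep rate); the actual PartialFace pipeline and its design choices may differ.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)

def conceal(img, low=8):
    """Prune the human-perceivable low-frequency DCT block."""
    coeffs = dctn(img, norm="ortho")
    coeffs[:low, :low] = 0.0          # most visual appearance lives here
    return coeffs

def random_components(coeffs, keep=0.5):
    """Randomly select a subset of the remaining frequency components."""
    mask = rng.random(coeffs.shape) < keep
    return coeffs * mask

face = rng.random((112, 112))          # stand-in for an aligned face crop
features = random_components(conceal(face))  # what the model would consume
recon = idctn(features, norm="ortho")        # what a would-be recovery sees
```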
Previous studies have demonstrated the utility and applicability of machine learning techniques to jet physics. In this paper, we construct new observables for the discrimination of jets from different originating particles exclusively from information identified by the machine. The approach we propose is to first organize information in the jet by resolved phase space and determine the effective $N$-body phase space at which discrimination power saturates. This then allows for the construction of a discrimination observable from the $N$-body phase space coordinates. A general form of this observable can be expressed with numerous parameters that are chosen so that the observable maximizes the signal vs.~background likelihood. Here, we illustrate this technique applied to discrimination of $H\to b\bar b$ decays from massive $g\to b\bar b$ splittings. We show that for a simple parametrization, we can construct an observable that has discrimination power comparable to, or better than, widely-used observables motivated from theory considerations. For the case of jets on which modified mass-drop tagger grooming is applied, the observable that the machine learns is essentially the angle of the dominant gluon emission off of the $b\bar b$ pair.
While HDMaps are a crucial component of autonomous driving, they are expensive to acquire and maintain. Estimating these maps from sensors therefore promises to significantly lighten costs. These estimations, however, overlook existing HDMaps, with current methods at most geolocalizing low-quality maps or considering a general database of known maps. In this paper, we propose to account for existing maps of the precise situation studied when estimating HDMaps. We identify 3 reasonable types of useful existing maps (minimalist, noisy, and outdated). We also introduce MapEX, a novel online HDMap estimation framework that accounts for existing maps. MapEX achieves this by encoding map elements into query tokens and by refining the matching algorithm used to train classic query-based map estimation models. We demonstrate that MapEX brings significant improvements on the nuScenes dataset. For instance, MapEX - given noisy maps - improves by 38% over the MapTRv2 detector it is based on and by 8% over the current SOTA.
A self-consistent model of the chemical evolution of the globular cluster NGC 6752 is presented to test a popular theory that observed abundance anomalies are due to ``internal pollution'' from intermediate mass asymptotic giant branch stars. We simulated the chemical evolution of the intracluster medium under the assumption that the products of Type II SNe are completely expelled from the globular cluster, whereas material ejected from stars with m < 7 M_sun is retained, due to their weak stellar winds. By tracing the chemical evolution of the intracluster gas, we tested an internal pollution scenario in which the Na- and Al-enhanced ejecta from intermediate mass stars is either accreted onto the surfaces of other stars, or goes toward forming new stars. The observed spread in Na and Al was reproduced, but not the O-Na and Mg-Al anticorrelations. In particular, neither O nor Mg are sufficiently depleted to account for the observations. We predict that the Mg content of Na-rich cluster stars should be overwhelmingly dominated by the 25,26Mg isotopes, whereas the latest data shows only a mild 26Mg enhancement and no correlation with 25Mg. Furthermore, stars bearing the imprint of intermediate mass stellar ejecta are predicted to be strongly enhanced in both C and N, in conflict with the empirical data. We find that while standard AGB stellar models do show the hot H burning that seems required to explain the observations, this is accompanied by He burning, producing primary C, N, Mg and Na (via HBB) which do not match the observations. (Abridged)
Various mechanisms of thermal photon production are reviewed and their implications for heavy ion collisions are briefly sketched.
We construct $p$-adic $L$-functions for Rankin--Selberg products of automorphic forms of hermitian type in the anticyclotomic direction for both root numbers. When the root number is $+1$, the construction relies on global Bessel periods on definite unitary groups which, due to the recent advances on the global Gan--Gross--Prasad conjecture, interpolate classical central $L$-values. When the root number is $-1$, we construct an element in the Iwasawa Selmer group using the diagonal cycle on the product of unitary Shimura varieties, and conjecture that its $p$-adic height interpolates derivatives of cyclotomic $p$-adic $L$-functions. We also propose the nonvanishing conjecture and the main conjecture in both cases.
Starburst galaxies, which are known as "reservoirs" of high-energy cosmic rays, can represent an important high-energy neutrino "factory" contributing to the diffuse neutrino flux observed by IceCube. In this paper, we revisit the constraints affecting the neutrino and gamma-ray hadronuclear emissions from this class of astrophysical objects. In particular, we go beyond the standard prototype-based approach leading to a simple power-law neutrino flux, and investigate a more realistic model based on a data-driven blending of spectral indexes, thereby capturing the observed changes in the properties of individual emitters. We then perform a multi-messenger analysis considering the extragalactic gamma-ray background (EGB) measured by Fermi-LAT and different IceCube data samples: the 7.5-year High-Energy Starting Events (HESE) and the 6-year high-energy cascade data. Along with starburst galaxies, we take into account the contributions from blazars and radio galaxies, as well as the secondary gamma-rays from electromagnetic cascades. Remarkably, we find that, in contrast to the highly constrained prototype scenario, the spectral index blending allows starburst galaxies to account for up to $40\%$ of the HESE events at $95.4\%$ CL, while satisfying the limit on the non-blazar EGB component. Moreover, values of $\mathcal{O}(100~\mathrm{PeV})$ for the maximal energy of cosmic rays accelerated by supernova remnants inside the starburst are disfavoured in our scenario. In broad terms, our analysis points out that a better modeling of astrophysical sources could alleviate the tension between neutrino and gamma-ray data interpretations.
The performance of a D-Wave Vesuvius quantum annealer was recently compared to a suite of classical algorithms on a class of constraint satisfaction instances based on frustrated loops. However, the construction of these instances causes the maximum coupling strength to increase with problem size. As a result, larger instances are subject to amplified analog control error and are effectively annealed at higher temperatures in both hardware and software. We generate similar constraint satisfaction instances with a limited range of coupling strengths and perform a similar comparison to classical algorithms. On these instances the D-Wave Vesuvius processor, run with a fixed 20$\mu$s anneal time, shows a scaling advantage over the software solvers for the hardest regime studied. This scaling advantage opens the possibility of quantum speedup on these problems. Our results support the hypothesis that the performance of D-Wave Vesuvius processors is heavily influenced by analog control error, which can be reduced and mitigated as the technology matures.
Determining the spin and parity quantum numbers of the recently discovered Higgs-like boson at the LHC is a matter of great importance. In this paper, we consider the possibility of using the kinematics of the tagging jets in Higgs production via the vector boson fusion (VBF) process to test the tensor structure of the Higgs-vector boson ($HVV$) interaction and to determine the spin and CP properties of the observed resonance. We show that an anomalous $HVV$ vertex, in particular its explicit momentum dependence, drastically affects the rapidity separation between the two scattered quarks and their transverse momenta and, hence, the acceptance of the kinematical cuts that allow one to select the VBF topology. The sensitivity of these observables to different spin-parity assignments, including its dependence on the LHC center-of-mass energy, is evaluated. In addition, we show that in associated Higgs production with a vector boson some kinematical variables, such as the invariant mass of the system and the transverse momenta of the two bosons and their separation in rapidity, are also sensitive to the spin-parity assignments of the Higgs-like boson.
In this paper, we propose a polar coding based scheme for set reconciliation between two network nodes. The system is modeled as a well-known Slepian-Wolf setting induced by a fixed number of deletions. The set reconciliation process is divided into two phases: 1) a deletion polar code is employed to help one node identify the possible deletion indices, the number of which may be larger than the number of genuine deletions; 2) a lossless compression polar code is then designed to feed back those indices with minimum overhead. Our scheme can be viewed as a generalization of polar codes to some emerging network-based applications, such as package synchronization in blockchains. Some connections with existing schemes based on invertible Bloom lookup tables (IBLTs) and network coding are also observed and briefly discussed.
We introduce RecurrentGemma, an open language model which uses Google's novel Griffin architecture. Griffin combines linear recurrences with local attention to achieve excellent performance on language. It has a fixed-sized state, which reduces memory use and enables efficient inference on long sequences. We provide a pre-trained model with 2B non-embedding parameters, and an instruction tuned variant. Both models achieve comparable performance to Gemma-2B despite being trained on fewer tokens.
A cloud server may spend a great deal of time, energy and money training a Viola-Jones type object detector with high accuracy. Clients can upload their photos to the cloud server to find objects, but a client does not want the content of his/her photos leaked. Meanwhile, the cloud server is also reluctant to leak any parameters of the trained object detectors. Ten years ago, Avidan & Butman introduced Blind Vision, a method for securely evaluating a Viola-Jones type object detector. Blind Vision uses standard cryptographic tools and is painfully slow to compute, taking a couple of hours to scan a single image. The purpose of this work is to explore an efficient method that can speed up the process. We propose the Random Base Image (RBI) representation. The original image is divided into random base images, and only the base images are submitted, in random order, to the cloud server; thus, the content of the image cannot be leaked. Meanwhile, a random vector and the secure Millionaire protocol are leveraged to protect the parameters of the trained object detector. The RBI representation makes the integral image usable again, allowing great acceleration. The experimental results reveal that our method retains the detection accuracy of the plain vision algorithm and is significantly faster than traditional blind vision, with only a very low theoretical probability of information leakage.
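The core decomposition is simple to illustrate: generate k-1 random images and let the last base image be the difference, so the bases sum exactly back to the original while each individual base looks like noise. Linear operations such as the integral image can then be computed per base and summed server-side. A minimal sketch, with hypothetical value ranges and no cryptographic protocol:

```python
import numpy as np

def random_base_images(img, k=3, rng=np.random.default_rng(0)):
    """Split img into k base images that sum back to img."""
    bases = [rng.uniform(-255, 255, img.shape) for _ in range(k - 1)]
    bases.append(img.astype(float) - np.sum(bases, axis=0))  # difference image
    rng.shuffle(bases)                  # submit in random order
    return bases

img = np.random.default_rng(1).integers(0, 256, (64, 64)).astype(float)
bases = random_base_images(img)
# Server-side recombination (of integral images, in the actual scheme):
assert np.allclose(np.sum(bases, axis=0), img)
```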
We present polarization-independent optical shutters with a sub-millisecond switching time. The approach utilizes dual-frequency nematics doped with a dichroic dye. Two nematic cells with orthogonal alignment are driven simultaneously by a low-frequency or high-frequency electric field to switch the shutter either into a transparent or a light-absorbing state. The switching speed is accelerated via special short pulses of high amplitude voltage. The approach can be used in a variety of electro-optical devices.
The Padé approximant technique and the variational Monte Carlo method are applied to determine the ground-state energy of a finite number of charged bosons in two dimensions confined by a parabolic trap. The particles interact repulsively through a Coulombic, 1/r, potential. Analytic expressions for the ground-state energy are obtained. The convergence of the Padé sequence and comparison with the Monte Carlo results show that the error of the Padé estimate is less than 4% at any boson density and that the estimate is exact in the extreme limits of very low and very high density.
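As a reminder of the technique itself, a Padé approximant replaces a truncated power series by a ratio of polynomials matching the same Taylor coefficients, which often converges well beyond the series' own range of validity. A minimal sketch using SciPy, with the exponential series standing in for a perturbative energy expansion (the paper's actual series is not reproduced here):

```python
import math
from scipy.interpolate import pade

# Taylor coefficients of exp(x) stand in for a perturbative energy series.
an = [1 / math.factorial(k) for k in range(6)]
p, q = pade(an, 2)        # [3/2] Pade approximant: p cubic, q quadratic
x = 1.0
print(p(x) / q(x))        # close to e = 2.71828..., from only 6 coefficients
```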
Topological superconductivity is central to a variety of novel phenomena involving the interplay between topologically ordered phases and broken-symmetry states. The key ingredient is an unconventional order parameter, with an orbital component containing a chiral $p_x$ + i$p_y$ wave term. Here we present phase-sensitive measurements, based on quantum interference in nanoscale Josephson junctions, realized by using the Bi$_2$Te$_3$ topological insulator. We demonstrate that the induced superconductivity is unconventional and consistent with a sign-changing order parameter, such as a chiral $p_x$ + i$p_y$ component. The magnetic field pattern of the junctions shows a dip at zero externally applied magnetic field, an incontrovertible signature of the simultaneous existence of 0 and $\pi$ coupling within the junction, inherent to a non-trivial order-parameter phase. The nano-textured morphology of the Bi$_2$Te$_3$ flakes and the dramatic role played by thermal strain are the surprising key factors behind the appearance of an unconventional induced order parameter.
We calculate the conductance of atomic chains as a function of their length. Using the Density Matrix Renormalization Group algorithm for a many-body model which takes into account electron-electron interactions and the shape of the contacts between the chain and the leads, we show that length-dependent oscillations of the conductance whose period depends on the electron density in the chain can result from electron-electron scattering alone. The amplitude of these oscillations can increase with the length of the chain, in contrast to the result from approaches which neglect the interactions.
Given a suitable arithmetic function h, we investigate the average order of h as it ranges over the values taken by an integral binary form F. A general upper bound is obtained for this quantity, in which the dependence upon the coefficients of F is made completely explicit.
The presence of forming planets embedded in their protoplanetary disks has been inferred from the detection of multiring structures in such disks. Most of these suspected planets are undetectable by direct imaging observations at current measurement sensitivities. Inward migration and accretion might make these putative planets accessible to the Doppler method, but the actual extent of growth and orbital evolution remains unconstrained. Under the premise that the gaps in the disk around HD 163296 originate from new-born planets, we investigate if and under which circumstances the gap-opening planets could represent progenitors of the exoplanet population detected around A-type stars. In particular, we study the dependence of final planetary masses and orbital parameters on the viscosity of the disk. The evolution of the embedded planets was simulated throughout the disk lifetime and up to 100 Myr after the dispersal of the disk, taking the evolving disk structure and a likely range of disk lifetimes into account. We find that the final configuration of the planets is largely determined by the $\alpha$ viscosity parameter of the disk and less dependent on the choice of the disk lifetime and the initial planetary parameters. If we assume that planets such as those in HD 163296 evolve to form the observed exoplanet population of A-type stars, an $\alpha$ parameter on the order of $3.16 \times 10^{-4} \lesssim \alpha \lesssim 10^{-3}$ is required for the disks to induce sufficiently high migration rates. Depending on whether or not future direct imaging surveys uncover a larger number of planets with $m_\mathrm{pl} \lesssim 3 M_\mathrm{Jup}$ and $a_\mathrm{pl} \gtrsim 10 \mathrm{AU}$, we expect the $\alpha$ parameter to be at the lower or upper end of this range, always under the assumption that such disks indeed harbor wide-orbit planets.
Enabling fast charging for lithium-ion batteries is critical to accelerating the green energy transition. As such, there has been significant interest in tailored fast-charging protocols computed from the solutions of constrained optimal control problems. Here, we derive necessary conditions for a fast-charging protocol based upon monotone control systems theory.
This text aims at providing a bird's eye view of system identification with special attention to nonlinear systems. The driving force is to give a feeling for the philosophical problems facing those who build mathematical models from data. Special attention is given to grey-box approaches in nonlinear system identification. In this text, grey-box methods use auxiliary information such as the system's steady-state data, possible symmetries, some bifurcations and the presence of hysteresis. The text ends with a sample of applications. No attempt is made to be thorough nor to survey such an extensive and mature field as system identification. In most parts, references are provided for more detailed study.
We report a metal-insulator transition (MIT) in the half-filled multiorbital antiferromagnet (AF) BaMn$_2$Bi$_2$ that is tunable by a magnetic field perpendicular to the AF sublattices. Instead of an Anderson-Mott mechanism usually expected in strongly correlated systems, we find by scaling analyses that the MIT is driven by an Anderson localization. Electrical and thermoelectrical transport measurements in combination with electronic band calculations reveal a strong orbital-dependent correlation effect, where both weakly and strongly correlated $3d$-derived bands coexist with decoupled charge excitations. Weakly correlated holelike carriers in the $d_{xy}$-derived band dominate the transport properties and exhibit the Anderson localization, whereas other $3d$ bands show clear Mott-like behaviors with their spins ordered into AF sublattices. The tuning role played by the perpendicular magnetic field supports a strong spin-spin coupling between itinerant holelike carriers and the AF fluctuations, which is in sharp contrast to their weak charge coupling.
Consider the configuration spaces of manifolds. An influential theorem of McDuff, Segal and Church shows that the (co)homology of the unordered configuration space is independent of the number of points in a range of degrees called the stable range. We study another important (and general) property of unordered configuration spaces of manifolds (not necessarily orientable, and not necessarily admitting a non-vanishing vector field), namely homological monotonicity in the unstable range. We show that the dimension of the homology of unordered configuration spaces of a manifold in each degree is monotonically increasing in the number of points. Our results show that the monotonicity property does not depend on the differentiable structure or orientability of the manifold.
Given a graph G, a colouring is an assignment of colours to the vertices of G so that no two adjacent vertices are coloured the same. If all colour classes have size at most t, then we call the colouring t-bounded, and the t-bounded chromatic number of G, denoted by $\chi_t(G)$, is the minimum number of colours in such a colouring. Every colouring of G is then $\alpha(G)$-bounded, where $\alpha(G)$ denotes the size of a largest independent set. We study colourings of the random graph G(n, 1/2) and of the corresponding uniform random graph G(n,m) with $m=\left \lfloor \frac 12 {n \choose 2} \right \rfloor$. We show that $\chi_t(G(n,m))$ is maximally concentrated on at most two explicit values for $t = \alpha(G(n,m))-2$. This behaviour stands in stark contrast to that of the normal chromatic number, which was recently shown not to be concentrated on any sequence of intervals of length $n^{1/2-o(1)}$. Moreover, when $t = \alpha(G_{n, 1/2})-1$ and if the expected number of independent sets of size $t$ is not too small, we determine an explicit interval of length $n^{0.99}$ that contains $\chi_t(G_{n,1/2})$ with high probability. Both results have profound consequences: the former is at the core of the intriguing Zigzag Conjecture on the distribution of $\chi(G_{n, 1/2})$ and justifies one of its main hypotheses, while the latter is an important ingredient in the proof of a non-concentration result for $\chi(G_{n,1/2})$ that is conjectured to be optimal. These two results are consequences of a more general statement. We consider a class of colourings that we call tame, and provide tight bounds for the probability of existence of such colourings via a delicate second moment argument. We then apply those bounds to the two aforementioned cases. As a further consequence of our main result, we prove two-point concentration of the equitable chromatic number of G(n,m).
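For intuition on the object being studied, a t-bounded colouring is an ordinary proper colouring whose colour classes are capped at size t; a greedy pass gives only an upper bound on $\chi_t$, not the exact value that the concentration results concern. A minimal sketch on a sample of G(n, 1/2), assuming networkx is available:

```python
import networkx as nx

def t_bounded_greedy_colouring(G, t):
    """Greedy proper colouring in which each colour class has size at most t."""
    colour, class_size = {}, {}
    for v in G.nodes():
        forbidden = {colour[u] for u in G.neighbors(v) if u in colour}
        c = 0
        while c in forbidden or class_size.get(c, 0) >= t:
            c += 1                       # first colour that is legal and has room
        colour[v] = c
        class_size[c] = class_size.get(c, 0) + 1
    return colour

G = nx.gnp_random_graph(50, 0.5, seed=0)     # one sample of G(n, 1/2)
colours = t_bounded_greedy_colouring(G, t=3)
print(max(colours.values()) + 1, "colours used")  # an upper bound on chi_t
```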
TensorFlow is a popular deep learning framework used by data scientists to solve a wide range of machine learning and deep learning problems such as image classification and speech recognition. It also operates at a large scale and in heterogeneous environments --- it allows users to train neural network models or deploy them for inference using GPUs, CPUs and deep learning specific custom-designed hardware such as TPUs. Even though TensorFlow supports a variety of optimized backends, realizing the best performance from a backend may require additional effort. For instance, getting the best performance from a CPU backend requires careful tuning of its threading model. Unfortunately, the best tuning approach used today is manual, tedious, time-consuming, and, more importantly, may not guarantee the best performance. In this paper, we develop an automatic approach, called TensorTuner, to search for optimal parameter settings of TensorFlow's threading model for CPU backends. We evaluate TensorTuner on both Eigen and Intel's MKL CPU backends using a set of neural networks from TensorFlow's benchmarking suite. Our evaluation results demonstrate that the parameter settings found by TensorTuner produce 2% to 123% performance improvement for the Eigen CPU backend and 1.5% to 28% performance improvement for the MKL CPU backend over the performance obtained using their best-known parameter settings. This highlights the fact that the default parameter settings in the Eigen CPU backend are not the ideal settings; and even for a carefully hand-tuned MKL backend, the settings may be sub-optimal. Our evaluations also revealed that TensorTuner is efficient at finding the optimal settings --- it is able to converge to the optimal settings quickly by pruning more than 90% of the parameter search space.
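For context, the knobs being searched are TensorFlow's inter-op and intra-op thread-pool sizes. The sketch below shows how one candidate setting would be applied and timed in TF 2.x; this is not TensorTuner itself, and since these settings must be fixed before any op executes, a real search would launch a fresh process per candidate configuration.

```python
import time
import tensorflow as tf

# Candidate threading-model settings (values are illustrative guesses).
tf.config.threading.set_intra_op_parallelism_threads(4)
tf.config.threading.set_inter_op_parallelism_threads(2)

# Time a representative workload under this setting.
x = tf.random.normal((2048, 2048))
t0 = time.perf_counter()
for _ in range(10):
    _ = tf.matmul(x, x)
print("avg step:", (time.perf_counter() - t0) / 10, "s")
```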
We consider neutral evolution of a large population subject to changes in its population size. For a population with a time-variable carrying capacity we have computed the distributions of the total branch lengths of its sample genealogies. Within the coalescent approximation we have obtained a general expression, Eq. (27), for the moments of these distributions for an arbitrary smooth dependence of the population size on time. We investigate how the frequency of population-size variations alters the distributions. This allows us to discuss their influence on the distribution of the number of mutations, and on the population homozygosity in populations with variable size.
In this paper we study the problem of extension of holomorphic sections of line bundles/vector bundles from reduced unions of strata of divisors. An extension theorem of Ohsawa--Takegoshi type is proved. As consequences we deduce several qualitative results on extension from snc divisors and generic global generation of vector bundles.
Gyrochronology can yield useful ages for field main-sequence stars, a regime where other techniques are problematic. Typically, gyrochronology relations are calibrated using young ($\lesssim 2$ Gyr) clusters, but the constraints at older ages are scarce, making them potentially inaccurate and imprecise. In order to test the performance of existing relations, we construct samples of stellar pairs with coeval components, for a range of ages and with available rotation periods. These include randomly paired stars in clusters, and wide binaries in the Kepler field. We design indicators that, based on the measured rotation periods and expectations from gyrochronology, quantify the (dis)agreement between the coeval pairs and the gyrochronology calibrations under scrutiny. Our results show that wide binaries and cluster members are in better concordance with gyrochronology than samples of randomly paired field stars, confirming that the relations have predicting power. However, the agreement with the examined relations decreases for older stars, revealing a degradation of the examined relations with age, in agreement with recent works. This highlights the need for novel empirical constraints at older ages that may allow revised calibrations. Notably, using coeval stars to test gyrochronology poses the advantage of circumventing the need for age determinations while simultaneously exploiting larger samples at older ages. Our test is independent of any specific age-rotation relation, and it can be used to evaluate future spin-down models. In addition, taking gyrochronology at face value, we note that our results provide new empirical evidence that the components of field wide binaries are indeed coeval.
We study the drift of suspended micro-particles in a viscous liquid pumped back and forth through a periodic lattice of pores (drift ratchet). In order to explain the particle drift observed in such an experiment, we present a one-dimensional deterministic model based on Stokes' drag. We show that the stability of the particle oscillations is related to their amplitude. Under appropriate conditions, particles may drift, and two mechanisms of transport are pointed out. The first one is due to a spatio-temporal synchronization between the fluid and particle motions; as a result, the velocity is locked to the ratio of the spatial period to the temporal period. The direction of the transport may switch as the parameters are tuned. Notably, its emergence is related to a lattice of period-2 orbits but not necessarily to chaotic dynamics. The second mechanism is due to an intermittent bifurcation and leads to a slow transport composed of long-time oscillations followed by a relatively short transport to the next pore. Both steps repeat in a quasi-periodic manner. The direction of this last transport is strongly dependent on the pore geometry.
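The deterministic Stokes-drag model amounts to a particle relaxing toward the local fluid velocity, dv/dt = (u(x,t) - v)/tau, with u periodic in space (the pore lattice) and in time (the pumping). The sketch below integrates one such trajectory and extracts the mean drift; the velocity profile u and all parameter values are illustrative stand-ins, not the paper's fitted model.

```python
import numpy as np
from scipy.integrate import solve_ivp

tau, L, T = 0.1, 1.0, 1.0          # relaxation time, pore period, drive period

def u(x, t):
    # Space-periodic profile modulated by the back-and-forth pumping.
    return (1.0 + 0.5 * np.cos(2 * np.pi * x / L)) * np.sin(2 * np.pi * t / T)

def rhs(t, s):
    x, v = s
    return [v, (u(x, t) - v) / tau]   # Stokes drag toward local fluid velocity

sol = solve_ivp(rhs, (0, 200 * T), [0.0, 0.0], max_step=0.01)
drift = (sol.y[0, -1] - sol.y[0, 0]) / (sol.t[-1] - sol.t[0])
print("mean drift velocity:", drift)  # locked near p*L/(q*T) in sync regimes
```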
Pre-trained language models (PTLM) have achieved impressive results in a range of natural language understanding (NLU) and generation (NLG) tasks. However, current pre-training objectives such as masked token prediction (for BERT-style PTLMs) and masked span infilling (for T5-style PTLMs) do not explicitly model the relational commonsense knowledge about everyday concepts, which is crucial to many downstream tasks that need common sense to understand or generate. To augment PTLMs with concept-centric commonsense knowledge, in this paper we propose both generative and contrastive objectives for learning common sense from text, and use them as intermediate self-supervised learning tasks for incrementally pre-training PTLMs (before task-specific fine-tuning on downstream datasets). Furthermore, we develop a joint pre-training framework to unify the generative and contrastive objectives so that they can mutually reinforce each other. Extensive experimental results show that our method, the concept-aware language model (CALM), can pack more commonsense knowledge into the parameters of a pre-trained text-to-text transformer without relying on external knowledge graphs, yielding better performance on both NLU and NLG tasks. We show that, although only incrementally pre-trained on a relatively small corpus for a few steps, CALM outperforms baseline methods by a consistent margin and is even comparable with some larger PTLMs, which suggests that CALM can serve as a general, plug-and-play method for improving the commonsense reasoning ability of a PTLM.
We study the collective dynamics of networks of mutually coupled identical Lorenz oscillators near a subcritical Hopf bifurcation. This system shows induced multistable behavior with interesting spatio-temporal dynamics including synchronization, desynchronization and chimera states. We find that this network may exhibit intermittent behavior due to the complex basin structures, where the temporal dynamics of the oscillators in the ensemble switch between different attractors. Consequently, different oscillators may show dynamics that is intermittently synchronized (or desynchronized), giving rise to {\it intermittent chimera states}. The behaviour of the intermittent laminar phases is characterized by the characteristic time spent in the synchronization manifold, which decays as a power law. This intermittent dynamics is quite general and can be extended to large numbers of oscillators interacting with nonlocal, global and local coupling schemes.
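A network of coupled identical Lorenz oscillators is straightforward to set up numerically; the sketch below integrates a small, globally (mean-field) coupled ensemble and monitors the deviation from the synchronization manifold. The parameter values are the classic chaotic ones, not the paper's subcritical-Hopf regime, so this illustrates only the setup, not the reported intermittency.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, eps = 6, 0.5                          # ensemble size, coupling strength
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0  # classic Lorenz parameters

def rhs(t, state):
    s = state.reshape(N, 3)
    x, y, z = s[:, 0], s[:, 1], s[:, 2]
    dx = sigma * (y - x) + eps * (x.mean() - x)   # mean-field x-coupling
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return np.column_stack([dx, dy, dz]).ravel()

y0 = np.random.default_rng(0).normal(size=3 * N)
sol = solve_ivp(rhs, (0, 50), y0, max_step=0.01)
x = sol.y.reshape(N, 3, -1)[:, 0, :]      # x-coordinate of each oscillator
sync_err = np.std(x, axis=0)              # 0 on the synchronization manifold
print("final desynchronization:", sync_err[-1])
```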
The automated analysis of administrative documents is an important field in document recognition that has been studied for decades. Invoices are key documents among the huge volumes of documents available in companies and public services. Invoices most of the time contain data presented in tables, which should be clearly identified in order to extract suitable information. In this paper, we propose an approach that combines an image-processing-based estimation of the shape of the tables with a graph-based representation of the document, which is used to identify complex tables precisely. We present an experimental evaluation using a real case application.
We present a measurement of the angle phi1 of the CKM Unitarity Triangle using a time-dependent Dalitz analysis of D -> Ks pi+ pi- decays produced in neutral B meson decays to a neutral D meson and a light meson (B0bar -> D(*) h0). The method allows a direct extraction of 2phi1 and, therefore, helps to resolve the ambiguity between 2phi1 and pi-2phi1 in the measurement of sin 2phi1. We obtain sin 2phi1 = 0.78 +- 0.44 +- 0.22 and cos 2phi1 = 1.87 +0.40/-0.53 +0.22/-0.32, where the first errors are statistical and the second systematic. The sign of cos 2phi1 is determined to be positive at 98.3% C.L.
The dark matter puzzle is one of the most important open problems in modern physics. The ultra-light axion is a well-motivated dark matter candidate, conceived to resolve the strong-CP problem of quantum chromodynamics. Numerous precision experiments are searching for the three non-gravitational interactions of axion-like dark matter. Some of the searches are approaching fundamental quantum limits on their sensitivity. This Perspective describes several approaches that use quantum engineering to circumvent these limits. Squeezing and single-photon counting can enhance searches for the axion-photon interaction. Optimization of quantum spin ensemble properties is needed to realize the full potential of spin-based searches for the electric-dipole-moment and the gradient interactions of axion dark matter. Several metrological and sensing techniques, developed in the field of quantum information science, are finding natural applications in this area of experimental fundamental physics.
In the eikonal regime, we analytically calculate the quasinormal resonance frequencies for massless scalar perturbations of higher-dimensional Reissner--Nordstr\"{o}m (RN) black holes. Remarkably, we find that the higher-dimensional RN black holes coupled with massless scalar fields have the fastest relaxation rates in the Schwarzschild limit; this is qualitatively different from the four-dimensional case, where the black hole with non-vanishing charge has the fastest relaxation rate.
Estimators computed from adaptively collected data do not behave like their non-adaptive brethren. Rather, the sequential dependence of the collection policy can lead to severe distributional biases that persist even in the infinite data limit. We develop a general method -- $\mathbf{W}$-decorrelation -- for transforming the bias of adaptive linear regression estimators into variance. The method uses only coarse-grained information about the data collection policy and does not need access to propensity scores or exact knowledge of the policy. We bound the finite-sample bias and variance of the $\mathbf{W}$-estimator and develop asymptotically correct confidence intervals based on a novel martingale central limit theorem. We then demonstrate the empirical benefits of the generic $\mathbf{W}$-decorrelation procedure in two different adaptive data settings: the multi-armed bandit and the autoregressive time series.
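The generic shape of the estimator is a one-line correction of ordinary least squares, beta_d = beta_ols + W (y - X beta_ols), with the entire difficulty residing in the construction of W from coarse information about the collection policy. The sketch below uses a simple recursive, regularized orthogonalization for W; this recursion and the untuned lambda are assumptions made for illustration, so the code shows only the algebraic shape of the method, not the paper's construction or its bias-variance guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 500, 3, 5.0
X = rng.normal(size=(n, d))            # in practice: adaptively chosen rows
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + rng.normal(size=n)

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Build W column by column; B tracks sum_s w_s x_s^T (assumed recursion).
W = np.zeros((d, n))
B = np.zeros((d, d))
for t in range(n):
    x = X[t]
    w = (np.eye(d) - B) @ x / (lam + x @ x)
    W[:, t] = w
    B += np.outer(w, x)

beta_d = beta_ols + W @ (y - X @ beta_ols)   # decorrelated estimate
print(beta_ols, beta_d)
```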