The characterization of a unitary gate is experimentally accomplished via Quantum Process Tomography, which combines the outcomes of different projective measurements to reconstruct the underlying operator. The process matrix is typically extracted from maximum-likelihood estimation. Recently, optimization strategies based on evolutionary and machine-learning techniques have been proposed. Here, we investigate a deep-learning approach that allows for fast and accurate reconstructions of space-dependent SU(2) operators, only processing a minimal set of measurements. We train a convolutional neural network based on a scalable U-Net architecture to process entire experimental images in parallel. Synthetic processes are reconstructed with average fidelity above 90%. The performance of our routine is experimentally validated on complex polarization transformations. Our approach further expands the toolbox of data-driven approaches to Quantum Process Tomography and shows promise in the real-time characterization of complex optical gates.
Motivated by tension between the predictions of ordinary cold dark matter (CDM) and observations at galactic scales, ultralight axionlike particles (ULALPs) with mass of the order $10^{-22}~{\rm eV}$ have been proposed as an alternative CDM candidate. We consider cold and collisionless ULALPs produced in the early Universe by the vacuum realignment mechanism and constituting most of CDM. The ULALP fluid is commonly described by classical field equations. However, we show that, like QCD axions, the ULALPs thermalize by gravitational self-interactions and form a Bose-Einstein condensate, a quantum phenomenon. ULALPs, like QCD axions, explain the observational evidence for caustic rings of dark matter because they thermalize and go to the lowest energy state available to them. This is one of rigid rotation on the turnaround sphere. By studying the heating effect of infalling ULALPs on galactic disk stars and the thickness of the nearby caustic ring as observed from a triangular feature in the IRAS map of our galactic disk, we obtain lower bounds on the ULALP mass of order $10^{-23}$ and $10^{-20}~{\rm eV}$, respectively.
We discuss how to generate a black hole solution of the Einstein Equations (EE) via non-linear electrodynamics (NED). We discuss the thermodynamical properties of a general NED solution, recovering the First Law. Then we illustrate the general mechanism and discuss some specific cases, showing that finding a generating Lagrangian (for a specific solution) only requires solving an algebraic equation (we study some analytical cases). Finally, we argue that the NED paradigm, though self-consistent, is not the best tool for studying regular black holes.
It is common in online markets for agents to learn from others' actions. Such observational learning can lead to herding or information cascades in which agents eventually "follow the crowd". Models for such cascades have been well studied for Bayes-rational agents that choose pay-off optimal actions. In this paper, we additionally consider the presence of fake agents that seek to influence other agents into taking one particular action. To that end, these agents take a fixed action in order to influence the subsequent agents towards their preferred action. We characterize how the fraction of such fake agents impacts the behavior of the remaining agents and show that in certain scenarios, an increase in the fraction of fake agents in fact reduces the chances of their preferred outcome.
A novel online distillation technique was developed for the XENON1T dark matter experiment to reduce intrinsic background components more volatile than xenon, such as krypton or argon, while the detector was operating. The method is based on a continuous purification of the gaseous volume of the detector system using the XENON1T cryogenic distillation column. A krypton-in-xenon concentration of $(360 \pm 60)$ ppq was achieved. It is the lowest concentration measured in the fiducial volume of an operating dark matter detector to date. A model was developed and fit to the data to describe the krypton evolution in the liquid and gas volumes of the detector system for several operation modes over the time span of 550 days, including the commissioning and science runs of XENON1T. The online distillation was also successfully applied to remove Ar-37 after its injection for a low energy calibration in XENON1T. This makes the usage of Ar-37 as a regular calibration source possible in the future. The online distillation can be applied to next-generation experiments to remove krypton prior to, or during, any science run. The model developed here allows further optimization of the distillation strategy for future large scale detectors.
The biosorption of Au(III) by Spirulina platensis and by Arthrobacter species (Arthrobacter globiformis and Arthrobacter oxidas) was studied by the simultaneous application of dialysis and atomic absorption analysis. The biosorption of Au(III) by Spirulina platensis at various pH values is also discussed. The biosorption constants for the Au-cyanobacterium Spirulina platensis at different pH values, and for Arthrobacter oxidas and Arthrobacter globiformis at pH = 7.1, are: 1. K = 3.91 x 10^-4 (Au-Arthrobacter oxidas 61B, pH = 7.1); 2. K = 14.17 x 10^-4 (Au-Arthrobacter globiformis 151B, pH = 7.1); 3. K = 2.07 x 10^-4 (Au-Spirulina platensis, pH = 7.1); 4. K = 4.87 x 10^-4 (Au-Spirulina platensis, pH = 6.2); 5. K = 8.7 x 10^-4 (Au-Spirulina platensis, pH = 8.4).
In this article, we propose and analyze an effective Hessian recovery strategy for the Lagrangian finite element of arbitrary order $k$. We prove that the proposed Hessian recovery preserves polynomials of degree $k+1$ on general unstructured meshes and superconverges at rate $O(h^k)$ on mildly structured meshes. In addition, the method preserves polynomials of degree $k+2$ on translation invariant meshes and produces a symmetric Hessian matrix when the sampling points for recovery are selected with symmetry. Numerical examples are presented to support our theoretical results.
We report on the coupling of an electric quadrupole transition in atoms with a plasmonic excitation in a nanostructured metallic metamaterial. The quadrupole transition at 685 nm in a gas of Cesium atoms is optically pumped, while the induced ground state population depletion is probed with light tuned to the strong electric dipole transition at 852 nm. We use selective reflection to resolve the Doppler-free hyperfine structure of the Cesium atoms. We observe a strong modification of the reflection spectra in the presence of the metamaterial and discuss the role of the spatial variation of the surface plasmon polariton in the quadrupole coupling.
The paper presents an analysis of entangling and non-local operations in a quantum nondemolition (QND) interaction between multimode light with orbital angular momentum and an atomic ensemble. The protocol consists of two QND operations with rotations of the quadratures of the atomic spin coherence and of the light between them. This protocol provides a wide range of two-qubit operations, while the multimode nature of the chosen degrees of freedom allows the implementation of parallel operations over multiple two-qubit systems. We have used the formalism of equivalence classes and local invariants to evaluate the properties of the two-qubit transformations. It is shown that, when selecting suitable values of the governing parameters, such as the duration of each of the two QND interactions and the rotation angles of the qubits, the protocol allows one to realise a deterministic non-local SWAP operation and an entangling $\sqrt{SWAP}$ operation with probability 1/3.
We introduce a simple model of touching random surfaces, by adding a chemical potential rho for ``minimal necks'', and study this model numerically coupled to a Gaussian model in d dimensions (for central charge c = d = 0, 1 and 2). For c <= 1, this model has a phase transition to branched polymers for sufficiently large rho. For c = 2, however, the extensive simulations indicate that this transition is replaced by a cross-over behavior on finite lattices --- the model is always in the branched polymer phase. This supports recent speculations that, in 2d-gravity, the behavior observed in simulations for $c \leq 1$ is dominated by finite size effects, which are exponentially enhanced as c -> 1+.
We study the minimum spanning tree distribution on the space of spanning trees of the $n$-by-$n$ grid for large $n$. We establish bounds on the decay rates of the probability of the most and the least probable spanning trees as $n\rightarrow\infty$.
In this paper we establish some convergence results for Riemann-Liouville, Caputo, and Caputo-Fabrizio fractional operators when the order of differentiation approaches one. We consider the errors given by $\left\| D^{1-\alpha}f - f'\right\|_p$ for $p=1$ and $p=\infty$, and we prove that for both the Caputo and the Caputo-Fabrizio operators the order of convergence is a positive real number $r$, $0<r<1$. Finally, we compare the speed of convergence between the Caputo and Caputo-Fabrizio operators, obtaining that they are related by the Digamma function.
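For reference, the two operators being compared are usually defined as follows (standard textbook definitions, stated here for context rather than taken from the paper); in the notation above, the order of differentiation is $\beta = 1-\alpha \to 1^-$:

```latex
\[
  {}^{C}\!D^{\beta} f(t)
    = \frac{1}{\Gamma(1-\beta)} \int_0^t \frac{f'(s)}{(t-s)^{\beta}}\,\mathrm{d}s ,
  \qquad
  {}^{CF}\!D^{\beta} f(t)
    = \frac{M(\beta)}{1-\beta} \int_0^t f'(s)\,
      \exp\!\Bigl(-\frac{\beta\,(t-s)}{1-\beta}\Bigr)\mathrm{d}s .
\]
% M(\beta) is a normalization with M(0) = M(1) = 1. As \beta -> 1^- both
% kernels concentrate their mass at s = t, which is why D^{\beta} f -> f'
% and a rate of convergence in the p-norm can be extracted.
```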
We show that for rational surface singularities with odd determinant the mu-bar invariant defined by W. Neumann is an obstruction for the link of the singularity to bound a rational homology 4-ball. We identify the mu-bar invariant with the corresponding correction term in Heegaard Floer theory.
Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs) are useful for many practical tasks in machine learning. Synaptic weights, as well as neuron activation functions within the deep network, are typically stored with high-precision formats, e.g. 32 bit floating point. However, since storage capacity is limited and each memory access consumes power, storage capacity and memory access are two crucial factors in these networks. Here we present the ADaPTION toolbox, which extends the popular deep learning library Caffe to support training of deep CNNs with reduced numerical precision of weights and activations using fixed point notation. ADaPTION includes tools to measure the dynamic range of weights and activations. Using the ADaPTION tools, we quantized several CNNs, including VGG16, down to 16-bit weights and activations with only a 0.8% drop in Top-1 accuracy. The quantization, especially of the activations, leads to an increase in sparsity of up to 50%, most notably in early and intermediate layers, which we exploit to skip multiplications with zero, thus performing faster and computationally cheaper inference.
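To make the fixed-point notation concrete, here is a minimal sketch of the kind of Qm.n quantization applied to weights and activations (an illustration only, not ADaPTION's actual code; the range-based format selection is an assumption):

```python
import numpy as np

def to_fixed_point(x, int_bits, frac_bits):
    """Quantize an array onto a signed Qm.n fixed-point grid (returned as float)."""
    scale = 2.0 ** frac_bits
    lo = -(2.0 ** int_bits)                    # most negative representable value
    hi = 2.0 ** int_bits - 1.0 / scale         # most positive representable value
    return np.clip(np.round(x * scale) / scale, lo, hi)

def suggest_format(x, total_bits=16):
    """Pick integer bits from the dynamic range; spend the rest on fractions."""
    int_bits = max(0, int(np.ceil(np.log2(np.max(np.abs(x)) + 1e-12))))
    return int_bits, total_bits - 1 - int_bits  # one bit reserved for the sign

weights = np.random.randn(64, 64).astype(np.float32)
m, n = suggest_format(weights)
quantized = to_fixed_point(weights, m, n)
print(f"Q{m}.{n}: max abs error = {np.abs(weights - quantized).max():.2e}")
```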
In this article, we give a proof of a geometric presentation theorem for any irreducible scheme $X$ smooth projective over a discrete valuation ring $R$. As a consequence, for any reductive $R$-group scheme $\mathbf{G}$, we prove that any generically trivial principal $\mathbf{G}$-bundle over $X$ can be glued to a principal $\mathbf{G}_U$-bundle over the affine line $\mathbb{A}^1_U$ for a semilocal affine scheme $U$.
NenuFAR, the New Extension in Nançay Upgrading LOFAR, is currently in its early science phase. It is in this context that the Cosmic Filaments and Magnetism Pilot Survey is observing sources with the array as it is still under construction - with 57 (56 core, 1 distant) out of a total planned 102 (96 core, 6 distant) mini-arrays online at the time of observation - to get a first look at the low-frequency sky with NenuFAR. One of its targets is the Coma galaxy cluster: a well-known object, host of the prototype radio halo. It also hosts other features of scientific import, including a radio relic, along with a bridge of emission connecting it with the halo. It is thus a well-studied object. In this paper, we show the first confirmed NenuFAR detection of the radio halo and radio relic of the Coma cluster at 34.4 MHz, with associated intrinsic flux density estimates: we find an integrated flux value of 106.3 ± 3.5 Jy for the radio halo, and 102.0 ± 7.4 Jy for the radio relic. These are upper bounds, as they do not include point-source subtraction. We also give an explanation of the technical difficulties encountered in reducing the data, along with the steps taken to resolve them. This will be helpful for other scientific projects aiming to make use of standalone NenuFAR imaging observations in the future.
Oxides RNiO3 (R = rare-earth, R ≠ La) exhibit a metal-insulator (MI) transition at a temperature TMI and an antiferromagnetic (AF) transition at TN. Specific heat (CP) and anelastic spectroscopy measurements were performed on samples of Nd1-xEuxNiO3, 0 <= x <= 0.35. For x = 0, a peak in CP is observed upon cooling and warming at essentially the same temperature TMI = TN ~ 195 K, although the cooling peak is much smaller. For x >= 0.25, differences between cooling and warming curves are negligible, and two well-defined peaks are clearly observed: one at lower temperatures, which defines TN, and the other at TMI. An external magnetic field of 9 T had no significant effect on these results. The elastic compliance (s) and the reciprocal of the mechanical quality factor (Q^-1) of NdNiO3, measured upon warming, show a very sharp peak at essentially the same temperature obtained from CP, and no peak is observed upon cooling. The elastic modulus hardens below TMI much more sharply upon warming, while the cooling and warming curves are reproducible above TMI. On the other hand, for the sample with x = 0.35, the s and Q^-1 curves are very similar upon warming and cooling. The results presented here give credence to the proposition that the MI phase transition changes from first to second order with increasing Eu doping.
In this paper we derive necessary and sufficient conditions for the nonnegativity of Moore-Penrose inverses of unbounded Gram operators between real Hilbert spaces. These conditions include statements on acuteness of certain closed convex cones. The main result generalizes the existing result for bounded operators [11, Theorem 3.6].
Suppression of quarkonia in heavy ion collisions with respect to proton-proton collisions due to the Debye screening of the potential between the heavy quarks was hypothesized to be a signature of the Quark Gluon Plasma (QGP). However, other effects besides Debye screening, such as the statistical recombination of heavy-flavor quark anti-quark pairs or co-mover absorption, can also affect quarkonia production in heavy ion collisions. Quantifying the suppression of an entire family of quarkonium mesons can give us a model-dependent constraint on the temperature. The suppression of the Upsilon can be quantified by calculating the nuclear modification factor, RAA, which is the ratio of the production in Au+Au collisions to the production in p+p scaled by the number of binary collisions. We present our results for mid-rapidity Upsilon(1S+2S+3S) production in p+p and Au+Au collisions at sqrt(sNN) = 200 GeV. The centrality dependence of RAA will be shown for the combined Upsilon(1S+2S+3S) yield.
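Written out, the ratio described above takes the conventional form (a standard definition, with $\langle N_{\mathrm{coll}} \rangle$ the mean number of binary nucleon-nucleon collisions):

```latex
\[
  R_{AA} \;=\; \frac{\mathrm{d}N_{AA}/\mathrm{d}y}
                    {\langle N_{\mathrm{coll}} \rangle \; \mathrm{d}N_{pp}/\mathrm{d}y} .
\]
% R_{AA} = 1 means Au+Au production equals binary-collision-scaled p+p
% production (no nuclear modification); R_{AA} < 1 signals suppression.
```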
Multi-Criteria Decision Analysis (MCDA) methods are widely used in various fields and disciplines. While most of the research has been focused on the development and improvement of new MCDA methods, relatively limited attention has been paid to their appropriate selection for the given decision problem. Their improper application decreases the quality of recommendations, as different MCDA methods deliver inconsistent results. The current paper presents a methodological and practical framework for selecting suitable MCDA methods for a particular decision situation. A set of 56 available MCDA methods was analyzed and, based on that, a hierarchical set of method characteristics and the rule base were obtained. This analysis, the rules, and the modelling of the uncertainty in the decision problem description allowed us to build a framework supporting the selection of an MCDA method for a given decision-making situation. The practical studies indicate consistency between the methods recommended with the proposed approach and those used by the experts in reference cases. The results of the research also showed that the proposed approach can be used as a general framework for selecting an appropriate MCDA method for a given area of decision support, even in cases of data gaps in the decision-making problem description. The proposed framework was implemented within a web platform available for public use at www.mcda.it.
We develop an efficient method for solving transport equations, particularly in the context of electroweak baryogenesis. It provides fully analytical results under mild approximations and can also test semi-analytical results, which are applicable in more general cases. Key elements of our method include the reduction of the second-order differential equations to first order, representing the set of coupled equations as a block matrix of the particle densities and their derivatives, identification of zero modes, and block decomposition of the matrix. We apply our method to calculate the baryon asymmetry of the Universe (BAU) in a Standard Model effective field theory framework of complex Yukawa couplings to determine the sensitivity of the resulting BAU to modifications of various model parameters and rates, and to estimate the effect of the commonly used thin-wall approximation.
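Schematically, the order reduction and block structure work as follows (a generic sketch under the assumption of a linear system $A\,n'' + B\,n' + C\,n = S$ for the vector of particle densities $n$; the paper's actual equations and rates may differ): defining $w = (n, n')^{T}$ gives the first-order block system

```latex
\[
  w' \;=\;
  \begin{pmatrix} 0 & \mathbb{1} \\ -A^{-1}C & -A^{-1}B \end{pmatrix} w
  \;+\;
  \begin{pmatrix} 0 \\ A^{-1}S \end{pmatrix} .
\]
% Zero modes (eigenvectors with vanishing eigenvalue) correspond to
% conserved combinations of densities and are separated off before the
% block decomposition of the remaining matrix.
```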
Wav2vec 2.0 (W2V2) has shown impressive performance in automatic speech recognition (ASR). However, the large model size and the non-streaming architecture make it hard to use under low-resource or streaming scenarios. In this work, we propose a two-stage knowledge distillation method to solve these two problems: the first step is to make the big and non-streaming teacher model smaller, and the second step is to make it streaming. Specifically, we adopt the MSE loss for the distillation of hidden layers and the modified LF-MMI loss for the distillation of the prediction layer. Experiments are conducted on Gigaspeech, Librispeech, and an in-house dataset. The results show that the final distilled student model (DistillW2V2) is 8x faster and 12x smaller than the original teacher model. For the 480ms latency setup, the DistillW2V2's relative word error rate (WER) degradation varies from 9% to 23.4% on test sets, which reveals a promising way to extend the W2V2's application scope.
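For orientation, the hidden-layer part of such a distillation objective can be sketched as below (illustrative PyTorch only; the layer pairing and the linear projection are assumptions, and the modified LF-MMI prediction-layer loss is omitted):

```python
import torch
import torch.nn.functional as F

def hidden_distill_loss(student_feats, teacher_feats, proj):
    """MSE between projected student hidden states and frozen teacher states.

    student_feats / teacher_feats: lists of (batch, time, dim) tensors taken
    from the layers chosen for distillation; proj maps student dim -> teacher dim.
    """
    loss = 0.0
    for s, t in zip(student_feats, teacher_feats):
        loss = loss + F.mse_loss(proj(s), t.detach())  # teacher is not updated
    return loss / len(student_feats)

# Toy shapes: two student layers (dim 256) paired with two teacher layers (dim 768).
proj = torch.nn.Linear(256, 768)
student = [torch.randn(4, 100, 256) for _ in range(2)]
teacher = [torch.randn(4, 100, 768) for _ in range(2)]
print(hidden_distill_loss(student, teacher, proj).item())
```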
In the last decade, two revolutionary concepts in nanomagnetism emerged from research for storage technologies and advanced information processing. The first suggests the use of magnetic domain walls (DWs) in ferromagnetic nanowires to permanently store information in DW racetrack memories. The second proposes a hardware realisation of neuromorphic computing in nanomagnets using nonlinear magnetic oscillations in the GHz range. Both ideas originate from the transfer of angular momentum from conduction electrons to localised spins in ferromagnets, either to push data encoded in DWs along nanowires or to sustain magnetic oscillations in artificial neurones. Even though both concepts share a common ground, they live on very different time scales, which has rendered them incompatible so far. Here, we bridge both ideas by demonstrating the excitation of magnetic auto-oscillations inside nano-scale DWs using pure spin currents.
A pulsed source of entangled photons is desirable for some applications. Yet, such a source has intrinsic problems arising from the simultaneous arrival of the signal and noise photons at the detectors. These problems are analyzed, and practical methods to calculate the number of accidental (or spurious) coincidences are described in detail, and experimentally checked, for the different regimes of interest. The results are useful not only for measuring entanglement, but in all situations where extracting the number of valid coincidences from noisy data is required. As an example of the use of these methods, we present the time-resolved measurement of the concurrence of the field produced by spontaneous parametric down conversion with pump pulses of ns-range duration at a kHz repetition rate. The predicted discontinuous evolution of the entanglement at the edges of the pump pulse is observed.
We propose a method to disentangle linear-encoded facial semantics from StyleGAN without external supervision. The method builds on linear regression and sparse representation learning to make the disentangled latent representations easy to interpret as well. We start by coupling StyleGAN with a stabilized 3D deformable facial reconstruction method to decompose single-view GAN generations into multiple semantics. Latent representations are then extracted to capture interpretable facial semantics. In this work, we make it possible to get rid of labels for disentangling meaningful facial semantics. Also, we demonstrate that guided extrapolation along the disentangled representations can help with data augmentation, which sheds light on handling unbalanced data. Finally, we provide an analysis of our learned localized facial representations and illustrate that the semantic information is encoded in a way that, surprisingly, complies with human intuition. The overall unsupervised design brings more flexibility to representation learning in the wild.
The neutrino long wavelength (just-so) oscillation is revisited as a solution to the solar neutrino problem. We consider the just-so scenario in various cases: in the framework of the solar models with relaxed prediction of the boron neutrino flux, as well as in the presence of non-standard weak-range interactions between the neutrino and matter constituents. We show that the fit of the experimental data in the just-so scenario is not very good for any reasonable value of the $^8B$ neutrino flux, but it substantially improves if the non-standard $\tau$-neutrino--electron interaction is included. These new interactions could also remove the conflict of the just-so picture with the shape of the SN 1987A neutrino spectrum. Special attention is devoted to the potential of the future real-time solar neutrino detectors such as Super-Kamiokande, SNO and BOREXINO, which could provide model-independent tests of the just-so scenario. In particular, these imply a specific deformation of the original solar neutrino energy spectra, and a time variation of the intermediate energy monochromatic neutrino ($^7Be$ and $pep$) signals.
A maximal $\varepsilon$-near perfect matching is a maximal matching which covers at least $(1-\varepsilon)|V(G)|$ vertices. In this paper, we study the number of maximal near perfect matchings in generalized quasirandom and dense graphs. We provide tight lower and upper bounds on the number of $\varepsilon$-near perfect matchings in generalized quasirandom graphs. Moreover, based on these results, we provide a deterministic polynomial time algorithm that for a given dense graph $G$ of order $n$ and a real number $\varepsilon>0$, returns either a conclusion that $G$ has no $\varepsilon$-near perfect matching, or a positive non-trivial number $\ell$ such that the number of maximal $\varepsilon$-near perfect matchings in $G$ is at least $n^{\ell n}$. Our algorithm uses an algorithmic version of the Szemer\'edi Regularity Lemma, and has $O(f(\varepsilon)n^{5/2})$ time complexity. Here $f(\cdot)$ is an explicit function depending only on $\varepsilon$.
We present new basis functions for investigating the 't Hooft equation, the lowest order mesonic Light-Front Tamm-Dancoff equation for $\rm SU(N_C)$ gauge theories. We find that the wave function can be well approximated by the new basis functions and obtain an analytic formula for the mass of the lightest bound state. Its value is consistent with previous results.
In this work, we study the problem of learning a single model for multiple domains. Unlike the conventional machine learning scenario where each domain can have its corresponding model, multiple domains (i.e., applications/users) may share the same machine learning model due to maintenance loads in cloud computing services. For example, a digit-recognition model should be applicable to hand-written digits, house numbers, car plates, etc. Therefore, an ideal model for cloud computing has to perform well on each applicable domain. To address this new challenge from cloud computing, we develop a framework of robust optimization over multiple domains. In lieu of minimizing the empirical risk, we aim to learn a model optimized for the adversarial distribution over multiple domains. Hence, we propose to learn the model and the adversarial distribution simultaneously with a stochastic algorithm for efficiency. Theoretically, we analyze the convergence rate for convex and non-convex models. To the best of our knowledge, this is the first study of the convergence rate of learning a robust non-convex model with a practical algorithm. Furthermore, we demonstrate that the robustness of the framework and the convergence rate can be further enhanced by appropriate regularizers over the adversarial distribution. The empirical study on real-world fine-grained visual categorization and digit recognition tasks verifies the effectiveness and efficiency of the proposed framework.
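A bare-bones version of this simultaneous min-max update can be sketched as follows (a toy linear-regression instance with hypothetical step sizes; the paper's algorithm, models, and regularizers are more general): the model descends on the $p$-weighted loss while the adversarial distribution $p$ over domains takes an exponentiated-gradient ascent step.

```python
import numpy as np

rng = np.random.default_rng(0)
K, d = 3, 5                       # number of domains, feature dimension
X = [rng.normal(size=(200, d)) for _ in range(K)]
w_true = rng.normal(size=d)
y = [x @ w_true + rng.normal(scale=0.1 + k, size=200) for k, x in enumerate(X)]

w = np.zeros(d)                   # model parameters (linear model for brevity)
p = np.ones(K) / K                # adversarial distribution over domains
eta_w, eta_p = 0.05, 0.1

for step in range(500):
    losses = np.array([np.mean((X[k] @ w - y[k]) ** 2) for k in range(K)])
    grads = [2 * X[k].T @ (X[k] @ w - y[k]) / len(y[k]) for k in range(K)]
    # Descent on the p-weighted loss; ascent on p via an exponentiated step.
    w -= eta_w * sum(p[k] * grads[k] for k in range(K))
    p *= np.exp(eta_p * losses)
    p /= p.sum()

print("final domain weights:", np.round(p, 3))  # mass shifts to the hardest domain
```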
We introduce a "deformation" of plumbing. We also define a structure of data used in a calculation by computer aid of the crosscap numbers of alternating knots.
The field of vision and language has witnessed a proliferation of pre-trained foundation models. Most existing methods are independently pre-trained with a contrastive objective like CLIP, an image-to-text generative objective like PaLI, or a text-to-image generative objective like Parti. However, the three objectives can be pre-trained on the same data, image-text pairs, and intuitively they complement each other, as contrasting provides global alignment capacity and generation grants fine-grained understanding. In this work, we present a Contrastive Bi-directional Image-Text generation model (CoBIT), which attempts to unify the three pre-training objectives in one framework. Specifically, CoBIT employs a novel unicoder-decoder structure, consisting of an image unicoder, a text unicoder and a cross-modal decoder. The image/text unicoders can switch between encoding and decoding in different tasks, enabling flexibility and shared knowledge that benefits both image-to-text and text-to-image generation. CoBIT achieves superior performance in image understanding, image-text understanding (Retrieval, Captioning, VQA, SNLI-VE) and text-based content creation, particularly in zero-shot scenarios. For instance, it achieves 82.7% accuracy in zero-shot ImageNet classification, a 9.37 FID score in zero-shot text-to-image generation and 44.8 CIDEr in zero-shot captioning.
The density-constrained time-dependent Hartree-Fock (DC-TDHF) theory is a fully microscopic approach for calculating heavy-ion interaction potentials and fusion cross sections below and above the fusion barrier. We discuss recent applications of the DC-TDHF method to the fusion of light and heavy neutron-rich systems.
Using high resolution near-infrared spectroscopy with the Keck telescope, we have detected the radial velocity signatures of the cool secondary components in four optically identified pre-main-sequence, single-lined spectroscopic binaries. All are weak-lined T Tauri stars with well-defined center of mass velocities. The mass ratio for one young binary, NTTS 160905-1859, is M2/M1 = 0.18+/-0.01, the smallest yet measured dynamically for a pre-main-sequence spectroscopic binary. These new results demonstrate the power of infrared spectroscopy for the dynamical identification of cool secondaries. Visible light spectroscopy, to date, has not revealed any pre-main-sequence secondary stars with masses <0.5 M_sun, while two of the young systems reported here are in that range. We compare our targets with a compilation of the published young double-lined spectroscopic binaries and discuss our unique contribution to this sample.
Returns distributions are heavy-tailed across asset classes. In this note, I examine the implications of this well-known stylized fact for the joint statistics of performance (absolute return) and Sharpe ratio (risk-adjusted return). Using both synthetic and real data, I show that, all other things being equal, the investments with the best in-sample performance are never associated with the best in-sample Sharpe ratios (and vice versa). This counter-intuitive effect is unrelated to the risk-return tradeoff familiar from portfolio theory: it is, rather, a consequence of asymptotic correlations between the sample mean and sample standard deviation of heavy-tailed variables. In addition to its large sample noise, this non-monotonic association of the Sharpe ratio with performance puts into question its status as the gold standard metric of investment quality.
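The effect is straightforward to reproduce in simulation (a minimal sketch assuming i.i.d. Student-t returns as a stand-in for heavy tails; the parameters below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n_assets, n_obs, dof = 5000, 252, 3    # dof = 3: heavy-tailed Student-t returns

returns = rng.standard_t(dof, size=(n_assets, n_obs))
perf = returns.mean(axis=1)            # in-sample performance (mean return)
sharpe = perf / returns.std(axis=1, ddof=1)

best_perf = np.argsort(perf)[-50:]     # top 50 assets by performance
best_sharpe = np.argsort(sharpe)[-50:] # top 50 assets by Sharpe ratio
print("overlap of top-50 sets:", len(set(best_perf) & set(best_sharpe)))
# With heavy tails the overlap is small: extreme sample means tend to come
# from a few extreme draws that also inflate the sample standard deviation,
# which depresses the Sharpe ratio of exactly those assets.
```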
We assess the possibility of shear banding of semidilute rod-like colloidal suspensions under steady shear flow very close to the isotropic-nematic spinodal, using a combination of rheology, small angle neutron scattering, and laser Doppler velocimetry. Model systems are employed which allow for a variation of the length and stiffness of the particles. The rheological signature reveals that these systems are strongly shear thinning at moderate shear rates. It is shown that the longest and most flexible rods undergo the strongest shear thinning and have the greatest potential to form shear bands. Although we find a small but significant gradient of the orientational order parameter throughout the gap of the shear cell, no shear banding transition is traceable in the region of intermediate shear rates. At very low shear rates, gradient banding and wall slip occur simultaneously, but the shear bands are not stable over time.
I present a unified discussion of several recently published results concerning the escalation, timing and severity of violent events in human conflicts and global terrorism, and set them in the wider context of real-world and cyber-based collective violence and illicit activity. I point out how the borders distinguishing between such activities are becoming increasingly blurred in practice -- from insurgency, terrorism, criminal gangs and cyberwars, through to the 2011 Arab Spring uprisings and London riots. I review the robust empirical patterns that have been found, and summarize a minimal mechanistic model which can explain these patterns. I also explain why this mechanistic approach, which is inspired by non-equilibrium statistical physics, fits naturally within the framework of recent ideas within the social science literature concerning analytical sociology. In passing, I flag the fundamental flaws in each of the recent critiques which have surfaced concerning the robustness of these results and the realism of the underlying model mechanisms.
We explore possible pathways for the creation of ultracold polar NaK molecules in their absolute electronic and rovibrational ground state starting from ultracold Feshbach molecules. In particular, we present a multi-channel analysis of the electronic ground and K(4p)+Na(3s) excited state manifold of NaK, analyze the spin character of both the Feshbach molecular state and the electronically excited intermediate states, and discuss possible coherent two-photon transfer paths from Feshbach molecules to rovibronic ground state molecules. The theoretical study is complemented by the demonstration of STIRAP transfer from the X^1\Sigma^+ (v=0) state to the a^3\Sigma^+ manifold in a molecular beam experiment.
Experimental work has shown that non-equilibrium concentration fluctuations arise during free diffusion in fluids, and theoretical analysis has been carried out. The results show that, in usual three-dimensional fluids, the phenomenon is extremely weak, in terms of the amplitude of the fluctuations and of the corrugation of the diffusion wave fronts. In this paper, we show that the phenomenon depends strongly on the dimensionality of the system: by extending the theory to two-dimensional systems, we show that the root mean square amplitude of the fluctuations and the wave front corrugation become much stronger. We also present an evaluation of the Hausdorff dimension of the expected fluctuations. Experimentally, two-dimensional liquid systems can be realised as freely suspended liquid films; experiments and theoretical works on diffusion in such systems showed that the dynamics is deeply affected by the viscous drag exerted by the fluids (e.g. air) surrounding the film. We provide an evaluation of the drag on the fluctuations. In particular, we study the case of a concentration profile that is initially Gaussian: it can be directly compared with the results from a fluorescence recovery after photobleaching (FRAP) experiment. We propose that this theory and the related experiments can be relevant for describing diffusion along the cellular membranes of living organisms.
We consider a problem of diagnostic pattern recognition/classification from neuroimaging data. We propose a common data analysis pipeline for neuroimaging-based diagnostic classification problems using various ML algorithms and processing toolboxes for brain imaging. We illustrate the pipeline application by discovering new biomarkers for diagnostics of epilepsy and depression based on clinical and MRI/fMRI data for patients and healthy volunteers.
We show that the scotogenic dark symmetry can be obtained as a residual subgroup of the global $U(1)_{B-L}$ symmetry already present in the Standard Model. We propose a general framework where the $U(1)_{B-L}$ symmetry is spontaneously broken to an even $\mathcal{Z}_{2n}$ subgroup, setting the general conditions for neutrinos to be Majorana and for dark matter stability in terms of the residual $\mathcal{Z}_{2n}$. Under this general framework, as examples, we build a class of simple models where, in the scotogenic spirit, the dark matter candidate is the lightest particle running inside the neutrino mass loop. The global $U(1)_{B-L}$ symmetry in our framework, being anomaly-free, can also be gauged in a straightforward manner, leading to a richer phenomenology.
A simple proof of the classical subconvexity bound $\zeta(1/2+it) \ll_\epsilon t^{1/6+\epsilon}$ for the Riemann zeta-function is given, and estimation by more refined techniques is discussed. The connections between the Dirichlet divisor problem and the mean square of $|\zeta(1/2+it)|$ are analysed.
We study numerically the scaling properties of scars in the stadium billiard. Using the semiclassical criterion, we have searched systematically for the scars of the same type through a very wide range, from the ground state to as high as the one-millionth state. We have analyzed the integrated probability density along the periodic orbit. The numerical results confirm that the average intensity of certain types of scars is independent of $\hbar$ rather than scaling with $\sqrt{\hbar}$. Our findings confirm the theoretical predictions of Robnik (1989).
Many existing procedures for detecting multiple change-points in data sequences fail in frequent-change-point scenarios. This article proposes a new change-point detection methodology designed to work well in both infrequent and frequent change-point settings. It is made up of two ingredients: one is "Wild Binary Segmentation 2" (WBS2), a recursive algorithm for producing what we call a `complete' solution path to the change-point detection problem, i.e. a sequence of estimated nested models containing $0, \ldots, T-1$ change-points, where $T$ is the data length. The other ingredient is a new model selection procedure, referred to as "Steepest Drop to Low Levels" (SDLL). The SDLL criterion acts on the WBS2 solution path, and, unlike many existing model selection procedures for change-point problems, it is not penalty-based, and only uses thresholding as a certain discrete secondary check. The resulting WBS2.SDLL procedure, combining both ingredients, is shown to be consistent, and to significantly outperform the competition in the frequent change-point scenarios tested. WBS2.SDLL is fast, easy to code and does not require the choice of a window or span parameter.
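To convey the recursive idea that WBS2 builds on, here is a bare-bones CUSUM-based binary segmentation sketch (plain, non-wild, with a simple threshold in place of the SDLL model selection; the actual WBS2.SDLL procedure additionally draws random subintervals recursively):

```python
import numpy as np

def cusum(x):
    """Return (best split b, |CUSUM| at b) for a single change in mean."""
    n, s = len(x), np.cumsum(x)
    b = np.arange(1, n)
    stat = np.abs(np.sqrt((n - b) / (n * b)) * s[b - 1]
                  - np.sqrt(b / (n * (n - b))) * (s[-1] - s[b - 1]))
    k = int(np.argmax(stat))
    return b[k], stat[k]

def binseg(x, lo, hi, thresh, found):
    """Recursively split [lo, hi) wherever the CUSUM statistic exceeds thresh."""
    if hi - lo < 2:
        return
    b, stat = cusum(x[lo:hi])
    if stat > thresh:
        found.append(lo + b)
        binseg(x, lo, lo + b, thresh, found)
        binseg(x, lo + b, hi, thresh, found)

rng = np.random.default_rng(1)
x = np.concatenate([np.zeros(100), np.full(100, 3.0), np.zeros(100)])
x += rng.normal(size=x.size)
cps = []
binseg(x, 0, len(x), thresh=1.3 * np.sqrt(2 * np.log(x.size)), found=cps)
print(sorted(cps))  # change-points detected near 100 and 200
```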
The Large Area Telescope on board the \textit{Fermi} satellite (\textit{Fermi}-LAT) detected more than 1.6 million cosmic-ray electrons/positrons with energies above 60 GeV during its first year of operation. The arrival directions of these events were searched for anisotropies of angular scale extending from $\sim 10^\circ$ up to $90^\circ$, and of minimum energy extending from 60 GeV up to 480 GeV. Two independent techniques were used to search for anisotropies, both resulting in null results. Upper limits on the degree of the anisotropy were set that depended on the analyzed energy range and on the anisotropy's angular scale. The upper limits for a dipole anisotropy ranged from $\sim 0.5\%$ to $\sim 10\%$.
We study the model of a composite scalar made of a pair of scalar fields in $6-2\epsilon$ dimensions, using the equivalence to the renormalizable three-elementary-scalar model under the "compositeness condition." In this model, the composite-scalar field is induced by quantum effects through the vacuum polarization of elementary-scalar fields with 2N species. We first investigate the scale dependences of the coupling constant and masses in the renormalizable three-elementary-scalar model, and derive the results for the composite model by imposing the compositeness condition. The model exhibits the previously found general property that the coupling constant of the composite field is independent of the scale.
DualSPHysics is a weakly compressible smoothed particle hydrodynamics (SPH) Navier-Stokes solver initially conceived to deal with coastal engineering problems, especially those related to wave impact with coastal structures. Since the first release back in 2011, DualSPHysics has been shown to be robust and accurate for simulating extreme wave events, along with a continuous improvement in efficiency thanks to the exploitation of hardware such as graphics processing units (GPUs) for scientific computing or the coupling with wave propagating models such as SWASH and OceanWave3D. Numerous additional functionalities have also been included in the DualSPHysics package over the last few years which allow the simulation of fluid-driven objects. The use of the discrete element method (DEM) has allowed the solver to simulate the interaction among different bodies (sliding rocks, for example), which provides a unique tool to analyse debris flows. In addition, the recent coupling with other solvers like Project Chrono or MoorDyn has been a milestone in the development of the solver. Project Chrono allows the simulation of articulated structures with joints, hinges, sliders and springs, and MoorDyn allows simulating moored structures. Both functionalities make DualSPHysics one of the world-leading meshless models for the simulation of offshore energy harvesting devices. Lately, the present state of maturity of the solver goes beyond single-phase simulations, allowing multi-phase simulations with gas-liquid and a combination of Newtonian and non-Newtonian models, expanding further the capabilities and range of applications for the DualSPHysics solver. These advances and functionalities make DualSPHysics a state-of-the-art meshless solver with emphasis on free-surface flow modelling.
High contrast imaging enables the determination of orbital parameters for substellar companions (planets, brown dwarfs) from the observed relative astrometry and the estimation of model and age-dependent masses from their observed magnitudes or spectra. Combining astrometric positions with radial velocity gives direct constraints on the orbit and on the dynamical masses of companions. A brown dwarf was discovered with the VLT/SPHERE instrument in 2017, which orbits at $\sim$ 11 au around HD 206893. Its mass was estimated between 12 and 50 $M_{Jup}$ from evolutionary models and its photometry. However, given the significant uncertainty on the age of the system and the peculiar spectrophotometric properties of the companion, this mass is not well constrained. We aim at constraining the orbit and dynamical mass of HD 206893 B. We combined radial velocity data obtained with HARPS spectra and astrometric data obtained with the high contrast imaging VLT/SPHERE and VLT/NaCo instruments, with a time baseline of less than three years. We then combined those data with astrometry data obtained by Hipparcos and Gaia with a time baseline of 24 years. We used an MCMC approach to estimate the orbital parameters and dynamical mass of the brown dwarf from those data. We infer a period between 21 and 33 yr and an inclination in the range 20-41{\deg} from pole-on from the relative astrometry of HD 206893 B. The RV data show a significant RV drift over 1.6 yrs. We show that HD 206893 B cannot be the source of this observed RV drift as it would lead to a dynamical mass inconsistent with its photometry and spectra and with Hipparcos and Gaia data. An additional inner (semimajor axis in the range 1.4-2.6 au) and massive ($\sim$ 15 $M_{Jup}$) companion is needed to explain the RV drift, which is compatible with the available astrometric data of the star, as well as with the VLT/SPHERE and VLT/NaCo nondetection.
With 3rd-order statistics of gravitational shear it will be possible to extract valuable cosmological information from ongoing and future weak lensing surveys which is not contained in standard 2nd-order statistics, due to the non-Gaussianity of the shear field. Aperture mass statistics are an appropriate choice for 3rd-order statistics due to their simple form and their ability to separate E- and B-modes of the shear. However, it has been demonstrated that 2nd-order aperture mass statistics suffer from E-/B-mode mixing because it is impossible to reliably estimate the shapes of close pairs of galaxies. This finding has triggered developments of several new 2nd-order statistical measures for cosmic shear. Whether the same developments are needed for 3rd-order shear statistics is largely determined by how severe this E-/B-mixing is for 3rd-order statistics. We test 3rd-order aperture mass statistics against E-/B-mode mixing, and find that the level of contamination is well-described by a function of $\theta/\theta_{\rm min}$, with $\theta_{\rm min}$ being the cut-off scale. At angular scales of $\theta > 10 \;\theta_{\rm min}$, the decrease in the E-mode signal due to E-/B-mode mixing is smaller than 1 percent, and the leakage into B-modes is even less. For typical small-scale cut-offs this E-/B-mixing is negligible on scales larger than a few arcminutes. Therefore, 3rd-order aperture mass statistics can safely be used to separate E- and B-modes and infer cosmological information, for ground-based surveys as well as forthcoming space-based surveys such as Euclid.
Fundamental to integrated photonic quantum computing is an on-chip method for routing and modulating quantum light emission. We demonstrate a hybrid integration platform consisting of arbitrarily designed waveguide circuits and single photon sources. InAs quantum dots (QDs) embedded in GaAs are bonded to an SiON waveguide chip such that the QD emission is coupled to the waveguide mode. The waveguides consist of an SiON core embedded in a SiO2 cladding. A tuneable Mach-Zehnder interferometer modulates the emission between two output ports and can act as a path-encoded qubit preparation device. The single photon nature of the emission was verified by an on-chip Hanbury Brown and Twiss measurement.
FPN (Feature Pyramid Network) has become a basic component of most SoTA one-stage object detectors. Many previous studies have repeatedly proved that FPN can capture better multi-scale feature maps to more precisely describe objects of different sizes. However, for most backbones such as VGG, ResNet, or DenseNet, the feature maps at each layer are downsized to a quarter of their size due to the pooling operation or convolutions with stride 2. The gap of down-scaling-by-2 is large and prevents the FPN from fusing features smoothly. This paper proposes a new SFPN (Synthetic Fusion Pyramid Network) architecture which creates various synthetic layers between layers of the original FPN to enhance the accuracy of light-weight CNN backbones so that they extract objects' visual features more accurately. Finally, experiments prove that the SFPN architecture outperforms both large backbones such as VGG16 and ResNet50 and light-weight backbones such as MobileNetV2 in AP score.
Let V be a finite dimensional representation of the connected complex reductive group H. Denote by G the derived subgroup of H and assume that the categorical quotient of V by G is one dimensional. In this situation there exists a homomorphism, denoted by rad, from the algebra A of G-invariant differential operators on V to the first Weyl algebra. We show that the image of rad is isomorphic to the spherical subalgebra of a Cherednik algebra, whose parameters are determined by the b-function of the relative invariant associated to the prehomogeneous vector space (H : V). If (H : V) is furthermore assumed to be multiplicity free, we obtain a Howe duality between a set of representations of G and modules over a subalgebra of the associative Lie algebra A. Some applications to holonomic modules and H-equivariant D-modules on V are also given.
Let ${\cal M}$ be a map with the underlying graph $\Gamma$. The automorphism group $Aut({\cal M})$ induces a natural action on the set of all vertex-edge-face incident triples, called {\em flags} of ${\cal M}$. The map ${\cal M}$ is said to be a {\em $k$-orbit} map if $Aut({\cal M})$ has $k$ orbits on the set of all flags of ${\cal M}$. It is known that there are seven different classes of $2$-orbit maps, with only four of them corresponding to arc-transitive maps, that is, maps for which $Aut({\cal M})$ acts arc-transitively on the underlying graph $\Gamma$. The Petrie dual operator links these four classes in two pairs, one of which corresponds to the chiral maps and their Petrie duals. In this paper we focus on the other pair of classes of $2$-orbit arc-transitive maps. We investigate the connection of these maps to consistent cycles of the underlying graph with special emphasis on such maps of smallest possible valence, namely $4$. We then give a complete classification of such maps whose underlying graphs are arc-transitive Rose Window graphs.
We consider the problem of building non-invertible quantum symmetries (as characterized by actions of unitary fusion categories) on noncommutative tori. We introduce a general method to construct actions of fusion categories on inductive limit C*-algebras using finite dimensional data, and then apply it to obtain AT-actions of arbitrary Haagerup-Izumi categories on noncommutative 2-tori, of the even part of the $E_{8}$ subfactor on a noncommutative 3-torus, and of $\text{PSU}(2)_{15}$ on a noncommutative 4-torus.
This letter proposes a deep learning-based data-aided active user detection network (D-AUDN) for grant-free sparse code multiple access (SCMA) systems that leverages both the SCMA codebook and the Zadoff-Chu preamble for activity detection. Due to disparate data and preamble distributions as well as codebook collision, existing D-AUDNs experience performance degradation when multiple preambles are associated with each codebook. To address this, a user activity extraction network (UAEN) is integrated within the D-AUDN to extract a-priori activity information from the codebook, improving activity detection of the associated preambles. Additionally, efficient SCMA codebook design and Zadoff-Chu preamble association are considered to further enhance performance.
We discuss the neutrino mass matrix based on the Occam's razor approach in the framework of the seesaw mechanism. We impose four zeros in the Dirac neutrino mass matrix, which give the minimum number of parameters needed for the observed neutrino masses and lepton mixing angles, while the charged lepton mass matrix and the right-handed Majorana neutrino mass matrix are taken to be real diagonal ones. The low-energy neutrino mass matrix has only seven physical parameters. We show successful predictions for the mixing angle $\theta_{13}$ and the CP violating phase $\delta_{CP}$ with the normal mass hierarchy of neutrinos by using the experimental data on the neutrino mass squared differences and the mixing angles $\theta_{12}$ and $\theta_{23}$. The most favored region of $\sin\theta_{13}$ is around $0.13\sim 0.15$, which is completely consistent with the observed value. The CP violating phase $\delta_{CP}$ is favored to be close to $\pm \pi/2$. We also discuss the Majorana phases as well as the effective neutrino mass for the neutrinoless double-beta decay $m_{ee}$, which is around $7\sim 8$ meV. It is extremely remarkable that we can perform a "complete experiment" to determine the low-energy neutrino mass matrix, since we have only seven physical parameters in the neutrino mass matrix. In particular, the two CP violating phases in the neutrino mass matrix are directly given by the two CP violating phases at high energy. Thus, assuming leptogenesis, we can determine the sign of the cosmic baryon asymmetry in the universe from the low-energy experiments on the neutrino mass matrix.
The cosmological constant problem (in its various versions) is arguably the deepest gap in our understanding of theoretical physics, the solution of which may very likely require revisiting the Einstein theory of gravity. In this letter, I argue that the simplest consistent way to decouple gravity from the vacuum energy (and hence solve the problem) is through the introduction of an incompressible gravitational aether fluid. The theory then predicts that the gravitational constant for radiation is 33% larger than that for non-relativistic matter, which is preferred by most cosmological observations (with the exception of light element abundances), but is not probed by current precision tests of gravity. I also show that slow-roll inflation can happen in this theory, with only minor modifications. Finally, interpreting gravitational aether as a thermodynamic description of gravity, I propose a finite-temperature correction to the equation of state of gravity, which would explain the present-day acceleration of the cosmic expansion as a consequence of the formation of stellar-mass black holes.
In its long-duration observation phase, the PLATO satellite will observe two non-overlapping fields for a total of 4 yr. The exact duration of each pointing will be determined 2 yr before launch. Previous estimates of PLATO's yield of Earth-sized planets in the habitable zones (HZs) around solar-type stars ranged between 6 and 280. We use the PLATO Solar-like Light curve Simulator (PSLS) to simulate light curves with transiting planets around bright (m_V <= 11) Sun-like stars at a cadence of 25 s, roughly representative of the >15,000 targets in PLATO's high-priority P1 sample (mostly F5-K7 dwarfs and sub-dwarfs). Our study includes light curves generated from synchronous observations of 6, 12, 18, and 24 of PLATO's 12 cm aperture cameras over both 2 yr and 3 yr of continuous observations. Automated detrending is done with the Wotan software and post-detrending transit detection is performed with the Transit Least Squares (TLS) algorithm. We scale the true positive rates (TPRs) with the expected number of stars in the P1 sample and with modern estimates of the exoplanet occurrence rates and predict the detection of planets with 0.5 R_E <= R_p <= 1.5 R_E in the HZs around F5-K7 dwarf stars. For the (2 yr + 2 yr) long-duration observation phase strategy we predict 11-34 detections, for the (3 yr + 1 yr) strategy we predict 8-25 discoveries. Our study of the effects of stellar variability on shallow transits of Earth-like planets illustrates that our estimates of PLATO's planet yield, which we derive using a photometrically quiet star like the Sun, must be seen as upper limits. In conclusion, PLATO's detection of about a dozen Earth-sized planets in the HZs around solar-type stars will mean a major contribution to this yet poorly sampled part of the exoplanet parameter space with Earth-like planets.
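For readers unfamiliar with the two packages, the detrending-plus-search step looks roughly like the sketch below (illustrative only, with synthetic placeholder data; the paper's actual Wotan settings and PSLS light curves are not reproduced here, and the window length shown is an assumption):

```python
import numpy as np
from wotan import flatten
from transitleastsquares import transitleastsquares

# Placeholder light curve (days, normalized flux); PSLS output would go here.
time = np.arange(0.0, 90.0, 300.0 / 86400.0)          # 5-min cadence, 90 d
flux = 1.0 + 1e-4 * np.random.default_rng(0).normal(size=time.size)

# Detrend stellar variability; the window (in days) must exceed the transit
# duration so the transit itself is not flattened away.
flat, trend = flatten(time, flux, method="biweight",
                      window_length=0.5, return_trend=True)

# Search the detrended series for periodic transit-like signals.
model = transitleastsquares(time, flat)
results = model.power()
print(results.period, results.SDE)   # candidate period and detection strength
```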
We investigate a mechanism to form and keep a planar spatial distribution of satellite galaxies in the Milky Way (MW), which is called the satellite plane. It has been pointed out that the {\Lambda}CDM cosmological model hardly explains the existence of such a satellite plane, so it is regarded as one of the serious problems in current cosmology. We here focus on a rotation of the gravitational potential of a host galaxy, a so-called figure rotation, following the previous suggestion that this effect can induce the tilt of a so-called tube orbit. Our calculation shows that a figure rotation of a triaxial potential forms a stable orbital plane perpendicular to the rotation axis of the potential. Thus, it is suggested that the MW's dark halo is rotating with its axis close to the normal of the satellite plane. Additionally, we find that a small velocity dispersion of the satellites is required to keep the flatness of the planar structure; namely, the standard deviation of their velocities perpendicular to the satellite plane needs to be smaller than their mean rotational velocity on the plane. Although not all of the MW's satellites satisfy this condition, some fraction of them, called member satellites, which are prominently on the plane, satisfy it. We suggest that this picture explaining the observed satellite plane can be achieved by the filamentary accretion of dark matter associated with the formation of the MW and a group infall of the member satellites along this cosmic filament.
We study the spin-1/2 Heisenberg antiferromagnet on a bilayer honeycomb lattice including interlayer frustration. Using a set of complementary approaches, namely Schwinger bosons, dimer series expansion, bond operators, and exact diagonalization, we map out the quantum phase diagram. Analyzing ground state energies and elementary excitation spectra, we find four distinct phases, corresponding to three collinear magnetic long range ordered states, and one quantum disordered interlayer dimer phase. We detail that the latter phase is adiabatically connected to an "exact" singlet product ground state of the bilayer which exists along a line of maximum interlayer frustration. The order within the remaining three phases is also clarified.
The radio access networks (RANs) need to support massive and diverse data traffic with limited spectrum and energy. To cope with this challenge, software-defined radio access network (SDRAN) architectures have been proposed to renovate the RANs. However, current research lacks the design and evaluation of network protocols. In this paper, we address this problem by presenting the protocol design and evaluation of hyper-cellular networks (HyCell), an SDRAN framework making base station (BS) operations globally resource-optimized and energy-efficient (GREEN). Specifically, we first propose a separation scheme to realize the decoupled air interface in HyCell. Then we design a BS dispatching protocol which determines and assigns the optimal BS for serving mobile users, and a BS sleeping protocol to improve the network energy efficiency. Finally, we evaluate the proposed design in our HyCell testbed. Our evaluation validates the feasibility of the proposed separation scheme, demonstrates the effectiveness of BS dispatching, and shows great potential in energy saving through BS sleeping control.
We describe how a coherent optical drive that is near-resonant with the upper rungs of a three-level ladder system, in conjunction with a short pulse excitation, can be used to provide a frequency-tunable source of on-demand single photons. Using an intuitive master equation model, we identify two distinct regimes of device operation: (i) for a resonant drive, the source operates using the Autler-Townes effect, and (ii) for an off-resonant drive, the source exploits the ac Stark effect. The former regime allows for a large frequency tuning range but coherence suffers from timing jitter effects, while the latter allows for high indistinguishability and efficiency, but with a restricted tuning bandwidth due to high required drive strengths and detunings. We show how both these negative effects can be mitigated by using an optical cavity to increase the collection rate of the desired photons. We apply our general theory to semiconductor quantum dots, which have proven to be excellent single-photon sources, and find that scattering of acoustic phonons leads to excitation-induced dephasing and increased population of the higher energy level which limits the bandwidth of frequency tuning achievable while retaining high indistinguishability. Despite this, for realistic cavity and quantum dot parameters, indistinguishabilities of over $90\%$ are achievable for energy shifts of up to hundreds of $\mu$eV, and near-unity indistinguishabilities for energy shifts up to tens of $\mu$eV. Additionally, we clarify the often-overlooked differences between an idealized Hong-Ou-Mandel two-photon interference experiment and its usual implementation with an unbalanced Mach-Zehnder interferometer, pointing out the subtle differences in the single-photon visibility associated with these different setups.
Recent years have seen a surge in the popularity of commercial AI products based on generative, multi-purpose AI systems promising a unified approach to building machine learning (ML) models into technology. However, this ambition of ``generality'' comes at a steep cost to the environment, given the amount of energy these systems require and the amount of carbon that they emit. In this work, we propose the first systematic comparison of the ongoing inference cost of various categories of ML systems, covering both task-specific (i.e. finetuned models that carry out a single task) and `general-purpose' models (i.e. those trained for multiple tasks). We measure deployment cost as the amount of energy and carbon required to perform 1,000 inferences on a representative benchmark dataset using these models. We find that multi-purpose, generative architectures are orders of magnitude more expensive than task-specific systems for a variety of tasks, even when controlling for the number of model parameters. We conclude with a discussion around the current trend of deploying multi-purpose generative ML systems, and caution that their utility should be more intentionally weighed against increased costs in terms of energy and emissions. All the data from our study can be accessed via an interactive demo to carry out further exploration and analysis.
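A minimal sketch of the measurement protocol described above, assuming the codecarbon tracker and a placeholder model and inputs (the study's actual models and benchmark datasets are not reproduced here):

    # Run 1,000 inferences and record the emissions with codecarbon,
    # which also logs the energy consumed (kWh) to emissions.csv.
    from codecarbon import EmissionsTracker
    from transformers import pipeline

    clf = pipeline("text-classification",
                   model="distilbert-base-uncased-finetuned-sst-2-english")
    samples = ["example input"] * 1000   # stand-in for a benchmark dataset

    tracker = EmissionsTracker()
    tracker.start()
    for s in samples:
        clf(s)
    kg_co2 = tracker.stop()              # estimated kg CO2eq for the batch
    print(f"{kg_co2:.6f} kg CO2eq per 1,000 inferences")

Repeating the same loop for a task-specific and a multi-purpose model of comparable parameter count gives the kind of controlled comparison reported above.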
In this paper, we use two different approaches to introduce $q$-analogs of Riemann's zeta function and prove that their values at even integers are related to the $q$-Bernoulli and $q$-Euler numbers introduced by Ismail and Mansour [Analysis and Applications, {\bf{17}}, 6, 2019, 853--895].
Graph clustering or community detection constitutes an important task for investigating the internal structure of graphs, with a plethora of applications in several domains. Traditional techniques for graph clustering, such as spectral methods, typically suffer from high time and space complexity. In this article, we present CoreCluster, an efficient graph clustering framework based on the concept of graph degeneracy, that can be used along with any known graph clustering algorithm. Our approach capitalizes on processing the graph in a hierarchical manner provided by its core expansion sequence, an ordered partition of the graph into different levels according to the k-core decomposition. Such a partition provides an efficient way to process the graph in an incremental manner that preserves its clustering structure, while making the execution of the chosen clustering algorithm much faster due to the smaller size of the graph's partitions on which the algorithm operates. An experimental analysis on a multitude of real and synthetic data demonstrates that our approach can be applied to any clustering algorithm, accelerating the clustering process, while the quality of the clustering structure is preserved or even improved.
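A minimal sketch of the degeneracy-based strategy (our own simplification; the actual CoreCluster framework may differ in its details): cluster the densest core first, then attach lower-core vertices to the majority cluster among their already-labelled neighbours.

    import networkx as nx

    def core_cluster(G, base_clusterer):
        """Run base_clusterer on the innermost k-core, then absorb the
        remaining vertices level by level along the core expansion
        sequence. Assumes integer cluster labels."""
        core = nx.core_number(G)
        k_max = max(core.values())
        labels = dict(base_clusterer(G.subgraph(n for n in G if core[n] == k_max)))
        for k in range(k_max - 1, -1, -1):
            for v in (n for n in G if core[n] == k):
                nbr = [labels[u] for u in G[v] if u in labels]
                labels[v] = (max(set(nbr), key=nbr.count) if nbr
                             else max(labels.values(), default=-1) + 1)
        return labels

    # e.g. with connected components of the top core as the base clustering:
    cc = lambda H: {v: i for i, C in enumerate(nx.connected_components(H)) for v in C}
    print(core_cluster(nx.karate_club_graph(), cc))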
The past two decades have witnessed the great success of the algorithmic modeling framework advocated by Breiman et al. (2001). Nevertheless, the excellent prediction performance of these black-box models relies heavily on the availability of strong supervision, i.e. a large set of accurate and exact ground-truth labels. In practice, strong supervision can be unavailable or expensive, which calls for modeling techniques under weak supervision. In this comment, we summarize the key concepts in weakly supervised learning and discuss some recent developments in the field. Using algorithmic modeling alone under weak supervision might lead to unstable and misleading results. A promising direction would be integrating the data modeling culture into such a framework.
New classes of performance measures have been recently introduced to quantify the transient response to external disturbances of coupled dynamical systems on complex networks. These performance measures are time-integrated quadratic forms in the system's coordinates or their time derivatives. So far, investigations of these performance measures have been restricted to Dirac-$\delta$ impulse disturbances, in which case they can alternatively be interpreted as giving the long time output variances for stochastic white noise power demand/generation fluctuations. Strictly speaking, the approach is therefore restricted to power fluctuating on time scales shorter than the shortest time scales in the swing equations. To account for power production from new renewable energy sources, we extend these earlier works to the relevant case of colored noise power fluctuations, with a finite correlation time $\tau > 0$. We calculate a closed-form expression for generic quadratic performance measures. Applied to specific cases, this leads to a spectral representation of performance measures as a sum over the non-zero modes of the network Laplacian. Our results emphasize the competition between inertia, damping, and the Laplacian modes, whose balance is determined to a large extent by the noise correlation time scale $\tau$.
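Schematically (our notation; the paper's precise definitions may differ), the colored fluctuations are of Ornstein-Uhlenbeck type and the performance measures are time-integrated quadratic forms,

$$\langle \eta_i(t)\,\eta_j(t')\rangle = \delta_{ij}\,\frac{\sigma^{2}}{2\tau}\,e^{-|t-t'|/\tau}, \qquad \mathcal{P} = \int_{0}^{T} \big\langle \delta\boldsymbol{\theta}(t)^{\top} Q\,\delta\boldsymbol{\theta}(t) \big\rangle\,\mathrm{d}t,$$

and the white-noise (Dirac-$\delta$) results are recovered as $\tau \to 0$, since $(\sigma^{2}/2\tau)\,e^{-|t-t'|/\tau} \to \sigma^{2}\,\delta(t-t')$.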
A cost Markov chain is a Markov chain whose transitions are labelled with non-negative integer costs. A fundamental problem on this model, with applications in the verification of stochastic systems, is to compute information about the distribution of the total cost accumulated in a run. This includes the probability of large total costs, the median cost, and other quantiles. While expectations can be computed in polynomial time, previous work has demonstrated that the computation of cost quantiles is harder but can be done in PSPACE. In this paper we show that cost quantiles in cost Markov chains can be computed in the counting hierarchy, thus providing evidence that computing those quantiles is likely not PSPACE-hard. We obtain this result by exhibiting a tight link to a problem in formal language theory: counting the number of words that are both accepted by a given automaton and have a given Parikh image. Motivated by this link, we comprehensively investigate the complexity of the latter problem. Among other techniques, we rely on the so-called BEST theorem for efficiently computing the number of Eulerian circuits in a directed graph.
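The counting problem at the heart of the reduction can be illustrated directly (a brute-force sketch, exponential in the alphabet size; the paper's actual algorithms are far more refined):

    from functools import lru_cache

    def count_words(delta, start, finals, target):
        """Count words accepted by the DFA (delta: (state, letter) -> state)
        whose Parikh image equals target (letter -> count)."""
        letters = tuple(sorted(target))

        @lru_cache(maxsize=None)
        def go(state, remaining):
            if all(r == 0 for r in remaining):
                return 1 if state in finals else 0
            total = 0
            for i, a in enumerate(letters):
                if remaining[i] > 0 and (state, a) in delta:
                    rem = list(remaining); rem[i] -= 1
                    total += go(delta[(state, a)], tuple(rem))
            return total

        return go(start, tuple(target[a] for a in letters))

    # words with Parikh image (a: 2, b: 1) accepted by a DFA requiring a
    # final letter b: only "aab", so the count is 1.
    delta = {(0, 'a'): 0, (0, 'b'): 1, (1, 'a'): 0, (1, 'b'): 1}
    print(count_words(delta, 0, {1}, {'a': 2, 'b': 1}))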
We introduce a new concept of perturbation of closed linear subspaces and operators in Banach spaces, called uniform lambda-adjustment, which is weaker than perturbations by small gap, operator norm, q-norm, and K2-approximation. In arbitrary Banach spaces some of the classical Fredholm stability theorems remain true under uniform lambda-adjustment, while others fail. However, uniformly lambda-adjusted subspaces and linear operators retain their (semi-)Fredholm properties in a Banach space whose dual is Fr\'{e}chet-Urysohn in the weak* topology. We also introduce another concept of perturbation called uniform mu-approximation which is weaker than perturbations by small gap, norm, and compact convergence, yet stronger than uniform lambda-adjustment. We present Fredholm stability theorems for uniform mu-approximation in arbitrary Banach spaces and a theorem on stability of Riesz kernels and ranges for commuting closed essentially Kato operators. Finally, we define the new concepts of a tuple of subspaces and of a complex of subspaces in Banach spaces, and present stability theorems for index and defect numbers of Fredholm tuples and complexes under uniform lambda-adjustment and uniform mu-approximation.
The results of the synthesis and characterization of the optimally doped (La)1.4(Sr1-yCay)1.6Mn2O7 solid solution with y=0, 0.25 and 0.5 are reported. By progressively replacing Sr with the smaller Ca, while keeping fixed the hole concentration due to the divalent dopant, the 'size effect' of the cation itself on the structural, transport and magnetic properties of the bilayered manganite has been analysed. Two different annealing treatments of the solid solution, in pure oxygen and in pure argon, also allowed us to study the effect of the oxygen content variation. The structure and electronic properties of the samples have been investigated by means of X-ray powder diffraction and X-ray absorption spectroscopy measurements. Magnetoresistivity and static magnetization measurements have been carried out to complete the characterization of the samples. Oxygen annealing of the solid solution, which showed a limit at about y=0.5, induces an increase of the Mn average valence state and a transition of the crystal structure from tetragonal to orthorhombic, while the argon annealing induces an oxygen under-stoichiometry and, in turn, a reduction of the Mn average valence state. Along with the Ca substitution, the Jahn-Teller distortion of the MnO6 octahedra is reduced. This has been directly connected to a general enhancement of the transport properties induced by the Ca-doping. For the same cation composition, oxygen over-stoichiometry leads to higher metal-insulator transition temperatures and lower resistivity values. Curie temperatures (TC) decrease with increasing Ca-doping. The lower TC values of all the annealed samples with respect to the 'as prepared' ones are connected to the strong influence on the magnetic interaction of the point defects due to the oxygen content variation.
We start by discussing some theoretical issues of renormalization group transformations and Monte Carlo renormalization group technique. A method to compute the anomalous dimension is proposed and investigated. As an application, we find excellent values for critical exponents in $\lambda \phi^4_3$. Some technical questions regarding the hybrid algorithm and strong coupling expansions, used to compute the critical couplings of the canonical surface, are also briefly discussed.
In this paper we prove the global in time well-posedness of strong solutions to the Quantum-Navier-Stokes equation driven by random initial data and stochastic external force. In particular, we first give a general local well-posedness result. Then, by means of the Bresch-Desjardins entropy, higher order energy estimates, and a continuation argument we prove that the density never vanishes, and thus that local strong solutions are indeed global.
High precision Kepler photometry is used to explore the details of AGB light curves. Since AGB variability has a typical time scale on the order of a year, we discuss at length the removal of long term trends and quarterly changes in Kepler data. Photometry for a small sample of nine SR AGB stars is examined using a 30 minute cadence over a period of 45 months. While undergoing long period variations of many magnitudes, the light curves are shown to be smooth at the millimagnitude level over much shorter time intervals. No flares or other rapid events were detected on the sub-day time scale. The shortest AGB period detected is on the order of 100 days. All the SR variables in our sample are shown to have multiple modes. This is always the first overtone, typically combined with the fundamental. A second common characteristic of SR variables is shown to be the simultaneous excitation of multiple closely separated periods for the same overtone mode. Approximately half the sample had a much longer variation in the light curve, likely a long secondary period. The light curves were all well represented by a combination of sinusoids. However, the properties of the sinusoids are time variable, with irregular variations present at low level. No non-radial pulsations were detected. It is argued that the long secondary period variation seen in many SR variables is intrinsic to the star and linked to multiple mode pulsation.
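A minimal sketch of this kind of period analysis on synthetic data standing in for the detrended Kepler photometry (the periods and amplitudes below are illustrative):

    import numpy as np
    from astropy.timeseries import LombScargle

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 1350, 2000))         # ~45 months, in days
    y = (0.8 * np.sin(2 * np.pi * t / 210)          # fundamental-like mode
         + 0.3 * np.sin(2 * np.pi * t / 110)        # overtone-like mode
         + 0.01 * rng.standard_normal(t.size))      # mmag-level scatter

    freq, power = LombScargle(t, y).autopower(maximum_frequency=0.05)
    print(f"dominant period ~ {1 / freq[np.argmax(power)]:.0f} d")  # ~210 d

Iteratively fitting and subtracting sinusoids at the strongest peaks ("prewhitening") yields the multi-mode decompositions described above.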
This is an exciting time for folks who are looking at neutrino cross sections, and the especially important quasielastic interaction. We are able to inspect several recent results from K2K and MiniBooNE and are looking forward to a couple more high statistics measurements of neutrino and anti-neutrino interactions. There is additional interest because of the need for this cross section information for current and upcoming neutrino oscillation experiments. This paper is a brief review of our current understanding and some puzzles when we compare the recent results with past measurements. I articulate some of the short term challenges facing experimentalists, neutrino event generators, and theoretical work on the quasielastic interaction.
We study inhomogeneous 1+1-dimensional quantum many-body systems described by Tomonaga-Luttinger-liquid theory with general propagation velocity and Luttinger parameter varying smoothly in space, equivalent to an inhomogeneous compactification radius for free boson conformal field theory. This model appears prominently in low-energy descriptions, including for trapped ultracold atoms, while here we present an application to quantum Hall edges with inhomogeneous interactions. The dynamics is shown to be governed by a pair of coupled continuity equations identical to inhomogeneous Dirac-Bogoliubov-de Gennes equations with a local gap and solved by analytical means. We obtain their exact Green's functions and scattering matrix using a Magnus expansion, which generalize previous results for conformal interfaces and quantum wires coupled to leads. Our results explicitly describe the late-time evolution following quantum quenches, including inhomogeneous interaction quenches, and Andreev reflections between coupled quantum Hall edges, revealing a remarkably universal dependence on details at stationarity or at late times out of equilibrium.
The study of toppling on permutations with an extra labeled chip was initiated by the first author with D. Hathcock and P. Tetali (arXiv:2010.11236), where the extra chip was added in the middle. We extend this to all possible locations $p$ as well as values $r$ of the extra chip and give a complete characterization of permutations which topple to the identity. Further, we classify all permutations which are outcomes of the toppling process in this generality, which we call resultant permutations. Resultant permutations turn out to be certain decomposable permutations. The number of configurations toppling to a given resultant permutation is shown to depend purely on the number of left-to-right maxima (or records) of the permutation to the left of $n-p$ and the number of right-to-left minima to the right of $n-p$. The number of permutations toppling to a given resultant permutation (identity or otherwise) is shown to be the binomial transform of a poly-Bernoulli number of type B.
This talk outlines the derivation of a high-energy, transverse momentum cut-off, solution of QCD in which the Regge pole and ``single gluon'' properties of the pomeron are directly related to the confinement and chiral symmetry breaking properties of the hadron spectrum. In first approximation, the pomeron is a single reggeized gluon plus a ``wee parton'' component that compensates for the color and particle properties of the gluon. This solution corresponds to a supercritical phase of Reggeon Field Theory.
In quantum optics, orbital angular momentum (OAM) is very promising for achieving high-dimensional quantum states due to its infinite and discrete eigenvalue spectrum, quantized by the topological charge l. Here, a heralded single-photon source with switchable OAM modes is proposed and demonstrated on a silicon chip. At room temperature, heralded single photons with 11 OAM modes (l=2~6, -6~-1) have been successfully generated and switched through the thermo-optical effect. We believe that such an integrated quantum source with multiple OAM modes operating at room temperature would provide a practical platform for high-dimensional quantum information processing. Moreover, our proposed architecture can also be extended to other material systems to further improve the performance of the OAM quantum source.
We theoretically investigated the dephasing in an Aharonov-Bohm interferometer containing a lateral double quantum dot induced by coupling with a quantum dot charge sensor. We employed the interpolative 2nd-order perturbation theory to include the charge sensing Coulomb interaction. It is shown that the visibility of the Aharonov-Bohm oscillation of the linear conductance decreases monotonically as the sensing Coulomb interaction increases. In particular, for a weak sensing interaction regime, the visibility decreases parabolically, and it behaves linearly for a strong sensing interaction regime.
The model-checking problem for hybrid systems is a well known challenge in the scientific community. Most of the existing approaches and tools are limited to safety properties only, or operate by transforming the hybrid system to be verified into a discrete one, thus losing information on the continuous dynamics of the system. In this paper we present a logic for specifying complex properties of hybrid systems called HyLTL, and we show how it is possible to solve the model-checking problem by translating the formula into an equivalent hybrid automaton. In this way the problem is reduced to a reachability problem on hybrid automata that can be solved by using existing tools.
Discriminantal arrangements are hyperplane arrangements which generalize braid arrangements. They are constructed from given hyperplane arrangements, but their combinatorics are not invariant under combinatorial equivalence. However, it is known that the combinatorics of the discriminantal arrangement are constant on a Zariski open set of the space of hyperplane arrangements. In the present paper, we introduce non-very generic varieties in the space of hyperplane arrangements to classify discriminantal arrangements and show that the Zariski open set is the complement of the non-very generic varieties. We study their basic properties and construction and provide examples, including infinite families of non-very generic varieties. In particular, the construction we call degeneration is a powerful tool for constructing non-very generic varieties. As an application, we provide lists of non-very generic varieties for spaces of small line arrangements.
In this paper, we propose a geometrical proof of the generalized mirror transformation of genus 0 Gromov-Witten invariants of a degree k hypersurface in CP^{N-1}.
We study eigenvalues of non-self-adjoint Schr\"odinger operators on non-trapping asymptotically conic manifolds of dimension $n\ge 3$. Specifically, we are concerned with the following two types of estimates. The first one deals with Keller type bounds on individual eigenvalues of the Schr\"odinger operator with a complex potential in terms of the $L^p$-norm of the potential, while the second one is a Lieb-Thirring type bound controlling sums of powers of eigenvalues in terms of the $L^p$-norm of the potential. We extend the results of Frank (2011), Frank-Sabin (2017), and Frank-Simon (2017) on the Keller and Lieb-Thirring type bounds from the case of Euclidean spaces to that of non-trapping asymptotically conic manifolds. In particular, our results are valid for the operator $\Delta_g+V$ on $\mathbb{R}^n$ with $g$ being a non-trapping compactly supported (or suitably short range) perturbation of the Euclidean metric and $V\in L^p$ complex valued.
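For context, the Euclidean Keller-type bound that serves as the model statement here (Frank 2011; our normalization of the constant is schematic) controls every eigenvalue $\lambda \in \mathbb{C}\setminus[0,\infty)$ of $-\Delta+V$ by

$$|\lambda|^{\gamma} \le C_{\gamma,n} \int_{\mathbb{R}^{n}} |V(x)|^{\gamma+\frac{n}{2}}\,\mathrm{d}x, \qquad 0 < \gamma \le \tfrac{1}{2},$$

while the Lieb-Thirring-type bounds control sums of powers of eigenvalues by the same $L^{\gamma+n/2}$-norm of the potential; the results above transplant both to the asymptotically conic setting.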
The physics of collective optical response of molecular assemblies, pioneered by Dicke in 1954, has long been at the center of theoretical and experimental scrutiny. The influence of the environment on such phenomena is also of great interest due to various important applications in e.g. energy conversion devices. In this manuscript we demonstrate both experimentally and theoretically the spatial modulations of the collective decay rates of molecules placed in proximity to a metal interface. We show in a very simple framework how the cooperative optical response can be analyzed in terms of intermolecular correlations causing interference between the response of different molecules and the polarization induced on a nearby metallic boundary and predict similar collective interference phenomena in excitation energy transfer between molecular aggregates.
Proximity coupled spin-orbit quantum wires have recently been shown to support midgap Majorana states at critical points. We show that in the presence of disorder these systems are prone to the buildup of a second bandcenter anomaly, which is of different physical origin but shares key characteristics with the Majorana state: it is narrow in width, insensitive to magnetic fields, carries unit spectral weight, and is rigidly tied to the band center. Depending on the parity of the number of subgap quasiparticle states, a Majorana mode does or does not coexist with the impurity generated peak. The strong 'entanglement' between the two phenomena may hinder an unambiguous detection of the Majorana by spectroscopic techniques.
The semiclassical limit of full non-commutative gauge theory is known as Poisson gauge theory. In this work we revise the construction of Poisson gauge theory paying attention to the geometric meaning of the structures involved and advance in the direction of a further development of the proposed formalism, including the derivation of Noether identities and conservation of currents. For any linear non-commutativity, $\Theta^{ab}(x)=f^{ab}_c\,x^c$, with $f^{ab}_c$ being structure constants of a Lie algebra, an explicit form of the gauge Lagrangian is proposed. In particular a universal solution for the matrix $\rho$ defining the field strength and the covariant derivative is found. The previously known examples of $\kappa$-Minkowski, $\lambda$-Minkowski and rotationally invariant non-commutativity are recovered from the general formula. The arbitrariness in the construction of Poisson gauge models is addressed in terms of Seiberg-Witten maps, i.e., invertible field redefinitions mapping gauge orbits onto gauge orbits.
Terahertz (THz) waves are electromagnetic waves in the 0.1 to 10 THz frequency range, and THz imaging is utilized in a range of applications, including security inspections, biomedical fields, and the non-destructive examination of materials. However, THz images have low resolution due to the long wavelength of THz waves. Therefore, improving the resolution of THz images is one of the current hot research topics. We propose a novel network architecture called J-Net, which is an improved version of U-Net, to solve THz image super-resolution. It employs simple baseline blocks which can extract low-resolution (LR) image features and learn the mapping of LR images to high-resolution (HR) images efficiently. All training was conducted using the DIV2K+Flickr2K dataset, and we employed the peak signal-to-noise ratio (PSNR) for quantitative comparison. In our comparisons with other THz image super-resolution methods, J-Net achieved a PSNR of 32.52 dB, surpassing other techniques by more than 1 dB. J-Net also demonstrates superior performance on real THz images compared to other methods, confirming better PSNR and visual quality than existing THz image super-resolution approaches.
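The PSNR used for the quantitative comparison is standard; a short reference implementation for 8-bit images:

    import numpy as np

    def psnr(reference, reconstruction, peak=255.0):
        """Peak signal-to-noise ratio in dB between two equal-sized images."""
        diff = reference.astype(np.float64) - reconstruction.astype(np.float64)
        mse = np.mean(diff ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)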
The last-in-tree recognition problem asks whether a given spanning tree can be derived by connecting each vertex with its rightmost left neighbor of some search ordering. In this study, we demonstrate that the last-in-tree recognition problem for Generic Search is $\mathsf{NP}$-complete. We utilize this finding to strengthen a complexity result from order theory. Given a partial order $\pi$ and a set of triples, the $\mathsf{NP}$-complete intermezzo problem asks for a linear extension of $\pi$ where each first element of a triple is not between the other two. We show that this problem remains $\mathsf{NP}$-complete even when the Hasse diagram of the partial order forms a tree of bounded height. In contrast, we give an $\mathsf{XP}$ algorithm for the problem when parameterized by the width of the partial order. Furthermore, we show that, under the assumption of the Exponential Time Hypothesis, the running time of this algorithm is asymptotically optimal.
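For tiny instances the intermezzo problem can be checked by brute force (an exponential illustration only, in contrast to the $\mathsf{XP}$ algorithm above):

    from itertools import permutations

    def intermezzo(elements, order, triples):
        """order: pairs (x, y) meaning x < y in the partial order;
        triples (a, b, c): a must not lie strictly between b and c."""
        for ext in permutations(elements):
            pos = {x: i for i, x in enumerate(ext)}
            if all(pos[x] < pos[y] for x, y in order) and \
               all(not (min(pos[b], pos[c]) < pos[a] < max(pos[b], pos[c]))
                   for a, b, c in triples):
                return ext
        return None

    print(intermezzo("abc", {("a", "b")}, [("c", "a", "b")]))  # ('a', 'b', 'c')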
We present a perturbative treatment of the evolution under their mutual self-gravity of particles displaced off an infinite perfect lattice, both for a static space and for a homogeneously expanding space as in cosmological N-body simulations. The treatment, analogous to that of perturbations to a crystal in solid state physics, can be seen as a discrete (i.e. particle) generalization of the perturbative solution in the Lagrangian formalism of a self-gravitating fluid. Working to linear order, we show explicitly that this fluid evolution is recovered in the limit that the initial perturbations are restricted to modes of wavelength much larger than the lattice spacing. The full spectrum of eigenvalues of the simple cubic lattice contains both oscillatory modes and unstable modes which grow slightly faster than in the fluid limit. A detailed comparison of our perturbative treatment, at linear order, with full numerical simulations is presented, for two very different classes of initial perturbation spectra. We find that the range of validity is similar to that of the perturbative fluid approximation (i.e. up to close to ``shell-crossing''), but that the accuracy in tracing the evolution is superior. The formalism provides a powerful tool to systematically calculate discreteness effects at early times in cosmological N-body simulations.
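Schematically, at linear order each Fourier mode of the displacement field off the lattice evolves independently (our compressed notation; conventions in the paper may differ) as

$$\ddot{\mathbf{u}}_{\mathbf{k}} + 2H\,\dot{\mathbf{u}}_{\mathbf{k}} = -\,\mathcal{D}(\mathbf{k})\,\mathbf{u}_{\mathbf{k}},$$

with $H$ the expansion rate and $\mathcal{D}(\mathbf{k})$ the dynamical matrix of the perturbed lattice. In the long-wavelength limit the growing eigenvalue of $-\mathcal{D}$ tends to $4\pi G\bar{\rho}$, recovering the standard fluid growth equation $\ddot{\delta} + 2H\dot{\delta} = 4\pi G\bar{\rho}\,\delta$, while at wavelengths comparable to the lattice spacing the spectrum contains the oscillatory modes and slightly faster-growing unstable modes described above.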
For general (1+1)-affine Markov processes, we prove ergodicity and exponential ergodicity in total variation distances. Our methods follow the arguments used to establish ergodic properties for L\'{e}vy-driven OU-processes, together with a coupling of CBI-processes constructed from stochastic equations driven by time-space noises. Finally, the strong Feller property is considered.
Voros coefficients of the generalized hypergeometric differential equations with a large parameter are defined and their explicit forms are given for the origin and for the infinity. It is shown that they are Borel summable in some specified regions in the space of parameters and their Borel sums in the regions are given.
The two kinds of indirect CP violation in neutral meson systems are related, in the absence of new weak phases in decay. The result is a model-independent expression relating CP violation in mixing, CP violation in the interference of decays with and without mixing, and the meson mass and width differences. It relates the semileptonic and time-dependent CP asymmetries, and CP-conjugate pairs of time-dependent $D^0$ CP asymmetries. CP violation in the interference of decays with and without mixing is related to the mixing parameters of relevance to model building: the off-diagonal mixing matrix elements $|M_{12}|$, $|\Gamma_{12}|$, and $\phi_{12} = \arg(M_{12}/\Gamma_{12})$. Incorporating this relation into a fit to the $D^0$-$\overline{D^0}$ mixing data implies a level of sensitivity to $|\phi^{D}_{12}|$ of 0.10 rad at $1\sigma$. The formalism is extended to include new weak phases in decay, and in $\Gamma_{12}$. The phases are highly constrained by direct CP violation measurements. Consequently, the bounds on $|\phi^{D}_{12}|$ are not significantly altered, and the effects of new weak phases in decay could be difficult to observe at a high luminosity flavor factory ($D^0$) or at the LHC ($B_s$) via violations of the above relations, unlike in direct CP violation.
In this contribution to the conference "Beyond Einstein: Historical Perspectives on Geometry, Gravitation and Cosmology in the Twentieth Century", we give a critical status report of attempts to explain the late accelerated expansion of the universe by modifications of general relativity. Our brief review of such alternatives to the standard cosmological model addresses mainly readers who have not pursued the vast recent literature on this subject.
We use unoriented versions of instanton and knot Floer homology to prove inequalities involving the Euler characteristic and the number of local maxima appearing in unorientable cobordisms, which mirror results of a recent paper by Juhasz, Miller, and Zemke concerning orientable cobordisms. Most of the subtlety in our argument lies in the fact that maps for non-orientable cobordisms require more complicated decorations than their orientable counterparts. We introduce unoriented versions of the band unknotting number and the refined cobordism distance and apply our results to give bounds on these based on the torsion orders of the Floer homologies. Finally, we show that the difference between the unoriented refined cobordism distance of a knot $K$ from the unknot and the non-orientable slice genus of $K$ can be arbitrarily large.
In this paper, we apply Lagrangian descriptors to study the invariant manifolds that emerge from the top of the two barriers existing in the LiCN<->LiNC isomerization reaction. We demonstrate that the integration times must be large enough compared with the characteristic stability exponents of the periodic orbit under study. The invariant manifolds manifest as singularities in the Lagrangian descriptors. Furthermore, we develop an equivalent potential energy surface with 2 degrees of freedom, which reproduces with great accuracy previous results [Phys. Rev. E 99, 032221 (2019)]. This surface allows the use of an adiabatic approximation to develop a simpler potential energy with only 1 degree of freedom. The reduced dimensional model is still able to qualitatively describe the results observed with the original 2-degrees-of-freedom potential energy landscape. Likewise, it is also used to study more simply the influence on the Lagrangian descriptors of a bifurcation, where some of the previous invariant manifolds emerge, even before it takes place.
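A minimal sketch of a Lagrangian descriptor of arc-length type (the toy flow below is illustrative, not the LiCN landscape): trajectories are integrated forward and backward for a time $\tau$, and ridges or singularities of the resulting field over a grid of initial conditions trace the invariant manifolds, provided $\tau$ is long compared with the inverse stability exponents.

    import numpy as np
    from scipy.integrate import solve_ivp

    def lagrangian_descriptor(field, x0, tau):
        """Arc length of the trajectory through x0, forward plus backward."""
        def rhs(sign):
            def f(t, s):
                v = field(s[:-1])
                return np.append(sign * v, np.linalg.norm(v))  # last slot: arc length
            return f
        total = 0.0
        for sign in (+1.0, -1.0):
            sol = solve_ivp(rhs(sign), (0.0, tau), np.append(x0, 0.0), rtol=1e-8)
            total += sol.y[-1, -1]
        return total

    saddle = lambda x: np.array([x[0], -x[1]])   # toy linear saddle flow
    print(lagrangian_descriptor(saddle, np.array([0.3, 0.4]), tau=5.0))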
The recently discovered (Rb,Cs)EuFe4As4 compounds exhibit an unusual combination of superconductivity (Tc = 35 K) and ferromagnetism (Tm = 15 K). We have performed a series of x-ray diffraction, ac magnetic susceptibility, dc magnetization, and electrical resistivity measurements on both RbEuFe4As4 and CsEuFe4As4 to pressures as high as 30 GPa. We find that the superconductivity onset is suppressed monotonically by pressure while the magnetic transition is enhanced at initial rates of dTm/dP = 1.7 K/GPa and 1.5 K/GPa for RbEuFe4As4 and CsEuFe4As4, respectively. Near 7 GPa, Tc onset and Tm become comparable. At higher pressures, signatures of bulk superconductivity gradually disappear. Room temperature x-ray diffraction measurements suggest the onset of a transition from tetragonal (T) to a half collapsed-tetragonal (hcT) phase at 10 GPa (RbEuFe4As4) and 12 GPa (CsEuFe4As4). The ability to tune Tc and Tm into coincidence with relatively modest pressures highlights (Rb,Cs)EuFe4As4 compounds as ideal systems to study the interplay of superconductivity and ferromagnetism.
We prove that a generic homogeneous polynomial of degree $d$ is determined, up to a nonzero constant multiplicative factor, by the vector space spanned by its partial derivatives of order $k$ whenever $k\leq\frac{d}{2}-1$.
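A quick computational illustration with sympy (our own sketch; it verifies only the easy direction, namely that scalar multiples produce the same span of order-$k$ partials):

    import sympy as sp
    from itertools import combinations_with_replacement

    def derivative_span(f, variables, k):
        """Row-reduced basis of the span of the order-k partials of f,
        written in the monomial basis of degree deg(f) - k."""
        d = sp.Poly(f, *variables).total_degree()
        basis = [sp.Mul(*m) for m in combinations_with_replacement(variables, d - k)]
        rows = []
        for idx in combinations_with_replacement(variables, k):
            p = sp.Poly(sp.diff(f, *idx), *variables)
            rows.append([p.coeff_monomial(b) for b in basis])
        return sp.Matrix(rows).rref()[0]

    x, y, z = sp.symbols('x y z')
    f = x**3*y + 2*y**2*z**2 - x*z**3 + y*z**3    # d = 4, so k <= d/2 - 1 = 1
    assert derivative_span(f, (x, y, z), 1) == derivative_span(5*f, (x, y, z), 1)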
We discuss the thermodynamic potential of a charged Bose gas at finite chemical potential in arbitrary dimensions. The critical temperature for Bose-Einstein condensation is investigated. In the case of a compactified background metric, it is shown that the critical temperature depends on the size of the extra spaces. The asymmetry of the "Kaluza-Klein charge" is also discussed.
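For orientation, recall the textbook ideal-gas result in $d>2$ non-compact dimensions, which the charged gas and compactified extra dimensions considered here modify: the condition $n\lambda_{T_c}^{d}=\zeta(d/2)$, with thermal wavelength $\lambda_T=\sqrt{2\pi\hbar^{2}/(m k_B T)}$, gives

$$k_B T_c = \frac{2\pi\hbar^{2}}{m}\left(\frac{n}{\zeta(d/2)}\right)^{2/d},$$

and since $\zeta(d/2)$ diverges as $d \to 2^{+}$, an ideal uniform gas has no condensation at finite temperature for $d \le 2$.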
We propose a non-minimal left-right symmetric model (LRSM) with parity symmetry where the fermion mixings arise as a result of imposing an ${\bf S}_{3}\otimes {\bf Z}_{2}$ flavor symmetry, and an extra ${\bf Z}^{e}_{2}$ symmetry is considered to suppress some Yukawa couplings in the lepton sector. As a consequence, the effective neutrino mass matrix possesses an approximate $\mu-\tau$ symmetry. The breaking of the $\mu-\tau$ symmetry induces a sizable nonzero $\theta_{13}$, and the deviation of $\theta_{23}$ from $45^{\circ}$ is strongly controlled by a free parameter $\epsilon$ and the complex neutrino masses. An analytic study of the extreme Majorana phases is then carried out, since these turn out to be relevant to enhance or suppress the reactor and atmospheric angles. We have thus constrained the parameter space for the $\epsilon$ parameter and the lightest neutrino mass that accommodate the mixing angles. The highlighted results are: a) the normal hierarchy is ruled out since the reactor angle comes out tiny for any values of the Majorana phases; b) for the inverted hierarchy there is one combination of the extreme phases where the values of the reactor and atmospheric angles are compatible up to $2$-$3~\sigma$ C.L., but the parameter space is tight; c) the model favors the degenerate ordering for one combination of the extreme Majorana phases. In this case, the reactor and atmospheric angles are compatible with the experimental data for a large set of values of the free parameters. Therefore, this model may be testable by future results that the Nova and KamLAND-Zen collaborations will provide.
Zero-index metamaterials (ZIMs) offer unprecedented ways to manipulate the flow of light, and are of interest for a wide range of applications including optical cloaking, super-coupling, and unconventional phase-matching properties in nonlinear optics. Impedance-matched ZIMs can be obtained through a photonic Dirac-cone (PDC) dispersion induced by an accidental degeneracy of two linear bands - typically an electric monopole mode and a transverse magnetic dipole mode - at the center of the Brillouin zone. Consequently, PDC can only be achieved for a particular combination of geometric parameters of the metamaterial, and hence is sensitive to fabrication imperfections. These fabrication imperfections may limit its usefulness in practical applications. In this work we overcome this obstacle and demonstrate a robust all-dielectric (AD) ZIM that supports PDC dispersion over a wide parameter space. Our structure, consisting of an array of Si pillars on a silica substrate, is fabricated on a silicon-on-insulator (SOI) platform and operates at telecom wavelengths.
Membership Inference Attacks (MIA) aim to infer whether a target data record has been utilized for model training or not. Prior attempts have quantified the privacy risks of language models (LMs) via MIAs, but there is still no consensus on whether existing MIA algorithms can cause remarkable privacy leakage on practical Large Language Models (LLMs). Existing MIAs designed for LMs can be classified into two categories: reference-free and reference-based attacks. Both are based on the hypothesis that training records consistently have a higher probability of being sampled. Nevertheless, this hypothesis heavily relies on the overfitting of target models, which will be mitigated by multiple regularization methods and the generalization of LLMs. Reference-based attacks seem to achieve promising effectiveness in LLMs, measuring a more reliable membership signal by comparing the probability discrepancy between the target model and the reference model. However, the performance of reference-based attacks is highly dependent on a reference dataset that closely resembles the training dataset, which is usually inaccessible in practical scenarios. Overall, existing MIAs are unable to effectively unveil privacy leakage over practical fine-tuned LLMs that are overfitting-free and private. We propose a Membership Inference Attack based on Self-calibrated Probabilistic Variation (SPV-MIA). Specifically, since memorization in LLMs is inevitable during the training process and occurs before overfitting, we introduce a more reliable membership signal, probabilistic variation, which is based on memorization rather than overfitting. Furthermore, we introduce a self-prompt approach, which constructs the dataset to fine-tune the reference model by prompting the target LLM itself. In this manner, the adversary can collect a dataset with a similar distribution from public APIs.
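A schematic sketch of the calibrated signal (names, the perturbation routine, and the threshold rule are placeholders, not the exact SPV-MIA procedure):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def log_likelihood(model, tok, text):
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, labels=ids)
        return -out.loss.item() * ids.size(1)    # total log-prob, up to the label shift

    def spv_score(target, reference, tok, text, perturb, n=8):
        """Probabilistic variation of text under the target model, calibrated
        by a reference model fine-tuned on self-prompted samples."""
        def variation(model):
            base = log_likelihood(model, tok, text)
            neigh = sum(log_likelihood(model, tok, perturb(text)) for _ in range(n)) / n
            return base - neigh                   # local peakedness of the likelihood
        return variation(target) - variation(reference)

    # usage (placeholder paths): predict "member" when the score exceeds a
    # threshold chosen on a validation split.
    # tok = AutoTokenizer.from_pretrained("gpt2")
    # target = AutoModelForCausalLM.from_pretrained("path/to/finetuned-target")
    # reference = AutoModelForCausalLM.from_pretrained("path/to/self-prompted-ref")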