As binary systems move inside galaxies, they interact with the dark matter halo. This interaction leads to the accretion of dark matter particles onto the binary components. The accreted dark matter increases the mass of the binary components and hence the total mass of the system. This mass increase can affect other physical parameters of the system, such as its orbital period. In this work, we estimated the period change of some known compact binary systems due to the accretion of dark matter particles onto them. Our conclusion is that, for WIMP particles with masses in the range $\simeq 10\ \mathrm{GeV}\,c^{-2}$ and a dark matter density as high as the dark matter density around the Sun, the period change due to accretion of dark matter particles is negligible compared to the period change due to the emission of gravitational waves from the systems.
In this paper, we recover sparse signals from their noisy linear measurements by solving nonlinear differential inclusions, which is based on the notion of inverse scale space (ISS) developed in applied mathematics. Our goal here is to bring this idea to address a challenging problem in statistics, \emph{i.e.} finding the oracle estimator which is unbiased and sign-consistent using dynamics. We call our dynamics \emph{Bregman ISS} and \emph{Linearized Bregman ISS}. A well-known shortcoming of LASSO and any convex regularization approaches lies in the bias of estimators. However, we show that under proper conditions, there exists a bias-free and sign-consistent point on the solution paths of such dynamics, which corresponds to a signal that is the unbiased estimate of the true signal and whose entries have the same signs as those of the true signal, \emph{i.e.} the oracle estimator. Therefore, their solution paths are regularization paths better than the LASSO regularization path, since the points on the latter path are biased when sign-consistency is reached. We also show how to efficiently compute their solution paths in both continuous and discretized settings: the full solution paths can be exactly computed piece by piece, and a discretization leads to \emph{Linearized Bregman iteration}, which is a simple iterative thresholding rule and easy to parallelize. Theoretical guarantees such as sign-consistency and minimax optimal $l_2$-error bounds are established in both continuous and discrete settings for specific points on the paths. Early-stopping rules for identifying these points are given. The key treatment relies on the development of differential inequalities for differential inclusions and their discretizations, which extends the previous results and leads to exponentially fast recovery of sparse signals before selecting wrong ones.
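As a concrete illustration of the discretization mentioned in the abstract above, here is a minimal sketch of one common form of the Linearized Bregman iteration (the step size, the parameter `kappa`, and the toy recovery problem are illustrative choices, not values from the paper):

```python
import numpy as np

def shrink(z, thresh):
    # soft-thresholding: the proximal map of the l1 norm
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

def linearized_bregman(A, y, kappa=5.0, n_iter=3000):
    """One common form of the Linearized Bregman iteration for
    sparse recovery from y ~ A x (illustrative parameters)."""
    m, n = A.shape
    # step size scaled by kappa to keep the iteration stable
    step = 1.0 / (kappa * np.linalg.norm(A, 2) ** 2)
    z = np.zeros(n)
    x = np.zeros(n)
    for _ in range(n_iter):
        z = z + step * A.T @ (y - A @ x)  # dual (mirror-descent) update
        x = kappa * shrink(z, 1.0)        # simple thresholding step
    return x

# toy example: recover a 3-sparse signal from 40 noiseless measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = linearized_bregman(A, y)
```

Each iteration is a gradient step on the residual in the dual variable `z` followed by coordinate-wise soft-thresholding, which is what makes the rule easy to parallelize.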
In this paper, we prove a family of identities for closed and strictly convex hypersurfaces in the sphere and hyperbolic/de Sitter space. As applications, we prove Blaschke-Santal\'o type inequalities in the sphere and hyperbolic/de Sitter space, which generalize the previous work of Gao, Hug and Schneider \cite{GHS03}. We also prove the quermassintegral inequalities in hyperbolic/de Sitter space.
Heterogeneous Graphs (HGs) can effectively model complex relationships in the real world via multi-type nodes and edges. In recent years, inspired by self-supervised learning, contrastive Heterogeneous Graph Neural Networks (HGNNs) have shown great potential by utilizing data augmentation and contrastive discriminators for downstream tasks. However, data augmentation is still limited by the need to preserve the integrity of the graph data. Furthermore, the contrastive discriminators suffer from sampling bias and lack local heterogeneous information. To tackle the above limitations, we propose a novel Generative-Enhanced Heterogeneous Graph Contrastive Learning (GHGCL) model. Specifically, we first propose a contrastive paradigm enhanced by heterogeneous graph generative learning. This paradigm includes: 1) a contrastive view augmentation strategy using a masked autoencoder; 2) position-aware and semantics-aware sampling strategies for positive samples and hard negative samples; 3) a hierarchical contrastive learning strategy for capturing local and global information. Furthermore, the hierarchical contrastive learning and sampling strategies constitute an enhanced contrastive discriminator from the generative-contrastive perspective. Finally, we compare our model with seventeen baselines on eight real-world datasets. Our model outperforms the latest contrastive and generative baselines on node classification and link prediction tasks. To reproduce our work, we have open-sourced our code at https://anonymous.4open.science/r/GC-HGNN-E50C.
Thanks to the rise of self-supervised learning, automatic speech recognition (ASR) systems now achieve near-human performance on a wide variety of datasets. However, they still lack generalization capability and are not robust to domain shifts like accent variations. In this work, we use speech audio representing four different French accents to create fine-tuning datasets that improve the robustness of pre-trained ASR models. By incorporating various accents in the training set, we obtain both in-domain and out-of-domain improvements. Our numerical experiments show that we can reduce error rates by up to 25% (relative) on African and Belgian accents compared to single-domain training while keeping a good performance on standard French.
Properties of atomic nuclei important for the prediction of astrophysical reaction rates are reviewed. In the first part, a recent simulation of evolution and nucleosynthesis of stars between 15 and 25 solar masses is presented. This study is used to illustrate the required nuclear input as well as to give examples of the sensitivity to certain rates. The second part focuses on the prediction of nuclear rates in the statistical model (Hauser-Feshbach) and direct capture (DWBA). Some of the important ingredients are addressed. Discussed in more detail are approaches to predict level densities, parity distributions, and optical alpha+nucleus potentials.
We study the Fundamental Theorem of Asset Pricing for a general financial market under Knightian Uncertainty. We adopt a functional analytic approach which requires neither specific assumptions on the class of priors $\mathcal{P}$ nor on the structure of the state space. Several aspects of modeling under Knightian Uncertainty are considered and analyzed. We show the need for a suitable adaptation of the notion of No Free Lunch with Vanishing Risk and discuss its relation to the choice of an appropriate filtration. In an abstract setup, we show that absence of arbitrage is equivalent to the existence of \emph{approximate} martingale measures sharing the same polar set of $\mathcal{P}$. We then specialize the results to a discrete-time framework in order to obtain true martingale measures.
Many interesting classes of maps from homotopical algebra can be characterised as those maps with the right lifting property against certain sets of maps (such classes are sometimes referred to as cofibrantly generated). In a more sophisticated notion due to Garner (referred to as algebraically cofibrantly generated) the set of maps is replaced with a diagram over a small category. We give a yet more general definition where the set or diagram of maps is replaced with a vertical map in a Grothendieck fibration. In addition to an interesting new view of the existing examples above, we get new notions, such as computable lifting problems in presheaf assemblies, and internal lifting problems in a topos. We show that under reasonable conditions one can define a notion of universal lifting problem and carry out step-one of Garner's small object argument. We give explicit descriptions of what the general construction looks like in some examples.
We consider a bouncing Universe model which explains the flatness of the primordial scalar spectrum via a complex scalar field that rolls down its negative quartic potential and dominates the Universe. We show that in this model, there exists a rapid contraction regime of classical evolution. We calculate the power spectrum of tensor modes in this scenario. We find that it is blue and its amplitude is typically small, leading to mild constraints on the parameters of the model.
The microscopic structure of a charge stripe in an antiferromagnetic insulator is studied within the t-Jz model using analytical and numerical approaches. We demonstrate that a stripe in an antiferromagnet should be viewed as a system of composite holon-spin-polaron excitations condensed at the self-induced antiphase domain wall (ADW) of the antiferromagnetic spins. The properties of such excitations are studied in detail with numerical and analytical results for various quantities being in very close agreement. A picture of the stripe as an effective one-dimensional (1D) band of such excitations is also in very good agreement with numerical data. These results emphasize the primary role of kinetic energy in favoring the stripe as a ground state. A comparative analysis suggests the effect of pairing and collective meandering on the energetics of the stripe formation to be secondary. The implications of this microscopic picture of fermions bound to the 1D antiferromagnetic ADW for the effective theories of the stripe phase in the cuprates are discussed.
For $a/q\in\mathbb{Q}$ the Estermann function is defined as $D(s,a/q):=\sum_{n\geq1}d(n)n^{-s}\operatorname{e}(n\frac aq)$ if $\Re(s)>1$ and by meromorphic continuation otherwise. For $q$ prime, we compute the moments of $D(s,a/q)$ at the central point $s=1/2$, when averaging over $1\leq a<q$. As a consequence we deduce the asymptotic for the iterated moment of Dirichlet $L$-functions $\sum_{\chi_1,\dots,\chi_k\mod q}|L(\frac12,\chi_1)|^2\cdots |L(\frac12,\chi_k)|^2|L(\frac12,\chi_1\cdots \chi_k)|^2$, obtaining a power saving error term. Also, we compute the moments of certain functions defined in terms of continued fractions. For example, writing $f_{\pm}(a/q):=\sum_{j=0}^r (\pm1)^jb_j$ where $[0;b_0,\dots,b_r]$ is the continued fraction expansion of $a/q$ we prove that for $k\geq2$ and $q$ prime one has $\sum_{a=1}^{q-1}f_{\pm}(a/q)^k\sim2 \frac{\zeta(k)^2}{\zeta(2k)} q^k$ as $q\to\infty$.
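The continued-fraction functional in the abstract above is easy to experiment with numerically; the following sketch (function names are my own) computes the expansion $[0;b_0,\dots,b_r]$ of $a/q$ by the Euclidean algorithm and the alternating sums $f_\pm$:

```python
def cf_expansion(a, q):
    """Partial quotients [b0, ..., br] of a/q = [0; b0, ..., br],
    for 0 < a < q, via the Euclidean algorithm applied to q/a."""
    bs = []
    num, den = q, a  # after the leading 0, expand q/a
    while den:
        bs.append(num // den)
        num, den = den, num % den
    return bs

def f_pm(a, q, sign=+1):
    """f_+(a/q) (sign=+1) or f_-(a/q) (sign=-1): the (alternating)
    sum of the partial quotients."""
    return sum((sign ** j) * b for j, b in enumerate(cf_expansion(a, q)))

# example: 7/16 = [0; 2, 3, 2], so f_+ = 2 + 3 + 2 = 7
# and f_- = 2 - 3 + 2 = 1
```

For $k=2$ the predicted constant is $2\zeta(2)^2/\zeta(4)=5$, so the theorem says that $\sum_a f_{\pm}(a/q)^2 / q^2$ tends to $5$ as the prime $q$ grows, which can be probed numerically with the functions above.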
In this paper we construct a new basis for the cyclotomic completion of the center of the quantum $\mathfrak{gl}_N$ in terms of the interpolation Macdonald polynomials. Then we use a result of Okounkov to provide a dual basis with respect to the quantum Killing form (or Hopf pairing). The main applications are: 1) cyclotomic expansions for the $\mathfrak{gl}_N$ Reshetikhin--Turaev link invariants and the universal $\mathfrak{gl}_N$ knot invariant; 2) an explicit construction of the unified $\mathfrak{gl}_N$ invariants for integral homology 3-spheres using universal Kirby colors. These results generalize those of Habiro for $\mathfrak{sl}_2$. In addition, we give a simple proof of the fact that the universal $\mathfrak{gl}_N$ invariant of any evenly framed link and the universal $\mathfrak{sl}_N$ invariant of any $0$-framed algebraically split link are $\Gamma$-invariant, where $\Gamma=Y/2Y$ with the root lattice $Y$.
We report bulk superconductivity at 2.5 K in the LaO$_{0.5}$F$_{0.5}$BiSe$_2$ compound through DC magnetic susceptibility and electrical resistivity measurements. The synthesized LaO$_{0.5}$F$_{0.5}$BiSe$_2$ compound crystallizes in a tetragonal structure with space group $P4/nmm$, and the Rietveld-refined lattice parameters are $a = 4.15(1)$ Å and $c = 14.02(2)$ Å. The lower critical field $H_{c1} = 40$ Oe at a temperature of 2 K is estimated through low-field magnetization measurements. The LaO$_{0.5}$F$_{0.5}$BiSe$_2$ compound shows metallic normal-state electrical resistivity with a residual resistivity value of 1.35 m$\Omega$-cm. The compound is a type-II superconductor, and the estimated $H_{c2}(0)$ value obtained by the WHH formula is above 20 kOe for the 90% $R_n$ criterion. The superconducting transition temperature decreases with applied pressure up to around 1.68 GPa, and at higher pressures a high-$T_c$ phase emerges with a possible onset $T_c$ above 5 K at 2.5 GPa.
Suppose that $p$ is a prime, $G$ is a finite group and $H$ is a strongly $p$-embedded subgroup in $G$. We consider the possibility that $F^*(H)$ is a simple group of Lie rank 2 defined in characteristic $p$.
We consider the problem of uplink power control for distributed massive multiple-input multiple-output systems where the base stations (BSs) are equipped with 1-bit analog-to-digital converters (ADCs). The scenario with a single-user equipment (UE) is first considered to provide insights into the signal-to-noise-and-distortion ratio (SNDR). With a single BS, the SNDR is a unimodal function of the UE transmit power. With multiple BSs, the SNDR at the output of the joint combiner can be made unimodal by adding properly tuned dithering at each BS. As a result, the UE can be effectively served by multiple BSs with 1-bit ADCs. Considering the signal-to-interference-plus-noise-and-distortion ratio (SINDR) in the multi-UE scenario, we aim at optimizing the UE transmit powers and the dithering at each BS based on the min-power and max-min-SINDR criteria. To this end, we propose three algorithms with different convergence and complexity properties. Numerical results show that, if the desired SINDR can only be achieved via joint combining across multiple BSs with properly tuned dithering, the optimal UE transmit power is imposed by the distance to the farthest serving BS (unlike in the unquantized case). In this context, dithering plays a crucial role in enhancing the SINDR, especially for UEs with significant path loss disparity among the serving BSs.
A symmetry extending the $T^2$-symmetry of the noncommutative torus $T^2_q$ is studied in the category of quantum groups. This extended symmetry is given by the quantum double-torus defined as a compact matrix quantum group consisting of the disjoint union of $T^2$ and $T^2_{q^2}$. The bicross-product structure of the polynomial Hopf algebra of the quantum double-torus is computed. The Haar measure and the complete list of unitary irreducible representations of the quantum double-torus are determined explicitly.
We perform a systematic application of the hybrid particle-field molecular dynamics technique [Milano et al., J. Chem. Phys. 2009, 130, 214106] to study interfacial properties and the potential of mean force (PMF) for separating nanoparticles (NPs) in a melt. Specifically, we consider silica NPs, either bare or grafted with polystyrene chains, aiming to shed light on the interactions among free and grafted chains that affect the dispersion of NPs in the nanocomposite. The proposed hybrid models perform well in capturing the local structure of the chains, and in particular their density profiles, documenting the existence of the "wet-brush-to-dry-brush" transition. Using these models, the PMF between pairs of ungrafted and grafted NPs in a polystyrene matrix is calculated. Moreover, we estimate the three-particle contribution to the total PMF and its role in regulating the phase separation on the nanometer scale. In particular, the multi-particle contribution to the PMF can explain the complex experimental morphologies observed at low grafting densities. More generally, we propose this approach, and the models utilized here, for a molecular understanding of specific systems and of the impact of their chemical nature on the final properties of the composite.
Primary definitions, notation and general observations of the finite fibonomial operator calculus (ffoc) are presented. Kwasniewski's combinatorial interpretation of fibonomial coefficients via the Fibonacci cobweb poset is given. Some elements of the incidence algebra of the Fibonacci cobweb poset are defined.
In this work, we implement goal-oriented error control and spatial mesh adaptivity for stationary fluid-structure interaction. The a posteriori error estimator is realized using the dual-weighted residual method, in which the adjoint equation arises. The fluid-structure interaction problem is formulated within a variational-monolithic framework using arbitrary Lagrangian-Eulerian coordinates. The overall problem is nonlinear and solved with Newton's method. We specifically consider the FSI-1 benchmark problem, in which quantities of interest include the elastic beam displacements, drag, and lift. The implementation is provided open-source and published on GitHub: https://github.com/tommeswick/goal-oriented-fsi. Possible extensions are discussed in the source code and in the conclusions of this paper.
We examine the sensitivity of the Love and the quasi-Rayleigh waves to model parameters. Both waves are guided waves that propagate in the same model of an elastic layer above an elastic halfspace. We study their dispersion curves without any simplifying assumptions, beyond the standard approach of elasticity theory in isotropic media. We examine the sensitivity of both waves to elasticity parameters, frequency and layer thickness, for varying frequency and different modes. In the case of Love waves, we derive and plot the absolute value of a dimensionless sensitivity coefficient in terms of partial derivatives, and perform an analysis to find the optimum frequency for determining the layer thickness. For a coherency of the background information, we briefly review the Love-wave dispersion relation and provide details of the less common derivation of the quasi-Rayleigh relation in an appendix. We compare that derivation to past results in the literature, finding certain discrepancies among them.
In this paper, we investigate the optical behavior of a quantum Schwarzschild black hole with a spacetime solution including a parameter $\lambda$ that encodes its discretization. Concretely, we derive the effective potential of such a solution. In particular, we study the circular orbits around the quantum black hole. We find that the effective potential is characterized by a minimum and a maximum, yielding double photon spheres denoted by $r_{p_1}$ and $r_{p_2}$, respectively. Then, we analyze the double-shadow behavior as a function of the parameter $\lambda$ and show that it controls the circular size of the shadow. An inspection of the innermost stable circular orbits (ISCO) shows that the radius $r_{ISCO}$ increases as a function of $\lambda$. Besides, we find that such a radius is equal to $6M$ for an angular momentum $L=2\sqrt{3}$ independently of $\lambda$. A numerical analysis shows that the photon sphere of radius $r_{p_1}$ generates a shadow with a radius larger than $r_{ISCO}$; thus, a truncation of the effective potential is imposed to exclude such behavior. Finally, the effect of $\lambda$ on the deflection angle of such a black hole is inspected, showing that the angle increases when higher values of the parameter $\lambda$ are considered. However, such an increase is limited by an upper bound given by $\frac{6M}{b}$.
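For reference, the quoted values $L=2\sqrt{3}$ and $r_{ISCO}=6M$ can be recovered in the classical limit (with the $\lambda$-corrections switched off) from the standard Schwarzschild effective potential for massive particles; a sketch of that computation, in units $G=c=1$ with $L$ measured in units of $M$:

```latex
V_{\mathrm{eff}}^{2}(r)=\left(1-\frac{2M}{r}\right)\left(1+\frac{L^{2}}{r^{2}}\right),
\qquad
\frac{d}{dr}V_{\mathrm{eff}}^{2}
=\frac{2}{r^{4}}\left(Mr^{2}-L^{2}r+3ML^{2}\right)=0
\;\Longrightarrow\;
r_{\pm}=\frac{L^{2}\pm\sqrt{L^{4}-12M^{2}L^{2}}}{2M}.
```

The two circular-orbit radii merge when $L^{2}=12M^{2}$, i.e. $L=2\sqrt{3}\,M$, giving $r_{ISCO}=L^{2}/(2M)=6M$, consistent with the $\lambda$-independent value quoted above.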
Given a commutative noetherian non-positive DG-ring with bounded cohomology which has a dualizing DG-module, we study its regular, Gorenstein and Cohen-Macaulay loci. We give a sufficient condition for the regular locus to be open, and show that the Gorenstein locus is always open. However, both of these loci are often empty: we show that no matter how nice $\mathrm{H}^0(A)$ is, there are examples where the Gorenstein locus of $A$ is empty. We then show that the Cohen-Macaulay locus of a commutative noetherian DG-ring with bounded cohomology which has a dualizing DG-module always contains a dense open set. Our results imply that under mild hypothesis, eventually coconnective locally noetherian derived schemes are generically Cohen-Macaulay, but that even in very nice cases, they need not be generically Gorenstein.
In this paper, we extend some usual classification techniques through a large-scale data-mining and network approach. This new technology, designed in particular to be suitable for big data, is used to construct an open consolidated database from raw data on 4 million patents taken from the US Patent Office from 1976 onward. To build the patent network, we look not only at each patent title but also examine the full abstract and extract the relevant keywords accordingly. We refer to this classification as the semantic approach, in contrast with the more common technological approach, which consists in taking the topology induced by the US Patent Office technological classes. Moreover, we document that the two approaches have markedly different topological measures, with strong statistical evidence that they follow different models. This suggests that our method is a useful tool to extract endogenous information.
Let $p$ be a prime, and $\mathbb{F}_p$ the field with $p$ elements. We prove that if $G$ is a mild pro-$p$ group with quadratic $\mathbb{F}_p$-cohomology algebra $H^\bullet(G,\mathbb{F}_p)$, then the algebras $H^\bullet(G,\mathbb{F}_p)$ and $\mathrm{gr}\mathbb{F}_p[\![G]\!]$ - the latter being induced by the quotients of consecutive terms of the $p$-Zassenhaus filtration of $G$ - are both Koszul, and they are quadratically dual to each other. Consequently, if the maximal pro-$p$ Galois group of a field is mild, then Positselski's and Weigel's Koszulity conjectures hold true for such a field.
We generalize the predictions for attractions between overall neutral surfaces induced by charge fluctuations/correlations to non-uniform systems that include dielectric discontinuities, as is the case for mixed charged lipid membranes in an aqueous solution. We show that the induced interactions depend in a non-trivial way on the dielectric constants of the membrane and water, and exhibit different scaling with distance depending on these properties. The generality of the calculations also allows us to predict under which dielectric conditions the interaction will change sign and become repulsive.
It has long been appreciated that transport properties can control reaction kinetics. This effect can be characterized by the time it takes a diffusing molecule to reach a target -- the first-passage time (FPT). Although essential to quantify the kinetics of reactions on all time scales, determining the FPT distribution was deemed so far intractable. Here, we calculate analytically this FPT distribution and show that transport processes as various as regular diffusion, anomalous diffusion, diffusion in disordered media and in fractals fall into the same universality classes. Beyond this theoretical aspect, this result changes the views on standard reaction kinetics. More precisely, we argue that geometry can become a key parameter so far ignored in this context, and introduce the concept of "geometry-controlled kinetics". These findings could help understand the crucial role of spatial organization of genes in transcription kinetics, and more generally the impact of geometry on diffusion-limited reactions.
The dynamics of a single hole in the t-J model on two- (2LL) and three- (3LL) leg ladders is studied using a recently developed quantum Monte Carlo algorithm. For the 2LL it is shown that in addition to the most pronounced features of the spectral function, well described by the limit of strong coupling along the rungs, a clear shadow band appears in the antibonding channel. Moreover, both the bonding band and its shadow have a finite quasiparticle (QP) weight in the thermodynamic limit. For strong coupling along the rungs of the 3LL, the low-energy spectrum in the antisymmetric channel is similar to a one-dimensional chain, whereas in the two symmetric channels it resembles the 2LL. The QP weight vanishes in the antisymmetric channel, but is finite in the symmetric one.
A hybrid stochastic-deterministic approach for computing the second-order perturbative contribution $E^{(2)}$ within multireference perturbation theory (MRPT) is presented. The idea at the heart of our hybrid scheme --- based on a reformulation of $E^{(2)}$ as a sum of elementary contributions associated with each determinant of the MR wave function --- is to split $E^{(2)}$ into a stochastic and a deterministic part. During the simulation, the stochastic part is gradually reduced by dynamically increasing the deterministic part until one reaches the desired accuracy. In sharp contrast with a purely stochastic MC scheme where the error decreases indefinitely as $t^{-1/2}$ (where $t$ is the computational time), the statistical error in our hybrid algorithm displays a polynomial decay $\sim t^{-n}$ with $n=3-4$ in the examples considered here. If desired, the calculation can be carried on until the stochastic part entirely vanishes. In that case, the exact result is obtained with no error bar and no noticeable computational overhead compared to the fully-deterministic calculation. The method is illustrated on the F$_2$ and Cr$_2$ molecules. Even for the largest case corresponding to the Cr$_2$ molecule treated with the cc-pVQZ basis set, very accurate results are obtained for $E^{(2)}$ for an active space of (28e,176o) and a MR wave function including up to $2 \times 10^7$ determinants.
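The split of a sum of per-determinant contributions into a deterministic part and a Monte Carlo remainder, as described in the abstract above, can be illustrated with a deliberately simplified sketch (the selection rule and parameters are illustrative; the actual algorithm grows the deterministic part dynamically during the run):

```python
import numpy as np

def hybrid_sum(contribs, n_det, n_samples, rng):
    """Estimate sum(contribs) by summing the n_det largest
    |contributions| exactly and sampling the rest uniformly.
    Returns (estimate, statistical error of the stochastic part)."""
    order = np.argsort(-np.abs(contribs))
    det_part = contribs[order[:n_det]].sum()   # exact, no error bar
    rest = contribs[order[n_det:]]
    if rest.size == 0:                         # fully deterministic:
        return det_part, 0.0                   # exact result, zero error
    draws = rng.choice(rest, size=n_samples)   # uniform MC on the tail
    mc_part = rest.size * draws.mean()         # unbiased tail estimate
    mc_err = rest.size * draws.std(ddof=1) / np.sqrt(n_samples)
    return det_part + mc_part, mc_err

rng = np.random.default_rng(0)
# a toy set of signed, decaying "contributions"
contribs = ((-1.0) ** np.arange(200)) / (np.arange(1, 201) ** 2)
# growing n_det shrinks the error bar; at n_det = len(contribs) the
# stochastic part vanishes and the exact sum is returned
est, err = hybrid_sum(contribs, 200, 10, rng)
```

When `n_det` covers all contributions the error bar is exactly zero, mirroring the paper's observation that the calculation can be continued until the stochastic part entirely vanishes.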
In this work we discuss a random Tug-of-War game in graphs where one of the players has the power to decide at each turn whether to play a round of classical random Tug-of-War, or let the other player choose the new game position in exchange of a fixed payoff. We prove that this game has a value using a discrete comparison principle and viscosity tools, as well as probabilistic arguments. This game is related to Jensen's extremal equations, which have a key role in Jensen's celebrated proof of uniqueness of infinity harmonic functions.
We discuss strong local and global well-posedness for the three-dimensional NLS equation with nonlinearity concentrated on $\mathbb{S}^2$. Precisely, local well-posedness is proved for any $C^2$ power-nonlinearity, while global well-posedness is obtained either for small data or in the defocusing case under some growth assumptions. With respect to point-concentrated NLS models, widely studied in the literature, here the dimension of the support of the nonlinearity does not allow a direct extension of the known techniques and calls for new ideas.
In recent years, we have witnessed the development of multi-label classification methods which utilize the structure of the label space in a divide-and-conquer approach to improve classification performance and allow large data sets to be classified efficiently. Yet most of the available data sets have been provided in train/test splits that did not account for maintaining a distribution of higher-order relationships between labels among splits or folds. We present a new approach to stratifying multi-label data for classification purposes based on the iterative stratification approach proposed by Sechidis et al. in an ECML PKDD 2011 paper. Our method extends the iterative approach to take into account second-order relationships between labels. Obtained results are evaluated using statistical properties of obtained strata as presented by Sechidis. We also propose new statistical measures relevant to second-order quality: the label pair distribution, the percentage of label pairs without positive evidence in folds, and the number of (label pair, fold) combinations that have no positive evidence for the label pair. We verify the impact of the new methods on the classification performance of Binary Relevance, Label Powerset and a fast greedy community-detection-based label space partitioning classifier. Random Forests serve as base classifiers. We check the variation of the number of communities obtained per fold, and the stability of their modularity score. Second-Order Iterative Stratification is compared to standard k-fold, label set, and iterative stratification. The proposed approach lowers the variance of classification quality, improves label-pair-oriented measures and example distribution while maintaining a competitive quality in label-oriented measures. We also witness an increase in stability of network characteristics.
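One of the second-order measures described above, the fraction of (label pair, fold) combinations with no positive evidence, can be sketched as follows (this is my own illustrative reading of the measure, not the authors' reference implementation):

```python
from itertools import combinations

def label_pairs(example_labels):
    """All unordered label pairs co-occurring in one example."""
    return set(combinations(sorted(example_labels), 2))

def pairs_without_evidence(folds):
    """Fraction of (label pair, fold) combinations in which the fold
    contains no example exhibiting that label pair."""
    all_pairs = set()
    per_fold = []
    for fold in folds:                 # fold = list of label sets
        seen = set()
        for labels in fold:
            seen |= label_pairs(labels)
        per_fold.append(seen)
        all_pairs |= seen
    if not all_pairs:
        return 0.0
    missing = sum(len(all_pairs - seen) for seen in per_fold)
    return missing / (len(all_pairs) * len(folds))

# two folds: the pair (2, 3) has no positive evidence in the second fold,
# so 1 of the 4 (pair, fold) combinations lacks evidence
folds = [[{1, 2}, {2, 3}], [{1, 2}]]
```

A stratifier that accounts for second-order label relationships aims to drive this fraction down relative to plain k-fold splitting.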
With the help of recent adjacent dyadic constructions by Hyt\"onen and the author, we give an alternative proof of results of Lechner, M\"uller and Passenbrunner about the $L^p$-boundedness of shift operators acting on functions $f \in L^p(X;E)$ where $1 < p < \infty$, $X$ is a metric space and $E$ is a UMD space.
Maintaining the preserved supersymmetry helps to find the effective Lagrangian on the BPS background in gauge theories with eight supercharges. As concrete examples, we take 1/2 BPS domain walls. The Lagrangian is given in terms of the superfields with manifest four preserved supercharges and is expanded in powers of the slow-movement parameter $\lambda$. The $O(\lambda^0)$ order gives the superfield form of the BPS equations, whereas all the fluctuation fields follow at $O(\lambda^1)$. The effective Lagrangian is given by the density of the K\"ahler potential, which emerges automatically from the $\lambda$ expansion, making four preserved supercharges manifest. A more complete account of our method and applications is given in hep-th/0602289, in which the case of non-Abelian vortices is also worked out.
We consider the scaling limit of the two-dimensional $q$-state Potts model for $q\leq 4$. We use the exact scattering theory proposed by Chim and Zamolodchikov to determine the one and two-kink form factors of the energy, order and disorder operators in the model. Correlation functions and universal combinations of critical amplitudes are then computed within the two-kink approximation in the form factor approach. Very good agreement is found whenever comparison with exact results is possible. We finally consider the limit $q\to 1$ which is related to the isotropic percolation problem. Although this case presents a serious technical difficulty, we predict a value close to 74 for the ratio of the mean cluster size amplitudes above and below the percolation threshold. Previous estimates for this quantity range from 14 to 220.
Deformation quantization of Poisson manifolds is studied within the framework of an expansion in powers of derivatives of Poisson structures. We construct the Lie group associated with a Poisson bracket algebra which defines a second order deformation in the derivative expansion.
We present a relationship between spiral arm pitch angle (a measure of the tightness of spiral structure) and the mass of supermassive black holes (BHs) in the nuclei of disk galaxies. We argue that this relationship is expected through a combination of other relationships, whose existence has already been demonstrated. The recent discovery of AGN in bulgeless disk galaxies suggests that halo concentration or virial mass may be one of the determining factors in BH mass. Taken together with the result that mass concentration seems to determine spiral arm pitch angle, one would expect a relation to exist between spiral arm pitch angle and supermassive BH mass in disk galaxies, and we find that this is indeed the case. We conclude that this relationship may be important for estimating evolution in BH masses in disk galaxies out to intermediate redshifts, since regular spiral arm structure can be seen in galaxies out to z~1.
An increasingly common expression of online hate speech is multimodal in nature and comes in the form of memes. Designing systems to automatically detect hateful content is of paramount importance if we are to mitigate its undesirable effects on the society at large. The detection of multimodal hate speech is an intrinsically difficult and open problem: memes convey a message using both images and text and, hence, require multimodal reasoning and joint visual and language understanding. In this work, we seek to advance this line of research and develop a multimodal framework for the detection of hateful memes. We improve the performance of existing multimodal approaches beyond simple fine-tuning and, among others, show the effectiveness of upsampling of contrastive examples to encourage multimodality and ensemble learning based on cross-validation to improve robustness. We furthermore analyze model misclassifications and discuss a number of hypothesis-driven augmentations and their effects on performance, presenting important implications for future research in the field. Our best approach comprises an ensemble of UNITER-based models and achieves an AUROC score of 80.53, placing us 4th on phase 2 of the 2020 Hateful Memes Challenge organized by Facebook.
Integrated photonic circuits are an integral part of all-optical, on-chip quantum information processing and quantum computing. Deterministically integrating single-photon sources in nanoplasmonic circuits could lead to densely packed, scalable quantum logic circuits operating beyond the diffraction limit. Here, we report the coupling efficiency of single-photon sources to the plasmonic waveguide, the characteristic transmission spectrum, the propagation length, the decay length, and the plasmonic Purcell factor. We simulated the transmission spectrum to find the appropriate wavelength for various widths of the dielectric in the metal-dielectric-metal waveguide. We find a maximum propagation length of 3.98 $\mu$m for an Al$_{2}$O$_{3}$ dielectric width of 140 nm and a coupling efficiency greater than 82\%. The plasmonic Purcell factor was found to be inversely proportional to the dielectric width (w), reaching as high as 31974 for w equal to 1 nm. We also calculated quantum properties of the photons, such as indistinguishability, and found that it can be enhanced by a plasmonic nanocavity if the single-photon sources are deterministically coupled. We further propose scalable quantum logic circuits based on the metal-dielectric-metal plasmonic waveguide and a Mach-Zehnder interferometer.
Recent breakthroughs in artificial intelligence offer tremendous promise for the development of self-driving applications. Deep Neural Networks, in particular, are being utilized to support the operation of semi-autonomous cars through object identification and semantic segmentation. To assess the inadequacy of existing datasets in the context of autonomous and semi-autonomous cars, we created a new dataset named ANNA. This study discusses a custom-built dataset that includes some unidentified vehicles from the perspective of Bangladesh, which are not included in existing datasets. A dataset validity check was performed by evaluating models using the Intersection over Union (IoU) metric. The results demonstrated that the model trained on our custom dataset was more precise and efficient than models trained on the KITTI or COCO datasets with respect to Bangladeshi traffic. The research presented in this paper also emphasizes the importance of developing accurate and efficient object detection algorithms for the advancement of autonomous vehicles.
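As a hedged sketch of the validity check's metric only (not the authors' code), IoU for axis-aligned bounding boxes reduces to a few lines:

```python
def iou(box_a, box_b):
    """Intersection over Union for two boxes given as (x1, y1, x2, y2)."""
    # intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # union = sum of areas minus the overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is conventionally counted as correct when its IoU with a ground-truth box exceeds a chosen threshold (often 0.5).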
This research establishes a better understanding of the syntax choices in speech interactions and of how speech, gesture, and multimodal gesture-and-speech interactions are produced by users in unconstrained object manipulation environments using augmented reality. The work presents a multimodal elicitation study conducted with 24 participants. The canonical referents for translation, rotation, and scale were used along with some abstract referents (create, destroy, and select). In this study, time windows for gesture and speech multimodal interactions are developed using the start and stop times of gestures and speech as well as the stroke times for gestures. While gestures commonly precede speech by 81 ms, we find that the stroke of the gesture is commonly within 10 ms of the start of speech, indicating that the information content of a gesture and its co-occurring speech are well aligned with each other. Lastly, the trends across the most common proposals for each modality are examined, showing that disagreement between proposals is often caused by a variation of hand posture or syntax. This allows us to present aliasing recommendations to increase the percentage of users' natural interactions captured by future multimodal interactive systems.
The fishbone-induced transport of alpha particles is computed for the ITER 15 MA baseline scenario, using the nonlinear hybrid Kinetic-MHD code XTOR-K. Two limit cases have been studied in order to analyse the characteristic regimes of the fishbone instability: the weak kinetic drive limit and the strong kinetic drive limit. In both regimes, characteristic features of the n = m = 1 fishbone instability are recovered, such as a strong up/down-chirping of the mode frequency, associated with a resonant transport of trapped and passing alpha particles. The effects of the n = m = 0 sheared poloidal and toroidal plasma rotation are taken into account in the simulations. The shear is not negligible, which implies that the fishbone mode frequency has a radial dependency, impacting the wave-particle resonance condition. Phase space hole and clump structures are observed in both nonlinear regimes, centered around the precessional and passing resonances. These structures remain attached to the resonances as the different mode frequencies chirp up and down. In the nonlinear phase, the transport of individual resonant trapped particles is identified to be linked to mode-particle synchronization. On this basis, a partial mechanism for the nonlinear coupling between particle transport and the dominant down-chirping of the mode is proposed. The overall transport of alpha particles from inside the q = 1 surface outwards is of order 2-5% of the initial population across the simulations. The loss of alpha power is found to be directly equal to the loss of alpha particles.
We experimentally observed an accumulative type of nonlinear attenuation and distortion of slow light, i.e., Rydberg polaritons, with the Rydberg state $|32D_{5/2}\rangle$ in the weak-interaction regime. The present effect of attenuation and distortion cannot be explained by considering only the dipole-dipole interaction (DDI) between Rydberg atoms in $|32D_{5/2}\rangle$. Our observation can be attributed to the atoms in the dark Rydberg states other than those in the bright Rydberg state, i.e., $|32D_{5/2}\rangle$, driven by the coupling field. The dark Rydberg states are all the possible states in which the population decaying from $|32D_{5/2}\rangle$ accumulated over time, and they were not driven by the coupling field. Consequently, the DDI between the dark and bright Rydberg atoms increased the decoherence rate of the Rydberg polaritons. We performed three different experiments to verify the above hypothesis, to confirm the existence of the dark Rydberg states, and to measure the decay rate from the bright to dark Rydberg states. In the theoretical model, we included the decay process from the bright to dark Rydberg states and the DDI effect induced by both the bright and dark Rydberg atoms. All the experimental data of slow light taken at various probe Rabi frequencies were in good agreement with the theoretical predictions based on the model. This study points out an additional decoherence rate in the Rydberg-EIT effect and provides a better understanding of the Rydberg-polariton system.
The effect of Co substitution on Mn$_3$Ga is investigated using a first-principles study of structural and magnetic properties. Without Co, the ferrimagnetic Heusler compound Mn$_3$Ga is in the tetragonal phase. With Co substitution, depending on the Co concentration ($x$), Mn$_3$Ga prefers the tetragonal (cubic) phase when $x \leq 0.5$ ($x \geq 0.5$). Ferrimagnetism is robust regardless of $x$ in both phases. While the magnetic moments of the two Mn do not vary significantly with $x$, the Co magnetic moment in the two phases exhibits different behaviors, leading to distinct features in the total magnetic moment ($M_{\rm tot}$). When $x \leq 0.5$, in the tetragonal phase, the Co magnetic moment is vanishingly small, resulting in a decrease of $M_{\rm tot}$ with $x$. In contrast, when $x \geq 0.5$, in the cubic phase, the Co magnetic moment is roughly 1$\mu_B$, which is responsible for an increase of $M_{\rm tot}$. The electronic structure is analyzed with partial densities of states for various $x$. To elucidate the counterintuitively small Co moment, the magnetic exchange interaction is investigated; the exchange coefficient between Co and Mn is found to be much smaller in the $x \leq 0.5$ case than in the $x \geq 0.5$ one.
We derive a residual a posteriori estimator for the Kirchhoff plate bending problem. We consider the problem with a combination of clamped, simply supported and free boundary conditions subject to both distributed and concentrated (point and line) loads. Extensive numerical computations are presented to verify the functionality of the estimators.
By considering Higgs modes within the Ginzburg-Landau framework, we study the influence of a rotated magnetic field on the color-flavor-locked-type matter of dense QCD. We demonstrate, in a model-independent way, that a diquark condensate may be triggered by the magnetic response of rotated-charged Higgs modes, in addition to the known color-flavor-locked condensate. Moreover, this condensate is used to explore the formation of vortices in the presence of external magnetic fields. Superfluid-like vortices are constructed for the magnetically induced condensate. In the situation including both kinds of condensates, the theoretical possibility of vortons is suggested, and the formation condition and energy stability are investigated semi-classically.
Simulations of an isolated Milky Way-like galaxy, in which supernovae power a galactic fountain, reproduce the observed velocity and 21cm brightness statistics of galactic neutral hydrogen (HI). The simulated galaxy consists of a thin HI disk, similar in extent and brightness to that observed in the Milky Way, and extra-planar neutral gas at a range of velocities due to the galactic fountain. Mock observations of the neutral gas resemble the HI flux measurements from the Leiden-Argentine-Bonn (LAB) HI survey, including a high-velocity tail which matches well with observations of high-velocity clouds. The simulated high-velocity clouds are typically found close to the galactic disk, with a typical line-of-sight distance of 13 kpc from observers on the solar circle. The fountain efficiently cycles matter from the centre of the galaxy to its outskirts at a rate of around 0.5 M_sun/yr.
We observe electromagnetically induced transparency (EIT) in Rb vapor at various optical intensities, ranging from below saturation to several times the saturation intensity. The observed Lorentzian width of the EIT signal is very small. Solving the time-dependent density matrix equation of motion with a phenomenological decay constant, we find an expression suitable for explaining the EIT signal. In both the experimental observations and the theoretical analysis, the intensity of the EIT signal and its Lorentzian width increase with the Rabi frequency of the optical field.
We analyze nonlinear collective effects near surfaces of semi-infinite periodic systems with multi-gap transmission spectra and introduce a novel concept of multi-gap surface solitons as mutually trapped surface states with the components associated with different spectral gaps. We find numerically discrete surface modes in semi-infinite binary waveguide arrays which can support simultaneously two types of discrete solitons, and analyze different multi-gap states including the soliton-induced waveguides with the guided modes from different gaps and composite vector solitons.
Learning from imbalanced data is among the most challenging areas in contemporary machine learning. This becomes even more difficult when considering the context of big data, which calls for dedicated architectures capable of high-performance processing. Apache Spark is a highly efficient and popular architecture, but it poses specific challenges for algorithms implemented on it. While oversampling algorithms are an effective way of handling class imbalance, they have not been designed for distributed environments. In this paper, we take a holistic look at oversampling algorithms for imbalanced big data. We discuss the taxonomy of oversampling algorithms and the mechanisms they use to handle skewed class distributions. We introduce a Spark library with 14 state-of-the-art oversampling algorithms implemented and evaluate their efficacy via an extensive experimental study. Using binary and multi-class massive data sets, we analyze the effectiveness of oversampling algorithms and their relationships with different types of classifiers. We evaluate the trade-off between the accuracy and time complexity of oversampling algorithms, as well as their scalability with increasing data size. This allows us to gain insight into the usefulness of specific components of oversampling algorithms for big data, as well as to formulate guidelines and recommendations for designing future resampling approaches for massive imbalanced data. Our library can be downloaded from https://github.com/fsleeman/spark-class-balancing.git.
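None of the library's 14 algorithms is reproduced here, but the simplest member of the family, random oversampling, conveys what such methods do: duplicate minority-class samples until the class counts match. This is a minimal single-machine sketch, not the Spark implementation:

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples (chosen at random) until every class
    reaches the majority-class count. Returns new feature and label lists."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    X_out, y_out = list(X), list(y)
    for cls, n in counts.items():
        idx = [i for i, label in enumerate(y) if label == cls]
        for _ in range(target - n):
            i = rng.choice(idx)       # resample with replacement
            X_out.append(X[i])
            y_out.append(cls)
    return X_out, y_out
```

More sophisticated methods such as SMOTE interpolate between minority neighbors rather than duplicating, which is where the taxonomy discussed above begins to branch.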
Planet formation is thought to occur in discs around young stars by the aggregation of small dust grains into much larger objects. The growth from grains to pebbles and from planetesimals to planets is now fairly well understood. The intermediate stage has however been found to be hindered by the radial-drift and fragmentation barriers. We identify a powerful mechanism in which dust overcomes both barriers. Its key ingredients are i) backreaction from the dust onto the gas, ii) grain growth and fragmentation, and iii) large-scale gradients. The pile-up of growing and fragmenting grains modifies the gas structure on large scales and triggers the formation of pressure maxima, in which particles are trapped. We show that these self-induced dust traps are robust: they develop for a wide range of disc structures, fragmentation thresholds and initial dust-to-gas ratios. They are favored locations for pebbles to grow into planetesimals, thus opening new paths towards the formation of planets.
We investigate the behaviour of dark energy using the recently released supernova data of Riess et al. (2004) and a model-independent parameterization for dark energy (DE). We find that, if no priors are imposed on $\Omega_{0m}$ and $h$, DE which evolves with time provides a better fit to the SNe data than $\Lambda$CDM. This is also true if we include results from the WMAP CMB data. From a joint analysis of SNe+CMB, the best-fit DE model has $w_0 < -1$ at the present epoch and the transition from deceleration to acceleration occurs at $z_T = 0.39 \pm 0.03$. However, DE evolution becomes weaker if the $\Lambda$CDM based CMB results $\Omega_{0m} = 0.27 \pm 0.04$, $h = 0.71 \pm 0.06$ are incorporated in the analysis. In this case, $z_T = 0.57 \pm 0.07$. Our results also show that the extent of DE evolution is sensitive to the manner in which the supernova data are sampled.
When thousands of processors are involved in performing event filtering on a trigger farm, there is likely to be a large number of failures within the software and hardware systems. BTeV, a proton/antiproton collider experiment at Fermi National Accelerator Laboratory, has designed a trigger that includes several thousand processors. If fault conditions are not given proper treatment, it is conceivable that this trigger system will experience failures at a high enough rate to have a negative impact on its effectiveness. The RTES (Real Time Embedded Systems) collaboration is a group of physicists, engineers, and computer scientists working to address the problem of reliability in large-scale clusters with real-time constraints such as this. The resulting infrastructure must be highly scalable, verifiable, extensible by users, and dynamically changeable.
We introduce a simulation scheme for Brownian semistationary processes, which is based on discretizing the stochastic integral representation of the process in the time domain. We assume that the kernel function of the process is regularly varying at zero. The novel feature of the scheme is to approximate the kernel function by a power function near zero and by a step function elsewhere. The resulting approximation of the process is a combination of Wiener integrals of the power function and a Riemann sum, which is why we call this method a hybrid scheme. Our main theoretical result describes the asymptotics of the mean square error of the hybrid scheme and we observe that the scheme leads to a substantial improvement of accuracy compared to the ordinary forward Riemann-sum scheme, while having the same computational complexity. We exemplify the use of the hybrid scheme by two numerical experiments, where we examine the finite-sample properties of an estimator of the roughness parameter of a Brownian semistationary process and study Monte Carlo option pricing in the rough Bergomi model of Bayer et al. [Quant. Finance 16(6), 887-904, 2016], respectively.
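For orientation only, here is a plain forward Riemann-sum discretization of a toy stochastic integral with a power-law kernel g(u) = u^alpha, i.e. the ordinary scheme that the hybrid method improves upon; the function name and setup are illustrative, not the paper's construction:

```python
import random, math

def simulate_bss_riemann(alpha, n, T, seed=0):
    """Forward Riemann-sum approximation of X(t) = int_0^t g(t - s) dW(s)
    with the power-law kernel g(u) = u**alpha, on a grid of n steps up to T."""
    rng = random.Random(seed)
    dt = T / n
    # Brownian increments over each grid cell
    dW = [rng.gauss(0.0, math.sqrt(dt)) for _ in range(n)]
    X = [0.0] * (n + 1)
    for i in range(1, n + 1):
        # kernel evaluated at the left endpoint of each increment interval
        X[i] = sum(((i - j) * dt) ** alpha * dW[j] for j in range(i))
    return X
```

The hybrid scheme's insight is that this plain sum is least accurate precisely where g is regularly varying near zero, so replacing those first few terms with exact Wiener integrals of the power function recovers most of the lost accuracy at the same cost.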
The bosonic IIB matrix model is studied using a numerical method. This model contains the bosonic part of the IIB matrix model conjectured to be a non-perturbative definition of type IIB superstring theory. The large N scaling behavior of the model is shown by performing a Monte Carlo simulation. The expectation value of the Wilson loop operator is measured and the string tension is estimated. The numerical results support the prescription of the double scaling limit.
Language model pre-training has proven to be useful in many language understanding tasks. In this paper, we investigate whether it is still helpful to add self-training in the pre-training and fine-tuning steps. Towards this goal, we propose a learning framework that makes the best use of unlabeled data in both low-resource and high-resource labeled settings. In industry NLP applications, we have large amounts of data produced by users or customers, and our learning framework builds on this large amount of unlabeled data. First, we use the model fine-tuned on the manually labeled dataset to predict pseudo labels for the user-generated unlabeled data. Then we use the pseudo labels to supervise task-specific training on the large amount of user-generated data. We treat this task-specific training on pseudo labels as a pre-training step for the subsequent fine-tuning step. Finally, we fine-tune on the manually labeled dataset upon the pre-trained model. We first empirically show that our method solidly improves performance by 3.6% when the manually labeled fine-tuning dataset is relatively small, and then show that it still improves performance by a further 0.2% when the manually labeled fine-tuning dataset is relatively large. We argue that our method makes the best use of the unlabeled data and is superior to either pre-training or self-training alone.
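The staged recipe described above can be sketched schematically; `fit` and `predict` are placeholders for the actual fine-tuning and inference routines (hypothetical names, not the paper's code):

```python
def pseudo_label_pipeline(model, labeled, unlabeled, fit, predict):
    """Schematic of: fine-tune -> pseudo-label -> task-specific pre-train -> fine-tune."""
    model = fit(model, labeled)                           # 1) fine-tune on manual labels
    pseudo = [(x, predict(model, x)) for x in unlabeled]  # 2) pseudo-label the unlabeled pool
    model = fit(model, pseudo)                            # 3) pre-train on pseudo labels
    return fit(model, labeled)                            # 4) final fine-tune on manual labels
```

With toy stand-ins (e.g. a "model" that merely records the size of each dataset it sees), the call order of the four stages is easy to verify.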
The starting point of this work is a set of inaccurate statements found in the literature on Multi-terminal Binary Decision Diagrams (MTBDDs) regarding the well-definedness of the MTBDD abstraction operation. The statements try to relate an operation * on a set of terminal values M to the property that abstraction over this operation does not depend on the order of the abstracted variables. This paper gives a necessary and sufficient condition for the independence of the abstraction operation from the order of the abstracted variables in the case of an underlying monoid, and it treats the more general setting of a magma.
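A toy illustration of why such a condition is needed (the helper names are ours, not from the paper): abstracting all variables of a two-variable Boolean-input function by folding an operation over its cofactors is order-independent for a commutative monoid operation such as +, but can depend on the variable order for a general magma operation:

```python
def abstract_var(f, op, var):
    """Abstract one Boolean variable: combine the two cofactors of f with op."""
    return lambda env: op(f({**env, var: 0}), f({**env, var: 1}))

def abstract_all(f, op, order):
    """Fold the abstraction over the variables in the given order."""
    for v in order:
        f = abstract_var(f, op, v)
    return f({})

f = lambda env: env["x"] + 2 * env["y"]   # terminal values 0, 1, 2, 3

add = lambda a, b: a + b          # commutative, associative: order-independent
magma = lambda a, b: 2 ** a + b   # neither: order-dependent
```

For `add`, both variable orders yield 0 + 1 + 2 + 3 = 6, while for `magma` the orders [x, y] and [y, x] produce different results, illustrating the failure of well-definedness outside the monoid case.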
In this letter we follow up recent work of Halverson-Kumar-Morrison on some exotic examples of gauged linear sigma models (GLSM's). Specifically, they describe a set of U(1) x Z_2 GLSM's with superpotentials that are quadratic in p fields, rather than linear as is typically the case. These theories RG flow to sigma models on branched double covers, where the double cover is realized via a Z_2 gerbe. For that gerbe structure, and hence the double cover, the Z_2 factor in the gauge group is essential. In this letter we propose an analogous geometric understanding of phases without that Z_2, in terms of Ricci-flat (but not Calabi-Yau) stacks which look like Fano manifolds with hypersurfaces of Z_2 orbifolds.
Uncertainty quantification is a critical yet unsolved challenge for deep learning, especially for the time series imputation with irregularly sampled measurements. To tackle this problem, we propose a novel framework based on the principles of recurrent neural networks and neural stochastic differential equations for reconciling irregularly sampled measurements. We impute measurements at any arbitrary timescale and quantify the uncertainty in the imputations in a principled manner. Specifically, we derive analytical expressions for quantifying and propagating the epistemic and aleatoric uncertainty across time instants. Our experiments on the IEEE 37 bus test distribution system reveal that our framework can outperform state-of-the-art uncertainty quantification approaches for time-series data imputations.
Dix formulated the inverse problem of recovering an elastic body from the measurements of wave fronts of point scatterers. We geometrize this problem in the framework of linear elasticity, leading to the geometrical inverse problem of recovering a Finsler manifold from certain sphere data in a given open subset of the manifold. We solve this problem locally along any geodesic through the measurement set.
Motivated by the nonlinear Hall effect observed in topological semimetals, we study the photocurrent using the quantum kinetic equation. We recover the shift current and injection current discovered by Sipe et al., and the nonlinear Hall current induced by the Berry curvature dipole (BCD) proposed by Inti Sodemann and Liang Fu. In particular, we further propose that a 3-form tensor can also induce a photocurrent, in addition to the Berry curvature and the BCD. This work supplements the existing mechanisms for photocurrent generation. In contrast to the shift current induced by the shift vector, all photocurrents induced by the gradient/curl of the Berry curvature and the higher-rank tensor require circularly polarized light and a topologically non-trivial band structure, viz. a non-vanishing Berry curvature.
We show how the event-based notation offered by Event-B may be augmented by algorithmic modelling constructs without disrupting the refinement-based development process.
We develop and thoroughly test a stress-controlled, parallel plates shear cell that can be coupled to an optical microscope or a small angle light scattering setup, for simultaneous investigation of the rheological properties and the microscopic structure of soft materials under an imposed shear stress. In order to minimize friction, the cell is based on an air bearing linear stage, the stress is applied through a contactless magnetic actuator, and the strain is measured through optical sensors. We discuss the contributions of inertia and of the small residual friction to the measured signal and demonstrate the performance of our device in both oscillating and step stress experiments on a variety of viscoelastic materials.
M-theory has different global supersymmetry structures in its various dual incarnations, as characterized by the M-algebra in 11D, the type IIA, type-IIB, heterotic, type-I extended supersymmetries in 10D, and non-Abelian supersymmetries in the AdS_n x S^m backgrounds. We show that all of these supersymmetries are unified within the supersymmetry OSp(1/64), thus hinting that the overall global spacetime symmetry of M-theory is OSp(1/64). We suggest that the larger symmetries contained within OSp(1/64) which go beyond the familiar symmetries, are non-linearly realized hidden symmetries of M-theory. These can be made manifest by lifting 11D M-theory to the formalism of two-time physics in 13D by adding gauge degrees of freedom. We illustrate this idea by constructing a toy M-model on the worldline in 13D with manifest OSp(1/64) global supersymmetry, and a number of new local symmetries that remove ghosts. Some of the local symmetries are bosonic cousins of kappa supersymmetries. The model contains 0-superbrane and p-forms (for p=3,6) as degrees of freedom. The gauge symmetries can be fixed in various ways to come down to a one time physics model in 11D, 10D, AdS_n x S^m, etc., such that the linearly realized part of OSp(1/64) is the global symmetry of the various dual sectors of M-theory.
In this talk I presented and discussed our unexpected detection of water vapor in the disk-averaged spectrum of the K2IIIp red giant Arcturus [for details, see Ryde et al. (2002)]. Arcturus, or alpha Bootes, is, with its effective temperature of 4300 K, the hottest star yet to show water vapor features. We argue that the water vapor is photospheric and that its detection provides us with new insights into the outer parts of the photosphere. We are not able to model the water vapor with a standard, one-component, 1D, radiative-equilibrium, LTE model photosphere, which probably means we are lacking essential physics in such models. However, we are able to model several OH lines of different excitation and the water-vapor lines satisfactorily after lowering the temperature structure of the very outer parts of the photosphere at log tau_500=-3.8 and beyond, compared to a flux-constant, hydrostatic, standard marcs model photosphere. Our new semi-empirical model is consistently calculated from the given temperature structure. I discuss some possible reasons for a temperature decrease in the outermost parts of the photosphere and the assumed break-down of the assumptions made in classical model-atmosphere codes. In order to understand the outer photospheres of these objects properly, we will most likely need 3D hydrodynamical models of red giants, also taking into account full non-LTE and including time-dependent effects of, for example, acoustic wave heating sensitive to thermal instabilities.
We explore the weak-strong-coupling Bose-Fermi duality in a model of a single-channel integer or fractional quantum Hall edge state with a finite-range interaction. The system is described by a chiral Luttinger liquid with non-linear dispersion of bosonic and fermionic excitations. We use bosonization, a unitary transformation, and refermionization to map the system onto that of weakly interacting fermions at low temperature $T$ or weakly interacting bosons at high $T$. We calculate the equilibration rate, which is found to scale with temperature as $T^5$ and $T^{14}$ in the high-temperature ("bosonic") and the low-temperature ("fermionic") regimes, respectively. The relaxation rate of a hot particle with momentum $k$ in the fermionic regime scales as $k^7T^7$.
NGC 1333 IRAS 4A and IRAS 4B are among the best studied Stage 0 low-mass protostars and are driving prominent bipolar outflows. Most studies have so far concentrated on the colder parts (T<30K) of these regions. The aim is to characterize the warmer parts of the protostellar envelope in order to quantify the feedback of the protostars on their surroundings in terms of shocks, UV heating, photodissociation and outflow dispersal. Fully sampled large scale maps of the region were obtained; APEX-CHAMP+ was used for 12CO 6-5, 13CO 6-5 and [CI] 2-1, and JCMT-HARP-B for 12CO 3-2 emission. Complementary Herschel-HIFI and ground-based lines of CO and its isotopologs, from 1-0 up to 10-9 (Eu/k ~ 300 K), are collected at the source positions. Radiative-transfer models of the dust and lines are used to determine temperatures and masses of the outflowing and UV-heated gas and infer the CO abundance structure. Broad CO emission line profiles trace entrained shocked gas along the outflow walls, with typical temperatures of ~100 K. At other positions surrounding the outflow and the protostar, the 6-5 line profiles are narrow, indicating UV excitation. The narrow 13CO 6-5 data directly reveal the UV-heated gas distribution for the first time. The amounts of UV-heated and outflowing gas are found to be comparable from the 12CO and 13CO 6-5 maps, implying that UV photons can affect the gas as much as the outflows. Weak [CI] emission throughout the region indicates a lack of CO dissociating photons. Modeling of the C18O lines indicates the necessity of a "drop" abundance profile throughout the envelope, where the CO freezes out and is later released back into the gas phase, thus providing quantitative evidence for the CO ice evaporation zone around the protostars. The inner abundances are less than the canonical value of CO/H_2=2.7x10^-4, indicating some processing of CO into other species on the grains.
A five-dimensional Kaluza-Klein space-time is considered in the presence of a perfect fluid source with variable $G$ and $\Lambda$. An expanding universe is found by using a relation between the metric potential and an equation of state. The gravitational constant is found to decrease with time as $G \sim t^{-(1-\omega)}$, whereas the cosmological constant varies as $\Lambda \sim t^{-2}$, $\Lambda \sim (\dot R/R)^2$ and $\Lambda \sim \ddot R/R$, where $\omega$ is the equation of state parameter and $R$ is the scale factor.
The asymmetry parameter alpha_p^NM for a proton exclusively emitted in the Lambda p -> np process was, for the first time, measured in the non-mesonic weak decay of a polarized 5_Lambda_He hypernucleus by selecting the proton-neutron pairs emitted in back-to-back kinematics. The highly polarized 5_Lambda_He was abundantly produced with the (pi+,K+) reaction at 1.05 GeV/c in the scattering angular range of +-15 degrees. The obtained value alpha_p^NM=0.31+-0.22, as well as that for inclusive protons, alpha_p^NM=0.11+-0.08+-0.04, largely contradicts recent theoretical values of around -0.6, although these calculations reproduce well the branching ratios of non-mesonic weak decay.
We establish a Karhunen-Loève expansion for generic centered, second order stochastic processes, which does not rely on topological assumptions. We further investigate in which norms the expansion converges and derive exact average rates of convergence for these norms. For Gaussian processes we additionally prove certain sharpness results in terms of the norm. Moreover, we show that the generic Karhunen-Loève expansion can in some situations be used to construct reproducing kernel Hilbert spaces (RKHSs) containing the paths of a version of the process. As applications of the general theory, we compare the smoothness of the paths with the smoothness of the functions contained in the RKHS of the covariance function and discuss some small ball probabilities. Key tools for our results are a recently shown generalization of Mercer's theorem, spectral properties of the covariance integral operator, interpolation spaces of the real method, and for the smoothness results, entropy numbers of embeddings between classical function spaces.
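In the finite-dimensional (discretized) setting, the Karhunen-Loève expansion reduces to the spectral decomposition of the covariance matrix; the following sketch (our illustration, using Brownian motion as a concrete example, not the paper's construction) samples paths as sums of eigenfunctions weighted by independent Gaussians:

```python
import numpy as np

def discrete_kl(cov, n_paths, seed=0):
    """Sample paths from a covariance matrix via its spectral decomposition,
    i.e. the discrete Karhunen-Loève expansion X = sum_k sqrt(l_k) xi_k phi_k."""
    vals, vecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    vals = np.clip(vals, 0.0, None)        # guard against tiny negative round-off
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal((n_paths, len(vals)))
    return (xi * np.sqrt(vals)) @ vecs.T   # one sampled path per row

# Example: Brownian motion, covariance K(s, t) = min(s, t), on a time grid
t = np.linspace(0.01, 1.0, 50)
cov = np.minimum.outer(t, t)
paths = discrete_kl(cov, 5000)
```

By construction the sampled paths have covariance sum_k l_k phi_k phi_k^T = cov, which is the discrete analogue of the expansion's defining property.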
We introduce a simplified protein model where the water degrees of freedom appear explicitly (although in an extremely simplified fashion). Using this model we are able to recover both the warm and the cold protein denaturation within a single framework, while addressing important issues about the structure of model proteins.
This paper is a follow-up of our recent papers \cite{CS08} and \cite{CS09} covering the two-particle Anderson model. Here we establish the phenomenon of Anderson localisation for a quantum $N$-particle system on a lattice $\Z^d$ with short-range interaction and in the presence of an IID external potential with sufficiently regular marginal cumulative distribution function (CDF). Our main method is an adaptation of the multi-scale analysis (MSA; cf. \cite{FS}, \cite{FMSS}, \cite{DK}) to multi-particle systems, in combination with an induction on the number of particles, as was proposed in our earlier manuscript \cite{CS07}. Similar results have been recently obtained in an independent work by Aizenman and Warzel \cite{AW08}: they proposed an extension of the Fractional-Moment Method (FMM) developed earlier for single-particle models in \cite{AM93} and \cite{ASFH01} (see also references therein), which is also combined with an induction on the number of particles. An important role in our proof is played by a variant of Stollmann's eigenvalue concentration bound (cf. \cite{St00}). This result, as was proved earlier in \cite{C08}, admits a straightforward extension covering the case of multi-particle systems with correlated external random potentials: a subject of our future work. We also stress that the scheme of our proof is \textit{not} specific to lattice systems, since our main method, the MSA, admits a continuous version. A proof of multi-particle Anderson localisation in continuous interacting systems with various types of external random potentials will be published in a separate paper.
Among the 'beyond Li-ion' battery chemistries, nonaqueous Li-O$_2$ batteries have the highest theoretical specific energy and as a result have attracted significant research attention over the past decade. A critical scientific challenge facing nonaqueous Li-O$_2$ batteries is the electronically insulating nature of the primary discharge product, lithium peroxide, which passivates the battery cathode as it is formed, leading to low ultimate cell capacities. Recently, strategies to enhance solubility to circumvent this issue have been reported, but rely upon electrolyte formulations that further decrease the overall electrochemical stability of the system, thereby deleteriously affecting battery rechargeability. In this study, we report that a significant enhancement (greater than four-fold) in Li-O$_2$ cell capacity is possible by appropriately selecting the salt anion in the electrolyte solution. Using $^7$Li nuclear magnetic resonance and modeling, we confirm that this improvement is a result of enhanced Li$^+$ stability in solution, which in turn induces solubility of the intermediate to Li$_2$O$_2$ formation. Using this strategy, the challenging task of identifying an electrolyte solvent that possesses the anti-correlated properties of high intermediate solubility and solvent stability is alleviated, potentially providing a pathway to develop an electrolyte that affords both high capacity and rechargeability. We believe the model and strategy presented here will be generally useful to enhance Coulombic efficiency in many electrochemical systems (e.g. Li-S batteries) where improving intermediate stability in solution could induce desired mechanisms of product formation.
We have evolved a series of thirteen complete solar models that utilize different assumed heavy element compositions. Models that are based upon the heavy element abundances recently determined by Asplund, Grevesse, and Sauval (2005) are inconsistent with helioseismological measurements. However, models in which the neon abundance is increased by 0.4-0.5 dex to log N(Ne) = 8.29 ± 0.05 (on the scale in which log N(H) = 12) are consistent with the helioseismological measurements even though the other heavy element abundances are in agreement with the determinations of Asplund et al. (2005). These results sharpen and strengthen an earlier study by Antia and Basu (2005). The predicted solar neutrino fluxes are affected by the uncertainties in the composition by less than their 1σ theoretical uncertainties.
Diblock copolymers blended with homopolymer may self-assemble into spherical, cylindrical or lamellar aggregates. Transitions between these structures may be driven by varying the homopolymer molecular weight or the molecular weight or composition of the diblock. Using self-consistent field theory (SCFT), we reproduce these effects. Our results are compared with X-ray scattering and transmission electron microscopy measurements by Kinning, Winey and Thomas and good agreement is found, although the tendency to form cylindrical and lamellar structures is sometimes overestimated due to our neglect of edge effects due to the finite size of these aggregates. Our results demonstrate that self-consistent field theory can provide detailed information on the self-assembly of isolated block copolymer aggregates.
Protoclusters are the progenitors of massive galaxy clusters. Understanding the properties of these structures is important for building a complete picture of cluster formation and for understanding the impact of environment on galaxy evolution. Future cosmic microwave background (CMB) surveys may provide insight into the properties of protoclusters via observations of the thermal Sunyaev Zel'dovich (SZ) effect and gravitational lensing. Using realistic hydrodynamical simulations of protoclusters from the Three Hundred Project, we forecast the ability of CMB Stage 4-like (CMB-S4) experiments to detect and characterize protoclusters with observations of these two signals. For protoclusters that are the progenitors of clusters at $z = 0$ with $M_{200c} \gtrsim 10^{15}\,M_{\odot}$ we find that the S4-Ultra deep survey has a roughly 20\% chance of detecting the main halos in these structures with SNR > 5 at $z \sim 2$ and a 10\% chance of detecting them at $z \sim 2.5$, where these probabilities include the impacts of noise, CMB foregrounds, and the different possible evolutionary histories of the structures. On the other hand, if protoclusters can be identified using alternative means, such as via galaxy surveys like LSST and Euclid, CMB-S4 will be able to obtain high signal-to-noise measurements of their stacked lensing and SZ signals, providing a way to measure their average mass and gas content. With a sample of 2700 protoclusters at $z = 3$, the CMB-S4 wide survey can measure the stacked SZ signal with a signal-to-noise of 7.2, and the stacked lensing signal with a signal-to-noise of 5.7. Future CMB surveys thus offer exciting prospects for understanding the properties of protoclusters.
Recently, the power spectrum (PS) multipoles of the Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 12 (DR12) sample were analyzed \cite{160703150}. The model underlying that analysis is the so-called TNS quasi-linear model, and the analysis provides the multipoles up to the hexadecapole \cite{TNS}. Thus, one may be able to recover the real-space linear matter PS from combinations of the multipoles in order to investigate cosmology \cite{0407214}. We provide the analytic form of the ratios of the quadrupole and hexadecapole to the monopole moment of the quasi-linear PS, including the Fingers-of-God (FoG) effect, in order to recover the real-space PS in the linear regime. One expects the observed ratios of the multipoles to be consistent with those of linear theory at large scales. Thus, we compare the ratios of the multipoles of linear theory, including the FoG effect, with the measured values. From these, we recover the linear matter power spectra in real space. The recovered power spectra are consistent with the linear matter power spectra.
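In the purely linear (Kaiser) limit, without FoG damping, the multipole-to-monopole ratios reduce to scale-independent constants depending only on $\beta = f/b$, and the monopole can be inverted to recover the real-space linear spectrum. The sketch below shows only this textbook limit, not the paper's full quasi-linear TNS expressions with FoG.

```python
# Linear-theory (Kaiser) redshift-space multipoles, with beta = f/b:
#   P0 = (1 + 2*beta/3 + beta**2/5) * b^2 * P_lin
#   P2 = (4*beta/3 + 4*beta**2/7)   * b^2 * P_lin
#   P4 = (8*beta**2/35)             * b^2 * P_lin
# so the ratios P2/P0 and P4/P0 are constants, and the real-space linear
# spectrum follows from a measured monopole.
def kaiser_ratios(beta):
    p0 = 1.0 + 2.0 * beta / 3.0 + beta**2 / 5.0
    p2 = 4.0 * beta / 3.0 + 4.0 * beta**2 / 7.0
    p4 = 8.0 * beta**2 / 35.0
    return p2 / p0, p4 / p0

def real_space_pk(p0_measured, beta, b=1.0):
    """Invert the Kaiser monopole to recover the real-space linear P(k)."""
    return p0_measured / ((1.0 + 2.0 * beta / 3.0 + beta**2 / 5.0) * b**2)
```

For $\beta \to 0$ both ratios vanish and the monopole coincides with the real-space spectrum, as expected.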
Program repair techniques offer cost-saving benefits for debugging within software development and programming education scenarios. With the proven effectiveness of Large Language Models (LLMs) in code-related tasks, researchers have explored their potential for program repair. However, it is crucial to recognize that existing repair benchmarks may have influenced LLM training data, potentially causing data leakage. To evaluate LLMs' realistic repair capabilities, (1) we introduce an extensive, non-crawled benchmark, referred to as TutorCode, comprising 1,239 C++ defect codes and associated information such as tutor guidance, solution description, failing test cases, and the corrected code. Our work assesses the repair performance of 12 LLMs on TutorCode, measuring repair correctness (TOP-5 and AVG-5) and patch precision (RPSR). (2) We then provide a comprehensive investigation into which types of extra information can help LLMs improve their performance in repairing defects. Among these types, tutor guidance was found to be the most effective information in enhancing LLM repair capabilities. To fully harness LLMs' conversational capabilities and the benefits of augmented information, (3) we introduce a novel conversational semi-automatic repair framework, CREF, which assists human tutors. It demonstrates a remarkable AVG-5 improvement of 17.2%-24.6% compared to the baseline, achieving an impressive AVG-5 of 76.6% when utilizing GPT-4. These results highlight the potential for enhancing LLMs' repair capabilities through interactions with tutors and historical conversations involving incorrect responses. The successful application of CREF in a real-world educational setting demonstrates its effectiveness in reducing tutors' workload and improving students' learning experience, while also showcasing its promise for facilitating other software engineering tasks, such as code review.
Deposition of particles while flowing past constrictions is a ubiquitous phenomenon observed in diverse systems. Some common examples are jamming of salt crystals near the orifice of saltshakers, clogging of filter systems, gridlock in vehicular traffic etc. Our work investigates the deposition events of colloidal microspheres flowing over microstructured barriers in microfluidic devices. The interplay of DLVO, contact and hydrodynamic forces in facilitating rapid deposition of microspheres is discussed. Noticeably, a decrease in the electrostatic repulsion among microspheres leads to linear chain formations, whereas an increase in roughness results in rapid deposition.
The infinitary propositional logic of here-and-there is important for the theory of answer set programming in view of its relation to strongly equivalent transformations of logic programs. We know a formal system axiomatizing this logic exists, but a proof in that system may include infinitely many formulas. In this note we describe a relationship between the validity of infinitary formulas in the logic of here-and-there and the provability of formulas in some finite deductive systems. This relationship allows us to use finite proofs to justify the validity of infinitary formulas. This note is under consideration for publication in Theory and Practice of Logic Programming.
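For finite formulas, validity in the logic of here-and-there is decidable by brute force: an HT interpretation is a pair of atom sets $H \subseteq T$, and a formula is valid iff it holds in every such pair. The sketch below checks this for finite propositional formulas only; it does not capture the infinitary case that the note is about, and the encoding of formulas as nested tuples is purely illustrative.

```python
from itertools import chain, combinations

# Finite validity check in the propositional logic of here-and-there (HT).
# Formulas: ('atom', name), ('bot',), ('and', f, g), ('or', f, g), ('imp', f, g).
def cl_sat(T, f):                      # classical satisfaction at world T
    op = f[0]
    if op == 'atom': return f[1] in T
    if op == 'bot':  return False
    if op == 'and':  return cl_sat(T, f[1]) and cl_sat(T, f[2])
    if op == 'or':   return cl_sat(T, f[1]) or cl_sat(T, f[2])
    if op == 'imp':  return (not cl_sat(T, f[1])) or cl_sat(T, f[2])

def ht_sat(H, T, f):                   # satisfaction at the "here" world
    op = f[0]
    if op == 'atom': return f[1] in H
    if op == 'bot':  return False
    if op == 'and':  return ht_sat(H, T, f[1]) and ht_sat(H, T, f[2])
    if op == 'or':   return ht_sat(H, T, f[1]) or ht_sat(H, T, f[2])
    if op == 'imp':  return ((not ht_sat(H, T, f[1])) or ht_sat(H, T, f[2])) \
                            and cl_sat(T, f)            # "there" must agree too

def ht_valid(f, atoms):
    subsets = lambda s: chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))
    return all(ht_sat(set(H), set(T), f)
               for T in subsets(atoms) for H in subsets(T))

p = ('atom', 'p')
neg = lambda f: ('imp', f, ('bot',))
# Weak excluded middle ¬p ∨ ¬¬p is HT-valid; excluded middle p ∨ ¬p is not.
```

This distinction (weak excluded middle holds, excluded middle fails) is exactly what separates here-and-there from classical logic.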
Software development for Cyber-Physical Systems (CPS) is a sophisticated activity as these systems are inherently complex. The engineering of CPS requires composition and interaction of diverse distributed software modules. Describing both a system's architecture and behavior in integrated models yields many advantages to cope with this complexity: the models are platform independent, can be decomposed to be developed independently by experts of the respective fields, are highly reusable, and may be subjected to formal analysis. In this paper, we introduce a code generation framework for the MontiArcAutomaton modeling language. CPS are modeled as Component & Connector architectures with embedded I/O automata. During development, these models can be analyzed using formal methods, graphically edited, and deployed to various platforms. For this, we present four code generators based on the MontiCore code generation framework that implement the transformation from MontiArcAutomaton models to Mona (formal analysis), EMF Ecore (graphical editing), and Java and Python (deployment). Based on these prototypes, we discuss their commonalities and differences as well as language and application specific challenges, focusing on code generator development.
A set $P = H \cup \{w\}$ of $n+1$ points in general position in the plane is called a wheel set if all points but $w$ are extreme. We show that for the purpose of counting crossing-free geometric graphs on such a set $P$, it suffices to know the frequency vector of $P$. While there are roughly $2^n$ distinct order types that correspond to wheel sets, the number of frequency vectors is only about $2^{n/2}$. We give simple formulas in terms of the frequency vector for the number of crossing-free spanning cycles, matchings, triangulations, and many more. Based on that, the corresponding numbers of graphs can be computed efficiently. In particular, we rediscover an already known formula for $w$-embracing triangles spanned by $H$. Also in higher dimensions, wheel sets turn out to be a suitable model to approach the problem of computing the simplicial depth of a point $w$ in a set $H$, i.e., the number of $w$-embracing simplices. While our previous arguments in the plane do not generalize easily, we show how to use similar ideas in $\mathbb{R}^d$ for any fixed $d$. The result is an $O(n^{d-1})$ time algorithm for computing the simplicial depth of a point $w$ in a set $H$ of $n$ points, improving on the previously best bound of $O(n^d\log n)$. Based on our result about simplicial depth, we can compute the number of facets of the convex hull of $n=d+k$ points in general position in $\mathbb{R}^d$ in time $O(n^{\max\{\omega,k-2\}})$ where $\omega \approx 2.373$, even though the asymptotic number of facets may be as large as $n^k$.
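The quantity whose computation the abstract accelerates to $O(n^{d-1})$ — the simplicial depth of $w$ in $H$ — can be spelled out in the plane as a brute-force $O(n^3)$ count of $w$-embracing triangles via orientation tests. The sketch below is this naive baseline, not the paper's algorithm, and assumes general position ($w$ not on any line through two points of $H$).

```python
from itertools import combinations
import math

# Brute-force count of w-embracing triangles in the plane: a triangle
# contains w iff w lies on the same side of all three directed edges.
def orient(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def simplicial_depth_2d(w, H):
    depth = 0
    for a, b, c in combinations(H, 3):
        s1, s2, s3 = orient(a, b, w), orient(b, c, w), orient(c, a, w)
        if (s1 > 0 and s2 > 0 and s3 > 0) or (s1 < 0 and s2 < 0 and s3 < 0):
            depth += 1
    return depth

# Regular pentagon around the origin: 5 of the C(5,3) = 10 triangles embrace it.
H = [(math.cos(2 * math.pi * k / 5), math.sin(2 * math.pi * k / 5))
     for k in range(5)]
```

For the pentagon, an inscribed triangle contains the center exactly when none of its three arcs exceeds 180°, which rules out the five triples of consecutive vertices.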
Following the general strategy proposed by G. Rybnikov, we present a proof of his well-known result, that is, the existence of two arrangements of lines having the same combinatorial type but non-isomorphic fundamental groups. To do so, the Alexander Invariant and certain invariants of combinatorial line arrangements are presented and developed for combinatorics with only double and triple points. This is part of a more general project to better understand the relationship between topology and combinatorics of line arrangements.
It has long been known that every weak monoidal category A is equivalent via monoidal functors and monoidal natural transformations to a strict monoidal category st(A). We generalise the definition of weak monoidal category to give a definition of weak P-category for any strongly regular (operadic) theory P, and show that every weak P-category is equivalent via P-functors and P-transformations to a strict P-category. This strictification functor is then shown to have an interesting universal property.
Coronal mass ejections (CMEs) are solar eruptions into interplanetary space of as much as a few billion tons of plasma, with embedded magnetic fields from the Sun's corona. These perturbations play a very important role in solar--terrestrial relations, in particular in space weather. In this work we present some preliminary results of the software development at the Universidad Nacional Autonoma de Mexico to perform remote MHD numerical simulations. This is done to study the evolution of CMEs in the interplanetary medium through a Web-based interface, and the results are stored in a database. The new astrophysical computational tool is called the Mexican Virtual Solar Observatory (MVSO) and aims to create theoretical models that may be helpful in the interpretation of observational solar data.
The Efimov effect was first predicted for three particles interacting at an $s$-wave resonance in three dimensions. Subsequent study showed that the same effect can be realized by considering two-body and three-body interactions in mixed dimensions. In this work, we consider the three-body problem of two bosonic $A$ atoms interacting with another single $B$ atom in mixed dimensions: The $A$ atoms are confined in a space of dimension $d_A$ and the $B$ atom in a space of dimension $d_B$, and there is an interspecies $s$-wave interaction in a $d_{\rm int}$-co-dimensional space accessible to both species. We find that when the $s$-wave interaction is tuned on resonance, there emerges an infinite series of universal three-body bound states for $\{d_A,d_B,d_{\rm int}\}=\{2,2,0\}$ and $\{2,3,1\}$. Going beyond the Efimov paradigm, the binding energies of these states follow the scaling $\ln|E_n|\sim-s(n\pi-\theta)^2/4$ with the scaling factor $s$ being unity for the former case and $\sqrt{m_B(2m_A+m_B)}/(m_A+m_B)$ for the latter. We discuss how our mixed dimensional systems can be realized in current cold atom experiments and how the effects of these universal three-body bound states can be detected.
In this paper, we propose deep learning architectures (FNN, CNN and LSTM) to forecast a regression model for time dependent data. These algorithms are designed to handle Floating Car Data (FCD) historic speeds to predict road traffic data. For this we aggregate the speeds into the network inputs in an innovative way. We compare the RMSE thus obtained with the results of a simpler physical model, and show that the latter achieves better RMSE accuracy. We also propose a new indicator, which evaluates the algorithms' improvement when compared to a benchmark prediction. We conclude by questioning the interest of using deep learning methods for this specific regression task.
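Comparing a forecaster against a benchmark via RMSE is commonly summarized by a skill-score-style indicator: 1 means a perfect forecast, 0 means no better than the benchmark. The sketch below shows this generic form; the paper's own indicator is not specified in the abstract and may differ, and the sample numbers are invented for illustration.

```python
import math

# RMSE and a skill-score-style indicator of improvement over a benchmark
# prediction (1.0 = perfect, 0.0 = no better than the benchmark).
def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

def improvement_over_benchmark(y_true, y_model, y_bench):
    return 1.0 - rmse(y_true, y_model) / rmse(y_true, y_bench)

speeds = [50.0, 48.0, 30.0, 25.0, 40.0]   # observed road speeds (km/h)
bench  = [45.0, 45.0, 45.0, 45.0, 45.0]   # naive constant benchmark
model  = [49.0, 47.0, 33.0, 28.0, 41.0]   # some forecaster's output
```

A negative value of the indicator would mean the model is actually worse than the benchmark, which is exactly the kind of sanity check the paper's comparison against a physical model calls for.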
In this paper, we compare the number of unmatched nodes and the size of dilations in two main random network models, the Scale-Free and Clustered Scale-Free networks. The number of unmatched nodes determines the necessary number of control inputs and is known to be a measure for network controllability, while the size of dilation is a measure of controllability recovery in case of control input failure. Our results show that clustered version of Scale-Free networks require fewer control inputs for controllability. Further, the average size of dilations is smaller in clustered Scale-Free networks, implying that potentially fewer options for controllability recovery are available.
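In structural controllability, the number of unmatched nodes of a directed network is computed from a maximum matching: split each node into an out-copy and an in-copy, match across the edges, and the minimum number of control inputs is max(N − |maximum matching|, 1) (Liu et al.'s result). The sketch below implements this with a simple augmenting-path matching; it illustrates the unmatched-node count only, not the dilation analysis.

```python
# Minimum number of driver nodes of a directed network via maximum
# bipartite matching on (out-copy, in-copy) node splits.
def max_matching(n, edges):
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    match_right = {}                      # in-copy -> out-copy

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if v not in match_right or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(n))

def num_driver_nodes(n, edges):
    return max(n - max_matching(n, edges), 1)

# A directed path 0 -> 1 -> 2 is controllable from a single input at node 0;
# a star 0 -> {1, 2, 3} needs three inputs.
```

This matching view is what makes the Scale-Free vs. clustered Scale-Free comparison in the abstract quantitative: fewer unmatched nodes means fewer required inputs.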
The general theory of Grothendieck categories is presented. We systematize the principal methods and results of the theory, showing how these results can be used for studying rings and modules.
Laser brightness is a measure of the ability to deliver intense light to a target, and encapsulates both the energy content and the beam quality. High brightness lasers require that both parameters be maximised, yet standard laser cavities do not allow this. For example, in solid-state lasers multimode beams have a high energy content but low beam quality, while Gaussian modes have a small mode volume and hence low energy extraction, but in a good quality mode. Here we overcome this fundamental limitation and demonstrate an optimal approach to realising high brightness lasers. We employ intracavity beam shaping to produce a Gaussian mode that carries all the energy of the multimode beam, thus energy extraction and beam quality are simultaneously maximised. This work will have a significant influence on the design of future high brightness laser cavities.
A lot. [In this hard, unbiased and objective look at some past and continuing blunders in following Weinberg's suggestions to arrive at a comprehensive description of Nuclear Physics using Effective Field Theories, some names and citations are withheld to protect the innocent.]
Inspired by the recent success of self-supervised methods applied on images, self-supervised learning on graph structured data has seen rapid growth especially centered on augmentation-based contrastive methods. However, we argue that without carefully designed augmentation techniques, augmentations on graphs may behave arbitrarily in that the underlying semantics of graphs can drastically change. As a consequence, the performance of existing augmentation-based methods is highly dependent on the choice of augmentation scheme, i.e., hyperparameters associated with augmentations. In this paper, we propose a novel augmentation-free self-supervised learning framework for graphs, named AFGRL. Specifically, we generate an alternative view of a graph by discovering nodes that share the local structural information and the global semantics with the graph. Extensive experiments towards various node-level tasks, i.e., node classification, clustering, and similarity search on various real-world datasets demonstrate the superiority of AFGRL. The source code for AFGRL is available at https://github.com/Namkyeong/AFGRL.
Recent publications have described a method for stand-off optical detection of explosives using resonant infra-red photothermal imaging. This technique uses tuned lasers to selectively heat small particles of explosive lying on a substrate surface. The presence of these heated particles is then detected using thermal infra-red imagery. Although the method has been experimentally demonstrated, no adequate theoretical analysis of the laser heating and subsequent particle cooling has been developed. This paper provides the analytical description of these processes that is necessary to understand and optimize the operational parameters of the explosive detection system. The differential equations for particle and substrate temperatures are derived and solved in the Laplace transform domain. The results are used to describe unexplained cooling phenomena measured during the experiments. A limiting particle temperature is derived as a function of experimental parameters. The effects of radiative and natural convection cooling of the particle and of non-uniform particle temperature are examined and found to be negligible. Calculations using the analytical model are compared with experimental measurements. An analysis of thermal contact heating of the substrate is included in the appendix.
We present Charagram embeddings, a simple approach for learning character-based compositional models to embed textual sequences. A word or sentence is represented using a character n-gram count vector, followed by a single nonlinear transformation to yield a low-dimensional embedding. We use three tasks for evaluation: word similarity, sentence similarity, and part-of-speech tagging. We demonstrate that Charagram embeddings outperform more complex architectures based on character-level recurrent and convolutional neural networks, achieving new state-of-the-art performance on several similarity tasks.
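The Charagram recipe — count character n-grams (with boundary markers), then apply one nonlinear transformation — can be sketched in a few lines. In the sketch below the projection weights are fixed deterministic values rather than learned parameters, and the hashing of n-grams into a fixed-size vector is an implementation convenience; both are illustrative departures from the paper, which learns its transformation from data.

```python
import hashlib
import math

# Minimal Charagram-style embedding: hash character n-grams (with '#'
# boundary markers) into a count vector, then apply one nonlinear map.
def char_ngrams(word, n_min=2, n_max=4):
    s = '#' + word + '#'
    return [s[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(s) - n + 1)]

def count_vector(word, dim=512):
    v = [0.0] * dim
    for g in char_ngrams(word):
        h = int(hashlib.md5(g.encode()).hexdigest(), 16)
        v[h % dim] += 1.0
    return v

def embed(word, out_dim=32, dim=512):
    v = count_vector(word, dim)
    emb = []
    for j in range(out_dim):                 # one fixed "weight row" per output
        acc = sum(v[i] * math.sin(i * 0.1 + j) for i in range(dim))
        emb.append(math.tanh(acc))           # single nonlinearity, as in Charagram
    return emb
```

Because the representation is built from character n-grams, morphological variants of a word share most of their active features, which is the intuition behind Charagram's strength on similarity tasks.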
We revisit the non-collinear exchange coupling across the trilayer magnetic junction mediated by the diluted-magnetic-semiconductor thin film. By numerical approaches, we investigate the spiral angle between the ferromagnetic layers extensively in the parameter space. In contrast to previous studies, we discovered the important role of spin relaxation, which tends to favor spiral exchange over the oscillatory Ruderman-Kittel-Kasuya-Yosida interaction. Finally, we discuss the physical origins of these two types of magnetic interactions.
A complete invariant for gradient-like Morse-Smale dynamical systems (vector fields and diffeomorphisms) on closed 4-manifolds is constructed. It coincides with the Kirby diagram in the case of a polar vector field without fixed points of index 3.
The classical Kepler-Coulomb system in 3 dimensions is well known to be 2nd order superintegrable, with a symmetry algebra that closes polynomially under Poisson brackets. This polynomial closure is typical for 2nd order superintegrable systems in 2D and for 2nd order systems in 3D with nondegenerate (4-parameter) potentials. However the degenerate 3-parameter potential for the 3D extended Kepler-Coulomb system (also 2nd order superintegrable) is an exception, as its quadratic symmetry algebra doesn't close polynomially. The 3D 4-parameter potential for the extended Kepler-Coulomb system is not even 2nd order superintegrable. However, Verrier and Evans (2008) showed it was 4th order superintegrable, and Tanoudis and Daskaloyannis (2011) showed that in the quantum case, if a second 4th order symmetry is added to the generators, the double commutators in the symmetry algebra close polynomially. Here, based on the Tremblay, Turbiner and Winternitz construction, we consider an infinite class of classical extended Kepler-Coulomb 3- and 4-parameter systems indexed by a pair of rational numbers $(k_1,k_2)$ and reducing to the usual systems when $k_1=k_2=1$. We show these systems to be superintegrable of arbitrarily high order and work out explicitly the structure of the symmetry algebras determined by the 5 basis generators we have constructed. We demonstrate that the symmetry algebras close rationally; only for systems admitting extra discrete symmetries is polynomial closure achieved. Underlying the structure theory is the existence of raising and lowering constants of the motion, not themselves polynomials in the momenta, that can be employed to construct the polynomial symmetries and their structure relations.
Transient current spectroscopy is proposed and demonstrated in order to investigate the energy relaxation inside a quantum dot in the Coulomb blockade regime. We employ a fast pulse signal to excite an AlGaAs/GaAs quantum dot to an excited state, and analyze the non-equilibrium transient current as a function of the pulse length. The amplitude and time-constant of the transient current are sensitive to the ground and excited spin states. We find that the spin relaxation time is longer than at least a few microseconds.
For the quantum algebra U_q(gl(n+1)) in its reduction on the subalgebra U_q(gl(n)) an explicit description of a Mickelsson-Zhelobenko reduction Z-algebra Z_q(gl(n+1),gl(n)) is given in terms of the generators and their defining relations. Using this Z-algebra we describe Hermitian irreducible representations of a discrete series for the noncompact quantum algebra U_q(u(n,1)) which is a real form of U_q(gl(n+1)), namely, an orthonormal Gelfand-Graev basis is constructed in an explicit form.
In this paper we study spaces that are not homotopically Hausdorff and their covering spaces. We introduce the notion of small covering and prove that every small covering of $X$ is the universal covering in the categorical sense. Also, we introduce the notion of semi-locally small loop space, which gives a necessary and sufficient condition for the existence of a universal cover for non-homotopically Hausdorff spaces, equivalently the existence of small covering spaces. Also, we prove that for semi-locally small loop spaces, $X$ is a small loop space if and only if every cover of $X$ is trivial if and only if $\pi_1^{top}(X)$ is an indiscrete topological group.
Several recent applications of optimal transport (OT) theory to machine learning have relied on regularization, notably entropy and the Sinkhorn algorithm. Because matrix-vector products are pervasive in the Sinkhorn algorithm, several works have proposed to \textit{approximate} kernel matrices appearing in its iterations using low-rank factors. Another route lies instead in imposing low-rank constraints on the feasible set of couplings considered in OT problems, with no approximation of the cost or kernel matrices. This route was first explored by Forrow et al., 2018, who proposed an algorithm tailored for the squared Euclidean ground cost, using a proxy objective that can be solved through the machinery of regularized 2-Wasserstein barycenters. Building on this, we introduce in this work a generic approach that aims at solving, in full generality, the OT problem under low-rank constraints with arbitrary costs. Our algorithm relies on an explicit factorization of low-rank couplings as a product of \textit{sub-coupling} factors linked by a common marginal; similar to an NMF approach, we alternately update these factors. We prove the non-asymptotic stationary convergence of this algorithm and illustrate its efficiency on benchmark experiments.
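The matrix-vector-product baseline that low-rank approaches aim to cheapen is the plain Sinkhorn iteration for entropy-regularized OT: alternately rescale the rows and columns of a Gibbs kernel until both marginals match. The sketch below is this standard baseline, not the paper's low-rank coupling factorization; the point cloud and parameters are arbitrary.

```python
import numpy as np

# Entropy-regularized OT via Sinkhorn iterations. Each iteration is two
# matrix-vector products with the Gibbs kernel K = exp(-C / eps).
def sinkhorn(a, b, cost, eps=0.5, n_iter=200):
    K = np.exp(-cost / eps)               # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                 # enforce column marginal
        u = a / (K @ v)                   # enforce row marginal
    return u[:, None] * K * v[None, :]    # plan P = diag(u) K diag(v)

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 2))
y = rng.standard_normal((6, 2))
cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared Euclidean
a = np.full(5, 1 / 5)                     # uniform source weights
b = np.full(6, 1 / 6)                     # uniform target weights
P = sinkhorn(a, b, cost)
```

The returned plan is dense; the low-rank route in the abstract replaces it with a product of thin sub-coupling factors so that each iteration stays cheap even for large point clouds.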