Some developers adopt patterns and best practices without analyzing their effectiveness on their own projects. Our goal in this article is therefore to convince software developers that it is worth making an earnest effort to evaluate the use of best practices and software patterns. To this end, we take as a concrete case a system for entering geographical locations through user interfaces. We then perform a comparative study of a traditional method against our approach, named reverse logistic to retrieve results, by measuring the time a user spends performing actions when entering data into the system. Surprisingly, our approach reduced the time spent by 59% compared with the traditional method. This result lays a foundation for feeding data from the typically final step and searching with string-matching algorithms, speeding up the interaction between people and the computer's response.
We revisit Wschebor's theorems on small increments for processes with scaling and stationary properties and deduce large deviation principles.
Hydrogen-enhanced decohesion (HEDE) is one of the many mechanisms of hydrogen embrittlement, a phenomenon that severely impacts structural materials such as iron and iron alloys. Grain boundaries (GBs) play a critical role in this mechanism, where they can provide trapping sites or act as hydrogen diffusion pathways. The interaction of H with GBs and other crystallographic defects, and thus the solubility and distribution of H in the microstructure, depends on the concentration, chemical potential, and local stress. Therefore, for a quantitative assessment of HEDE, a generalized solution energy in conjunction with the cohesive strength as a function of hydrogen coverage is needed. In this paper, we carry out density functional theory calculations to investigate the influence of H on the decohesion of the $\Sigma$5(310)[001] and $\Sigma$3(112)[1$\bar{1}$0] symmetrical tilt GBs in bcc Fe, as examples of open and close-packed GB structures. A method to identify the segregation sites at the GB plane is proposed. The results indicate that at higher local concentrations, H leads to a significant reduction of the cohesive strength of the GB planes, much more pronounced at the $\Sigma$5 than at the $\Sigma$3 GB. Interestingly, at finite stress, the $\Sigma$3 GB becomes more favorable for H solution, as opposed to the case of zero stress, where the $\Sigma$5 GB is more attractive. This suggests that, under certain conditions, stresses in the microstructure can lead to a redistribution of H to the stronger grain boundary, which opens a path to designing H-resistant microstructures. To round off our study, we investigate the effects of typical alloying elements in ferritic steel, C, V, Cr, and Mn, on the solubility of H and the strength of the GBs.
Let $\varphi:X\to X$ be a homeomorphism of a compact metric space $X$. For any continuous function $F:X\to \mathbb{R}$ there is a one-parameter group $\alpha^{F}$ of automorphisms on the crossed product $C^*$-algebra $C(X)\rtimes_{\varphi}\mathbb{Z}$ defined such that $\alpha^{F}_{t}(fU)=fUe^{-itF}$ when $f \in C(X)$ and $U$ is the canonical unitary in the construction of the crossed product. In this paper we study the KMS states for these flows by developing an intimate relation to the ergodic theory of non-singular transformations and show that the structure of KMS-states can be very rich and complicated. Our results are complete concerning the set of possible inverse temperatures; in particular, we show that when $C(X) \rtimes_{\varphi} \mathbb Z$ is simple this set is either $\{0\}$ or the whole line $\mathbb R$.
Maintaining tissue homeostasis requires appropriate regulation of stem cell differentiation. The Waddington landscape posits that gene circuits in a cell form a potential landscape of different cell types, wherein cells follow attractors of the probability landscape to develop into distinct cell types. However, how adult stem cells achieve a delicate balance between self-renewal and differentiation remains unclear. We propose that random inheritance of epigenetic states plays a pivotal role in stem cell differentiation and present a hybrid model of stem cell differentiation induced by epigenetic modifications. Our comprehensive model integrates gene regulation networks, epigenetic state inheritance, and cell regeneration, encompassing multi-scale dynamics ranging from transcription regulation to cell population. Through model simulations, we demonstrate that random inheritance of epigenetic states during cell divisions can spontaneously induce cell differentiation, dedifferentiation, and transdifferentiation. Furthermore, we investigate the influences of interfering with epigenetic modifications and introducing additional transcription factors on the probabilities of dedifferentiation and transdifferentiation, revealing the underlying mechanism of cell reprogramming. This \textit{in silico} model provides valuable insights into the intricate mechanism governing stem cell differentiation and cell reprogramming and offers a promising path to enhance the field of regenerative medicine.
Magnetic reconnection in laser-produced magnetized plasma is investigated by using optical diagnostics. The magnetic field is generated via the Biermann battery effect, and the inversely directed magnetic field lines interact with each other. It is shown by self-emission measurement that two colliding plasmas stagnate on a mid-plane forming two planar dense regions, and that they interact later in time. Laser Thomson scattering spectra are distorted in the direction of the self-generated magnetic field, indicating an asymmetric ion velocity distribution and plasma acceleration. In addition, the spectra perpendicular to the magnetic field show different peak intensities, suggesting the formation of an electron current. These results are interpreted as magnetic field dissipation, reconnection, and outflow acceleration. Two-directional laser Thomson scattering is, as discussed here, a powerful tool for the investigation of microphysics in the reconnection region.
Large-scale fine-grained image retrieval has two main problems. First, low-dimensional feature embeddings can speed up the retrieval process but reduce accuracy by overlooking the features of the salient attention regions of images in fine-grained datasets. Second, fine-grained images lead query hash codes of the same category to be mapped into different clusters in the database hash latent space. To handle these two issues, we propose a feature consistency driven attention erasing network (FCAENet) for fine-grained image retrieval. For the first issue, we propose an adaptive augmentation module in FCAENet, namely the selective region erasing module (SREM). SREM makes the network more robust to the subtle differences of fine-grained tasks by adaptively covering some regions of the raw images. With SREM, the feature extractor and hash layer can learn more representative hash codes for fine-grained images. With regard to the second issue, we fully exploit the pair-wise similarity information and add an enhancing space relation loss (ESRL) in FCAENet to make the vulnerable relation between the query hash code and database hash code more stable. We conduct extensive experiments on five fine-grained benchmark datasets (CUB2011, Aircraft, NABirds, VegFru, Food101) for 12-bit, 24-bit, 32-bit, and 48-bit hash codes. The results show that FCAENet achieves state-of-the-art (SOTA) fine-grained retrieval performance compared with other methods.
Some families of carbonaceous chondrites are rich in prebiotic organics that may have contributed to the origin of life on Earth and elsewhere. However, the formation and chemical evolution of complex soluble organic molecules from interstellar precursors under relevant parent body conditions has not been thoroughly investigated. In this study, we approach this topic by simulating meteorite parent body aqueous alteration of interstellar residue analogs. The distributions of amines and amino acids are qualitatively and quantitatively investigated and linked to closing the gap between interstellar and meteoritic prebiotic organic abundances. We find that the abundance trend of methylamine > ethylamine > glycine > serine > alanine > {\beta}-alanine does not change from pre- to post-aqueous alteration, suggesting that certain cloud conditions have an influential role on the distributions of interstellar-inherited meteoritic organics. However, the abundances for most of the amines and amino acids studied here varied by about 2-fold when aqueously processed for 7 days at 125 {\deg}C, and the changes in the {\alpha}- to {\beta}-alanine ratio were consistent with those of aqueously altered carbonaceous chondrites, pointing to an influential role of meteorite parent body processing on the distributions of interstellar-inherited meteoritic organics. We detected higher abundances of {\alpha}- over {\beta}-alanine, which is opposite to what is typically observed in aqueously altered carbonaceous chondrites; these results may be explained by at least the lack of minerals and insoluble organic matter-relevant materials in the experiments. The high abundance of volatile amines in the non-aqueously altered samples suggests that these types of interstellar volatiles can be efficiently transferred to asteroids and comets, supporting the idea of the presence of interstellar organics in solar system objects.
We give a new construction, based on categorical logic, of Nori's $\mathbb Q$-linear abelian category of mixed motives associated to a cohomology or homology functor with values in finite-dimensional vector spaces over $\mathbb Q$. This new construction makes sense for infinite-dimensional vector spaces as well, so that it associates a $\mathbb Q$-linear abelian category of mixed motives to any (co)homology functor, not only Betti homology (as Nori had done) but also, for instance, $\ell$-adic, $p$-adic or motivic cohomology. We prove that the $\mathbb Q$-linear abelian categories of mixed motives associated to different (co)homology functors are equivalent if and only if a family (of logical nature) of explicit properties is shared by these different functors. The problem of the existence of a universal cohomology theory and of the equivalence of the information encoded by the different classical cohomology functors thus reduces to that of checking these explicit conditions.
The inherent noise in an observed (e.g., scanned) binary document image degrades the image quality and harms the compression ratio by breaking pattern repetition and adding entropy to the document image. In this paper, we design a cost function in a Bayesian framework with dictionary learning. Minimizing our cost function produces a restored image which has better quality than the observed noisy image, and a dictionary for representing and encoding the image. After the restoration, we use this dictionary (from the same cost function) to encode the restored image following the symbol-dictionary framework of the JBIG2 standard in lossless mode. Experimental results with a variety of document images demonstrate that our method improves the image quality compared with the observed image, and simultaneously improves the compression ratio. For the test images with synthetic noise, our method reduces the number of flipped pixels by 48.2% and improves the compression ratio by 36.36% compared with the best encoding methods. For the test images with real noise, our method visually improves the image quality, and outperforms the cutting-edge method by 28.27% in terms of the compression ratio.
We present an approach to study functional segregation and integration in the living brain based on community structure decomposition determined by maximum modularity. We demonstrate this method with a network derived from functional imaging data with nodes defined by individual image pixels, and edges in terms of correlated signal changes. We found communities whose anatomical distributions correspond to biologically meaningful structures and include compelling functional subdivisions between anatomically equivalent brain regions.
To control how a robot moves, motion planning algorithms must compute paths in high-dimensional state spaces while accounting for physical constraints related to motors and joints, generating smooth and stable motions, avoiding obstacles, and preventing collisions. A motion planning algorithm must therefore balance competing demands, and should ideally incorporate uncertainty to handle noise, model errors, and facilitate deployment in complex environments. To address these issues, we introduce a framework for robot motion planning based on variational Gaussian processes, which unifies and generalizes various probabilistic-inference-based motion planning algorithms, and connects them with optimization-based planners. Our framework provides a principled and flexible way to incorporate equality-based, inequality-based, and soft motion-planning constraints during end-to-end training, is straightforward to implement, and provides both interval-based and Monte-Carlo-based uncertainty estimates. We conduct experiments using different environments and robots, comparing against baseline approaches based on the feasibility of the planned paths, and obstacle avoidance quality. Results show that our proposed approach yields a good balance between success rates and path quality.
For nuclei ranging from light to superheavy, we have calculated the effective surface properties such as the symmetry energy, neutron pressure, and symmetry energy curvature using the coherent density fluctuation model. The isotopic chains of O, Ca, Ni, Zr, Sn, Pb, and Z = 120 are considered in the present analysis, which cover nuclei over the whole nuclear chart. The matter density distributions of these nuclei along with the ground state bulk properties are calculated within the spherically symmetric effective field theory motivated relativistic mean field model by using the recently developed IOPB-I, FSUGarnet, and G3 parameter sets. The calculated results are compared with the predictions of the widely used NL3 parameter set and found to be in good agreement. We observe a few signatures of shell and/or sub-shell structure in the isotopic chains of nuclei. The present investigations are quite relevant for the synthesis of exotic nuclei with high isospin asymmetry, including superheavy nuclei, and also for constraining the equation of state of nuclear matter.
Many biological and physical systems exhibit population-density-dependent transitions to synchronized oscillations in a process often termed "dynamical quorum sensing". Synchronization frequently arises through chemical communication via signaling molecules distributed through an external medium. We study a simple theoretical model for dynamical quorum sensing: a heterogeneous population of limit-cycle oscillators diffusively coupled through a common medium. We show that this model exhibits a rich phase diagram with four qualitatively distinct mechanisms fueling population-dependent transitions to global oscillations, including a new type of transition we term "dynamic death". We derive a single pair of analytic equations that allows us to calculate all phase boundaries as a function of population density and show that the model reproduces many of the qualitative features of recent experiments on BZ catalytic particles as well as synthetically engineered bacteria.
Face recognition models embed a face image into a low-dimensional identity vector containing abstract encodings of identity-specific facial features that allow individuals to be distinguished from one another. We tackle the challenging task of inverting the latent space of pre-trained face recognition models without full model access (i.e., the black-box setting). A variety of methods have been proposed in the literature for this task, but they have serious shortcomings such as a lack of realistic outputs and strong requirements on the data set and the accessibility of the face recognition model. By analyzing the black-box inversion problem, we show that the conditional diffusion model loss naturally emerges and that we can effectively sample from the inverse distribution even without an identity-specific loss. Our method, named identity denoising diffusion probabilistic model (ID3PM), leverages the stochastic nature of the denoising diffusion process to produce high-quality, identity-preserving face images with various backgrounds, lighting, poses, and expressions. We demonstrate state-of-the-art performance in terms of identity preservation and diversity both qualitatively and quantitatively, and our method is the first black-box face recognition model inversion method that offers intuitive control over the generation process.
Polynomials that commute under composition are referred to as commuting polynomials. In this paper, we study division properties of commuting polynomials with rational (and integer) coefficients. As a consequence, we show an algebraic particularity of the commuting polynomials arising from weighted sums for cycle graphs with pendant edges (arXiv:2402.07209).
We prove the following theorem: if $w$ is a quasiconformal mapping of the unit disk onto itself satisfying the elliptic partial differential inequality $|L[w]|\le \mathcal{B}|\nabla w|^2+\Gamma$, then $w$ is Lipschitz continuous. This result extends some recent results in which only the Laplace operator is considered instead of a general elliptic differential operator.
Contents: 1. Introduction. 2. Regge calculus and dynamical triangulations: simplicial manifolds and piecewise linear spaces - dual complex and volume elements - curvature and Regge action - topological invariants - quantum Regge calculus - dynamical triangulations. 3. Two-dimensional quantum gravity, dynamical triangulations and matrix models: continuum formulation - dynamical triangulations and continuum limit - one matrix model - various matrix models - numerical studies - c=1 barrier - intrinsic geometry of 2d gravity - Liouville at c>25. 4. Euclidean quantum gravity in three and four dimensions: what are we looking for? - 3d simplicial gravity - 4d simplicial gravity - 3d and 4d Regge calculus. 5. Non-perturbative problems in two-dimensional quantum gravity: double scaling limit - string equation - non-perturbative properties of the string equation - divergent series and Borel summability - non-perturbative effects in 2d gravity and string theories - stabilization proposals. 6. Conclusion.
The expression for entropy sometimes appears mysterious, as it is often asserted without justification. This short manuscript contains a discussion of the underlying assumptions behind entropy as well as a simple derivation of this ubiquitous quantity.
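For orientation, the expression in question is the standard Gibbs-Shannon form (the manuscript's own notation and starting assumptions may differ): $$S = -k_B\sum_i p_i\ln p_i,$$ which reduces to Boltzmann's $S = k_B\ln\Omega$ when all $\Omega$ accessible microstates are equally probable ($p_i = 1/\Omega$).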
Session-based recommendation techniques aim to capture dynamic user behavior by analyzing past interactions. However, existing methods rely heavily on historical item ID sequences to extract user preferences, leading to challenges such as popularity bias and cold-start problems. In this paper, we propose a hybrid multimodal approach for session-based recommendation to address these challenges. Our approach combines different modalities, including textual content and item IDs, leveraging the complementary nature of these modalities using CatBoost. To learn universal item representations, we design a language-representation-based item retrieval architecture that extracts features from the textual content utilizing pre-trained language models. Furthermore, we introduce a novel Decoupled Contrastive Learning method to enhance the effectiveness of the language representation. This technique decouples the sequence representation and item representation spaces, facilitating bidirectional alignment through dual-queue contrastive learning. Simultaneously, the momentum queue provides a large number of negative samples, effectively strengthening the contrastive learning. Our approach yielded competitive results, securing a 5th place ranking in KDD CUP 2023 Task 1. We have released the source code and pre-trained models associated with this work.
The Magnetism in Massive Stars (MiMeS) project represents the largest systematic survey of stellar magnetism ever undertaken. Based on a sample of over 550 Galactic B and O-type stars, the MiMeS project has derived the basic characteristics of magnetism in hot, massive stars. Herein we report preliminary results.
Several hairy black hole solutions are known to violate the original version of the celebrated no-hair conjecture. This prompted the development of a new theorem that establishes a universal lower bound on the extension of hairs outside any $4$-dimensional black hole solutions of general relativity. Our work presents a novel generalization of this ``no-short hair'' theorem, which notably does not use gravitational field equations and is valid for arbitrary spacetime dimensions ($D \geq 4$). Consequently, irrespective of the underlying theory of gravity, the ``hairosphere'' must extend to the innermost light ring of the black hole spacetime. Various possible observational implications of this intriguing theorem are discussed, and other useful consequences are explored.
Knowledge of the severity of an influenza outbreak is crucial for informing and monitoring appropriate public health responses, both during and after an epidemic. However, case-fatality, case-intensive care admission and case-hospitalisation risks are difficult to measure directly. Bayesian evidence synthesis methods have previously been employed to combine fragmented, under-ascertained and biased surveillance data coherently and consistently, to estimate case-severity risks in the first two waves of the 2009 A/H1N1 influenza pandemic experienced in England. We present in detail the complex probabilistic model underlying this evidence synthesis, and extend the analysis to also estimate severity in the third wave of the pandemic strain during the 2010/2011 influenza season. We adapt the model to account for changes in the surveillance data available over the three waves. We consider two approaches: (a) a two-stage approach using posterior distributions from the model for the first two waves to inform priors for the third wave model; and (b) a one-stage approach modelling all three waves simultaneously. Both approaches result in the same key conclusions: (1) that the age-distribution of the case-severity risks is "u"-shaped, with children and older adults having the highest severity; (2) that the age-distribution of the infection attack rate changes over waves, school-age children being most affected in the first two waves and the attack rate in adults over 25 increasing from the second to third waves; and (3) that when averaged over all age groups, case-severity appears to increase over the three waves. The extent to which the final conclusion is driven by the change in age-distribution of those infected over time is subject to discussion.
A theorem of Hou, Leung and Xiang generalised Kneser's addition theorem to field extensions. This theorem was known to be valid only in separable extensions, and it was a conjecture of Hou that it should be valid for all extensions. We give an alternative proof of the theorem that also holds in the non-separable case, thus solving Hou's conjecture. This result is a consequence of a strengthening of Hou et al.'s theorem that is a transposition to extension fields of an addition theorem of Balandraud.
In some classes of supersymmetric (SUSY) models, the neutral Wino becomes the lightest superparticle and the Bino decays into the Wino and standard-model particles. In such models, we show that a measurement of the Bino mass is possible if short charged tracks (with lengths of O(10 cm)) can be identified as a signal of charged-Wino production. We pay particular attention to the anomaly-mediated SUSY-breaking (AMSB) model with a generic form of the K\"ahler potential, in which only the gauginos are superparticles kinematically accessible at the LHC, and discuss the implications of the Bino mass measurement for the test of the AMSB model.
The non-Abelian symmetries of the half-infinite XXZ spin chain for all possible types of integrable boundary conditions are classified. For each type of boundary conditions, an analog of the Chevalley-type presentation is given for the corresponding symmetry algebra. In particular, two new algebras arise that are, respectively, generated by the symmetry operators of the model with triangular and special $U_q(gl_2)$-invariant integrable boundary conditions.
We study the spin dynamics in a 3D quantum antiferromagnet on a face-centered cubic (FCC) lattice. The effects of magnetic field, single-ion anisotropy, and biquadratic interactions are investigated using linear spin wave theory with spins in a canted basis about the Type IIA FCC antiferromagnetic ground state structure which is known to be stable. We calculate the expected finite frequency neutron scattering intensity and give qualitative criteria for typical FCC materials MnO and CoO. The magnetization reduction due to quantum zero point fluctuations is also analyzed.
The transport of energy in heated plasmas requires knowledge of the radiation coefficients. These coefficients consist of contributions from bremsstrahlung, photoionisation, bound-bound transitions and scattering. Scattering of photons on electrons is taken into account by the Thomson model, the Klein-Nishina model and the first-order angular momentum of Klein-Nishina. It is shown that radiative scattering becomes an important part of energy transport in high temperature plasmas. Moreover, the contribution of the transport correction to scattering is taken into account. The physics is discussed on the example of a heated plutonium plasma at different particle densities and temperatures in radiative equilibrium.
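For reference, the textbook expressions behind the scattering models named above are (in standard notation; the paper's radiation coefficients build on these but are not reproduced here): $$\sigma_{\mathrm T}=\frac{8\pi}{3}r_e^2,\qquad \frac{d\sigma_{\mathrm{KN}}}{d\Omega}=\frac{r_e^2}{2}\left(\frac{E'}{E}\right)^{2}\left(\frac{E'}{E}+\frac{E}{E'}-\sin^{2}\theta\right),\qquad \frac{E'}{E}=\frac{1}{1+\frac{E}{m_ec^{2}}\,(1-\cos\theta)},$$ where $r_e$ is the classical electron radius, $E$ and $E'$ are the incident and scattered photon energies, and $\theta$ is the scattering angle; the Klein-Nishina cross section reduces to the Thomson limit for $E\ll m_ec^{2}$.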
We construct covariant $q$-deformed holomorphic structures for all finitely-generated relative Hopf modules over the irreducible quantum flag manifolds endowed with their Heckenberger--Kolb calculi. In the classical limit these reduce to modules of sections of holomorphic homogeneous vector bundles over irreducible flag manifolds. For the case of simple relative Hopf modules, we show that this covariant holomorphic structure is unique. This generalises earlier work of Majid, Khalkhali, Landi, and van Suijlekom for line modules of the Podle\'s sphere, and subsequent work of Khalkhali and Moatadelro for general quantum projective space.
We present alfonso, an open-source Matlab package for solving conic optimization problems over nonsymmetric convex cones. The implementation is based on the authors' corrected analysis of a primal-dual interior-point method of Skajaa and Ye. This method enables optimization over any convex cone as long as a logarithmically homogeneous self-concordant barrier is available for the cone or its dual. This includes many nonsymmetric cones, for example, hyperbolicity cones and their duals (such as sum-of-squares cones), semidefinite and second-order cone representable cones, power cones, and the exponential cone. Besides enabling the solution of problems which cannot be cast as optimization problems over a symmetric cone, it also offers performance advantages for problems whose symmetric cone programming representation requires a large number of auxiliary variables or has a special structure that can be exploited in the barrier computation. The worst-case iteration complexity of alfonso is the best known for non-symmetric cone optimization: $O(\sqrt{\nu}\log(1/\epsilon))$ iterations to reach an $\epsilon$-optimal solution, where $\nu$ is the barrier parameter of the barrier function used in the optimization. alfonso can be interfaced with a Matlab function (supplied by the user) that computes the Hessian of a barrier function for the cone. For convenience, a simplified interface is also available to optimize over the direct product of cones for which a barrier function has already been built into the software. This interface can be easily extended to include new cones. Both interfaces are illustrated by solving linear programs. The oracle interface and the efficiency of alfonso are also demonstrated using a design of experiments problem in which the tailored barrier computation greatly decreases the solution time compared to using state-of-the-art conic optimization software.
We consider maximal slices of the Myers-Perry black hole, the doubly spinning black ring, and the Black Saturn solution. These slices are complete, asymptotically flat Riemannian manifolds with inner boundaries corresponding to black hole horizons. Although these spaces are simply connected as a consequence of topological censorship, they have non-trivial topology. In this note we investigate the question of whether the topology of spatial sections of the horizon uniquely determines the topology of the maximal slices. We show that the horizon determines the homological invariants of the slice under certain conditions. The homological analysis is extended to black holes for which explicit geometries are not yet known. We believe that these results could provide insights in the context of proving existence of deformations of this initial data. For the topological slices of the doubly spinning black ring and the Black Saturn we compute the homotopy groups up to dimension 3 and show that their 4-dimensional homotopy group is not trivial.
We study the large-time behavior of solutions, bounded from below, of parabolic viscous Hamilton-Jacobi equations in the whole space $\mathbb{R}^N$ in the case of superquadratic Hamiltonians. Existence and uniqueness of such solutions are shown in a very general framework, namely when the source term and the initial data are only bounded from below with an arbitrary growth at infinity. Our main result is that these solutions have an ergodic behavior when $t\to +\infty$, i.e., they behave like $\lambda^*t + \phi(x)$ where $\lambda^*$ is the maximal ergodic constant and $\phi$ is a solution of the associated ergodic problem. The main originality of this result comes from the generality of the data: in particular, the initial data may have a completely different growth at infinity from that of the solution of the ergodic problem.
We provide a new method to prove and improve the Chemin-Masmoudi criterion for viscoelastic systems of Oldroyd type in \cite{CM} in two space dimensions. Our method is much easier than the one based on the well-known \textit{losing a priori estimate} and is expected to be easily adapted to other problems involving the losing \textit{a priori} estimate.
Through molecular dynamics simulations, we examined hydrodynamic behavior of the Brownian motion of fullerene particles based on molecular interactions. The solvation free energy and the velocity autocorrelation function (VACF) were calculated by using the Lennard-Jones (LJ) and Weeks-Chandler-Andersen (WCA) potentials for the solute-solvent and solvent-solvent interactions and by changing the size of the fullerene particles. We also measured the diffusion constant of the fullerene particles and the shear viscosity of the host fluid, and then the hydrodynamic radius $a_\mathrm{HD}$ was quantified from the Stokes-Einstein relation. The $a_\mathrm{HD}$ value exceeds that of the gyration radius of the fullerene when the solvation free energy exhibits largely negative values using the LJ potential. In contrast, $a_\mathrm{HD}$ becomes comparable to the size of bare fullerene, when the solvation free energy is positive using the WCA potential. Furthermore, the VACF of the fullerene particles is directly comparable with the analytical expressions utilizing the Navier-Stokes equations both in incompressible and compressible forms. Hydrodynamic long-time tail $t^{-3/2}$ is demonstrated for timescales longer than the kinematic time of the momentum diffusion over the particles' size. However, the VACF in shorter timescales deviates from the hydrodynamic description, particularly for smaller fullerene particles and for the LJ potential. This occurs even though the compressible effect is considered when characterizing the decay of VACF around the sound propagation time scale over the particles' size. These results indicate that the nanoscale Brownian motion is influenced by the solvation structure around the solute particles originating from the molecular interaction.
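For concreteness, the hydrodynamic radius quoted above is the one obtained by inverting the Stokes-Einstein relation in its standard form (the boundary-condition constant shown here is the generic textbook choice, not necessarily the one adopted in the paper): $$D=\frac{k_BT}{C\pi\eta\,a_{\mathrm{HD}}}\quad\Longrightarrow\quad a_{\mathrm{HD}}=\frac{k_BT}{C\pi\eta D},$$ with $D$ the measured diffusion constant, $\eta$ the shear viscosity of the host fluid, and $C=4$ (slip) or $C=6$ (stick) depending on the hydrodynamic boundary condition at the particle surface.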
Mutual exclusion is one of the most commonly used techniques to handle contention in concurrent systems. Traditionally, mutual exclusion algorithms have been designed under the assumption that a process does not fail while acquiring/releasing a lock or while executing its critical section. However, failures do occur in real life, potentially leaving the lock in an inconsistent state. This gives rise to the problem of recoverable mutual exclusion (RME) that involves designing a mutual exclusion (ME) algorithm that can tolerate failures, while maintaining safety and liveness properties. In this work, we present a framework that transforms any algorithm that solves the RME problem into an algorithm that can also simultaneously adapt to (1) the number of processes competing for the lock, as well as (2) the number of failures that have occurred in the recent past, while maintaining the correctness and performance properties of the underlying RME algorithm. Additionally, the algorithm constructed as a result of this transformation adds certain desirable properties like fairness (a variation of FCFS) and bounded recovery. Assume that the worst-case RMR complexity of a critical section request in the underlying RME algorithm is $R(n)$. Then, our framework yields an RME algorithm for which the worst-case RMR complexity of a critical section request is given by $\mathcal{O}(\min \{\ddot{c}, \sqrt{F+1}, R(n)\})$, where $\ddot{c}$ denotes the point contention of the request and $F$ denotes the number of failures in the recent past of the request. We further extend our framework by presenting a novel memory reclamation algorithm to bound the worst-case space complexity of the RME algorithm. The memory reclamation techniques maintain the fairness, performance and correctness properties of our transformation and are general enough to be employed to bound the space of other RME algorithms.
We entirely compute the cohomology for a natural and large class of $\mathfrak{osp}(1|2)$ modules $M$. We study the restriction to the $\mathfrak{sl}(2)$ cohomology of $M$ and apply our results to the module $M={\mathfrak D}_{\lambda,\mu}$ of differential operators on the super circle, acting on densities.
This workshop aims to demonstrate how the Tracker Video Analysis and Modeling Tool engages, enables and empowers teachers to be learners so that we can be leaders in our teaching practice. Through this workshop, the kinematics of a falling ball and of projectile motion are explored using video analysis and, later, video modeling. We hope to lead and inspire other teachers by facilitating their experiences with this ICT-enabled video modeling pedagogy (Brown, 2008) and free tool for facilitating student-centered active learning, thus motivating students to be more self-directed.
Considering that the Coupled Dictionary Learning (CDL) method can obtain a reasonable linear mathematical relationship between source images, we propose a novel CDL-based Synthetic Aperture Radar (SAR) and multispectral pseudo-color fusion method. Firstly, the traditional Brovey transform is employed as a pre-processing method on the paired SAR and multispectral images. Then, CDL is used to capture the correlation between the pre-processed image pairs based on the dictionaries generated from the source images via enforced joint sparse coding. Afterward, the joint sparse representation in the pair of dictionaries is utilized to construct an image mask via calculating the reconstruction errors, and therefore generate the final fusion image. The experimental verification results of the SAR images from the Sentinel-1 satellite and the multispectral images from the Landsat-8 satellite show that the proposed method can achieve superior visual effects, and excellent quantitative performance in terms of spectral distortion, correlation coefficient, MSE, NIQE, BRISQUE, and PIQE.
In this paper, the potential benefits of applying the non-orthogonal multiple access (NOMA) technique in $K$-tier hybrid heterogeneous networks (HetNets) are explored. A promising new transmission framework is proposed, in which NOMA is adopted in small cells and massive multiple-input multiple-output (MIMO) is employed in macro cells. For maximizing the biased average received power for mobile users, a NOMA and massive MIMO based user association scheme is developed. To evaluate the performance of the proposed framework, we first derive the analytical expressions for the coverage probability of NOMA enhanced small cells. We then examine the spectrum efficiency of the whole network, by deriving exact analytical expressions for NOMA enhanced small cells and a tractable lower bound for massive MIMO enabled macro cells. Lastly, we investigate the energy efficiency of the hybrid HetNets. Our results demonstrate that: 1) The coverage probability of NOMA enhanced small cells is affected to a large extent by the targeted transmit rates and power sharing coefficients of the two NOMA users; 2) Massive MIMO enabled macro cells are capable of significantly enhancing the spectrum efficiency by increasing the number of antennas; 3) The energy efficiency of the whole network can be greatly improved by densely deploying NOMA enhanced small cell base stations (BSs); and 4) The proposed NOMA enhanced HetNets transmission scheme has superior performance compared to the orthogonal multiple access~(OMA) based HetNets.
A reliable critic is central to on-policy actor-critic learning. But it becomes challenging to learn a reliable critic in a multi-agent sparse reward scenario due to two factors: 1) the joint action space grows exponentially with the number of agents, and 2) this, combined with the reward sparseness and environment noise, leads to large sample requirements for accurate learning. We show that regularising the critic with spectral normalisation (SN) enables it to learn more robustly, even in multi-agent on-policy sparse reward scenarios. Our experiments show that the regularised critic is quickly able to learn from the sparse rewarding experience in the complex SMAC and RWARE domains. These findings highlight the importance of regularisation in the critic for stable learning.
We study the representations of large integers $n$ as sums $p_1^2 + ... + p_s^2$, where $p_1,..., p_s$ are primes with $| p_i - (n/s)^{1/2} | \le n^{\theta/2}$, for some fixed $\theta < 1$. When $s = 5$ we use a sieve method to show that all sufficiently large integers $n \equiv 5 \pmod {24}$ can be represented in the above form for $\theta > 8/9$. This improves on earlier work by Liu, L\"{u} and Zhan, who established a similar result for $\theta > 9/10$. We also obtain estimates for the number of integers $n$ satisfying the necessary local conditions but lacking representations of the above form with $s = 3, 4$. When $s = 4$ our estimates improve and generalize recent results by L\"{u} and Zhai, and when $s = 3$ they appear to be the first of their kind.
This is an early but comprehensive review of the PNP (Poisson-Nernst-Planck) theory of ion channels. Extensive reference is made to the earlier literature. The starting place for this theory of open channels is a theory of electrodiffusion rather like that used previously to describe membranes. The theory uses Poisson's equation to describe how charge on ions and the channel protein creates electrical potential; it uses the Nernst-Planck equations to describe migration and diffusion of ions in gradients of concentration and electrical potential. Combined, these are also the "drift-diffusion equations" of solid state physics, which are widely, if not universally, used to describe the flow of current and the behavior of semiconductors.
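For reference, a minimal statement of the coupled equations the review is built on, in a commonly used notation (the review's own conventions and boundary conditions may differ): $$\nabla\cdot\bigl(\varepsilon\nabla\phi\bigr)=-\Bigl(e\sum_i z_i c_i+\rho_{\mathrm{fixed}}\Bigr),\qquad \mathbf{J}_i=-D_i\Bigl(\nabla c_i+\frac{z_ie}{k_BT}c_i\nabla\phi\Bigr),\qquad \frac{\partial c_i}{\partial t}=-\nabla\cdot\mathbf{J}_i,$$ where $\phi$ is the electrical potential, $c_i$, $z_i$ and $D_i$ are the concentration, valence and diffusivity of ion species $i$, $\varepsilon$ is the permittivity, and $\rho_{\mathrm{fixed}}$ is the fixed charge of the channel protein; for the steady-state open channel one sets $\nabla\cdot\mathbf{J}_i=0$.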
In recent years, the development of Artificial Intelligence (AI) has offered the possibility to tackle many interdisciplinary problems, and the field of chemistry is not an exception. Drug analysis is crucial in drug discovery, playing an important role in human life. However, this task encounters many difficulties due to the wide range of computational chemistry methods. Drug analysis also involves a massive amount of work, including determining taste. Thus, applying deep learning to predict a molecule's bitterness is an inevitable step toward accelerating innovation in drug analysis by reducing the time spent. This paper proposes an artificial neural network (ANN) based approach (EC-ANN) for predicting a molecule's bitterness. Our approach takes the SMILES (Simplified Molecular-Input Line-Entry System) string of a molecule as the input data for the prediction, and the 256-bit ECFP descriptor is the input vector for our network. It shows impressive results compared to the state of the art, with higher performance on two out of three popular test sets: the Phyto-Dictionary, Unimi, and Bitter-new sets [1]. For the Phyto-Dictionary test set, our model recorded 0.95 and 0.983 in F1-score and AUPR, respectively, the highest F1-score reported. For the Unimi test set, our model achieved 0.88 in F1-score and 0.88 in AUPR, which is roughly 12.3% higher than the best of the previous models [1, 2, 3, 4, 5].
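Since the abstract does not give architectural details, the following Python sketch only illustrates the input pipeline it describes (SMILES string to 256-bit ECFP vector to a feed-forward classifier); it is not the EC-ANN model itself, and the RDKit/scikit-learn choices, Morgan radius, layer sizes, and toy labels are our assumptions.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.neural_network import MLPClassifier

def ecfp_256(smiles, radius=2):
    """256-bit ECFP (Morgan) fingerprint for a SMILES string, or None if parsing fails."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=256)
    return np.asarray(list(fp), dtype=np.float32)

# Toy placeholder data (SMILES, is_bitter) -- replace with a real bitterness training set.
data = [("CN1C=NC2=C1C(=O)N(C(=O)N2C)C", 1),   # caffeine, a classic bitter compound
        ("C(C(=O)O)N", 0)]                      # glycine, used here as a non-bitter example
X = np.stack([ecfp_256(s) for s, _ in data])
y = np.array([label for _, label in data])

# A small feed-forward classifier standing in for the paper's EC-ANN architecture.
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
clf.fit(X, y)
print(clf.predict(X))
```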
Let $p$ be an odd prime. It is well known that $F_{p-(\frac p5)}\equiv 0\pmod{p}$, where $\{F_n\}_{n\ge0}$ is the Fibonacci sequence and $(-)$ is the Jacobi symbol. In this paper we show that if $p\not=5$ then we may determine $F_{p-(\frac p5)}$ mod $p^3$ in the following way: $$\sum_{k=0}^{(p-1)/2}\frac{\binom{2k}k}{(-16)^k}\equiv\left(\frac{p}5\right)\left(1+\frac{F_{p-(\frac {p}5)}}2\right)\pmod{p^3}.$$ We also use Lucas quotients to determine $\sum_{k=0}^{(p-1)/2}\binom{2k}k/m^k$ modulo $p^2$ for any integer $m\not\equiv0\pmod{p}$; in particular, we obtain $$\sum_{k=0}^{(p-1)/2}\frac{\binom{2k}k}{16^k}\equiv\left(\frac3{p}\right)\pmod{p^2}.$$ In addition, we pose three conjectures for further research.
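As a quick numerical sanity check of the first congruence (not part of the paper), the following Python sketch verifies $\sum_{k=0}^{(p-1)/2}\binom{2k}k/(-16)^k\equiv(\frac p5)(1+F_{p-(\frac p5)}/2)\pmod{p^3}$ for a few small primes $p\ne5$; the helper names are ours, and Python 3.8+ is assumed so that pow computes modular inverses.

```python
from math import comb

def legendre_p_over_5(p):
    # (p/5) = 1 if p = +-1 (mod 5), -1 if p = +-2 (mod 5)
    return 1 if p % 5 in (1, 4) else -1

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def congruence_holds(p):
    m = p ** 3
    # left-hand side: sum of binom(2k, k) * (-16)^(-k) modulo p^3
    lhs = sum(comb(2 * k, k) * pow(-16, -k, m)
              for k in range((p - 1) // 2 + 1)) % m
    eps = legendre_p_over_5(p)
    # right-hand side: (p/5) * (1 + F_{p - (p/5)} / 2) modulo p^3
    rhs = eps * (1 + fib(p - eps) * pow(2, -1, m)) % m
    return lhs == rhs

print(all(congruence_holds(p) for p in (3, 7, 11, 13, 17, 19, 23)))  # expected: True
```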
This paper presents a rewriting logic specification of the Illinois Browser Operating System (IBOS) and defines several security properties, including the same-origin policy (SOP) in reachability logic. It shows how these properties can be deductively verified using our constructor-based reachability logic theorem prover. This paper also highlights the reasoning techniques used in the proof and three modularity principles that have been crucial to scale up and complete the verification effort.
Quantum versions of random walks on the line and cycle show a quadratic improvement in their spreading rate and mixing times respectively. The addition of decoherence to the quantum walk produces a more uniform distribution on the line, and even faster mixing on the cycle by removing the need for time-averaging to obtain a uniform distribution. We calculate numerically the entanglement between the coin and the position of the quantum walker and show that the optimal decoherence rates are such that all the entanglement is just removed by the time the final measurement is made.
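To make the coin-position structure concrete, here is a minimal Python sketch (ours, not the paper's code) of a discrete-time Hadamard walk on the line with a simple per-step coin-measurement decoherence and the coin-position entanglement entropy; the decoherence model and parameter names are illustrative assumptions, and the decohered distribution would require averaging over many trajectories.

```python
import numpy as np

def hadamard_walk(steps=100, p_dec=0.0, seed=0):
    """One trajectory of a Hadamard walk on the line; the coin is projectively
    measured with probability p_dec per step (a crude decoherence model)."""
    rng = np.random.default_rng(seed)
    n = 2 * steps + 1
    psi = np.zeros((n, 2), dtype=complex)    # amplitudes[position, coin]
    psi[steps, 0] = 1.0                      # start at the origin, coin |0>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                      # coin toss
        if p_dec > 0 and rng.random() < p_dec:
            pk = (np.abs(psi) ** 2).sum(axis=0)   # coin outcome probabilities
            k = rng.choice(2, p=pk / pk.sum())
            psi[:, 1 - k] = 0
            psi /= np.linalg.norm(psi)
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]             # coin |0> steps right
        new[:-1, 1] = psi[1:, 1]             # coin |1> steps left
        psi = new
    return psi

def coin_entropy(psi):
    """Von Neumann entropy (base 2) of the reduced coin state."""
    rho_c = psi.conj().T @ psi               # 2x2 reduced coin density matrix
    w = np.clip(np.linalg.eigvalsh(rho_c), 1e-15, 1)
    return float(-(w * np.log2(w)).sum())

psi = hadamard_walk(steps=100)
prob = (np.abs(psi) ** 2).sum(axis=1)        # position distribution
x = np.arange(-100, 101)
print("position std dev:", np.sqrt((prob * x**2).sum() - (prob * x).sum()**2))
print("coin-position entanglement entropy:", coin_entropy(psi))
```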
Making use of the T-duality symmetry of superstring theory, and of the double geometry from Double Field Theory, we argue that cosmological singularities of a homogeneous and isotropic universe disappear. In fact, an apparent big bang singularity in Einstein gravity corresponds to a universe expanding to infinite size in the dual dimensions.
The electric-field response of a one-dimensional ring of interacting fermions, where the interactions are described by the extended Hubbard model, is investigated. By using an accurate real-time propagation scheme based on the Chebyshev expansion of the evolution operator, we uncover various non-linear regimes for a range of interaction parameters that allows modeling of metallic and insulating (either charge density wave or spin density wave insulators) rings. The metallic regime appears at the phase boundary between the two insulating phases and provides the opportunity to describe either weakly or strongly correlated metals. We find that the {\it fidelity susceptibility} of the ground state as a function of magnetic flux piercing the ring provides a very good measure of the short-time response. Even completely different interacting regimes behave in a similar manner at short time-scales as long as the fidelity susceptibility is the same. Depending on the strength of the electric field we find various types of responses: persistent currents in the insulating regime, dissipative regime or damped Bloch-like oscillations with varying frequencies or even irregular in nature. Furthermore, we also consider the dimerization of the ring and describe the response of a correlated band insulator. In this case the distribution of the energy levels is more clustered and the Bloch-like oscillations become even more irregular.
This article offers a simplified approach to the distribution theory of randomly weighted averages or $P$-means $M_P(X):= \sum_{j} X_j P_j$, for a sequence of i.i.d. random variables $X, X_1, X_2, \ldots$, and independent random weights $P:= (P_j)$ with $P_j \ge 0$ and $\sum_{j} P_j = 1$. The collection of distributions of $M_P(X)$, indexed by distributions of $X$, is shown to encode Kingman's partition structure derived from $P$. For instance, if $X_p$ has Bernoulli$(p)$ distribution on $\{0,1\}$, the $n$th moment of $M_P(X_p)$ is a polynomial function of $p$ which equals the probability generating function of the number $K_n$ of distinct values in a sample of size $n$ from $P$: $E (M_P(X_p))^n = E p^{K_n}$. This elementary identity illustrates a general moment formula for $P$-means in terms of the partition structure associated with random samples from $P$, first developed by Diaconis and Kemperman (1996) and Kerov (1998) in terms of random permutations. As shown by Tsilevich (1997), if the partition probabilities factorize in a way characteristic of the generalized Ewens sampling formula with two parameters $(\alpha,\theta)$, found by Pitman (1992), then the moment formula yields the Cauchy-Stieltjes transform of an $(\alpha,\theta)$ mean. The analysis of these random means includes the characterization of $(0,\theta)$-means, known as Dirichlet means, due to Von Neumann (1941), Watson (1956) and Cifarelli and Regazzini (1990), and generalizations of L\'evy's arcsine law for the time spent positive by a Brownian motion, due to Darling (1949), Lamperti (1958), and Barlow, Pitman and Yor (1989).
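To illustrate the elementary identity $E (M_P(X_p))^n = E p^{K_n}$, here is a small Python sketch (ours, not from the article) that checks it exactly for a fixed finite weight vector $P$, for which the identity holds conditionally on $P$, by enumerating all Bernoulli configurations and all samples of size $n$.

```python
from itertools import product
from fractions import Fraction

def lhs_moment(P, p, n):
    # E[(sum_j X_j P_j)^n] with X_j i.i.d. Bernoulli(p), enumerated exactly
    total = Fraction(0)
    for xs in product((0, 1), repeat=len(P)):
        prob = Fraction(1)
        for x in xs:
            prob *= p if x else 1 - p
        mean = sum(x * w for x, w in zip(xs, P))
        total += prob * mean ** n
    return total

def rhs_moment(P, p, n):
    # E[p^{K_n}], K_n = number of distinct indices in a size-n sample from P
    total = Fraction(0)
    for sample in product(range(len(P)), repeat=n):
        prob = Fraction(1)
        for j in sample:
            prob *= P[j]
        total += prob * p ** len(set(sample))
    return total

P = [Fraction(1, 2), Fraction(3, 10), Fraction(1, 5)]   # fixed weights summing to 1
p = Fraction(1, 3)
print(all(lhs_moment(P, p, n) == rhs_moment(P, p, n) for n in range(1, 6)))  # True
```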
The breakup of an interface into a cascade of droplets and their subsequent coalescence is a generic problem of central importance to a large number of industrial settings such as mixing, separations, and combustion. We study the breakup of a liquid jet introduced through a cylindrical nozzle into a stagnant viscous phase via a hybrid interface-tracking/level-set method to account for the surface tension forces in a three-dimensional Cartesian domain. Numerical solutions are obtained for a range of Reynolds (Re) and Weber (We) numbers. We find that the interplay between the azimuthal and streamwise vorticity components leads to different interfacial features and flow regimes in Re-We space. We show that the streamwise vorticity plays a critical role in the development of the three-dimensional instabilities on the jet surface. In the inertia-controlled regime at high Re and We, we expose the details of the spatio-temporal development of the vortical structures affecting the interfacial dynamics. A mushroom-like structure is formed at the leading edge of the jet inducing the generation of a liquid sheet in its interior that undergoes rupture to form droplets. These droplets rotate inside the mushroom structure due to their interaction with the prevailing vortical structures. Additionally, Kelvin-Helmholtz vortices that form near the injection point deform in the streamwise direction to form hairpin vortices, which, in turn, trigger the formation of interfacial lobes in the jet core. The thinning of the lobes induces the creation of holes which expand to form liquid threads that undergo capillary breakup to form droplets.
The formalism that describes the non-linear growth of the angular momentum L of protostructures from tidal torques in a Friedmann Universe, as developed in a previous paper, is extended to include non-Gaussian initial conditions. We restrict our analysis here to a particular class of non-Gaussian primordial distributions, namely multiplicative models. In such models, strongly correlated phases are produced by obtaining the gravitational potential via a nonlinear local transformation of an underlying Gaussian random field. The dynamical evolution of the system is followed by describing the trajectories of fluid particles using second-order Lagrangian perturbation theory. In the Einstein-de Sitter universe, the lowest-order perturbative correction to the variance of the linear angular momentum of collapsing structures grows as t^8/3 for generic non-Gaussian statistics, which contrasts with the t^10/3 growth rate characteristic of Gaussian statistics. This is a consequence of the fact that the lowest-order perturbative spin contribution in the non-Gaussian case arises from the third moment of the gravitational potential, which is identically zero for a Gaussian field. Evaluating these corrections at the maximum expansion time of the collapsing structure, we find that these non-Gaussian and non-linear terms can be as high as the linear estimate, without the degree of non-Gaussianity as quantified by skewness and kurtosis of the density field being unacceptably large. The results suggest that higher-order terms in the perturbative expansion may contribute significantly to galactic spin which contrasts with the straightforward Gaussian case.
AA Tau, a classical T Tauri star in the Taurus cloud, has been the subject of intensive photometric monitoring for more than two decades due to its quasi-cyclic variation in optical brightness. Beginning in 2011, AA Tau showed another peculiar variation -- its median optical through near-IR flux dimmed significantly, a drop consistent with a 4-mag increase in visual extinction. It has stayed in the faint state since. Here we present 4.7um CO rovibrational spectra of AA Tau over eight epochs, covering an eleven-year time span, that reveal enhanced 12CO and 13CO absorption features in the $J_{\rm low}\leqslant$13 transitions after the dimming. These newly appeared absorptions require molecular gas along the line of sight with T~500 K and a column density of log (N12CO)~18.5 cm^{-2}, with line centers that show a constant 6 km s$^{-1}$ redshift. The properties of the molecular gas confirm an origin in the circumstellar material. We suggest that the dimming and absorption are caused by gas and dust lifted to large heights by a magnetic buoyancy instability. This material is now propagating inward, and on reaching the star within a few years will be observed as an accretion outburst.
The gyrokinetic theory of the residual flow, in the electrostatic limit, is revisited, with optimized stellarators in mind. We consider general initial conditions for the problem, and identify cases that lead to a non-zonal residual electrostatic potential, i.e. one having a significant component that varies within a flux surface. We investigate the behavior of the ``intermediate residual'' in stellarators, a measure of the flow that remains after geodesic acoustic modes have damped away, but before the action of the slower damping that is caused by unconfined particle orbits. The case of a quasi-isodynamic stellarator is identified as having a particularly large such residual, owing to the small orbit width achieved by optimization.
We present an efficient method for training slack-rescaled structural SVM. Although finding the most violating label in a margin-rescaled formulation is often easy since the target function decomposes with respect to the structure, this is not the case for a slack-rescaled formulation, and finding the most violated label might be very difficult. Our core contribution is an efficient method for finding the most-violating-label in a slack-rescaled formulation, given an oracle that returns the most-violating-label in a (slightly modified) margin-rescaled formulation. We show that our method enables accurate and scalable training for slack-rescaled SVMs, reducing runtime by an order of magnitude compared to previous approaches to slack-rescaled SVMs.
The defect in diamond formed by a vacancy surrounded by three nearest-neighbor nitrogen atoms and one carbon atom, $\mathrm{N}_{3}\mathrm{V}$, is found in $\approx98\%$ of natural diamonds. Despite $\mathrm{N}_{3}\mathrm{V}^{0}$ being the earliest electron paramagnetic resonance spectrum observed in diamond, to date no satisfactory simulation of the spectrum for an arbitrary magnetic field direction has been produced due to its complexity. In this work, $\mathrm{N}_{3}\mathrm{V}^{0}$ is identified in $^{15}\mathrm{N}$-doped synthetic diamond following irradiation and annealing. The $\mathrm{^{15}N}_{3}\mathrm{V}^{0}$ spin Hamiltonian parameters are revised and used to refine the parameters for $\mathrm{^{14}N}_{3}\mathrm{V}^{0}$, enabling the latter to be accurately simulated and fitted for an arbitrary magnetic field direction. Study of $\mathrm{^{15}N}_{3}\mathrm{V}^{0}$ under excitation with green light indicates charge transfer between $\mathrm{N}_{3}\mathrm{V}$ and $\mathrm{N_s}$. It is argued that this charge transfer is facilitated by direct ionization of $\mathrm{N}_{3}\mathrm{V}^{-}$, an as-yet unobserved charge state of $\mathrm{N}_{3}\mathrm{V}$.
We studied the magnetic excitations in the quasi-one-dimensional (q-1D) ladder subsystem of Sr_(14-x) Ca_x Cu_24 O_41(SCCO) using Cu L_3-edge resonant inelastic X-ray scattering (RIXS). By comparing momentum-resolved RIXS spectra with (x=12.2) and without (x=0) high Ca content, we track the evolution of the magnetic excitations from collective two-triplon (2T) excitations (x=0) to weakly-dispersive gapped modes at an energy of 280 meV (x=12.2). Density matrix renormalization group (DMRG) calculations of the RIXS response in the doped ladders suggest that the flat magnetic dispersion and damped excitation profile observed at x=12.2 originates from enhanced hole localization. This interpretation is supported by polarization-dependent RIXS measurements, where we disentangle the spin-conserving {\Delta}S=0 scattering from the predominant {\Delta}S=1 spin-flip signal in the RIXS spectra. The results show that the low-energy weight in the {\Delta}S=0 channel is depleted when Sr is replaced by Ca, consistent with a reduced carrier mobility. Our results demonstrate that off-ladder impurities can affect both the low-energy magnetic excitations and superconducting correlations in the CuO_4 plaquettes. Finally, our study characterizes the magnetic and charge fluctuations in the phase from which superconductivity emerges in SCCO at elevated pressures.
Nonadiabatic effects in the electron-phonon coupling are important whenever the ratio between the phononic and the electronic energy scales, the adiabatic ratio, is non-negligible. For superconducting systems, this gives rise to additional diagrams in the superconducting self-energy, the vertex and cross corrections. In this work we explore these corrections in a two-dimensional single-band system through the crossover between the weak-coupling BCS and strong-coupling Bose-Einstein regimes. By focusing on the pseudogap phase, we identify the parameter range in which the pairing amplitude is amplified by nonadiabatic effects and map them throughout the BCS-BEC crossover. These effects become stronger as the system is driven deep into the crossover regime, for phonon frequencies of the order of the hopping energy and for large enough electron-phonon coupling. Finally, we provide the phase space regions in which the effects of nonadiabaticity are more relevant for unconventional superconductors.
In singing voice synthesis (SVS), generating singing voices from musical scores faces challenges due to limited data availability. This study proposes a unique strategy to address the data scarcity in SVS. We employ an existing singing voice synthesizer for data augmentation, complemented by detailed manual tuning, an approach not previously explored in data curation, to reduce instances of unnatural voice synthesis. This innovative method has led to the creation of two expansive singing voice datasets, ACE-Opencpop and ACE-KiSing, which are instrumental for large-scale, multi-singer voice synthesis. Through thorough experimentation, we establish that these datasets not only serve as new benchmarks for SVS but also enhance SVS performance on other singing voice datasets when used as supplementary resources. The corpora, pre-trained models, and their related training recipes are publicly available at ESPnet-Muskits (\url{https://github.com/espnet/espnet})
We derive the evolution of the infrared (IR) luminosity function (LF) over the last 4/5ths of cosmic time, using deep 24um and 70um imaging of the GOODS North and South fields. We use an extraction technique based on prior source positions at shorter wavelengths to build the 24um and 70um source catalogs. The majority (93%) of the sources have a spectroscopic (39%) or a photometric redshift (54%) and, in our redshift range of interest (i.e., 1.3<z<2.3), ~20% of the sources have spectroscopic redshifts. To extend our study to lower 70um luminosities we perform a stacking analysis and we characterize the observed L_24/(1+z) vs L_70/(1+z) correlation. Using spectral energy distribution templates which best fit this correlation, we derive the IR luminosity of sources from their 24um and 70um fluxes. We then compute the IR LF at z=1.55+/-0.25 and z=2.05+/-0.25. The redshift evolution of the IR LF from z=1.3 to z=2.3 is consistent with a luminosity evolution proportional to (1+z)^1.0+/-0.9 combined with a density evolution proportional to (1+z)^-1.1+/-1.5. At z~2, luminous IR galaxies (LIRGs: 10^11 Lsun < LIR < 10^12 Lsun) are still the main contributors to the total comoving IR luminosity density (IR LD) of the Universe. At z~2, LIRGs and ultra-luminous IR galaxies (ULIRGs: LIR > 10^12 Lsun) account for ~49% and ~17%, respectively, of the total IR LD of the Universe. Combined with previous results for galaxies at z<1.3 and assuming a constant conversion between the IR luminosity and star-formation rate (SFR) of a galaxy, we study the evolution of the SFR density of the Universe from z=0 to z=2.3. We find that the SFR density of the Universe strongly increased with redshift from z=0 to z=1.3, but is nearly constant at higher redshift out to z=2.3. As part of the online material accompanying this article, we present source catalogs at 24um and 70um for both the GOODS-North and -South fields.
We introduce a gauge invariant and string independent two-point fermion correlator which is analyzed in the context of the Schwinger model (QED_2). We also derive an effective infrared worldline action for this correlator, thus enabling the computation of its infrared behavior. Finally, we briefly discuss possible perspectives for the string independent correlator in the QED_3 effective models for the normal state of HTc superconductors.
We consider inverse curvature flows in hyperbolic space $\mathbb{H}^{n+1}$ with star-shaped initial hypersurfaces and prove that the flows exist for all time, and that the leaves converge to infinity, become strongly convex exponentially fast, and become more and more totally umbilic. After an appropriate rescaling the leaves converge in $C^\infty$ to a sphere.
In recent years, deep learning has made brilliant achievements in Environmental Microorganism (EM) image classification. However, image classification on small EM datasets still does not achieve good results. Therefore, researchers need to spend a lot of time searching for models with good classification performance that are suitable for the current equipment and working environment. To provide reliable references for researchers, we conduct a series of comparison experiments on 21 deep learning models. The experiments include direct classification, imbalanced training, and hyperparameter tuning experiments. During the experiments, we find complementarities among the 21 models, which form the basis for the feature-fusion-related experiments. We also find that the data augmentation method of geometric deformation struggles to improve the performance of the VT series models (ViT, DeiT, BotNet and T2T-ViT). In terms of model performance, Xception has the best classification performance, the ViT model consumes the least time for training, and the ShuffleNet-V2 model has the fewest parameters.
Within the interstellar medium, supernovae are thought to be the prevailing agents in driving turbulence. Until recently, their effects on magnetic field amplification in disk galaxies remained uncertain. Analytical models based on the uncorrelated-ensemble approach predicted that any created field would be expelled from the disk before it could be amplified significantly. By means of direct simulations of supernova-driven turbulence, we demonstrate that this is not the case. Accounting for galactic differential rotation and vertical stratification, we find an exponential amplification of the mean field on timescales of several hundred million years. We especially highlight the importance of rotation in the generation of helicity by showing that a similar mechanism based on Cartesian shear does not lead to a sustained amplification of the mean magnetic field.
The Alexander dual of an arbitrary meet-semilattice is described explicitly. Meet-distributive meet-semilattices whose Alexander dual is level are characterized.
Let $(\xi(s))_{s\geq 0}$ be a standard Brownian motion in $d\geq 1$ dimensions and let $(D_s)_{s \geq 0}$ be a collection of open sets in $\mathbb{R}^d$. For each $s$, let $B_s$ be a ball centered at $0$ with $\mathrm{vol}(B_s) = \mathrm{vol}(D_s)$. We show that $\mathbb{E}[\mathrm{vol}(\cup_{s \leq t}(\xi(s) + D_s))] \geq \mathbb{E}[\mathrm{vol}(\cup_{s \leq t}(\xi(s) + B_s))]$ for all $t$. In particular, this implies that the expected volume of the Wiener sausage increases when a drift is added to the Brownian motion.
The deposition and intercalation of metal atoms can induce superconductivity in monolayer and bilayer graphene. For example, it has been experimentally demonstrated that Li-deposited graphene is a superconductor with a critical temperature $T_{c}$ of 5.9 K, and that Ca-intercalated bilayer graphene C$_{6}$CaC$_{6}$ and K-intercalated epitaxial bilayer graphene C$_{8}$KC$_{8}$ are superconductors with $T_{c}$ of 2-4 K and 3.6 K, respectively. However, these $T_{c}$ values are relatively low. To obtain higher $T_{c}$ in graphene-based superconductors, here we predict a new Ca-intercalated bilayer graphene, C$_{2}$CaC$_{2}$, which has a higher Ca concentration than C$_{6}$CaC$_{6}$. It is shown to be thermodynamically and dynamically stable. The electronic structure, electron-phonon coupling (EPC) and superconductivity of C$_{2}$CaC$_{2}$ are investigated based on first-principles calculations. The EPC of C$_{2}$CaC$_{2}$ mainly comes from the coupling between the electrons of the C-$p_{z}$ orbital and the high- and low-frequency vibration modes of the C atoms. The calculated EPC constant $\lambda$ of C$_{2}$CaC$_{2}$ is 0.75, and the superconducting $T_{c}$ is 18.9 K, which is much higher than that of other metal-intercalated bilayer graphenes. By further applying a 4\% biaxial compressive strain to C$_{2}$CaC$_{2}$, the $T_{c}$ can be boosted to 26.6 K. Thus, the predicted C$_{2}$CaC$_{2}$ provides a new platform for realizing superconductivity with the highest $T_{c}$ among bilayer graphenes.
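For context, in first-principles EPC studies of this kind $T_c$ is typically estimated from $\lambda$ with the Allen-Dynes modified McMillan formula; the abstract does not state which expression or Coulomb pseudopotential $\mu^*$ was used, so the following is only the conventional choice:
$$ T_c = \frac{\omega_{\log}}{1.2}\,\exp\!\left[\frac{-1.04\,(1+\lambda)}{\lambda-\mu^*(1+0.62\,\lambda)}\right], $$
where $\omega_{\log}$ is the logarithmically averaged phonon frequency and $\mu^*$ is usually taken in the range 0.1-0.15.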
Website fingerprinting (WF) is a well-known threat to users' web privacy. New internet standards, such as QUIC, include padding to support defenses against WF. Previous work only analyzes the effectiveness of defenses when users are behind a VPN. Yet, this is not how most users browse the Internet. In this paper, we provide a comprehensive evaluation of QUIC-padding-based defenses against WF when users directly browse the web. We confirm previous claims that network-layer padding cannot provide good protection against powerful adversaries capable of observing all traffic traces. We further demonstrate that such padding is ineffective even against adversaries with constraints on traffic visibility and processing power. At the application layer, we show that defenses need to be deployed by both first and third parties, and that they can only thwart traffic analysis in limited situations. We identify challenges to deploying effective WF defenses and provide recommendations to address them.
We study renormalization group flows among N=1 SCFTs realized on the worldvolume of D3-branes probing toric Calabi-Yau singularities, thus admitting a brane tiling description. The flows are triggered by masses for adjoint or vector-like pairs of bifundamentals and are generalizations of the Klebanov-Witten construction of the N=1 theory for the conifold starting from the N=2 theory for the C^2/Z_2 orbifold. In order to preserve the toric condition, pairs of masses with opposite signs have to be switched on. We offer a geometric interpretation of the flows as complex deformations of the Calabi-Yau singularity preserving the toric condition. For orbifolds, we support this interpretation by an explicit string amplitude computation of the gauge-invariant mass terms generated by imaginary self-dual 3-form fluxes in the twisted sector. In agreement with the holographic a-theorem, the volume of the five-dimensional Sasaki-Einstein base of the Calabi-Yau cone always increases along the flow.
Merging other branches into the current working branch is common in collaborative software development. However, developers still heavily rely on textual merge tools to handle complicated merge tasks. Latent semantic merge conflicts may therefore go undetected and degrade software quality. Regression testing is able to prevent regression faults and has been widely used in real-world software development. However, the merged software may not be well examined by rerunning the existing test suite: intuitively, if the test suite fails to cover the changes of different branches at the same time, the merge conflicts will not be detected. Recently, it has been proposed to conduct verification on 3-way merges, but this approach does not support even some common cases, such as different changes made to different parts of the program. In this paper, we propose an approach for regression unit test generation specifically for checking program merges according to our proposed test oracles. Our general test oracles allow us to examine not only 3-way merges, but also 2-way and octopus merges. Considering that conflicts may arise in locations other than the changed methods of the project, we design an algorithm to select units under test (UUTs) based on a dependency analysis of the whole project. On this basis, we implement a tool called TOM to generate unit tests for Java program merges. We also design the benchmark MCon4J, consisting of 389 conflicting 3-way merges and 389 conflicting octopus merges, to facilitate further studies on this topic. The experimental results show that TOM finds 45 conflicting 3-way merges and 87 conflicting octopus merges, while the verification-based tool fails to work on MCon4J.
A longstanding challenge for the Machine Learning community is that of developing models capable of processing and learning from very long sequences of data. The outstanding results of Transformer-based networks (e.g., Large Language Models) promote the idea of parallel attention as the key to succeeding in such a challenge, obfuscating the role of classic sequential processing of Recurrent Models. However, in the last few years, researchers concerned by the quadratic complexity of self-attention have proposed a novel wave of neural models that get the best of both worlds, i.e., Transformers and Recurrent Nets. Meanwhile, Deep State-Space Models emerged as robust approaches to function approximation over time, thus opening a new perspective in learning from sequential data, followed by many people in the field and exploited to implement a special class of (linear) Recurrent Neural Networks. This survey is aimed at providing an overview of these trends framed under the unifying umbrella of Recurrence. Moreover, it emphasizes novel research opportunities that become prominent when abandoning the idea of processing long sequences whose length is known in advance in favor of the more realistic setting of potentially infinite-length sequences, thus intersecting the field of lifelong online learning from streamed data.
The anomalous Hall effect, a hallmark of broken time-reversal symmetry and spin-orbit coupling, is frequently observed in magnetically polarized systems. Its realization in non-magnetic systems, however, remains elusive. Here, we report on the observation of the anomalous Hall effect in nominally non-magnetic KTaO3. The anomalous Hall effect emerges in reduced KTaO3 and shows an extrinsic-to-intrinsic crossover. Using first-principles calculations and quantitative magnetometry, we find paramagnetic behavior in the reduced samples. The observed anomalous Hall effect follows the oxygen-vacancy-induced magnetization response, suggesting that the localized magnetic moments of the oxygen vacancies scatter conduction electrons asymmetrically and give rise to the anomalous Hall effect. The anomalous Hall conductivity becomes insensitive to the scattering rate in the low-temperature limit (T<5 K), implying that the Berry curvature of the electrons on the Fermi surface controls the anomalous Hall effect. Our observations provide a detailed picture of the many-body interactions triggering the anomalous Hall effect in a non-magnetic system.
We have implemented a universal quantum logic gate between qubits stored in the spin state of a pair of trapped calcium-40 ions. An initial product state was driven to a maximally entangled state deterministically, with 83% fidelity. We present a general approach to quantum state tomography that achieves good robustness to experimental noise and drift, and use it to measure the spin state of the ions. We find that the entanglement of formation is 0.54.
Microcavity lasers based on erbium-doped lithium niobate on insulator (LNOI), which are key devices for LNOI integrated photonics, have attracted much attention recently. In this Letter, we report the realization of a C-band single-mode laser exploiting the Vernier effect in two coupled erbium-doped LNOI microrings with different radii, pumped by a 980-nm continuous-wave laser. The laser, operating stably over a large range of pump powers, has a pump threshold of ~200 $\mu$W and a side-mode suppression ratio exceeding 26 dB. This high-performance LNOI single-mode laser will promote the development of lithium niobate integrated photonics.
In this work we present a numerical method for the Optimal Mass Transportation problem. Optimal Mass Transportation (OT) is an active research field in mathematics. It has recently led to significant theoretical results as well as applications in diverse areas, yet numerical solution techniques for the OT problem remain underdeveloped. In our approach, the solution is obtained by solving the second boundary value problem for the Monge-Ampère (MA) equation, a fully nonlinear elliptic partial differential equation (PDE). Instead of standard boundary conditions, the problem has global state constraints, which are reformulated as a tractable local PDE. We give a proof of convergence of the numerical method using the theory of viscosity solutions. Details of the implementation and a fast solution method are provided in the companion paper arXiv:1208.4870.
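For reference, the standard formulation behind such methods (for the quadratic cost; this is the textbook statement, not necessarily the exact notation of the paper) seeks a convex potential $u$ whose gradient maps a source density $f$ on $X$ to a target density $g$ on $Y$, solving the Monge-Ampère equation with the second (transport) boundary condition:
$$ \det D^2 u(x) = \frac{f(x)}{g(\nabla u(x))}, \qquad \nabla u(X) = Y. $$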
The CCKS2019 shared task was devoted to inter-personal relationship extraction. Given two person entities and at least one sentence containing these two entities, participating teams are asked to predict the relationship between the entities according to a given relation list. This year, 358 teams from various universities and organizations participated in this task. In this paper, we present the task definition, the description of data and the evaluation methodology used during this shared task. We also present a brief overview of the various methods adopted by the participating teams. Finally, we present the evaluation results.
We consider competing pair-density-wave (PDW) and $d$-wave superconducting states in a magnetic field. We show that PDW order appears in the cores of $d$-wave vortices, driving checkerboard charge-density-wave (CDW) order in the vortex cores, which is consistent with experimental observations. Furthermore, we find an additional CDW order that appears on a ring outside the vortex cores. This CDW order varies with a period that is twice that of the checkerboard CDW and it only appears where both PDW and $d$-wave order co-exist. The observation of this additional CDW order would provide strong evidence for PDW order in the pseudogap phase of the cuprates. We further argue that the CDW seen by nuclear magnetic resonance at high fields is due to a PDW state that emerges when a magnetic field is applied.
We constrain the mass-to-light ratios, gas mass fractions, baryon mass fractions and the ratios of total to luminous mass for a sample of eight nearby relaxed galaxy groups and clusters: A262, A426, A478, A1795, A2052, A2063, A2199 and MKW4s. We use ASCA spatially resolved spectroscopic X-ray observations and ROSAT PSPC images to constrain the total and gas masses of these clusters. To measure cluster luminosities we use galaxy catalogs resulting from the digitization and automated processing of the second generation Palomar Sky Survey plates calibrated with CCD images in the Gunn-Thuan g, r, and i bands. Under the assumption of hydrostatic equilibrium and spherical symmetry, we can measure the total masses of clusters from their intra-cluster gas temperature and density profiles. Spatially resolved ASCA spectra show that the gas temperature decreases with increasing distance from the center. By comparison, the assumption that the gas is isothermal results in an underestimate of the total mass at small radii, and an overestimate at large cluster radii. We have obtained luminosity functions for all clusters in our sample. After correcting for background and foreground galaxies, we estimate the total cluster luminosity using Schechter function fits to the galaxy catalogs. In the three lowest redshift clusters where we can sample to fainter absolute magnitudes, we have detected a flattening of the luminosity function at intermediate magnitudes and a rise at the faint end. These clusters were fit with a sum of two Schechter functions. The remaining clusters were well fit with a single Schechter function.
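For reference, the Schechter form used in such luminosity-function fits (the exact parameterization adopted in the paper is not reproduced here) is
$$ \Phi(L)\,dL = \Phi^* \left(\frac{L}{L^*}\right)^{\alpha} e^{-L/L^*}\,\frac{dL}{L^*}, $$
with normalization $\Phi^*$, characteristic luminosity $L^*$ and faint-end slope $\alpha$; the double-Schechter fits mentioned above are sums of two such terms.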
The goal of this paper is to develop a continuous-optimization-based refinement of the reference trajectory to 'push it out' of the obstacle-occupied space in the global phase for Multi-rotor Aerial Vehicles in unknown environments. Our proposed approach comprises two planners: a global planner and a local planner. The global planner refines the initial reference trajectory when the trajectory goes either through an obstacle or near an obstacle and lets the local planner calculate a near-optimal control policy. The global planner comprises two convex programming approaches: the first helps to refine the reference trajectory, and the second helps to recover the reference trajectory if the first approach fails. The global planner mainly focuses on real-time performance and obstacle avoidance, whereas the proposed formulation of the constrained nonlinear model predictive control-based local planner ensures safety, dynamic feasibility, and reference trajectory tracking accuracy for low-speed maneuvers, provided that the local and global planners have mean computation times of 0.06 s (15 Hz) and 0.05 s (20 Hz), respectively, on an NVIDIA Jetson Xavier NX computer. The results of our experiments confirm that, in cluttered environments, the proposed approach outperforms three other approaches: sampling-based pathfinding followed by trajectory generation, a local planner, and graph-based pathfinding followed by trajectory generation.
We consider the phase ordering dynamics of an isolated quasi-two-dimensional spin-1 Bose gas quenched into an easy-plane ferromagnetic phase. Preparing the initial system in an unmagnetized antiferromagnetic state, we find that the subsequent ordering involves both polar-core and Mermin-Ho spin vortices, with the ratio between the different vortices controllable by the quench parameter. Ferromagnetic domain growth occurs as these vortices annihilate. The distinct dynamics of the two types of vortices means that the domain growth law is determined by two macroscopic length scales, violating the standard dynamic scaling hypothesis. Nevertheless, we find that the universality of the ordering process manifests in the decay laws for the spin vortices.
We introduce a spatially explicit model for the competition between type $a$ and type $b$ alleles. Each vertex of the $d$-dimensional integer lattice is occupied by a diploid individual, which is in one of three possible states or genotypes: $aa$, $ab$ or $bb$. We are interested in the long-term behavior of the gene frequencies when Mendel's law of segregation does not hold. This results in a voter type model depending on four parameters; each of these parameters measures the strength of competition between genes during meiosis. We prove that with or without a spatial structure, type $a$ and type $b$ alleles coexist at equilibrium when homozygotes are poor competitors. The inclusion of a spatial structure, however, reduces the parameter region where coexistence occurs.
We investigate effective interactions between a colloidal particle, immersed in a binary mixture of smaller spheres, and a semipermeable membrane. The colloid is modeled as a big hard sphere and the membrane is represented as an infinitely thin surface which is fully permeable to one of the smaller sphere species and impermeable to the other. Within the framework of density functional theory we evaluate the depletion potentials, considering two different approximate theories: the simple Asakura-Oosawa approximation and the accurate White-Bear version of the fundamental measure theory. The effective potentials are compared with the corresponding potentials for a hard, nonpermeable wall. Using statistical-mechanical sum rules we argue that the contact value of the depletion potential between a colloid and a semipermeable membrane is smaller in magnitude than that between a colloid and a hard wall. Explicit calculations confirm that the colloid-semipermeable membrane effective interactions are generally weaker than those near a hard nonpermeable wall. This effect is more pronounced for smaller osmotic pressures. The depletion potential for a colloidal particle inside a semipermeable vesicle is stronger than that for a colloidal particle located outside the vesicle. We find that the asymptotic decay of the depletion potential for the semipermeable membrane is similar to that for the nonpermeable wall and reflects the asymptotics of the total correlation function of the corresponding binary mixture of smaller spheres. Our results demonstrate that the ability of the membrane to change its shape constitutes an important factor in determining the effective interactions between the semipermeable membrane and the colloidal macroparticle.
A gradient boosting decision tree (GBDT), which aggregates a collection of single weak learners (i.e., decision trees), is widely used for data mining tasks. Because GBDT inherits good performance from its ensemble essence, much attention has been drawn to the optimization of this model. With its popularization, an increasing need for model interpretation arises. Besides the commonly used feature importance as a global interpretation, feature contribution is a local measure that reveals the relationship between a specific instance and the related output. This work focuses on local interpretation and proposes a unified computation mechanism to obtain instance-level feature contributions for any version of GBDT. The practicality of this mechanism is validated by the reported experiments as well as by applications in real industry scenarios.
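As a rough illustration of instance-level feature contributions, the sketch below uses a Saabas-style path decomposition for a single regression tree: each feature is credited with the change in node value along the instance's decision path, so the prediction equals the root value plus the sum of contributions. This is only one possible local attribution scheme, not necessarily the unified mechanism proposed in the paper, and all names are illustrative.

```python
# Hypothetical sketch: per-instance feature contributions for a single
# regression tree, obtained by accumulating the change in node value along
# the instance's decision path (prediction = root value + sum of contributions).
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Node:
    value: float                       # mean training target at this node
    feature: Optional[int] = None      # split feature index (None for a leaf)
    threshold: Optional[float] = None
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def feature_contributions(root: Node, x: List[float]) -> Dict[int, float]:
    contrib: Dict[int, float] = {}
    node = root
    while node.feature is not None:
        child = node.left if x[node.feature] <= node.threshold else node.right
        contrib[node.feature] = contrib.get(node.feature, 0.0) + (child.value - node.value)
        node = child
    return contrib

# Toy tree: split on feature 0, then on feature 1 in the right branch.
tree = Node(value=10.0, feature=0, threshold=0.5,
            left=Node(value=8.0),
            right=Node(value=13.0, feature=1, threshold=2.0,
                       left=Node(value=12.0), right=Node(value=15.0)))
print(feature_contributions(tree, [0.9, 3.0]))   # {0: 3.0, 1: 2.0}
```

For a boosted ensemble, the per-tree contributions would simply be summed over all trees (scaled by the learning rate).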
In this paper, we present a statistical-mechanical analysis of deep learning. We elucidate some of the essential components of deep learning: pre-training by unsupervised learning and fine-tuning by supervised learning. We formulate the extraction of features from the training data as a margin criterion in a high-dimensional feature-vector space. The self-organized classifier is then supplied with small amounts of labelled data, as in deep learning. Although we employ a simple single-layer perceptron model, rather than directly analyzing a multi-layer neural network, we find a nontrivial phase transition that is dependent on the number of unlabelled data in the generalization error of the resultant classifier. In this sense, we evaluate the efficacy of the unsupervised learning component of deep learning. The analysis is performed by the replica method, which is a sophisticated tool in statistical mechanics. We validate our result in the manner of deep learning, using a simple iterative algorithm to learn the weight vector on the basis of belief propagation.
The vanishing (and exploding) gradient effect is a common problem for recurrent neural networks with nonlinear activation functions that use the backpropagation method for the calculation of derivatives. Deep feedforward neural networks with many hidden layers also suffer from this effect. In this paper we propose a novel universal technique that makes the norm of the gradient stay in a suitable range. We construct a way to estimate the contribution of each training example to the norm of the long-term components of the target function's gradient. Using this subroutine we can construct mini-batches for stochastic gradient descent (SGD) training that lead to high performance and accuracy of the trained network even for very complex tasks. We provide a straightforward mathematical estimation of a mini-batch's impact on the gradient norm and prove its correctness theoretically. To check our framework experimentally we use special synthetic benchmarks for testing RNNs on their ability to capture long-term dependencies. Our network can detect links between events in the (temporal) sequence at ranges of approximately 100 steps and longer.
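The following sketch conveys the general flavor of scoring training examples by their per-example gradient norms and biasing minibatch selection toward high-contribution examples. It uses a toy logistic-regression model rather than an RNN, and it is not the authors' estimation of long-term gradient components; all parameter choices are illustrative.

```python
# Illustrative only: score examples by per-example gradient norm for a toy
# logistic-regression model, then sample SGD minibatches with probability
# proportional to that score (a crude stand-in for gradient-norm-aware batching).
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) + 0.1 * rng.normal(size=n) > 0).astype(float)
w = np.zeros(d)

def per_example_grad_norms(w: np.ndarray) -> np.ndarray:
    p = 1.0 / (1.0 + np.exp(-X @ w))       # predicted probabilities
    grads = (p - y)[:, None] * X           # per-example gradient of the log-loss
    return np.linalg.norm(grads, axis=1)

for step in range(100):
    scores = per_example_grad_norms(w) + 1e-8
    probs = scores / scores.sum()
    batch = rng.choice(n, size=16, replace=False, p=probs)
    p = 1.0 / (1.0 + np.exp(-X[batch] @ w))
    w -= 0.1 * X[batch].T @ (p - y[batch]) / len(batch)

final_loss = np.mean(np.log1p(np.exp(-(2 * y - 1) * (X @ w))))
print("final training log-loss:", round(float(final_loss), 4))
```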
Three steps in the development of the maximum likelihood (ML) method are presented. First, the application of the ML method and the notion of Fisher information in model selection analysis is described (Chapter 1). The fundamentals of differential geometry in the construction of the statistical space are introduced, illustrated also by examples of the estimation of exponential models. Second, the notions of relative entropy and information channel capacity are introduced (Chapter 2). The observed and expected structural information principle (IP) and the variational IP of the modified extremal physical information (EPI) method of Frieden and Soffer are presented and discussed (Chapter 3). The derivation of the structural IP, based on the analyticity of the logarithm of the likelihood function and on the metricity of the statistical space of the system, is given. Third, the use of the EPI method is developed (Chapters 4-5). The information channel capacity is used for the classification of field theory models. Next, the modified Frieden and Soffer EPI method, a nonparametric estimation that enables the statistical selection of the equations of motion of various field theory models (Chapter 4) or the distribution-generating equations of statistical physics models (Chapter 5), is discussed. The connection between entanglement of the momentum degrees of freedom and the mass of a particle is analyzed. The connection between the Rao-Cramer inequality, the causality property of processes in Minkowski space-time, and the nonexistence of tachyons is shown. The generalization of the Aoki-Yoshikawa sectoral productivity econophysical model is also presented (Chapter 5). Finally, the Frieden EPI method of analysis of the EPR-Bohm experiment is presented; it differs from the Frieden approach by the use of information geometry methods.
Recent experiments revealed a remarkable possibility for the absence of a disparity between the phase diagrams of the electron- and hole-doped cuprate superconductors, and such an aspect should also be reflected in the dressing of the electrons. Here the phase diagram of the electron-doped cuprate superconductors and the related exotic features of the anisotropic dressing of the electrons are studied based on kinetic-energy-driven superconductivity. It is shown that although the optimized Tc on the electron-doped side is much smaller than that in the hole-doped case, the electron- and hole-doped cuprate superconductors rather resemble each other in the doping range of the superconducting dome, indicating an absence of a disparity between the phase diagrams of the electron- and hole-doped cuprate superconductors. In particular, the anisotropic dressing of the electrons, due to the strong coupling of the electrons to a strongly dispersive spin excitation, leads to the electron Fermi surface being truncated to form disconnected Fermi arcs centered around the nodal region. Concomitantly, the dip in the peak-dip-hump structure of the quasiparticle excitation spectrum is directly associated with the corresponding peak in the quasiparticle scattering rate, while the dispersion kink is always accompanied by the corresponding inflection point in the total self-energy, as for the dip in the peak-dip-hump structure and the dispersion kink in the hole-doped counterparts. The theory also predicts that both the normal and anomalous self-energies exhibit well-pronounced low-energy peak structures.
Let $X$ be a fine and saturated log scheme, and let $G$ be a commutative finite flat group scheme over the underlying scheme of $X$. If $G$-torsors for the fppf topology can be thought of as being unramified objects by nature, then $G$-torsors for the log flat topology allow us to consider tame ramification. Using the results of Kato, we define a concept of Galois structure for these torsors, then we generalize the author's previous constructions (class-invariant homomorphism for semi-stable abelian varieties) in this new setting, thus dropping some restrictions.
Let $A$ be any commutative unital ring and let $\operatorname{GL}_{2,A}$ be the general linear group scheme of rank $2$ over $A$. We study the representation theory of $\operatorname{GL}_{2,A}$ and the symmetric powers $\operatorname{Sym}^d(V)$, where $(V, \Delta)$ is the standard right comodule on $\operatorname{GL}_{2,A}$. We prove a refined Weyl character formula for $\operatorname{Sym}^d(V)$. For any integer $d \geq 1$ there is a (canonical) refined weight space decomposition $\operatorname{Sym}^d(V) \cong \oplus_i \operatorname{Sym}^d(V)^i$, where each direct summand $\operatorname{Sym}^d(V)^i$ is a comodule on $N \subseteq \operatorname{GL}_{2,A}$. Here $N$ is the schematic normalizer of the diagonal torus $T \subseteq \operatorname{GL}_{2,A}$. We prove a character formula for the direct summands of $\operatorname{Sym}^d(V)$ for any integer $d \geq 1$. This refined Weyl character formula implies the classical Weyl character formula. As a corollary we get a refined Weyl character formula for the pull-back $\operatorname{Sym}^d(V \otimes K)$ as a comodule on $\operatorname{GL}_{2,K}$, where $K$ is any field. We also calculate explicit examples involving the symmetric powers, symmetric tensors and their duals. The refined weight space decomposition exists in general for group schemes such as $\operatorname{GL}_{2,A}$ and $\operatorname{SL}_{2,A}$. This study may have applications to groups $G$ such as $\operatorname{SL}(n,k)$ and $\operatorname{GL}(n,k)$ and quotients $G/H$, where $k$ is an arbitrary field (or a Dedekind domain) and $H \subseteq G$ is a closed subgroup. The refined weight space decomposition of $S_{\lambda}(V)$ is related to irreducible modules over a field of positive characteristic. In an example I prove that it recovers the irreducible module $V(\lambda) \subsetneq S_{\lambda}(V)$.
Observations of young stars hosting transition disks show that several of them have high accretion rates, despite their disks presenting extended cavities in their dust component. This represents a challenge for theoretical models, which struggle to reproduce both features. We explore whether a disk evolution model, including a dead zone and disk dispersal by X-ray photoevaporation, can explain the high accretion rates and large gaps (or cavities) measured in transition disks. We implement a dead-zone turbulence profile and a photoevaporative mass-loss profile into numerical simulations of gas and dust. We perform a population synthesis study of the gas component, and obtain synthetic images and SEDs of the dust component through radiative transfer calculations. This model results in long-lived inner disks and fast-dispersing outer disks that can reproduce both the accretion rates and gap sizes observed in transition disks. For a dead zone of turbulence $\alpha_{dz} = 10^{-4}$ and extent $r_{dz}$ = 10 AU, our population synthesis study shows that $63\%$ of our transition disks are accreting with $\dot{M}_g > 10^{-11} M_\odot/yr$ after opening a gap. Among those accreting transition disks, half display accretion rates higher than $5\times10^{-10} M_\odot/yr$. The dust component in these disks is distributed in two regions: in a compact inner disk inside the dead zone, and in a ring at the outer edge of the photoevaporative gap, which can be located between 20 AU and 100 AU. Our radiative transfer calculations show that the disk displays an inner disk and an outer ring in the millimeter continuum, a feature observed in some transition disks. A disk model considering X-ray photoevaporative dispersal in combination with dead zones can explain several of the observed properties of transition disks, including the high accretion rates, the large gaps, and long-lived inner disks in mm emission.
Interleaved Reed-Solomon codes admit efficient decoding algorithms which correct burst errors far beyond half the minimum distance in the random errors regime, e.g., by computing a common solution to the Key Equation for each Reed-Solomon code, as described by Schmidt et al. If this decoder does not succeed, it may either fail to return a codeword or miscorrect to an incorrect codeword, and good upper bounds on the fraction of error matrices for which these events occur are known. The decoding algorithm immediately applies to interleaved alternant codes as well, i.e., the subfield subcodes of interleaved Reed-Solomon codes, but the fraction of decodable error matrices differs, since the error is now restricted to a subfield. In this paper, we present new general lower and upper bounds on the fraction of error matrices decodable by Schmidt et al.'s decoding algorithm, thereby making it the only decoding algorithm for interleaved alternant codes for which such bounds are known.
We analyze systematics in the asteroseismological mass determination methods in pulsating PG 1159 stars. We compare the seismic masses resulting from the comparison of the observed mean period spacings with the usually adopted asymptotic period spacings, and the average of the computed period spacings. Computations are based on full PG1159 evolutionary models with stellar masses ranging from 0.530 to 0.741 Mo that take into account the complete evolution of progenitor stars. We conclude that asteroseismology is a precise and powerful technique that determines the masses to a high internal accuracy, but it depends on the adopted mass determination method. In particular, we find that in the case of pulsating PG 1159 stars characterized by short pulsation periods, like PG 2131+066 and PG 0122+200, the employment of the asymptotic period spacings overestimates the stellar mass by about 0.06 Mo as compared with inferences from the average of the period spacings. In this case, the discrepancy between asteroseismological and spectroscopical masses is markedly reduced when use is made of the mean period spacing instead of the asymptotic period spacing.
The ice Ansatz on matrix solutions of the Yang-Baxter equation is weakened to a condition which we call rime. Generic rime solutions of the Yang-Baxter equation are described. We prove that the rime non-unitary (respectively, unitary) R-matrix is equivalent to the Cremmer-Gervais (respectively, boundary Cremmer-Gervais) solution. Generic rime classical r-matrices satisfy the (non-)homogeneous associative classical Yang-Baxter equation.
We consider a composite convex minimization problem associated with regularized empirical risk minimization, which often arises in machine learning. We propose two new stochastic gradient methods that are based on the stochastic dual averaging method with variance reduction. Our methods generate sparser solutions than the existing methods because we do not need to take the average of the history of the solutions. This is favorable in terms of both interpretability and generalization. Moreover, our methods have theoretical support for both strongly and non-strongly convex regularizers and achieve the best known convergence rates among existing non-accelerated stochastic gradient methods.
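A minimal sketch of the two ingredients named above, combined on a toy sparse least-squares problem: an SVRG-style variance-reduced gradient fed into a dual-averaging (RDA-type) update whose l1 proximal step is a coordinate-wise soft threshold, so the iterates themselves are sparse without averaging past solutions. This is illustrative only and not the authors' exact algorithms; the step-size constants are made up.

```python
# Illustrative sketch: dual averaging with an SVRG-style variance-reduced
# gradient and an l1 regularizer, on a small least-squares problem.
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 50
w_true = np.zeros(d)
w_true[:5] = rng.normal(size=5)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.01 * rng.normal(size=n)

lam, gamma = 0.05, 20.0       # l1 strength and dual-averaging scale (made up)

def grad_i(w, i):             # gradient of 0.5 * (x_i @ w - y_i)**2 w.r.t. w
    return (X[i] @ w - y[i]) * X[i]

w = np.zeros(d)
g_bar = np.zeros(d)           # running average of variance-reduced gradients
t = 0
for epoch in range(10):
    w_snap = w.copy()
    mu = X.T @ (X @ w_snap - y) / n                 # full gradient at the snapshot
    for _ in range(n):
        t += 1
        i = rng.integers(n)
        g = grad_i(w, i) - grad_i(w_snap, i) + mu   # variance-reduced gradient
        g_bar += (g - g_bar) / t                    # update the running average
        # dual-averaging step with l1: coordinate-wise soft threshold
        w = -(np.sqrt(t) / gamma) * np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)

print("nonzero coordinates:", int(np.sum(w != 0)),
      " distance to w_true:", round(float(np.linalg.norm(w - w_true)), 3))
```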
We study atom-ion scattering in the ultracold regime. To this aim, an analytical model based on the multichannel quantum defect formalism is developed and compared to close-coupled numerical calculations. We investigate the occurrence of magnetic Feshbach resonances focusing on the specific 40Ca+ - Na system. The presence of several resonances at experimentally accessible magnetic fields should allow the atom-ion interaction to be precisely tuned. A fully quantum-mechanical study of charge exchange processes shows that charge-exchange rates should remain small even in the presence of resonance effects. Most of our results can be cast in a system-independent form and are important for the realization of the charge-neutral ultracold systems.
The emergence of online enterprises spread across continents has given rise to the need for expert identification in this domain. This article addresses scenarios in which an employer intends to find tacit expertise and knowledge of an employee that is not documented or self-disclosed. Existing reputation-based approaches to expertise ranking in enterprises utilize PageRank, the normal distribution, and hidden Markov models. These models suffer from issues of negative referral, collusion, reputation inflation, and dynamism. The authors, however, propose a Bayesian approach utilizing a reputation model based on the beta probability distribution for employee ranking in enterprises. The experimental results reveal improved performance compared to previous techniques in terms of precision and mean average error (MAE), with an improvement of almost 7% in precision on average over the three data sets. The proposed technique is able to differentiate categories of interactions in a dynamic context. The results reveal that the technique is independent of the rating pattern and the density of the data.
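A minimal sketch of a beta-distribution reputation score in the style of the classic beta reputation system: with r positive and s negative interactions and a uniform Beta(1, 1) prior, the expected reputation is (r+1)/(r+s+2). The paper's exact priors, interaction categories, and aggregation are not reproduced here; all names are illustrative.

```python
# Illustrative beta-reputation score: expected value of Beta(r + 1, s + 1),
# where r and s count positive and negative interactions (uniform prior).
from dataclasses import dataclass

@dataclass
class Reputation:
    positive: int = 0
    negative: int = 0

    def update(self, outcome_is_positive: bool) -> None:
        if outcome_is_positive:
            self.positive += 1
        else:
            self.negative += 1

    @property
    def score(self) -> float:
        return (self.positive + 1) / (self.positive + self.negative + 2)

employees = {"alice": Reputation(), "bob": Reputation()}
for outcome in [True, True, False, True]:   # alice: 3 positive, 1 negative
    employees["alice"].update(outcome)
for outcome in [True, False]:               # bob: 1 positive, 1 negative
    employees["bob"].update(outcome)

ranking = sorted(employees, key=lambda e: employees[e].score, reverse=True)
print([(name, round(employees[name].score, 3)) for name in ranking])
# [('alice', 0.667), ('bob', 0.5)]
```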
Kontsevich and Soibelman defined the notion of Donaldson-Thomas invariants of a 3d Calabi-Yau category with a stability condition. A family of examples of such categories can be constructed from an arbitrary cluster variety. The corresponding Donaldson-Thomas invariants are encoded by a special formal automorphism of the cluster variety, known as Donaldson-Thomas transformation. Fix two integers $m$ and $n$ with $1<m<m+1<n$. It is known that the configuration space $\mathrm{Conf}_n(\mathbb{P}^{m-1})$, closely related to Grassmannian $\mathrm{Gr}_m(n)$, is a cluster Poisson variety. In this paper we determine the Donaldson-Thomas transformation of $\mathrm{Conf}_n(\mathbb{P}^{m-1})$ as an explicitly defined birational automorphism of $\mathrm{Conf}_n(\mathbb{P}^{m-1})$. Its variant acts on the Grassmannian by a birational automorphism.
We introduce a novel network-adaptive algorithm that is suitable for alleviating network packet losses for low-latency interactive communications between a source and a destination. Our network-adaptive algorithm estimates in real time the best parameters of a recently proposed streaming code that uses forward error correction (FEC) to correct both arbitrary and burst losses, which cause crackling noise and undesirable jitter, respectively, in audio. In particular, the destination estimates appropriate coding parameters based on its observed packet loss pattern and sends them back to the source for updating the underlying code. In addition, a new explicit construction of practical low-latency streaming codes that achieve the optimal tradeoff between the capability of correcting arbitrary losses and the capability of correcting burst losses is provided. Simulation evaluations based on statistical losses and real-world packet loss traces reveal the following: (i) our proposed network-adaptive algorithm combined with our optimal streaming codes can achieve significantly higher performance than uncoded and non-adaptive FEC schemes over UDP (User Datagram Protocol); (ii) our explicit streaming codes can significantly outperform traditional MDS (maximum-distance separable) streaming schemes when used along with our network-adaptive algorithm.
The XX model with uniform couplings represents the most natural choice for quantum state transfer through spin chains. Given that it has long been established that single-qubit states cannot be transferred with perfect fidelity in this model, the notion of pretty good state transfer has recently been introduced as a relaxation of the constraints on fidelity. In this paper, we study the transfer of multi-qubit entangled and unentangled states through unmodulated spin chains, and we prove that it is possible to have pretty good state transfer of any multi-particle state. This significantly generalizes the previous results on single-qubit state transfer and opens the way to using uniformly coupled spin chains as quantum channels for the transfer of arbitrary states of any dimension. Our results could be tested with current technology.
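For background, the dynamics underlying such transfer can be illustrated numerically in the single-excitation sector, where the uniformly coupled XX chain reduces to an N x N tridiagonal hopping matrix. The sketch below scans the excitation transfer probability from the first to the last site; it only illustrates the model's dynamics, not the multi-qubit pretty-good-transfer result of the paper, and N, J and the time grid are arbitrary choices.

```python
# Illustrative only: transfer probability |<N| exp(-iHt) |1>|^2 in the
# single-excitation sector of a uniformly coupled XX chain, where H is the
# N x N tridiagonal hopping matrix with coupling J on the off-diagonals.
import numpy as np

N, J = 6, 1.0
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = J

evals, evecs = np.linalg.eigh(H)

def transfer_probability(t: float) -> float:
    # amplitude <N| exp(-iHt) |1> via the spectral decomposition of H
    amp = np.sum(evecs[-1, :] * np.exp(-1j * evals * t) * evecs[0, :])
    return float(abs(amp) ** 2)

times = np.linspace(0.0, 50.0, 5001)
probs = [transfer_probability(t) for t in times]
best = int(np.argmax(probs))
print(f"best transfer probability over the scan: {probs[best]:.3f} at t = {times[best]:.2f}")
```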
This is the first of two papers devoted to connections between asymptotic functions of groups and computational complexity. One of the main results of this paper states that if for every $m$ the first $m$ digits of a real number $\alpha\ge 4$ are computable in time $\le C2^{2^{Cm}}$ for some constant $C>0$ then $n^\alpha$ is equivalent (``big O'') to the Dehn function of a finitely presented group. The smallest isodiametric function of this group is $n^{3/4\alpha}$. On the other hand, if $n^\alpha$ is equivalent to the Dehn function of a finitely presented group then the first $m$ digits of $\alpha$ are computable in time $\le C2^{2^{2^{Cm}}}$ for some constant $C$. This implies that, say, the functions $n^{\pi+1}$, $n^{e^2}$ and $n^\alpha$ for all rational numbers $\alpha\ge 4$ are equivalent to the Dehn functions of some finitely presented group and that $n^\pi$ and $n^\alpha$ for all rational numbers $\alpha\ge 3$ are equivalent to the smallest isodiametric functions of finitely presented groups. Moreover, we describe all Dehn functions of finitely presented groups $\succ n^4$ as time functions of Turing machines modulo two conjectures: (1) every Dehn function is equivalent to a superadditive function; (2) the square root of the time function of a Turing machine is equivalent to the time function of a Turing machine.
We study the behavior of non-Markovianity with respect to the localization of the initial environmental state. The "amount" of non-Markovianity is measured using divisibility and distinguishability as indicators, employing several schemes to construct the measures. The system used is a qubit coupled to an environment modeled by an Ising spin chain kicked by ultra-short pulses of a magnetic field. In the integrable regime, non-Markovianity and localization do not have a simple relation, but as the chaotic regime is approached, simple relations emerge, which we explore in detail. We also study the non-Markovianity measures in the space of the parameters of the spin coherent states and point out that the pattern that appears is robust under the choice of the interaction Hamiltonian but does not have a KAM-like phase-space structure.