Sigmoid semilogarithmic functions with the shape of Boltzmann equations have become extremely popular for describing diverse biological situations. Part of this popularity is due to the easy availability of software which fits Boltzmann functions to data without requiring much knowledge of the fitting procedure or of the statistical properties of the parameters derived from it. The purpose of this paper is to explore the plasticity of the Boltzmann function in fitting data, some aspects of the optimization procedure used to fit the function to data, and how to use this plastic function to differentiate the effect of a treatment on data and to assess the statistical significance of that effect.
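As a concrete illustration of the fitting procedure discussed above, here is a minimal sketch in Python using scipy.optimize.curve_fit; the parametrization (asymptotes a1, a2, midpoint x0, slope dx) and the synthetic data are illustrative assumptions, not the paper's own code.

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(x, a1, a2, x0, dx):
    """Boltzmann sigmoid: asymptotes a1, a2, midpoint x0, slope factor dx."""
    return a2 + (a1 - a2) / (1.0 + np.exp((x - x0) / dx))

# Illustrative synthetic dose-response data (log-dose on the x axis).
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 40)
y = boltzmann(x, 1.0, 0.1, 0.5, 0.4) + rng.normal(0, 0.02, x.size)

popt, pcov = curve_fit(boltzmann, x, y, p0=[1.0, 0.0, 0.0, 1.0])
perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties from the covariance
for name, val, err in zip(["a1", "a2", "x0", "dx"], popt, perr):
    print(f"{name} = {val:.3f} +/- {err:.3f}")
```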
The automatic generation of high-quality mathematical problems is practically valuable in many educational scenarios. Large multimodal models provide a novel technical approach to mathematical problem generation because of their wide success in cross-modal data scenarios. However, the traditional method of separating problem solving from problem generation, and the mainstream fine-tuning framework of monotonous data structure with homogeneous training objectives, limit the application of large multimodal models in mathematical problem generation. Addressing these challenges, this paper proposes COMET, a "Cone of Experience" enhanced large multimodal model for mathematical problem generation. Firstly, from the perspective of mutual ability promotion and application logic, we unify stem generation and problem solving into mathematical problem generation. Secondly, a three-stage fine-tuning framework guided by the "Cone of Experience" is proposed. The framework divides the fine-tuning data into symbolic experience, iconic experience, and direct experience, drawing parallels with the experiences in the career growth of teachers. Several fine-grained data construction and injection methods are designed in this framework. Finally, we construct a Chinese multimodal mathematical problem dataset to fill the gap in Chinese multimodal data in this field. Combining objective and subjective indicators, experiments on multiple datasets fully verify the effectiveness of the proposed framework and model.
We study the optical appearance of Schwarzschild-de Sitter and Reissner-Nordstr\"{o}m-de Sitter black holes viewed by distant observers inside cosmological horizons. Unlike their asymptotically flat counterparts, due to the positive cosmological constant, there are outermost stable circular orbits in these spacetimes, resulting in significant outer edges in the images. Besides, when the Reissner-Nordstr\"{o}m-de Sitter black hole has a stable Cauchy horizon, photons from the preceding companion universe can be received by an observer in our universe. These rays create a multi-ring structure in the image. Since a stable Cauchy horizon violates the strong cosmic censorship conjecture, this novel image sheds some light on testing the conjecture through astronomical observations.
Let $p_n(y)=\sum_k\hat{\alpha}_k\phi(y-k)+\sum_{l=0}^{j_n-1}\sum_k\hat{\beta}_{lk}2^{l/2}\psi(2^ly-k)$ be the linear wavelet density estimator, where $\phi$, $\psi$ are a father and a mother wavelet (with compact support), $\hat{\alpha}_k$, $\hat{\beta}_{lk}$ are the empirical wavelet coefficients based on an i.i.d. sample of random variables distributed according to a density $p_0$ on $\mathbb{R}$, and $j_n\in\mathbb{Z}$, $j_n\nearrow\infty$. Several uniform limit theorems are proved: First, the almost sure rate of convergence of $\sup_{y\in\mathbb{R}}|p_n(y)-Ep_n(y)|$ is obtained, and a law of the logarithm for a suitably scaled version of this quantity is established. This implies that $\sup_{y\in\mathbb{R}}|p_n(y)-p_0(y)|$ attains the optimal almost sure rate of convergence for estimating $p_0$, if $j_n$ is suitably chosen. Second, a uniform central limit theorem as well as strong invariance principles for the distribution function of $p_n$, that is, for the stochastic processes $\sqrt{n}(F_n^W(s)-F(s))=\sqrt{n}\int_{-\infty}^s(p_n-p_0),\ s\in\mathbb{R}$, are proved; and more generally, uniform central limit theorems for the processes $\sqrt{n}\int(p_n-p_0)f$, $f\in\mathcal{F}$, for other Donsker classes $\mathcal{F}$ of interest are considered. As a statistical application, it is shown that essentially the same limit theorems can be obtained for the hard thresholding wavelet estimator introduced by Donoho et al. [Ann. Statist. 24 (1996) 508--539].
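To make the estimator concrete, the following is a minimal numerical sketch for the special case of the Haar father wavelet with no mother-wavelet terms, in which the projection onto resolution level $j_n$ reduces to a histogram with bin width $2^{-j_n}$; the sample distribution and the choice $j=6$ are illustrative assumptions.

```python
import numpy as np

def haar_linear_density(sample, j):
    """Linear wavelet density estimator on [0, 1) with the Haar father wavelet:
    with phi = 1_[0,1) and no mother-wavelet terms, the level-j projection
    reduces to a histogram with bin width 2**-j."""
    n = sample.size
    counts = np.bincount(np.floor(sample * 2**j).astype(int), minlength=2**j)
    return lambda y: 2**j * counts[int(np.floor(y * 2**j))] / n

rng = np.random.default_rng(1)
X = rng.beta(2, 5, size=10_000)     # illustrative: true density p0 = Beta(2, 5)
p_n = haar_linear_density(X, j=6)   # in the theory, j_n grows slowly with n
print(p_n(0.2))                     # compare with p0(0.2) = 2.4576
```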
We propose a dark matter model in which the dark sector is gauged under a new SU(2) group. The dark sector consists of SU(2) dark gauge fields, two triplet dark Higgs fields, and two dark fermion doublets (dark matter candidates in this model). The dark sector interacts with the SM sector through kinetic and mass mixing operators. The model explains both PAMELA and Fermi LAT data very well and also satisfies constraints from both the DM relic density and Standard Model precision observables. The phenomenology of the model at the LHC is also explored.
The present contribution focuses on the effect of adherend surface roughness on the strength of adhesive joints, which are particularly cost-effective and extensively applied in a wide range of industrial applications. However, the reliability of such solutions is a critical concern for the integrity of commercial products. To gain a deeper understanding of the effect of roughness, an extensive experimental campaign is proposed, where thermoplastic substrates are produced with a specified roughness, whose characterization has been performed using a confocal profilometer. Elastic strips are then bonded onto such substrates using a silicone adhesive while controlling the adhesive thickness. Peeling tests are finally carried out, and the effects of joint parameters such as surface roughness, adhesive thickness, and loading rate are discussed in detail. Finally, it is demonstrated that the surface roughness can increase the adhesion energy of joints, depending on the ratio between the adhesive thickness and the root mean square elevation of the roughness.
In this work we analyze the stochastic dynamics of the Kauffman model evolving under the influence of noise. By considering the average crossing time between two distinct trajectories, we show that different Kauffman models exhibit a similar kind of behavior, even when the structure of their basins of attraction is quite different. This can be considered as a robust property of these models. We present numerical results for the full range of noise level and obtain approximate analytic expressions for the above crossing time as a function of the noise in the limit cases of small and large noise levels.
We consider a class of multi-robot motion planning problems where each robot is associated with multiple objectives and decoupled task specifications. The problems are formulated as an open-loop non-cooperative differential game. A distributed anytime algorithm is proposed to compute a Nash equilibrium of the game. The following properties are proven: (i) the algorithm asymptotically converges to the set of Nash equilibria; (ii) for scalar cost functionals, the price of stability equals one; (iii) in the worst case, the computational complexity and communication cost are linear in the number of robots.
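As a toy illustration of computing a Nash equilibrium by iterated, fully distributed best responses (a far simpler setting than the differential game above), consider two players with quadratic costs; the cost functions below are invented for the example.

```python
import numpy as np

# Two players choose scalar actions u1, u2. Illustrative quadratic costs:
#   J1(u1, u2) = (u1 - 1)^2 + 0.5 * u1 * u2
#   J2(u1, u2) = (u2 + 1)^2 + 0.5 * u1 * u2
# Setting dJi/dui = 0 gives the best responses:
#   u1 = 1 - u2/4,   u2 = -1 - u1/4
u1, u2 = 0.0, 0.0
for _ in range(50):                # anytime flavor: every sweep refines the answer
    u1 = 1.0 - u2 / 4.0            # player 1 best-responds to the current u2
    u2 = -1.0 - u1 / 4.0           # player 2 best-responds to the current u1
print(u1, u2)   # converges to the unique Nash equilibrium (4/3, -4/3)
```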
Many factors influence biomolecular binding, and its assessment constitutes an elusive challenge in computational structural biology. In this respect, the evaluation of shape complementarity at molecular interfaces is one of the main factors to be considered. We focus on the particular case of antibody-antigen complexes to quantify the complementarities occurring at molecular interfaces. We relied on a method we recently developed, which employs the 2D Zernike descriptors, to characterize the investigated regions with an ordered set of numbers summarizing the local shape properties. Having collected a structural dataset of antibody-antigen complexes, we applied this method and statistically distinguished, in terms of shape complementarity, pairs of interacting regions from non-interacting ones. We then set up a novel computational strategy based on \textit{in-silico} mutagenesis of antibody binding site residues. We developed a Monte Carlo procedure to increase the shape complementarity between the antibody paratope and a given epitope on a target protein surface. We applied our protocol to several molecular targets in the SARS-CoV-2 spike protein, known to be indispensable for viral cell invasion, thereby optimizing the shape of template antibodies for interaction with such regions. As the last step of our procedure, we performed an independent molecular docking validation of the results of our Monte Carlo simulations.
We propose in this paper a combined model of Long Short Term Memory and Convolutional Neural Networks (LSTM-CNN) that exploits word embeddings and positional embeddings for cross-sentence n-ary relation extraction. The proposed model brings together the properties of both LSTMs and CNNs to simultaneously exploit long-range sequential information and capture the most informative features, essential for cross-sentence n-ary relation extraction. The LSTM-CNN model is evaluated on a standard dataset for cross-sentence n-ary relation extraction, where it significantly outperforms baselines such as CNNs, LSTMs, and also a combined CNN-LSTM model. The paper also shows that the LSTM-CNN model outperforms the current state-of-the-art methods on cross-sentence n-ary relation extraction.
Non-linear effects have become increasingly relevant in modern circular particle accelerators, and in recent years a change of paradigm has emerged: the attitude towards nonlinear effects has shifted from fighting them to exploiting them, with the goal of devising new beam manipulations such as splitting the beam in the transverse phase space by crossing a stable resonance. In the field of hadron accelerators, well-established operational techniques based on nonlinear effects exist, whereas in synchrotron light sources these new techniques are only beginning to make their way into the field. In this paper, we discuss novel techniques aimed at providing synchrotron light sources with split beams, obtained by using stable islands in the transverse phase space, or with unsplit beams in which AC dipoles generate periodic closed orbits. The results of detailed numerical simulations, which support the proposed methods, are presented and discussed, together with possible applications.
Recently, Cai and Su [Phys. Rev. D {\bf 81}, 103514 (2010)] found that the sign of the interaction $Q$ in the dark sector changed in the approximate redshift range $0.45\lesssim z\lesssim 0.9$, by using a model-independent method to deal with the observational data. In fact, this result raises a remarkable problem, since most of the familiar interactions cannot change their signs over the whole cosmic history. Motivated by the work of Cai and Su, we proposed a new type of interaction in a previous work [H. Wei, Nucl. Phys. B {\bf 845}, 381 (2011)]. The key ingredient is the deceleration parameter $q$ in the interaction $Q$, through which the interaction $Q$ can change its sign when our universe changes from deceleration ($q>0$) to acceleration ($q<0$). In the present work, we consider the cosmological constraints on this new type of sign-changeable interaction, using the latest observational data. We find that the cosmological constraints on the model parameters are fairly tight. In particular, the key parameter $\beta$ can be constrained to a narrow range.
We have studied the Gamow-Teller (GT) transitions from $N=Z+2$ neighbors to $N=Z=$ odd nuclei in the $p$-shell region by using isospin-projected and $\beta\gamma$-constraint antisymmetrized molecular dynamics combined with the generator coordinate method. The calculated GT transition strengths from $0^+1$ states to $1^+0$ states such as ${}^{6}\textrm{He}(0_1^+1)\rightarrow{}^{6}\textrm{Li}(1_1^+0)$, ${}^{10}\textrm{Be}(0_1^+1)\rightarrow{}^{10}\textrm{B}(1_1^+0)$, and ${}^{14}\textrm{C}(0_1^+1)\rightarrow{}^{14}\textrm{N}(1_2^+0)$ exhaust more than 50\% of the sum rule. The $N=Z+2$ initial states and $N=Z=$ odd final states are found to be dominated by $S=0,T=1$ $nn$ pairs and $S=1,T=0$ $pn$ pairs, respectively. Based on the two-nucleon ($NN$) pair picture, we can understand the concentration of the GT strengths as the spin-isospin-flip transition $nn(S=0,T=1)\rightarrow pn(S=1,T=0)$ in the $LS$-coupling scheme. The GT transition can thus be a good probe for identifying the spin-isospin partner states with $nn$ pairs and $pn$ pairs in $N=Z+2$ and $N=Z=$ odd nuclei, respectively.
Deep neural networks are typically too computationally expensive to run in real-time on consumer-grade hardware and low-powered devices. In this paper, we investigate reducing the computational and memory requirements of neural networks through network pruning and quantisation. We examine their efficacy on large networks like AlexNet compared to recent compact architectures: ShuffleNet and MobileNet. Our results show that pruning and quantisation compress these networks to less than half their original size and improve their efficiency, particularly on MobileNet with a 7x speedup. We also demonstrate that pruning, in addition to reducing the number of parameters in a network, can help correct overfitting.
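A minimal sketch of the two compression steps on a toy network, using PyTorch's built-in pruning and dynamic quantisation utilities; the architecture and the 50% sparsity level are illustrative assumptions, not the paper's experimental setup.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Illustrative toy model (not one of the paper's architectures).
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))

# Magnitude-based unstructured pruning: zero out 50% of each weight tensor.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")      # make the pruning permanent

# Post-training dynamic quantisation: int8 weights for Linear layers (CPU).
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 1024)
print(quantized(x).shape)   # torch.Size([1, 10])
```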
A mechanical electroscope based on a change in the resonant frequency of a cantilever one micron in size in the presence of charge has recently been fabricated. We derive the decoherence rate of a charge superposition during measurement with such a device using a master equation theory adapted from quantum optics. We also investigate the information produced by such a measurement, using a quantum trajectory approach. Such instruments could be used in mesoscopic electronic systems, and future solid-state quantum computers, so it is useful to know how they behave when used to measure quantum superpositions of charge.
The rotary dynamics of polarized composite particles as dipole rigid bodies is considered. It is described by the Euler equations, singularly perturbed by the radiation reaction torque. The Schott term is taken into account, and a reduction procedure lowering the higher derivatives is applied. Asymptotic methods of nonlinear mechanics are used to analyze the rotary dynamics of an askew-polarized spinning top. Numerical estimates are given for hypothetical DAST nanocrystals, which might possess a huge dipole moment.
We study the no-gravity limit $G_N \to 0$ of the Ponzano-Regge amplitudes with massive particles and show that in this limit we recover Feynman graph amplitudes (with the Hadamard propagator) expressed as an abelian spin foam model. We show how the $G_N$ expansion of the Ponzano-Regge amplitudes can be resummed. This leads to the conclusion that the dynamics of quantum particles coupled to quantum 3d gravity can be expressed in terms of an effective new noncommutative field theory which respects the principles of doubly special relativity. We discuss the construction of Lorentzian spin foam models including Feynman propagators.
With much advancement in the fields of nanotechnology, bioengineering, and synthetic biology over the past decade, microscale and nanoscale devices are becoming a reality. Yet the problem of engineering a reliable communication system between tiny devices is still open. At the same time, despite the prevalence of radio communication, there are still areas where traditional electromagnetic waves find it difficult or expensive to reach. Points of interest in industry, cities, and medical applications often lie in embedded and entrenched areas, accessible only through ventricles at scales too small for conventional radio waves and microwaves, or located in such a way that directional high-frequency systems are ineffective. Inspired by nature, one solution to these problems is molecular communication (MC), where chemical signals are used to transfer information. Although biologists have studied MC for decades, it has been examined through a communication engineering lens for only roughly ten years. A significant number of papers have been published to date, but owing to the need for interdisciplinary work, many of the results are preliminary. In this paper, the recent advancements in the field of MC engineering are highlighted. First, the biological, chemical, and physical processes used by an MC system are discussed, including the different components of the MC transmitter and receiver, as well as the propagation and transport mechanisms. Then, a comprehensive survey of some of the recent works on MC through a communication engineering lens is provided. The paper ends with a technology readiness analysis of MC and future research directions.
Within linear continuum theory, no magnetic texture can propagate faster than the maximum group velocity of its spin waves. Here we report, by atomistic spin dynamics simulations, a transient regime due to the appearance of additional antiferromagnetic textures that breaks the Lorentz translational invariance of the magnetic system. This dynamical regime is akin to domain wall Walker breakdown in ferromagnets and involves the nucleation of an antiferromagnetic domain wall pair. Subsequently, one of the nucleated 180$^{\circ}$ domain walls forms, together with the original domain wall, a 360$^{\circ}$ spin rotation which remains static even under the action of the spin-orbit field. The other 180$^{\circ}$ domain wall is accelerated to super-magnonic speeds. Under large spin-orbit fields, multiple domain wall generation and recombination is obtained, which may explain the recently observed current-pulse-induced shattering of large domain structures into small fragmented domains and the subsequent slow recreation of the large-scale domain structure present prior to the current pulse.
Our purpose is to investigate properties of processes with stationary and independent increments under $G$-expectation. As applications, we prove the martingale characterization of $G$-Brownian motion and present a decomposition of generalized $G$-Brownian motion.
Data hunger and data imbalance are two major pitfalls in many deep learning approaches. For example, on highly optimized production lines, defective samples are hardly acquired while non-defective samples come almost for free. The defects, however, often seem to resemble each other; e.g., scratches on different products may differ only in a few characteristics. In this work, we introduce a framework, Defect Transfer GAN (DT-GAN), which learns to represent defect types independently of and across various background products and yet can apply defect-specific styles to generate realistic defective images. An empirical study on the MVTec AD and two additional datasets shows that DT-GAN outperforms state-of-the-art image synthesis methods w.r.t. sample fidelity and diversity in defect generation. We further demonstrate benefits for a critical downstream task in manufacturing -- defect classification. Results show that the augmented data from DT-GAN provides consistent gains even in the few-samples regime and reduces the error rate by up to 51% compared to both traditional and advanced data augmentation methods.
In this paper, a study is carried out of the $e^-p \to e^-\gamma^* p \to p W^+\gamma \nu_e$ production process to probe quartic $W^+W^-\gamma\gamma$ couplings, using 10 and 100 ${\rm fb^{-1}}$ of $e^-p$ collision data at $\sqrt{s}=1.30$, $1.98$ TeV at the Large Hadron electron Collider (LHeC) and 100 and 1000 ${\rm fb^{-1}}$ at $\sqrt{s}=3.46$, $5.29$ TeV at the Future Circular Collider-hadron electron (FCC-he). Production cross-sections are determined for both the leptonic and hadronic decay channels of the $W$-boson. With the data from future $e^-p$ colliders, it is possible to obtain sensitivity measures at $95\%$ C.L. on the anomalous $f_{M,i}/\Lambda^4$ and $f_{T,i}/\Lambda^4$ couplings which are competitive with the limits obtained by the LHC, as well as with other limits reported in the literature. The production mode $e^-p \to e^-\gamma^* p \to p W^+\gamma \nu_e$ in $e^-p$ collisions offers a window for studying the quartic $W^+W^-\gamma\gamma$ electroweak gauge boson couplings at the LHeC and the FCC-he, which provide a much cleaner collision environment than the LHC.
We show that the time-1 map of an Anosov flow, whose strong-unstable foliation is $C^2$ smooth and minimal, is $C^2$ close to a diffeomorphism having positive central Lyapunov exponent Lebesgue almost everywhere and a unique physical measure with full basin, which is $C^r$ stably ergodic. Our method is perturbative and does not rely on preservation of a smooth measure.
The purpose of this comment is to point out that the leading terms of the Ginzburg-Landau nonanalytical corrections to the interface tension of Bose-Einstein condensates with strong segregation and to the surface tension of extreme type-I superconductors are described by a common coefficient derived from the universal equation for the phase boundary. The agreement between the numerical values of the coefficients hints that this may be an exact result, which deserves to be checked. The outcome will be of interest to physicists working in both fields.
This study develops a framework for testing hypotheses on structural parameters in incomplete models. Such models make set-valued predictions and hence do not generally yield a unique likelihood function. The model structure, however, allows us to construct tests based on the least favorable pairs of likelihoods using the theory of Huber and Strassen (1973). We develop tests robust to model incompleteness that possess certain optimality properties. We also show that sharp identifying restrictions play a role in constructing such tests in a computationally tractable manner. A framework for analyzing the local asymptotic power of the tests is developed by embedding the least favorable pairs into a model that allows local approximations under the limits of experiments argument. Examples of the hypotheses we consider include those on the presence of strategic interaction effects in discrete games of complete information. Monte Carlo experiments demonstrate the robust performance of the proposed tests.
The master equation for the quantum optical density operator is transformed into an equation for the characteristic function. Parametric amplification and amplitude damping, as well as phase damping, are considered. The solution for the most general initial quantum state is obtained for parametric amplification and amplitude damping. The purity of a one-mode Gaussian system and the entanglement of a two-mode Gaussian system are studied.
Coherent states of light, and methods for distinguishing between them, are central to all applications of laser light. We obtain the ultimate quantum limit on the error probability exponent for discriminating among any M multimode coherent-state waveforms via the quantum Chernoff exponent in M-ary multi-copy state discrimination. A receiver, i.e., a concrete realization of a quantum measurement, called the Sequential Waveform Nulling (SWN) receiver, is proposed for discriminating an arbitrary coherent-state ensemble using only auxiliary coherent-state fields, beam splitters, and non-number-resolving single photon detectors. An explicit error probability analysis of the SWN receiver is used to show that it achieves the quantum limit on the error probability exponent, which is shown to be a factor of four greater than the error probability exponent of an ideal heterodyne-detection receiver on the same ensemble. We generalize the philosophy of the SWN receiver, which is itself adapted from some existing coherent-state receivers, and propose a receiver -- the Sequential Testing (ST) receiver -- for discriminating n copies of M pure quantum states from an arbitrary Hilbert space. The ST receiver is shown to achieve the quantum Chernoff exponent in the limit of a large number of copies, and is remarkable in requiring only local operations and classical communication (LOCC) to do so. In particular, it performs adaptive copy-by-copy binary projective measurements. Apart from being of fundamental interest, these results are relevant to communication, sensing, and imaging systems that use laser light and to photonic implementations of quantum information processing protocols in general.
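The factor-of-four gap between the nulling and heterodyne error exponents can be checked in the binary special case with standard closed-form expressions; the sketch below compares the single-copy Helstrom bound, a simple Kennedy-style nulling receiver, and heterodyne detection for two coherent states. The amplitudes are illustrative, and the formulas are the textbook ones rather than the paper's multi-copy analysis.

```python
import numpy as np
from scipy.special import erfc

# Binary discrimination of coherent states |alpha>, |beta> with equal priors.
# Squared overlap: |<alpha|beta>|^2 = exp(-d^2), with d = |alpha - beta|.
d = np.abs(1.2 - (-1.2))                            # illustrative amplitudes +/-1.2

F = np.exp(-d**2)                                   # squared overlap
helstrom = 0.5 * (1.0 - np.sqrt(1.0 - F))           # quantum-optimal, single copy
kennedy = 0.5 * np.exp(-d**2)                       # nulling: error iff beta gives no click
heterodyne = 0.5 * erfc(d / 2.0)                    # Gaussian measurement of the amplitude

print(f"Helstrom   {helstrom:.2e}")
print(f"Nulling    {kennedy:.2e}   exponent d^2   = {d**2:.2f}")
print(f"Heterodyne {heterodyne:.2e}   exponent d^2/4 = {d**2 / 4:.2f}")
```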
We consider a problem in Multi-Task Learning (MTL) where multiple linear models are jointly trained on a collection of datasets ("tasks"). A key novelty of our framework is that it allows the sparsity pattern of regression coefficients and the values of non-zero coefficients to differ across tasks while still leveraging partially shared structure. Our methods encourage models to share information across tasks through separately encouraging 1) coefficient supports, and/or 2) nonzero coefficient values to be similar. This allows models to borrow strength during variable selection even when non-zero coefficient values differ across tasks. We propose a novel mixed-integer programming formulation for our estimator. We develop custom scalable algorithms based on block coordinate descent and combinatorial local search to obtain high-quality (approximate) solutions for our estimator. Additionally, we propose a novel exact optimization algorithm to obtain globally optimal solutions. We investigate the theoretical properties of our estimators. We formally show how our estimators leverage the shared support information across tasks to achieve better variable selection performance. We evaluate the performance of our methods in simulations and two biomedical applications. Our proposed approaches appear to outperform other sparse MTL methods in variable selection and prediction accuracy. We provide the sMTL package on CRAN.
Molecular Relational Learning (MRL), which aims to understand interactions between molecular pairs, plays a pivotal role in advancing biochemical research. Recently, the adoption of large language models (LLMs), known for their vast knowledge repositories and advanced logical inference capabilities, has emerged as a promising way to achieve efficient and effective MRL. Despite their potential, these methods predominantly rely on textual data and thus do not fully harness the wealth of structural information inherent in molecular graphs. Moreover, the absence of a unified framework exacerbates the issue of information underutilization, as it hinders the sharing of interaction mechanisms learned across diverse datasets. To address these challenges, this work proposes a novel LLM-based multi-modal framework for Molecular inTeraction prediction following Chain-of-Thought (CoT) theory, termed MolTC, which effectively integrates the graphical information of the two molecules in a pair. To train MolTC efficiently, we introduce a multi-hierarchical CoT concept to refine its training paradigm, and construct a comprehensive Molecular Interactive Instructions dataset for the development of biochemical LLMs involving MRL. Our experiments, conducted across various datasets involving over 4,000,000 molecular pairs, demonstrate the superiority of our method over current GNN and LLM-based baselines. Code is available at https://github.com/MangoKiller/MolTC.
Spin-orbit coupling plays a large role in stabilizing the low-temperature orthorhombic phase of La$_{2-x}$Sr$_x$CuO$_4$. It splits the degeneracy of the van Hove singularities (thereby stabilizing the distorted phase) and completely changes the shape of the Fermi surfaces, potentially introducing diabolical points into the band structure. The present paper gives a detailed account of the resulting electronic structure. A slave boson calculation shows how these results are modified in the presence of strong correlation effects. A scaling regime, found very close to the metal-insulator transition, allows an analytical determination of the crossover in the limit of zero oxygen-oxygen hopping, $t_{OO}\rightarrow 0$. Extreme care must be exercised in choosing the parameters of the three-band model. In particular, $t_{OO}$ is renormalized from its LDA value. Furthermore, it is suggested that the slave boson model be spin-corrected, in which case the system is close to a metal-insulator transition at half filling.
A key feature of intelligent behaviour is the ability to learn abstract strategies that scale and transfer to unfamiliar problems. An abstract strategy solves every sample from a problem class, no matter its representation or complexity -- like algorithms in computer science. Neural networks are powerful models for processing sensory data, discovering hidden patterns, and learning complex functions, but they struggle to learn such iterative, sequential, or hierarchical algorithmic strategies. Extending neural networks with external memories has increased their capacity to learn such strategies, but they are still prone to data variations, struggle to learn scalable and transferable solutions, and require massive training data. We present the Neural Harvard Computer (NHC), a memory-augmented network architecture that achieves abstraction by decoupling algorithmic operations from data manipulations, realized by splitting the information flow into separated modules. This abstraction mechanism and evolutionary training enable the learning of robust and scalable algorithmic solutions. On a diverse set of 11 algorithms with varying complexities, we show that the NHC reliably learns algorithmic solutions with strong generalization and abstraction: perfect generalization and scaling to arbitrary task configurations and complexities far beyond those seen during training, while being independent of the data representation and the task domain.
The classical Weyl law says that if $N_M(\lambda)$ denotes the number of eigenvalues of the Laplace operator on a $d$-dimensional compact manifold $M$ without boundary that are less than or equal to $\lambda$, then $$ N_M(\lambda)=c\lambda^d+O(\lambda^{d-1}).$$ In this paper, we show that Duistermaat and Guillemin's result allows us to replace the $O(\lambda^{d-1})$ error with $o(\lambda^{d-1})$ if $M$ is a product manifold. We quantify this bound in the case of Cartesian products of spheres by reducing the problem to the study of the distribution of weighted integer lattice points in Euclidean space, and we formulate a conjecture in the general case reminiscent of the sum-product phenomenon in additive combinatorics.
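A simple lattice-point analogue of this counting problem (on the flat torus rather than a product of spheres) can be checked numerically; here $N(\lambda)$ counts eigenvalues of $\sqrt{\Delta}$ on $\mathbb{R}^2/(2\pi\mathbb{Z})^2$, i.e., integer points in a disk of radius $\lambda$, and the remainder is compared against $\lambda^{d-1}=\lambda$. The example is illustrative, not one of the paper's sphere products.

```python
import numpy as np

def N(lam):
    """Count eigenvalues of sqrt(Laplacian) on the flat torus R^2/(2*pi*Z)^2
    that are <= lam, i.e. integer points (m, n) with m^2 + n^2 <= lam^2."""
    m = np.arange(-int(lam) - 1, int(lam) + 2)
    mm, nn = np.meshgrid(m, m)
    return int(np.count_nonzero(mm**2 + nn**2 <= lam**2))

for lam in [50, 100, 200, 400]:
    remainder = N(lam) - np.pi * lam**2          # Weyl remainder, c = pi, d = 2
    print(lam, N(lam), f"remainder/lambda = {remainder / lam:+.3f}")
```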
The signless Laplacian spectral radius of a graph $G$, denoted by $q(G)$, is the largest eigenvalue of its signless Laplacian matrix. In this paper, we investigate the extremal signless Laplacian spectral radius for graphs without short cycles or long cycles. Let $\mathcal{G}(m,g)$ be the family of graphs on $m$ edges with girth $g$ and $\mathcal{H}(m,c)$ be the family of graphs on $m$ edges with circumference $c$. More precisely, we obtain the unique extremal graph with maximal $q(G)$ in $\mathcal{G}(m,g)$ and $\mathcal{H}(m,c)$, respectively.
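For readers who want to experiment, $q(G)$ is straightforward to compute numerically; the sketch below evaluates it for the 5-cycle, an example chosen purely for illustration rather than one of the paper's extremal graphs.

```python
import numpy as np

def q_from_adjacency(A):
    """Signless Laplacian spectral radius: largest eigenvalue of Q = D + A."""
    D = np.diag(A.sum(axis=1))
    return np.linalg.eigvalsh(D + A)[-1]   # eigvalsh returns ascending order

# C5, the 5-cycle: m = 5 edges, girth = circumference = 5.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

print(q_from_adjacency(A))   # 4.0: for a 2-regular graph, q = 2 + lambda_max(A)
```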
We discuss kinematical correlations between charged leptons from semileptonic decays of open charm/bottom, leptons produced in the Drell-Yan mechanism, as well as some other mechanisms not included so far in the literature, in proton-proton scattering at BNL RHIC. The distributions of charm and bottom quarks/antiquarks are calculated in the framework of the $k_t$-factorization approach. For this calculation we use the Kwieci\'nski unintegrated parton distributions. The hadronization of heavy quarks is done by means of the Peterson et al. fragmentation function. We use semileptonic decay functions found by fitting recent semileptonic data obtained by the CLEO and BABAR collaborations. The Drell-Yan processes were calculated including transverse momenta of quarks and antiquarks, also using the Kwieci\'nski parton distributions. We have also taken into consideration reactions initiated by purely QED $\gamma^*\gamma^*$ fusion in elastic and inelastic $pp$ collisions, as well as the recently proposed diffractive mechanism of exclusive charm-anticharm production. The contribution of the latter mechanism is rather small. We obtain a good description of the dilepton invariant mass spectrum measured recently by the PHENIX collaboration and present predictions for the dilepton pair transverse momentum distribution as well as the distribution in the azimuthal angle between the electron and the positron.
Different measures of heart rate variability, and particularly of respiratory sinus arrhythmia, are widely used in research and clinical applications. Inspired by ideas from the theory of coupled oscillators, we use simultaneous measurements of respiratory and cardiac activity to perform a nonlinear decomposition of the heart rate variability into its respiratory-related component and the rest. We suggest exploiting the technique as a universal preprocessing tool, both for the analysis of respiratory influence on the heart rate and in cases where the effects of other factors on the heart rate variability are in focus. The theoretical consideration is illustrated by the analysis of 25 data sets from healthy subjects.
Optimizing non-convex functions is of primary importance in the vast majority of machine learning algorithms. Even though many gradient-descent-based algorithms have been studied, successive convex approximation based algorithms have recently been shown empirically to converge faster. However, such successive convex approximation based algorithms can get stuck at a first-order stationary point. To avoid this, we propose an algorithm that slightly perturbs the optimization variable at the appropriate iterations. In addition to achieving the same convergence rate as the non-perturbed version, we show that the proposed algorithm converges to a second-order stationary point. Thus, the proposed algorithm escapes saddle points efficiently and does not get stuck at first-order saddle points.
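A toy version of the perturbation idea on the classic strict saddle $f(x)=x_1^2-x_2^2$: plain descent initialized (almost) on the stable manifold stalls at the origin, while a small random perturbation at near-stationary iterates lets it escape. The step size, noise level, and stopping rule are illustrative, and the sketch uses plain gradient steps rather than the paper's successive convex approximation updates.

```python
import numpy as np

def grad(x):
    # f(x) = x0^2 - x1^2 has a strict (first-order) saddle at the origin.
    return np.array([2.0 * x[0], -2.0 * x[1]])

rng = np.random.default_rng(0)
x = np.array([1e-8, 0.0])              # starts on the saddle's stable manifold
for _ in range(500):
    g = grad(x)
    if np.linalg.norm(g) < 1e-6:       # near a first-order stationary point:
        x = x + rng.normal(0.0, 1e-3, size=2)    # small perturbation to escape
    else:
        x = x - 0.1 * g                # plain descent step
    if abs(x[1]) > 1.0:                # clearly escaped along the descent direction
        break
print(x)   # without the perturbation, the iterate would stall at the origin
```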
Low-rank tensor approximation error bounds are proposed for the case of noisy input data; the bounds depend on the low-rank representation type, the rank, and the dimensionality of the tensor. The bounds show that high-dimensional low-rank structured approximations provide superior noise-filtering properties compared to matrices with the same rank and total element count.
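The noise-filtering effect of low-rank structure is easy to demonstrate in the simplest (matrix) case: truncating the SVD of a noisy observation at the true rank removes most of the noise energy. The sizes, rank, and noise level below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, sigma = 200, 5, 0.1
L = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # rank-5 ground truth
Y = L + sigma * rng.normal(size=(n, n))                 # noisy observation

U, s, Vt = np.linalg.svd(Y, full_matrices=False)
L_hat = (U[:, :r] * s[:r]) @ Vt[:r]                     # rank-r truncation

print("noisy error    ", np.linalg.norm(Y - L) / np.linalg.norm(L))
print("truncated error", np.linalg.norm(L_hat - L) / np.linalg.norm(L))
```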
Video processing solutions for motion analysis are key components of many computer vision applications, ranging from human activity recognition to object detection. In particular, speed estimation algorithms are relevant in contexts such as street monitoring and environmental surveillance. In most realistic scenarios, the projection of a framed object of interest onto the image plane is likely to be affected by dynamic changes, mainly related to perspective transformations or periodic behaviours. Therefore, advanced speed estimation techniques need to rely on robust algorithms for object detection that are able to deal with potential geometrical modifications. The proposed method is composed of a sequence of pre-processing operations, which aim to reduce or remove the perspective effects affecting the objects of interest, followed by an estimation phase based on the Maximum Likelihood (ML) principle, in which the speed of the foreground objects is estimated. ML estimation is a well-established statistical tool that can be exploited to obtain reliable results. The performance of the proposed algorithm is evaluated on a set of real video recordings and compared with a block-matching motion estimation algorithm. The obtained results indicate that the proposed method shows good and robust performance.
We consider a simple Higgs portal dark matter model, where the Standard Model is supplemented with a complex scalar whose imaginary part plays the role of WIMP dark matter (DM). We show that the direct DM detection cross section vanishes at tree level and zero momentum transfer due to a cancellation by virtue of a softly broken symmetry. This cancellation is operative for any mediator masses. As a result, our electroweak scale dark matter satisfies all of the phenomenological constraints quite naturally.
We use weak-value amplification to enhance the polarization-sensitive fast-light effect from induced Raman absorption in hot rubidium vapor. We experimentally demonstrate that projecting the output signal into an appropriate polarization state enables a pulse advancement of 4.2 {\mu}s, which is 15 times larger than that naturally caused by dispersion. More significantly, we show that combining weak-value amplification with the dispersive response of an atomic system provides a clear advantage in terms of the maximum pulse advancement achievable for a given value of loss. This technique has potential applications for designing novel quantum-information-processing gates and optical buffers for telecommunication systems.
The Accretion-Ejection Instability has been proposed to explain the low-frequency Quasi-Periodic Oscillation (QPO) observed in low-mass X-ray binaries, in particular black-hole candidates. Its frequency, typically a fraction of the Keplerian frequency at the disk inner radius, is exactly in the range indicated by observations. The variation of the frequency with the disk inner radius (extracted from spectral fits of the X-ray emission) might thus be a useful test. In this paper we discuss how changes in the rotation curve, due to relativistic effects when the disk approaches the central object, affect the physics of the instability and thus this frequency-inner radius relation. We find that the relationship between the frequency of the mode and the Keplerian frequency at the inner disk radius ($r_{int}$) departs from the one obtained in a Keplerian disk when $r_{int}$ approaches the last stable orbit. This might agree with recently published results showing a discrepancy between the behavior of the QPO in the microquasar GRO J1655 compared to other sources such as XTE J1550 and GRS 1915. In a companion paper (Rodriguez et al., 2002, hereafter Paper I) we have presented detailed observational results for GRO J1655 and GRS 1915. We show how the opposite correlations found in these sources between the disk color radius (assumed to be close to its inner radius) and the QPO frequency could indeed be explained by our theoretical result.
When photons from distant galaxies and stars pass through our neighboring environment, their wavelengths are shifted by our local gravitational potential. This local gravitational redshift effect can potentially have an impact on the measurement of the cosmological distance-redshift relation. Using available supernovae data, Wojtak et al. [1] found seemingly large biases of cosmological parameters for some extended models (non-flat $\Lambda$CDM, $w$CDM, etc.). Huang [2] pointed out, however, that the biases can be reduced to a negligible level if cosmic microwave background (CMB) data are added to break the strong degeneracy between parameters in the extended models. In this article we forecast the cosmological bias due to local gravitational redshifts for a future WFIRST-like supernovae survey. We find that the local gravitational redshift effect remains negligible, provided that CMB data or some future redshift survey data are added to break the degeneracy between parameters.
The fluctuation exchange (FLEX) approximation is applied to study the Holstein-Hubbard model. Due to the retarded nature of the phonon-mediated electron-electron interaction, neither fast Fourier transform (FFT) nor previously developed NRG methods for Hubbard-type purely electronic models are applicable, while brute-force solutions are limited by demands on computational time and storage which increase rapidly at low temperature $T$. Here, we describe a new numerical renormalization group (NRG) technique to solve the FLEX equations efficiently. Several orders of magnitude of CPU time and storage can be saved at low $T$ ($\sim 80$ K). To test our approach, we compare our NRG results to brute-force calculations on small lattices at elevated temperatures. Both s-wave and d-wave superconducting phase diagrams are then obtained by applying the NRG approach at low $T$. The isotope effect for s-wave pairing is BCS-like in a realistic phonon frequency range, but vanishes at unphysically large phonon frequency ($\sim$ band width). For d-wave pairing, the isotope exponent is negative and small compared to the values typically observed in non-optimally doped cuprates.
This is an extended version of a series of lectures given in St Flour. It includes a discussion of the relations between the occupation field of Markov loops and the corresponding free field.
A conjectural expression of the asymptotic gap between the rate-distortion function of an arbitrary generalized Gaussian multiterminal source coding system and that of its centralized counterpart in the high-resolution regime is proposed. The validity of this expression is verified when the number of sources is no more than 3.
Following its flyby and first imaging of the Pluto-Charon binary, the New Horizons spacecraft visited the Kuiper-Belt object (KBO) (486958) 2014 MU69 (Arrokoth). Imaging showed MU69 to be a contact binary, made of two individual lobes connected by a narrow neck, rotating with a slow spin (period 15.92 h) and a high obliquity (~98 deg), similar to other KBO contact binaries inferred through photometric observations. The origin of such peculiar configurations is puzzling, and all scenarios suggested for the origins of contact binaries fail to reproduce such properties and their likely high frequency. Here we show that semi-secular perturbations operating only on ultra-wide (~0.1-0.4 Hill-radius) KBO binaries can robustly lead to gentle, slow-speed binary mergers at arbitrarily high obliquities but low rotational velocities, which can reproduce the characteristics of MU69 (and similar oblique contact binaries). Using N-body simulations, we find that ~15% of all ultra-wide binaries with a cosine-uniform inclination distribution are likely to merge through this process. Moreover, we find that such mergers are sufficiently gentle as to only slightly deform the KBO shape, and can produce the measured rotation speed of MU69. The semi-secular contact-binary formation channel not only explains the observed properties of MU69, but could also apply to other Kuiper/asteroid belt binaries and to Solar/extra-solar moon systems.
Motivation: We investigate whether a template-based classification pipeline could be used to identify immunophenotypes in (and thereby classify) a heterogeneous disease with many subtypes. The disease we consider here is Acute Myeloid Leukemia (AML), which is heterogeneous at the morphologic, cytogenetic, and molecular levels, with several known subtypes. The prognosis and treatment for AML depend on the subtype. Results: We apply flowMatch, an algorithmic pipeline for flow cytometry data created in earlier work, to compute templates succinctly summarizing classes of AML and healthy samples. We develop a scoring function that accounts for features of the AML data, such as heterogeneity, to identify immunophenotypes corresponding to various AML subtypes, including APL. All of the AML samples in the test set are classified correctly with high confidence. Availability: flowMatch is available at www.bioconductor.org/packages/devel/bioc/html/flowMatch.html; programs specific to immunophenotyping AML are at www.cs.purdue.edu/homes/aazad/software.html.
The Maxwell and Maxwell-de Rham equations can be solved exactly to first order in an external gravitational field. The gravitational background induces phases in the wave functions of spin-1 particles. These phases yield the optics of the particles without requiring any thin lens approximation.
Photon correlation measurements reveal memory effects in the optical emission of single InAs quantum dots with timescales from 10 to 800 ns. With above-band optical excitation, a long-timescale negative correlation (antibunching) is observed, while with quasi-resonant excitation, a positive correlation (blinking) is observed. A simple model based on long-lived charged states is presented that approximately explains the observed behavior, providing insight into the excitation process. Such memory effects can limit the internal efficiency of light emitters based on single quantum dots, and could also be problematic for proposed quantum-computation schemes.
We report the first electron paramagnetic resonance studies of single crystals and powders of Pr_{0.6}Ca_{0.4}MnO_{3} in the 300-4.2 K range, covering the charge-ordering transition at T_{co} ~ 240 K and the antiferromagnetic transition at T_N ~ 170 K. The asymmetry parameter for the Dysonian single-crystal spectra shows an anomalous increase at T_{co}. Below T_{co} the g-value increases continuously, suggesting a gradual strengthening of orbital ordering. The linewidth undergoes a sudden increase at T_{co} and continues to increase down to T_N. The intensity increases as the temperature is decreased down to T_{co}, due to the renormalization of the magnetic susceptibility arising from the build-up of ferromagnetic correlations. The value of the exchange constant, J, is estimated to be 154 K.
This paper explores the potential of large language models (LLMs) to make the Aeronautical Regulations of Colombia (RAC) more accessible. Given the complexity and extensive technicality of the RAC, this study introduces a novel approach to simplifying these regulations for broader understanding. By developing the first-ever RAC database, which contains 24,478 expertly labeled question-and-answer pairs, and fine-tuning LLMs specifically for RAC applications, the paper outlines the methodology for dataset assembly, expert-led annotation, and model training. Utilizing the Gemma1.1 2b model along with advanced techniques like Unsloth for efficient VRAM usage and flash attention mechanisms, the research aims to expedite training processes. This initiative establishes a foundation to enhance the comprehensibility and accessibility of RAC, potentially benefiting novices and reducing dependence on expert consultations for navigating the aviation industry's regulatory landscape. You can visit the dataset (https://huggingface.co/datasets/somosnlp/ColombiaRAC_FullyCurated) and the model (https://huggingface.co/somosnlp/gemma-1.1-2b-it_ColombiaRAC_FullyCurated_format_chatML_V1) here.
Data association is one of the fundamental problems in multi-sensor systems. Most current techniques rely on pairwise data associations, which can be spurious even after the employment of outlier rejection schemes. Considering multiple pairwise associations at once significantly increases accuracy and leads to consistency. In this work, we propose two fully decentralized methods for consistent global data association from pairwise data associations. The first method is a consensus algorithm on the set of doubly stochastic matrices. The second method is a decentralization of the spectral method proposed by Pachauri et al. We demonstrate the effectiveness of both methods through theoretical analysis and experimental evaluation.
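For context, here is a minimal sketch of the centralized spectral method of Pachauri et al. that the second algorithm decentralizes: pairwise permutations are stacked into a block matrix whose leading eigenvectors encode globally consistent associations. The problem sizes and the rounding via linear assignment are illustrative choices.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_views, k = 4, 5                      # 4 sensors, 5 features each (illustrative)
P = [np.eye(k)[rng.permutation(k)] for _ in range(n_views)]   # ground truth

# Pairwise associations P_ij = P_i P_j^T stacked into one block matrix.
B = np.block([[P[i] @ P[j].T for j in range(n_views)] for i in range(n_views)])

# The top-k eigenvectors of B span the globally consistent association space.
w, V = np.linalg.eigh(B)
U = V[:, -k:]                          # leading k-dimensional eigenspace

# Round each k x k block against the first view's block via linear assignment.
anchor = U[:k]
for i in range(n_views):
    M = U[i * k:(i + 1) * k] @ anchor.T
    row, col = linear_sum_assignment(-M)            # maximize correlation
    P_hat = np.eye(k)[col]                          # recovered P_i P_0^T
    assert np.allclose(P_hat, P[i] @ P[0].T)        # consistent with truth
print("recovered globally consistent associations")
```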
Though convolutional neural networks are widely used in different tasks, a lack of generalization capability in the absence of sufficient and representative data is one of the challenges that hinder their practical application. In this paper, we propose a simple, effective, and plug-and-play training strategy named Knowledge Distillation for Domain Generalization (KDDG), which is built upon a knowledge distillation framework with a gradient filter as a novel regularization term. We find that both the ``richer dark knowledge" from the teacher network and the gradient filter we propose can reduce the difficulty of learning the mapping, which further improves the generalization ability of the model. We also conduct extensive experiments to show that our framework can significantly improve the generalization capability of deep neural networks in different tasks, including image classification, segmentation, and reinforcement learning, by comparing our method with existing state-of-the-art domain generalization techniques. Last but not least, we propose two metrics for analyzing our method, in order to better understand how it benefits the generalization capability of deep neural networks.
We discuss the thermal evolution of the spurion and messenger fields of ordinary gauge mediation models taking into account the Standard Model degrees of freedom. It is shown that for thermalized messengers the metastable susy breaking vacuum becomes thermally selected provided that the susy breaking sector is sufficiently weakly coupled to messengers or to any other observable field.
The RGB-D camera has a limited working range and can hardly measure depth information accurately at far distances. Besides, the RGB-D camera is easily influenced by strong lighting and other external factors, leading to poor accuracy in the acquired environmental depth information. Recently, deep learning technologies have achieved great success in the visual SLAM area; they can directly learn high-level features from visual inputs and improve the estimation accuracy of the depth information. Therefore, deep learning technologies have the potential to extend the source of the depth information and improve the performance of the SLAM system. However, existing deep learning-based methods are mainly supervised and require a large amount of ground-truth depth data, which is hard to acquire because of realistic constraints. In this paper, we first present an unsupervised learning framework which not only uses image reconstruction for supervision but also exploits the pose estimation method to enhance the supervision signal and add training constraints for the task of monocular depth and camera motion estimation. Furthermore, we successfully exploit our unsupervised learning framework to assist the traditional ORB-SLAM system when the initialization module of the ORB-SLAM method cannot match enough features. Qualitative and quantitative experiments have shown that our unsupervised learning framework performs the depth estimation task comparably to supervised methods and outperforms the previous state-of-the-art approach by $13.5\%$ on the KITTI dataset. Besides, our unsupervised learning framework can significantly accelerate the initialization process of the ORB-SLAM system and effectively improve the accuracy of environmental mapping in strong lighting and weak-texture scenes.
Establishing a predictive ab initio method for solid systems is one of the fundamental goals in condensed matter physics and computational materials science. The central challenge is how to encode a highly complex quantum many-body wave function compactly. Here, we demonstrate that artificial neural networks, known for their overwhelming expressibility in the context of machine learning, are an excellent tool for first-principles calculations of extended periodic materials. We show that the ground-state energies of real solids in one-, two-, and three-dimensional systems are simulated precisely, reaching chemical accuracy. The highlight of our work is that the quasiparticle band spectra, which are both essential and peculiar to solid-state systems, can be efficiently extracted with a computational technique designed to exploit the low-lying energy structure from neural networks. This work opens up a path to elucidating the intriguing and complex many-body phenomena in solid-state systems.
The paper deals with the controllability of finite-dimensional linear difference delay equations, i.e., dynamics for which the state at a given time $t$ is obtained as a linear combination of the control evaluated at time $t$ and of the state evaluated at finitely many previous instants of time $t-\Lambda_1,\dots,t-\Lambda_N$. Based on the realization theory developed by Y. Yamamoto for general infinite-dimensional dynamical systems, we obtain necessary and sufficient conditions, expressed in the frequency domain, for the approximate controllability in finite time in $L^q$ spaces, $q \in [1, +\infty)$. We also provide a necessary condition for $L^1$ exact controllability, which can be seen as the closure of the $L^1$ approximate controllability criterion. Furthermore, we provide an explicit upper bound on the minimal times of approximate and exact controllability, given by $d\max\{\Lambda_1,\dots,\Lambda_N\}$, where $d$ is the dimension of the state space.
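A toy discretized illustration of the controllability-time bound in the simplest case ($d=1$, $N=1$, so the bound is $\Lambda_1$): for $x(t)=a\,x(t-\Lambda)+u(t)$, the control $u(t)=x_{\mathrm{target}}(t)-a\,x(t-\Lambda)$ steers the state profile to any target within one delay length. This is only a sanity check of the bound on a grid of commensurable times, not the paper's $L^q$ theory.

```python
import numpy as np

# Scalar difference delay equation x(t) = a*x(t - L) + u(t), on a grid of
# step h with delay L = m*h = 1.0. The "state" at time t is the profile of
# x on [t - L, t); the exact steering control reaches any target profile
# within time d * max(Lambda) = 1 * L, matching the upper bound.
a, m, h = 0.7, 10, 0.1
T = 3 * m                              # simulate three delay-lengths
x = np.zeros(T + m)                    # x[0:m] is the (zero) initial history
target = np.sin(np.linspace(0, 2 * np.pi, T + m))

for t in range(m, T + m):
    u = target[t] - a * x[t - m]       # exact steering control
    x[t] = a * x[t - m] + u

assert np.allclose(x[m:], target[m:])  # target profile reached after time L
print("steered to the target profile in one delay length")
```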
ZnGeN2 and other heterovalent ternary semiconductors have important potential applications in optoelectronics, but ordering of the cation sublattice, which can affect the band gap, lattice parameters, and phonons, is not yet well understood. Here the effects of growth and processing conditions on the ordering of the ZnGeN2 cation sublattice were investigated using x-ray diffraction and Raman spectroscopy. Polycrystalline ZnGeN2 was grown by exposing solid Ge to Zn and NH3 vapors at temperatures between 758 °C and 914 °C. Crystallites tended to be rod-shaped, with growth rates higher along the c-axis. The degree of ordering, from disordered, wurtzite-like x-ray diffraction spectra to orthorhombic, with space group Pna21, increased with increasing growth temperature, as evidenced by the appearance of superstructure peaks and peak splittings in the diffraction patterns. Annealing disordered, low-temperature-grown ZnGeN2 at 850 °C resulted in increased cation ordering. Growth of ZnGeN2 on a liquid Sn-Ge-Zn alloy at 758 °C showed an increase in the tendency for cation ordering at a lower growth temperature, and resulted in hexagonal platelet-shaped crystals. The trends shown here may help to guide understanding of the synthesis and characterization of other heterovalent ternary nitride semiconductors as well as ZnGeN2.
It is shown that the $n$-point functions of scalar massive free fields on the noncommutative Minkowski space are distributions which are boundary values of analytic functions. Contrary to what one might expect, this construction does not provide a connection to the popular traditional Euclidean approach to noncommutative field theory (unless the time variable is assumed to commute). Instead, one finds Schwinger functions with twistings involving only momenta that are on the mass-shell. This explains why renormalization in the traditional Euclidean noncommutative framework crudely differs from renormalization in the Minkowskian regime.
The properties of galaxies depend on their environment: red, passive elliptical galaxies are usually located in denser environments than blue, star-forming spiral galaxies. This difference in galaxy populations can be detected at all scales, from groups of galaxies to superclusters. In this paper, we discuss the effect of the large-scale environment on galaxies. Our results suggest that galaxies in superclusters are more likely to be passive than galaxies in voids, even when they belong to groups of the same richness. In addition, galaxies in superclusters are also affected by the morphology of the supercluster: filament-type superclusters contain relatively more red, passive galaxies than spider-type superclusters. These results suggest that the evolution of a galaxy is not determined by its local environment alone; the large-scale environment also affects it.
We study the fluid inclusion of both Lennard-Jones particles and particles with competing interaction ranges --short range attractive and long range repulsive (SALR)-- in a disordered porous medium constructed as a controlled pore glass in two dimensions. With the aid of a full two-dimensional Ornstein-Zernike approach, complemented by a Replica Ornstein-Zernike integral equation, we explicitly obtain the spatial density distribution of the fluid adsorbed in the porous matrix and a good approximation for the average fluid-matrix correlations. The results illustrate the remarkable differences between the adsorbed Lennard-Jones (LJ) and SALR systems. In the latter instance, particles tend to aggregate in clusters which occupy pockets and bays in the porous structure, whereas the LJ fluid uniformly wets the porous walls. A comparison with Molecular Dynamics simulations shows that the two-dimensional Ornstein-Zernike approach with a Hypernetted Chain closure together with a sensible approximation for the fluid-fluid correlations can provide an accurate picture of the spatial distribution of adsorbed fluids for a given configuration of porous material.
It is shown that the probabilities for the spin singlet can be reproduced through classical resources, with no communication between the distant parties, by using merely shared (pseudo-)randomness. If the parties are conscious beings aware of both the hidden-variables and the random mechanism, then one has a conspiracy. If the parties are aware of only the random variables, they may be induced to believe that they are able to send instantaneous information to one another. It is also possible to reproduce the correlations at the price of reducing the detection efficiency. It is further demonstrated that the same probability decomposition could be realized through action-at-a-distance, provided it existed.
We show that the $\gamma$-vector of the interval subdivision of a simplicial complex with a nonnegative and symmetric $h$-vector is nonnegative. In particular, we prove that such a $\gamma$-vector is the $f$-vector of some balanced simplicial complex. Moreover, we show that the local $\gamma$-vector of the interval subdivision of a simplex is nonnegative, answering a question by Juhnke-Kubitzke et al.
Traditional syntax models typically leverage part-of-speech (POS) information by constructing features from hand-tuned templates. We demonstrate that a better approach is to utilize POS tags as a regularizer of learned representations. We propose a simple method for learning a stacked pipeline of models which we call "stack-propagation". We apply this to dependency parsing and tagging, where we use the hidden layer of the tagger network as a representation of the input tokens for the parser. At test time, our parser does not require predicted POS tags. On 19 languages from the Universal Dependencies, our method is 1.3% (absolute) more accurate than a state-of-the-art graph-based approach and 2.7% more accurate than the most comparable greedy model.
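To make the stack-propagation idea concrete, here is a minimal PyTorch sketch: a tagger network whose hidden layer serves as the parser's token representation, with the POS objective acting only as a training-time regularizer. The dimensions, the BiLSTM encoder, and the toy arc scorer are illustrative assumptions, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class StackProp(nn.Module):
        """Sketch of stack-propagation: the tagger's hidden layer feeds the
        parser; the POS loss regularizes the shared representation."""
        def __init__(self, vocab, n_tags, d_emb=100, d_hid=128):
            super().__init__()
            self.emb = nn.Embedding(vocab, d_emb)
            self.tagger = nn.LSTM(d_emb, d_hid, batch_first=True, bidirectional=True)
            self.tag_head = nn.Linear(2 * d_hid, n_tags)    # auxiliary POS softmax
            self.arc_head = nn.Linear(2 * d_hid, 2 * d_hid) # toy head-dependent scorer

        def forward(self, tokens):
            h, _ = self.tagger(self.emb(tokens))            # (B, T, 2*d_hid)
            tag_logits = self.tag_head(h)                   # used only in the POS loss
            arc_scores = torch.einsum('btd,bsd->bts', self.arc_head(h), h)
            return tag_logits, arc_scores

    # Training would mix both objectives, e.g. loss = parse_loss + tag_loss;
    # at test time only arc_scores are needed, so no predicted POS tags are
    # required, matching the abstract's claim.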
Recent work has shown that either (1) increasing the input length or (2) increasing model size can improve the performance of Transformer-based neural models. In this paper, we present a new model, called LongT5, with which we explore the effects of scaling both the input length and model size at the same time. Specifically, we integrated attention ideas from long-input transformers (ETC), and adopted pre-training strategies from summarization pre-training (PEGASUS) into the scalable T5 architecture. The result is a new attention mechanism we call {\em Transient Global} (TGlobal), which mimics ETC's local/global attention mechanism, but without requiring additional side-inputs. We are able to achieve state-of-the-art results on several summarization tasks and outperform the original T5 models on question answering tasks.
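As a rough illustration of the Transient Global idea, the NumPy sketch below lets each token attend to a local window plus block-mean summary tokens computed on the fly, with no side-inputs. The block size, window size, mean pooling, and single-head form are assumptions for illustration, not LongT5's exact mechanism.

    import numpy as np

    def tglobal_attention(X, window=2, block=4):
        """Toy single-head attention over local keys plus transient global
        (block-mean) keys; assumes the length T is divisible by block."""
        T, d = X.shape
        G = X.reshape(T // block, block, d).mean(1)   # transient global tokens
        out = np.empty_like(X)
        for t in range(T):
            lo, hi = max(0, t - window), min(T, t + window + 1)
            K = np.vstack([X[lo:hi], G])              # local keys + global keys
            w = np.exp(K @ X[t] / np.sqrt(d))         # unnormalized attention
            out[t] = (w / w.sum()) @ K                # attention-weighted sum
        return out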
In this paper, we propose a spatial modulation (SM) scheme referred to as complex quadrature spatial modulation (CQSM). In contrast to quadrature spatial modulation (QSM), CQSM transmits two complex signal constellation symbols on the real and quadrature spatial dimensions at each channel use, increasing the spectral efficiency. To this end, signal symbols transmitted at any given time instant are drawn from two different modulation sets. The first modulation set is any of the conventional QAM/PSK alphabets, while the second is a rotated version of it. The optimal rotation angle is obtained through simulations for several modulation schemes and analytically proven for the case of QPSK, where both results coincide. Simulation results showed that CQSM outperformed QSM and generalized SM (GSM) by approximately 5 and 4.5 dB, respectively, for the same transmission rate. Its performance was similar to that of QSM; however, it achieved higher transmission rates. It was additionally shown numerically and analytically that CQSM outperformed QSM for a relatively large number of transmit antennas.
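The two signal sets can be illustrated with a short NumPy sketch; QPSK and the rotation angle theta below are placeholders (the paper obtains the optimal angle by simulation and, for QPSK, analytically), and the antenna-index bit mapping of the spatial dimensions is omitted.

    import numpy as np

    def cqsm_symbol_sets(bits_per_sym=2, theta=np.pi / 4):
        """Sketch of the two CQSM constellations: a conventional PSK set and
        a rotated copy of it; theta is an illustrative value."""
        m = 2 ** bits_per_sym
        base = np.exp(1j * (2 * np.pi * np.arange(m) / m + np.pi / m))  # QPSK
        return base, base * np.exp(1j * theta)

    set1, set2 = cqsm_symbol_sets()
    # At each channel use, one symbol from set1 is sent on the "real" spatial
    # dimension and one from set2 on the "quadrature" dimension, which is what
    # doubles the constellation-bit payload relative to QSM.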
The Wilson contour integral approach is applied to resum the soft gluon radiative corrections to the quark form factors in the Sudakov regime. The one-loop order results for the quark-photon (color singlet form factor) and quark-gluon (color non-singlet form factor) vertices are presented. The explicit expressions for the vacuum averaged contour integrals to order $g^2$ are derived for an arbitrary gauge field. The corresponding one-loop cusp anomalous dimensions are found in the case of a perturbative gluon field in arbitrary covariant gauge. It is shown that the gauge dependence drops out from the leading high energy behavior.
In a generic supersymmetric extension of the Standard Model, whether unified or not, a simple and well motivated U(2) symmetry, acting on the lightest two generations, completely solves the flavour changing problem and necessarily leads to a predictive texture for the Yukawa couplings.
Given an invertible sheaf on a fibre space between projective varieties of positive characteristic, we show that fibrewise semi-ampleness implies relative semi-ampleness. The same statement fails in characteristic zero.
We analyze the high-resolution X-ray spectrum of the Seyfert 1 galaxy NGC 5548, for the full 0.1-10 keV band, using improved calibration results of the Chandra-LETGS instrument. The warm absorber consists of at least three ionization components, namely one with a low, medium and high ionization parameter. The X-ray absorbing material, from an outflowing wind, covers the full range of velocity components found from UV absorption lines. The presence of redshifted emission components for the strongest blue-shifted resonance absorption lines indicates that the absorber is located at a distance larger than the edge of the accretion disk. We derive an upper limit of 1 light year for the extent of the accretion disk. Absorption lines from ions of at least ten chemical elements have been detected, and in general for these elements there are no strong deviations from solar abundances. The narrow emission lines from the O VII and Ne IX forbidden and intercombination lines probably originate from much larger distances from the black hole. We find evidence for weak relativistically broadened oxygen and nitrogen emission lines from the inner parts of the accretion disk, but at a much smaller flux level than those observed in some other active galactic nuclei. In addition, there is a broad, non-relativistic C VI Ly alpha emission line that is consistent with emission lines from the inner part of the optical/UV broad line region.
We use Toponogov's triangle comparison theorem from Riemannian geometry along with quantitative scale oriented variants of classical propagation of singularities arguments to obtain logarithmic improvements of the Kakeya-Nikodym norms introduced in \cite{SKN} for manifolds of nonpositive sectional curvature. Using these and results from our paper \cite{BS15} we are able to obtain log-improvements of $L^p(M)$ estimates for such manifolds when $2<p<\tfrac{2(n+1)}{n-1}$. These in turn imply $(\log\lambda)^{\sigma_n}$, $\sigma_n\approx n$, improvements of the lower bounds for $L^1$-norms of eigenfunctions due to the second author and Zelditch~\cite{SZ11}, and using a result from Hezari and the second author~\cite{HS}, under this curvature assumption, we are able to improve the lower bounds for the size of nodal sets of Colding and Minicozzi~\cite{CM} by a factor of $(\log \lambda)^{\mu}$ for any $\mu<\tfrac{2(n+1)^2}{n-1}$, if $n\ge3$.
We address second-order optimality conditions for optimal control problems involving sparsity functionals which induce spatio-temporal sparsity patterns. We employ the notion of (weak) second subderivatives. With this approach, we are able to reproduce the results from Casas, Herzog, and Wachsmuth (ESAIM COCV, 23, 2017, p. 263-295). Our analysis yields a slight improvement of one of these results and also opens the door for the sensitivity analysis of this class of problems.
Doubly heavy baryons $\left(QQq\right)$ and singly heavy antimesons $\left(\bar{Q}q\right)$ are related by the heavy quark-diquark (HQDQ) symmetry because in the $m_Q \to \infty$ limit, the light degrees of freedom in both hadrons are expected to be in identical configurations. Hyperfine splittings of the ground states in both systems are nonvanishing at $O(1/m_Q)$ in the heavy quark mass expansion, and HQDQ symmetry relates the hyperfine splittings in the two sectors. In this paper, working within the framework of Non-Relativistic QCD (NRQCD), we point out the existence of an operator that couples four heavy quark fields to the chromomagnetic field with a coefficient that is enhanced by a factor from Coulomb exchange. This operator gives a correction to doubly heavy baryon hyperfine splittings that scales as $1/m_Q^2 \times \alpha_s/r$, where $r$ is the separation between the heavy quarks in the diquark. This correction can be calculated analytically in the extreme heavy quark limit in which the potential between the quarks in the diquark is Coulombic. In this limit, the correction is $O(\alpha_s^2/m_Q)$ and comes with a small coefficient. For values of $\alpha_s$ relevant to doubly charm and doubly bottom systems, the correction to the hyperfine splittings in doubly heavy baryons is only a few percent or smaller. We also argue that nonperturbative corrections to the prediction for the hyperfine splittings are suppressed by $\Lambda^2_{\rm QCD}/m_Q^2$ rather than $\Lambda_{\rm QCD}/m_Q$. Corrections should be $\approx 10\%$ in the charm sector and smaller in heavier systems.
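A quick consistency check of the quoted scalings, using the standard Coulombic estimate for the diquark radius (a sketch, not the paper's derivation):

    % In the extreme heavy-quark limit the diquark is Coulombic, so its
    % radius scales like a Bohr radius; substituting into the quoted scaling:
    r \sim \frac{1}{m_Q\,\alpha_s}
    \quad\Longrightarrow\quad
    \frac{1}{m_Q^{2}}\times\frac{\alpha_s}{r} \;\sim\; \frac{\alpha_s^{2}}{m_Q}.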
Hot-carrier cooling (HCC) in metal halide perovskites in the high-density regime is significantly slower than in conventional semiconductors. This effect is commonly attributed to a hot-phonon bottleneck, but the influence of the lattice properties on the HCC behaviour is poorly understood. Using pressure-dependent transient absorption spectroscopy (fs-TAS), we find that at excitation densities below the Mott transition, pressure does not affect the HCC. On the contrary, above the Mott transition, HCC in methylammonium lead iodide (MAPbI3) is about twice as fast at 0.3 GPa as at ambient pressure. Our electron-phonon coupling calculations reveal about two times stronger electron-phonon coupling for the inorganic cage mode at 0.3 GPa. However, our experiments reveal that pressure promotes faster HCC only above the Mott transition. Altogether, these findings suggest a change in the nature of excited carriers in the high-density regime, providing insight into the electronic behavior of devices operating at such high charge-carrier densities.
Efficient optical quantum memories are a milestone required for several quantum technologies including repeater-based quantum key distribution and on-demand multi-photon generation. We present an efficiency optimization of an optical electromagnetically induced transparency (EIT) memory experiment in a warm cesium vapor using a genetic algorithm and analyze the resulting waveforms. The control pulse is represented either as a Gaussian or free-form pulse, and the results from the optimization are compared. We see an improvement factor of 3(7)\% when using optimized free-form pulses. By limiting the allowed pulse energy in a solution, we show an energy-based optimization giving a 30% reduction in energy, with minimal efficiency loss.
With the advent of high precision neutrino scattering experiments comes the need for improved radiative corrections. We present a phenomenological analysis of some contributions to the production of photons in neutrino neutral-current scattering that are relevant to experiments probing the 1% level.
The most pristine remnants of the Solar system's planet formation epoch orbit the Sun beyond Neptune, the small bodies of the trans-Neptunian object populations. The bulk of the mass is in ~100 km objects, but objects at smaller sizes have undergone minimal collisional processing, with New Horizons recently revealing that the ~20 km effective diameter body (486958) Arrokoth appears to be a primordial body, not a collisional fragment. This indicates that bodies at these sizes (and perhaps smaller) retain a record of how they were formed, and are the most numerous record of that epoch. However, such bodies are impractical to find by optical surveys due to their very low brightnesses. Their presence can be inferred from the observed cratering record of Pluto and Charon, and directly measured by serendipitous stellar occultations. These two methods produce conflicting results, with occultations measuring roughly ten times the number of ~km bodies inferred from the cratering record. We use numerical models to explore how these observations can be reconciled with evolutionary models of the outer Solar system. We find that models where the initial size of bodies decreases with increasing semimajor axis of formation, and models where the surface density of bodies increases beyond the 2:1 mean-motion resonance with Neptune can produce both sets of observations, though comparison to various observational tests favours the former mechanism. We discuss how to evaluate the astrophysical plausibility of these solutions, and conclude that extended serendipitous occultation surveys with broad sky coverage are the most practical approach.
In this work, we study the effects on the relevant observational parameters of an inflationary universe from a chaotic potential with a step. We numerically evolve the perturbation equations within both cold inflation and warm inflation. On the one hand, in a cold inflation scenario we analyse the scalar power spectrum $P_{\mathcal{R}}$ in terms of the number of e-folds $N_{e}$, and in terms of the ratio $k/k_{0}$, where $k_{0}$ is our pivot scale. We show how $P_{\mathcal{R}}$ oscillates around $0.2< k/k_{0} < 20$. Additionally, we present the evolution of two relevant parameters: the scalar spectral index $n_\mathrm{s}$ and the tensor-to-scalar ratio $r$. In fact, more than one region of $(n_\mathrm{s},r)$ lies within the observable window (Planck 2018). On the other hand, in the warm inflationary case, we also examine the evolution of $P_{\mathcal{R}}$ in terms of $N_{e}$ and $k/k_{0}$. Perturbations are amplified in warm inflation; in fact, $P_{\mathcal{R}}$ can be much larger than the CMB value $P_{\mathcal{R}}> 2.22\times 10^{-9}$. This time, the spectral index $n_\mathrm{s}$ is clearly blue-tilted at smaller scales, and the tensor-to-scalar ratio $r$ becomes too low. However, $n_\mathrm{s}$ can change from blue-tilted towards red-tilted, since $P_{\mathcal{R}}$ starts oscillating around $k/k_{0}\sim 40$. Indeed, the result from the step potential skims the Planck contours. Finally, one key aspect of this research was to contrast the features of an inflationary potential between both paradigms, and, in fact, they show similarities and differences. Due to a featured background and a combined effect of entropy fluctuations (only in warm inflation), in both scenarios certain fluctuation scales are no longer ``frozen in'' on super-horizon scales.
This paper proposes a novel Stochastic Split Linearized Bregman Iteration ($S^{2}$-LBI) algorithm to efficiently train deep networks. The $S^{2}$-LBI introduces an iterative regularization path with structural sparsity. Our $S^{2}$-LBI combines the computational efficiency of the LBI with model selection consistency in learning the structural sparsity. The computed solution path intrinsically enables us to enlarge or simplify a network, which, theoretically, benefits from the dynamics property of our $S^{2}$-LBI algorithm. The experimental results validate our $S^{2}$-LBI on the MNIST and CIFAR-10 datasets. For example, on MNIST we can either grow a network with only 1.5K parameters (1 convolutional layer of 5 filters and 1 FC layer) that achieves 98.40\% recognition accuracy, or simplify away $82.5\%$ of the parameters in the LeNet-5 network while still achieving 98.47\% recognition accuracy. In addition, we also have learning results on ImageNet, which will be added in the next version of our report.
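For readers unfamiliar with Linearized Bregman Iteration, the following NumPy sketch shows the deterministic LBI update for a sparse least-squares problem; the paper's $S^{2}$-LBI adds stochastic splitting over network parameters, which is not reproduced here, and kappa, alpha, and the iteration count are illustrative assumptions.

    import numpy as np

    def lbi(X, y, kappa=5.0, alpha=1e-3, iters=5000):
        """Deterministic LBI for sparse least squares: z accumulates
        gradients, w is the sparse iterate along a regularization path."""
        n, p = X.shape
        z = np.zeros(p)
        w = np.zeros(p)
        for _ in range(iters):
            grad = X.T @ (X @ w - y) / n                # least-squares gradient
            z -= alpha * grad                           # Bregman / dual update
            w = kappa * np.sign(z) * np.maximum(np.abs(z) - 1.0, 0.0)  # shrinkage
        return w

    # Early iterates are very sparse and support grows along the path, which
    # is the mechanism the paper exploits for enlarging or pruning a network.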
Although significant recent progress has been made in improving the multi-core scalability of high throughput transactional database systems, modern systems still fail to achieve scalable throughput for workloads involving frequent access to highly contended data. Most of this inability to achieve high throughput is explained by the fundamental constraints involved in guaranteeing ACID --- the addition of cores results in more concurrent transactions accessing the same contended data for which access must be serialized in order to guarantee isolation. Thus, linear scalability for contended workloads is impossible. However, there exist flaws in many modern architectures that exacerbate their poor scalability, and result in throughput that is much worse than fundamentally required by the workload. In this paper we identify two prevalent design principles that limit the multi-core scalability of many (but not all) transactional database systems on contended workloads: the multi-purpose nature of execution threads in these systems, and the lack of advanced planning of data access. We demonstrate the deleterious results of these design principles by implementing a prototype system, ORTHRUS, that is motivated by the principles of separation of database component functionality and advanced planning of transactions. We find that these two principles alone result in significantly improved scalability on high-contention workloads, and an order of magnitude increase in throughput for a non-trivial subset of these contended workloads.
We calculate the effects of finite density of isospin asymmetric strange hadronic matter, for different strangeness fractions, on the in-medium properties of vector $\left( D^{\ast}, D_{s}^{\ast}, B^{\ast}, B_{s}^{\ast}\right)$ and axial-vector $\left( D_{1}, D_{1s}, B_{1}, B_{1s}\right)$ mesons using the chiral hadronic SU(3) model and QCD sum rules. We focus on the evaluation of the in-medium mass-shift and the shift of the decay constant of the above vector and axial-vector mesons. In the QCD sum rule approach, the properties, e.g. the masses and decay constants, of vector and axial-vector mesons are written in terms of quark and gluon condensates. These quark and gluon condensates are evaluated in the present work using the chiral SU(3) model through the medium modification of the scalar-isoscalar fields $\sigma$ and $\zeta$, the scalar-isovector field $\delta$ and the scalar dilaton field $\chi$ in strange hadronic medium which includes both nucleons as well as hyperons. As we shall see in detail, the masses and decay constants of heavy vector and axial-vector mesons are affected significantly by the isospin asymmetry and strangeness fraction of the medium, and these modifications may influence the experimental observables produced in heavy-ion collision experiments. The results of the present investigation of the in-medium properties of vector and axial-vector mesons at finite density of strange hadronic medium may be helpful for understanding the experimental data from heavy-ion collision experiments, in particular for the Compressed Baryonic Matter (CBM) experiment at the FAIR facility at GSI, Germany.
We enumerate interlaced pairs of parking functions whose underlying Dyck path has a bounded height. We obtain an explicit formula for this enumeration in the form of a quotient of analogs of Chebyshev polynomials having coefficients in the ring of symmetric functions.
We generalize Moore's nonstandard proof of the Spectral theorem for bounded self-adjoint operators to the case of unbounded operators. The key step is to use a definition of the nonstandard hull of an internally bounded self-adjoint operator due to Raab.
The radiative capture process $n p \to d \gamma$ is considered within the framework of a recently developed six-quark dressed-bag model for the nucleon-nucleon interaction. The calculations presented here include both the nucleon current and the meson-exchange current contributions. The latter use short-range hadronic form factors for the pion exchange currents consistent with the soft cut-off parameter $\Lambda_{\pi NN}$ from the $NN$-potential. Contributions of the pion exchange current and $\Delta$-isobar current to the total cross section still cannot explain the discrepancy between the theoretical and experimental cross sections. Possibilities for new types of meson exchange currents associated with chiral fields inside multi-quark dressed-bag states in nuclei are discussed.
Careful prompt design is critical to the use of large language models in zero-shot or few-shot learning. As a consequence, there is a growing interest in automated methods to design optimal prompts. In this work, we propose Test-time Prompt Editing using Reinforcement learning (TEMPERA). In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge, is adaptive to different queries and provides an interpretable prompt for every query. To achieve this, we design a novel action space that allows flexible editing of the initial prompts, covering a wide set of commonly-used components like instructions, few-shot exemplars, and verbalizers. The proposed method achieves significant gains compared with recent SoTA approaches like prompt tuning, AutoPrompt, and RLPrompt, across a variety of tasks including sentiment analysis, topic classification, natural language inference, and reading comprehension. Our method achieves an average 5.33x improvement in sample efficiency compared to traditional fine-tuning methods.
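A toy sketch of what an edit-based action space over prompt components might look like in Python; the action names, prompt structure, and concrete edits are hypothetical placeholders, and the RL policy that TEMPERA actually learns over such actions is omitted.

    import random

    # Hypothetical prompt = (instruction, list of few-shot exemplars, verbalizer).
    ACTIONS = ("swap_exemplars", "append_instruction", "swap_verbalizer")

    def apply_action(prompt, action, rng=random):
        """Apply one discrete edit to a structured prompt and return it."""
        instr, exemplars, verbalizer = prompt
        exemplars = list(exemplars)
        if action == "swap_exemplars" and len(exemplars) > 1:
            i, j = rng.sample(range(len(exemplars)), 2)   # reorder two exemplars
            exemplars[i], exemplars[j] = exemplars[j], exemplars[i]
        elif action == "append_instruction":
            instr += " Answer with a single word."        # placeholder rewrite
        elif action == "swap_verbalizer":
            verbalizer = {"positive": "great", "negative": "terrible"}  # placeholder
        return instr, exemplars, verbalizer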
The Einstein-Podolsky-Rosen (EPR) paradox is one of the milestones in quantum foundations, arising from the lack of local realistic description of quantum mechanics. The EPR paradox has stimulated an important concept of "quantum nonlocality", which manifests itself in three different types: quantum entanglement, quantum steering, and Bell nonlocality. Although Bell nonlocality is more often used to show "quantum nonlocality", the original EPR paradox is essentially a steering paradox. In this work, we formulate the original EPR steering paradox into a contradiction equality, thus making it amenable to an experimental verification. We perform an experimental test of the steering paradox in a two-qubit scenario. Furthermore, by starting from the steering paradox, we generate a generalized linear steering inequality and transform this inequality into a mathematically equivalent form, which is more friendly for experimental implementation, i.e., one may only measure the observables along the $x$-, $y$-, or $z$-axis of the Bloch sphere, rather than other arbitrary directions. We also perform experiments to demonstrate this scheme. Within the experimental errors, the experimental results coincide with the theoretical predictions. Our results deepen the understanding of quantum foundations and provide an efficient way to detect the steerability of quantum states.
We often seek to estimate the causal effect of an exposure on a particular outcome in both randomized and observational settings. One such estimation method is the covariate-adjusted residuals estimator, which was designed for individually or cluster randomized trials. In this manuscript, we study the properties of this estimator and develop a new estimator that utilizes both covariate adjustment and inverse probability weighting. We support our theoretical results with a simulation study and an application in an infectious disease setting. The covariate-adjusted residuals estimator is an efficient and unbiased estimator of the average treatment effect in randomized trials; however, it is not guaranteed to be unbiased in observational studies. Our novel estimator, the covariate-adjusted residuals estimator with inverse probability weighting, is unbiased in randomized and observational settings, under a reasonable set of assumptions. Furthermore, when these assumptions hold, it provides efficiency gains over inverse probability weighting in observational studies. The covariate-adjusted residuals estimator is valid for use in randomized trials, but should not be used in observational studies. The covariate-adjusted residuals estimator with inverse probability weighting provides an efficient alternative for use in randomized and observational settings.
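A minimal sketch of the combined estimator, assuming a linear outcome model and a logistic propensity model (the manuscript's exact specification may differ); here y is the outcome, a the binary exposure, and X the covariate matrix.

    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    def car_ipw(y, a, X):
        """Covariate-adjusted residuals with inverse probability weighting."""
        # Covariate-adjusted residuals: remove the covariate signal from y.
        r = y - LinearRegression().fit(X, y).predict(X)
        # Inverse probability weights from an estimated propensity score.
        e = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]
        w1, w0 = a / e, (1 - a) / (1 - e)
        # Weighted difference in mean residuals estimates the average effect.
        return np.sum(w1 * r) / np.sum(w1) - np.sum(w0 * r) / np.sum(w0)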
It is shown that for a certain class of the Kato functions the Trotter-Kato product formulae converge in the Dixmier ideal $\mathcal{C}_{1,\infty}$, in the topology defined by the $\|\cdot\|_{1,\infty}$-norm. Moreover, the rate of convergence in this topology inherits the error-bound estimate for the corresponding operator-norm convergence. A subtle point of this program is the question of the rate of convergence in the corresponding topology. Since the limit of the Trotter-Kato product formula is a strongly continuous semigroup, for the von Neumann-Schatten ideals this topology is given by the trace-norm $\|\cdot\|_{1}$ on the trace-class ideal $\mathcal{C}_{1}(\mathcal{H})$; in this case the limit is a Gibbs semigroup [25]. For self-adjoint Gibbs semigroups the rate of convergence was estimated for the first time in [7] and [9], where the authors considered the case of Gibbs-Schr{\"o}dinger semigroups and scrutinised the dependence of the rate of convergence for the (exponential) Trotter formula on the smoothness of the potential in the Schr{\"o}dinger generator. The first abstract result in this direction was due to [19], where a general scheme of lifting the operator-norm rate of convergence to the Trotter-Kato product formulae was proposed and advocated for estimating the rate of trace-norm convergence.
We prove a new global existence result for the asymptotically flat, spherically symmetric Einstein-Vlasov system which describes in the framework of general relativity an ensemble of particles which interact by gravity. The data are such that initially all the particles are moving radially outward and that this property can be bootstrapped. The resulting non-vacuum spacetime is future geodesically complete.
The terrestrial fossil record shows that the exponential rise in biodiversity since the Precambrian period has been punctuated by large extinctions, at intervals of 40 to 140 Myr. These mass extinctions represent extremes over a background of smaller events and the natural process of species extinction. We point out that the non-terrestrial phenomena proposed to explain these events, such as bolide impacts (a candidate for the end-Cretaceous extinction), and nearby supernovae, are collectively far more effective during the solar system's traversal of spiral arms. Using the best available data on the location and kinematics of the Galactic spiral structure (including distance scale and kinematic uncertainties), we present evidence that arm crossings provide a viable explanation for the timing of the large extinctions.
The photoluminescence (PL) spectrum of a two-dimensional electron gas (2DEG) in the fractional quantum Hall regime is studied. The response of the 2DEG to an optically injected valence hole depends on the separation d between the electron and hole layers. At d smaller than the magnetic length lambda, the PL spectrum shows recombination of neutral (X) and charged (X-) excitons. At d>lambda, the hole binds one or two Laughlin quasielectrons (QE) of the 2DEG to form fractionally charged excitons (FCX), hQE or hQE2. Different FCX states have different optical properties, and their stability depends critically on the presence of QE's in the 2DEG. This explains discontinuities observed in the PL spectrum at such (Laughlin) filling factors as nu=1/3 or 2/3.
We study the effect of varying sound speed on clustering dark energy in the Dirac-Born-Infeld (DBI) scenario. The DBI action belongs to the class of $k$-essence models, and it plays an important role in describing the effective degrees of freedom of D-branes in string theory. In the DBI setup, we take the anti-de Sitter (AdS) warp factor $f(\phi)=f_0\, \phi^{-4}$, and investigate the self-interacting quartic potential $V(\phi)=\lambda\phi^{4}/4$. We calculate the full expression of the effective sound speed for our model, and show that it can evolve with time during the cosmological evolution. Besides, the adiabatic sound speed evolves with time here, and this influences the background dynamics to some extent. We show that the effective sound speed is very close to the adiabatic sound speed. We examine the effect of the variable sound speed on the growth of perturbations in both the linear and non-linear regimes. In the linear regime, we apply the Pseudo-Newtonian formalism, and show that dark energy suppresses the growth of perturbations at low redshifts. From studying the integrated Sachs-Wolfe (ISW) effect in our setup, we see that the model manifests some deviation from the concordance $\Lambda$CDM model. In the non-linear regime, we follow the approach of the spherical collapse model, and calculate the linear overdensity, the virial overdensity, the overdensity at turn-around and the rate of expansion of the collapsed region. We further compute the relative number density of halo objects above a given mass in our setting, and show that the number of structures with respect to the $\Lambda$CDM model is reduced more in the high-mass tail at high redshifts.
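For reference, the textbook DBI expressions behind this setup can be sketched as follows (assuming the standard DBI Lagrangian; the paper's full expression for the effective sound speed may contain additional model-specific pieces):

    % Standard DBI k-essence Lagrangian and its sound speed, with
    % f(\phi)=f_0\phi^{-4} and V(\phi)=\lambda\phi^4/4 as in the abstract:
    P(X,\phi) = -\frac{1}{f(\phi)}\left(\sqrt{1-2f(\phi)X}-1\right) - V(\phi),
    \qquad X = \tfrac{1}{2}\dot\phi^{2},
    \qquad
    c_s^{2} = \frac{P_{,X}}{P_{,X}+2XP_{,XX}} = 1 - f(\phi)\dot\phi^{2}.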
We apply the Adversarial NLI dataset to train an NLI model and show that the model has the potential to enhance factual correctness in abstractive summarization. We follow the work of Falke et al. (2019), which ranks multiple generated summaries based on the entailment probabilities between a source document and the summaries, and selects the summary with the highest entailment probability. The authors' earlier study concluded that current NLI models are not sufficiently accurate for the ranking task. We show that Transformer models fine-tuned on the new dataset achieve significantly higher accuracy and have the potential of selecting a coherent summary.
The synthesis of antimonene, which is a promising group-V 2D material for both fundamental studies and technological applications, remains highly challenging. Thus far, it has been synthesized only by exfoliation or growth on a few substrates. In this study, we show that thin layers of antimonene can be grown on Ag (111) by molecular beam epitaxy. High-resolution scanning tunneling microscopy combined with theoretical calculations revealed that the submonolayer Sb deposited on a Ag (111) surface forms a layer of AgSb2 surface alloy upon annealing. Further deposition of Sb on the AgSb2 surface alloy causes an epitaxial layer of Sb to form, which is identified as antimonene with a buckled honeycomb structure. More interestingly, the lattice constant of the epitaxial antimonene (5 {\AA}) is much larger than that of freestanding antimonene, indicating a high tensile strain of more than 20%. This kind of large strain is expected to make the antimonene a highly promising candidate for room-temperature quantum spin Hall material.
The Time Projection Chamber (TPC) has been recognized as a potentially powerful detector for the search of WIMPs by measuring the directions of nuclear recoils, in which the most convincing signature of WIMPs, caused by the Earth's motion around the Galaxy, appears. We report on the first results of a performance study of the neutron exposure of our prototype micro-TPC with Ar-C$_2$H$_6$ (90:10) and CF$_4$ gas of 150 Torr.
The short coherence lengths characteristic of low-dimensional superconductors are associated with usefully high critical fields or temperatures. Unfortunately, such materials are often sensitive to disorder and suffer from phase fluctuations in the superconducting order parameter which diverge with temperature $T$, magnetic field $H$ or current $I$. We propose an approach to overcome synthesis and fluctuation problems: building superconductors from inhomogeneous composites of nanofilaments. Macroscopic crystals of quasi-one-dimensional Na$_{2-\delta}$Mo$_6$Se$_6$ featuring Na vacancy disorder ($\delta\approx$~0.2) are shown to behave as percolative networks of superconducting nanowires. Long range order is established via transverse coupling between individual one-dimensional filaments, yet phase coherence remains unstable to fluctuations and localization in the zero-($T$,$H$,$I$) limit. However, a region of reentrant phase coherence develops upon raising ($T$,$H$,$I$). We attribute this phenomenon to an enhancement of the transverse coupling due to electron delocalization. Our observations of reentrant phase coherence coincide with a peak in the Josephson energy $E_J$ at non-zero ($T$,$H$,$I$), which we estimate using a simple analytical model for a disordered anisotropic superconductor. Na$_{2-\delta}$Mo$_6$Se$_6$ is therefore a blueprint for a future generation of nanofilamentary superconductors with inbuilt resilience to phase fluctuations at elevated ($T$,$H$,$I$).
A weakly consecutive sequence (WCS) is a permutation $\sigma$ of $\{1, \ldots, k\}$ such that if an integer $d$ divides $\sigma(i)$, then $d$ also divides $\sigma(i \pm d)$ insofar as these are defined. The structure of weakly consecutive sequences is surprisingly rich, and it is difficult to find a formula for the number $N(k)$ of WCS's of length $k$. However, for a given $k$ we describe four starting sequences, to each of which we can apply three \emph{rules} or operations to generate new WCS's. We conjecture that any WCS can be constructed by applying these rules, which depend in an intricate way on the primality of $k$ and surrounding integers. We find bounds for $N(k)$ by analyzing these rules.
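The definition translates directly into a brute-force check, so $N(k)$ can be computed for small $k$ with a few lines of Python (exponential in $k$, hence practical only for short lengths; the function names are ours):

    from itertools import permutations

    def is_wcs(sigma):
        """Check the WCS condition: if d divides sigma(i), then d divides
        sigma(i-d) and sigma(i+d) whenever those positions exist.
        Positions are 1-indexed, so sigma[i-1] plays the role of sigma(i)."""
        k = len(sigma)
        for i in range(1, k + 1):
            v = sigma[i - 1]
            for d in range(2, v + 1):          # d = 1 holds trivially
                if v % d == 0:
                    for j in (i - d, i + d):
                        if 1 <= j <= k and sigma[j - 1] % d != 0:
                            return False
        return True

    def N(k):
        """Count weakly consecutive sequences of length k by brute force."""
        return sum(1 for p in permutations(range(1, k + 1)) if is_wcs(p))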
I develop an extension of the usual three-flavor quark model to four flavors (u, d, s and c), and discuss the classification of pentaquark states with hidden charm. This work is motivated by the recent observation of such states by the LHCb Collaboration at CERN.
We have investigated the cross-over from Zener tunneling of single charge carriers to avalanche-type bunched electron transport in a suspended graphene Corbino disk in the zeroth Landau level. At low bias, we find a tunneling current that follows the gyrotropic Zener tunneling behavior. At larger bias, we find avalanche-type transport that sets in at a current that decreases as the magnetic field increases. The low-frequency noise indicates strong bunching of the electrons in the avalanches. On the basis of the measured low-frequency switching noise power, we deduce the characteristic switching rates of the avalanche sequence. The simultaneous microwave shot noise measurement also reveals intrinsic correlations within the avalanche pulses and indicates a decrease of correlations with increasing bias.
We present the Evolving Graph Fourier Transform (EFT), the first invertible spectral transform that captures evolving representations on temporal graphs. Our work is motivated by the inadequacy of existing methods at capturing evolving graph spectra, which are also computationally expensive due to the temporal dimension on top of the graph vertex domain. We view the problem as an optimization over the Laplacian of the continuous-time dynamic graph. Additionally, we propose pseudo-spectrum relaxations that decompose the transformation process, making it highly computationally efficient. The EFT method adeptly captures the evolving graph's structural and positional properties, making it effective for downstream tasks on evolving graphs. Hence, as a reference implementation, we develop a simple neural model induced with EFT for capturing evolving graph spectra. We empirically validate our theoretical findings on a number of large-scale and standard temporal graph benchmarks and demonstrate that our model achieves state-of-the-art performance.
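For contrast with the EFT, the static baseline it generalizes, a per-snapshot graph Fourier transform, can be sketched in a few lines of NumPy; the EFT itself (the optimization over the continuous-time Laplacian with pseudo-spectrum relaxations) is not reproduced here.

    import numpy as np

    def snapshot_gft(adjacencies, signals):
        """Per-snapshot graph Fourier transform: for each (symmetric)
        adjacency matrix A and vertex signal x, project x onto the
        eigenbasis of the combinatorial Laplacian."""
        out = []
        for A, x in zip(adjacencies, signals):
            L = np.diag(A.sum(1)) - A          # combinatorial Laplacian
            lam, U = np.linalg.eigh(L)         # graph spectrum and eigenbasis
            out.append(U.T @ x)                # forward GFT; invert with U @ xhat
        return out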