We consider the excitation of water in the Photon Dominated Region (PDR). With the use of a three-dimensional escape probability method we compute the level populations of ortho- and para-H_2O up to 350 K (i.e., 8 levels), as well as line intensities for various transitions. Homogeneous and inhomogeneous models are presented with densities of 10^4-10^5 cm^{-3}, and the differences between the resulting intensities are displayed. Density, temperature, and abundance distributions inside the cloud are computed with the use of a self-consistent physico-chemical (in)homogeneous model in order to reproduce the line intensities observed with SWAS, and to make predictions for various lines that HIFI will probe in the future. Line intensities vary from 10^{-13} erg cm^{-2} s^{-1} sr^{-1} to a few times 10^{-6} erg cm^{-2} s^{-1} sr^{-1}. We can reproduce the intensity of the 1_{10}-1_{01} line observed by the SWAS satellite. It is found that the 2_{12}-1_{01} line is the strongest, whereas the 3_{12}-2_{21} line is the weakest, in all the models. We also find that the 1_{10}-1_{01} line probes the total column, while higher excitation lines probe the higher density gas (e.g., clumps).
Moerdijk's site description for equivariant sheaf toposes on open topological groupoids is used to give a proof for the (known, but apparently unpublished) proposition that if H is a strictly full subgroupoid of an open topological groupoid G, then the topos of equivariant sheaves on H is a subtopos of the topos of equivariant sheaves on G. This proposition is then applied to the study of quotient geometric theories and subtoposes. In particular, an intrinsic characterization is given of those subgroupoids that are definable by quotient theories.
The results of observations of the giant 1998 August 27 outburst in SGR 1900+14 are presented. A comparison is made of the two extremely intense events on August 27, 1998 and March 5, 1979. The striking similarity between the outbursts strongly implies a common nature. The observation of two giant outbursts within 20 years from different sources suggests that such events occur in an SGR once every 50-100 years.
Owners and developers of deep learning models must now consider stringent privacy-preservation rules for their training data, which is usually crowd-sourced and retains sensitive information. The most widely adopted method to enforce privacy guarantees of a deep learning model relies on optimization techniques enforcing differential privacy. According to the literature, this approach has proven to be a successful defence against several privacy attacks on models, but its downside is a substantial degradation of the models' performance. In this work, we compare the effectiveness of the differentially-private stochastic gradient descent (DP-SGD) algorithm against standard optimization practices with regularization techniques. We analyze the resulting models' utility, training performance, and the effectiveness of membership inference and model inversion attacks against the learned models. Finally, we discuss differential privacy's flaws and limits and empirically demonstrate the often superior privacy-preserving properties of dropout and L2 regularization.
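As a concrete point of reference for the non-private baselines compared above, here is a minimal PyTorch sketch of a classifier trained with dropout and L2 regularization (via weight decay); the architecture and hyperparameters are illustrative placeholders, not the paper's experimental setup.

```python
# Minimal sketch (not the paper's setup): a classifier regularized with
# dropout and L2 weight decay, the non-private baselines compared against DP-SGD.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),          # dropout regularization
    nn.Linear(256, 10),
)

# L2 regularization enters through the optimizer's weight_decay term.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(x, y):
    """One standard (non-DP) optimization step with both regularizers active."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```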
Statistical observations of the epoch of reionization (EOR) power spectrum provide a rich data set for understanding the transition from the cosmic "dark ages" to the ionized universe we see today. EOR observations have become an active area of experimental cosmology, and three first generation observatories--MWA, PAST, and LOFAR--are currently under development. In this paper we provide the first quantitative calculation of the three dimensional power spectrum sensitivity, incorporating the design parameters of a planned array. This calculation is then used to explore the constraints these first generation observations can place on the EOR power spectrum. The results demonstrate the potential of upcoming power spectrum observations to constrain theories of structure formation and reionization.
The aim of the present paper is to obtain some new fractional integral inequalities for convex functions. The Saigo fractional integral operator is used to establish the results.
The 'classical interpretation' of the wave function psi(x) reveals an interesting operational aspect of the Helmholtz spectra. It is shown that the traditional Sturm-Liouville problem contains the simplest key to predict the squeezing effect for charged particle states.
The connection between a Taylor series and a continued-fraction involves a nonlinear relation between the Taylor coefficients $\{ a_n \}$ and the continued-fraction coefficients $\{ b_n \}$. In many instances it turns out that this nonlinear relation transforms a complicated sequence $\{a_n \}$ into a very simple one $\{ b_n \}$. We illustrate this simplification in the context of graph combinatorics.
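To make the coefficient map concrete, the following sketch assumes the particular form f(x) = b_0/(1 + b_1 x/(1 + b_2 x/(1 + ...))) (the paper's continued-fraction convention may differ) and recovers the b_n from the a_n by repeated series reciprocation; the Catalan numbers, which count Dyck paths, illustrate how a rapidly growing sequence a_n collapses to a constant sequence b_n.

```python
# Sketch: recover continued-fraction coefficients b_n from Taylor coefficients a_n
# for a series written as  f(x) = b0 / (1 + b1*x / (1 + b2*x / (1 + ...))).
# The exact CF form used in the paper may differ; this only illustrates the idea.
from fractions import Fraction

def series_reciprocal(a, order):
    """Taylor coefficients of 1/f given those of f (requires a[0] != 0), up to 'order'."""
    r = [Fraction(1, 1) / a[0]]
    for n in range(1, order):
        s = sum(a[k] * r[n - k] for k in range(1, n + 1))
        r.append(-s / a[0])
    return r

def cf_coefficients(a, m):
    """First m coefficients b_0 .. b_{m-1} of the continued fraction above."""
    a = [Fraction(x) for x in a]
    coeffs = []
    for _ in range(m):
        b = a[0]
        coeffs.append(b)
        rec = series_reciprocal(a, len(a))
        tail = [b * rec[k] for k in range(len(rec))]   # b / f(x)
        tail[0] -= 1                                   # b / f(x) - 1
        a = tail[1:]                                   # divide by x: drop the zero constant
        if not a:
            break
    return coeffs

# Catalan numbers 1, 1, 2, 5, 14, ... collapse to the constant sequence 1, -1, -1, ...
catalan = [1, 1, 2, 5, 14, 42, 132, 429, 1430]
print([int(b) for b in cf_coefficients(catalan, 5)])   # [1, -1, -1, -1, -1]
```

Exact rational arithmetic keeps the transformation free of rounding artifacts, which matters because the map from a_n to b_n is highly nonlinear.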
We study a generalized Einstein theory with the following two criteria:{\it i}) on the solar scale, it must be consistent with the classical tests of general relativity, {\it ii}) on the galactic scale, the gravitational potential is a sum of Newtonian and Yukawa potentials so that it may explain the flat rotation curves of spiral galaxies. Under these criteria, we find that such a generalized Einstein action must include at least one scalar field and one vector field as well as the quadratic term of the scalar curvature.
Understanding the collective quantum dynamics of nonequilibrium many-body systems is an outstanding challenge in quantum science. In particular, dynamics driven by quantum fluctuations are important for the formation of exotic quantum phases of matter \cite{altman2023quantum}, fundamental high-energy processes \cite{bauer2023highenergy}, quantum metrology \cite{degen2017sensing, li2023scrambling}, and quantum algorithms \cite{ebadi2022quantum}. Here, we use a programmable quantum simulator based on Rydberg atom arrays to experimentally study collective dynamics across a (2+1)D Ising quantum phase transition. After crossing the quantum critical point, we observe a gradual growth of correlations through coarsening of antiferromagnetically ordered domains~\cite{Samajdar2024}. By deterministically preparing and following the evolution of ordered domains, we show that the coarsening is driven by the curvature of domain boundaries, and find that the dynamics accelerate with proximity to the quantum critical point. We quantitatively explore these phenomena and further observe long-lived oscillations of the order parameter, corresponding to an amplitude (Higgs) mode \cite{pekker2015amplitude}. These observations offer a unique viewpoint into emergent collective dynamics in strongly correlated quantum systems and nonequilibrium quantum processes.
Although Large Language Models (LLMs) are becoming increasingly powerful, they still exhibit significant but subtle weaknesses, such as mistakes in instruction-following or coding tasks. As these unexpected errors could lead to severe consequences in practical deployments, it is crucial to investigate the limitations within LLMs systematically. Traditional benchmarking approaches cannot thoroughly pinpoint specific model deficiencies, while manual inspections are costly and not scalable. In this paper, we introduce a unified framework, AutoDetect, to automatically expose weaknesses in LLMs across various tasks. Inspired by the educational assessment process that measures students' learning outcomes, AutoDetect consists of three LLM-powered agents: Examiner, Questioner, and Assessor. The collaboration among these three agents is designed to realize comprehensive and in-depth weakness identification. Our framework demonstrates significant success in uncovering flaws, with an identification success rate exceeding 30% in prominent models such as ChatGPT and Claude. More importantly, these identified weaknesses can guide specific model improvements, proving more effective than untargeted data augmentation methods like Self-Instruct. Our approach has led to substantial enhancements in popular LLMs, including the Llama series and Mistral-7b, boosting their performance by over 10% across several benchmarks. Code and data are publicly available at https://github.com/thu-coai/AutoDetect.
With the progressive advancements in deep graph learning, out-of-distribution (OOD) detection for graph data has emerged as a critical challenge. While the efficacy of auxiliary datasets in enhancing OOD detection has been extensively studied for image and text data, such approaches have not yet been explored for graph data. Unlike Euclidean data, graph data exhibits greater diversity but lower robustness to perturbations, complicating the integration of outliers. To tackle these challenges, we propose the introduction of \textbf{H}ybrid External and Internal \textbf{G}raph \textbf{O}utlier \textbf{E}xposure (HGOE) to improve graph OOD detection performance. Our framework involves using realistic external graph data from various domains and synthesizing internal outliers within ID subgroups to address the poor robustness and presence of OOD samples within the ID class. Furthermore, we develop a boundary-aware OE loss that adaptively assigns weights to outliers, maximizing the use of high-quality OOD samples while minimizing the impact of low-quality ones. Our proposed HGOE framework is model-agnostic and designed to enhance the effectiveness of existing graph OOD detection models. Experimental results demonstrate that our HGOE framework can significantly improve the performance of existing OOD detection models across all 8 real datasets.
The super-sensitivity attained in quantum phase estimation is known to be compromised in the presence of decoherence. This is particularly patent at blind spots -- phase values at which sensitivity is totally lost. One remedy is to use a precisely known reference phase to shift the operation point of the sensor to a less vulnerable phase value. We present here an alternative approach based on combining the probe with an ancillary degree of freedom containing adjustable parameters to create an entangled quantum state of higher dimension. We validate this concept by simulating a configuration of a Mach-Zehnder interferometer with a two-photon probe and a polarization ancilla of adjustable parameters, entangled at a polarizing beam splitter. At the interferometer output, the photons are measured after an adjustable unitary transformation in the polarization subspace. Through calculation of the Fisher information and simulation of an adaptive estimation procedure, we show that optimizing the adjustable polarization parameters using an adaptive measurement process provides globally super-sensitive unbiased phase estimates for a range of decoherence levels, without prior information or a reference phase.
We compute the rational cohomology of the moduli space $\mathcal{M}_{4,1}$ of non-singular genus $4$ curves with $1$ marked point, using Gorinov-Vassiliev's method.
We analyse the velocity-dependent potentials seen by D0 and D4-brane probes moving in Type I' background for head-on scattering off the fixed planes. We find that at short distances (compared to string length) the D0-brane probe has a nontrivial moduli space metric, in agreement with the prediction of Type I' matrix model; however, at large distances it is modified by massive open strings to a flat metric, which is consistent with the spacetime equations of motion of Type I' theory. We discuss the implication of this result for the matrix model proposal for M-theory. We also find that the nontrivial metric at short distances in the moduli space action of the D0-brane probe is reflected in the coefficient of the higher dimensional v^4 term in the D4-brane probe action.
In this paper we consider the class of connected simple Lie groups equipped with the discrete topology. We show that within this class of groups the following approximation properties are equivalent: (1) the Haagerup property; (2) weak amenability; (3) the weak Haagerup property. In order to obtain the above result we prove that the discrete group GL(2,K) is weakly amenable with constant 1 for any field K.
We present axioms for the real numbers by omitting the field axioms and then derive the field properties of the real numbers. We prove all our theorems constructively.
The Byzantine agreement problem is considered to be a core problem in distributed systems. For example, Byzantine agreement is needed to build a blockchain, a totally ordered log of records. Blockchains are asynchronous distributed systems, fault-tolerant against Byzantine nodes. In the literature, the asynchronous Byzantine agreement problem is studied in a fully connected network model where every node can directly send messages to every other node. This assumption is questionable in many real-world environments. In reality, nodes might need to communicate over an incomplete network, and Byzantine nodes might not forward messages. Furthermore, Byzantine nodes might not behave correctly and, for example, might corrupt messages. Therefore, in order to truly understand Byzantine agreement, we need both ingredients: asynchrony and incomplete communication networks. In this paper, we study the asynchronous Byzantine agreement problem in incomplete networks. A classic result by Danny Dolev proved that in a distributed system with n nodes in the presence of f Byzantine nodes, the vertex connectivity of the system communication graph must be at least 2f+1. While Dolev's result was for synchronous deterministic systems, we demonstrate that the same bound also holds for asynchronous randomized systems. We show that the bound is tight by presenting a randomized algorithm, and a matching lower bound.
We review some of the properties of extensive cosmic ray air showers and describe a simple model of the radio-frequency radiation generated by shower electrons and positrons as they bend in the Earth's magnetic field. We perform simulations by calculating the trajectory and radiation of a few thousand charged shower particles. The results are then transformed to predict the strength and polarization of the electromagnetic radiation emitted by the whole shower.
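The bending of individual shower particles can be illustrated by integrating the relativistic Lorentz force for a single electron in a uniform magnetic field; the field strength, particle energy, and time step below are illustrative values rather than the paper's simulation parameters, and the radiation calculation is omitted.

```python
# Sketch: trajectory of one relativistic electron bending in a uniform magnetic
# field (the basic ingredient of the geomagnetic emission model described above).
# Field strength, energy, and time step are illustrative, not the paper's values.
import numpy as np

Q = -1.602176634e-19      # electron charge [C]
M = 9.1093837015e-31      # electron mass [kg]
C = 2.99792458e8          # speed of light [m/s]

def trajectory(p0, B, steps=10000, dt=1e-10):
    """Integrate dp/dt = q v x B with v = p / (gamma m), simple Euler steps; returns positions [m]."""
    p = np.array(p0, dtype=float)
    x = np.zeros(3)
    xs = []
    for _ in range(steps):
        gamma = np.sqrt(1.0 + np.dot(p, p) / (M * C) ** 2)
        v = p / (gamma * M)
        p = p + dt * Q * np.cross(v, B)   # magnetic force only: |p| is (nearly) conserved
        x = x + dt * v
        xs.append(x.copy())
    return np.array(xs)

# A ~30 MeV electron moving downward in a ~50 microtesla horizontal field.
p_mag = 30e6 * 1.602176634e-19 / C            # momentum for E ~ 30 MeV [kg m/s]
path = trajectory(p0=[0.0, 0.0, -p_mag], B=np.array([50e-6, 0.0, 0.0]))
print(path[-1])                                # final position after the integration
```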
For fibred boundary and fibred cusp metrics, Hausel, Hunsicker, and Mazzeo identified the space of $L^2$ harmonic forms of fixed degree with the images of maps between intersection cohomology groups of an associated stratified space obtained by collapsing the fibres of the fibration at infinity onto its base. In the present paper, we obtain a generalization of this result to situations where, rather than a fibration at infinity, there is a Riemannian foliation with compact leaves admitting a resolution by a fibration. If the associated stratified space (obtained now by collapsing the leaves of the foliation) is a Witt space and if the metric considered is a foliated cusp metric, then no such resolution is required.
Molecular Communication (MC) architectures suffer from molecular build-up in the channel if they do not have appropriate reuptake mechanisms. The molecular build-up either leads to intersymbol interference (ISI) or reduces the transmission rate. To measure the molecular build-up, we derive analytic expressions for the incidence rate and absorption rate for one-dimensional MC channels where molecular dispersion obeys Brownian motion. We verify each of our key results with Monte Carlo simulations. Our results contribute to the development of more complicated models and analytic expressions to measure the molecular build-up and the impact of ISI in MC.
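As a sketch of the kind of Monte Carlo check described, the snippet below releases 1D Brownian molecules at a distance d from a perfectly absorbing receiver and compares the absorbed fraction with the analytic first-passage probability erfc(d / (2 sqrt(D t))); the diffusion coefficient, distance, and time window are illustrative, not values from the paper.

```python
# Sketch: Monte Carlo check of 1D molecular absorption against the analytic
# first-passage result. Parameters (D, d, horizon) are illustrative only.
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

D = 1e-9          # diffusion coefficient [m^2/s]
d = 1e-6          # transmitter-receiver distance [m]
dt = 1e-4         # time step [s]
t_max = 1.0       # observation window [s]
n_mol = 10000     # number of released molecules

steps = int(t_max / dt)
sigma = sqrt(2 * D * dt)                 # standard deviation of each Brownian increment
pos = np.full(n_mol, d)                  # molecules start at x = d
alive = np.ones(n_mol, dtype=bool)       # not yet absorbed at x <= 0

for _ in range(steps):
    pos[alive] += rng.normal(0.0, sigma, alive.sum())
    alive &= pos > 0.0                   # absorbing receiver at the origin

absorbed_mc = 1.0 - alive.mean()
absorbed_theory = erfc(d / (2.0 * sqrt(D * t_max)))   # analytic hitting probability
print(f"Monte Carlo: {absorbed_mc:.3f}   theory: {absorbed_theory:.3f}")
# Note: the discrete time step slightly underestimates absorptions that occur within a step.
```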
We investigate the asymptotic properties of axisymmetric inertial modes propagating in a spherical shell when viscosity tends to zero. We identify three kinds of eigenmodes whose eigenvalues follow very different laws as the Ekman number $E$ becomes very small. First are modes associated with attractors of characteristics that are made of thin shear layers closely following the periodic orbit traced by the characteristic attractor. Second are modes made of shear layers that connect the critical latitude singularities of the two hemispheres of the inner boundary of the spherical shell. Third are quasi-regular modes associated with the frequency of neutral periodic orbits of characteristics. We thoroughly analyse a subset of attractor modes for which numerical solutions point to an asymptotic law governing the eigenvalues. We show that three length scales proportional to $E^{1/6}$, $E^{1/4}$ and $E^{1/3}$ control the shape of the shear layers that are associated with these modes. These scales point out the key role of the small parameter $E^{1/12}$ in these oscillatory flows. With a simplified model of the viscous Poincar\'e equation, we can give an approximate analytical formula that reproduces the velocity field in such shear layers. Finally, we also present an analysis of the quasi-regular modes whose frequencies are close to $\sin(\pi/4)$ and explain why a fluid inside a spherical shell cannot respond to any periodic forcing at this frequency when viscosity vanishes.
We construct a reflexive Banach space $X_\mathcal{D}$ with an unconditional basis such that all spreading models admitted by normalized block sequences in $X_\mathcal{D}$ are uniformly equivalent to the unit vector basis of $\ell_1$, yet every infinite-dimensional closed subspace of $X_\mathcal{D}$ fails the Lebesgue property. This is a new result in a program initiated by Odell in 2002 concerning the strong separation of asymptotic properties in Banach spaces.
We study generalizations of a classical link invariant -- the multivariable Alexander polynomial -- to tangles. The starting point is Archibald's tMVA invariant for virtual tangles which lives in the setting of circuit algebras, and whose target space has dimension that is exponential in the number of strands. Using the Hodge star map and restricting to tangles without closed components, we define a reduction of the tMVA to an invariant "rMVA" which is valued in matrices with Laurent polynomial entries, and so has a much more compact target space. We show the rMVA has the structure of a metamonoid morphism and is further equivalent to a tangle invariant defined by Bar-Natan. This invariant also reduces to the Gassner representation on braids and has a partially defined trace operation for closing open strands of a tangle.
An HCMU metric is a conformal metric which has a finite number of singularities on a compact Riemann surface and satisfies the equation of the extremal K\"{a}hler metric. In this paper, we give a necessary and sufficient condition for the existence of a kind of HCMU metrics which has both cusp singularities and conical singularities.
Curating annotations for medical image segmentation is a labor-intensive and time-consuming task that requires domain expertise, resulting in "narrowly" focused deep learning (DL) models with limited translational utility. Recently, foundation models like the Segment Anything Model (SAM) have revolutionized semantic segmentation with exceptional zero-shot generalizability across various domains, including medical imaging, and hold a lot of promise for streamlining the annotation process. However, SAM has yet to be evaluated in a crowd-sourced setting to curate annotations for training 3D DL segmentation models. In this work, we explore the potential of SAM for crowd-sourcing "sparse" annotations from non-experts to generate "dense" segmentation masks for training 3D nnU-Net models, a state-of-the-art DL segmentation model. Our results indicate that while SAM-generated annotations exhibit high mean Dice scores compared to ground-truth annotations, nnU-Net models trained on SAM-generated annotations perform significantly worse than nnU-Net models trained on ground-truth annotations ($p<0.001$, all).
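The comparison above rests on the Dice overlap between candidate and ground-truth masks; a minimal sketch of that computation on hypothetical binary volumes follows.

```python
# Sketch: Dice overlap between a candidate segmentation mask and ground truth,
# the metric used in the comparison above. Inputs are boolean/0-1 voxel arrays.
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice coefficient = 2|A∩B| / (|A|+|B|) for binary masks of equal shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

# Toy 3D volumes standing in for a SAM-derived mask and an expert annotation.
rng = np.random.default_rng(1)
gt_mask = rng.random((64, 64, 32)) > 0.7
sam_mask = gt_mask.copy()
sam_mask[:8] = False                       # simulate a partially missed region
print(f"Dice = {dice(sam_mask, gt_mask):.3f}")
```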
We present LaMDA: Language Models for Dialog Applications. LaMDA is a family of Transformer-based neural language models specialized for dialog, which have up to 137B parameters and are pre-trained on 1.56T words of public dialog data and web text. While model scaling alone can improve quality, it shows smaller improvements on safety and factual grounding. We demonstrate that fine-tuning with annotated data and enabling the model to consult external knowledge sources can lead to significant improvements towards the two key challenges of safety and factual grounding. The first challenge, safety, involves ensuring that the model's responses are consistent with a set of human values, such as preventing harmful suggestions and unfair bias. We quantify safety using a metric based on an illustrative set of human values, and we find that filtering candidate responses using a LaMDA classifier fine-tuned with a small amount of crowdworker-annotated data offers a promising approach to improving model safety. The second challenge, factual grounding, involves enabling the model to consult external knowledge sources, such as an information retrieval system, a language translator, and a calculator. We quantify factuality using a groundedness metric, and we find that our approach enables the model to generate responses grounded in known sources, rather than responses that merely sound plausible. Finally, we explore the use of LaMDA in the domains of education and content recommendations, and analyze its helpfulness and role consistency.
Even Artin groups generalize right-angled Artin groups by allowing the labels in the defining graph to be even. In this paper a complete characterization of quasi-projective even Artin groups is given in terms of their defining graphs. Also, it is shown that quasi-projective even Artin groups are realizable by K(pi,1) quasi-projective spaces.
This paper presents a framework to solve the strategic bidding problem of participants in an electricity market cleared by employing the full AC Optimal Power Flow (ACOPF) problem formulation. Traditionally, independent system operators (ISOs) have leveraged the DC Optimal Power Flow (DCOPF) problem formulation to settle the electricity market. The main question of this work is: what are the challenges and opportunities if ISOs leverage the full ACOPF as the market-clearing problem (MCP)? This paper presents a tractable mathematical program with equilibrium constraints for the convexified AC market-clearing problem. Market participants maximize their profit via strategic bidding while considering the reactive power dispatch of generation units. The equilibrium constraints are procured by presenting the dual form of the relaxed ACOPF problem. The strategic bidding problem with ACOPF-based MCP improves the exactness of the locational marginal prices (LMPs) and the profit of market participants compared to the one with DCOPF. It is shown that the strategic bidding problem with DCOPF-based MCP is unable to model the limitations of reactive power support. The presented results display cases where the proposed strategic bidding method renders $52.3\%$ more profit for the Generation Company (GENCO) than the DCOPF-based MCP model. The proposed strategic bidding framework also addresses the challenges in coupling real and reactive power dispatch of generation constraints, ramping constraints, demand response implications with curtailable and time-shiftable loads, and AC line flow constraints. Therefore, the presented method will help market participants leverage the more accurate ACOPF model in the strategic bidding problem.
We analyze the formation and dynamics of bright unstaggered solitons in the disk-shaped dipolar Bose-Einstein condensate, which features the interplay of contact (collisional) and long-range dipole-dipole (DD) interactions between atoms. The condensate is assumed to be trapped in a strong optical-lattice potential in the disk's plane, hence it may be approximated by a two-dimensional (2D) discrete model, which includes the on-site nonlinearity and cubic long-range (DD) interactions between sites of the lattice. We consider two such models, that differ by the form of the on-site nonlinearity, represented by the usual cubic term, or more accurate nonpolynomial one, derived from the underlying 3D Gross-Pitaevskii equation. Similar results are obtained for both models. The analysis is focused on effects of the DD interaction on fundamental localized modes in the lattice (2D discrete solitons). The repulsive isotropic DD nonlinearity extends the existence and stability regions of the fundamental solitons. New families of on-site, inter-site and hybrid solitons, built on top of a finite background, are found as a result of the interplay of the isotropic repulsive DD interaction and attractive contact nonlinearity. By themselves, these solutions are unstable, but they evolve into robust breathers which exist on an oscillating background. In the presence of the repulsive contact interactions, fundamental localized modes exist if the DD interaction (attractive isotropic or anisotropic) is strong enough. They are stable in narrow regions close to the anticontinuum limit, while unstable solitons evolve into breathers. In the latter case, the presence of the background is immaterial.
We study the behavior of quasi-one-dimensional (quasi-1d) Bose gases by Monte Carlo techniques, i.e., by the variational Monte Carlo, the diffusion Monte Carlo, and the fixed-node diffusion Monte Carlo technique. Our calculations confirm and extend our results of an earlier study [Astrakharchik et al., cond-mat/0308585]. We find that a quasi-1d Bose gas i) is well described by a 1d model Hamiltonian with contact interactions and renormalized coupling constant; ii) reaches the Tonks-Girardeau regime for a critical value of the 3d scattering length a_3d; iii) enters a unitary regime for |a_3d| -> infinity, where the properties of the gas are independent of a_3d and are similar to those of a 1d gas of hard-rods; and iv) becomes unstable against cluster formation for a critical value of the 1d gas parameter. The accuracy and implications of our results are discussed in detail.
We propose a multi-layer approach to simulate hyperpycnal and hypopycnal plumes in flows with a free surface. The model allows us to compute the vertical profile of the horizontal and vertical components of the velocity of the fluid flow. The model can also describe the vertical profile of the sediment concentration and the velocity components of each of the sediment species that form the turbidity current. To do so, it takes into account the settling velocity of the particles and their interaction with the fluid. This allows a better description of the phenomena than a single-layer approach; it is in better agreement with the physics of the problem and gives promising results. The numerical simulation is carried out by rewriting the multi-layer approach in a compact formulation, which corresponds to a system with non-conservative products, and using a path-conservative numerical scheme. Numerical results are presented in order to show the potential of the model.
In this work, we derive a system of Boltzmann-type equations to describe the spread of SARS-CoV-2 virus at the microscopic scale, that is by modeling the human-to-human mechanisms of transmission. To this end, we consider two populations, characterized by specific distribution functions, made up of individuals without symptoms (population $1$) and infected people with symptoms (population $2$). The Boltzmann operators model the interactions between individuals within the same population and among different populations with a probability of transition from one to the other due to contagion or, vice versa, to recovery. In addition, the influence of innate and adaptive immune systems is taken into account. Then, starting from the Boltzmann microscopic description we derive a set of evolution equations for the size and mean state of each population considered. Mathematical properties of such macroscopic equations, as equilibria and their stability, are investigated and some numerical simulations are performed in order to analyze the ability of our model to reproduce the characteristic features of Covid-19.
Robots can rapidly acquire new skills from demonstrations. However, during generalisation of skills or transitioning across fundamentally different skills, it is unclear whether the robot has the necessary knowledge to perform the task. Failing to detect missing information often leads to abrupt movements or to collisions with the environment. Active learning can quantify the uncertainty of performing the task and, in general, locate regions of missing information. We introduce a novel algorithm for active learning and demonstrate its utility for generating smooth trajectories. Our approach is based on deep generative models and metric learning in latent spaces. It relies on the Jacobian of the likelihood to detect non-smooth transitions in the latent space, i.e., transitions that lead to abrupt changes in the movement of the robot. When non-smooth transitions are detected, our algorithm asks for an additional demonstration from that specific region. The newly acquired knowledge modifies the data manifold and allows for learning a latent representation for generating smooth movements. We demonstrate the efficacy of our approach on generalising elementary skills, transitioning across different skills, and implicitly avoiding collisions with the environment. For our experiments, we use a simulated pendulum where we observe its motion from images and a 7-DoF anthropomorphic arm.
We develop the theory of spectral invariants in periodic Floer homology (PFH) of area-preserving surface diffeomorphisms. We use this theory to prove $C^\infty$ closing lemmas for certain Hamiltonian isotopy classes of area-preserving surface diffeomorphisms. In particular, we show that for a $C^\infty$-generic area-preserving diffeomorphism of the torus, the set of periodic points is dense. Our closing lemmas are quantitative, asserting roughly speaking that for a given Hamiltonian isotopy, within time $\delta$ a periodic orbit must appear of period $O(\delta^{-1})$. We also prove a "Weyl law" describing the asymptotic behavior of PFH spectral invariants.
Recently, a new kind of spintronics materials, bipolar magnetic semiconductor (BMS), has been proposed. The spin polarization of BMS can be conveniently controlled by a gate voltage, which makes it very attractive in device engineering. Now, the main challenge is finding more BMS materials. In this article, we propose that hydrogenated wurtzite SiC nanofilm is a two-dimensional BMS material. Its BMS character is very robust under the effect of strain, substrate, or even a strong electric field. The proposed two-dimensional BMS material paves the way to use this promising new material in an integrated circuit.
Extending network lifetime of battery-operated devices is a key design issue that allows uninterrupted information exchange among distributive nodes in wireless sensor networks. Collaborative beamforming (CB) and cooperative transmission (CT) have recently emerged as new communication techniques that enable and leverage effective resource sharing among collaborative/cooperative nodes. In this paper, we seek to maximize the lifetime of sensor networks by using the new idea that closely located nodes can use CB/CT to reduce the load or even avoid packet forwarding requests to nodes that have critical battery life. First, we study the effectiveness of CB/CT to improve the signal strength at a faraway destination using energy in nearby nodes. Then, a 2D disk case is analyzed to assess the resulting performance improvement. For general networks, if information-generation rates are fixed, the new routing problem is formulated as a linear programming problem; otherwise, the cost for routing is dynamically adjusted according to the amount of energy remaining and the effectiveness of CB/CT. From the analysis and simulation results, it is seen that the proposed schemes can improve the lifetime by about 90% in the 2D disk network and by about 10% in the general networks, compared to existing schemes.
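For the fixed-rate case, the routing problem reduces to the classical maximum-lifetime flow LP; the toy instance below (topology, rates, and energy costs are made up for illustration, and the CB/CT load-sharing option is not modeled) shows the structure of such a formulation.

```python
# Sketch: the classical maximum-lifetime routing LP (fixed generation rates),
# solved for a toy 3-node network. Topology, rates, and energy costs are
# illustrative; CB/CT load sharing from the paper is not modeled here.
import numpy as np
from scipy.optimize import linprog

# Variables: x = [q10, q20, q21, T]
#   q_ij = packets sent over link i -> j during the whole lifetime, T = lifetime [s].
r1, r2 = 1.0, 1.0                 # packet generation rates [packets/s]
E1, E2 = 100.0, 100.0             # initial battery energy [J]
e10, e20, e21, e_rx = 1.0, 4.0, 1.0, 0.5   # per-packet transmit / receive costs [J]

c = np.array([0.0, 0.0, 0.0, -1.0])        # maximize T  <=>  minimize -T

# Flow conservation at sensors 1 and 2 (node 0 is the sink).
A_eq = np.array([[1.0, 0.0, -1.0, -r1],     # node 1: out - in = r1 * T
                 [0.0, 1.0,  1.0, -r2]])    # node 2: out - in = r2 * T
b_eq = np.zeros(2)

# Energy budgets.
A_ub = np.array([[e10, 0.0, e_rx, 0.0],     # node 1: tx own traffic + rx relayed packets
                 [0.0, e20, e21,  0.0]])    # node 2: tx direct + tx via node 1
b_ub = np.array([E1, E2])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
print(f"lifetime T = {res.x[-1]:.1f} s, link totals = {res.x[:3].round(1)}")
# For these numbers the optimum relays 2/3 of node 2's traffic via node 1, giving T = 50 s.
```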
A state where spin currents exist in the absence of external fields has recently been proposed to describe the superconducting state of metals. It is proposed here that such a state also describes the ground state of aromatic molecules. It is argued that this point of view provides a more natural explanation for the large diamagnetic susceptibilities and NMR shifts observed in these molecules than the conventional viewpoint, and it provides a unified description of aromatic molecules and superconductors as sought by F. London. A six-atom ring model is solved by exact diagonalization and parameters in the model where a ground state spin current exists are found. We suggest that this physics plays a key role in biological matter.
Post-quantum cryptography studies the security of classical, i.e. non-quantum cryptographic protocols against quantum attacks. Until recently, the considered adversaries were assumed to use quantum computers and behave like classical adversaries otherwise. A more conservative approach is to assume that also the communication between the honest parties and the adversary is (partly) quantum. We discuss several options to define secure encryption and authentication against these stronger adversaries who can carry out 'superposition attacks'. We re-prove a recent result of Boneh and Zhandry, stating that a uniformly random function (and hence also a quantum-secure pseudorandom function) can serve as a message-authentication code which is secure, even if the adversary can evaluate this function in superposition.
We propose a new correlator in one-dimensional quantum spin chains, the $s$-Emptiness Formation Probability ($s$-EFP). This is a natural generalization of the Emptiness Formation Probability (EFP), which is the probability that the first $n$ spins of the chain are all aligned downwards. In the $s$-EFP we let the spins in question be separated by $s$ sites. The usual EFP corresponds to the special case when $s=1$, and taking $s>1$ allows us to quantify non-local correlations. We express the $s$-EFP for the anisotropic XY model in a transverse magnetic field, a system with both critical and non-critical regimes, in terms of a Toeplitz determinant. For the isotropic XY model we find that the magnetic field induces an interesting length scale.
The search for superconducting systems exhibiting nonreciprocal transport and, specifically, the diode effect, has proliferated in recent years. This trend encompasses a wide variety of systems, including planar hybrid structures, asymmetric SQUIDs, and certain noncentrosymmetric superconductors. A common feature of such systems is a gyrotropic symmetry, realized on different scales and characterized by a polar vector. Alongside time-reversal symmetry breaking, the presence of a polar axis allows for magnetoelectric effects, which, when combined with proximity-induced superconductivity, result in spontaneous non-dissipative currents that underpin the superconducting diode effect. This symmetry established, we present a comprehensive theoretical study of transport in lateral Josephson junctions composed of a normal metal supporting the spin Hall effect, attached to a ferromagnetic insulator. Due to the presence of the latter, magnetoelectric effects arise without requiring external magnetic fields. We determine the dependence of the anomalous current on the spin relaxation length and the transport parameters commonly used in spintronics to characterize the interface between the metal and the ferromagnetic insulator. Therefore, our theory naturally unifies nonreciprocal transport in superconducting systems with classical spintronic effects, such as the spin Hall effect, spin galvanic effect, and spin Hall magnetoresistance. We propose an experiment involving measurements of magnetoresistance in the normal state and nonreciprocal transport in the superconducting state. Such an experiment, on the one hand, allows for determining the parameters of the model and thus verifying with greater precision the theories of magnetoelectric effects in normal systems. On the other hand, it contributes to a deeper understanding of the underlying microscopic origins that determine these parameters.
By construction, gauge theories require gauge fixing. In conventional approaches to spontaneously broken gauge theories, the choice of the Unitary ('t Hooft) gauge involves the sacrifice of manifest renormalizability (unitarity). It is shown that with a suitable modification of the background field gauge condition, the background field formalism allows manifest unitarity and renormalizability in a single framework.
Longitudinal data tracking under Local Differential Privacy (LDP) is a challenging task. Baseline solutions that repeatedly invoke a protocol designed for one-time computation lead to linear decay in the privacy or utility guarantee with respect to the number of computations. To avoid this, the recent approach of Erlingsson et al. (2020) exploits the potential sparsity of user data that changes only infrequently. Their protocol targets the fundamental problem of frequency estimation protocol for longitudinal binary data, with $\ell_\infty$ error of $O ( (1 / \epsilon) \cdot (\log d)^{3 / 2} \cdot k \cdot \sqrt{ n \cdot \log ( d / \beta ) } )$, where $\epsilon$ is the privacy budget, $d$ is the number of time periods, $k$ is the maximum number of changes of user data, and $\beta$ is the failure probability. Notably, the error bound scales polylogarithmically with $d$, but linearly with $k$. In this paper, we break through the linear dependence on $k$ in the estimation error. Our new protocol has error $O ( (1 / \epsilon) \cdot (\log d) \cdot \sqrt{ k \cdot n \cdot \log ( d / \beta ) } )$, matching the lower bound up to a logarithmic factor. The protocol is an online one, that outputs an estimate at each time period. The key breakthrough is a new randomizer for sequential data, FutureRand, with two key features. The first is a composition strategy that correlates the noise across the non-zero elements of the sequence. The second is a pre-computation technique which, by exploiting the symmetry of input space, enables the randomizer to output the results on the fly, without knowing future inputs. Our protocol closes the error gap between existing online and offline algorithms.
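The one-shot primitive underlying such frequency-estimation protocols is binary randomized response; the sketch below shows that textbook baseline and its unbiased frequency estimator, not the FutureRand randomizer introduced in the paper.

```python
# Sketch: binary randomized response, the standard one-shot LDP primitive that
# longitudinal protocols build on. This is the textbook baseline, not the
# FutureRand randomizer from the paper.
import numpy as np

rng = np.random.default_rng(42)

def randomize(bits, eps):
    """Each user reports the true bit w.p. e^eps/(e^eps+1), else the flipped bit."""
    p_keep = np.exp(eps) / (np.exp(eps) + 1.0)
    keep = rng.random(bits.shape) < p_keep
    return np.where(keep, bits, 1 - bits)

def estimate_frequency(reports, eps):
    """Unbiased estimate of the true fraction of ones from the noisy reports."""
    p_keep = np.exp(eps) / (np.exp(eps) + 1.0)
    return (reports.mean() - (1.0 - p_keep)) / (2.0 * p_keep - 1.0)

n, eps, true_rate = 100_000, 1.0, 0.3
bits = (rng.random(n) < true_rate).astype(int)
reports = randomize(bits, eps)
print(f"true = {true_rate:.3f}, estimate = {estimate_frequency(reports, eps):.3f}")
```

Naively repeating such a randomizer at every time period is exactly the baseline whose privacy or utility decays linearly with the number of computations, which is what the protocols discussed above are designed to avoid.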
We present an automatic method for joint liver lesion segmentation and classification using a hierarchical fine-tuning framework. Our dataset is small, containing 332 2-D CT examinations with lesions annotated into 3 lesion types: cysts, hemangiomas, and metastases. Using a cascaded U-net that performs segmentation and classification simultaneously, we trained a strong lesion segmentation model on the dataset of the MICCAI 2017 Liver Tumor Segmentation (LiTS) Challenge. We used the trained weights to fine-tune a slightly modified model to obtain improved lesion segmentation and classification on the smaller dataset. Since pre-training was done with similar data on a related task, we were able to learn more representative features (especially higher-level features in the U-Net's encoder) and improve pixel-wise classification results. We show an improvement of over 10\% in Dice score and classification accuracy, compared to a baseline model. We further improve the classification performance by hierarchically freezing the encoder part of the network and achieve an improvement of over 15\% in Dice score and classification accuracy. We compare our results with an existing method and show an improvement of 14\% in the success rate and 12\% in the classification accuracy.
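The hierarchical freezing step can be illustrated by disabling gradients for the encoder of a U-Net-style model before fine-tuning on the smaller dataset; the attribute name `encoder` below is a hypothetical placeholder, not the authors' cascaded U-Net implementation.

```python
# Sketch: freezing the encoder of a U-Net-style model for fine-tuning, as in the
# hierarchical strategy above. Module/attribute names here are hypothetical.
import torch.nn as nn
import torch.optim as optim

def freeze_encoder(model: nn.Module, encoder_attr: str = "encoder"):
    """Disable gradient updates for the encoder part of a segmentation model."""
    encoder = getattr(model, encoder_attr)
    for param in encoder.parameters():
        param.requires_grad = False

def finetune_optimizer(model: nn.Module, lr: float = 1e-4):
    """Optimize only the parameters that still require gradients (decoder, heads)."""
    trainable = [p for p in model.parameters() if p.requires_grad]
    return optim.Adam(trainable, lr=lr)

# Usage (with a pretrained segmentation model loaded elsewhere):
#   freeze_encoder(model)
#   optimizer = finetune_optimizer(model)
#   ... standard training loop on the smaller, lesion-classified dataset ...
```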
We show that the effects of decoherence on quantum steering ellipsoids can be controlled by specific manipulation of the reservoir, in both the Markovian and non-Markovian regimes. Therefore, the so-called maximal steered coherence can be protected through reservoir engineering, implemented by coupling auxiliary qubits to the reservoir.
By extending the classical Peyrard-Bishop model, we are able to obtain a fully analytical description of the mechanical resistance of DNA under stretching at variable values of temperature, number of base pairs, and intrachain and interchain bond stiffness. In order to compare elasticity and temperature effects, we first analyze the system in the zero-temperature mechanical limit, important for describing several experimental effects including possible hysteresis. We then analyze temperature effects in the framework of equilibrium statistical mechanics. In particular, we obtain analytical expressions for the temperature-dependent melting force and for the unzipping at assigned displacement in the thermodynamic limit, both depending also on the relative stability of intra- vs. inter-molecular bonds. These results coincide with the purely mechanical model in the limit of zero temperature and with the denaturation temperature that we obtain with the classical transfer integral method. Based on our analytical results, explicit analyses of the phase diagrams and cooperativity parameters are obtained, where discreteness effects can also be accounted for. The obtained results are successfully applied in reproducing the thermomechanical experimental melting of DNA and the response of DNA hairpins. Due to its generality, the proposed approach can be extended to other thermomechanically induced molecular melting phenomena.
Weak radiative decay B -> X_s gamma is known to be a loop-generated process. However, it does receive tree-level contributions from CKM-suppressed b -> u ubar s gamma transitions. In the present paper, we evaluate such contributions together with similar ones from the QCD penguin operators. For a low value of the photon energy cutoff E_0 ~ m_b/20 that has often been used in the literature, they can enhance the inclusive branching ratio by more than 10%. For E_0 = 1.6 GeV or higher, the effect does not exceed 0.4%, which is due to phase-space suppression. Our perturbative results contain collinear logarithms that depend on the light quark masses m_q (q=u,d,s). We have allowed m_b/m_q to vary from 10 to 50, which corresponds to values of m_q that are typical for the constituent quark masses. Such a rough method of estimation may be improved in the future with the help of fragmentation functions once the considered effects begin to matter in the overall error budget for BR(B -> X_s gamma).
We introduce the notion of reflections for selfinjective algebras from the point of view of torsion theories induced by two-term tilting complexes. As an application, we determine the transformations of Brauer trees associated with reflections. In particular, we provide a way to transform every Brauer tree into a Brauer line.
Computing response functions by following the time evolution of superoperators in Liouville space (whose vectors are ordinary Hilbert space operators) offers an attractive alternative to the diagrammatic perturbative expansion of many-body equilibrium and nonequilibrium Green functions. The bookkeeping of time ordering is naturally maintained in real (physical) time, allowing the formulation of Wick's theorem for superoperators, giving a factorization of higher order response functions in terms of two fundamental Green's functions. Backward propagations and the analytic continuations using artificial times (Keldysh loops and Matsubara contours) are avoided. A generating functional for nonlinear response functions unifies quantum field theory and the classical mode coupling formalism of nonlinear hydrodynamics and may be used for semiclassical expansions. Classical response functions may be obtained without the explicit computation of stability matrices.
We consider a gas of $N$ identical hard spheres in the whole space, and we enforce the Boltzmann-Grad scaling. We may suppose that the particles are essentially independent of each other at some initial time; even so, correlations will be created by the dynamics. We will prove a structure theorem for the correlations which develop at positive time. Our result generalizes a previous result which states that there are phase points where the three-particle marginal density factorizes into two-particle and one-particle parts, while further factorization is impossible. The result depends on uniform bounds which are known to hold on a small time interval, or globally in time when the mean free path is large.
Generating text from structured data is challenging because it requires bridging the gap between (i) structure and natural language (NL) and (ii) semantically underspecified input and fully specified NL output. Multilingual generation brings in an additional challenge: that of generating into languages with varied word order and morphological properties. In this work, we focus on Abstract Meaning Representations (AMRs) as structured input, where previous research has overwhelmingly focused on generating only into English. We leverage advances in cross-lingual embeddings, pretraining, and multilingual models to create multilingual AMR-to-text models that generate in twenty-one different languages. For eighteen languages, based on automatic metrics, our multilingual models surpass baselines that generate into a single language. We analyse the ability of our multilingual models to accurately capture morphology and word order using human evaluation, and find that native speakers judge our generations to be fluent.
Generating receding-horizon motion trajectories for autonomous vehicles in real-time while also providing safety guarantees is challenging. This is because a future trajectory needs to be planned before the previously computed trajectory is completely executed. This becomes even more difficult if the trajectory is required to satisfy continuous-time collision-avoidance constraints while accounting for a large number of obstacles. To address these challenges, this paper proposes a novel real-time, receding-horizon motion planning algorithm named REachability-based trajectory Design via Exact Formulation of Implicit NEural signed Distance functions (REDEFINED). REDEFINED first applies offline reachability analysis to compute zonotope-based reachable sets that overapproximate the motion of the ego vehicle. During online planning, REDEFINED leverages zonotope arithmetic to construct a neural implicit representation that computes the exact signed distance between a parameterized swept volume of the ego vehicle and obstacle vehicles. REDEFINED then implements a novel, real-time optimization framework that utilizes the neural network to construct a collision avoidance constraint. REDEFINED is compared to a variety of state-of-the-art techniques and is demonstrated to successfully enable the vehicle to safely navigate through complex environments. Code, data, and video demonstrations can be found at https://roahmlab.github.io/redefined/.
Let $\mathfrak{g}$ be a semisimple Lie algebra. We establish a new relation between the Goldie rank of a primitive ideal $\mathcal{J}\subset U(\mathfrak{g})$ and the dimension of the corresponding irreducible representation $V$ of an appropriate finite W-algebra. Namely, we show that $\operatorname{Grk}(\mathcal{J}) \leqslant \dim V/d_V$, where $d_V$ is the index of a suitable equivariant Azumaya algebra on a homogeneous space. We also compute $d_V$ in representation theoretic terms.
We present integral-field spectroscopic observations with the VIMOS-IFU at the VLT of fast (2000-3000 km/s) Balmer-dominated shocks surrounding the northwestern rim of the remnant of supernova 1006. The high spatial and spectral resolution of the instrument enables us to show that the physical characteristics of the shocks exhibit a strong spatial variation over a few atomic scale lengths across 133 sky locations. Our results point to the presence of a population of non-thermal protons (10-100 keV) which might well be the seed particles for generating high-energy cosmic rays. We also present observations of Tycho's supernova remnant taken with the narrow-band tunable filter imager OSIRIS at the GTC and the Fabry-Perot interferometer GHaFaS at the WHT to resolve respectively the broad and narrow H\alpha\ lines across a large part of the remnant.
The control of biofilm formation is a challenging goal that has not been reached yet in many aspects. One is the role of van der Waals forces and another the importance of mutual interactions between the adsorbing and the adsorbed biomolecules ('critical crowding'). Here, a combined experimental and theoretical approach is presented that fundamentally probes both aspects. On three model proteins, lysozyme, {\alpha}-amylase and bovine serum albumin (BSA), the adsorption kinetics is studied. Composite substrates are used enabling a separation of the short- and the long-range forces. Though usually neglected, the influence of van der Waals forces on protein adsorption is demonstrated experimentally, as revealed by in situ ellipsometry. The three proteins were chosen for their different conformational stability in order to investigate the influence of conformational changes on the adsorption kinetics. Monte Carlo simulations are used to develop a model for these experimental results by assuming an internal degree of freedom to represent conformational changes. The simulations also provide data on the distribution of adsorption sites. By in situ atomic force microscopy we can also test this distribution experimentally, which opens the possibility of, e.g., investigating the interactions between adsorbed proteins.
The interaction between electrons and plasmons in trilayer graphene is investigated within the Overhauser approach, resulting in the 'plasmaron' quasi-particle. This interaction is cast into a field theoretical problem, and its effect on the energy spectrum is calculated using improved Wigner-Brillouin perturbation theory. The plasmaron spectrum is shifted with respect to the bare electron spectrum by $\Delta E(\mathbf{k})\sim 50\div200\,{\rm meV}$ for ABC stacked trilayer graphene, and for ABA stacked trilayer graphene by $\Delta E(\mathbf{k})\sim 30\div150\,{\rm meV}$ ($\Delta E(\mathbf{k})\sim 1\div5\,{\rm meV}$) for the hyperbolic (linear) part of the spectrum. The shift in general increases with the electron concentration $n_{e}$ and electron momentum. The dispersion of plasmarons is more pronounced in ABC stacked than in ABA stacked trilayer graphene, because of the different energy band structure and the different plasmon dispersion.
We investigate the statistics of gravitational lenses in flat, low-density cosmological models with different cosmic equations of state w. We compute the lensing probabilities as a function of image separation \theta using a lens population described by the mass function of Jenkins et al. and modeled as singular isothermal spheres on galactic scales and as Navarro, Frenk & White halos on cluster scales. It is found that COBE-normalized models with w > - 0.4 produce too few arcsecond-scale lenses in comparison with the JVAS/CLASS radio survey, a result that is consistent with other observational constraints on w. The wide-separation (\theta > 4'') lensing rate is a particularly sensitive probe of both w and the halo mass concentration. The absence of these systems in the current JVAS/CLASS data excludes highly concentrated halos in w < -0.7 models. The constraints can be improved by ongoing and future lensing surveys of > 10^5 sources.
We study ep deep inelastic scattering and the inclusive production of prompt photons within the framework of the quasi-multi-Regge-kinematic approach, applying the quark Reggeization hypothesis. We describe the structure functions F_2 and F_L supposing that a virtual photon scatters on a Reggeized quark from a proton, via the effective gamma-Reggeon-quark vertex. It is shown that the main mechanism of inclusive prompt photon production in p \bar p collisions is the fusion of a Reggeized quark and a Reggeized antiquark into a photon, via the effective Reggeon-Reggeon-gamma vertex. We describe the inclusive photon transverse momentum spectra measured by the CDF and D0 Collaborations within errors and without free parameters, using the Kimber-Martin-Ryskin unintegrated quark and gluon distribution functions in a proton.
We have investigated a system with two sets of staggered fermions with charges 1 and -1/2 coupling to a non-compact U(1) gauge field in 4 dimensions. The model exhibits breaking of chiral symmetries of both fermions at different values of beta. Chiral condensates, renormalized fermion masses and renormalized charges have been measured. The renormalized charges show agreement with one-loop perturbation theory. We examine surfaces of constant renormalized charges in the space of bare parameters.
In this paper, we characterize a class of solutions to the unsteady 2-dimensional flow of a van der Waals fluid involving shock waves, and derive an asymptotic amplitude equation exhibiting quadratic and cubic nonlinearities including dissipation and diffraction. We exploit the theory of nonclassical symmetry reduction to obtain some exact solutions. Because of the nonlinearities present in the evolution equation, one expects that the wave profile will eventually encounter distortion and steepening, which in the limit of vanishing dissipation culminates in a shock wave; once the shock is formed, it propagates by separating the portions of the continuous region. Here we show how the real gas effects, which manifest themselves through the van der Waals parameters $\tilde{a}$ and $\tilde{b}$, influence the wave characteristics, namely the shape, strength, and decay behavior of shocks.
Modern electron linear accelerators are often designed to produce smooth bunch distributions characterized by their macroscopic ensemble-average moments. However, an increasing number of accelerator applications call for finer control over the beam distribution, e.g., by requiring specific shapes for its projection along one coordinate. Ultimately, the control of the beam distribution at the single-particle level could enable new opportunities in accelerator science. This review discusses the recent progress toward controlling electron beam distributions on the "mesoscopic" scale with an emphasis on shaping the beam or introducing complex correlations required for some applications. This review emphasizes experimental and theoretical developments of electron-bunch shaping methods based on bounded external electromagnetic fields or via interactions with the self-generated velocity and radiation fields.
Software testing is one of the very important Quality Assurance (QA) components. Many researchers deal with the testing process in terms of tester motivation and how tests should or should not be written. However, it is not known from these recommendations how tests are actually written in real projects. In this paper, the following was investigated: (i) the denotation of the word "test" in different natural languages; (ii) whether the number of occurrences of the word "test" correlates with the number of test cases; and (iii) which testing frameworks are mostly used. The analysis was performed on 38 GitHub open source repositories thoroughly selected from the set of 4.3M GitHub projects. We analyzed 20,340 test cases in 803 classes manually and 170k classes using an automated approach. The results show that: (i) there exists a weak correlation (r = 0.655) between the number of occurrences of the word "test" and the number of test cases in a class; (ii) the proposed algorithm using static file analysis correctly detected 97% of test cases; (iii) 15% of the analyzed classes used a main() function; these represent regular Java programs that test the production code without using any third-party framework. The identification of such tests is very complex due to implementation diversity. The results may be leveraged to more quickly identify and locate test cases in a repository, to understand practices in customized testing solutions, and to mine tests to improve program comprehension in the future.
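A simplified sketch of the kind of static file analysis described, counting occurrences of the word "test", JUnit-style test cases, and main()-based test drivers in Java sources; the regular-expression heuristics are illustrative and do not reproduce the paper's detection algorithm.

```python
# Sketch: simplified static analysis of Java files, counting occurrences of the
# word "test" and JUnit-style test cases. Heuristics are illustrative only and
# do not reproduce the paper's exact detection algorithm.
import re
from pathlib import Path

WORD_TEST = re.compile(r"\btest\b", re.IGNORECASE)
TEST_ANNOTATION = re.compile(r"@Test\b")
MAIN_METHOD = re.compile(r"public\s+static\s+void\s+main\s*\(")

def analyze_java_file(path: Path) -> dict:
    src = path.read_text(encoding="utf-8", errors="ignore")
    return {
        "file": str(path),
        "word_test_count": len(WORD_TEST.findall(src)),
        "junit_test_cases": len(TEST_ANNOTATION.findall(src)),
        "has_main": bool(MAIN_METHOD.search(src)),   # possible framework-less test driver
    }

def analyze_repo(root: str):
    """Collect per-file statistics for every .java file below 'root'."""
    return [analyze_java_file(p) for p in Path(root).rglob("*.java")]

# Example: stats = analyze_repo("path/to/checked-out-repo")
```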
We set forth a method to analyze the orbital angular momentum of a light field. Instead of using the canonical formalism for the conjugate pair angle-angular momentum, we model this latter variable by the superposition of two independent harmonic oscillators along two orthogonal axes. By describing each oscillator by a standard Wigner function, we derive, via a consistent change of variables, a comprehensive picture of the orbital angular momentum. We compare with previous approaches and show how this method works in some relevant examples.
We show the existence of solutions for some classes of nonlocal problems. Our proof combines the existence of sub- and supersolutions with the theory of pseudomonotone operators.
We show that the nucleon electromagnetic structure functions of deep inelastic scattering in the Regge-Gribov limit (fixed Q^2, asymptotically large 1/x and s) can be well described in a two-component (soft + hard) approach. In the concrete model elaborated by the authors, the soft part of the virtual photon-nucleon scattering is given by vector meson dominance, taking into account the radial excitations of the rho meson and nondiagonal transitions in meson-nucleon interactions. The hard part is calculated using the dipole factorization, i.e., the process is treated as the dissociation of the photon into a quark-antiquark pair (the "color dipole") and the subsequent interaction of this dipole with the nucleon. The dipole cross section has a Regge-type s-dependence and vanishes in the limit of large transverse sizes of the dipole. We give a brief description of the model and present a detailed comparison of the model predictions with experimental data for the electromagnetic structure functions of the nucleon.
Given a power grid and a transmission (coupling) strength, basin stability is a measure of synchronization stability for individual nodes. Earlier studies have focused on how basin stability depends on the position of the nodes in the network for single values of the transmission strength. Basin stability grows from zero to one as the transmission strength increases, but often in a complex, nonmonotonic way. In this study, we investigate the entire functional form of the dependence of basin stability on transmission strength. To enable a systematic analysis, we restrict ourselves to small networks and scan all isomorphically distinct networks of six nodes or fewer with an equal number of power producers and consumers. We find that the shapes of the basin stability curves fall into a few rather well-defined classes that can be characterized by the number of edges and the betweenness of the nodes, whereas other positional network quantities matter less.
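For orientation, below is a minimal sketch of how single-node basin stability is typically estimated for the swing equation (second-order Kuramoto model). The toy three-node graph, damping, power injections, perturbation ranges, and convergence criterion are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 3-node grid: one producer (+1) and two consumers (-0.5 each).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
P = np.array([+1.0, -0.5, -0.5])
alpha = 0.1  # damping (illustrative)

def rhs(t, y, K):
    n = len(P)
    theta, omega = y[:n], y[n:]
    coupling = K * (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    return np.concatenate([omega, P - alpha * omega + coupling])

def basin_stability(node, K, trials=50, rng=np.random.default_rng(0)):
    """Fraction of random single-node perturbations that return to synchrony."""
    n = len(P)
    hits = 0
    for _ in range(trials):
        y0 = np.zeros(2 * n)
        y0[node] = rng.uniform(-np.pi, np.pi)    # phase perturbation
        y0[n + node] = rng.uniform(-10.0, 10.0)  # frequency perturbation
        sol = solve_ivp(rhs, (0, 100), y0, args=(K,), rtol=1e-6)
        if np.all(np.abs(sol.y[n:, -1]) < 1e-2):  # frequencies back near zero?
            hits += 1
    return hits / trials

for K in (0.5, 1.0, 2.0, 4.0):   # scan the transmission strength
    print(K, basin_stability(node=0, K=K))
```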
For two qubits independently coupled to their respective structured reservoirs (Lorentzian spectrum), quantum beats for entanglement and discord are found which are the result of quantum interference between correlation oscillations induced by local non-Markovian environments. We also discuss the preservation of quantum correlations by the effective suppression of the spontaneous emission.
We present the results of our stellar photometry and spectroscopy for the new Local Group galaxy VV 124 (UGC 4879) obtained with the 6-m BTA telescope. The presence of only a few bright supergiants in the galaxy indicates that the current star formation is weak. The apparent distribution of stars of different ages in VV 124 does not differ from the analogous distributions in irregular galaxies, but the ratio of the numbers of young and old stars indicates that VV 124 belongs to the rare Irr/Sph type of galaxies. The old stars (red giants) form the most extended structure, a thick disk with an exponential decrease in star number density toward the edge. The young population, unresolvable in images, clearly makes a large contribution to the background emission from the central regions of the galaxy. The presence of young stars is also confirmed by the [O III] emission line visible in the spectra of extended diffuse galactic regions. The mean radial velocity of several components (two bright supergiants, the unresolved stellar population, and the diffuse gas) is v_h = -70+/-15 km/s, and the velocity with which VV 124 falls into the Local Group is v_LG = -12+/-15 km/s. We confirm the distance to the galaxy, D = 1.1+/-0.1 Mpc, and the metallicity of the red giants ([Fe/H] = -1.37) found by Kopylov et al. (2008). VV 124 is located on the periphery of the Local Group, approximately equidistant from M 31 and our Galaxy, and is isolated from other galaxies; the nearest galaxy, Leo A, is 0.5 Mpc away.
Convolutional neural networks (CNNs) are being applied to an increasing number of problems and fields due to their superior performance in classification and regression tasks. Since two of the key operations that CNNs implement are convolution and pooling, this type of network is implicitly designed to act on data described by regular structures such as images. Motivated by the recent interest in processing signals defined on irregular domains, we advocate a CNN architecture that operates on signals supported on graphs. The proposed design replaces the classical convolution not with a node-invariant graph filter (GF), which is the natural generalization of convolution to graph domains, but with a node-varying GF. This filter extracts different local features without increasing the output dimension of each layer and, as a result, bypasses the need for a pooling stage while involving only local operations. A second contribution is to replace the node-varying GF with a hybrid node-varying GF, a new type of GF introduced in this paper. While the alternative architecture can still be run locally without requiring a pooling stage, the number of trainable parameters is smaller and can be rendered independent of the data dimension. Tests are run on a synthetic source localization problem and on the 20NEWS dataset.
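As an illustrative aside, here is a minimal sketch of a node-varying graph filter of the kind described above: every node applies its own set of filter taps to successive graph shifts of the signal. The names, shapes, and the toy shift operator are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def node_varying_gf(S, x, H):
    """Node-varying graph filter.
    S: (N, N) graph shift operator; x: (N,) graph signal; H: (N, K) per-node taps."""
    N, K = H.shape
    y = np.zeros(N)
    shifted = x.copy()          # S^0 x
    for k in range(K):
        y += H[:, k] * shifted  # node i uses its own tap H[i, k]
        shifted = S @ shifted   # next diffusion step: S^{k+1} x
    return y

rng = np.random.default_rng(0)
N, K = 6, 3
S = rng.random((N, N)); S = (S + S.T) / 2   # toy symmetric shift operator
x = rng.standard_normal(N)
H = rng.standard_normal((N, K))
print(node_varying_gf(S, x, H))
```

A node-invariant GF would correspond to every row of H being identical, which recovers the usual polynomial-in-S convolution.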
Policy gradient methods are among the most effective methods for large-scale reinforcement learning, and their empirical success has prompted several works that develop the foundation of their global convergence theory. However, prior works have either required exact gradients or state-action visitation measure based mini-batch stochastic gradients with a diverging batch size, which limit their applicability in practical scenarios. In this paper, we consider classical policy gradient methods that compute an approximate gradient with a single trajectory or a fixed size mini-batch of trajectories under soft-max parametrization and log-barrier regularization, along with the widely-used REINFORCE gradient estimation procedure. By controlling the number of "bad" episodes and resorting to the classical doubling trick, we establish an anytime sub-linear high probability regret bound as well as almost sure global convergence of the average regret with an asymptotically sub-linear rate. These provide the first set of global convergence and sample efficiency results for the well-known REINFORCE algorithm and contribute to a better understanding of its performance in practice.
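To make the ingredients of the preceding abstract concrete, the following is a minimal sketch of REINFORCE with a soft-max parametrization and log-barrier regularization on a toy single-state MDP (a bandit). The reward means, step size, and barrier weight are illustrative assumptions, and the sketch omits the doubling trick and the regret analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
reward_means = np.array([0.2, 0.5, 0.8])   # 3 arms (illustrative)
theta = np.zeros(3)                        # soft-max logits
lam, lr = 0.01, 0.1                        # barrier weight, step size

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for episode in range(2000):
    pi = softmax(theta)
    a = rng.choice(3, p=pi)
    r = rng.normal(reward_means[a], 0.1)        # single-trajectory return
    grad_logpi = -pi
    grad_logpi[a] += 1.0                        # ∇_theta log π(a)
    # Log-barrier term λ Σ_a log π(a) keeps the policy away from the simplex boundary;
    # its gradient w.r.t. theta_j is λ (1 - n π_j) with n = 3 actions.
    grad_barrier = lam * (1.0 - 3 * pi)
    theta += lr * (r * grad_logpi + grad_barrier)

print("learned policy:", softmax(theta))
```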
The suppression of spurious events in the region of interest for neutrinoless double beta decay will play a major role in next-generation experiments. The background of detectors based on the technology of cryogenic calorimeters is expected to be dominated by $\alpha$ particles, which could be disentangled from double beta decay signals by exploiting the difference in the emission of scintillation light. CUPID-0, an array of enriched Zn$^{82}$Se scintillating calorimeters, is the first large-mass demonstrator of this technology. The detector started data-taking in 2017 at the Laboratori Nazionali del Gran Sasso with the aim of proving that the dual read-out of light and heat allows for an efficient suppression of the $\alpha$ background. In this paper we describe the software tools we developed for the analysis of scintillating calorimeters and demonstrate that this technology reaches an unprecedentedly low background for cryogenic calorimeters.
The method exploits the contraction of space to systematically obtain compact solitary solutions, which are provided for the incompressible Euler and Navier-Stokes PDEs. The nonlinear response of momentum advection is moved into a term describing the contracting space. The linear continuity PDE is then solved by means of arbitrarily selected closure functions, and the contracting space is split into two variables. The compactness of some solutions is enhanced by numerically integrating the contracting domain while retaining a solution of the nonlinear PDE. The validation of numerical schemes is demonstrated for the Euler and Navier-Stokes PDEs. As the nonlinear response is isolated in only one spatial dimension, the method permits the validation of arbitrary unstructured meshes and domain geometries by introducing the spatial dimension n+1.
Superlinear scaling in cities, which appears in sociological quantities such as economic productivity and creative output relative to urban population size, has been observed but has not been given a satisfactory theoretical explanation. Here we provide a network model for the superlinear relationship between population size and innovation found in cities, with a reasonable range for the exponent.
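For reference, a minimal sketch of how a superlinear scaling exponent is usually estimated from city data, namely by a log-log fit of output against population. The synthetic data and the exponent value 1.15 below are illustrative assumptions, not the paper's results.

```python
import numpy as np

rng = np.random.default_rng(0)
population = np.logspace(4, 7, 60)                                  # synthetic city sizes
output = population ** 1.15 * np.exp(0.1 * rng.standard_normal(60))  # superlinear, beta ≈ 1.15
beta, log_c = np.polyfit(np.log(population), np.log(output), 1)      # slope of log-log fit
print("estimated scaling exponent beta:", beta)
```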
Using the framework of quasi-Hamiltonian actions, we compute the obstruction to prequantization for the moduli space of flat ${\rm PU}(p)$-bundles over a compact orientable surface with prescribed holonomies around boundary components, where $p>2$ is prime.
From the order-N electronic-structure formulation, a Hamiltonian is derived whose lowest eigenstate is the generalized, or composite-band, Wannier state. This Hamiltonian maps the locality of the Wannier state to that of a virtual impurity state and to a perturbation from a bonding orbital. These ideas are demonstrated for diamond-structure solids, where the Wannier states are constructed by a practical order-N algorithm based on this Hamiltonian. The results give a prototypical picture of Wannier states in covalently bonded systems.
We consider layered superconductors with a flux lattice perpendicular to the layers and random columnar defects parallel to the magnetic field B. We show that the decoupling transition temperature Td, at which the Josephson coupling vanishes, is enhanced by columnar defects by an amount ~B^2 relative to Td. Decoupling by increasing the field can be followed by a reentrant recoupling transition for strong disorder. We also consider a commensurate component of the columnar density and show that its pinning potential is renormalized to zero above a critical long-wavelength disorder. This decommensuration transition may account for a recently observed kink in the melting line.
Let F* be the field of q elements and let P(n,q) denote the projective space of dimension n-1 over F*. We construct a family H^{n}_{k,i} of combinatorial homology modules associated to P(n,q) for a coefficient field F of positive characteristic co-prime to q. As GL(n,q)-representations, these modules are obtained from the permutation action of GL(n,q) on the set of subspaces of (F*)^n. We prove a branching rule for the H^{n}_{k,i} and use it to determine the homology representations completely. Results include a duality theorem, the characterisation of the H^{n}_{k,i} through the standard irreducibles of GL(n,q) over F, and applications.
We study the radiative corrections to all Kl3 decay modes to leading non-trivial order in the chiral effective field theory, working with a fully inclusive prescription on real photon emission. We present new results for Kmu3 modes and update previous results on Ke3 modes. Our analysis provides important theoretical input for the extraction of the CKM element Vus from Kl3 decays.
We show a new duality between the polynomial margin complexity of $f$ and the discrepancy of the function $f \circ \textsf{XOR}$, called an $\textsf{XOR}$ function. Using this duality, we develop polynomial-based techniques for understanding the bounded-error ($\textsf{BPP}$) and the weakly-unbounded-error ($\textsf{PP}$) communication complexities of $\textsf{XOR}$ functions. We show the following. We prove a weak form of an interesting conjecture of Zhang and Shi (Quantum Information and Computation, 2009) (the full conjecture has recently been reported to be independently settled by Hatami and Qian (ArXiv, 2017); however, their techniques are quite different and are not known to yield many of the results we obtain here). Zhang and Shi assert that for symmetric functions $f : \{0, 1\}^n \rightarrow \{-1, 1\}$, the weakly unbounded-error complexity of $f \circ \textsf{XOR}$ is essentially characterized by the number of points $i$ in the set $\{0,1, \dots, n-2\}$ for which $D_f(i) \neq D_f(i+2)$, where $D_f$ is the predicate corresponding to $f$; the number of such points is called the odd-even degree of $f$. We show that the $\textsf{PP}$ complexity of $f \circ \textsf{XOR}$ is $\Omega(k/\log(n/k))$, where $k$ is the odd-even degree of $f$. We resolve a conjecture of a different Zhang characterizing the Threshold of Parity circuit size of symmetric functions in terms of their odd-even degree. We obtain a new proof of the exponential separation between $\textsf{PP}^{cc}$ and $\textsf{UPP}^{cc}$ via an $\textsf{XOR}$ function. We provide a characterization of the approximate spectral norm of symmetric functions, affirming a conjecture of Ada et al. (APPROX-RANDOM, 2012), which has several consequences. Additionally, we prove strong $\textsf{UPP}$ lower bounds for $f \circ \textsf{XOR}$ when $f$ is symmetric and periodic with period $O(n^{1/2-\epsilon})$, for any constant $\epsilon > 0$.
(Abridged). One of the most metal-deficient blue compact galaxies (BCGs), HS 0822+3542 (Z = 1/34 Zsun), is also one of the nearest such objects. The trigger mechanism for its current star formation (SF) burst has remained unclear. We report the discovery of a very blue ((B-V)tot = 0.08 and (V-R)tot = 0.14) low-surface-brightness (LSB; mu_B^0 > 23.4 mag arcsec^-2) dwarf irregular (dIrr) galaxy, named SAO 0822+3545. Its small relative velocity and projected distance of ~11 kpc from the BCG imply their physical association. For this LSB galaxy, we present spectroscopic results, total B, V, R magnitudes, the effective radii and surface brightness (SB), and we describe its morphological properties. We compare the very blue colours of this dwarf with PEGASE.2 models of the colour evolution of a Z = 1/20 Zsun stellar population, and combine this analysis with the data on the LSBD EW(Ha) values. The models that best describe all available observational data depend on the relative fraction of massive stars in the adopted IMF. For a Salpeter IMF with Mup = 120 Msun, the best model includes a "young" single stellar population (SSP) with an age of ~10 Myr and an "old" SSP with an age of ~0.2-10 Gyr; the mass ratio of the old to young components should be in the range 10 to 30. The role of interaction in triggering the galaxies' major SF episodes during the last ~100-200 Myr is discussed. For the BCG, based on spectroscopy with the 6-m telescope, we estimate the physical parameters of its SF region and present the first evidence of an ionized-gas supershell. This pair of dwarfs lies deep within the nearby Lynx-Cancer void, with the nearest bright (L > L*) galaxies at distances > 3 Mpc. This is probably one of the main factors responsible for the unevolved state of HS 0822+3542.
Expressions for the transmission of a uniaxial optically active crystal platelet are provided for an optical axis parallel and perpendicular to the plane of the interface. The optical activity is taken into account by a consistent multipolar expansion of the response of the crystal medium to the passage of an electromagnetic wave. Numerical examples of the effect of the optical activity are given for quartz platelets of chosen thicknesses. The effect of the optical activity on the variation of the transmission of quartz platelets as a function of the angle of incidence is also investigated.
The underlying reasons for the difficulty of unitarily implementing the whole conformal group $SO(4,2)$ in a massless quantum field theory (QFT) are investigated in this paper. First, we demonstrate that the singular action of the subgroup of special conformal transformations (SCT) on the standard Minkowski space $M$ cannot be primarily associated with vacuum radiation problems; the reason is more profound and related to the dynamical breakdown of part of the conformal symmetry (the SCT subgroup, to be precise) when representations of null mass are selected from the representations of the whole conformal group. We then show how the vacuum of the massless QFT radiates under the action of the SCT (usually interpreted as transitions to a uniformly accelerated frame) and calculate exactly the spectrum of the outgoing particles, which proves to be a generalization of the Planckian spectrum, recovered in a particular limit.
Single particle cryogenic electron microscopy (cryo-EM) is an imaging technique capable of recovering the high-resolution 3-D structure of biological macromolecules from many noisy and randomly oriented projection images. One notable approach to 3-D reconstruction, known as Kam's method, relies on the moments of the 2-D images. Inspired by Kam's method, we introduce a rotationally invariant metric between two molecular structures, which does not require 3-D alignment. Further, we introduce a metric between a stack of projection images and a molecular structure, which is invariant to rotations and reflections and does not require performing 3-D reconstruction. Additionally, the latter metric does not assume a uniform distribution of viewing angles. We demonstrate uses of the new metrics on synthetic and experimental datasets, highlighting their ability to measure structural similarity.
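As an illustrative aside to the preceding abstract, here is a minimal sketch of a rotation-invariant comparison between 2-D images in a similar spirit (not the paper's metric): each image is resampled on a polar grid and only the magnitudes of the angular Fourier coefficients are kept, since an in-plane rotation changes them only by a phase. Grid sizes and the nearest-neighbour sampling are illustrative assumptions.

```python
import numpy as np

def polar_invariants(img, n_r=32, n_theta=64):
    """Rotation-invariant features: |FFT over the angular coordinate| of a polar resampling."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    r = np.linspace(0, c, n_r)
    t = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    x = np.clip(np.round(c + rr * np.cos(tt)).astype(int), 0, n - 1)
    y = np.clip(np.round(c + rr * np.sin(tt)).astype(int), 0, n - 1)
    polar = img[x, y]                            # (n_r, n_theta) nearest-neighbour samples
    return np.abs(np.fft.fft(polar, axis=1))     # rotation -> cyclic shift -> phase only

def invariant_distance(img_a, img_b):
    return np.linalg.norm(polar_invariants(img_a) - polar_invariants(img_b))

rng = np.random.default_rng(0)
img = rng.random((65, 65))
print(invariant_distance(img, np.rot90(img)))          # small, up to sampling error
print(invariant_distance(img, rng.random((65, 65))))   # much larger for a different image
```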
To maximize future rewards in this ever-changing world, animals must be able to discover the temporal structure of stimuli and then anticipate or act correctly at the right time. How do animals perceive, maintain, and use time intervals ranging from hundreds of milliseconds to multiple seconds in working memory? How is temporal information processed concurrently with spatial information and decision making? Why are there strong neuronal temporal signals in tasks in which temporal information is not required? A systematic understanding of the underlying neural mechanisms is still lacking. Here, we addressed these problems using supervised training of recurrent neural network models. We reveal that neural networks perceive elapsed time through state evolution along a stereotypical trajectory, maintain time intervals in working memory through the monotonic increase or decrease of the firing rates of interval-tuned neurons, and compare or produce time intervals by scaling the speed of state evolution. Temporal and non-temporal information are coded in mutually orthogonal subspaces, and the state trajectories over time at different values of the non-temporal information are quasi-parallel and isomorphic. Such coding geometry facilitates the generalizability of decoding temporal and non-temporal information across each other. The network structure exhibits multiple feedforward sequences that mutually excite or inhibit each other depending on whether their preferences for non-temporal information are similar or not. We identify four factors that facilitate strong temporal signals in non-timing tasks, including the anticipation of coming events. Our work discloses fundamental computational principles of temporal processing; it is supported by, and makes predictions for, a number of experimental phenomena.
We investigate the effects of the disorder characterising a superconducting thin film on the proximity-induced superconductivity generated by the film (in, e.g., a semiconductor), based on the exact numerical analysis of a three-dimensional microscopic model. To make the problem numerically tractable, we use a recursive Green's function method in combination with a patching approach that exploits the short-range nature of the interface Green's function in the presence of disorder. As a result of the Fermi surface mismatch between the superconductor (SC) and the semiconductor (SM), in combination with the confinement-induced quantization of the transverse SC modes, the proximity effect induced by a clean SC film is typically one to three orders of magnitude smaller than the corresponding quantity for a bulk SC and exhibits huge thickness-dependent variations. The presence of disorder has competing effects: on the one hand, it enhances the proximity-induced superconductivity and suppresses its strong thickness dependence; on the other hand, it generates proximity-induced effective disorder in the SM. The effect of proximity-induced disorder on the topological superconducting phase and the associated Majorana modes is studied nonperturbatively.
This article investigates pathological behavior at the first limit stage in the sequence of inner mantles, obtained by iterating the definition of the mantle to get smaller and smaller inner models. I show: (A) it is possible that the $\omega$-th inner mantle is not a definable class; and (B) it is possible that the $\omega$-th inner mantle is a definable class but does not satisfy $\mathsf{AC}$. This answers a pair of questions of Fuchs, Hamkins, and Reitz [FHR15].
In this work, we continue our study of the formation of particle chains (clusters) inside the annular sediment during the drying of a colloidal droplet on a substrate. The average cluster size was determined by processing experimental data from other authors. We performed a series of calculations and found the value of the model parameter for which the numerical results agree with the experiment. A modification of the previously proposed algorithm is also analyzed here.
Cooperative path-finding in multi-agent systems demands scalable solutions to navigate agents from their origins to destinations without conflict. Despite the breadth of research, scalability remains hampered by increased computational demands in complex environments. This study introduces the multi-agent RRT* potential field (MA-RRT*PF), an innovative algorithm that addresses computational efficiency and path-finding efficacy in dense scenarios. MA-RRT*PF integrates a dynamic potential field with a heuristic method, advancing obstacle avoidance and optimizing the expansion of random trees in congested spaces. The empirical evaluations highlight MA-RRT*PF's significant superiority over conventional multi-agent RRT* (MA-RRT*) in dense environments, offering enhanced performance and solution quality without compromising integrity. This work not only contributes a novel approach to the field of cooperative multi-agent path-finding but also offers a new perspective for practical applications in densely populated settings where traditional methods are less effective.
Consider the discrete 1D Schr\"odinger operator on $\Z$ with an odd $2k$-periodic potential $q$. For small potentials we show that the mapping $q \to$ heights of vertical slits on the quasi-momentum domain (similar to the Marchenko-Ostrovski mapping for the Hill operator) is a local isomorphism and the isospectral set consists of $2^k$ distinct potentials. Finally, the asymptotics of the spectrum are determined as $q \to 0$.
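For orientation, a minimal numerical sketch of the band spectrum of such a discrete periodic Schrödinger operator via Floquet-Bloch reduction to a finite matrix per quasi-momentum. The sample potential below is an arbitrary small 2k-periodic sequence (k = 2) chosen for illustration; it is not taken from the paper and need not satisfy the oddness assumption.

```python
import numpy as np

def bloch_bands(q, n_k=200):
    """Bands of (H psi)(n) = psi(n+1) + psi(n-1) + q(n) psi(n), q periodic with period p."""
    p = len(q)
    thetas = np.linspace(0, 2 * np.pi, n_k, endpoint=False)
    bands = np.empty((n_k, p))
    for i, th in enumerate(thetas):
        H = (np.diag(q.astype(complex))
             + np.diag(np.ones(p - 1), 1)
             + np.diag(np.ones(p - 1), -1))
        H[0, -1] += np.exp(-1j * th)   # Bloch boundary terms for quasi-momentum theta
        H[-1, 0] += np.exp(+1j * th)
        bands[i] = np.linalg.eigvalsh(H)   # real eigenvalues, sorted
    return bands

q = 0.1 * np.array([1.0, -2.0, 3.0, -2.0])   # small period-4 sample potential (k = 2)
b = bloch_bands(q)
print("approximate band intervals:",
      [(b[:, j].min(), b[:, j].max()) for j in range(len(q))])
```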
Kernel techniques are among the most popular and powerful approaches of data science. Among the key features that make kernels ubiquitous are (i) the number of domains they have been designed for, (ii) the Hilbert structure of the function class associated to kernels, which facilitates their statistical analysis, and (iii) their ability to represent probability distributions without loss of information. These properties give rise to the immense success of the Hilbert-Schmidt independence criterion (HSIC), which is able to capture joint independence of random variables under mild conditions and permits closed-form estimators with quadratic computational complexity (w.r.t. the sample size). In order to alleviate the quadratic computational bottleneck in large-scale applications, multiple HSIC approximations have been proposed; however, these estimators are restricted to $M=2$ random variables, do not extend naturally to the $M>2$ case, and lack theoretical guarantees. In this work, we propose an alternative Nystr\"om-based HSIC estimator which handles the $M\ge 2$ case, prove its consistency, and demonstrate its applicability in multiple contexts, including synthetic examples, dependency testing of media annotations, and causal discovery.
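For reference, below is a minimal sketch of the plain quadratic-time HSIC statistic for M = 2 variables with Gaussian kernels; the Nyström acceleration and the M > 2 extension discussed above are not implemented here, and the bandwidths and sample data are illustrative assumptions.

```python
import numpy as np

def gaussian_gram(X, sigma):
    """Gaussian kernel Gram matrix for rows of X."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic(X, Y, sigma_x=1.0, sigma_y=1.0):
    """Biased quadratic-time HSIC estimator: Tr(K H L H) / (n - 1)^2."""
    n = X.shape[0]
    K, L = gaussian_gram(X, sigma_x), gaussian_gram(Y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.standard_normal((200, 1))
print("dependent:  ", hsic(x, x + 0.1 * rng.standard_normal((200, 1))))
print("independent:", hsic(x, rng.standard_normal((200, 1))))
```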
We compare several ConvNets of different depths and regularization techniques with multi-unit macaque IT cortex recordings and assess the impact of depth and regularization on the representational similarity with the primate visual cortex. We find that with increasing depth and validation performance, ConvNet features become closer to cortical IT representations.
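As an illustrative aside, a minimal sketch of the representational similarity analysis (RSA) that such comparisons typically rely on: build representational dissimilarity matrices (RDMs) for model features and neural recordings over the same stimuli, then correlate their entries. The random arrays stand in for ConvNet features and IT recordings, and the metric choices are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
model_features = rng.standard_normal((50, 512))     # 50 stimuli x feature dimension
neural_responses = rng.standard_normal((50, 128))   # 50 stimuli x recorded units

# Condensed RDMs: pairwise (1 - correlation) dissimilarities across stimuli.
rdm_model = pdist(model_features, metric="correlation")
rdm_neural = pdist(neural_responses, metric="correlation")

rho, _ = spearmanr(rdm_model, rdm_neural)
print("RSA similarity (Spearman rho):", rho)
```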
We study a class of noncommutative geometries that give rise to dimensionally reduced Yang-Mills theories. The emerging geometries describe sets of copies of an even dimensional manifold. Similarities to the D-branes in string theory are discussed.
The bubble structure generated by laser-plasma interactions changes in size depending on the local plasma density. The position of the self-injected electrons with respect to the wakefield can be controlled by tailoring the longitudinal plasma density. A regime that enhances the energy of the wakefield-accelerated electrons and improves the beam quality is proposed and achieved using layered plasmas with increasing densities. Both the wakefield size and the electron bunch duration are significantly contracted in this regime. The electrons remain in the strong acceleration phase of the wakefield while their energy spread decreases because of their tight spatial distribution. An electron beam of 0.5 GeV with a relative energy spread of less than 0.01 is obtained in 2.5D PIC simulations.
SAX J0635.2+0533 is a binary pulsar with a very short pulsation period ($P$ = 33.8 ms) and a high long-term spin-down ($\dot P$ $>$ 3.8$\times10^{-13}$ s s$^{-1}$), which suggests a rotation-powered (instead of an accretion-powered) nature for this source. While it was discovered at a flux level around 10$^{-11}$ erg cm$^{-2}$ s$^{-1}$, between 2003 and 2004 this source was detected with XMM-Newton at an average flux of about 10$^{-13}$ erg cm$^{-2}$ s$^{-1}$; moreover, the flux varied by over one order of magnitude on time scales of a few days, sometimes decreasing below $3\times10^{-14}$ erg cm$^{-2}$ s$^{-1}$. Since both the rotation-powered and the accretion-powered scenarios have difficulties explaining these properties, the nature of SAX J0635.2+0533 is still unclear. Here we report on our recent long-term monitoring campaign on SAX J0635.2+0533 carried out with Swift, and on a systematic reanalysis of all the RXTE observations performed between 1999 and 2001. We find that during this time interval the source remained almost always active at a flux level above 10$^{-12}$ erg cm$^{-2}$ s$^{-1}$.
Ontology can be used for the interpretation of natural language. To construct an anti-infective drug ontology, one needs to design and deploy a methodological step to carry out entity discovery and linking. Medical synonym resources have been an important part of medical natural language processing (NLP); however, they suffer from problems such as low precision and low recall. In this study, an NLP approach is adopted to generate candidate entities. An open ontology is analyzed to extract semantic relations. Six word-vector features and word-level features are selected to perform the entity linking. The extraction results of synonyms with a single feature and with different combinations of features are studied. Experiments show that the selected features achieve a precision of 86.77%, a recall of 89.03%, and an F1 score of 87.89%. The paper finally presents the structure of the proposed ontology and its relevant statistical data.
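A quick arithmetic check that the reported scores above are mutually consistent: the F1 score is the harmonic mean of precision and recall.

```python
# F1 = 2 * P * R / (P + R); with the quoted precision and recall this
# reproduces the reported value of 87.89%.
precision, recall = 0.8677, 0.8903
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.4f}")   # ~0.8789
```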
We consider the chaotic motion of low-mass bodies in two-body high-order mean-motion resonances with planets in model planetary systems, and analytically estimate the Lyapunov and diffusion timescales of the motion in multiplets of interacting subresonances corresponding to the mean-motion resonances. We show that the densely distributed (though not overlapping) high-order mean-motion resonances, when certain conditions on the planetary system parameters are satisfied, may produce extended planetary chaotic zones ("zones of weak chaotization") much broader than the well-known connected planetary chaotic zone, the Wisdom gap. This extended planetary chaotic zone covers the orbital range between the 2/1 and 1/1 resonances with the planet. On the other hand, the orbital space interior (closer to the host star) to the 2/1 resonance location is essentially long-term stable. This difference arises because the adiabaticity parameter of the subresonance multiplets depends specifically on the size of the particle's orbit. The revealed effect may control the structure of planetesimal disks in planetary systems: the orbital zone between the 2/1 and 1/1 resonances with a planet should normally be free of low-mass material (only material occasionally captured in the first-order 3/2 or 4/3 resonances may survive), whereas any low-mass population interior to the 2/1 resonance location should normally be long-lived (if not perturbed by secular resonances, which we do not consider in this study).
We present the results of work on a hybrid material composed of a tellurite glass rod doped with nanodiamonds containing nitrogen-vacancy-nitrogen and paramagnetic nitrogen-vacancy color centers. The reported results include details of the tellurite glass and cane fabrication, confocal and wide-field imaging of the nanodiamond distribution in their volume, spectroscopic characterization of their fluorescence, and Optically Detected Magnetic Resonance measurements of magnetic fields and temperatures. Magnetic fields up to 50 G were measured with a sensitivity of 10$^{-5}$ T Hz$^{-1/2}$, while temperature measurements were simultaneously performed with a sensitivity of 74 kHz K$^{-1}$ within an 8 K range around room temperature. In this way, we demonstrate the suitability of such systems for fiber-based magnetometry and thermometry with reasonable performance already in the form of glass rods. At the same time, the rods constitute an interesting starting point for further processing into photonic components, such as microstructured fibers or fiber tapers, for the realization of specialized sensing modalities.
Non-classical correlations arising in complex quantum networks are attracting growing interest, both from a fundamental perspective and for potential applications in information processing. In particular, in an entanglement-swapping scenario a new kind of correlations arises, the so-called nonbilocal correlations, which are incompatible with local realism augmented with the assumption that the sources of states used in the experiment are independent. In practice, however, bilocality tests impose strict constraints on the experimental setup, in particular the presence of shared reference frames between the parties. Here, we address this point experimentally, showing that false-positive nonbilocal quantum correlations can be observed even though the sources of states are independent. To overcome this problem, we propose and demonstrate a new scheme for the violation of bilocality that does not require shared reference frames and thus constitutes an important building block for future investigations of quantum correlations in complex networks.
In perturbative string theory, one is generally interested in asymptotic observables, such as the S-matrix in flat spacetime, and boundary correlation functions in anti-de Sitter spacetime. However, there are backgrounds in which such observables do not exist. We study examples of such backgrounds in 1+1 dimensional string theory. In these examples, the Liouville wall accelerates and can become spacelike in the past and/or future. When that happens, the corresponding null infinity, at which the standard scattering states are defined, is shielded by the Liouville wall. We compute scattering and particle production amplitudes in these backgrounds in the region in parameter space where the wall remains timelike, and discuss the continuation of this picture to the spacelike regime. We also discuss the physics from the point of view of the dynamics of free fermions in backgrounds with a time-dependent Fermi surface.
Masses of the fermions in the SO(10) 16-plet are constructed using only the 10, 120 and 126 scalar multiplets. The mass matrices are restricted to be hermitian, and the theory is constructed to reproduce assumed quark masses, charged lepton masses, and a CKM matrix in accord with data. The remaining free parameters are found by fitting to the light neutrino masses, and the MNS matrices result as predictions.