(abridged) Afterglow light curves are constructed analytically for realistic gamma-ray burst remnants decelerating in either a homogeneous interstellar medium or a stellar wind environment, taking into account the radiative loss of the blast wave, which affects the temporal behavior significantly. Inverse Compton scattering is considered. The inverse Compton effect prolongs the fast-cooling phase markedly, during which the relativistic shock is semi-radiative and the radiation efficiency is approximately constant. It is further shown that the shock remains semi-radiative for quite a long time after it transits into the slow-cooling phase. The temporal decay index of the X-ray afterglow light curve in this semi-radiative phase is more consistent with the observed $\langle\alpha_{X}\rangle \sim 1.3$ than the commonly used adiabatic one. For the inverse Compton component to manifest as a bump, or even to dominate, in the X-ray afterglows during the relativistic stage, the density should be larger than about 1-10 cm$^{-3}$ in the interstellar medium case, or the wind parameter $A_{\ast}$ should be larger than unity in the stellar wind case.
The physical phase of Causal Dynamical Triangulations (CDT) is known to be described by an effective, one-dimensional action in which the three-volumes of the underlying foliation of full CDT play the role of the sole degrees of freedom. Here we map this effective description onto a statistical-physics model of particles distributed on a 1d lattice, with site occupation numbers corresponding to the three-volumes. We identify the emergence of the quantum de Sitter universe observed in CDT with the condensation transition known from similar statistical models. Our model correctly reproduces the shape of the quantum universe and allows us to analytically determine quantum corrections to the size of the universe. We also investigate the phase structure of the model and show that it reproduces all three phases observed in computer simulations of CDT. In addition, we predict that two other phases may exist, depending on the exact form of the discretised effective action and the boundary conditions. We calculate various quantities, such as the distribution of three-volumes in our model, and discuss how they can be compared with CDT.
The interplay between different orders is of fundamental importance in physics. The spontaneous, symmetry-breaking charge order, responsible for the stripe or the nematic phase, has been of great interest in many contexts where strong correlations are present, such as high-temperature superconductivity and the quantum Hall effect. In this article we show the unexpected result that in an interacting two-dimensional electron system, the robustness of the nematic phase, which represents an order in the charge degree of freedom, not only depends on the orbital index of the topmost, half-filled Landau level, but is also strongly correlated with the magnetic order of the system. Intriguingly, when the system is fully magnetized, the nematic phase is particularly robust and persists to much higher temperatures compared to the nematic phases observed previously in quantum Hall systems. Our results give fundamentally new insight into the role of magnetization in stabilizing the nematic phase, while also providing a new knob with which it can be effectively tuned.
We study the non-inertial effects of a rotating frame on a spin-zero, Duffin-Kemmer-Petiau (DKP)-like oscillator in a cosmic string space-time with non-commutative geometry in the momentum space. The spin-zero DKP-like oscillator is obtained from the Klein-Gordon Lagrangian with a non-standard prescription for the oscillator coupling. We find that the solutions of the time-independent radial equation with the non-zero non-commutativity parameter parallel to the string are related to the confluent hypergeometric function. We find the quantized energy eigenvalues of the non-commutative oscillator.
The Landauer-B\"uttiker formula describes electronic quantum transport in nanostructures and molecules. It becomes numerically demanding for simulations of complex or large systems due to, for example, matrix inversion calculations. Recently, the Chebyshev polynomial method has attracted intense interest in numerical simulations of quantum systems due to its high efficiency in parallelization, because the only matrix operation it involves is the product of sparse matrices and vectors. Much progress has been made on Chebyshev polynomial representations of physical quantities for isolated or bulk quantum structures. Here we apply the Chebyshev polynomial method to a typical electronic scattering problem, the Landauer-B\"uttiker formula for the conductance of quantum transport in nanostructures. We first describe the full algorithm based on the standard bath kernel polynomial method (KPM). Then, we present two simple but efficient improvements, one of which has a time consumption remarkably smaller than the direct matrix calculation without KPM. Some typical examples are also presented to illustrate the numerical effectiveness.
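As a hedged illustration of the machinery this abstract refers to, the sketch below uses the standard kernel polynomial method to estimate the density of states of a toy tight-binding Hamiltonian using only matrix-vector products and the Chebyshev recurrence. The chain model, moment count, and Jackson-kernel choice are illustrative assumptions; this is not the paper's transport algorithm.

```python
import numpy as np

# KPM sketch: expand the spectral density of a Hamiltonian H (spectrum
# rescaled into (-1, 1)) in Chebyshev moments mu_m = Tr T_m(H) / dim,
# estimated stochastically with random-phase vectors. Only matvecs are
# needed; H is dense here for brevity but would be sparse in practice.

def kpm_dos(H, n_moments=150, n_random=8, n_energies=400, seed=0):
    rng = np.random.default_rng(seed)
    dim = H.shape[0]
    mu = np.zeros(n_moments)
    for _ in range(n_random):                     # stochastic trace estimate
        v = np.exp(2j * np.pi * rng.random(dim))  # random-phase vector
        t_prev, t_curr = v.copy(), H @ v          # T_0|v>, T_1|v>
        mu[0] += np.vdot(v, t_prev).real
        mu[1] += np.vdot(v, t_curr).real
        for m in range(2, n_moments):             # Chebyshev recurrence
            t_next = 2 * (H @ t_curr) - t_prev
            mu[m] += np.vdot(v, t_next).real
            t_prev, t_curr = t_curr, t_next
    mu /= n_random * dim
    m = np.arange(n_moments)                      # Jackson damping kernel
    q = np.pi / (n_moments + 1)
    g = ((n_moments - m + 1) * np.cos(m * q) + np.sin(m * q) / np.tan(q))
    g /= n_moments + 1
    e = np.linspace(-0.99, 0.99, n_energies)
    tk = np.cos(np.outer(m, np.arccos(e)))        # T_m(E) on the energy grid
    weights = np.where(m == 0, 1.0, 2.0)
    rho = (weights * g * mu) @ tk / (np.pi * np.sqrt(1 - e**2))
    return e, rho

# toy example: 1D chain with hopping 0.45, spectrum inside (-0.9, 0.9)
n = 256
H = 0.45 * (np.eye(n, k=1) + np.eye(n, k=-1))
energies, dos = kpm_dos(H)
```

The density of states integrates to one per site by construction (the m = 0 moment), which gives a quick sanity check on any implementation.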
Transiting exoplanets provide unparalleled access to the fundamental parameters of both extrasolar planets and their host stars. We present limb-darkening coefficients (LDCs) for the exoplanet-hunting CoRoT and Kepler missions. The LDCs are calculated with ATLAS stellar atmospheric model grids and span a wide range of Teff, log g, and metallicity [M/H]. Both CoRoT and Kepler have wide, nonstandard response functions, and are producing a large inventory of high-quality transiting light curves, sensitive to stellar limb darkening. Comparing the stellar model limb darkening to results from the first seven CoRoT planets, we find that better fits are obtained when two model intensities at the limb are excluded from the coefficient calculations. This calculation method can help to avoid a major deficiency present at the limbs of the 1D stellar models.
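A hedged sketch of the kind of fit described, dropping the model intensities nearest the limb before deriving coefficients: the quadratic limb-darkening law $I(\mu)/I(1) = 1 - a(1-\mu) - b(1-\mu)^2$ and the synthetic intensity profile are illustrative assumptions, not ATLAS output or the paper's exact law.

```python
import numpy as np

# Least-squares fit of a quadratic limb-darkening law to a model
# intensity profile I(mu), optionally excluding the n points closest
# to the limb (smallest mu), as the abstract reports gives better fits.

def fit_quadratic_ld(mu, intensity, n_exclude_limb=0):
    """Return (a, b) of I/I(1) = 1 - a(1-mu) - b(1-mu)^2."""
    order = np.argsort(mu)
    mu_s, i_s = mu[order], intensity[order]
    mu_fit = mu_s[n_exclude_limb:]              # drop the limb points
    i_fit = i_s[n_exclude_limb:] / i_s[-1]      # normalize to I(mu = 1)
    design = np.column_stack([1 - mu_fit, (1 - mu_fit) ** 2])
    coeffs, *_ = np.linalg.lstsq(design, 1 - i_fit, rcond=None)
    return coeffs

# synthetic profile generated from known coefficients (illustration only)
mu = np.linspace(0.05, 1.0, 20)
a_true, b_true = 0.4, 0.25
intensity = 1 - a_true * (1 - mu) - b_true * (1 - mu) ** 2
a_fit, b_fit = fit_quadratic_ld(mu, intensity, n_exclude_limb=2)
```

With real model grids the excluded-point count and the functional form would be chosen per the mission bandpass; the synthetic data here merely checks that the fit recovers known coefficients.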
A pathwise construction of discontinuous Brownian motions on metric graphs is given for every possible set of non-local Feller-Wentzell boundary conditions. This construction is achieved by locally decomposing the metric graphs into star graphs, establishing local solutions on these partial graphs, pasting the solutions together, introducing non-local jumps, and verifying the generator of the resulting process.
It is shown that a 4D N=1 softly broken supersymmetric theory with higher-derivative operators in the Kahler or superpotential part of the Lagrangian, and with an otherwise arbitrary superpotential, can be re-formulated as a theory without higher derivatives but with additional (ghost) superfields and modified interactions. The importance of the analytical continuation from Minkowski to Euclidean space-time for the UV behaviour of such theories is discussed in detail. In particular, it is shown that power counting for divergences in Minkowski space-time does not always work in models with higher-derivative operators.
Multi-electron and multi-muon production have been measured at high transverse momentum in electron-proton collisions at HERA. Good overall agreement is found with the Standard Model predictions, which are dominated by photon-photon interactions. Events are observed with a di-electron mass above 100 GeV, a domain where the Standard Model prediction is low.
We study the adjoint and coadjoint representations of a class of Lie groups including the Euclidean group. Despite the fact that these representations are not in general isomorphic, we show that there is a geometrically defined bijection between the sets of adjoint and coadjoint orbits of such groups. In addition, we show that the corresponding orbits, although different, are homotopy equivalent. We also provide a geometric description of the adjoint and coadjoint orbits of the Euclidean and orthogonal groups as a special class of flag manifold, which we call a Hermitian flag manifold. These manifolds consist of flags endowed with complex structures on the quotient spaces that define the flag.
Model evaluation is of crucial importance in modern statistical applications. The construction of the ROC curve and the calculation of the AUC are widely used for evaluating binary classification. Recent research generalizing ROC/AUC analysis to multi-class classification has problems in at least one of four areas: (1) failure to provide sensible plots, (2) sensitivity to imbalanced data, (3) inability to specify mis-classification costs, and (4) inability to quantify evaluation uncertainty. Borrowing from a binomial matrix factorization model, we provide an evaluation metric that summarizes the pair-wise multi-class True Positive Rate (TPR) and False Positive Rate (FPR) with a one-dimensional vector representation. Visualization of the representation vector measures the relative speed of increment between TPR and FPR across all class pairs, which in turn provides a ROC plot for the multi-class counterpart. An integration over those factorized vectors provides a binary-AUC-equivalent summary of classifier performance. Mis-classification weight specification and bootstrapped confidence intervals are also enabled to accommodate a variety of evaluation criteria. To support our findings, we conducted extensive simulation studies and compared our method to the pair-wise averaged AUC statistics on benchmark datasets.
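A hedged sketch of the raw ingredients the abstract's factorization summarizes: the pair-wise TPR/FPR matrices of a multi-class classifier, computed one class pair at a time. The helper name, the fixed threshold, and the assumption that score column order matches class-label order are all illustrative, not the paper's method.

```python
import numpy as np

# For each ordered class pair (i, j), restrict the data to samples of the
# two classes, treat class i as positive and class j as negative, and
# compute TPR and FPR of "predict i" at a fixed score threshold.
# Assumes column a of y_score holds the score for the a-th class label.

def pairwise_rates(y_true, y_score, threshold=0.5):
    classes = np.unique(y_true)
    k = len(classes)
    tpr = np.full((k, k), np.nan)
    fpr = np.full((k, k), np.nan)
    for a, i in enumerate(classes):
        for b, j in enumerate(classes):
            if i == j:
                continue
            mask = np.isin(y_true, [i, j])        # keep only the pair
            pos = y_true[mask] == i
            pred_pos = y_score[mask, a] >= threshold
            tpr[a, b] = np.mean(pred_pos[pos])    # P(predict i | true i)
            fpr[a, b] = np.mean(pred_pos[~pos])   # P(predict i | true j)
    return tpr, fpr
```

Sweeping the threshold over each pair yields the pair-wise ROC curves whose (TPR, FPR) trade-offs the abstract's one-dimensional vector representation compresses.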
We propose a new localization result for the leading eigenvalue and eigenvector of a symmetric matrix $A$. The result exploits the Frobenius inner product between $A$ and a given rank-one landmark matrix $X$. Different choices for $X$ may be used, depending upon the problem under investigation. In particular, we show that the choice where $X$ is the all-ones matrix allows us to estimate the signature of the leading eigenvector of $A$, generalizing previous results on Perron-Frobenius properties of matrices with some negative entries. As another application we consider the problem of community detection in graphs and networks. The problem is solved by means of modularity-based spectral techniques, following the ideas pioneered by Miroslav Fiedler in the mid-1970s. We show that a suitable choice of $X$ can be used to provide new quality guarantees for those techniques, when the network follows a stochastic block model.
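A hedged sketch of the elementary identity behind such landmark constructions, not the paper's localization result itself: for a rank-one landmark $X = xx^T$, the Frobenius inner product satisfies $\langle A, X\rangle_F = x^T A x$, so dividing by $\mathrm{tr}(X) = x^T x$ recovers a Rayleigh quotient and hence a classical lower bound on the leading eigenvalue. The random test matrix is illustrative.

```python
import numpy as np

# Frobenius inner product <A, B> = sum_ij A_ij B_ij = trace(A^T B).
def frobenius_inner(A, B):
    return float(np.sum(A * B))

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
A = (A + A.T) / 2                    # symmetrize

x = np.ones(6)                       # all-ones landmark direction
X = np.outer(x, x)                   # rank-one landmark matrix xx^T

# <A, xx^T> / trace(xx^T) = x^T A x / x^T x, the Rayleigh quotient at x,
# which lower-bounds the leading eigenvalue of the symmetric A.
rayleigh = frobenius_inner(A, X) / np.trace(X)
lam_max = np.linalg.eigvalsh(A)[-1]  # leading eigenvalue
```

Choosing $x$ closer to the (unknown) leading eigenvector tightens the bound, which is one intuition for why a well-chosen landmark $X$ can carry localization information.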
We introduce the first large-scale corpus for long-form question answering, a task requiring elaborate and in-depth answers to open-ended questions. The dataset comprises 270K threads from the Reddit forum ``Explain Like I'm Five'' (ELI5), where an online community provides answers to questions which are comprehensible to five-year-olds. Compared to existing datasets, ELI5 comprises diverse questions requiring multi-sentence answers. We provide a large set of web documents to help answer each question. Automatic and human evaluations show that an abstractive model trained with a multi-task objective outperforms conventional Seq2Seq, language modeling, as well as a strong extractive baseline. However, our best model is still far from human performance, since raters prefer gold responses in over 86% of cases, leaving ample opportunity for future improvement.
In this paper we present a new regularized electric flux volume integral equation (D-VIE) for modeling high-contrast conductive dielectric objects in a broad frequency range. This new formulation is particularly suitable for modeling biological tissues at low frequencies, as required by brain epileptogenic area imaging, but also at higher ones, as required by several applications including, but not limited to, transcranial magnetic and deep brain stimulation (TMS and DBS, respectively). When modeling inhomogeneous objects with high complex permittivities at low frequencies, the traditional D-VIE is ill-conditioned and suffers from numerical instabilities that result in slower convergence and less accurate solutions. In this work we address these shortcomings by leveraging a new set of volume quasi-Helmholtz projectors. Their scaling by the material permittivity matrix allows for the re-balancing of the equation when applied to inhomogeneous scatterers and thereby makes the proposed method accurate and stable, even for objects with high complex permittivity, down to arbitrarily low frequencies. Numerical results, canonical and realistic, corroborate the theory and confirm the stability and accuracy of this new method, both in the quasi-static regime and at higher frequencies.
Individual links in a wireless network may experience unequal fading coherence times due to differences in mobility or scattering environment, a practical scenario where the fundamental limits of communication have been mostly unknown. This paper studies broadcast and multiple access channels where multiple receivers experience unequal fading block lengths, and channel state information (CSI) is not available at the transmitter(s), or for free at any receiver. In other words, the cost of acquiring CSI at the receiver is fully accounted for in the degrees of freedom. In the broadcast channel, the method of product superposition is employed to find the achievable degrees of freedom. We start with unequal coherence intervals with integer ratios. As long as the coherence time is at least twice the number of transmit and receive antennas, these degrees of freedom meet the upper bound in four cases: when the transmitter has fewer antennas than the receivers, when all receivers have the same number of antennas, when the coherence time of one receiver is much shorter than all others, or when all receivers have identical block fading intervals. The degrees of freedom region of the broadcast under identical coherence times was also previously unknown and is settled by the results of this paper. The disparity of coherence times leads to gains that are distinct from those arising from other techniques such as spatial multiplexing or multi-user diversity; this class of gains is denoted coherence diversity. The inner bounds are further extended to the case of multiple receivers experiencing fading block lengths of arbitrary ratio or alignment. Also, in the multiple access channel with unequal coherence times, achievable and outer bounds on the degrees of freedom are obtained.
As we are entering the era of precision cosmology, it is necessary to rely on accurate cosmological predictions from any proposed model of dark matter. In this paper we present a novel approach to the cosmological evolution of scalar fields that eases their analytic and numerical analysis at the background level and at linear order in perturbations. We apply the method to a scalar field endowed with a quadratic potential and revisit its properties as dark matter. Some of the results known in the literature are recovered, and a better understanding of the physical properties of the model is provided. It is shown that the Jeans wavenumber defined as $k_J = a \sqrt{mH}$ is directly related to the suppression of linear perturbations at wavenumbers $k>k_J$. We also discuss some semi-analytical results that are well satisfied by the full numerical solutions obtained from an amended version of the CMB code CLASS. Finally we draw some of the implications that this new treatment of the equations of motion may have for the prediction of cosmological observables.
We demonstrate the classical stability of the weak/Planck hierarchy within the Randall-Sundrum scenario, incorporating the Goldberger-Wise mechanism and higher-derivative interactions in a systematic perturbative expansion. Such higher-derivative interactions are expected if the RS model is the low-energy description of some more fundamental theory. Generically, higher derivatives lead to ill-defined singularities in the vicinity of effective field theory branes. These are carefully treated by the methods of classical renormalization.
The properties of the first generation of stars and their supernova (SN) explosions remain unknown due to the lack of actual observations. Recently, many transient surveys have been conducted, and the feasibility of detecting supernovae (SNe) from Pop III stars is growing. In this paper we study the multicolor light curves of a number of metal-free core-collapse SN models (25-100 M$_{\odot}$) to provide indicators for finding and identifying first-generation SNe. We use mixing-fallback supernova explosion models that explain the observed abundance patterns of metal-poor stars. Numerical calculations of the multicolor light curves are performed using the multigroup radiation hydrodynamics code STELLA. The calculated light curves of metal-free SNe are compared with non-zero-metallicity models and several observed SNe. We have found that the shock breakout characteristics, the evolution of the photospheric velocity, the luminosity, and the duration and color evolution of the plateau - all the SN phases - are helpful for estimating the parameters of the SN progenitor: the mass, the radius, the explosion energy, and the metallicity. We conclude that the multicolor light curves can potentially be used to identify first-generation SNe in current (Subaru/HSC) and future transient surveys (LSST, JWST). They are also suitable for identifying low-metallicity SNe in the nearby Universe (PTF, Pan-STARRS, Gaia).
The detection of Majorana bound states (MBSs) is a central issue in the current investigation of topological superconductors, and the topological Josephson junction is an important system for resolving this issue. In this work, we introduce an external quantum dot (QD) to Majorana Josephson junctions (MJJs), and study the parity flipping of the junction induced by the coupling between the QD and the MBSs. We demonstrate Landau-Zener (LZ) transitions between opposite Majorana parity states when the energy level of the QD is modulated. The resulting parity-flipping processes exhibit voltage signals across the junction. In the presence of a periodic modulation of the QD level, we show Landau-Zener-St\"{u}ckelberg (LZS) interference of the parity states. We demonstrate distinctive interference patterns at distinct driving frequencies. These results can serve as signals for detecting the existence of MBSs.
Discrete Element Methods (DEM) are a useful tool for modeling the fracture of cohesive granular materials. For this kind of application, simple particle shapes (discs in 2D, spheres in 3D) are usually employed. However, dealing with more general particle shapes makes it possible to account for the natural heterogeneity of grains inside real materials. We present a discrete model that mimics cohesion between contacting or non-contacting particles, whatever their shape, in 2D and 3D. The cohesive interactions are made of cohesion points placed on interacting particles, with the aim of representing a cohesive phase lying between the grains. Contact situations are solved according to unilateral contact and Coulomb friction laws. In order to test the developed model, 2D uniaxial compression simulations are performed. Numerical results show the ability of the model to mimic the macroscopic behavior of an aggregate grain subject to axial compression, as well as fracture initiation and propagation. A study of the influence of model and sample parameters provides important information on the ability of the model to reproduce various behaviors.
Exoplanet atmosphere characterisation has become an important tool in understanding exoplanet formation and evolution. However, clouds remain a key challenge for characterisation: upcoming space telescopes (e.g. JWST, ARIEL) and ground-based high-resolution spectrographs (e.g. CRIRES+) will produce data requiring a detailed understanding of cloud formation and cloud effects. We aim to understand how the micro-porosity of cloud particles affects the cloud structure, particle size, and material composition. We examine the spectroscopic effects of micro-porous particles, the particle size distribution, and non-spherical cloud particles. We expanded our kinetic non-equilibrium cloud formation model and use a grid of prescribed 1D (T_gas-p_gas) DRIFT-PHOENIX profiles. We applied the effective medium theory and the Mie theory to model the spectroscopic properties of cloud particles with micro-porosity and a derived particle size distribution. We used a statistical distribution of hollow spheres to represent the effects of non-spherical cloud particles. Highly micro-porous cloud particles (90% vacuum) have a larger surface area, enabling efficient bulk growth higher in the atmosphere than for compact particles. Increases in single-scattering albedo and cross-sectional area for these mineral snowflakes cause the cloud deck to become optically thin only at a wavelength of ~100 ${\rm \mu m}$ instead of at ~20 ${\rm \mu m}$ for compact cloud particles. A significant enhancement in albedo is also seen when cloud particles occur with a locally changing Gaussian size distribution. Non-spherical particles increase the opacity of silicate spectral features, which further increases the wavelength at which the clouds become optically thin. JWST MIRI will be sensitive to signatures of micro-porous and non-spherical cloud particles based on the wavelength at which clouds are optically thin.
Recently, Yang et al. (Quantum Inf Process 18, 74, 2019) proposed a two-party quantum key agreement protocol over a collective noisy channel. They claimed that their protocol ensures that both participants have equal influence on the final shared key. However, this study shows that the participant who announces the permutation operation can manipulate the final shared key by himself/herself without being detected by the other. To avoid this loophole, an improvement is proposed here.
We study the category M consisting of U(sl_{n+1})-modules whose restriction to U(h) is free of rank 1, in particular we classify isomorphism classes of objects in M and determine their submodule structure. This leads to new sl_{n+1}-modules. For n=1 we also find the central characters and derive an explicit formula for taking tensor product with a simple finite dimensional module.
We use the ratio $L_{\rm FIR}/L_{\rm B}$ and the IRAS color index S$_{25}$/S$_{12}$ (both widely used as indices of relative star formation rates in galaxies) to analyse subsets (containing no known AGNs or merging/interacting galaxies) of: (a) the IRAS Bright Galaxy Sample, (b) galaxies from the optically complete RSA sample which have IRAS detections in all four bands, and (c) a volume-limited IR-unselected sample. We confirm that IR-bright barred (SB) galaxies do, on average, have very significantly higher values of the FIR-optical and S$_{25}$/S$_{12}$ ratios (and presumably higher relative star formation rates, SFR) than do unbarred ones; the effect is most obvious in the IR colors. We also confirm that these differences are confined to early-type (S0/a - Sbc) spirals and are not evident among late-type systems (Sc - Sdm). {\it Unlike others, we see no enhancement of the SFR in weakly-barred (SAB) galaxies.} We further confirm that the effect of bars on the SFR is associated with the relative IR luminosity and show that it is detectable only in galaxies with $L_{\rm FIR}/L_{\rm B} \gtrsim 1/3$, suggesting that as soon as they have any effect, bars move their host galaxies into this relatively IR-luminous group. Conversely, for galaxies with $L_{\rm FIR}/L_{\rm B}$ below $\sim 0.1$, this luminosity ratio is {\it lower} among barred than unbarred systems, again confirming and quantifying an earlier result. Although there is no simple physical relation between HI content and star formation, a strong correlation of HI content with the presence of bars has been found for early-type spirals with $L_{\rm FIR}/L_{\rm B} \gtrsim 1/3$. This suggests that the availability of fuel is the factor determining just which galaxies undergo bar-induced starbursts.
A simplified derivation of Yurtsever's result - which states that the entropy of a truncated bosonic Fock space is given by a holographic bound when the energy of the Fock states is constrained gravitationally - is given for asymptotically flat spacetimes of arbitrary dimension d greater than or equal to four. For this purpose, a scalar field confined to a spherical volume in d-dimensional spacetime is considered. Imposing an upper bound on the total energy of the corresponding Fock states, which ensures that the system is in a stable configuration against gravitational collapse, and imposing a cutoff on the maximum energy of the field modes of the order of the Planck energy, leads to an entropy bound of holographic type. A simple derivation of the entropy bound is also given for the fermionic case.
Context. Slowly pulsating B (SPB) stars are main-sequence multi-periodic oscillators that display non-radial gravity modes. For a fraction of these pulsators, 4-year photometric light curves obtained with the Kepler space telescope reveal period spacing patterns from which their internal rotation and mixing can be inferred. In this inference, any direct resonant mode coupling has usually been ignored so far. Aims. We re-analysed the light curves of a sample of 38 known Kepler SPB stars. For 26 of those, the internal structure, including rotation and mixing, was recently inferred from their dipole prograde oscillation modes. Our aim is to detect direct nonlinear resonant mode coupling among the largest-amplitude gravity modes. Methods. We extract up to 200 periodic signals per star with five different iterative prewhitening strategies based on linear and nonlinear regression applied to the light curves. We then identify candidate coupled gravity modes by verifying whether they fulfil resonant phase relations. Results. For 32 of 38 SPB stars we find at least 1 candidate resonance that is detected in both the linear and the best nonlinear regression model fit to the light curve and involves at least one of the two largest-amplitude modes. Conclusions. The majority of the Kepler SPB stars reveal direct nonlinear resonances based on the largest-amplitude modes. These stars are thus prime targets for nonlinear asteroseismic modelling of intermediate-mass dwarfs to assess the importance of mode couplings in probing their internal physics.
AIMS: The separation of foreground contamination from cosmic microwave background (CMB) observations is one of the most challenging and important problems of digital signal processing in cosmology. In the literature, various techniques have been presented, but no general consensus about their real performance and properties has been reached. This is because these techniques have been studied essentially through numerical simulations based on semi-empirical models of the CMB and the Galactic foregrounds. Such models often have different levels of sophistication and/or are based on different physical assumptions (e.g., the number of Galactic components and the level of the noise). Hence, a reliable comparison is difficult. What is actually missing is a statistical analysis of the properties of the proposed methodologies. Here, we consider the "Internal Linear Combination" method (ILC) which, among the separation techniques, requires the smallest number of "a priori" assumptions. This feature is of particular interest in the context of CMB polarization measurements at small angular scales, where the lack of knowledge of the polarized backgrounds represents a serious limit. METHODS: The statistical characteristics of ILC are examined through an analytical approach, and the basic conditions under which it works satisfactorily are established. RESULTS: ILC provides satisfactory results only under rather restrictive conditions. This is a critical fact to take into consideration in planning future ground-based observations (e.g., with ALMA) where, contrary to satellite experiments, there is the possibility of having a certain control over the experimental conditions.
Achieving realistic, vivid, and human-like synthesized conversational gestures conditioned on multi-modal data is still an unsolved problem, due to the lack of available datasets, models, and standard evaluation metrics. To address this, we build the Body-Expression-Audio-Text dataset, BEAT, which has i) 76 hours of high-quality, multi-modal data captured from 30 speakers talking with eight different emotions and in four different languages, and ii) 32 million frame-level emotion and semantic relevance annotations. Our statistical analysis on BEAT demonstrates the correlation of conversational gestures with facial expressions, emotions, and semantics, in addition to the known correlation with audio, text, and speaker identity. Based on this observation, we propose a baseline model, Cascaded Motion Network (CaMN), which consists of the above six modalities modeled in a cascaded architecture for gesture synthesis. To evaluate semantic relevancy, we introduce a metric, Semantic Relevance Gesture Recall (SRGR). Qualitative and quantitative experiments demonstrate the metrics' validity, the quality of the ground truth data, and the baseline's state-of-the-art performance. To the best of our knowledge, BEAT is the largest motion capture dataset for investigating human gestures, and it may contribute to a number of different research fields, including controllable gesture synthesis, cross-modality analysis, and emotional gesture recognition. The data, code, and model are available at https://pantomatrix.github.io/BEAT/.
We propose and analyze an online algorithm for reconstructing a sequence of signals from a limited number of linear measurements. The signals are assumed sparse, with unknown support, and evolve over time according to a generic nonlinear dynamical model. Our algorithm, based on recent theoretical results for $\ell_1$-$\ell_1$ minimization, is recursive and computes the number of measurements to be taken at each time on-the-fly. As an example, we apply the algorithm to compressive video background subtraction, a problem that can be stated as follows: given a set of measurements of a sequence of images with a static background, simultaneously reconstruct each image while separating its foreground from the background. The performance of our method is illustrated on sequences of real images: we observe that it allows a dramatic reduction in the number of measurements with respect to state-of-the-art compressive background subtraction schemes.
Tableaux-based decision procedures for satisfiability of modal and description logics behave quite well in practice, but it is sometimes hard to obtain exact worst-case complexity results using these approaches, especially for EXPTIME-complete logics. In contrast, automata-based approaches often yield algorithms for which optimal worst-case complexity can easily be proved. However, the algorithms obtained this way are usually not only worst-case, but also best-case exponential: they first construct an automaton that is always exponential in the size of the input, and then apply the (polynomial) emptiness test to this large automaton. To overcome this problem, one must try to construct the automaton "on-the-fly" while performing the emptiness test. In this paper we will show that Voronkov's inverse method for the modal logic K can be seen as an on-the-fly realization of the emptiness test done by the automata approach for K. The benefits of this result are two-fold. First, it shows that Voronkov's implementation of the inverse method, which behaves quite well in practice, is an optimized on-the-fly implementation of the automata-based satisfiability procedure for K. Second, it can be used to give a simpler proof of the fact that Voronkov's optimizations do not destroy completeness of the procedure. We will also show that the inverse method can easily be extended to handle global axioms, and that the correspondence to the automata approach still holds in this setting. In particular, the inverse method yields an EXPTIME-algorithm for satisfiability in K w.r.t. global axioms.
We consider the Gelfand problem with general supercritical nonlinearities in the two-dimensional unit ball. In this paper, we prove the non-existence of unstable solutions for any sufficiently small parameter and the uniform boundedness of finite Morse index solutions. As a result, we obtain the existence of a radial singular solution and prove that the bifurcation curve has infinitely many turning points. We remark that these properties are well-known in $N$ dimensions with $3\le N \le 9$ and much less understood in two dimensions. The key idea of the proof is to exploit an interplay between a key gradient estimate of solutions and the supercriticality of the nonlinearities.
In medical imaging, chromosome straightening plays a significant role in the pathological study of chromosomes and in the development of cytogenetic maps. While different approaches to the straightening task exist, typically geometric algorithms are used, whose outputs are characterized by jagged edges or fragments with discontinuous banding patterns. To address these flaws, we propose a novel framework based on image-to-image translation that learns a pertinent mapping for synthesizing straightened chromosomes with uninterrupted banding patterns and preserved details. In addition, to avoid the pitfall of deficient input chromosomes, we construct an augmented dataset using a single curved chromosome image for training the models. Based on this framework, we apply two popular image-to-image translation architectures, U-shape networks and conditional generative adversarial networks, to assess its efficacy. Experiments on a dataset comprising 642 real-world chromosomes demonstrate the superiority of our framework over the geometric method in straightening performance, rendering realistic and continuous chromosome details. Furthermore, our straightened results improve chromosome classification by 0.98%-1.39% in mean accuracy.
Highly charged ions of heavy actinides from uranium to einsteinium are studied theoretically to find optical transitions sensitive to the variation of the fine structure constant. A number of promising transitions have been found in ions with ionisation degree $\sim$~10. All these transitions correspond, in the single-electron approximation, to $6p$ - $5f$ transitions. Many of the transitions are between the ground and excited metastable states of the ions, which means that they can probably be used as optical clock transitions. Some of the ions have more than one clock transition, with different sensitivities to the variation of the fine structure constant $\alpha$. The most promising systems include the Np$^{10+}$, Np$^{9+}$, Pu$^{11+}$, Pu$^{10+}$, Pu$^{9+}$, Pu$^{8+}$, Bk$^{15+}$, Cm$^{12+}$, and Es$^{15+}$ ions.
Motivated by black hole solutions with matter fields outside their horizon, we study the effect of these matter fields on the motion of massless and massive particles. As background we consider a four-dimensional asymptotically AdS black hole with scalar hair. The geodesics are studied numerically and we discuss the differences in the motion of particles between the four-dimensional asymptotically AdS black holes with scalar hair and their no-hair limit, namely Schwarzschild AdS black holes. We find that bounded orbits, akin to planetary orbits, exist in this background; however, the periods associated with circular orbits are modified by the presence of the scalar hair. Moreover, we find that some classical tests, such as the perihelion precession, the deflection of light, and the gravitational time delay, take their standard general relativity values plus a correction term coming from the cosmological constant and the scalar hair. Finally, we identify a specific value of the scalar-hair parameter that explains the discrepancy between theory and observations for the perihelion precession of Mercury and for light deflection.
This paper addresses the problem of enabling inter-machine Ultra-Reliable Low-Latency Communication (URLLC) in future 6G Industrial Internet of Things (IIoT) networks. As far as the Radio Access Network (RAN) is concerned, centralized pre-configured resource allocation requires scheduling grants to be disseminated to the User Equipments (UEs) before uplink transmissions, which is not efficient for URLLC, especially in the case of flexible/unpredictable traffic. To alleviate this burden, we study a distributed, user-centric scheme based on machine learning in which UEs autonomously select their uplink radio resources without waiting for scheduling grants or preconfiguration of connections. Using simulation, we demonstrate that a Multi-Armed Bandit (MAB) approach represents a desirable solution for allocating resources with URLLC in mind in an IIoT environment, for both periodic and aperiodic traffic, even in highly populated networks with aggressive traffic.
The magnetic-field dependences of the longitudinal and Hall resistance of the electron-doped compound Nd$_{2-x}$Ce$_x$CuO$_{4+\delta}$ in the underdoped region, with $x$=0.14 and varying degrees of disorder ($\delta$), were investigated. It was established experimentally that the correlation between the longitudinal resistivity and the Hall resistivity can be analyzed on the basis of the scaling relationship $\rho_{xy}$($B$)$\sim$ $\rho^{\beta}_{xx}$($B$). Across all of the investigated single-crystal Nd$_{2-x}$Ce$_x$CuO$_{4+\delta}$/SrTiO$_3$ films, a universal value $\beta$=1.5 $\pm$0.4 is found. This feature of the electron-doped two-dimensional systems can be associated both with a manifestation of anisotropic $s$-wave or $d$-wave pairing symmetry and with rather strong pinning due to the substantial degree of disorder in the samples under study.
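The scaling exponent in a relationship of the form $\rho_{xy} \sim \rho_{xx}^{\beta}$ is typically extracted by a linear fit in log-log space. The sketch below illustrates the procedure on synthetic data; the generated curves, noise level, and the input value $\beta = 1.5$ are illustrative assumptions, not the measured data:

```python
import numpy as np

# Synthetic longitudinal resistivity rho_xx(B) and Hall resistivity
# obeying rho_xy ~ rho_xx^beta with beta = 1.5 (illustrative values only).
rng = np.random.default_rng(0)
rho_xx = np.linspace(1.0, 10.0, 50)
beta_true = 1.5
rho_xy = rho_xx**beta_true * np.exp(rng.normal(0.0, 0.01, rho_xx.size))

# Fit log(rho_xy) = beta * log(rho_xx) + const by least squares;
# the slope of the log-log fit is the scaling exponent beta.
beta_fit, _ = np.polyfit(np.log(rho_xx), np.log(rho_xy), 1)
print(round(beta_fit, 2))
```

With low multiplicative noise the fitted slope recovers the input exponent closely; on real data the spread of fitted slopes across samples yields the quoted uncertainty.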
In this paper a quasi-linear elliptic equation in the whole Euclidean space is considered. The nonlinearity of the equation is assumed to have exponential growth, or critical growth in view of a Trudinger-Moser type inequality. Under some assumptions on the potential and the nonlinearity, it is proved that the equation has a nontrivial positive weak solution. It is also shown that a perturbation of the equation has two distinct positive weak solutions. The proofs combine a Trudinger-Moser type inequality, the Mountain-pass theorem, and Ekeland's variational principle.
Current state-of-the-art neural machine translation (NMT) uses a deep multi-head self-attention network with no explicit phrase information. However, prior work on statistical machine translation has shown that extending the basic translation unit from words to phrases produces substantial improvements, suggesting that NMT performance could benefit from explicit modeling of phrases. In this work, we present multi-granularity self-attention (Mg-Sa): a neural network that combines multi-head self-attention and phrase modeling. Specifically, we train several attention heads to attend to phrases in either n-gram or syntactic formalism. Moreover, we exploit interactions among phrases to enhance the strength of structure modeling - a commonly-cited weakness of self-attention. Experimental results on WMT14 English-to-German and NIST Chinese-to-English translation tasks show that the proposed approach consistently improves performance. Targeted linguistic analysis reveals that Mg-Sa indeed captures useful phrase information at various levels of granularity.
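As a schematic illustration (not the authors' released code), one attention head can be restricted to fixed n-gram phrases with an attention mask; the phrase partition, dimensions, and random inputs below are all invented for the example:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def phrase_restricted_attention(Q, K, V, ngram):
    """One attention head restricted to non-overlapping n-gram phrases.

    Positions outside a token's phrase are masked out, so the head models
    phrase-internal structure - a simplified stand-in for the n-gram
    phrase partition described in the abstract.
    """
    T, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    phrase_id = np.arange(T) // ngram              # group tokens into n-grams
    mask = phrase_id[:, None] != phrase_id[None, :]
    scores[mask] = -1e9                            # block cross-phrase attention
    return softmax(scores) @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(6, 4))
K = rng.normal(size=(6, 4))
V = rng.normal(size=(6, 4))
out = phrase_restricted_attention(Q, K, V, ngram=2)
print(out.shape)  # (6, 4)
```

Each output row is then a convex combination of the value vectors inside its own phrase only, while other heads remain free to attend globally.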
We report the discovery of a pair of exoplanets co-orbiting the red dwarf star GJ3470. The larger planet, GJ3470-d, was observed in a 14.9617-day orbit and the smaller planet, GJ3470-e, in a 14.9467-day orbit. GJ3470-d is sub-Jupiter sized, with a 1.4% transit depth and a duration of 3 hours, 4 minutes. The smaller planet, GJ3470-e, currently leads the larger planet by approximately 1.146 days and is extending that lead by about 7.5 minutes (0.0052 days) per orbital cycle. It has an average depth of 0.5% and an average duration of 3 hours, 2 minutes. GJ3470-d has been observed on seven separate occasions over a 3-year period, allowing for a very precise orbital period calculation; the last transit was observed by three separate observatories in Oklahoma and Arizona. GJ3470-e has been observed on five occasions over 2 years. Our data appear consistent with two exoplanets in a Horseshoe Exchange orbit. If confirmed, these will be the second and third exoplanets discovered and characterized by amateur astronomers without professional data or assistance, and the first ever discovery of co-orbiting exoplanets in a Horseshoe Exchange orbit.
Knowledge is a formal way of understanding the world, providing human-level cognition and intelligence for next-generation artificial intelligence (AI). One representation of knowledge is the semantic relations between entities. Relation Extraction (RE), a sub-task of information extraction that automatically acquires this knowledge by identifying semantic relations between entities in natural language text, plays a vital role in Natural Language Processing (NLP). Previous studies have documented that techniques based on Deep Neural Networks (DNNs) have become the prevailing approach in this line of research; in particular, supervised and distantly supervised methods based on DNNs are the most popular and reliable solutions for RE. This article 1) introduces some general concepts, and 2) gives a comprehensive overview of DNNs in RE from two points of view: supervised RE, which attempts to improve the standard RE systems, and distant supervision RE, which adopts DNNs to design the sentence encoder and the de-noising method. We further 3) cover some novel methods and recent trends, and discuss possible future research directions for this task.
In this paper, we analyze the steady state maximum overlap time in the M/M/1 queue. We derive the tail distribution of the maximum overlap time, its moments, and its moment generating function. We also analyze the steady state minimum overlap time of adjacent customers and compute its moments and moment generating function. Our results provide new insight into how customers become infected in the M/M/1 queue.
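Adjacent-customer overlap times in the M/M/1 queue are easy to estimate by simulation, which is useful as a sanity check on such closed-form results. The sketch below uses the standard FIFO recursion D_i = max(A_i, D_{i-1}) + S_i; the arrival rate, service rate, and sample size are chosen purely for illustration:

```python
import numpy as np

# Monte Carlo sketch of adjacent-customer overlap times in an M/M/1
# FIFO queue. lam, mu, and n are illustrative choices, not from the paper.
rng = np.random.default_rng(1)
lam, mu, n = 0.5, 1.0, 200_000

arrivals = np.cumsum(rng.exponential(1 / lam, n))
services = rng.exponential(1 / mu, n)

# FIFO departure recursion: D_i = max(A_i, D_{i-1}) + S_i.
departures = np.empty(n)
prev = 0.0
for i in range(n):
    prev = max(arrivals[i], prev) + services[i]
    departures[i] = prev

# Customers i and i+1 are both in the system on [A_{i+1}, D_i]
# (under FIFO, D_{i+1} > D_i), so the overlap is max(0, D_i - A_{i+1}).
overlap = np.maximum(0.0, departures[:-1] - arrivals[1:])
print(round(overlap.mean(), 2))
```

Empirical moments of `overlap` can then be compared against the derived moment formulas for given traffic intensity.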
In biological, glassy, and active systems, various tracers exhibit Laplace-like, i.e., exponential, spreading of the diffusing packet of particles. The limitations of the central limit theorem in fully capturing the behaviors of such diffusive processes, especially in the tails, have been studied using the continuous time random walk model. For cases when the jump length distribution is super-exponential, e.g., a Gaussian, we use large deviations theory and relate it to the appearance of exponential tails. When the jump length distribution is sub-exponential the packet of spreading particles is described by the big jump principle. We demonstrate the applicability of our approach for finite time, indicating that rare events and the asymptotics of the large deviations rate function can be sampled for large length scales within a reasonably short measurement time.
The quantum dynamics of one of the simplest dissipative systems, a particle moving in a constant external field, is studied exactly by taking into account its interaction with a bath of Ohmic spectral density. We apply the main ideas and methods developed in our recent work [1] to a quantum dissipative system with a constant external field. Quantizing the dissipative system, we obtain simple and exact solutions for the coordinate operator of the system in the Heisenberg picture and for the wave function of the composite system-plus-bath in the Schroedinger picture. An effective Hamiltonian for the dissipative system is explicitly derived from these solutions with the Heisenberg-picture method, and the meaning of the wave function governed by it is clarified by analyzing the effect of the Brownian motion. In particular, the general effective Hamiltonian for an arbitrary potential is derived directly with this method in the case when the Brownian motion can be ignored. Using this effective Hamiltonian, we show that the dissipation suppresses the spreading of the wave packet.
Logical Table-to-Text (LT2T) generation is tasked with generating logically faithful sentences from tables. There currently exist two challenges in the field: 1) Faithfulness: how to generate sentences that are factually correct given the table content; 2) Diversity: how to generate multiple sentences that offer different perspectives on the table. This work proposes LoFT, which utilizes logic forms as fact verifiers and content planners to control LT2T generation. Experimental results on the LogicNLG dataset demonstrate that LoFT is the first model that addresses the unfaithfulness and lack-of-diversity issues simultaneously. Our code is publicly available at https://github.com/Yale-LILY/LoFT.
Cosmic ray electron (CRE) acceleration and cooling are important physical processes in astrophysics. We develop an approximate framework to treat CRE physics in the parallel smoothed particle hydrodynamics code Gadget-3. In our methodology, the CRE spectrum of each fluid element is approximated by a single power-law distribution with spatially varying amplitude, upper cut-off, lower cut-off, and spectral index. We consider diffusive shock acceleration as the source of injection, while the sink processes are synchrotron radiation, inverse Compton scattering, and Coulomb scattering. Adiabatic gains and losses are also included. We show that our formalism reproduces the energy and pressure of a freely cooling CRE spectrum with an accuracy better than $90\%$. Both the slope and the intensity of the radio emission computed from the CRE population given by our method in a cosmological hydro-simulation agree well with observations, and our results also show that relaxed clusters have lower fluxes. Finally, we investigate several impacts of the CRE processes on the cosmological hydro-simulation and find that: (1) the pressure of the CRE spectrum is very small and can be ignored; (2) the impact of the CRE processes on the gas phase-space state is up to $3\%$; (3) the CRE processes induce a $5\%$ influence on the mass function in the mass range $10^{12}-10^{13} h^{-1} M_{\odot}$; (4) the gas temperature of massive galaxy clusters is influenced by the CRE processes by up to $\sim 10\%$.
Let $R$ be a commutative ring with unity, $M$ a unitary $R$-module and $G$ a finite abelian group (viewed as a $\mathbb{Z}$-module). The main objective of this paper is to study properties of mod-annihilators of $M$. For $x \in M$, we study the ideals $[x : M] =\{r\in R \mid rM\subseteq Rx\}$ of $R$ corresponding to the mod-annihilator of $M$. We investigate when $[x : M]$ is an essential ideal of $R$ and prove that an arbitrary intersection of essential ideals represented by mod-annihilators is an essential ideal. We observe that $[x : M]$ is injective if and only if $R$ is non-singular and the radical of $R/[x : M]$ is zero. Moreover, if the essential socle of $M$ is non-zero, we show that $[x : M]$ is an intersection of maximal ideals and $[x : M]^2 = [x : M]$. Finally, we discuss the correspondence between essential ideals of $R$ and vertices of the annihilating graphs realized by $M$ over $R$.
Understanding the ballistic effects caused by ion beam irradiation, and linking them with the induced structure, can be key to controlling and predicting the microstructure of irradiated materials. For this purpose, we have investigated ballistic effects from an ion-mixing formalism point of view. Displacement cascades in copper and in an AgCu alloy were obtained using binary collision approximation (BCA) and molecular dynamics (MD) simulations. We employed the BCA-based code MARLOWE for its ability to simulate high-energy displacement cascades. A first set of simulations was performed with both methods on pure copper for energies ranging from 0.5 keV to 20 keV; the comparison of the BCA and MD results shows that a rationally parametrized MARLOWE is predictive. A second set of simulations was then carried out using BCA only: following experimental studies, the AgCu alloy was subjected to 1 MeV krypton ions. The MARLOWE simulations are found to be in good agreement with the experimental results.
We use a Monte Carlo implementation of recently developed models of double diffraction to assess the sensitivity of the LHC experiments to Standard Model Higgs bosons produced in exclusive double diffraction. The signal is difficult to extract, due to experimental limitations related to the first-level trigger and to contamination by the inclusive double diffractive background. Assuming these difficulties can be overcome, the expected signal-to-background ratio is presented as a function of the experimental resolution on the missing mass. For a missing-mass resolution of 2 GeV, a signal-to-background ratio of about 0.5 is obtained; a resolution of 1 GeV brings the ratio to 1. This result is lower than previous estimates, and the discrepancy is explained.
Financial exchanges across the world use limit order books (LOBs) to process orders and match trades. For research purposes it is important to have large scale efficient simulators of LOB dynamics. LOB simulators have previously been implemented in the context of agent-based models (ABMs), reinforcement learning (RL) environments, and generative models, processing order flows from historical data sets and hand-crafted agents alike. For many applications, there is a requirement for processing multiple books, either for the calibration of ABMs or for the training of RL agents. We showcase the first GPU-enabled LOB simulator designed to process thousands of books in parallel, with a notably reduced per-message processing time. The implementation of our simulator - JAX-LOB - is based on design choices that aim to best exploit the powers of JAX without compromising on the realism of LOB-related mechanisms. We integrate JAX-LOB with other JAX packages, to provide an example of how one may address an optimal execution problem with reinforcement learning, and to share some preliminary results from end-to-end RL training on GPUs.
Non-orthogonal multiple access (NOMA) has drawn tremendous attention, being a potential candidate for the spectrum access technology for the fifth-generation (5G) and beyond 5G (B5G) wireless communications standards. Most research related to NOMA focuses on the system performance from Shannon's capacity perspective, which, although a critical system design criterion, fails to quantify the effect of delay constraints imposed by future wireless applications. In this paper, we analyze the performance of a single-input multiple-output (SIMO) two-user downlink NOMA system, in terms of the link-layer achievable rate, known as effective capacity (EC), which captures the performance of the system under a delay-limited quality-of-service (QoS) constraint. For signal combining at the receiver side, we use generalized selection combining (GSC), which bridges the performance gap between the two conventional diversity combining schemes, namely selection combining (SC) and maximal-ratio combining (MRC). We also derive two approximate expressions for the EC of NOMA-GSC which are accurate at low-SNR and at high-SNR, respectively. The analysis reveals a tradeoff between the number of implemented receiver radio-frequency (RF) chains and the achieved performance, and can be used to determine the appropriate number of paths to combine in a practical receiver design.
We carry out generalized-ensemble molecular dynamics simulations of the formation of small helium (He) clusters in bulk tungsten (W), a process of practical relevance for fusion energy production. We calculate the formation free energies of small He clusters at temperatures up to the melting point of W, encompassing the whole range of interest for fusion-energy production. From these, parameters like cluster break-up or formation rates can be calculated, which help to refine models of microstructure evolution in He-irradiated tungsten.
A field k is called anti-Mordellic if every smooth curve over k with a k-point has infinitely many k-points. We prove that for a function field over an anti-Mordellic field, the subfield of constants is defined by a certain universal first order formula. Under additional hypotheses regarding 2-cohomological dimension we prove that algebraic dependence of an n-tuple of elements in such a function field can be described by a first order formula, for each n. We also give a result that lets one distinguish various classes of fields using first order sentences.
We show for $k \geq 2$ that the locally Lipschitz viscosity solution to the $\sigma_k$-Loewner-Nirenberg problem on a given annulus $\{a < |x| < b\}$ is $C^{1,\frac{1}{k}}_{\rm loc}$ in each of $\{a < |x| \leq \sqrt{ab}\}$ and $\{\sqrt{ab} \leq |x| < b\}$ and has a jump in radial derivative across $|x| = \sqrt{ab}$. Furthermore, the solution is not $C^{1,\gamma}_{\rm loc}$ for any $\gamma > \frac{1}{k}$. Optimal regularity for solutions to the $\sigma_k$-Yamabe problem on annuli with finite constant boundary values is also established.
Multi-modal large language models (MLLMs) have been shown to efficiently integrate natural language with visual information to handle multi-modal tasks. However, MLLMs still face a fundamental limitation of hallucinations, where they tend to generate erroneous or fabricated information. In this paper, we address hallucinations in MLLMs from a novel perspective of representation learning. We first analyzed the representation distribution of textual and visual tokens in MLLMs, revealing two important findings: 1) there is a significant gap between textual and visual representations, indicating unsatisfactory cross-modal representation alignment; 2) representations of texts that contain and do not contain hallucinations are entangled, making it challenging to distinguish them. These two observations motivate a simple yet effective method to mitigate hallucinations. Specifically, we introduce contrastive learning into MLLMs and use text with hallucinations as hard negative examples, naturally bringing the representations of non-hallucinative text and visual samples closer while pushing apart the representations of non-hallucinative and hallucinative text. We evaluate our method quantitatively and qualitatively, showing its effectiveness in reducing hallucination occurrences and improving performance across multiple benchmarks. On the MMhal-Bench benchmark, our method obtains a 34.66%/29.5% improvement over the baseline MiniGPT-4/LLaVA. Our code is available at https://github.com/X-PLUG/mPLUG-HalOwl/tree/main/hacl.
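The contrastive idea can be sketched as an InfoNCE-style loss with hallucinated captions as hard negatives. The NumPy snippet below is a schematic illustration, not the released HACL implementation; all embeddings, dimensions, and the temperature value are invented for the example:

```python
import numpy as np

def info_nce_with_hard_negatives(v, t_pos, t_neg, tau=0.07):
    """InfoNCE-style loss: pull a visual embedding v toward its faithful
    caption t_pos and push it away from hallucinated captions t_neg used
    as hard negatives. Embeddings are assumed L2-normalized.
    """
    pos = np.dot(v, t_pos) / tau                  # positive similarity logit
    negs = t_neg @ v / tau                        # hard-negative logits
    logits = np.concatenate([[pos], negs])
    return -pos + np.log(np.exp(logits).sum())    # -log softmax of the positive

def unit(x):
    return x / np.linalg.norm(x)

rng = np.random.default_rng(0)
v = unit(rng.normal(size=16))                     # toy visual embedding
t_pos = unit(v + 0.1 * rng.normal(size=16))       # faithful caption: close to v
t_neg = np.stack([unit(rng.normal(size=16))       # hallucinated captions
                  for _ in range(4)])
loss = info_nce_with_hard_negatives(v, t_pos, t_neg)
print(loss >= 0.0)
```

Minimizing this loss over a batch tightens cross-modal alignment while disentangling hallucinative from non-hallucinative text representations, mirroring the two findings above.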
We present a generalization of multiview varieties as closures of images obtained by projecting subspaces of a given dimension onto several views, from the photographic and geometric points of view. Motivated by applications in Computer Vision for triangulation of world features, we investigate when the associated projection map is generically injective; an essential requirement for successful triangulation. We give a complete characterization of this property by determining two formulae for the dimensions of these varieties. Similarly, we describe for which center arrangements calibration of camera parameters is possible. We explore when the multiview variety is naturally isomorphic to its associated blowup. In the case of generic centers, we give a precise formula for when this occurs.
Lample and Charton (2019) describe a system that uses deep learning technology to compute symbolic, indefinite integrals, and to find symbolic solutions to first- and second-order ordinary differential equations, when the solutions are elementary functions. They found that, over a particular test set, the system could find solutions more successfully than sophisticated packages for symbolic mathematics such as Mathematica run with a long time-out. This is an impressive accomplishment, as far as it goes. However, the system can handle only a quite limited subset of the problems that Mathematica deals with, and the test set has significant built-in biases. Therefore the claim that this outperforms Mathematica on symbolic integration needs to be very much qualified.
We consider solutions to the linear wave equation in the interior region of extremal Kerr black holes. We show that axisymmetric solutions can be extended continuously beyond the Cauchy horizon and moreover that, if we assume suitably fast polynomial decay in time along the event horizon, their local energy is finite. We also extend these results to non-axisymmetric solutions on slowly rotating extremal Kerr-Newman black holes. These results are the analogues of results obtained in [D. Gajic, Linear waves in the interior of extremal black holes I, arXiv:1509.06568] for extremal Reissner-Nordstr\"om and stand in stark contrast to previously established results for the subextremal case, where the local energy was shown to generically blow up at the Cauchy horizon.
Multiphase estimation is a paradigmatic example of a multiparameter problem. When measuring multiple phases embedded in interferometric networks, specially tailored input quantum states achieve enhanced sensitivities compared with both single-parameter and classical estimation schemes. Significant attention has been devoted to defining the optimal strategies, in terms of optimal probe states and optimal measurement operators, for the scenario in which all of the phases are evaluated with respect to a common reference mode. Moreover, these strategies assume unlimited external resources, which is experimentally unrealistic. Here, we optimize a generalized scenario that treats all of the phases on an equal footing and takes into account the resources provided by external references. We show that the absence of an external reference mode reduces the number of simultaneously estimable parameters, owing to the immeasurability of global phases, and that the symmetries of the parameters being estimated dictate the symmetries of the optimal probe states. Finally, we provide insight for constructing optimal measurements in this generalized scenario. The experimental viability of this work underlies its immediate practical importance beyond fundamental physics.
Identifying the level of expertise of its users is important for a system, since it can lead to better interaction through adaptation techniques. Furthermore, this information can be used in offline processes of root cause analysis. However, not much effort has been put into automatically identifying the level of expertise of a user, especially in dialog-based interactions. In this paper we present an approach based on a specific set of task-related features. Based on the distribution of the features among the two classes - Novice and Expert - we used Random Forests as a classification approach; for comparison, we also used a Support Vector Machine classifier. By applying these approaches to data from a real system, Let's Go, we obtained preliminary results that we consider positive, given the difficulty of the task and the lack of competing approaches for comparison.
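A minimal sketch of such a Random Forest vs. SVM comparison; the synthetic features below (turn count, help requests) are hypothetical stand-ins for the task-related dialog features, since the real study uses data from the Let's Go system:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for task-related dialog features; the feature
# names and distributions are invented purely for illustration.
rng = np.random.default_rng(0)
n = 400
novice = np.column_stack([rng.normal(12, 3, n),   # more turns per task
                          rng.normal(3, 1, n)])   # more help requests
expert = np.column_stack([rng.normal(7, 3, n),
                          rng.normal(1, 1, n)])
X = np.vstack([novice, expert])
y = np.array([0] * n + [1] * n)                   # 0 = Novice, 1 = Expert

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
rf_acc = RandomForestClassifier(random_state=0).fit(Xtr, ytr).score(Xte, yte)
svm_acc = SVC().fit(Xtr, ytr).score(Xte, yte)
print(rf_acc > 0.7, svm_acc > 0.7)
```

On well-separated synthetic classes both classifiers perform well; on real dialog data the gap between them is what the comparison is meant to reveal.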
We present a review and some new possible applications of $p$-adic numbers to pre-spacetime physics. It is shown that, instead of the extension $R^n\to Q_p^n$ usually implied in $p$-adic quantum field theory, it is possible to build a model based on the extension $R^n\to Q_p$ with $p=n+2$ and get rid of loop divergences. It is also shown that the concept of mass naturally arises in $p$-adic models as an inverse transition probability with a dimensional constant of proportionality.
The number of open source software projects has been growing exponentially. The major online software repository host, GitHub, has accumulated tens of millions of publicly available Git version-controlled repositories. Although the research potential enabled by the available open source code is clearly substantial, no significant large-scale open source code datasets exist. In this paper, we present the Public Git Archive -- a dataset of 182,014 top-bookmarked Git repositories from GitHub. We describe the novel data retrieval pipeline used to reproduce it, and elaborate on the strategy for performing dataset updates and on legal issues. The Public Git Archive occupies 3.0 TB on disk and is an order of magnitude larger than the current source code datasets. The dataset is made available through HTTP and provides the source code of the projects, the related metadata, and development history. The data retrieval pipeline employs an optimized worker queue model and an optimized archive format to efficiently store forked Git repositories, reducing the amount of data to download and persist. The Public Git Archive aims to open a myriad of new opportunities for ``Big Code`` research.
In this paper, we report the results of constraining dynamical dark energy with a divergence-free parameterization, $w(z) = w_{0} + w_{a}(\frac{\ln(2+z)}{1+z}-\ln2)$, in the presence of spatial curvature and massive neutrinos, using the 7-yr WMAP temperature and polarization data, the power spectrum of LRGs derived from SDSS DR7, the Type Ia supernova data from the Union2 sample, and the new measurements of $H_0$ from HST, via an MCMC global fit. Our focus is on the determination of the spatial curvature, $\Omega_k$, and the total mass of neutrinos, $\sum m_{\nu}$, in such a dynamical dark energy scenario, and on the influence of these factors on the constraints on the dark energy parameters, $w_0$ and $w_a$. We show that $\Omega_k$ and $\sum m_{\nu}$ can be well constrained in this model; the 95% CL limits are $-0.0153<\Omega_k<0.0167$ and $\sum m_{\nu}<0.56$ eV. Compared to the case of a flat universe, we find that the error in $w_0$ is amplified by 25.51% and the error in $w_a$ by 0.14%; compared to the case with zero neutrino mass, the error in $w_0$ is amplified by 12.24% and the error in $w_a$ by 1.63%.
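The parameterization above is straightforward to evaluate numerically. The sketch below (parameter values $w_0=-1$, $w_a=0.5$ chosen purely for illustration) verifies that the $w_a$ term vanishes at $z=0$, so $w(0)=w_0$, and that the parameterization stays finite at high redshift, which is its divergence-free property:

```python
import numpy as np

def w(z, w0, wa):
    """Divergence-free dark-energy equation of state:
    w(z) = w0 + wa * (ln(2+z)/(1+z) - ln 2)."""
    return w0 + wa * (np.log(2 + z) / (1 + z) - np.log(2))

# At z = 0 the wa term vanishes, so w(0) = w0 exactly.
print(w(0.0, -1.0, 0.5))                 # -1.0
# As z -> infinity, ln(2+z)/(1+z) -> 0, so w -> w0 - wa*ln 2 (finite).
print(round(w(1e6, -1.0, 0.5), 3))       # -1.347
```

This boundedness at all redshifts is what distinguishes the form from, e.g., the linear $w(z)=w_0+w_a z$, which diverges at high $z$.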
The spatial-dependent propagation (SDP) model has been successfully used to explain diverse observational phenomena, including the spectral hardening of cosmic-ray nuclei above $200$ GV, the large-scale dipole anisotropy, and the diffuse gamma-ray distribution. In this work, we further apply the SDP model to both electrons and positrons. To account for the excess of positrons above $10$ GeV, an additional local source is introduced, and we also consider a more realistic spiral distribution of background sources. We find that the hardening of the electron spectrum above tens of GeV can be explained in the SDP model, and that both the positron and electron spectra below TeV energies are naturally described. Spatial-dependent propagation with spiral-distributed sources reproduces the total electron spectrum over the whole energy range. Meanwhile, compared with the conventional model, it yields a larger background positron flux, so that the required multiplier of the background positron flux is $1.42$, much smaller than the value required by the conventional model; the shortage of background positron flux is thus resolved. Furthermore, we compute the electron anisotropy under the SDP model and find it well below the observational limit of the Fermi-LAT experiment.
This manuscript revisits theoretical assumptions concerning dynamic mode decomposition (DMD) of Koopman operators, including the existence of lattices of eigenfunctions, common eigenfunctions between Koopman operators, and boundedness and compactness of Koopman operators. Counterexamples that illustrate restrictiveness of the assumptions are provided for each of the assumptions. In particular, this manuscript proves that the native reproducing kernel Hilbert space (RKHS) of the Gaussian RBF kernel function only supports bounded Koopman operators if the dynamics are affine. In addition, a new framework for DMD, that requires only densely defined Koopman operators over RKHSs is introduced, and its effectiveness is demonstrated through numerical examples.
Recently, it has been shown that the massless quantum vacuum state contains entanglement between timelike separated regions of spacetime, in addition to the entanglement between the spacelike separated regions usually considered. Here, we show that timelike entanglement can be extracted from the Minkowski vacuum and converted into ordinary entanglement between two inertial, two-state detectors at the same spatial location -- one coupled to the field in the past and the other coupled to the field in the future. The procedure used here demonstrates a clear time correlation as a requirement for extraction, e.g. if the past detector was active at a quarter to 12:00, then the future detector must wait to become active at precisely a quarter past 12:00 in order to achieve entanglement.
In today's world, image processing plays a crucial role across various fields, from scientific research to industrial applications. One particularly exciting application is image captioning. The potential impact of effective image captioning is vast. It can significantly boost the accuracy of search engines, making it easier to find relevant information. Moreover, it can greatly enhance accessibility for visually impaired individuals, providing them with a more immersive experience of digital content. However, despite its promise, image captioning presents several challenges. One major hurdle is extracting meaningful visual information from images and transforming it into coherent language. This requires bridging the gap between the visual and linguistic domains, a task that demands sophisticated algorithms and models. Our project is focused on addressing these challenges by developing an automatic image captioning architecture that combines the strengths of convolutional neural networks (CNNs) and encoder-decoder models. The CNN model is used to extract the visual features from images, and later, with the help of the encoder-decoder framework, captions are generated. We also performed a comparison of pre-trained CNN models, experimenting with multiple architectures to understand their performance variations. In our quest for optimization, we also explored the integration of frequency regularization techniques to compress the "AlexNet" and "EfficientNetB0" models. We aimed to see whether these compressed models could maintain their effectiveness in generating image captions while being more resource-efficient.
We investigate piecewise linear systems, whose coefficient matrix is a piecewise constant function of the solution itself. Such systems arise, for example, from the numerical solution of linear complementarity problems and of free-surface problems. In particular, we here study their application to the numerical solution of both the (linear) parabolic obstacle problem and the obstacle problem. We propose a class of effective semi-iterative Newton-type methods to find the exact solution of such piecewise linear systems. We prove that these semi-iterative Newton-type methods have a global monotonic convergence property, i.e., the iterates converge monotonically to the exact solution in a finite number of steps. Numerical examples are presented to demonstrate the effectiveness of the proposed methods.
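As an illustration of the kind of Newton-type active-set iteration such piecewise linear systems admit, here is a minimal sketch (not the paper's method) of a primal-dual active-set solver for the linear complementarity problem arising from a 1D obstacle problem with obstacle zero; the grid size, load `f`, and tolerances are illustrative choices:

```python
import numpy as np

def lcp_active_set(A, b, max_iter=100):
    """Primal-dual active-set (semi-smooth Newton) iteration for the LCP:
    find x >= 0 with w = A x - b >= 0 and x_i * w_i = 0 for all i."""
    x = np.linalg.solve(A, b)            # unconstrained first guess
    active = None
    for _ in range(max_iter):
        w = A @ x - b
        new_active = x < w               # components clamped to the obstacle
        if active is not None and np.array_equal(new_active, active):
            break                        # active set is stable: done
        active = new_active
        M, rhs = A.copy(), b.copy()
        idx = np.where(active)[0]
        M[idx, :] = 0.0
        M[idx, idx] = 1.0                # enforce x_i = 0 on the active set
        rhs[idx] = 0.0
        x = np.linalg.solve(M, rhs)      # solve (A x)_i = b_i elsewhere
    return x

# 1D obstacle problem: -u'' >= f, u >= 0, discretized on a uniform grid.
n = 50
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
t = np.linspace(h, 1 - h, n)
f = np.where(t < 0.5, -8.0, 10.0)        # pushes half the membrane into the obstacle
u = lcp_active_set(A, f)
```

For an M-matrix such as the discrete Laplacian above, the active set stabilizes after finitely many steps, mirroring the finite-termination property stated in the abstract.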
We propose a novel trap for confining cold neutral atoms in a microscopic ring using a magneto-electrostatic potential. The trapping potential is derived from a combination of a repulsive magnetic field from a hard drive atom mirror and the attractive potential produced by a charged disk patterned on the hard drive surface. We calculate a trap frequency of [29.7, 42.6, 62.8] kHz and a depth of [16.1, 21.8, 21.8] MHz for [133Cs, 87Rb, 40K], and discuss a simple loading scheme and a method for fabrication. This device provides a one-dimensional potential in a ring geometry that may be of interest to the study of trapped quantum degenerate one-dimensional gases.
Exclusive dilepton production occurs with high cross section in gamma-mediated processes at the LHC. The pure QED process $\gamma\gamma\rightarrow\ell^+\ell^-$ provides the conditions to study particle production with masses at the electroweak scale. By tagging the leading proton from the hard interaction, the Precision Proton Spectrometer (PPS) provides increased sensitivity for selecting exclusive processes. PPS is a detector system that adds tracking and timing information at approximately 210~m from the interaction point around the CMS detector. It is designed to operate at high luminosity with up to 50 interactions per 25~ns bunch crossing to perform measurements of, e.g., the quartic gauge couplings and to search for rare exclusive processes. Since 2016, PPS has been taking data in normal high-luminosity proton-proton LHC collisions. Exclusive dilepton production with proton tagging, the first results obtained with PPS, and the status of the ongoing program are discussed.
This paper presents a model of a dielectric barrier discharge (DBD) that describes and simulates electrical breakdown at atmospheric pressure in pure oxygen using COMSOL Multiphysics, for a better understanding and explanation of the physical behaviour of the species present in the gap. For this model, we simulate a 1D geometry using physics-based kinetic methods. DBDs have several applications, such as ozone generation, surface treatment, light sources, and other environmental industries. The model includes a numerical and a chemical model. Current and voltage characteristics are presented in this paper, as are the densities of existing and newly created species in the gap.
We report B, V, and R band CCD photometry of the Seyfert galaxy NGC 4151 obtained with the 1.0-m telescope at Weihai Observatory of Shandong University and the 1.56-m telescope at Shanghai Astronomical Observatory from 2005 December to 2013 February. Combining all available data from literature, we have constructed a historical light curve from 1910 to 2013 to study the periodicity of the source using three different methods (the Jurkevich method, the Lomb-Scargle periodogram method and the Discrete Correlation Function method). We find possible periods of P_1=4\pm0.1, P_2=7.5\pm0.3 and P_3=15.9\pm0.3 yr.
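One of the three period-search methods named above, the Lomb-Scargle periodogram, can be sketched on synthetic unevenly sampled data using SciPy; the signal, sampling, and frequency grid below are illustrative, not the paper's data:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
# Unevenly sampled light curve with a known period of 10 time units (freq = 0.1).
t = np.sort(rng.uniform(0.0, 100.0, 300))
y = np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.normal(size=300)

freqs = np.linspace(0.01, 0.5, 2000)                      # trial frequencies (cycles/unit)
power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs)   # lombscargle takes angular frequencies
best_freq = freqs[np.argmax(power)]
best_period = 1.0 / best_freq
```

The peak of the periodogram recovers the injected period; on real light curves one would cross-check it against the Jurkevich and discrete-correlation-function methods as in the abstract.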
It is argued that the quantum gravity attractions dynamically generate tiny degenerate Majorana masses for the neutrinos. The unequal masses of the charged leptons then induce a computable neutrino mass matrix with splittings and mixings through the electroweak interactions. In this way the Standard Model including quantum gravity can accommodate and predict the neutrino masses and mixings. Some consequences are pointed out.
Metric learning aims at learning a distance which is consistent with the semantic meaning of the samples. The problem is generally solved by learning an embedding for each sample such that the embeddings of samples of the same category are compact while the embeddings of samples of different categories are spread-out in the feature space. We study the features extracted from the second last layer of a deep neural network based classifier trained with the cross entropy loss on top of the softmax layer. We show that training classifiers with different temperature values of softmax function leads to features with different levels of compactness. Leveraging these insights, we propose a "heating-up" strategy to train a classifier with increasing temperatures, leading the corresponding embeddings to achieve state-of-the-art performance on a variety of metric learning benchmarks.
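The temperature parameter at the heart of this abstract enters the softmax as below; this minimal sketch only illustrates how temperature controls the sharpness (and hence compactness) of the output distribution, not the paper's "heating-up" training schedule:

```python
import numpy as np

def softmax(z, T=1.0):
    """Softmax with temperature T; smaller T yields a sharper distribution."""
    z = np.asarray(z) / T
    z = z - z.max()                 # numerical stability
    e = np.exp(z)
    return e / e.sum()

def entropy(p):
    """Shannon entropy of a probability vector."""
    return -np.sum(p * np.log(p + 1e-12))

logits = np.array([2.0, 1.0, 0.5, -1.0])
sharp = softmax(logits, T=0.5)   # low temperature: confident, compact
soft = softmax(logits, T=4.0)    # high temperature: spread-out
```

Increasing `T` during training, as the "heating-up" strategy does, gradually relaxes how peaked the classifier's outputs must be, which the paper links to the compactness of the learned embeddings.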
Fine's theorem concerns the question of determining the conditions under which a certain set of probabilities for pairs of four bivalent quantities may be taken to be the marginals of an underlying probability distribution. The eight CHSH inequalities are well-known to be necessary conditions, but Fine's theorem is the striking result that they are also a sufficient condition. It has application to the question of finding a local hidden variables theory for measurements of pairs of spins for a system in an EPRB state. Here we present two simple and self-contained proofs of Fine's theorem in which the origins of this non-obvious result can be easily seen. The first is a physically motivated proof which simply notes that this matching problem is solved using a local hidden variables model given by Peres. The second is a straightforward algebraic proof which uses a representation of the probabilities in terms of correlation functions and takes advantage of certain simplifications naturally arising in that representation. A third, unsuccessful attempt at a proof involving the maximum entropy technique is also briefly described.
We derive determinant representations and nonlinear differential equations for the scaled 2-point functions of the 2D Ising model on the cylinder. These equations generalize well-known results for the infinite lattice (Painlev\'e III equation and the equation for the $\tau$-function of Painlev\'e V).
In this paper homogenization of a mathematical model for plant tissue biomechanics is presented. The microscopic model constitutes a strongly coupled system of reaction-diffusion-convection equations for chemical processes in plant cells, the equations of poroelasticity for elastic deformations of plant cell walls and middle lamella, and Stokes equations for fluid flow inside the cells. The chemical process in cells and the elastic properties of cell walls and middle lamella are coupled because elastic moduli depend on densities involved in chemical reactions, whereas chemical reactions depend on mechanical stresses. Using homogenization techniques we derive rigorously a macroscopic model for plant biomechanics. To pass to the limit in the nonlinear reaction terms, which depend on elastic strain, we prove the strong two-scale convergence of the displacement gradient and velocity field.
Sampling-based motion planning algorithms are very effective at finding solutions in high-dimensional continuous state spaces as they do not require prior approximations of the problem domain compared to traditional discrete graph-based searches. The anytime version of the Rapidly-exploring Random Trees (RRT) algorithm, denoted as RRT*, often finds high-quality solutions by incrementally approximating and searching the problem domain through random sampling. However, due to its low sampling efficiency and slow convergence rate, research has proposed many variants of RRT*, incorporating different heuristics and sampling strategies to overcome the constraints in complex planning problems. Yet, these approaches address specific convergence aspects of RRT* limitations, leaving a need for a sampling-based algorithm that can quickly find better solutions in complex high-dimensional state spaces with a faster convergence rate for practical motion planning applications. This article unifies and leverages the greedy search and heuristic techniques used in various RRT* variants to develop a greedy version of the anytime Rapidly-exploring Random Trees algorithm, denoted as Greedy RRT* (G-RRT*). It improves the initial solution-finding time of RRT* by maintaining two trees rooted at both the start and goal ends, advancing toward each other using greedy connection heuristics. It also accelerates the convergence rate of RRT* by introducing a greedy version of direct informed sampling procedure, which guides the sampling towards the promising region of the problem domain based on heuristics. We validate our approach on simulated planning problems, manipulation problems on Barrett WAM Arms, and on a self-reconfigurable robot, Panthera. Results show that G-RRT* produces asymptotically optimal solution paths and outperforms state-of-the-art RRT* variants, especially in high-dimensional planning problems.
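The "direct informed sampling" that G-RRT* builds on restricts samples to the ellipsoid of states that could still improve the current best solution cost. A minimal 2D sketch of that base sampler follows (the greedy variant described in the abstract is not reproduced here; the start, goal, and cost values are illustrative):

```python
import math
import random

def informed_sample(start, goal, c_best, rng=random):
    """Uniformly sample the 2D ellipse with foci `start`/`goal` containing
    exactly the points through which a path of length <= c_best can pass."""
    c_min = math.dist(start, goal)
    cx, cy = (start[0] + goal[0]) / 2.0, (start[1] + goal[1]) / 2.0
    r1 = c_best / 2.0                               # semi-major axis
    r2 = math.sqrt(c_best**2 - c_min**2) / 2.0      # semi-minor axis
    ang = math.atan2(goal[1] - start[1], goal[0] - start[0])
    # Uniform sample in the unit disk, then stretch and rotate.
    theta = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(rng.uniform(0.0, 1.0))
    ux, uy = r1 * r * math.cos(theta), r2 * r * math.sin(theta)
    x = cx + ux * math.cos(ang) - uy * math.sin(ang)
    y = cy + ux * math.sin(ang) + uy * math.cos(ang)
    return (x, y)

start, goal, c_best = (0.0, 0.0), (8.0, 6.0), 14.0
pts = [informed_sample(start, goal, c_best) for _ in range(1000)]
```

Every sample satisfies the focal-distance bound, so planner effort is never spent on states that cannot shorten the current solution; shrinking `c_best` as better paths are found is what accelerates convergence.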
In this article, we demonstrate that in a transport model of particles with kinetic constraints, long-lived spatial structures are responsible for the blocking dynamics and the decrease of the current at strong driving field. Coexistence between mobile and blocked regions can be anticipated by a first-order transition in the large deviation function for the current. By a study of the system under confinement, we are able to study finite-size effects and extract a typical length between mobile regions.
This paper is devoted to the study of the leading twist distribution amplitudes of P-wave nonrelativistic mesons. It is shown that at leading order in the relative velocity of the quark-antiquark pair inside the mesons, these distribution amplitudes can be expressed through one universal function. As an example, the distribution amplitudes of the P-wave charmonia are considered. Within QCD sum rules, a model for the universal function of the P-wave charmonia is built. In addition, relations are found between the moments of the universal function and the nonrelativistic QCD matrix elements that control relativistic corrections to any amplitude involving P-wave charmonia. Our calculation shows that the characteristic size of these corrections is of order ~30%.
We determine many of the atoms of the algebraic lattices arising in $\mathfrak{q}$-theory of finite semigroups.
In this paper, we consider function-indexed normalized weighted integrated periodograms for equidistantly sampled multivariate continuous-time state space models, which are multivariate continuous-time ARMA processes. The sampling distance is fixed and the driving L\'evy process has at least a finite fourth moment. Under different assumptions on the function space and the moments of the driving L\'evy process, we derive a central limit theorem for the function-indexed normalized weighted integrated periodogram; in each version, either the assumption on the function space or the moment assumption on the L\'evy process is the weaker one. Furthermore, we show the weak convergence in both the space of continuous functions and in the dual space to a Gaussian process and give an explicit representation of the covariance function. The results can be used to derive the asymptotic behavior of the Whittle estimator and to construct goodness-of-fit test statistics such as the Grenander-Rosenblatt statistic and the Cram\'er-von Mises statistic. We present the exact limit distributions of both statistics and show their performance through a simulation study.
In this paper, we present an on-line fully dynamic algorithm for maintaining the strongly connected components of a directed graph in a shared-memory architecture. The edges and vertices are added or deleted concurrently by a fixed number of threads. To the best of our knowledge, this is the first work to propose a linearizable concurrent directed graph, built using both ordered and unordered list-based sets. We provide an empirical comparison against sequential and coarse-grained implementations. The results show that our algorithm's throughput improves by 3x to 6x depending on the workload distribution and application. We believe there are many applications for on-line graphs. Finally, we show how the algorithm can be extended to community detection in on-line graphs.
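For reference, the sequential baseline that such a dynamic algorithm is compared against recomputes strongly connected components from scratch. A minimal sketch using Tarjan's algorithm (not the paper's concurrent data structure; the example graph is illustrative):

```python
def tarjan_scc(graph):
    """Tarjan's algorithm: strongly connected components of a digraph
    given as {vertex: [successors]}. Returns a list of components."""
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

g = {0: [1], 1: [2], 2: [0, 3], 3: [4], 4: [3], 5: []}
components = tarjan_scc(g)
```

A fully dynamic, concurrent algorithm avoids rerunning this linear-time pass after every edge or vertex update, which is where the reported 3x to 6x throughput gain comes from.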
We determine the couplings of the graviscalar radion in Randall-Sundrum models to Standard Model fields propagating in the bulk of the space, taking into account effects arising from the dynamics of the Goldberger-Wise scalar that stabilizes the size of the extra dimension. The leading corrections to the radion couplings are shown to arise from direct contact interactions between the Goldberger-Wise scalar and the Standard Model fields. We obtain a detailed interpretation of the results in terms of the holographic dual of the radion, the dilaton. In doing so, we determine how the familiar identification of the parameters on the two sides of the AdS/CFT correspondence is modified in the presence of couplings of the bulk Standard Model fields to the Goldberger-Wise scalar. We find that corrections to the form of the dilaton couplings from effects associated with the stabilization of the extra dimension are suppressed by the square of the ratio of the dilaton mass to the Kaluza-Klein scale, in good agreement with results from the CFT side of the correspondence.
A reversible adsorption-desorption parking process in one dimension is studied. An exact solution for the equilibrium properties is obtained. The coverage near saturation depends logarithmically on the ratio between the adsorption rate, $k_+$, and the desorption rate, $k_-$: $\rho_{\rm eq}\cong 1-1/\log(k_+/k_-)$ when $k_+\gg k_-$. A time-dependent version of the reversible problem with immediate adsorption ($k_+=\infty$) is also considered. Both heuristic arguments and numerical simulations reveal a logarithmically slow approach to the completely covered state, $1-\rho(t)\sim 1/\log(t)$.
We consider the asymptotic behaviour of the marginal maximum likelihood empirical Bayes posterior distribution in a general setting. First we characterize the set where the maximum marginal likelihood estimator is located with high probability. Then we provide oracle-type upper and lower bounds for the contraction rates of the empirical Bayes posterior. We also show that the hierarchical Bayes posterior achieves the same contraction rate as the maximum marginal likelihood empirical Bayes posterior. We demonstrate the applicability of our general results for various models and prior distributions by deriving upper and lower bounds for the contraction rates of the corresponding empirical and hierarchical Bayes posterior distributions.
Observations of the $\gamma$-ray emission around star clusters, isolated supernova remnants, and pulsar wind nebulae indicate that the cosmic-ray (CR) diffusion coefficient near acceleration sites can be suppressed by a large factor compared to the Galaxy average. We explore the effects of such local suppression of CR diffusion on galaxy evolution using simulations of isolated disk galaxies with regular and high gas fractions. Our results show that while CR propagation with constant diffusivity can make gaseous disks more stable by increasing the midplane pressure, large-scale CR pressure gradients cannot prevent local fragmentation when the disk is unstable. In contrast, when CR diffusivity is suppressed in star-forming regions, the accumulation of CRs in these regions results in strong local pressure gradients that prevent the formation of massive gaseous clumps. As a result, the distribution of dense gas and star formation changes qualitatively: a globally unstable gaseous disk does not violently fragment into massive star-forming clumps but maintains a regular grand-design spiral structure. This effect regulates star formation and disk structure and is qualitatively different from and complementary to the global role of CRs in vertical hydrostatic support of the gaseous disk and in driving galactic winds.
The effect of the 3-dimensional strain state on the magnetocaloric properties of epitaxial La0.8Ca0.2MnO3 (LCMO) thin films grown on two types of substrates, SrTiO3 (001) (STO) and LaAlO3 (001) (LAO), has been studied as a function of film thickness within the range of 25 to 300 nm. The STO substrate imposes an in-plane tensile biaxial strain, while the LAO substrate imposes an in-plane compressive biaxial strain. The in-plane biaxial strain imposed on LCMO by the STO substrate relaxes more rapidly than that imposed by the LAO substrate, but LCMO/STO and LCMO/LAO show maximum entropy changes of 12.1 J/kg K and 3.2 J/kg K, respectively, at a critical thickness of 75 nm (at 6 T applied magnetic field). LCMO/LAO is found to exhibit a wider transition temperature region, with a full width at half maximum (FWHM) of the dM/dT vs. T curve of 40 K, compared to 33 K for LCMO/STO. This broadening of the transition region indicates that a table-like magnetocaloric effect (MCE) is attainable by changing the strain type. The maximum relative cooling power, 361 J/kg for LCMO/STO and 339 J/kg for LCMO/LAO, is also observed at a thickness of 75 nm. The Curie temperature varies with thickness, reflecting the variation of the ferromagnetic interaction strength due to strain relaxation. The film thickness and substrate-induced lattice strain are thus shown to be significant parameters for controlling the MCE. The highest MCE response at a particular thickness shows the possibility of tuning the MCE in other devices by optimizing thickness.
With the global color symmetry model extended to finite chemical potential, we study the density dependence of the local and nonlocal scalar quark condensates in nuclear matter. The calculated results indicate that the quark condensates increase smoothly with increasing nuclear matter density before the critical value (about 12$\rho_0$) is reached. They also show that chiral symmetry is restored suddenly as the density of nuclear matter reaches its critical value. Meanwhile, the nonlocal quark condensate in nuclear matter changes nonmonotonically with the space-time distance among the quarks.
Web applications are relied upon by many for the services they provide. It is essential that applications implement appropriate security measures to prevent security incidents. Currently, web applications focus resources towards the preventative side of security. Whilst prevention is an essential part of the security process, developers must also implement a level of attack awareness into their web applications. Being able to detect when an attack is occurring provides applications with the ability to execute responses against malicious users in an attempt to slow down or deter their attacks. This research seeks to improve web application security by identifying malicious behaviour from within the context of web applications using our tool BlackWatch. The tool is a Python-based application which analyses suspicious events occurring within client web applications, with the objective of identifying malicious patterns of behaviour. Based on the results from a preliminary study, BlackWatch was effective at detecting attacks from both authenticated, and unauthenticated users. Furthermore, user tests with developers indicated BlackWatch was user friendly, and was easy to integrate into existing applications. Future work seeks to develop the BlackWatch solution further for public release.
This paper studies a prototype of inverse initial boundary value problems whose governing equation is the heat equation in three dimensions. An unknown discontinuity embedded in a three-dimensional heat conductive body is considered. A {\it single} set of the temperature and heat flux on the lateral boundary for a fixed observation time is given as an observation datum. It is shown that this datum yields the minimum length of broken paths that start at a given point outside the body, go to a point on the boundary of the unknown discontinuity and return to a point on the boundary of the body, under some conditions on the input heat flux, the unknown discontinuity and the body. This is new information obtained by using the enclosure method.
Many prediction domains, such as ad placement, recommendation, trajectory prediction, and document summarization, require predicting a set or list of options. Such lists are often evaluated using submodular reward functions that measure both quality and diversity. We propose a simple, efficient, and provably near-optimal approach to optimizing such prediction problems based on no-regret learning. Our method leverages a surprising result from online submodular optimization: a single no-regret online learner can compete with an optimal sequence of predictions. Compared to previous work, which either learn a sequence of classifiers or rely on stronger assumptions such as realizability, we ensure both data-efficiency as well as performance guarantees in the fully agnostic setting. Experiments validate the efficiency and applicability of the approach on a wide range of problems including manipulator trajectory optimization, news recommendation and document summarization.
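The offline benchmark that the no-regret learner competes with is the greedy algorithm for monotone submodular list optimization. A minimal sketch on a toy coverage objective follows (the article names, topics, and list length are illustrative, not the paper's experiments):

```python
def greedy_list(candidates, coverage, k):
    """Greedy construction of a k-item list maximizing a monotone submodular
    coverage function; guarantees a (1 - 1/e) approximation to the optimum."""
    chosen, covered = [], set()
    for _ in range(k):
        # Pick the item with the largest marginal gain in covered topics.
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: len(covered | coverage[c]))
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Toy news-recommendation example: each article covers a set of topics.
coverage = {
    "a": {"politics", "economy"},
    "b": {"sports"},
    "c": {"politics"},
    "d": {"economy", "tech", "science"},
}
chosen, covered = greedy_list(list(coverage), coverage, k=2)
```

The surprising result quoted in the abstract is that a single no-regret online learner can match the quality of this greedy sequence of position-wise choices without ever seeing the submodular objective in advance.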
The chromatic polynomial $P(G,x)$ of a graph $G$ of order $n$ can be expressed as $\sum\limits_{i=1}^n(-1)^{n-i}a_{i}x^i$, where $a_i$ is interpreted as the number of broken-cycle free spanning subgraphs of $G$ with exactly $i$ components. The parameter $\epsilon(G)=\sum\limits_{i=1}^n (n-i)a_i/\sum\limits_{i=1}^n a_i$ is the mean size of a broken-cycle-free spanning subgraph of $G$. In this article, we confirm and strengthen a conjecture proposed by Lundow and Markstr\"{o}m in 2006 that $\epsilon(T_n)< \epsilon(G)<\epsilon(K_n)$ holds for any connected graph $G$ of order $n$ which is neither the complete graph $K_n$ nor a tree $T_n$ of order $n$. The most crucial step of our proof is to obtain the interpretation of all $a_i$'s by the number of acyclic orientations of $G$.
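The quantities $a_i$ and $\epsilon(G)$ can be computed exactly for small graphs from the chromatic polynomial via deletion-contraction; here is a minimal sketch verifying $\epsilon(T_n) < \epsilon(K_n)$ for $n = 4$ (exact arithmetic via `fractions`):

```python
from fractions import Fraction

def chromatic_coeffs(vertices, edges):
    """Coefficients c with P(G, x) = sum_i c[i] * x**i, computed by
    deletion-contraction: P(G) = P(G - e) - P(G / e)."""
    if not edges:
        c = [0] * (len(vertices) + 1)
        c[len(vertices)] = 1
        return c
    e = next(iter(edges))
    u, v = tuple(e)
    deleted = chromatic_coeffs(vertices, edges - {e})
    contracted_edges = set()
    for f in edges - {e}:                 # merge v into u, drop loops
        a, b = tuple(f)
        a, b = (u if a == v else a), (u if b == v else b)
        if a != b:
            contracted_edges.add(frozenset((a, b)))
    contracted = chromatic_coeffs(vertices - {v}, frozenset(contracted_edges))
    out = [0] * max(len(deleted), len(contracted))
    for i, ci in enumerate(deleted):
        out[i] += ci
    for i, ci in enumerate(contracted):
        out[i] -= ci
    return out

def epsilon(n, edge_list):
    """Mean size of a broken-cycle-free spanning subgraph of G."""
    edges = frozenset(frozenset(e) for e in edge_list)
    c = chromatic_coeffs(frozenset(range(n)), edges)
    a = [(-1) ** (n - i) * c[i] for i in range(n + 1)]   # a_i >= 0
    return Fraction(sum((n - i) * a[i] for i in range(n + 1)), sum(a))

# Path (a tree) versus the complete graph on 4 vertices:
eps_tree = epsilon(4, [(0, 1), (1, 2), (2, 3)])
eps_K4 = epsilon(4, [(i, j) for i in range(4) for j in range(i + 1, 4)])
```

For the path, $P(T_4, x) = x(x-1)^3$ gives $\epsilon(T_4) = 3/2$, while $P(K_4, x) = x(x-1)(x-2)(x-3)$ gives $\epsilon(K_4) = 23/12$, consistent with the inequality the article proves.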
We are given a video of a person performing a certain activity, from which we extract a controllable model. The model generates novel image sequences of that person, according to arbitrary user-defined control signals, typically marking the displacement of the moving body. The generated video can have an arbitrary background, and effectively capture both the dynamics and appearance of the person. The method is based on two networks. The first network maps a current pose, and a single-instance control signal to the next pose. The second network maps the current pose, the new pose, and a given background, to an output frame. Both networks include multiple novelties that enable high-quality performance. This is demonstrated on multiple characters extracted from various videos of dancers and athletes.
Storing intermediate frame segmentations as memory for long-range context modeling, spatial-temporal memory-based methods have recently showcased impressive results in semi-supervised video object segmentation (SVOS). However, these methods face two key limitations: 1) relying on non-local pixel-level matching to read memory, resulting in noisy retrieved features for segmentation; 2) segmenting each object independently without interaction. These shortcomings make the memory-based methods struggle in similar object and multi-object segmentation. To address these issues, we propose a query modulation method, termed QMVOS. This method summarizes object features into dynamic queries and then treats them as dynamic filters for mask prediction, thereby providing high-level descriptions and object-level perception for the model. Efficient and effective multi-object interactions are realized through inter-query attention. Extensive experiments demonstrate that our method can bring significant improvements to the memory-based SVOS method and achieve competitive performance on standard SVOS benchmarks. The code is available at https://github.com/zht8506/QMVOS.
Subword tokenization has been widely successful in text-based natural language processing (NLP) tasks with Transformer-based models. As Transformer models become increasingly popular in symbolic music-related studies, it is imperative to investigate the efficacy of subword tokenization in the symbolic music domain. In this paper, we explore subword tokenization techniques, such as byte-pair encoding (BPE), in symbolic music generation and its impact on the overall structure of generated songs. Our experiments are based on three types of MIDI datasets: single track-melody only, multi-track with a single instrument, and multi-track and multi-instrument. We apply subword tokenization on post-musical tokenization schemes and find that it enables the generation of longer songs at the same time and improves the overall structure of the generated music in terms of objective metrics like structure indicator (SI), Pitch Class Entropy, etc. We also compare two subword tokenization methods, BPE and Unigram, and observe that both methods lead to consistent improvements. Our study suggests that subword tokenization is a promising technique for symbolic music generation and may have broader implications for music composition, particularly in cases involving complex data such as multi-track songs.
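The core BPE step applied to symbolic music tokens simply fuses the most frequent adjacent token pair; a minimal sketch on a toy melody corpus (the token names are illustrative, not a specific MIDI tokenization scheme):

```python
from collections import Counter

def learn_bpe_merges(sequences, n_merges):
    """Learn byte-pair-encoding merges over token sequences: repeatedly
    fuse the most frequent adjacent token pair into a new compound token."""
    seqs = [list(s) for s in sequences]
    merges = []
    for _ in range(n_merges):
        pairs = Counter()
        for s in seqs:
            pairs.update(zip(s, s[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merges.append((a, b))
        merged = a + "_" + b
        for k, s in enumerate(seqs):        # apply the merge everywhere
            out, i = [], 0
            while i < len(s):
                if i + 1 < len(s) and s[i] == a and s[i + 1] == b:
                    out.append(merged)
                    i += 2
                else:
                    out.append(s[i])
                    i += 1
            seqs[k] = out
    return merges, seqs

# Toy "melody" token streams with a recurring NOTE_60 -> DUR_4 motif.
corpus = [["NOTE_60", "DUR_4", "NOTE_62", "DUR_4", "NOTE_60", "DUR_4"],
          ["NOTE_60", "DUR_4", "NOTE_64", "DUR_2"]]
merges, encoded = learn_bpe_merges(corpus, n_merges=1)
```

Shorter encoded sequences are what let a fixed-context Transformer generate longer songs, as the abstract reports; recurring motifs become single vocabulary items.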
We review the path integral method wherein quantum systems are mapped with Feynman's path integrals onto a classical system of "ring-polymers" and then simulated with the Monte Carlo technique. Bose or Fermi statistics correspond to possible "cross-linking" of polymers. As proposed by Feynman, superfluidity and Bose condensation result from macroscopic exchange of bosons. To map fermions onto a positive probability distribution, one must restrict the paths to lie in regions where the fermion density matrix is positive. We discuss a recent application to the two-component electron-hole plasma. At low temperature excitons and bi-excitons form. We have used nodal surfaces incorporating paired fermions and see evidence of a Bose condensation in the energy, specific heat and superfluid density. In the restricted path integral picture, pairing appears as intertwined electron-hole paths. Bose condensation occurs when these intertwined paths wind around the periodic boundaries.
We use more than 4,500 microflares from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) microflare data set (Christe et al., 2008, Ap. J., 677, 1385) to estimate electron densities and volumetric filling factors of microflare loops using a cooling time analysis. We show that if the filling factor is assumed to be unity, the calculated conductive cooling times are much shorter than the observed flare decay times, which in turn are much shorter than the calculated radiative cooling times. This is likely unphysical, but the contradiction can be resolved by assuming the radiative and conductive cooling times are comparable, which is valid when the flare loop temperature is a maximum and when external heating can be ignored. We find that the resultant radiative and conductive cooling times are comparable to observed decay times, which has been used as an assumption in some previous studies. The inferred electron densities have a mean value of 10^11.6 cm^-3 and filling factors have a mean of 10^-3.7. The filling factors are lower and densities are higher than previous estimates for large flares, but are similar to those found for two microflares by Moore et al. (Ap. J., 526, 505, 1999).
This paper investigates the anti-jamming performance of the NR-DCSK system. We consider several practical jamming environments, including broad-band jamming (BBJ), partial-time jamming (PTJ), tone jamming (TJ) with both single and multiple tones, and sweep jamming (SWJ). We first analytically derive the bit error rates of the considered system under the BBJ and PTJ environments in closed form. Our derived results show that the system performance under these two jamming environments improves as $P$ increases, where $P$ is the parameter of the NR-DCSK modulation scheme denoting the number of times a chaotic sample is repeated. In addition, our results demonstrate that for the PTJ, the optimal value of the jamming factor is close to zero when the jamming power is small; however, it increases and approaches one as the jamming power grows. We then investigate the performance of the considered system under the TJ and SWJ environments via Monte Carlo simulations. Our simulations show that single-tone jamming causes a more significant performance degradation than its multi-tone counterpart. Moreover, we point out that the system performance is significantly degraded when the starting frequency of the sweep jammer is close to the carrier frequency of the transmitted chaotic signals, the sweep bandwidth is small, and the sweep time is half of the transmitted bit duration.
This paper uses model symmetries in the instrumental variable (IV) regression to derive an invariant test for the causal structural parameter. Contrary to popular belief, we show that there exist model symmetries when equation errors are heteroskedastic and autocorrelated (HAC). Our theory is consistent with existing results for the homoskedastic model (Andrews, Moreira, and Stock (2006) and Chamberlain (2007)). We use these symmetries to propose the conditional integrated likelihood (CIL) test for the causality parameter in the over-identified model. Theoretical and numerical findings show that the CIL test performs well compared to other tests in terms of power and implementation. We recommend that practitioners use the Anderson-Rubin (AR) test in the just-identified model, and the CIL test in the over-identified model.
Dark solitons in superfluid Bose gases decay through the snake instability mechanism, unless they are strongly confined. Recent experiments in superfluid Fermi gases have also interpreted soliton decay via this mechanism. However, we show using both an effective field numerical simulation and a perturbative analysis that there is a qualitative difference between soliton decay in the BEC- and BCS-regimes. On the BEC-side of the interaction domain, the characteristic snaking deformations are induced by fluctuations of the amplitude of the order parameter, while on the BCS-side, fluctuations of the phase destroy the soliton core through the formation of local Josephson currents. The latter mechanism is qualitatively different from the snaking instability and this difference should be experimentally detectable.