We consider a cell-free massive multiple-input multiple-output (CFmMIMO) network operating in dynamic time division duplex (DTDD). The switching point between the uplink (UL) and downlink (DL) data transmission phases can be adapted dynamically to the instantaneous quality-of-service (QoS) requirements in order to improve energy efficiency (EE). To this end, we formulate the problem of jointly optimizing the DTDD switching point, the UL and DL power control coefficients, and the large-scale fading decoding (LSFD) weights for EE maximization. We then propose an iterative algorithm that solves this challenging problem via successive convex approximation, yielding an approximate stationary solution. Simulation results show that optimizing the switching points remarkably improves EE compared with baseline schemes that adjust switching points heuristically.
The interaction between off-resonant laser pulses and excitons in monolayer transition metal dichalcogenides is attracting increasing interest as a route for the valley-selective coherent control of the exciton properties. Here, we extend the classification of the known off-resonant phenomena by unveiling the impact of a strong THz field on the excitonic resonances of monolayer MoS$_2$. We observe that the THz pump pulse causes a selective modification of the coherence lifetime of the excitons, while keeping their oscillator strength and peak energy unchanged. We rationalize these results theoretically by invoking a hitherto unobserved manifestation of the Franz-Keldysh effect on an exciton resonance. As the modulation depth of the optical absorption reaches values as large as 0.05 dB/nm at room temperature, our findings open the way to the use of semiconducting transition metal dichalcogenides as compact and efficient platforms for high-speed electroabsorption devices.
We study stripes in cuprates within the one-band and the three-band Hubbard model. Magnetic and charge excitations are described within the time-dependent Gutzwiller approximation. A variety of experiments (charge profile from resonant soft X-ray scattering, incommensurability vs. doping, optical excitations, magnetic excitations, etc.) are described within the same approach.
We propose a way to distinguish cusp effects from genuine states and demonstrate that not all of the recently observed $X$, $Y$, $Z$ states can be purely kinematic effects. In particular, we show that the narrow near-threshold structures in elastic channels call for nearby poles of the $S$-matrix, since a normal kinematic cusp cannot produce such narrow structures in the elastic channels, in contrast to genuine $S$-matrix poles. In addition, we discuss how spectra can be used to distinguish between different scenarios proposed for the structure of those poles, such as hadro-quarkonia, tetraquarks and hadronic molecules. The basic tool employed is heavy quark spin symmetry.
Motion planning in the configuration space (C-space) offers benefits such as smooth trajectories, but it becomes more complex as the degrees of freedom (DOF) increase, owing to the direct relation between the dimensionality of the search space and the DOF. Self-organizing neural networks (SONN), and their best-known candidate, the Self-Organizing Map, have proven to be useful tools for C-space reduction while preserving its underlying topology, as presented in [29]. In this work, we extend our previous study with additional models and adapt the approach from human motion data towards robot kinematics. The evaluation includes the best-performing models from [29] and three additional SONN architectures, representing the consequent continuation of this previous work. Generated trajectories, planned with the different SONN models, were successfully tested in a robot simulation.
Let $p: \mathcal{E} \to \mathcal{S}$ be a pre-cohesive geometric morphism. We show that the least subtopos of $\mathcal{E}$ containing both the subcategories $p^*: \mathcal{S} \to \mathcal{E}$ and $p^!: \mathcal{S} \to \mathcal{E}$ exists, and that it coincides with the least subtopos containing $p^*2$, where 2 denotes the subobject classifier of $\mathcal{S}$.
The X-ray flares of NGC 5905, RX J1242.6-1119A, and RX J1624.9+7554 observed by Chandra in 2001 and 2002 have been suggested as candidate tidal disruption events. The distinct features observed in these events may be used to determine the type of star tidally disrupted by a massive black hole. We investigate these three events, focusing on the differences between the tidal disruption of a giant star and that of a main-sequence star, which result from their different mass-radius relations. We argue that the X-ray flare properties can be modeled by the partial stripping of the outer layers of a solar-type star, while the tidal disruption of a giant star is excluded completely. This result may be useful for understanding the growth of a supermassive black hole by capturing stars, as opposed to growth through continuous mass accretion.
The method of this paper is my original creation. A new method for solving linear differential equations is proposed. The main conclusion of this paper is that linear ordinary differential equations of arbitrary order with variable coefficients can be solved by recursion and reduction of order under conditions that are easily satisfied in practical applications.
Let $T$ be a tree; a vertex of degree one is called a leaf. The set of leaves of $T$ is denoted by $Leaf(T)$. The subtree $T-Leaf(T)$ of $T$ is called the stem of $T$ and denoted by $Stem(T)$. In this note, we give a sharp sufficient condition for a $K_{1,t}$-free graph to have a spanning tree whose stem has few leaves. By applying the main result, we improve several previous related results.
Current, near-term quantum devices have shown great progress in recent years, culminating with a demonstration of quantum supremacy. In the medium term, however, quantum machines will need to transition to greater reliability through error correction, likely through promising techniques such as surface codes, which are well suited for near-term devices with limited qubit connectivity. We discover that quantum memory, particularly resonant cavities with transmon qubits arranged in a 2.5D architecture, can efficiently implement surface codes with substantial hardware savings and performance/fidelity gains. Specifically, we *virtualize logical qubits* by storing them in layers distributed across qubit memories connected to each transmon. Surprisingly, distributing each logical qubit across many memories has a minimal impact on fault tolerance and results in substantially more efficient operations. Our design permits fast transversal CNOT operations between logical qubits sharing the same physical address, which are 6x faster than lattice surgery CNOTs. We develop a novel embedding which saves ~10x in transmons, with another 2x from an additional optimization for compactness. Although Virtualized Logical Qubits (VLQ) pays a 10x penalty in serialization, advantages in the transversal CNOT and area efficiency result in performance comparable to 2D transmon-only architectures. Our simulations show fault tolerance comparable to 2D architectures while saving substantial hardware. Furthermore, VLQ can produce magic states 1.22x faster for a fixed number of transmon qubits. This is a critical benchmark for future fault-tolerant quantum computers. VLQ substantially reduces the hardware requirements for fault tolerance and puts within reach a proof-of-concept experimental demonstration of around 10 logical qubits, requiring only 11 transmons and 9 attached cavities in total.
We study the dependence of the thermal conductivity of single-walled nanotubes (SWNT) on chirality, isotope impurity, tube length, and temperature by the nonequilibrium molecular dynamics method with accurate potentials. It is found that, contrary to the electronic conductivity, the thermal conductivity is insensitive to chirality. Isotope impurity, however, can reduce the thermal conductivity by up to 60% and change its temperature dependence. We also find that the tube-length dependence of the thermal conductivity differs for nanotubes of different radii at different temperatures.
We have seen that if \phi: M_n(\C) \rightarrow M_n(\C) is a unital q-positive map and \nu is a type II Powers weight, then the boundary weight double (\phi, \nu) induces a unique (up to conjugacy) type II_0 E_0-semigroup. Let \phi: M_n(\C) \rightarrow M_n(\C) and \psi: M_{n'}(\C) \rightarrow M_{n'}(\C) be unital rank one q-positive maps, so for some states \rho \in M_n(\C)^* and \rho' \in M_{n'}(\C)^*, we have \phi(A)=\rho(A)I_n and \psi(D) = \rho'(D)I_{n'} for all A \in M_n(\C) and D \in M_{n'}(\C). We find that if \nu and \eta are arbitrary type II Powers weights, then (\phi, \nu) and (\psi, \eta) induce non-cocycle conjugate E_0-semigroups if \rho and \rho' have different eigenvalue lists. We then completely classify the q-corners and hyper maximal q-corners from \phi to \psi, obtaining the following result: If \nu is a type II Powers weight of the form \nu(\sqrt{I - \Lambda(1)} B \sqrt{I - \Lambda(1)})=(f,Bf), then the E_0-semigroups induced by (\phi,\nu) and (\psi, \nu) are cocycle conjugate if and only if n=n' and \phi and \psi are conjugate.
We introduce an $A_\infty$ map from the cubical chain complex of the based loop space of the space of Lagrangian submanifolds with Legendrian boundary in a Liouville manifold, $C_{*}(\Omega_{L} \mathcal{L}\mathit{ag})$, to the wrapped Floer cohomology of a Lagrangian submanifold, $\mathcal{CW}^{-*}(L,L)$. In the case of a cotangent bundle and a Lagrangian cofiber, the composition of our map with a previously constructed map $\mathcal{CW}^*(L,L) \to C_{*}(\Omega_q Q)$ shows that the latter map is split surjective.
Radiation hydrodynamics simulations based on the one-fluid two-temperature model may violate the law of energy conservation because the governing equations are expressed in a nonconservative formulation. Here, we maintain the important physical requirements by employing a strategy based on the key concept that the mathematical structures associated with the conservative and nonconservative equations are preserved, even at the discrete level. To this end, we discretize the conservation laws and transform them via exact algebraic operations. The proposed scheme keeps the global conservation errors within round-off level. In addition, a numerical experiment on the shock tube problem shows that the proposed scheme agrees well with the jump conditions at the discontinuities given by the Rankine-Hugoniot relations. The generalized derivation allows us to employ arbitrary central-difference, artificial-dissipation, and Runge-Kutta methods.
In relation to a thesis put forward by Marx Wartofsky, we seek to show that a historiography of mathematics requires an analysis of the ontology of the part of mathematics under scrutiny. Following Ian Hacking, we point out that in the history of mathematics the amount of contingency is larger than is usually thought. As a case study, we analyze the historians' approach to interpreting James Gregory's expression ultimate terms in his paper attempting to prove the irrationality of pi. Here Gregory referred to the last or ultimate terms of a series. More broadly, we analyze the following questions: which modern framework is more appropriate for interpreting the procedures at work in texts from the early history of infinitesimal analysis? as well as the related question: what is a logical theory that is close to something early modern mathematicians could have used when studying infinite series and quadrature problems? We argue that what has been routinely viewed from the viewpoint of classical analysis as an example of an "unrigorous" practice, in fact finds close procedural proxies in modern infinitesimal theories. We analyze a mix of social and religious reasons that had led to the suppression of both the religious order of Gregory's teacher degli Angeli, and Gregory's books at Venice, in the late 1660s.
We study the following coupled Schr\"odinger system
$$\begin{cases} -\Delta u+u=u^{2^*-1}+\beta u^{\frac{2^*}{2}-1}v^{\frac{2^*}{2}}+\lambda_1 u^{\alpha-1}, & x\in \mathbb{R}^N,\\ -\Delta v+v=v^{2^*-1}+\beta u^{\frac{2^*}{2}}v^{\frac{2^*}{2}-1}+\lambda_2 v^{r-1}, & x\in \mathbb{R}^N,\\ u,v>0, & x\in \mathbb{R}^N, \end{cases}$$
where $N\geq 5$, $\lambda_1,\lambda_2>0$, $\beta\neq 0$, $2<\alpha,r<2^*$, and $2^*\triangleq \frac{2N}{N-2}$. Note that the nonlinearity and the coupling terms are both critical. Using the Mountain Pass Theorem, Ekeland's variational principle and the Nehari manifold, we show that this critical system has a positive radial solution for positive $\beta$ and for some negative $\beta$, respectively.
When the face and all other body parts are covered, sometimes the only evidence available to identify a person is their hand geometry: not even the whole hand, but only two fingers (the index and the middle fingers) shown while making the victory sign, as seen in many terrorist videos. This paper investigates for the first time a new way to identify persons, particularly terrorists, from their victory sign. We have created a new database for this purpose using a mobile phone camera, imaging the victory signs of 50 different persons over two sessions. Simple measurements of the fingers, in addition to Hu moments of the finger areas, were used to extract the geometric features of the visible part of the hand after segmentation. The experimental results using the KNN classifier were encouraging for most of the recorded persons, with total identification accuracy of about 40% to 93%, depending on the features, distance metric, and K used.
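A minimal sketch of such a pipeline, assuming OpenCV for segmentation and Hu moments and scikit-learn for the KNN classifier; the Otsu thresholding step and the extra width/height features below are illustrative stand-ins, not the authors' exact implementation:

```python
# Sketch: segment the finger region, extract Hu moments plus simple geometric
# measurements, and classify with KNN. Library choices and thresholds are
# assumptions, not the paper's exact method.
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def finger_features(image_path):
    """Segment the hand region and return a geometric feature vector."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Otsu thresholding as a stand-in for the paper's segmentation step.
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()  # 7 Hu moments of the region
    # Log-scale the Hu moments, which span many orders of magnitude.
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
    x, y, w, h = cv2.boundingRect(mask)              # crude finger-size proxies
    return np.concatenate([hu, [w, h, w / max(h, 1)]])

# train_paths/train_ids would come from the 50-person, two-session database:
# X = np.stack([finger_features(p) for p in train_paths])
# knn = KNeighborsClassifier(n_neighbors=1).fit(X, train_ids)
# pred = knn.predict(finger_features(test_path).reshape(1, -1))
```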
The utility of the optical coherence tomography signal intensity for measuring glucose concentration has been analysed in tissue phantoms and in blood samples from human subjects. Diffusion-equation-based calculations as well as in-vivo OCT signal measurements confirm the cyclic correlation of signal intensity with glucose concentration and scatterer size.
We have applied laser calorimetry to the measurement of optical absorption in mono-crystalline sapphire at cryogenic temperatures. Sapphire is a promising candidate for the mirror substrates of the Large-scale Cryogenic Gravitational wave Telescope. The optical absorption coefficients of different sapphire samples at a wavelength of 1.064 μm at 5 K were found to average 90 ppm/cm.
This Ph.D. thesis focuses on developing a system for high-quality speech synthesis and voice conversion. Vocoder-based speech analysis, manipulation, and synthesis play a crucial role in various kinds of statistical parametric speech research. Although there are vocoding methods which yield close-to-natural synthesized speech, they are typically computationally expensive and thus not suitable for real-time implementation, especially in embedded environments. Therefore, there is a need for simple and computationally feasible digital signal processing algorithms for generating high-quality and natural-sounding synthesized speech. In this dissertation, I propose a solution to extract optimal acoustic features and a new waveform generator to achieve higher sound quality and conversion accuracy by applying advances in deep learning, while remaining computationally efficient. This work resulted in five thesis groups, which are briefly summarized below.
Unconventional density wave (UDW) was speculated to be a possible electronic ground state of the excitonic insulator as early as 1968. The recent surge of interest in UDW is partly due to the proposal that the pseudogap phase in high-T_c cuprate superconductors is a d-wave density wave (d-DW). Here we review our recent works on UDW within the framework of mean-field theory. In particular, we have shown that many properties of the low-temperature phase (LTP) in alpha-(BEDT-TTF)_2MHg(SCN)_4 with M=K, Rb and Tl are well characterized in terms of unconventional charge density wave (UCDW). In this identification the Landau quantization of the quasiparticle motion in a magnetic field (the Nersesyan effect) plays a crucial role. Indeed, the angular-dependent magnetoresistance and the negative giant Nernst effect are two hallmarks of UDW.
In their thought-provoking paper [1], Belkin et al. illustrate and discuss the shape of risk curves in the context of modern high-complexity learners. Given a fixed training sample size $n$, such curves show the risk of a learner as a function of some (approximate) measure of its complexity $N$. With $N$ the number of features, these curves are also referred to as feature curves. A salient observation in [1] is that these curves can display, what they call, double descent: with increasing $N$, the risk initially decreases, attains a minimum, and then increases until $N$ equals $n$, where the training data is fitted perfectly. Increasing $N$ even further, the risk decreases a second and final time, creating a peak at $N=n$. This twofold descent may come as a surprise, but as opposed to what [1] reports, it has not been overlooked historically. Our letter draws attention to some original, earlier findings, of interest to contemporary machine learning.
A coarse-grained variational model is used to investigate the polymer dynamics of barrier crossing for a diverse set of two-state folding proteins. The model gives reliable folding rate predictions provided excluded volume terms that induce minor structural cooperativity are included in the interaction potential. In general, the cooperative folding routes have sharper interfaces between folded and unfolded regions of the folding nucleus and higher free energy barriers. The calculated free energy barriers are strongly correlated with native topology as characterized by contact order. Increasing the rigidity of the folding nucleus changes the local structure of the transition state ensemble non-uniformly across the set of proteins studied. Nevertheless, the calculated prefactors k0 are found to be relatively uniform across the protein set, with variation in 1/k0 of less than a factor of five. This direct calculation justifies the common assumption that the prefactor is roughly the same for all small two-state folding proteins. Using the barrier heights obtained from the model and the best-fit monomer relaxation time of 30 ns, we find 1/k0 to be approximately 1-5 us (with an average 1/k0 of about 4 us). This model can be extended to study subtle aspects of folding such as the variation of the folding rate with stability or solvent viscosity, and the onset of downhill folding.
Quantitative analysis of in utero human brain development is crucial for characterizing abnormalities. Magnetic resonance image (MRI) segmentation is therefore an asset for quantitative analysis. However, the development of automated segmentation methods is hampered by the scarce availability of annotated fetal brain MRI datasets and the limited variability within these cohorts. In this context, we propose to leverage the power of fetal brain MRI super-resolution (SR) reconstruction methods to generate multiple reconstructions of a single subject with different parameters, providing an efficient, tuning-free data augmentation strategy. Overall, this strategy significantly improves the generalization of segmentation methods across SR pipelines.
This paper concerns quantum heuristics based on Mixer Hamiltonians that restrict the investigation to a specific subspace. Mixer-Hamiltonian-based approaches can be included in the QAOA algorithm, and Mixer Hamiltonians can be regarded as mapping functions from the set of qubit strings to the set of solutions. This offers an approach very similar to the indirect representations commonly used in the routing and scheduling communities for decades. Since the initial publication of Cheng et al. (1996), numerous propositions in OR have relied on 1-to-n mapping functions, including the split algorithm that transforms one TSP solution into a VRP solution. Our objective is first to give a compact and readable presentation of these Mixer Hamiltonians in light of the functional analogies that exist between OR community practices and the quantum field. Our experiments encompass numerical evaluations of circuits using IBM's Qiskit library, in line with the theoretical considerations.
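To make the OR-side analogy concrete, here is a hedged sketch of the split decoding mentioned above (a Prins-style 1-to-n mapping from a giant tour to capacitated routes); the data layout and cost conventions are our own illustrative assumptions, not taken from the paper:

```python
# Split decoding: a "giant tour" (one TSP-like permutation of customers) is
# mapped to an optimal capacitated-route partition via a shortest path on an
# auxiliary DAG. Assumes every single customer fits within vehicle capacity.
import math

def split(tour, demand, dist, depot, capacity):
    """tour: customer sequence; returns (cost, routes) of the best partition."""
    n = len(tour)
    best = [math.inf] * (n + 1)   # best[j]: cost of serving tour[:j]
    pred = [0] * (n + 1)
    best[0] = 0.0
    for i in range(n):                     # a route starts after position i
        load, cost = 0.0, 0.0
        for j in range(i, n):              # route serves tour[i..j]
            load += demand[tour[j]]
            if load > capacity:
                break
            if j == i:                     # depot -> customer -> depot
                cost = dist[depot][tour[j]] + dist[tour[j]][depot]
            else:                          # extend the route by one customer
                cost += (dist[tour[j - 1]][tour[j]]
                         + dist[tour[j]][depot] - dist[tour[j - 1]][depot])
            if best[i] + cost < best[j + 1]:
                best[j + 1], pred[j + 1] = best[i] + cost, i
    routes, j = [], n                      # backtrack the shortest path
    while j > 0:
        routes.append(tour[pred[j]:j])
        j = pred[j]
    return best[n], routes[::-1]

# Illustrative data: depot is node 0.
# dist = [[0, 4, 6, 5], [4, 0, 3, 7], [6, 3, 0, 2], [5, 7, 2, 0]]
# demand = {1: 2, 2: 3, 3: 2}
# cost, routes = split([1, 2, 3], demand, dist, depot=0, capacity=5)
```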
Let $X$ be a compact K\"ahler manifold. We study plurisupported currents on $X$, i.e. closed, positive $(1,1)$-currents which are supported on a pluripolar set. In particular, we are able to present a technical generalization of Witt-Nystr\"om's proof of the BDPP conjecture on projective manifolds, showing that this conjecture holds on $X$ admitting at least one plurisupported current $T$ such that $[T]$ is K\"ahler. One of the steps in our proof is to show an upper bound for the pluripolar mass of certain envelopes of quasi-psh functions when the cohomology class is shifted, a result of independent interest. Using this, we are able to generalize an inequality of McKinnon and Roth to arbitrary pseudoeffective classes on compact K\"ahler manifolds.
Quality data is a fundamental contributor to success in statistics and machine learning. If a statistical assessment or machine learning leads to decisions that create value, data contributors may want a share of that value. This paper presents methods to assess the value of individual data samples, and of sets of samples, to apportion value among different data contributors. We use Shapley values for individual samples and Owen values for combined samples, and show that these values can be computed in polynomial time in spite of their definitions having numbers of terms that are exponential in the number of samples.
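As a purely illustrative companion (not the paper's polynomial-time algorithm), the permutation form defining the data Shapley value can be estimated by Monte Carlo:

```python
# Monte Carlo estimate of data Shapley values: average each sample's marginal
# contribution over random permutations of the training set. This illustrates
# the definition only; the paper exploits learner structure to compute the
# values exactly in polynomial time.
import numpy as np

def data_shapley(n, utility, n_perm=200, rng=None):
    """utility(idx) -> score of the model trained on samples idx (must accept
    an empty index array); n is the number of training samples."""
    rng = np.random.default_rng(rng)
    phi = np.zeros(n)
    for _ in range(n_perm):
        order = rng.permutation(n)
        prev = utility(np.array([], dtype=int))
        for k in range(n):
            cur = utility(order[: k + 1])
            phi[order[k]] += cur - prev   # marginal contribution of order[k]
            prev = cur
    return phi / n_perm

# Example utility (hypothetical data X, y, X_val, y_val):
# def utility(idx):
#     if len(idx) == 0:
#         return 0.0
#     return KNeighborsClassifier(1).fit(X[idx], y[idx]).score(X_val, y_val)
```

The paper's point is that this exponential-looking definition collapses to a polynomial-time computation for suitable learners; the sketch above only shows what quantity is being computed.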
Consider the problem of recovering an unknown signal from undersampled measurements, given the knowledge that the signal has a sparse representation in a specified dictionary $D$. This problem is now understood to be well-posed and efficiently solvable under suitable assumptions on the measurements and dictionary, if the number of measurements scales roughly with the sparsity level. One sufficient condition for such is the $D$-restricted isometry property ($D$-RIP), which asks that the sampling matrix approximately preserve the norm of all signals which are sufficiently sparse in $D$. While many classes of random matrices are known to satisfy such conditions, such matrices are not representative of the structural constraints imposed by practical sensing systems. We close this gap in the theory by demonstrating that one can subsample a fixed orthogonal matrix in such a way that the $D$-RIP will hold, provided this basis is sufficiently incoherent with the sparsifying dictionary $D$. We also extend this analysis to allow for weighted sparse expansions. Consequently, we arrive at compressive sensing recovery guarantees for structured measurements and redundant dictionaries, opening the door to a wide array of practical applications.
We provide the full set of renormalization group functions for the renormalization of QCD in the minimal MOM scheme to four loops for the colour group SU(N_c).
We extend work of the first author concerning relative double commutants and approximate double commutants of unital subalgebras of unital C*-algebras, including metric versions involving distance estimates. We prove metric results for AH subalgebras of von Neumann algebras and for AF subalgebras of primitive C*-algebras. We also prove other general results, including some for nonselfadjoint commutative subalgebras, using C*-algebraic versions of the Stone-Weierstrass and Bishop-Stone-Weierstrass theorems.
A well-balanced detector with high sensitivity and low noise is presented in this paper. A two-stage amplification structure is used to increase the electronic gain while keeping an effective bandwidth of about 70 MHz. To further reduce the electronic noise, a junction field-effect transistor (JFET) is connected between the photodiodes and the transimpedance amplifier to reduce the impact of amplifier leakage current. Benefiting from these designs, the root-mean-square (RMS) noise voltage is about 6 mV at a gain of 3.2E5 V/W, corresponding to an ultra-low noise-equivalent-power density of 2.2E-12 W/rtHz, only half that of common low-noise commercial detectors. In addition, two photodiodes with similar frequency responses are selected for the detector, raising its common-mode rejection ratio (CMRR) to 53 dB, about 13 dB higher than commercial detectors. Further tests indicate a 16.8 dB shot-noise to electronic-noise ratio in our detector, better than most high-speed balanced detectors.
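As a consistency check on the quoted figures (our own back-of-envelope reconstruction, not a calculation given in the text), the noise-equivalent-power density follows from the RMS noise voltage, the gain, and the effective bandwidth:
$$\mathrm{NEP}=\frac{V^{\mathrm{RMS}}_{\mathrm{noise}}}{G\sqrt{\Delta f}}=\frac{6\ \mathrm{mV}}{(3.2\times 10^{5}\ \mathrm{V/W})\sqrt{70\ \mathrm{MHz}}}\approx 2.2\times 10^{-12}\ \mathrm{W}/\sqrt{\mathrm{Hz}},$$
in agreement with the value quoted above.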
Anomaly detection is challenging, especially for large datasets in high dimensions. Here we explore a general anomaly detection framework based on dimensionality reduction and unsupervised clustering. We release DRAMA, a general python package that implements the general framework with a wide range of built-in options. We test DRAMA on a wide variety of simulated and real datasets, in up to 3000 dimensions, and find it robust and highly competitive with commonly-used anomaly detection algorithms, especially in high dimensions. The flexibility of the DRAMA framework allows for significant optimization once some examples of anomalies are available, making it ideal for online anomaly detection, active learning and highly unbalanced datasets.
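A minimal, hedged instance of this general framework (PCA for reduction, k-means for clustering, distance-to-prototype as the anomaly score); DRAMA's actual interface and built-in option set differ:

```python
# Reduce dimensionality, cluster the reduced data, and score anomalies by
# distance to the nearest cluster centre. Component/cluster counts are
# placeholder choices.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def anomaly_scores(X, n_components=10, n_clusters=5, seed=0):
    Z = PCA(n_components=n_components, random_state=seed).fit_transform(X)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(Z)
    # Distance of each point to its closest prototype = raw anomaly score.
    d = np.min(km.transform(Z), axis=1)
    return (d - d.min()) / (d.max() - d.min() + 1e-12)  # rescaled to [0, 1]
```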
We present an extension of the standard model of particle physics in its almost-commutative formulation. This extension is guided by the minimal approach to almost-commutative geometries employed in [13], although the model presented here is not minimal itself. The corresponding almost-commutative geometry leads to a Yang-Mills-Higgs model which consists of the standard model and two new fermions of opposite electromagnetic charge which may possess a new colour-like gauge group. As a new phenomenon, grand unification is no longer required by the spectral action.
$O(\alpha_s)$ QCD corrections to the inclusive $B \to X_s e^+ e^-$ decay are investigated within the two-Higgs-doublet extension of the standard model (2HDM). The analysis is performed in the so-called off-resonance region; the dependence of the obtained results on the choice of the renormalization scale is examined in detail. It is shown that $O(\alpha_s)$ corrections can suppress the $B \to X_s e^+ e^-$ decay width by a factor of $1.5 \div 3$ (depending on the choice of the dilepton invariant mass $s$ and the low-energy scale $\mu$). As a result, in the experimentally allowed range of the parameter space, the relations between the $B \to X_s e^+ e^-$ branching ratio and the new-physics parameters are strongly affected. It is also found that although the renormalization scale dependence of the $B \to X_s e^+ e^-$ branching ratio is significantly reduced, higher-order effects in perturbation theory can still be non-negligible.
Metasurfaces have drawn significant attention due to their superior capability in tailoring electromagnetic waves over a wide frequency range, from microwave to visible light. Recently, programmable metasurfaces have demonstrated the ability to manipulate the amplitude or phase of electromagnetic waves in a programmable manner in real time, which renders them especially appealing in wireless communication applications. To practically demonstrate the feasibility of programmable metasurfaces in future communication systems, in this paper we design and realize a novel metasurface-based wireless communication system. By exploiting the dynamically controllable property of programmable metasurfaces, we first introduce the fundamental principle of metasurface-based wireless communication system design. We then present the design, implementation, and experimental evaluation of the proposed metasurface-based wireless communication system with a prototype, which realizes single-carrier quadrature phase shift keying (QPSK) transmission over the air. In the developed prototype, the phase of the electromagnetic wave reflected by the programmable metasurface is directly manipulated in real time according to the baseband control signal, achieving a 2.048 Mbps data transfer rate with video streaming transmission over the air. Experimental results are provided to compare the performance of the proposed metasurface-based architecture against the conventional one. With a slight increase of the transmit power by 5 dB, the same bit error rate (BER) performance as the conventional system can be achieved in the absence of channel coding. Such a result is encouraging considering that the metasurface-based system has the advantages of low hardware cost and simple structure, thus leading to a promising new architecture for wireless communications.
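To illustrate the modulation scheme, here is a toy sketch of a QPSK bit-to-phase mapping; the Gray mapping and phase set below are conventional choices assumed for illustration, not taken from the paper's hardware description:

```python
# Each pair of bits selects one of four reflection phases programmed on the
# metasurface; the baseband controller would hold each phase for one symbol
# period. Mapping and phase values are illustrative.
import numpy as np

GRAY_QPSK = {(0, 0): 45, (0, 1): 135, (1, 1): 225, (1, 0): 315}  # degrees

def bits_to_phases(bits):
    """bits: flat iterable of even length -> array of phase states (deg)."""
    pairs = np.asarray(bits).reshape(-1, 2)
    return np.array([GRAY_QPSK[tuple(p)] for p in pairs])

phases = bits_to_phases([0, 1, 1, 1, 1, 0, 0, 0])
# At 2 bits per QPSK symbol, the quoted 2.048 Mbps corresponds to 1.024 Msym/s.
```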
The multichannel Kondo model supports effective anyons on the partially screened impurity, as suggested by its fractional impurity entropy. It was recently demonstrated for the multi-impurity chiral Kondo model that scattering of an electron through the impurities depends on the anyons' total fusion channel. Here we study the correlation between impurity spins. We argue, based on a combination of conformal field theory, a perturbative limit with a large number of channels $k$, and the exactly solvable two-channel case, that the inter-impurity spin correlation probes the anyon fusion of the pair of correlated impurities. This may allow, using measurement-only topological quantum computing protocols, braiding of the multichannel Kondo anyons via consecutive measurements.
Deep generative models (DGMs) and their conditional counterparts provide a powerful ability for general-purpose generative modeling of data distributions. However, it remains challenging for existing methods to address advanced conditional generative problems without annotations, which can enable multiple applications like image-to-image translation and image editing. We present a unified Bayesian framework for such problems, which introduces an inference stage on latent variables within the learning process. In particular, we propose a variational Bayesian image translation network (VBITN) that enables multiple image translation and editing tasks. Comprehensive experiments show the effectiveness of our method on unsupervised image-to-image translation, and demonstrate the novel advanced capabilities for semantic editing and mixed domain translation.
The Fock-Krylov formalism for the calculation of survival probabilities of unstable states is revisited, paying particular attention to the mathematical constraints on the density of states, the Fourier transform of which gives the survival amplitude. We show that it is not possible to construct a density of states corresponding to a purely exponential survival amplitude. The survival probability $P(t)$ and the autocorrelation function of the density of states are shown to form a pair of cosine Fourier transforms. This result is a particular case of the Wiener-Khinchin theorem and forces $P(t)$ to be an even function of time, which in turn forces the density of states to contain a form factor which vanishes at large energies. Subtle features of the transition regions from the non-exponential to the exponential decay at small times and from the exponential to the power-law decay at large times are discussed by expressing $P(t)$ as a function of the number of oscillations, $n$, performed by it. The transition at short times is shown to occur when the survival probability has completed one oscillation. The number of oscillations depends on the properties of the resonant state, and a complete description of the evolution of the unstable state is provided by determining the limits on the number of oscillations in each region.
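In symbols, with $\hbar = 1$ and $\rho(E)$ the density of states (a reconstruction of the standard Fock-Krylov relations referenced above, not notation taken from the paper):
$$A(t)=\int \rho(E)\,e^{-iEt}\,dE,\qquad P(t)=|A(t)|^{2}=\int C(\omega)\cos(\omega t)\,d\omega,\qquad C(\omega)=\int \rho(E)\,\rho(E+\omega)\,dE .$$
Since the autocorrelation $C(\omega)$ is even, $P(t)$ is necessarily an even function of $t$, which is the Wiener-Khinchin structure invoked above.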
We describe further results of a program aimed to yield ~10^4 fully characterized optical identifications of ROSAT X-ray sources. Our program employs X-ray data from the ROSAT All-Sky Survey (RASS), and both optical imaging and spectroscopic data from the Sloan Digital Sky Survey (SDSS). RASS/SDSS data from 5740 deg^2 of sky spectroscopically covered in SDSS Data Release 5 (DR5) provide an expanded catalog of 7000 confirmed quasars and other AGN that are probable RASS identifications. Again in our expanded catalog, the identifications as X-ray sources are statistically secure, with only a few percent of the SDSS AGN likely to be randomly superposed on unrelated RASS X-ray sources. Most identifications continue to be quasars and Seyfert 1s with 15<m<21 and 0.01<z<4; but the total sample size has grown to include very substantial numbers of even quite rare AGN, e.g., now including several hundreds of candidate X-ray emitting BL Lacs and narrow-line Seyfert 1 galaxies. In addition to exploring rare subpopulations, such a large total sample may be useful when considering correlations between the X-ray and the optical, and may also serve as a resource list from which to select the "best" object (e.g., X-ray brightest AGN of a certain subclass, at a preferred redshift or luminosity) for follow-on X-ray spectral or alternate detailed studies.
We consider the solutions of the field equations for the large $N$ dilaton gravity model in $1+1$ dimensions recently proposed by Callan, Giddings, Harvey and Strominger (CGHS). We find time-dependent solutions with finite mass and vanishing flux in the weak coupling regime, as well as solutions which lie entirely in the Liouville region.
We revisit the notion of individual fairness proposed by Dwork et al. A central challenge in operationalizing their approach is the difficulty of eliciting a human specification of a similarity metric. In this paper, we propose an operationalization of individual fairness that does not rely on a human specification of a distance metric. Instead, we propose novel approaches to elicit and leverage side-information on equally deserving individuals to counter subordination between social groups. We model this knowledge as a fairness graph and learn a unified Pairwise Fair Representation (PFR) of the data that captures both data-driven similarity between individuals and the pairwise side-information in the fairness graph. We elicit fairness judgments from a variety of sources, including human judgments for two real-world datasets on recidivism prediction (COMPAS) and violent neighborhood prediction (Crime & Communities). Our experiments show that the PFR model for operationalizing individual fairness is practically viable.
We report updates to an ongoing lattice-QCD calculation of the form factors for the semileptonic decays $B \to \pi \ell \nu$, $B_s \to K \ell \nu$, $B \to \pi \ell^+ \ell^-$, and $B \to K \ell^+ \ell^-$. The tree-level decays $B_{(s)} \to \pi (K) \ell \nu$ enable precise determinations of the CKM matrix element $|V_{ub}|$, while the flavor-changing neutral-current interactions $B \to \pi (K) \ell^+ \ell^-$ are sensitive to contributions from new physics. This work uses MILC's (2+1+1)-flavor HISQ ensembles at approximate lattice spacings between $0.057$ and $0.15$ fm, with physical sea-quark masses on four out of the seven ensembles. The valence sector comprises a clover $b$ quark (in the Fermilab interpretation) and HISQ light and $s$ quarks. We present preliminary results for the form factors $f_0$, $f_+$, and $f_T$, including studies of systematic errors.
Relaxation rates in the $13m\,LiNO_3-6.5m\,Ca(NO_3)_2-H_2O$ ternary system have been measured for nuclei of water ($^1H$ and $^{17}O$), the anion ($^{14}N$), and both cations ($^7Li$, $^{43}Ca$). The data analysis reveals a system structure consisting of two main charged units: [Li(H$_2$O)$_4$]$^+$ and [Ca(NO$_3$)$_4$]$^{2-}$. Thus the system exhibits an inorganic-ionic-liquid-like structure.
This paper describes the debugging facilities provided by the SLAM system. The SLAM system includes: i) a specification language that integrates algebraic specifications and model-based specifications using the object-oriented model, where class operations are defined by rules, each with logical pre- and postconditions but with a functional flavour; ii) a development environment that, among other features, is able to generate readable code in a high-level object-oriented language; iii) generated code that includes (part of) the pre- and postconditions as assertions, which can be automatically checked during debug-mode execution of programs. We focus on this last aspect. The SLAM language is expressive enough to describe many useful properties, and these properties are translated into a Prolog program that is linked (via an adequate interface) with the user program. The debugging execution of the program interacts with the Prolog engine, which is responsible for checking the properties.
We investigate determinacy of delay games with Borel winning conditions, infinite-duration two-player games in which one player may delay her moves to obtain a lookahead on her opponent's moves. First, we prove determinacy of such games with respect to a fixed evolution of the lookahead. However, strategies in such games may depend on information about the evolution. Thus, we introduce different notions of universal strategies for both players, which are evolution-independent, and determine the exact amount of information a universal strategy needs about the history of a play and the evolution of the lookahead to be winning. In particular, we show that delay games with Borel winning conditions are determined with respect to universal strategies. Finally, we consider decidability problems, e.g., "Does a player have a universal winning strategy for delay games with a given winning condition?", for omega-regular and omega-context-free winning conditions.
The origin of stellar-mass black hole mergers discovered through gravitational waves is being widely debated. Mergers in the disks of active galactic nuclei (AGN) represent a promising source of origin, with possible observational clues in the gravitational wave data. Beyond gravitational waves, a unique signature of AGN-assisted mergers is electromagnetic emission from the accreting black holes. Here we show that jets launched by accreting black holes merging in an AGN disk can be detected as peculiar transients by infrared, optical, and X-ray observatories. We further show that this emission mechanism can explain the possible associations between gravitational-wave events and the optical transient ZTF19abanrhr and the proposed gamma-ray counterparts GW150914-GBM and LVT151012-GBM. We demonstrate how these associations, if genuine, can be used to reconstruct the properties of these events' environments. Searching for infrared and X-ray counterparts to similar electromagnetic transients in the future, once host galaxies are localized by optical observations, could provide a smoking gun signature of the mergers' AGN origin.
Many people are aware of the theory of elastic fracture originated by A. A. Griffith, and although Griffith used the theorem of minimum potential energy, most people seem unaware of the broader implications of this theorem. If it is set within its classical-mechanics roots, it is clear that it is a restricted form of a Lagrangian. In advanced texts on fracture, cracks are treated as dynamic entities, and the role of stress waves is clearly articulated. However, in most non-advanced texts on fracture and fatigue the role of stress waves is either not included or not emphasised, often leading to a possible misunderstanding of the fundamentals of fracture. What is done here is to extend Griffith's approach by setting it within the concept of Stationary Action and introducing a quasi-static stress-wave unloading model, which connects the energy-release mechanism with the stress field. This leads to a definition of a dynamic stress intensity factor, and the model is then applied to fatigue of perfectly elastic and elastic-plastic materials so as to include crack-tip plasticity. The results for the Griffith crack and the dynamic case are retrodictions, establishing the validity of the methods used. The extension to fatigue gives significant new results, which show that for elastic-plastic materials the influence of the maximum stress in the cycle as a fraction of the yield stress, called the yield stress ratio, has not previously been recognised. The new form of the fatigue crack growth relationship derived here answers many of the long-standing questions about the Paris Law.
A novel channel representation for a two-hop decentralized wireless relay network (DWRN) is proposed, where the relays operate in a completely distributive fashion. The modeling paradigm applies an approach analogous to the description method for a double-directional multipath propagation channel, and takes into account the finite system spatial resolution and the extended relay listening/transmitting time. Specifically, the double-directional information azimuth spectrum (IAS) is formulated to provide a compact representation of information flows in a DWRN. The proposed channel representation is then analyzed from a geometrically-based statistical modeling perspective. Finally, we look into the problem of relay network tomography (RNT), which solves an inverse problem to infer the internal structure of a DWRN by using the instantaneous double-directional IAS recorded at multiple measuring nodes exterior to the relay region.
We present new techniques for inertial-sensing atom interferometers which produce multiple phase measurements per experimental cycle. With these techniques, we realize two types of multiport measurements, namely quadrature phase detection and real-time systematic phase cancellation, which address challenges in operating high-sensitivity cold-atom sensors in mobile and field applications. We confirm experimentally the increase in sensitivity due to quadrature phase detection in the presence of large phase uncertainty, and demonstrate suppression of systematic phases on a single shot basis.
We say that a nonselfadjoint operator algebra is partly free if it contains a free semigroup algebra. Motivation for such algebras occurs in the setting of what we call free semigroupoid algebras. These are the weak operator topology closed algebras generated by the left regular representations of semigroupoids associated with finite or countable directed graphs. We expand our analysis of partly free algebras from previous work and obtain a graph-theoretic characterization of when a free semigroupoid algebra with countable graph is partly free. This analysis carries over to norm closed quiver algebras. We also discuss new examples for the countable graph case.
We introduce an approximation scheme to perform an analytic study of the oscillation phenomena in a pedagogical and comprehensive way. By using Gaussian wave packets, we show that the oscillation is bounded by a time-dependent vanishing function which characterizes the slippage between the mass-eigenstate wave packets. We also demonstrate that the wave packet spreading represents a secondary effect which plays a significant role only in the non-relativistic limit. In our analysis, we note the presence of a new time-dependent phase and calculate how this additional term modifies the oscillating character of the flavor conversion formula. Finally, by considering Box and Sine wave packets we study how the choice of different functions to describe the particle localization changes the oscillation probability.
It is demonstrated that emission of collinear photons by the polarized initial electron in elastic electron-proton polarization-transfer scattering leads to an apparent shift of real events with small momentum transfer into the data sample with large momentum transfer. Effectively, this produces a fictitious enhancement of the cross section at large momentum transfer. However, the enhancement differs between the transverse and longitudinal polarizations of the recoil proton. This difference is responsible for a distortion of the results when extracting the ratio of the proton electromagnetic form factors from data on electron-proton polarization-transfer scattering. Nevertheless, this effect does not completely explain the suppression of the Dirac form factor at large momentum transfer.
Recent experiments have demonstrated that the dynein motor exhibits catch bonding behaviour, in which the unbinding rate of a single dynein decreases with increasing force over a certain force range. Motivated by these experiments, we propose a model for catch bonding in dynein using a threshold force bond deformation (TFBD) model, wherein catch bonding sets in beyond a critical applied load force. We study the effect of catch bonding on the unidirectional transport properties of cellular cargo carried by multiple dynein motors within the framework of this model. We find that catch bonding can result in dramatic changes in the transport properties, in sharp contrast to kinesin-driven unidirectional transport, where catch bonding is absent. We predict that, under certain conditions, the average velocity of the cellular cargo can actually increase as the applied load is increased. We characterize the transport properties in terms of a velocity profile phase plot in the parameter space of the catch bond strength and the stall force of the motor. This phase plot yields predictions that may be experimentally accessed by suitable modifications of motor transport and binding properties. Our work necessitates a reexamination of existing theories of collective bidirectional transport of cellular cargo, where the catch bond effect of dynein described in this paper is expected to play a crucial role.
We study the non-equilibrium pattern formation that emerges when magnetically repelling colloids, trapped by optical tweezers, are abruptly released, forming colloidal explosions. For multiple colloids in a single trap we observe a pattern of expanding concentric rings. For colloids individually trapped in a line, we observe explosions with a zigzag pattern that persists even when magnetic interactions are much weaker than those that break the linear symmetry in equilibrium. Theory and computer simulations quantitatively describe these phenomena both in and out of equilibrium. An analysis of the mode spectrum allows us to accurately quantify the non-harmonic nature of the optical traps. Colloidal explosions provide a new way to generate well-characterized non-equilibrium behaviour in colloidal systems.
In this paper we study a type of stochastic McKean-Vlasov equation with non-Lipschitz coefficients. First, the existence of weak solutions is proved via an Euler-Maruyama approximation. Then we establish the pathwise uniqueness of the weak solutions. Finally, it is shown that the Euler-Maruyama approximation has an optimal strong convergence rate.
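For intuition, here is a minimal particle-system Euler-Maruyama sketch for a McKean-Vlasov equation, where the law of the solution is replaced by the empirical measure of $N$ particles; the specific drift and diffusion below are illustrative choices, not the non-Lipschitz class treated in the paper:

```python
# Particle-system Euler-Maruyama for
#   dX_t = b(X_t, mu_t) dt + sigma(X_t, mu_t) dW_t,
# with mu_t approximated by the empirical measure of N interacting particles.
import numpy as np

def euler_maruyama_mkv(n_particles=1000, n_steps=500, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)       # initial condition
    for _ in range(n_steps):
        mean_field = x.mean()                  # empirical-measure statistic
        drift = -(x - mean_field)              # attraction toward the mean
        noise = rng.standard_normal(n_particles)
        x = x + drift * dt + 0.5 * np.sqrt(dt) * noise  # constant sigma = 0.5
    return x
```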
Scanning fluorescence correlation spectroscopy (SFCS) with a scan path perpendicular to the membrane plane was introduced to measure diffusion and interactions of fluorescent components in free-standing biomembranes. Using a confocal laser scanning microscope (CLSM), the open detection volume is moved laterally with kHz frequency through the membrane, and the photon events are continuously recorded and stored in a file. While the accessory hardware requirements for a conventional CLSM are minimal, data evaluation can pose a bottleneck. The photon events must be assigned to each scan; the maximum signal intensities have to be detected, binned, and aligned between scans in order to derive the membrane-related intensity fluctuations of one spot. Finally, this time-dependent signal must be correlated and evaluated with well-known FCS model functions. Here we provide two platform-independent, open-source software tools (PyScanFCS and PyCorrFit) that allow one to perform all of these steps and to establish perpendicular SFCS in its one- or two-focus as well as its single- or dual-colour modalities.
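A simplified sketch of the evaluation steps described above (folding the photon trace into scans, aligning on the membrane position, and correlating); the bin widths and alignment window are illustrative, and PyScanFCS/PyCorrFit implement the full procedure:

```python
# Fold the binned photon trace into individual scans, sum a window around the
# average membrane position to get one intensity per scan, then compute the
# normalized autocorrelation of that trace.
import numpy as np

def sfcs_trace(counts, bins_per_scan):
    """counts: photon counts in fixed time bins -> one intensity per scan."""
    scans = counts[: len(counts) // bins_per_scan * bins_per_scan]
    scans = scans.reshape(-1, bins_per_scan)
    center = np.argmax(scans.sum(axis=0))            # average membrane position
    window = slice(max(center - 5, 0), center + 6)   # +-5 bins around membrane
    return scans[:, window].sum(axis=1)

def autocorrelate(trace):
    """Normalized autocorrelation G(tau) of the per-scan intensity trace."""
    f = trace - trace.mean()
    acf = np.correlate(f, f, mode="full")[len(f) - 1:]
    return acf / (f.var() * np.arange(len(f), 0, -1))  # unbiased, normalized
```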
We have mapped the dense gas distribution and dynamics in the NW region of the Serpens molecular cloud in the CS(2-1) and N2H+(1-0) lines and 3 mm continuum using the FCRAO telescope and BIMA interferometer. Seven continuum sources are found. The N2H+ spectra are optically thin, and fits to the 7 hyperfine components are used to determine the distribution of velocity dispersion. Eight cores, two with continuum sources and six without, lie at local linewidth minima and optical depth maxima. The CS spectra are optically thick and generally self-absorbed over the full 0.2 pc extent of the map. We use the line wings to trace outflows around at least 3, and possibly 4, of the continuum sources, and the asymmetry in the self-absorption as a diagnostic of relative motions between core centers and envelopes. The quiescent regions with low N2H+ linewidth tend to have more asymmetric CS spectra than the spectra around the continuum sources, indicating higher infall speeds. These regions have typical sizes ~5000 AU, linewidths ~0.5 km/s, and infall speeds ~0.05 km/s. The correlation of CS asymmetry with N2H+ velocity dispersion suggests that the inward flows of material that build up pre-protostellar cores are driven at least partly by a pressure gradient rather than by gravity alone. We discuss a scenario for core formation and eventual star-forming collapse through the dissipation of turbulence.
We report a high-pressure single-crystal study of the non-centrosymmetric superconductor YPtBi ($T_c = 0.77$ K). Magnetotransport measurements show a weak metallic behavior with a carrier concentration $n \simeq 2.2 \times 10^{19}$ cm$^{-3}$. Resistivity measurements up to $p = 2.51$ GPa reveal that superconductivity is promoted by pressure. The reduced upper critical field $B_{c2}(T)$ curves collapse onto a single curve, with values that exceed the model values for spin-singlet superconductivity. The $B_{c2}$ data point to an odd-parity component in the superconducting order parameter, in accordance with predictions for non-centrosymmetric superconductors.
Modern abstractive summarization models often generate summaries that contain hallucinated or contradictory information. In this paper, we propose a simple but effective contrastive learning framework that incorporates recent developments in reward learning and factuality metrics. Empirical studies demonstrate that the proposed framework enables summarization models to learn from feedback of factuality metrics using contrastive reward learning, leading to more factual summaries by human evaluations. This suggests that further advances in learning and evaluation algorithms can feed directly into providing more factual summaries.
We consider the nature of the fluid-solid phase transition in a polydisperse mixture of hard spheres. For a sufficiently polydisperse mixture crystallisation occurs with simultaneous fractionation. At the fluid-solid boundary, a broad fluid diameter distribution is split into a number of narrower fractions, each of which then crystallises. The number of crystalline phases increases with the overall level of polydispersity. At high densities, freezing is followed by a sequence of demixing transitions in the polydisperse crystal.
In this work we show that the classification performance on high-dimensional structural MRI data with only a small set of training examples is improved by the use of dimension reduction methods. We assessed two different dimension reduction variants: feature selection by ANOVA F-test and feature transformation by PCA. On the reduced datasets, we applied common learning algorithms using 5-fold cross-validation. Training, tuning of the hyperparameters, and performance evaluation of the classifiers were conducted using two different performance measures: accuracy and the area under the receiver operating characteristic curve (AUC). Our hypothesis is supported by the experimental results.
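A hedged sketch of the described protocol using scikit-learn; the component counts and the linear-SVM classifier are placeholder choices, and nesting the reduction step inside the pipeline keeps it inside each cross-validation fold, avoiding information leakage:

```python
# ANOVA F-test feature selection (or PCA) inside 5-fold cross-validation,
# scored by accuracy and AUC (binary classification assumed).
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate

def evaluate(X, y, reducer="anova", k=500):
    if reducer == "anova":
        step = ("reduce", SelectKBest(f_classif, k=k))   # k: placeholder
    else:
        step = ("reduce", PCA(n_components=0.95))        # keep 95% variance
    pipe = Pipeline([step, ("clf", SVC(kernel="linear", probability=True))])
    scores = cross_validate(pipe, X, y, cv=5, scoring=("accuracy", "roc_auc"))
    return scores["test_accuracy"].mean(), scores["test_roc_auc"].mean()
```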
Deep deraining networks consistently encounter substantial generalization issues when deployed in real-world applications, although they are successful on laboratory benchmarks. A prevailing perspective in deep learning encourages using highly complex data for training, with the expectation that richer image background content will facilitate overcoming the generalization problem. However, through comprehensive and systematic experimentation, we discover that this strategy does not enhance the generalization capability of these networks. On the contrary, it exacerbates the tendency of networks to overfit specific degradations. Our experiments reveal that better generalization in a deraining network can be achieved by simplifying the complexity of the training background images. This is because the networks are ``slacking off'' during training, that is, learning the least complex elements in the image background and degradation to minimize training loss. When the background images are less complex than the rain streaks, the network prioritizes background reconstruction, thereby suppressing overfitting of the rain patterns and leading to improved generalization performance. Our research offers a valuable perspective and methodology for better understanding the generalization problem in low-level vision tasks and displays promising potential for practical application.
We present preliminary results from an experimental study of slow light in anti-relaxation-coated Rb vapor cells, and describe the construction and testing of such cells. The slow ground state decoherence rate allowed by coated cell walls leads to a dual-structured electromagnetically induced transparency (EIT) spectrum with a very narrow (<100 Hz) transparency peak on top of a broad pedestal. Such dual-structure EIT permits optical probe pulses to propagate with greatly reduced group velocity on two time scales. We discuss ongoing efforts to optimize the pulse delay in such coated cell systems.
In this paper we analyze IUE high-resolution spectra of the central star (BD+602522) of the Bubble Nebula. We discuss velocities of the different regions along the line of sight to the bubble. We find that the Bubble Nebula is younger (by a factor of 100) than the exciting star, suggesting that either the bubble is expanding into an inhomogeneous interstellar medium or that the mechanics of the stellar wind are not fully understood.
Let $H$ be a pointed Hopf algebra over an algebraically closed field of characteristic zero. If $H$ is a domain with finite Gelfand-Kirillov dimension greater than or equal to two, then $H$ contains a Hopf subalgebra of Gelfand-Kirillov dimension two.
The article analyzes the contribution of stochastic thermal fluctuations to the attachment times of the immature T-cell receptor (TCR):peptide-major-histocompatibility-complex (pMHC) immunological synapse bond. The key question addressed here is the following: how does a synapse bond remain stabilized in the presence of high-frequency thermal noise that potentially equates to a strong detaching force? Focusing on the average time persistence of an immature synapse, we show that the high-frequency modes accompanying large fluctuations are counterbalanced by low-frequency modes that evolve over longer time periods. Our analysis shows that such behavior can be explained by the fact that the survival probability distribution is governed by two distinct phases in the two different time regimes. The relatively shorter time scales correspond to the cohesion:adhesion induced immature bond formation, whereas the longer times reciprocate the association:dissociation regime leading to TCR:pMHC signaling. From an estimation of the bond survival probability, we show that at shorter time scales this probability $P_{\Delta}(\tau)$ scales with time $\tau$ as a universal function of a rescaled noise amplitude $\frac{D}{\Delta^2}$, such that $P_{\Delta}(\tau)\sim \tau^{-(\frac{\Delta}{\sqrt{D}}+\frac{1}{2})}$, $\Delta$ being the distance from the mean inter-membrane (T cell:antigen-presenting cell) separation distance. The crossover from this shorter to a longer time regime leads to a universality in the dynamics, at which point the survival probability shows a different power-law scaling compared to the one at shorter time scales. In biological terms, such a crossover indicates that the TCR:pMHC bond has a survival probability with a slower decay rate than the longer LFA-1:ICAM-1 bond, justifying its stability.
We introduce Parametric Linear Dynamic Logic (PLDL), which extends Linear Dynamic Logic (LDL) by temporal operators equipped with parameters that bound their scope. LDL was proposed as an extension of Linear Temporal Logic (LTL) that is able to express all $\omega$-regular specifications while still maintaining many of LTL's desirable properties like an intuitive syntax and a translation into non-deterministic B\"uchi automata of exponential size. But LDL lacks capabilities to express timing constraints. By adding parameterized operators to LDL, we obtain a logic that is able to express all $\omega$-regular properties and that subsumes parameterized extensions of LTL like Parametric LTL and PROMPT-LTL. Our main technical contribution is a translation of PLDL formulas into non-deterministic B\"uchi word automata of exponential size via alternating automata. This yields a PSPACE model checking algorithm and a realizability algorithm with doubly-exponential running time. Furthermore, we give tight upper and lower bounds on optimal parameter values for both problems. These results show that PLDL model checking and realizability are not harder than LTL model checking and realizability.
The starburst / AGN galaxy M82 was studied by Dahlem, Weaver and Heckman using X-ray data from ROSAT and ASCA, as part of their X-ray survey of edge-on starburst galaxies. They found seventeen unresolved hard-X-ray sources around M82, in addition to its strong nuclear source and other X-ray emission within the main body of M82. We have measured optical point sources at these positions, and have obtained redshifts of six candidates at the Keck I 10-m telescope, using the low-resolution imaging spectrograph (LRIS). All six are highly compact optical and X-ray objects with redshifts ranging from 0.111 to 1.086. They all show emission lines. The three with the highest redshifts are clearly QSOs. The others with lower redshifts may either be QSOs or compact emission-line galaxies. In addition to these six there are nine QSOs lying very close to M82 which were discovered many years ago. There is no difference between the optical spectra of these latter QSOs, only two of which are known to be X-ray sources, and the X-ray emitting QSOs. The redshifts of all fifteen range between 0.111 and 2.05. The large number of QSOs and their apparent association with ejected matter from M82 suggest that they are physically associated with the galaxy, and have large intrinsic redshift components. If this is correct, the absolute magnitudes lie in the range -8 < M_v < -10. Also we speculate that the luminous variable X-ray source which has been detected by Chandra in the main body of M82 some 9 arcseconds from the center is another QSO in the process of ejection from the nucleus, and propose some observational tests of this hypothesis.
Uncertainty relations are a distinctive characteristic of quantum theory that impose intrinsic limitations on the precision with which physical properties can be simultaneously determined. The modern work on uncertainty relations employs \emph{entropic measures} to quantify the lack of knowledge associated with measuring non-commuting observables. However, there is no fundamental reason for using entropies as quantifiers; any functional relation that characterizes the uncertainty of the measurement outcomes defines an uncertainty relation. Starting from a very reasonable assumption of invariance under mere relabelling of the measurement outcomes, we show that Schur-concave functions are the most general uncertainty quantifiers. We then discover a fine-grained uncertainty relation that is given in terms of the majorization order between two probability vectors, significantly extending a majorization-based uncertainty relation first introduced in [M. H. Partovi, Phys. Rev. A \textbf{84}, 052117 (2011)]. Such a vector-type uncertainty relation generates an infinite family of distinct scalar uncertainty relations via the application of arbitrary uncertainty quantifiers. Our relation is therefore universal and captures the essence of uncertainty in quantum theory.
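As a concrete illustration of the majorization order and of how Schur-concave quantifiers turn the single vector-type relation into scalar ones, the following toy sketch (our own code; function names are ours) checks majorization between two probability vectors and verifies the order reversal for the Shannon entropy:

```python
import numpy as np

def majorizes(p, q, tol=1e-12):
    """Return True if probability vector p majorizes q: all partial sums
    of the decreasingly sorted entries of p dominate those of q."""
    p_sorted = np.sort(p)[::-1]
    q_sorted = np.sort(q)[::-1]
    return np.all(np.cumsum(p_sorted) >= np.cumsum(q_sorted) - tol)

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Any Schur-concave f (e.g. the Shannon entropy) reverses the order:
# if p majorizes q then f(p) <= f(q), so every choice of f yields a
# scalar uncertainty relation from the vector-type relation.
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
assert majorizes(p, q)
assert shannon_entropy(p) <= shannon_entropy(q)
```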
We study the \eta' N interaction within a chiral unitary approach which includes \pi N, \eta N and related pseudoscalar meson-baryon coupled channels. Since the SU(3) singlet does not contribute to the standard interaction and the \eta' is mostly a singlet, the resulting scattering amplitude is very small and inconsistent with experimental estimates of the \eta' N scattering length. The additional consideration of vector meson-baryon states in the coupled channel scheme, via normal and anomalous couplings of pseudoscalar to vector mesons, substantially enhances the \eta' N amplitude. We also exploit the freedom of adding to the Lagrangian a new term, allowed by the symmetries of QCD, which couples baryons to the singlet meson of SU(3). Adjusting the unknown strength to the \eta' N scattering length, we obtain predictions for the elastic \eta' N --> \eta' N and inelastic \eta' N --> \eta N, \pi N, K\Lambda, K\Sigma\ cross sections at low \eta' energies, and discuss their significance.
We establish a geometric quantization formula for a Hamiltonian action of a compact Lie group acting on a noncompact symplectic manifold with proper moment map.
Quick UDP Internet Connections (QUIC) is a recently proposed transport protocol, currently being standardized by the Internet Engineering Task Force (IETF). It aims at overcoming some of the shortcomings of TCP, while maintaining the logic related to flow and congestion control, retransmissions and acknowledgments. It supports multiplexing of multiple application layer streams in the same connection, a more refined selective acknowledgment scheme, and low-latency connection establishment. It also integrates cryptographic functionalities in the protocol design. Moreover, QUIC is deployed at the application layer, and encapsulates its packets in UDP datagrams. Given the widespread interest in the new QUIC features, we believe that it is important to provide the networking community with an implementation in a controllable and isolated environment, i.e., a network simulator such as ns-3, in which it is possible to test QUIC's performance and understand design choices and possible limitations. Therefore, in this paper we present a native implementation of QUIC for ns-3, describing the features we implemented, the main assumptions and differences with respect to the QUIC Internet Drafts, and a set of examples.
We determine the limiting empirical singular value distribution for random unitary matrices with Haar distribution and discrete Fourier transform (DFT) matrices when a random set of columns and rows is removed.
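The limiting singular value distribution can be explored empirically in a few lines (an illustration of the setup, not of the paper's derivation):

```python
import numpy as np
from scipy.stats import unitary_group

# Draw a Haar-distributed unitary, delete a random set of rows and
# columns, and inspect the singular values of the remaining block.
rng = np.random.default_rng(0)
n, keep = 1000, 500            # matrix size and number of kept rows/cols

U = unitary_group.rvs(n, random_state=0)
rows = rng.choice(n, size=keep, replace=False)
cols = rng.choice(n, size=keep, replace=False)
block = U[np.ix_(rows, cols)]

svals = np.linalg.svd(block, compute_uv=False)
hist, edges = np.histogram(svals, bins=40, range=(0, 1), density=True)
# 'hist' approximates the limiting empirical singular value density.
```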
We discuss how the geometric theory of differential equations can be used for the numerical integration and visualisation of implicit ordinary differential equations, in particular around singularities of the equation. The Vessiot theory automatically transforms an implicit differential equation into a vector field distribution on a manifold and thus reduces its analysis to standard problems in dynamical systems theory like the integration of a vector field and the determination of invariant manifolds. For the visualisation of low-dimensional situations we adapt the streamlines algorithm of Jobard and Lefer to 2.5 and 3 dimensions. A concrete implementation in Matlab is discussed and some concrete examples are presented.
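The core of the approach can be sketched for a scalar implicit ODE $F(x,u,p)=0$ with $p=u'$, using the standard formula for the associated vector field (a Python toy version of ours; the paper's actual implementation is in Matlab and treats the general case):

```python
import numpy as np

# The Vessiot construction replaces F(x, u, p) = 0, p = u', by the field
#     V = F_p d/dx + p F_p d/du - (F_x + p F_u) d/dp,
# which is tangent to the surface F = 0 and remains well defined where
# F_p = 0, i.e. at singularities of the equation.

def vessiot_field(F_x, F_u, F_p):
    def V(s):
        x, u, p = s
        fp = F_p(x, u, p)
        return np.array([fp, p * fp, -(F_x(x, u, p) + p * F_u(x, u, p))])
    return V

def rk4(V, s0, h, steps):
    traj = [np.asarray(s0, dtype=float)]
    for _ in range(steps):
        s = traj[-1]
        k1 = V(s); k2 = V(s + 0.5 * h * k1)
        k3 = V(s + 0.5 * h * k2); k4 = V(s + h * k3)
        traj.append(s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(traj)

# Fold example: F = p^2 - x, i.e. (u')^2 = x, singular along p = 0.
V = vessiot_field(lambda x, u, p: -1.0,      # F_x
                  lambda x, u, p: 0.0,       # F_u
                  lambda x, u, p: 2.0 * p)   # F_p
# Here V = (2p, 2p^2, 1) never vanishes, so the integration passes
# smoothly through the fold singularity at p = 0.
traj = rk4(V, (1.0, -2.0 / 3.0, -1.0), 1e-2, 250)   # starts on F = 0
```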
We consider the hydrodynamics for biaxial nematic phases described by a field of orthonormal frames, which can be derived from a molecular-theory-based tensor model. We prove the uniqueness of global weak solutions to the Cauchy problem of the frame hydrodynamics in dimension two. The proof is mainly based on suitable weaker energy estimates within the Littlewood--Paley analysis. We take full advantage of the estimates of nonlinear terms with rotational derivatives on $SO(3)$, together with cancellation relations and dissipative structures of the biaxial frame system.
We present new one loop calculations that confirm the theorems of Joglekar and Lee on the renormalization of composite operators. We do this by considering physical matrix elements with the operators inserted at non-zero momentum. The resulting IR singularities are regulated dimensionally. We show that the physical matrix element of the BRST exact gauge variant operator which appears in the energy-momentum tensor is zero. We then show that the physical matrix elements of the classical energy-momentum tensor and the gauge invariant twist two gluon operator are independent of the gauge fixing parameter. A Sudakov factor appears in the latter cases. The universality of this factor and the UV finiteness of the energy-momentum tensor provide another method of finding the anomalous dimension of the gluon operator. We conjecture that this method applies to higher loops and takes full advantage of the triangularity of the mixing matrix.
The variational theory of the perfect hypermomentum fluid is developed. A new type of generalized Frenkel condition is considered. The Lagrangian density of such a fluid is stated, and the equations of motion of the fluid and the Weyssenhoff-type evolution equation of the hypermomentum tensor are derived. The expressions for the matter currents of the fluid (the canonical energy-momentum 3-form, the metric stress-energy 4-form and the hypermomentum 3-form) are obtained. The Euler-type hydrodynamic equation of motion of the perfect hypermomentum fluid is derived. It is proved that the motion of a perfect fluid without hypermomentum in a metric-affine space coincides with the motion of this fluid in a Riemann space.
In a recent paper, W. She, J. Yu and R. Feng reported the slight deformations observed upon transmission of a light pulse through a short length of a silica glass nano-filament. Relating the shape and magnitude of these deformations to the momentum of the light pulse inside and outside the filament, these authors concluded that, within the fiber, the photons carry the Abraham momentum. We present an alternative evaluation of force and momentum in a system similar to the experimental setup of She et al. Using precise numerical calculations that take into account not only the electromagnetic momentum inside and outside the filament, but also the Lorentz force exerted by a light pulse in its entire path through the nano-waveguide, we conclude that the net effect should be a pull (rather than a push) force on the end face of the nano-filament.
The transfer matrix of the XXZ open spin-1/2 chain with general integrable boundary conditions and generic anisotropy parameter (q is not a root of unity and |q|=1) is diagonalized using the representation theory of the q-Onsager algebra. Similarly to the Ising and superintegrable chiral Potts models, the complete spectrum is expressed in terms of the roots of a characteristic polynomial of degree d=2^N. The complete family of eigenstates is derived in terms of rational functions defined on a discrete support which satisfy a system of coupled recurrence relations. In the special case of linear relations between left and right boundary parameters for which Bethe-type solutions are known to exist, our analysis provides an alternative derivation of the results by Nepomechie et al. and Cao et al. In the latter case the complete family of eigenvalues and eigenstates splits into two sets, each associated with a characteristic polynomial of degree $d< 2^N$. Numerical checks performed for small values of $N$ support the analysis.
We present a simple method for the identification of weak signals associated with gravitational wave events. Its application reveals a signal with the same time lag as the GW150914 event in the released LIGO strain data with a significance around $3.2\sigma$. This signal starts about 10 minutes before GW150914 and lasts for about 45 minutes. Subsequent tests suggest that this signal is likely to be due to external sources.
We present the dependence of $D$ production on the charged particle multiplicity in proton-proton collisions at LHC energies. We show that, in a framework of source coherence, the open charm production exhibits a growth with multiplicity that is stronger than linear in the high density domain. This departure from linearity was previously observed in the $J/\psi$ inclusive data from proton-proton collisions at 7 TeV and was successfully described in our approach. Our assumption, the existence of coherence effects in proton-proton collisions at high energy, applies to high-multiplicity proton-proton collisions in the central rapidity region and should affect any hard observable.
It is shown that the gauge field in the Poincare gauge theory of gravity consists of two parts: the translational gauge field (t-field), which is generated by the energy-momentum current of external fields, and the rotational gauge field (r-field), which is generated by the sum of the angular and spin momentum currents of external fields. In connection with this, a physical field generated by rotating masses should exist.
We construct representations of the quantum algebras $U_{q{\bf q}}(gl(n))$ and $U_{q{\bf q}}(sl(n))$ which are in duality with the multiparameter quantum groups $GL_{q{\bf q}}(n)$, $SL_{q{\bf q}}(n)$, respectively. These objects depend on $n(n-1)/2+1$ deformation parameters $q,q_{ij}$ ($1\leq i <j\leq n$), which is the maximal possible number in the case of $GL(n)$. The representations are labelled by $n-1$ complex numbers $r_i$ and act in the space of formal power series of $n(n-1)/2$ non-commuting variables. These variables generate quantum flag manifolds of $GL_{q{\bf q}}(n)$, $SL_{q{\bf q}}(n)$. The case $n=3$ is treated in more detail.
Using the largest cosmological reionization simulation to date (~24 billion particles), we employ the genus curve to quantify the topology of the neutral hydrogen distribution on scales > 1 Mpc as it evolves during cosmological reionization. We find that the reionization process proceeds primarily in an inside-out fashion, where higher density regions become ionized earlier than lower density regions. There are four distinct topological phases: (1) Pre-reionization at z ~ 15, when the genus curve is consistent with a Gaussian density distribution. (2) Pre-overlap at 10 < z < 15, during which the number of HII bubbles increases gradually with time, until percolation of HII bubbles starts to take effect, characterized by a very flat genus curve at high volume fractions. (3) Overlap at 8 < z < 10, when large HII bubbles rapidly merge, manifested by a precipitous drop in the amplitude of the genus curve. (4) Post-overlap at 6 < z < 8, when HII bubbles have mostly overlapped and the genus curve is consistent with a diminishing number of isolated neutral islands. After the end of reionization (z < 6), the genus of neutral hydrogen is consistent with Gaussian random phase, in agreement with observations.
Simply restricting the computation to the non-sensitive part of the data may lead to inferences on sensitive data through data dependencies. Inference control from data dependencies has been studied in prior work. However, existing solutions either detect and deny queries which may lead to leakage -- resulting in poor utility -- or only protect against exact reconstruction of the sensitive data -- resulting in poor security. In this paper, we present a novel security model called full deniability. Under this stronger security model, any information inferred about sensitive data from non-sensitive data is considered a leakage. We describe algorithms for efficiently implementing full deniability on a given database instance with a set of data dependencies and sensitive cells. Using experiments on two different datasets, we demonstrate that our approach protects against realistic adversaries while hiding only a minimal number of additional non-sensitive cells, and that it scales well with database size and sensitive data.
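The kind of leakage that full deniability is designed to rule out can be seen in a minimal toy example (ours, not the paper's algorithm):

```python
# Toy illustration of inference through a data dependency. Suppose the
# publicly known dependency is:  dept = 'ICU'  =>  role = 'nurse'.
row = {"name": "alice", "dept": "ICU", "role": "nurse"}
hidden = {"role"}                        # the sensitive cell

visible = {k: v for k, v in row.items() if k not in hidden}

# An adversary applying the dependency recovers the hidden cell exactly:
inferred_role = "nurse" if visible.get("dept") == "ICU" else None
assert inferred_role == row["role"]      # exact reconstruction

# Under full deniability any such inference counts as leakage, so the
# algorithm must additionally hide the non-sensitive 'dept' cell to
# break the dependency chain.
```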
We study the ultrametric random matrix ensemble, whose independent entries have variances decaying exponentially in the metric induced by the tree topology on $\mathbb{N}$, and map out the entire localization regime in terms of eigenfunction localization and Poisson statistics. Our results complement existing works on complete delocalization and random matrix universality, thereby proving the existence of a phase transition in this model. In the simpler case of the Rosenzweig-Porter model, the analysis yields a complete characterization of the transition in the local statistics. The proofs are based on the flow of the resolvents of matrices with a random diagonal component under Dyson Brownian motion, for which we establish submicroscopic stability results for short times. These results go beyond norm-based continuity arguments for Dyson Brownian motion and complement the existing analysis after the local equilibration time.
We examine single chargino production in conjunction with R-parity violating lepton number violation at future lepton-lepton colliders. Present bounds on R-parity violating couplings allow for a production cross section of the order of ${\cal O}(10~\mathrm{fb})$ for a wide range of sneutrino and chargino masses. Scenarios of chargino decay which lead to purely leptonic signals in the final state without missing energy are also discussed.
Multi-wavelength observations extending to TeV photons are an essential diagnostic tool to study the physics of TeV sources. The complex variability of blazars, however (timescales from years down to minutes, with different patterns and SED behaviours), requires a great effort in simultaneous campaigns, which should preferably be carried out over several days. Spectral information is essential, and with the new TeV and X-ray telescopes it can now be obtained on timescales of less than one hour. The insights from such observations can be tremendous, since recent results have shown that the X-ray and TeV emissions do not always follow the same behaviour, and flares can have different relations between rise and decay times. Unfortunately, the strong pointing constraints of XMM do not allow the full use of this satellite simultaneously with ground telescopes.
For spoken dialog systems to conduct fluid conversational interactions with users, the systems must be sensitive to turn-taking cues produced by a user. Models should be designed so that effective decisions can be made as to when it is appropriate, or not, for the system to speak. Traditional end-of-turn models, where decisions are made at utterance end-points, are limited in their ability to model fast turn-switches and overlap. A more flexible approach is to model turn-taking in a continuous manner using RNNs, where the system predicts speech probability scores for discrete frames within a future window. The continuous predictions represent generalized turn-taking behaviors observed in the training data and can be applied to make decisions that are not just limited to end-of-turn detection. In this paper, we investigate optimal speech-related feature sets for making predictions at pauses and overlaps in conversation. We find that while traditional acoustic features perform well, part-of-speech features generally perform worse than word features. We show that our current models outperform previously reported baselines.
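A minimal sketch of such a continuous prediction model might look as follows (dimensions, window length, and architecture details are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

# An RNN consumes per-frame speech features and, at every frame,
# outputs speech probability scores for each frame of a future window.
class TurnTakingRNN(nn.Module):
    def __init__(self, feat_dim=130, hidden=64, future_window=20):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, future_window)

    def forward(self, frames):                  # (batch, time, feat_dim)
        h, _ = self.rnn(frames)
        return torch.sigmoid(self.head(h))      # (batch, time, future_window)

model = TurnTakingRNN()
x = torch.randn(8, 300, 130)     # 8 dialogs, 300 frames, 130-dim features
scores = model(x)                # speech probabilities per future frame
# Decisions at pauses and overlaps are then made by thresholding or
# aggregating these scores rather than waiting for an utterance end.
```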
We prove a theorem that generalizes Schmidt's Subspace Theorem in the context of metric diophantine approximation. To do so we reformulate the Subspace theorem in the framework of homogeneous dynamics by introducing and studying a slope formalism and the corresponding notion of semistability for diagonal flows.
This paper presents a class of boundary integral equation methods for the numerical solution of acoustic and electromagnetic time-domain scattering problems in the presence of unbounded penetrable interfaces in two-spatial dimensions. The proposed methodology relies on Convolution Quadrature (CQ) methods in conjunction with the recently introduced Windowed Green Function (WGF) method. As in standard time-domain scattering from bounded obstacles, a CQ method of the user's choice is utilized to transform the problem into a finite number of (complex) frequency-domain problems posed on the domains involving penetrable unbounded interfaces. Each one of the frequency-domain transmission problems is then formulated as a second-kind integral equation that is effectively reduced to a bounded interface by means of the WGF method---which introduces errors that decrease super-algebraically fast as the window size increases. The resulting windowed integral equations can then be solved by means of any (accelerated or unaccelerated) off-the-shelf Helmholtz boundary integral equation solver capable of handling complex wavenumbers with a large imaginary part. A high-order Nystr\"om method based on Alpert quadrature rules is utilized here. A variety of numerical examples including wave propagation in open waveguides as well as scattering from multiply layered media demonstrate the capabilities of the proposed approach.
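The CQ reduction to complex-frequency Helmholtz problems can be sketched as follows (standard Lubich-type formulas; the BDF2 symbol and radius choice are textbook conventions, not taken from the paper):

```python
import numpy as np

# For a multistep CQ with generating function delta(z), the complex
# frequencies of the associated Helmholtz problems are
#     s_l = delta(lam * zeta**(-l)) / dt,  zeta = exp(2j*pi/(M+1)),
# where lam < 1 controls aliasing and roundoff.

def bdf2_delta(z):
    return (1.0 - z) + 0.5 * (1.0 - z) ** 2

def cq_frequencies(M, dt, delta=bdf2_delta, lam=None):
    if lam is None:
        lam = np.finfo(float).eps ** (0.5 / (M + 1))
    zeta = np.exp(2j * np.pi / (M + 1))
    l = np.arange(M + 1)
    return delta(lam * zeta ** (-l)) / dt, lam

freqs, lam = cq_frequencies(M=200, dt=0.01)
# All frequencies have positive real part (A-stability of BDF2), which
# is why the underlying solver must handle complex wavenumbers with a
# large imaginary part.
assert np.all(freqs.real > 0)
```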
Inclusive jet cross sections in Z/gamma^* events, with Z/gamma^* decaying into an electron-positron pair, are measured as a function of jet transverse momentum and jet multiplicity in ppbar collisions at sqrt{s} = 1.96 TeV with the upgraded Collider Detector at Fermilab in Run II, based on an integrated luminosity of 1.7 fb^-1. The measurements cover the rapidity region | yjet | < 2.1 and the transverse momentum range ptjet > 30 GeV/c. Next-to-leading order perturbative QCD predictions are in good agreement with the measured cross sections.
Finding the set of nodes, which removed or (de)activated can stop the spread of (dis)information, contain an epidemic or disrupt the functioning of a corrupt/criminal organization is still one of the key challenges in network science. In this paper, we introduce the generalized network dismantling problem, which aims to find the set of nodes that, when removed from a network, results in a network fragmentation into subcritical network components at minimum cost. For unit costs, our formulation becomes equivalent to the standard network dismantling problem. Our non-unit cost generalization allows for the inclusion of topological cost functions related to node centrality and non-topological features such as the price, protection level or even social value of a node. In order to solve this optimization problem, we propose a method, which is based on the spectral properties of a novel node-weighted Laplacian operator. The proposed method is applicable to large-scale networks with millions of nodes. It outperforms current state-of-the-art methods and opens new directions in understanding the vulnerability and robustness of complex systems.
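A rough sketch of one spectral step with node costs is given below; folding the removal costs into edge weights $c_i c_j$ is our simplifying assumption and may differ from the paper's node-weighted Laplacian operator:

```python
import numpy as np
import networkx as nx

def weighted_fiedler_partition(G, cost):
    """Bisect G using the second eigenvector of a cost-weighted
    Laplacian and return cut nodes sorted by removal cost."""
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    W = np.zeros((n, n))
    for u, v in G.edges():
        w = cost[u] * cost[v]            # our choice of cost coupling
        W[idx[u], idx[v]] = W[idx[v], idx[u]] = w
    L = np.diag(W.sum(axis=1)) - W       # weighted graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    side = vecs[:, 1] >= 0               # sign pattern of Fiedler-type vector
    cut = {w for u, v in G.edges()
           if side[idx[u]] != side[idx[v]] for w in (u, v)}
    return sorted(cut, key=lambda v: cost[v])

G = nx.karate_club_graph()
cost = {v: 1 + G.degree(v) for v in G}   # e.g. cost grows with centrality
removal_candidates = weighted_fiedler_partition(G, cost)
```

Iterating such a step on the largest remaining component until all components are subcritical yields a dismantling set; the paper's method refines this idea with a dedicated operator and scalability optimizations.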
We report on a scheme for incorporating vertical radiative energy transport into a fully relativistic, Kerr-metric model of optically thick, advective, transonic alpha disks. Our code couples the radial and vertical equations of the accretion disk. The flux is computed in the diffusion approximation, and convection is included in the mixing-length approximation. We present the detailed structure of this "two-dimensional" slim-disk model for alpha=0.01, and then calculate the emergent spectra integrated over the disk surface. The values of surface density, radial velocity, and the photospheric height for these models differ by 20%-30% from those obtained in the polytropic, height-averaged slim disk model considered previously. However, the emission profiles and the resulting spectra are quite similar for both types of models. The effective optical depth of the slim disk becomes lower than unity for high values of the alpha parameter and for high accretion rates.
Biomedical text summarization is a critical tool that enables clinicians to effectively ascertain patient status. Traditionally, text summarization has been accomplished with transformer models, which are capable of compressing long documents into brief summaries. However, abstractive summarization remains among the most challenging natural language processing (NLP) tasks: GPT models in particular have a tendency to generate factual errors, lack context, and oversimplify words. To address these limitations, we replaced the attention mechanism in the GPT model with a pointer network. This modification was designed to preserve the core values of the original text during the summarization process. The effectiveness of the Pointer-GPT model was evaluated using the ROUGE score. The results demonstrated that Pointer-GPT outperformed the original GPT model. These findings suggest that pointer networks can be a valuable addition to EMR systems and can provide clinicians with more accurate and informative summaries of patient medical records. This research has the potential to usher in a new paradigm in EMR systems and to revolutionize the way that clinicians interact with patient medical records.
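A generic pointer-style head of the kind alluded to above can be sketched as follows (our illustration of the copy mechanism; the authors' Pointer-GPT architecture may differ in detail):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Instead of generating from a fixed vocabulary, the decoder state
# attends over the source tokens and the attention weights are used
# directly as a copy distribution over the source vocabulary ids.
class PointerHead(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)

    def forward(self, dec_state, enc_states, src_token_ids, vocab_size):
        # dec_state: (batch, d); enc_states: (batch, src_len, d)
        scores = torch.einsum("bd,bsd->bs", self.q(dec_state), self.k(enc_states))
        attn = F.softmax(scores / enc_states.size(-1) ** 0.5, dim=-1)
        # Scatter the attention mass onto the source tokens' vocab ids:
        copy_dist = torch.zeros(dec_state.size(0), vocab_size)
        copy_dist.scatter_add_(1, src_token_ids, attn)
        return copy_dist   # words absent from the source get zero mass

head = PointerHead()
dist = head(torch.randn(2, 256), torch.randn(2, 50, 256),
            torch.randint(0, 1000, (2, 50)), vocab_size=1000)
```

Restricting the output distribution to tokens actually present in the source is what helps suppress fabricated content in the generated summary.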
In this article, we study several problems related to virtual traces for finite group actions on schemes of finite type over an algebraically closed field. We also discuss applications to fixed point sets. Our results generalize previous results obtained by Deligne, Laumon, Serre and others.
Continuous word representations, trained on large unlabeled corpora are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of words, by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character $n$-grams. A vector representation is associated to each character $n$-gram; words being represented as the sum of these representations. Our method is fast, allowing to train models on large corpora quickly and allows us to compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, both on word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks.
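The subword scheme is easy to reproduce in a few lines (toy sizes; the released fastText implementation uses an FNV-1a hash and on the order of two million buckets):

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Wrap the word in boundary symbols and list its character n-grams."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
                       for i in range(len(w) - n + 1)]

rng = np.random.default_rng(0)
buckets, dim = 4096, 8              # toy sizes for illustration
table = rng.normal(size=(buckets, dim))

def word_vector(word):
    # Python's built-in hash stands in for fastText's FNV-1a hash.
    ids = [hash(g) % buckets for g in char_ngrams(word)]
    return table[ids].sum(axis=0)   # word = sum of its n-gram vectors

print(char_ngrams("where")[:5])     # ['<wh', 'whe', 'her', 'ere', 're>']
v = word_vector("whereabouts")      # defined even for unseen words
```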
It is shown that the renormalisation group (RG) equation can be viewed as an equation for Lie transport of physical amplitudes along the integral curves generated by the $\beta$-functions of a quantum field theory. The anomalous dimensions arise from Lie transport of basis vectors on the space of couplings. The RG equation can be interpreted as relating a particular diffeomorphism of flat space-(time), that of dilations, to a diffeomorphism on the space of couplings generated by the vector field associated with the $\beta$-functions.
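Schematically (a standard Callan-Symanzik-type form in our notation), the statement is that the RG equation
$$\Big(\mu\,\frac{\partial}{\partial\mu} + \beta^i(g)\,\frac{\partial}{\partial g^i} + \gamma(g)\Big)\, A(g;\mu) = 0$$
identifies $\beta^i\,\partial/\partial g^i$ with the Lie derivative $\mathcal{L}_\beta$ along the vector field $\beta = \beta^i\,\partial_i$ on the space of couplings, so that amplitudes are Lie-transported along its integral curves while the anomalous-dimension term $\gamma$ records the transport of basis vectors.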
A mechanism for phonon Hall effect (PHE) in non-magnetic insulators under an external magnetic field is theoretically studied. PHE is known in (para)magnetic compounds, where the magnetic moments and spin-orbit interaction play an essential role. In sharp contrast, we here show that a non-zero Berry curvature of acoustic phonons is induced by an external magnetic field due to the correction to the adiabatic Born-Oppenheimer approximation. This results in the finite thermal Hall conductivity $\kappa_H$ in nonmagnetic band insulators. Our estimate of $\kappa_H$ for a simple model gives $\kappa_H \sim 1.0\times 10^{-5} $[W/Km] at $ B=10 $[T] and $ T=150 $[K].
We consider the finite temperature Casimir interaction between two Dirichlet spheres in $(D+1)$-dimensional Minkowski spacetime. The Casimir interaction free energy is derived from the zero temperature Casimir interaction energy via the Matsubara formalism. In the high temperature region, the Casimir interaction is dominated by the term with zero Matsubara frequency, known as the classical term since it is independent of the Planck constant $\hbar$. An explicit expression for the classical term is derived and computed exactly using appropriate similarity transforms of matrices. We then compute the small separation asymptotic expansion of this classical term up to the next-to-leading order term. For the remaining part of the finite temperature Casimir interaction with nonzero Matsubara frequencies, we obtain its small separation asymptotic behavior by applying certain prescriptions to the corresponding asymptotic expansion at zero temperature. This gives a leading term that is shown to agree precisely with the proximity force approximation at any temperature. The next-to-leading order term at any temperature is also derived and expressed as an infinite sum over integrals. To obtain the asymptotic expansions in the low and medium temperature regions, we apply inverse Mellin transform techniques. In the low temperature region, we obtain results that agree with our previous work on the zero temperature Casimir interaction.
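Schematically (standard Matsubara formulas, written in our notation), the free energy is obtained from the zero-temperature interaction energy $E$ evaluated at the imaginary Matsubara frequencies,
$$F(T) = k_B T \sum_{l=0}^{\infty}{}' \, E(i\xi_l), \qquad \xi_l = \frac{2\pi l\, k_B T}{\hbar},$$
where the prime indicates that the $l=0$ term is taken with weight $1/2$; since $\xi_0 = 0$, that term is independent of $\hbar$ and dominates at high temperature, which is why it is called the classical term.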