We adapt the fluid description of Fractional Quantum Hall (FQH) states of (arXiv:2203.06516) to model a system of interacting two-component bosons. This system represents the simplest physical realization of an interacting bosonic Symmetry-Protected Topological (SPT) phase, also known as the integer quantum Hall effect (IQHE) of bosons. In particular, we demonstrate how the fluid dynamical boundary conditions of no-penetration and no-stress at a hard wall naturally give rise to the two counter-propagating boundary modes expected in these SPT phases. Moreover, we identify energy-conserving hydrodynamic boundary conditions that can either create a gap in these edge modes or completely isolate the edge states from the bulk, as described in (Physical Review X 14, 011057 (2024)), where they are termed fragile surface states. These fragile surface states are typically absent in K-matrix edge theories and require bulk dynamics to manifest. By leveraging insights from hydrodynamic boundary dynamics, we can further elucidate the intricate surface properties of SPTs beyond the usual approaches based on topological quantum field theory.
We study Frege proofs for the one-to-one graph Pigeon Hole Principle defined on the $n\times n$ grid where $n$ is odd. We are interested in the case where each formula in the proof is a depth $d$ formula in the basis given by $\land$, $\lor$, and $\neg$. We prove that in this situation the proof needs to be of size exponential in $n^{\Omega (1/d)}$. If we restrict each line in the proof to have size at most $M$, then the number of lines needed is exponential in $n/(\log M)^{O(d)}$. The main technical component of the proofs is to design a new family of random restrictions and to prove the appropriate switching lemmas.
Diffusion wake is an unambiguous part of the jet-induced medium response in high-energy heavy-ion collisions that leads to a depletion of soft hadrons in the opposite direction of the jet propagation. New experimental data on $Z$-hadron correlation in Pb+Pb collisions at the Large Hadron Collider show, however, an enhancement of soft hadrons in the direction of both the $Z$ and the jet. Using a coupled linear Boltzmann transport and hydro model, we demonstrate that medium modification of partons from the initial multiple parton interaction (MPI) gives rise to a soft hadron enhancement that is uniform in azimuthal angle while jet-induced medium response and soft gluon radiation dominate the enhancement in the jet direction. After subtraction of the contributions from MPI with a mixed-event procedure, the diffusion wake becomes visible in the near-side $Z$-hadron correlation. We further employ the longitudinal and transverse gradient jet tomography for the first time to localize the initial jet production positions in $Z/\gamma$-jet events in which the effect of the diffusion wake is apparent in $Z/\gamma$-hadron correlation even without the subtraction of MPI.
High-quality dehazing performance depends strongly on accurate estimation of the transmission map. In this work, a coarse estimate is first obtained by a weighted fusion of two different transmission maps, generated from the foreground and sky regions, respectively. A hybrid variational model with promoted regularization terms is then proposed to assist in refining the transmission map. The resulting complicated optimization problem is solved effectively via an alternating direction algorithm. The final haze-free image is then obtained from the refined transmission map and the atmospheric scattering model. Our dehazing framework is able to preserve important image details while suppressing undesirable artifacts, even for hazy images with large sky regions. Experiments on both synthetic and real images illustrate that the proposed method is competitive with or even outperforms state-of-the-art dehazing techniques under different imaging conditions.
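For context, the final recovery step mentioned above follows the standard atmospheric scattering model $I = J\,t + A\,(1-t)$; the sketch below is our own minimal NumPy illustration of that inversion (the function name and the lower bound t0 are assumptions, not the paper's implementation).

import numpy as np

def recover_scene(I, t, A, t0=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).
    I  : hazy image, float array of shape (H, W, 3) in [0, 1]
    t  : refined transmission map, array of shape (H, W)
    A  : estimated atmospheric light, length-3 vector
    t0 : lower bound on t to avoid amplifying noise in dense haze
    """
    t = np.clip(t, t0, 1.0)[..., None]        # broadcast over color channels
    A = np.asarray(A, dtype=float)
    J = (I - A) / t + A                        # per-pixel inversion of the model
    return np.clip(J, 0.0, 1.0)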
The Telescope Array Collaboration recently reported the detection of a cosmic-ray particle, "Amaterasu", with an extremely high energy of $2.4\times10^{20}$ eV. Here we investigate its probable charge and the locus of its production. Interpreted as a primary iron nucleus or slightly stripped fragment, the event fits well within the existing paradigm for UHECR composition and spectrum. Using the most up-to-date modeling of the Galactic magnetic field strength and structure, and taking into account uncertainties, we identify the likely volume from which it originated. We estimate a localization uncertainty on the source direction of 6.6\% of $4\pi$ or 2726 deg$^2$. The uncertainty of magnetic deflections and the experimental energy uncertainties contribute about equally to the localization uncertainty. The maximum source distance is 8-50 Mpc, with the range reflecting the uncertainty on the energy assignment. We provide sky maps showing the localization region of the event and superimpose the location of galaxies of different types. There are no candidate sources among powerful radio galaxies. An origin in AGNs or star-forming galaxies is unlikely but cannot be completely ruled out without a more precise energy determination. The most straightforward option is that Amaterasu was created in a transient event in an otherwise undistinguished galaxy.
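As a consistency check on the quoted solid angle (our own arithmetic, not taken from the paper), 6.6\% of the full sky corresponds to
\[
0.066 \times 4\pi\,\mathrm{sr} \times \left(\frac{180}{\pi}\right)^{2}\,\frac{\mathrm{deg}^2}{\mathrm{sr}} \approx 0.066 \times 41{,}253\ \mathrm{deg}^2 \approx 2.72\times 10^{3}\ \mathrm{deg}^2,
\]
consistent with the quoted 2726 deg$^2$ (the 6.6\% figure being rounded).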
For finitely generated modules $N \subsetneq M$ over a Noetherian ring $R$, we study the following properties about primary decomposition: (1) The Compatibility property, which says that if $\ass (M/N)=\{P_1, P_2, \ldots, P_s\}$ and $Q_i$ is a $P_i$-primary component of $N \subsetneq M$ for each $i=1,2,\ldots,s$, then $N =Q_1 \cap Q_2 \cap \cdots \cap Q_s$; (2) For a given subset $X=\{P_1, P_2, \ldots, P_r \} \subseteq \ass(M/N)$, $X$ is an open subset of $\ass(M/N)$ if and only if $Q_1 \cap Q_2\cap \cdots \cap Q_r= Q_1' \cap Q_2' \cap \cdots \cap Q_r'$ for all possible $P_i$-primary components $Q_i$ and $Q_i'$ of $N\subsetneq M$; (3) A new proof of the `Linear Growth' property, which says that for any fixed ideals $I_1, I_2, \ldots, I_t$ of $R$, there exists a $k \in \mathbb N$ such that for any $n_1, n_2, \ldots, n_t \in \mathbb N$ there exists a primary decomposition of $I_1^{n_1}I_2^{n_2}\cdots I_t^{n_t}M \subset M$ such that every $P$-primary component $Q$ of that primary decomposition contains $P^{k(n_1+n_2+\cdots+n_t)}M$.
We explore, using a Ginzburg-Landau expansion of the free energy, the Larkin-Ovchinnikov-Fulde-Ferrell (LOFF) phase of QCD with three flavors, using the NJL four-fermion coupling to mimic gluon interactions. We find that, below the point where the QCD homogeneous superconductive phases should give way to the normal phase, Cooper condensation of the pairs u-s and d-u is possible, but in the form of the inhomogeneous LOFF pairing.
Quantum metrology concerns estimating a parameter from multiple identical uses of a quantum channel. We extend quantum metrology beyond this standard setting and consider estimation of a physical process with quantum memory, here referred to as a parametrized quantum comb. We present a theoretical framework for the metrology of quantum combs and derive a general upper bound on the comb quantum Fisher information. The bound can be operationally interpreted as the quantum Fisher information of a memoryless quantum channel times a dimensional factor. We then show an example where the bound can be attained up to a factor of four. With the example and the bound, we show that memory in quantum sensors plays an even more crucial role in the estimation of combs than in the standard setting of quantum metrology.
Insight into the crumpling or compaction of one-dimensional objects is of great importance for understanding biopolymer packaging and designing innovative technological devices. By compacting various types of wires in rigid confinements and characterizing the morphology of the resulting crumpled structures, we report here how friction, plasticity, and torsion enhance disorder, leading to a transition from coiled to folded morphologies. In the latter case, where folding dominates the crumpling process, we find that reducing the relative wire thickness counter-intuitively causes the maximum packing density to decrease. The segment-size distribution gradually becomes more asymmetric during compaction, reflecting an increase of spatial correlations. We introduce a self-avoiding random walk model and verify that the cumulative injected wire length follows a universal dependence on segment size, allowing for the prediction of the efficiency of compaction as a function of material properties, container size, and injection force.
This paper undertakes a critical reexamination of the electrostatic version of the Aharonov-Bohm ("AB") effect. The conclusions are as follows: 1. Aharonov and Bohm's 1959 exposition is invalid because it does not consider the wavefunction of the entire system, including the source of electrostatic potential. 2. As originally proposed, the electrostatic AB effect does not exist. Perhaps surprisingly, this conclusion holds despite the relativistic covariance of the electromagnetic four-potential combined with the well-established magnetic AB effect. 3. Although the authors attempted, in a 1961 paper, to demonstrate that consideration of the entire system would not change their result, they inadvertently assumed the desired outcome in their analysis. 4. Claimed observations of the electrostatic AB effect or an analogue thereof are shown to be mistaken.
We consider the effects of network topology on the optimality of packet routing, quantified by $\gamma_c$, the rate of packet insertion beyond which congestion and queue growth occur. The key result of this paper is to show that, for any network, there exists an absolute upper bound, expressed in terms of vertex separators, for the scaling of $\gamma_c$ with network size $N$, irrespective of the routing algorithm used. We then derive an estimate of this upper bound for scale-free networks, and introduce a novel static routing protocol that is superior to shortest-path routing under intense packet insertion rates.
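To make the congestion threshold concrete, a common back-of-the-envelope estimate for plain shortest-path routing ties $\gamma_c$ to the maximum node betweenness; the networkx-based sketch below is our own illustration (correct only up to convention-dependent O(1) factors) and is not the routing protocol proposed in the paper.

import networkx as nx

def gamma_c_shortest_path(G):
    """Rough congestion threshold for shortest-path routing.
    With packets injected at rate gamma per node and uniformly random
    source/destination pairs, a node of (unnormalized) betweenness b handles
    roughly gamma * b / (N - 1) packets per step; congestion sets in when this
    exceeds the unit processing capacity at the busiest node.
    """
    N = G.number_of_nodes()
    b = nx.betweenness_centrality(G, normalized=False)
    b_max = max(b.values())
    return (N - 1) / b_max if b_max > 0 else float("inf")

# Example on a scale-free graph:
G = nx.barabasi_albert_graph(1000, 3, seed=0)
print(gamma_c_shortest_path(G))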
We prove that links with meridional rank 3 whose 2-fold branched covers are graph manifolds are 3-bridge links. This gives a partial answer to a question by S. Cappell and J. Shaneson on the relation between the bridge numbers and meridional ranks of links. To prove this, we also show that the meridional rank of any satellite knot is at least 4.
Binary feature descriptors have been widely used in various visual measurement tasks, particularly those with limited computing resources and storage capacities. Existing binary descriptors may not perform well for long-term visual measurement tasks due to their sensitivity to illumination variations. It can be observed that when image illumination changes dramatically, the relative relationship among local patches mostly remains intact. Based on this observation, this study presents an illumination-insensitive binary (IIB) descriptor by leveraging the local inter-patch invariance exhibited in multiple spatial granularities to deal with unfavorable illumination variations. By taking advantage of integral images for local patch feature computation, a highly efficient IIB descriptor is achieved. It can encode scalable features in multiple spatial granularities, thus facilitating a computationally efficient hierarchical matching from coarse to fine. Moreover, the IIB descriptor can also be applied to other types of image data, such as depth maps and semantic segmentation results, when available in some applications. Numerical experiments on both natural and synthetic datasets reveal that the proposed IIB descriptor outperforms state-of-the-art binary descriptors and some tested float descriptors. The proposed IIB descriptor has also been successfully employed in a demo system for long-term visual localization. The code of the IIB descriptor will be publicly available.
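To illustrate the integral-image trick mentioned above, the sketch below (our own NumPy illustration, not the authors' code) shows how the mean intensity of any axis-aligned patch reduces to four array look-ups, which is what makes multi-granularity inter-patch comparisons cheap.

import numpy as np

def integral_image(img):
    """Zero-padded cumulative sum over rows and columns."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def patch_mean(ii, top, left, h, w):
    """Mean intensity of the h-by-w patch with top-left corner (top, left),
    obtained from four look-ups into the integral image."""
    s = (ii[top + h, left + w] - ii[top, left + w]
         - ii[top + h, left] + ii[top, left])
    return s / (h * w)

# A binary test of the kind used by inter-patch descriptors: compare the means of
# two patches; the sign is largely preserved under global illumination changes.
def patch_bit(ii, p1, p2, size):
    return patch_mean(ii, *p1, size, size) > patch_mean(ii, *p2, size, size)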
The confluence of state-of-the-art electronic-structure computations and modern synthetic materials growth techniques is proving indispensable in the search for and discovery of new functionalities in oxide thin films and heterostructures. Here, we review the recent contributions of electronic-structure calculations to predicting, understanding, and discovering new materials physics in thin-film perovskite oxides. We show that such calculations can accurately predict both structure and properties in advance of film synthesis, thereby guiding the search for materials combinations with specific targeted functionalities. In addition, because they can isolate and decouple the effects of various parameters which unavoidably occur simultaneously in an experiment -- such as epitaxial strain, interfacial chemistry and defect profiles -- they are able to provide new fundamental knowledge about the underlying physics. We conclude by outlining the limitations of current computational techniques, as well as some important open questions that we hope will motivate further methodological developments in the field.
Many real-world problems involve massive amounts of data. Under these circumstances learning algorithms often become prohibitively expensive, making scalability a pressing issue to be addressed. A common approach is to perform sampling to reduce the size of the dataset and enable efficient learning. Alternatively, one customizes learning algorithms to achieve scalability. In either case, the key challenge is to obtain algorithmic efficiency without compromising the quality of the results. In this paper we discuss a meta-learning algorithm (PSBML) which combines features of parallel algorithms with concepts from ensemble and boosting methodologies to achieve the desired scalability property. We present both theoretical and empirical analyses which show that PSBML preserves a critical property of boosting, specifically, convergence to a distribution centered around the margin. We then present additional empirical analyses showing that this meta-level algorithm provides a general and effective framework that can be used in combination with a variety of learning classifiers. We perform extensive experiments to investigate the tradeoff achieved between scalability and accuracy, and robustness to noise, on both synthetic and real-world data. These empirical results corroborate our theoretical analysis, and demonstrate the potential of PSBML in achieving scalability without sacrificing accuracy.
Recently, KFe$_2$As$_2$ was shown to exhibit a structural phase transition from a tetragonal to a collapsed tetragonal phase under applied pressure of about $15~\mathrm{GPa}$. Surprisingly, the collapsed tetragonal phase hosts a superconducting state with $T_c \sim 12~\mathrm{K}$, while the tetragonal phase is a $T_c \leq 3.4~\mathrm{K}$ superconductor. We show that the key difference between the previously known non-superconducting collapsed tetragonal phase in AFe$_2$As$_2$ (A= Ba, Ca, Eu, Sr) and the superconducting collapsed tetragonal phase in KFe$_2$As$_2$ is the qualitatively distinct electronic structure. While the collapsed phase in the former compounds features only electron pockets at the Brillouin zone boundary and no hole pockets are present in the Brillouin zone center, the collapsed phase in KFe$_2$As$_2$ has almost nested electron and hole pockets. Within a random phase approximation spin fluctuation approach we calculate the superconducting order parameter in the collapsed tetragonal phase. We propose that a Lifshitz transition associated with the structural collapse changes the pairing symmetry from $d$-wave (tetragonal) to $s_\pm$ (collapsed tetragonal). Our DFT+DMFT calculations show that effects of correlations on the electronic structure of the collapsed tetragonal phase are minimal. Finally, we argue that our results are compatible with a change of sign of the Hall coefficient with pressure as observed experimentally.
A semi-analytic method for quickly approximating the density-reduced critical electric field for arbitrary mixtures of gases is proposed and validated. Determination of this critical electric field is crucial for designing and testing alternatives to SF$_6$ for insulating high-voltage electrical equipment. We outline the theoretical basis of the approximation formula from electron fluid conservation equations, and demonstrate how, for binary mixtures, the critical electric field can be computed from the transport data of electrons in the pure gases. We demonstrate the validity of the method in mixtures of N$_2$ and O$_2$, and of SF$_6$ and O$_2$. We conclude with an application of the method to approximate the critical electric field for mixtures of SF$_6$ and HFO1234ze(E), a mixture of high interest that is being actively studied for high-voltage insulation applications.
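As a rough illustration of the idea (the coefficient functions below are hypothetical placeholders, not measured data, and the simple mole-fraction weighting is only a stand-in for the paper's fluid-equation-based formula), the reduced critical field of a binary mixture can be found as the root of a weighted net ionization rate:

from scipy.optimize import brentq

# Hypothetical reduced net ionization rates (ionization minus attachment) for two
# pure gases, as functions of the reduced field E/N in Townsend. Real inputs would
# come from swarm data or Boltzmann/fluid calculations for the pure gases.
def net_rate_gas1(EN):   # e.g. a strongly attaching gas such as SF6
    return 0.02 * (EN - 360.0)

def net_rate_gas2(EN):   # e.g. a weakly attaching partner gas
    return 0.05 * (EN - 120.0)

def critical_field(x1, x2):
    """Reduced critical field (E/N)_c of the mixture, i.e. the root of the
    mole-fraction-weighted net rate (a simplified stand-in for the paper's formula)."""
    mixture_rate = lambda EN: x1 * net_rate_gas1(EN) + x2 * net_rate_gas2(EN)
    return brentq(mixture_rate, 50.0, 1000.0)

print(critical_field(0.5, 0.5))   # lies between the two pure-gas critical fields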
The properties of previously discovered nucleon resonances are amended based on recent, more detailed experimental data on the photoproduction of $\eta$-mesons on protons.
It is well known that Space-Time Block Codes (STBCs) from orthogonal designs (ODs) are single-symbol decodable/symbol-by-symbol decodable (SSD) and are obtainable from unitary matrix representations of Clifford algebras. However, SSD codes can also be obtained from designs that are not orthogonal. Recently, two such classes of SSD codes have been studied: (i) Coordinate Interleaved Orthogonal Designs (CIODs) and (ii) Minimum-Decoding-Complexity (MDC) STBCs from Quasi-ODs (QODs). Codes from ODs, CIODs and MDC-QODs are mutually non-intersecting classes of codes. The class of CIODs has {\it non-unitary weight matrices} when written as a Linear Dispersion Code (LDC) proposed by Hassibi and Hochwald, whereas several known SSD codes including CODs have {\it unitary weight matrices}. In this paper, we obtain SSD codes with unitary weight matrices (that are not CODs), called Clifford Unitary Weight SSDs (CUW-SSDs), from matrix representations of Clifford algebras. A main result of this paper is the derivation of an achievable upper bound on the rate of any unitary weight SSD code, namely $\frac{a}{2^{a-1}}$ for $2^a$ antennas, which is larger than that of the CODs, namely $\frac{a+1}{2^a}$. It is shown that several known classes of SSD codes are CUW-SSD codes and that CUW-SSD codes meet this upper bound. Also, for the codes of this paper, conditions on the signal sets that ensure full diversity and expressions for the coding gain are presented. A large class of SSD codes with non-unitary weight matrices is obtained, which includes CIODs as a proper subclass.
Let $\mathcal{R}$ be a $2$-torsion free commutative ring with unity, $X$ a locally finite pre-ordered set and $I(X,\mathcal{R})$ the incidence algebra of $X$ over $\mathcal{R}$. If $X$ consists of a finite number of connected components, we give a necessary and sufficient condition for every commuting map on $I(X,\mathcal{R})$ to be proper.
We describe recent analytical and numerical results on stability and behavior of viscous and inviscid detonation waves obtained by dynamical systems/Evans function techniques like those used to study shock and reaction diffusion waves. In the first part, we give a broad description of viscous and inviscid results for 1D perturbations; in the second, we focus on inviscid high-frequency stability in multi-D and associated questions in turning point theory/WKB expansion.
Consider the integer best approximations of a linear form in $n\ge 2$ real variables. It is well-known that any tail of this sequence always spans a lattice of dimension at least three, and by a result of Moshchevitin this bound is sharp for any $n\ge 2$. In this paper, we determine the exact Hausdorff and packing dimension of the set where equality occurs, in terms of $n$. Moreover, independently we show that there exist real vectors whose best approximations lie in a union of two two-dimensional sublattices of $\Z^{n+1}$. These lattices jointly span a lattice of dimension three only, thereby leading to an alternative constructive proof of Moshchevitin's result. We determine the packing dimension, and up to a small error term $O(n^{-1})$ also the Hausdorff dimension, of the corresponding set. Our method combines a new construction for a linear form in two variables ($n=2$) with a result by Moshchevitin in order to amplify it to higher dimensions. We further employ the recent variational principle and some of its consequences, as well as estimates for Hausdorff and packing dimensions of Cartesian products and fibers. Our method permits much freedom for the induced classical exponents of approximation.
Two-photon time-resolved photoluminescence has been recently applied to various semiconductor devices to determine carrier lifetime and surface recombination velocities. So far the theoretical modeling activity has been mainly limited to the commonly used one-photon counterpart of the technique. Here we provide the analytical solution to a 3D diffusion equation that describes two-photon microscopy in the low-injection regime. We focus on a system with a single buried interface with enhanced recombination, and analyze how transport, bulk and surface recombinations influence photoluminescence decays. We find that bulk measurements are dominated by diffusion at short times and by bulk recombination at long times. Surface recombination modifies bulk signals when the optical spot is less than a diffusion length away from the probed interface. In addition, the resolution is increased as the spot size is reduced, which however makes the signal more sensitive to diffusion.
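For orientation, a standard low-injection model of this kind (our notation; details may differ from the paper's exact formulation) evolves the excess carrier density $\Delta n$ according to
\[
\frac{\partial \Delta n}{\partial t} = D\,\nabla^{2}\Delta n - \frac{\Delta n}{\tau_{b}}, \qquad D\,\frac{\partial \Delta n}{\partial z}\Big|_{z=0} = S\,\Delta n\big|_{z=0},
\]
with $D$ the ambipolar diffusivity, $\tau_{b}$ the bulk lifetime, and $S$ the recombination velocity at the buried interface taken here to lie at $z=0$; the photoluminescence decay is then governed by the interplay of these three parameters.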
In this article, we construct the additional perturbative quantum torus symmetry of the dispersionless BKP hierarchy based on the $W_{\infty}$ infinite-dimensional Lie symmetry. These results show that the complete quantum torus symmetry is broken in passing from the BKP hierarchy to its dispersionless limit. Furthermore, a series of additional flows of the multicomponent BKP hierarchy is defined, and these flows constitute an $N$-fold direct product of the positive half of the quantum torus symmetries.
We used scattering dichroism to study the combined effects of viscous and magnetic forces on the dynamics of dipolar chains induced in magnetorheological suspensions under rotating magnetic fields. We found that the chains adjust their size to rotate synchronously with the field but with a constant phase lag. Two different behaviors of the dichroism (proportional to the total number of aggregated particles) and the phase lag are found below and above a critical frequency. We obtained a linear dependence of the critical frequency on the square of the magnetization and on the inverse of the viscosity. The Mason number (ratio of viscous to magnetic forces) governs the dynamics. Therefore there is a critical Mason number below which the dichroism remains almost constant and above which the rotation of the field prevents the particle aggregation process from taking place, this being the mechanism responsible for the decrease of dichroism. Our experimental results have been corroborated with particle dynamics simulations, showing good agreement.
We report heteronuclear photoassociation spectroscopy in a mixture of magneto-optically trapped 6Li and 7Li. Hyperfine resolved spectra of the vibrational level v=83 of the singlet state have been taken up to intensities of 1000 W/cm^2. Saturation of the photoassociation rate has been observed for two hyperfine transitions, which can be shown to be due to saturation of the rate coefficient near the unitarity limit. Saturation intensities on the order of 40 W/cm^2 can be determined.
We show the violation of the entanglement-area law for bosonic systems with Bose surfaces. For bosonic systems with gapless factorized energy dispersions on an N^d Cartesian lattice in d dimensions, e.g., the exciton Bose liquid in two dimensions, we explicitly show that a belt subsystem of width L preserving translational symmetry along d-1 Cartesian axes has leading entanglement entropy (N^{d-1}/3)ln L. Using this result, the strong subadditivity inequality, and lattice symmetries, we bound the entanglement entropy of a rectangular subsystem from below and above, showing a logarithmic violation of the area law. For subsystems with a single flat boundary we also bound the entanglement entropy from below, showing a logarithmic violation, and argue that the entanglement entropy of subsystems with arbitrary smooth boundaries is similarly bounded.
Translated from the Latin original "Evolutio producti infiniti $(1-x)(1-xx)(1-x^3)(1-x^4)(1-x^5)(1-x^6)$ etc. in seriem simplicem" (1775). E541 in the Enestroem index. In this paper Euler is revisiting his proof of the pentagonal number theorem. He gives his original proof explained a bit differently, and then gives a different proof. However this second proof is still rather close to his original proof. To understand the two proofs, I wrote them out using subscript notation and sum/product notation. It would be a useful exercise to try to really understand the proofs without using any modern notation. The right notation takes care of a lot for us, which we would otherwise have to keep active in our minds.
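As a quick modern check of the identity Euler treats here, namely $\prod_{n\ge 1}(1-x^n) = \sum_{k\in\mathbb{Z}} (-1)^k x^{k(3k-1)/2}$, one can compare truncated coefficients numerically (this sketch is ours, not part of the translation):

def product_coeffs(n_terms, max_deg):
    """Coefficients of prod_{n=1}^{n_terms} (1 - x^n), truncated at degree max_deg."""
    c = [0] * (max_deg + 1)
    c[0] = 1
    for n in range(1, n_terms + 1):
        for k in range(max_deg, n - 1, -1):   # multiply in place by (1 - x^n)
            c[k] -= c[k - n]
    return c

def pentagonal_coeffs(max_deg):
    """Coefficients of sum_{k in Z} (-1)^k x^{k(3k-1)/2}, truncated at degree max_deg."""
    c = [0] * (max_deg + 1)
    for k in range(-max_deg, max_deg + 1):
        e = k * (3 * k - 1) // 2
        if 0 <= e <= max_deg:
            c[e] += (-1) ** abs(k)
    return c

assert product_coeffs(60, 50) == pentagonal_coeffs(50)   # the coefficients agree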
We study b-arc foliation change and exchange move of open book foliations which generalize the corresponding operations in braid foliation theory. We also define a bypass move as an analogue of Honda's bypass attachment operation. As applications, we study how open book foliations change under a stabilization of the open book. We also generalize Birman-Menasco's split/composite braid theorem: Closed braid representatives of a split (resp. composite) link in a certain open book can be converted to a split (resp. composite) closed braid by applying exchange moves finitely many times.
We report on the observation of superconductivity in LaRh$_2$As$_2$, which is the analogue without $f$-electrons of the heavy-fermion system with two superconducting phases CeRh$_2$As$_2$. A zero-resistivity transition, a specific-heat jump and a drop in magnetic ac susceptibility consistently point to a superconducting transition at a transition temperature of $T_c = 0.28$\,K. The magnetic field-temperature superconducting phase diagrams determined from field-dependent ac-susceptibility measurements reveal small upper critical fields $\mu_{\mathrm{0}}H_{c2} \approx 12$\,mT for $H\parallel ab$ and $\mu_{\mathrm{0}}H_{c2} \approx 9$\,mT for $H\parallel c$. The observed $H_{c2}$ is larger than the estimated thermodynamic critical field $H_c$ derived from the heat-capacity data, suggesting that LaRh$_2$As$_2$ is a type-II superconductor with Ginzburg-Landau parameters $\kappa^{ab}_{GL} \approx 1.9$ and $\kappa^{c}_{GL}\approx 2.7$. The microscopic Eliashberg theory indicates superconductivity to be in the weak-coupling regime with an electron-phonon coupling constant $\lambda_{e-ph} \approx 0.4$. Despite a similar $T_c$ and the same crystal structure as the Ce compound, LaRh$_2$As$_2$ displays conventional superconductivity, corroborating the substantial role of the 4$f$ electrons for the extraordinary superconducting state in CeRh$_2$As$_2$.
With the development of computational intelligence algorithms, unsupervised monocular depth and pose estimation frameworks driven by warped photometric consistency have shown great performance in daytime scenarios. In some challenging environments, however, such as night and rainy-night scenes, the essential photometric consistency hypothesis is untenable because of the complex lighting and reflections, so this unsupervised framework cannot be directly applied to such complex scenarios. In this paper, we investigate the problem of unsupervised monocular depth estimation in highly complex scenarios and address this challenging problem by adopting an image transfer-based domain adaptation framework. We adapt the depth model trained on daytime scenarios to be applicable to night-time scenarios, and constraints on both the feature space and the output space encourage the framework to learn the key features for depth decoding. Meanwhile, we further tackle the effects of unstable image transfer quality on domain adaptation, and an image adaptation approach is proposed to evaluate the quality of transferred images and re-weight the corresponding losses, so as to improve the performance of the adapted depth model. Extensive experiments show the effectiveness of the proposed unsupervised framework in estimating the dense depth map from highly complex images.
Let $g(x)=\chi_B(x)$ be the indicator function of a bounded convex set in $\Bbb R^d$, $d\geq 2$, with a smooth boundary and everywhere non-vanishing Gaussian curvature. Using a combinatorial approach we prove that if $d \not\equiv 1 \bmod 4$, then there does not exist $S \subset {\Bbb R}^{2d}$ such that ${ \{g(x-a)e^{2 \pi i x \cdot b} \}}_{(a,b) \in S}$ is an orthonormal basis for $L^2({\Bbb R}^d)$.
We prove that the ideal boundary of a 7-systolic group is strongly hereditarily aspherical. For a certain class of 7-systolic groups we show that their boundaries are connected and have no local cut points, thus obtaining some results concerning splittings of those groups.
In this paper, we investigate induced subgraphs of percolated random geometric graphs and obtain asymptotic results for the expected numbers of such subgraphs. Moreover, we obtain a Poisson approximation for these counts via Stein's method. We also present similar results for the expected Betti numbers of the associated percolated random geometric complexes.
Ribbon tangles are proper embeddings of tori and cylinders in the $4$-ball~$B^4$, "bounding" $3$-manifolds with only ribbon disks as singularities. We construct an Alexander invariant $\mathsf{A}$ of ribbon tangles equipped with a representation of the fundamental group of their exterior in a free abelian group $G$. This invariant induces a functor in a certain category $\mathsf{R}ib_G$ of tangles, which restricts to the exterior powers of the Burau-Gassner representation for ribbon braids, which are analogous to usual braids in this context. We define a circuit algebra $\mathsf{C}ob_G$ over the operad of smooth cobordisms, inspired by diagrammatic planar algebras introduced by Jones, and prove that the invariant $\mathsf{A}$ commutes with the compositions in this algebra. On the other hand, ribbon tangles admit diagrammatic representations, through welded diagrams. We give a simple combinatorial description of $\mathsf{A}$ and of the algebra $\mathsf{C}ob_G$, and observe that our construction is a topological incarnation of the Alexander invariant of Archibald. When restricted to diagrams without virtual crossings, $\mathsf{A}$ provides a purely local description of the usual Alexander polynomial of links, and extends the construction by Bigelow, Cattabriga and the second author.
Smooth backfitting was first introduced in an additive regression setting via a direct projection alternative to the classic backfitting method by Buja, Hastie and Tibshirani. This paper translates the original smooth backfitting concept to a survival model considering an additively structured hazard. The model allows for censoring and truncation patterns occurring in many applications such as medical studies or actuarial reserving. Our estimators are shown to be a projection of the data into the space of multivariate hazard functions with smooth additive components. Hence, our hazard estimator is the closest nonparametric additive fit even if the actual hazard rate is not additive. This is different from other additive structure estimators, where it is not clear what is being estimated if the model is not true. We provide a full asymptotic theory for our estimators. We also provide an implementation of the proposed estimators that shows good performance in practice, even for high-dimensional covariates.
As recently pointed out by Li and Xu, the definition of K-stability, and the author's proof of K-stability for cscK manifolds without holomorphic vector fields, need to be altered slightly: the Donaldson-Futaki invariant is positive for all test configurations which are not trivial in codimension 2.
Recently published studies suggested that the difference between Universal Time and Coordinated Universal Time, UT1-UTC, could reach a large positive value in a few years, making it necessary to introduce a negative leap second into the UTC scale for the first time in its history. Based on the latest UT1 series provided by the International Earth Rotation and Reference Systems Service (IERS) and its prediction, it is shown that the tendency toward acceleration of the Earth's rotation observed over the past four years will most likely give way to deceleration, which has been the usual behavior of the Earth's rotational dynamics during the past decades.
Spatiotemporal optical coherence (STOC) imaging is a new technique for suppressing coherent crosstalk noise in Fourier-domain full-field optical coherence tomography (FD-FF-OCT). In STOC imaging, time-varying inhomogeneous phase masks modulate the incident light to alter the interferometric signal. The resulting interference images are then processed as in standard FD-FF-OCT and averaged incoherently or coherently to produce crosstalk-free volumetric OCT images of the sample. Here, we show that coherent averaging is suitable when phase modulation is performed for both interferometer arms simultaneously. We explain the advantages of coherent over incoherent averaging. Specifically, we show that the modulated signal, after coherent averaging, preserves lateral phase stability. This enables computational phase correction to compensate for geometrical aberrations. Ultimately, we employ it to correct for aberrations present in the image of the photoreceptor layer of the human retina, revealing the otherwise invisible photoreceptor mosaic.
Human language is among the most complex outcomes of evolution. The emergence of such an elaborate form of communication allowed humans to create extremely structured societies and manage symbols at different levels including, among others, semantics. All linguistic levels have to deal with an astronomical combinatorial potential that stems from the recursive nature of languages. This recursiveness is indeed a key defining trait. However, not all words are combined equally often, nor are they equally frequent. In breaking the symmetry between less and more often used units, and between less and more meaning-bearing units, universal scaling laws arise. Such laws, common to all human languages, appear at different levels, from word inventories to networks of interacting words. Among these seemingly universal traits exhibited by language networks, ambiguity appears to be an especially relevant component. Ambiguity is avoided in most computational approaches to language processing, and yet it seems to be a crucial element of language architecture. Here we review the evidence both from language network architecture and from theoretical reasoning based on a least-effort argument. Ambiguity is shown to play an essential role in providing a source of language efficiency, and is likely to be an inevitable byproduct of network growth.
With the imminent start of LHC experiments, development of phenomenological tools, and in particular the Monte Carlo programs and algorithms, becomes urgent. A new algorithm for the generation of a parton shower initiated by the single initial hadron beam is presented. The new algorithm is of the class of the so called ``constrained MC'' type algorithm (an alternative to the backward evolution MC algorithm), in which the energy and the type of the parton at the end of the parton shower are constrained (predefined). The complete kinematics configurations with explicitly constructed four momenta are generated and tested. Evolution time is identical with rapidity and minimum transverse momentum is used as an infrared cut-off. All terms of the leading-logarithmic approximation in the DGLAP evolution are properly accounted for. In addition, the essential improvements towards the so-called CCFM/BFKL models are also properly implemented. The resulting parton distributions are cross-checked up to the 0.1% precision level with the help of a multitude of comparisons with other MC and non-MC programs. We regard these tests as an important asset to be exploited at the time when the presented MC will enter as a building block in a larger MC program for W/Z production process at LHC.
The Orchive is a large collection of over 20,000 hours of audio recordings from the OrcaLab research facility located off the northern tip of Vancouver Island. It contains recorded orca vocalizations from 1980 to the present and is one of the largest resources of bioacoustic data in the world. We have developed a web-based interface that allows researchers to listen to these recordings, view waveform and spectral representations of the audio, label clips with annotations, and view the results of machine learning classifiers based on automatic audio feature extraction. In this paper we describe such classifiers that discriminate between background noise, orca calls, and the voice notes that are present in most of the tapes. Furthermore we show classification results for individual calls based on a previously existing orca call catalog. We have also experimentally investigated the scalability of classifiers over the entire Orchive.
Machine reading comprehension has made great progress in recent years owing to large-scale annotated datasets. In the clinical domain, however, creating such datasets is quite difficult due to the domain expertise required for annotation. Recently, Pampari et al. (EMNLP'18) tackled this issue by using expert-annotated question templates and existing i2b2 annotations to create emrQA, the first large-scale dataset for question answering (QA) based on clinical notes. In this paper, we provide an in-depth analysis of this dataset and the clinical reading comprehension (CliniRC) task. From our qualitative analysis, we find that (i) emrQA answers are often incomplete, and (ii) emrQA questions are often answerable without using domain knowledge. From our quantitative experiments, surprising results include that (iii) using a small sampled subset (5%-20%), we can obtain roughly equal performance compared to the model trained on the entire dataset, (iv) this performance is close to human expert's performance, and (v) BERT models do not beat the best performing base model. Following our analysis of the emrQA, we further explore two desired aspects of CliniRC systems: the ability to utilize clinical domain knowledge and to generalize to unseen questions and contexts. We argue that both should be considered when creating future datasets.
The data from the last seven experiments performed on polarized deep-inelastic scattering on proton and neutron (or deuteron) targets have been analyzed in search of a precise determination of the spin fraction carried by the quarks in the nucleon. We find that this fraction can be of the size expected from na\"{\i}ve quark model arguments, provided the gluon axial anomaly is explicitly included and the isosinglet axial charge normalization is fixed at a suitably low momentum scale, such that a) the running strong coupling constant is about unity, and b) the orbital angular momentum inside the nucleon vanishes. We also find that, despite the appeal of this solution of the ``nucleon spin crisis'', a solution where the axial anomaly is absent and its effects are traded for an appreciable strange quark polarization cannot, however, be excluded --- because of the limited accuracy of the data --- unless the latter and/or the gluon polarization in the nucleon are explicitly measured.
We consider the class of inclusive hadron collider processes in which one or more energetic jets are produced, possibly accompanied by colourless particles. We provide a general formulation of a slicing scheme for this class of processes, by identifying the various contributions that need to be computed up to next-to-leading order (NLO) in QCD perturbation theory. We focus on two novel observables, the one-jet resolution variable $\Delta E_t$ and the $n$-jet resolution variable $k_{T}^{\mathrm{ness}}$, and explicitly compute all the ingredients needed to carry out NLO computations using these variables. We contrast the behaviour of these variables when the slicing parameter becomes small. In the case of $k_{T}^{\mathrm{ness}}$ we also present results for the hadroproduction of multiple jets.
Let $(M,g)$ be a compact Riemannian manifold of dimension $n \geq 3$. We define the second Yamabe invariant as the infimum of the second eigenvalue of the Yamabe operator over the metrics conformal to $g$ and of volume 1. We study when it is attained. As an application, we find nodal solutions of the Yamabe equation.
We characterize the group property of having infinite conjugacy classes (icc, meaning that all conjugacy classes other than that of the identity are infinite) for extensions of some specific groups; namely, extensions of abelian, centerless, icc, or word-hyperbolic groups.
We compare, for an $n$-dimensional Euclidean lattice $\Lb$, the smallest possible values of the product of the norms of $n$~vectors which either constitute a basis for $\Lb$ (Hermite-type inequalities) or are merely assumed to be independent (Minkowski-type inequalities). We improve on 1953 results of van der Waerden in dimensions $6$ to $8$ and prove a partial result in dimension~$9$.
Recent observations indicate that core-like dark matter structures exist in many galaxies, while numerical simulations reveal a singular dark matter density profile at the center. In this article, I show that if the annihilation of dark matter particles gives invisible sterile neutrinos, the Sommerfeld enhancement of the annihilation cross-section can give a sufficiently large annihilation rate to solve the core-cusp problem. The resultant core density, core radius, and their scaling relation generally agree with recent empirical fits from observations. Also, this model predicts that the resultant core-like structures in dwarf galaxies can be easily observed, but not for large normal galaxies and galaxy clusters.
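For reference, the enhancement invoked above takes, in the standard attractive Coulomb-like case (our notation, and conventions for the velocity vary; the paper's specific particle-physics model may differ), the form
\[
S(v) \simeq \frac{\pi\alpha/v}{1-e^{-\pi\alpha/v}} \;\longrightarrow\; \frac{\pi\alpha}{v} \quad (v\ll\alpha),
\]
so $\langle\sigma v\rangle$ grows as the velocity dispersion drops, which is why the enhanced annihilation is most effective in cold dwarf-galaxy halos and comparatively weak in normal galaxies and galaxy clusters.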
The structure of a stochastic nonlinear gene regulatory network is uncovered by studying its response to input signal generators. Four applications are studied in detail: a nonlinear connection of two linear systems, the design of a logic pulse, a molecular amplifier and the interference of three signal generators in E2F1 regulatory element. The gene interactions are presented using molecular diagrams that have a precise mathematical structure and retain the biological meaning of the processes.
Recent numerical simulations find a possible chiral spin liquid state in the intermediate coupling regime of the triangular lattice Hubbard model. Here we provide a simple picture for its origin in terms of a Bosonic RVB description. More specifically, we show that such a chiral spin liquid state can be understood as a quantum disordered tetrahedral spin state stabilized by the four spin ring exchange coupling, which suppresses the order-by-disorder effect toward a stripy spin state. Such a chiral spin liquid state features a spin Berry phase of $\frac{\pi}{2}$ per triangle. However, we show that the topological property of such a state is totally missed in the Schwinger Boson mean field description as a result of the lack of Boson rigidity caused by the no double occupancy constraint.
This paper has been withdrawn by the authors because of an error in the proof. We can, however, prove a weaker spatial fall-off that is still superlinear, namely exp[-x log x].
We compute the beta-function and the anomalous dimension of all the non-derivative operators of the theory up to three loops for the most general nearest-neighbour O(N)-invariant action, together with some contributions to the four-loop beta-function. These results are used to compute the first analytic corrections to various long-distance quantities such as the correlation length and the general spin-$n$ susceptibility. It is found that these corrections are extremely large for $RP^{N-1}$ models (especially for small values of N), so that asymptotic scaling can be observed in these models only at very large values of beta. We also give the first three terms in the asymptotic expansion of the vector and tensor energies.
The main objective of this paper is to study the number of limit cycles in a family of polynomial systems. Using bifurcation methods, we obtain the maximal number of limit cycles in global bifurcation.
We report on the fabrication and transport characterization of atomically-precise single molecule devices consisting of a magnetic porphyrin covalently wired by graphene nanoribbon electrodes. The tip of a scanning tunneling microscope was utilized to contact the end of a GNR-porphyrin-GNR hybrid system and create a molecular bridge between tip and sample for transport measurements. Electrons tunneling through the suspended molecular heterostructure excited the spin multiplet of the magnetic porphyrin. The detachment of certain spin-centers from the surface shifted their spin-carrying orbitals away from an on-surface mixed-valence configuration, recovering its original spin state. The existence of spin-polarized resonances in the free-standing systems and their electrical addressability is the fundamental step for utilization of carbon-based materials as functional molecular spintronics systems.
We report a neutron scattering study of the long-wavelength dynamic spin correlations in the model two-dimensional $S=1/2$ square lattice Heisenberg antiferromagnets Sr$_2$CuO$_2$Cl$_2$ and Sr$_2$Cu$_3$O$_4$Cl$_2$. The characteristic energy scale, $\omega_0 (T/J)$, is determined by measuring the quasielastic peak width in the paramagnetic phase over a wide range of temperature ($0.2 \alt T/J \alt 0.7$). The obtained values for $\omega_0 (T/J)$ agree {\it quantitatively} between the two compounds and also with values deduced from quantum Monte Carlo simulations. The combined data show scaling behavior, $\omega \sim \xi^{-z}$, over the entire temperature range with $z=1.0(1)$, in agreement with dynamic scaling theory.
Positrons are accumulated within a Penning trap designed to make more precise measurements of the positron and electron magnetic moments. The retractable radioactive source used is weak enough to require no license for handling radioactive material and the radiation dosage one meter from the source gives an exposure several times smaller than the average radiation dose on the earth's surface. The 100 mK trap is mechanically aligned with the 4.2 K superconducting solenoid that produces a 6 tesla magnetic trapping field with a direct mechanical coupling.
The main purpose of the present paper is a study of orientations of the moduli spaces of pseudo-holomorphic discs with boundary lying on a \emph{real} Lagrangian submanifold, i.e., the fixed point set of an anti-symplectic involution $\tau$ on a symplectic manifold. We introduce the notion of a $\tau$-relatively spin structure for an anti-symplectic involution $\tau$, and study how the orientations on the moduli space behave under the involution $\tau$. We also apply this to the study of Lagrangian Floer theory of real Lagrangian submanifolds. In particular, we study the unobstructedness of the $\tau$-fixed point set of symplectic manifolds and prove its unobstructedness in the case of Calabi-Yau manifolds. We also carry out an explicit calculation of the Floer cohomology of $\R P^{2n+1}$ over $\Lambda_{0,nov}^{\Z}$, which provides an example whose Floer cohomology is not isomorphic to its classical cohomology. Finally, we study Floer cohomology of the diagonal of the square of a symplectic manifold, which leads to a rigorous construction of the quantum Massey product of a symplectic manifold in complete generality.
Primary crystallization in high Al-content metallic glasses is driven by nanometer-diameter regions with internal structure similar to fcc Al. Comparison of fluctuation electron microscopy (FEM) data to FEM simulations of fcc Al clusters dispersed in a dense-random packed matrix is used to extract the diameter and volume fraction of the ordered regions in a Al88Y7Fe5 base glass and in glasses with 1 at.% Cu substituted for Y or Al. The size and density of nanocrystals were measured as a function of isothermal annealing time for the same alloys. The volume fraction of crystalline material grows under isothermal annealing, so the phase transformation is not purely grain coarsening, but the crystalline volume fraction is lower than the volume fraction of ordered regions in the as-quenched samples, so not all of the ordered regions act as nuclei. Changes in diameter and volume fraction of the ordered regions with alloying are correlated with changes in the crystallization temperature, nucleation rate, and nanocrystal density. No evidence for phase separation is observed, and FEM simulations from a molecular dynamics quenched structural model of similar composition do not show the features observed in experiment.
The fundamental parameters of reddening, metallicity, age, and distance are presented for the poorly studied open clusters Be~89, Ru~135, and Be~10, derived from their CCD UBVRI photometry. By fitting the appropriate isochrones to the observed sequences of the clusters in five different color--magnitude diagrams, the weighted averages of distance moduli and heliocentric distances ($(V_0$--$M_{V}), d$(kpc)) are $(11\fm90\pm 0\fm06, 2.4\pm 0.06$) for Be~89, $(9\fm58\pm 0\fm07, 0.81\pm 0.03$) for Ru~135, and $(11\fm16\pm 0\fm06, 1.7 \pm 0.05$) for Be~10, and the weighted averages of the ages $(\log(A), A$(Gyr)) are $(9.58\pm 0.06, 3.8\pm 0.6)$ for Be~89, $(9.58\pm 0.06, 3.8\pm 0.7)$ for Ru~135, and $(9.06\pm 0.05, 1.08\pm 0.08)$ for Be~10.
MaxSAT is an optimization version of the famous NP-complete Satisfiability problem (SAT). Algorithms for MaxSAT mainly include complete solvers and local search incomplete solvers. In many complete solvers, once a better solution is found, a Soft conflict Pseudo Boolean (SPB) constraint will be generated to enforce the algorithm to find better solutions. In many local search algorithms, clause weighting is a key technique for effectively guiding the search directions. In this paper, we propose to transfer the SPB constraint into the clause weighting system of the local search method, leading the algorithm to better solutions. We further propose an adaptive clause weighting strategy that breaks the tradition of using constant values to adjust clause weights. Based on the above methods, we propose a new local search algorithm called SPB-MaxSAT that provides new perspectives for clause weighting on MaxSAT local search solvers. Extensive experiments demonstrate the excellent performance of the proposed methods.
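To make the clause-weighting mechanism concrete, here is a minimal, generic weighted local-search step (our own simplified sketch; it is not the SPB-MaxSAT algorithm and omits both the SPB-constraint transfer and the adaptive weighting rule):

import random

def weighted_local_search_step(clauses, weights, assignment):
    """One flip of a generic weighted MaxSAT local search.
    clauses    : list of clauses, each a list of non-zero ints (DIMACS-style literals)
    weights    : mutable list of clause weights (initialized to 1), one per clause
    assignment : dict var -> bool, modified in place
    """
    def satisfied(clause):
        return any((lit > 0) == assignment[abs(lit)] for lit in clause)

    falsified = [i for i, c in enumerate(clauses) if not satisfied(c)]
    if not falsified:
        return True  # all clauses satisfied

    # Pick a falsified clause (weight-biased) and flip the variable whose flip
    # maximizes the total weight of satisfied clauses.
    i = random.choices(falsified, weights=[weights[j] for j in falsified])[0]
    def gain(var):
        assignment[var] = not assignment[var]
        g = sum(weights[j] for j, c in enumerate(clauses) if satisfied(c))
        assignment[var] = not assignment[var]
        return g
    best = max((abs(lit) for lit in clauses[i]), key=gain)
    assignment[best] = not assignment[best]

    # Clause weighting: bump weights of clauses that remain falsified, steering
    # the search toward them in later steps.
    for j, c in enumerate(clauses):
        if not satisfied(c):
            weights[j] += 1
    return False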
We consider the possibility of inflationary magnetogenesis due to dynamical couplings of the electromagnetic fields to gravity. We find that large primordial magnetic fields can be generated during inflation without the strong coupling problem, backreaction problem, or curvature perturbation problem, which seed large-scale magnetic fields with observationally interesting strengths.
Masked Autoencoders (MAE) have demonstrated promising performance in self-supervised learning for both 2D and 3D computer vision. Nevertheless, existing MAE-based methods still have certain drawbacks. Firstly, the functional decoupling between the encoder and decoder is incomplete, which limits the encoder's representation learning ability. Secondly, downstream tasks solely utilize the encoder, failing to fully leverage the knowledge acquired through the encoder-decoder architecture in the pretext task. In this paper, we propose the Point Regress AutoEncoder (Point-RAE), a new scheme for regressive autoencoders for point cloud self-supervised learning. The proposed method decouples the functions of the decoder and the encoder by introducing a mask regressor, which predicts the masked patch representations from the visible patch representations encoded by the encoder, while the decoder reconstructs the target from the predicted masked patch representations. By doing so, we minimize the impact of decoder updates on the representation space of the encoder. Moreover, we introduce an alignment constraint to ensure that the representations for masked patches, predicted from the encoded representations of visible patches, are aligned with the masked patch representations computed by the encoder. To make full use of the knowledge learned in the pre-training stage, we design a new fine-tuning mode for the proposed Point-RAE. Extensive experiments demonstrate that our approach is efficient during pre-training and generalizes well on various downstream tasks. Specifically, our pre-trained models achieve a high accuracy of \textbf{90.28\%} on the ScanObjectNN hardest split and \textbf{94.1\%} accuracy on ModelNet40, surpassing all other self-supervised learning methods. Our code and pretrained model are publicly available at: \url{https://github.com/liuyyy111/Point-RAE}.
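A schematic of the regressive-autoencoder idea described above, as a simplified PyTorch sketch; module choices, sizes, and names are our own placeholders rather than the released implementation:

import torch
import torch.nn as nn

class TinyPointRAE(nn.Module):
    """Minimal regressive autoencoder over patch tokens (positional embeddings, the
    point-patch embedding and the real decoder head are omitted for brevity)."""
    def __init__(self, dim=256):
        super().__init__()
        make_layer = lambda: nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(make_layer(), num_layers=4)    # sees visible tokens only
        self.regressor = nn.TransformerEncoder(make_layer(), num_layers=2)  # predicts masked representations
        self.decoder = nn.Linear(dim, dim)                                  # stand-in reconstruction head
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, visible_tokens, n_masked):
        b = visible_tokens.size(0)
        enc_vis = self.encoder(visible_tokens)                    # (B, V, D)
        queries = self.mask_token.expand(b, n_masked, -1)         # (B, M, D)
        full = torch.cat([enc_vis, queries], dim=1)
        pred_masked = self.regressor(full)[:, -n_masked:]         # predicted masked representations
        recon = self.decoder(pred_masked)                         # reconstruction uses predictions only,
        return enc_vis, pred_masked, recon                        # so decoder gradients bypass the encoder

# Alignment constraint (schematic): pull the predicted masked representations toward the
# encoder's own embedding of the masked patches, with detached targets (stop-gradient):
#   align_loss = torch.nn.functional.mse_loss(pred_masked, enc_masked_targets.detach())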
We investigated the effect of inverse Compton scattering in mildly relativistic static and moving plasmas with low optical depth using Monte Carlo simulations, and calculated the Sunyaev-Zel'dovich effect in the cosmic background radiation. Our semi-analytic method is based on a separation of photon diffusion in frequency and real space. We use Monte Carlo simulation to derive the intensity and frequency of the scattered photons for a monochromatic incoming radiation. The outgoing spectrum is determined by integrating over the spectrum of the incoming radiation using the intensity to determine the correct weight. This method makes it possible to study the emerging radiation as a function of frequency and direction. As a first application we have studied the effects of finite optical depth and gas infall on the Sunyaev-Zel'dovich effect (not possible with the extended Kompaneets equation) and discuss the parameter range in which the Boltzmann equation and its expansions can be used. For high temperature clusters ($k_B T_e \gtrsim 15$ keV) relativistic corrections based on a fifth order expansion of the extended Kompaneets equation seriously underestimate the Sunyaev-Zel'dovich effect at high frequencies. The contribution from plasma infall is less important for reasonable velocities. We give a convenient analytical expression for the dependence of the cross-over frequency on temperature, optical depth, and gas infall speed. Optical depth effects are often more important than relativistic corrections, and should be taken into account for high-precision work, but are smaller than the typical kinematic effect from cluster radial velocities.
The introduction of DETR represents a new paradigm for object detection. However, its decoder conducts classification and box localization using shared queries and cross-attention layers, leading to suboptimal results. We observe that different regions of interest in the visual feature map are suitable for performing query classification and box localization tasks, even for the same object. Salient regions provide vital information for classification, while the boundaries around them are more favorable for box regression. Unfortunately, such spatial misalignment between these two tasks greatly hinders DETR's training. Therefore, in this work, we focus on decoupling the localization and classification tasks in DETR. To achieve this, we introduce a new design scheme called spatially decoupled DETR (SD-DETR), which includes a task-aware query generation module and a disentangled feature learning process. We elaborately design the task-aware query initialization process and divide the cross-attention block in the decoder to allow the task-aware queries to match different visual regions. Meanwhile, we also observe that a prediction misalignment problem between high classification confidence and precise localization exists, so we propose an alignment loss to further guide the spatially decoupled DETR training. Through extensive experiments, we demonstrate that our approach achieves a significant improvement on the MS COCO dataset compared to previous work. For instance, we improve the performance of Conditional DETR by 4.5 AP. By spatially disentangling the two tasks, our method overcomes the misalignment problem and greatly improves the performance of DETR for object detection.
The experiments on detection of daemons captured into geocentric orbits, which are based on the postulated fast decay of daemon-containing nuclei, have been continued. By properly varying the experimental parameters, it has become possible to reveal and formulate some relations governing the interaction of daemons with matter. Among them are, for instance, the emission of energetic Auger-type electrons in the capture of an atomic nucleus, the possibility of charge exchange involving the capture of a heavier nucleus, etc. The decay time of a daemon-containing proton has been measured to be $\Delta\tau \approx 2\,\mu$s. The daemon flux at the Earth's surface is $f \sim 10^{-7}$ cm$^{-2}$ s$^{-1}$. One should point out, on the one hand, the reproducibility of the main results, and on the other, the desirability of building up larger statistics and employing more sophisticated experimental methods to reveal finer details in the daemon interaction with matter.
We present the results of the simultaneous deep XMM and Chandra observations of the bright Seyfert 1.9 galaxy MCG-5-23-16, which is thought to have one of the best known examples of a relativistically broadened iron K-alpha line. The time averaged spectral analysis shows that the iron K-shell complex is best modeled with an unresolved narrow emission component (FWHM < 5000 km/s, EW ~ 60 eV) plus a broad component. This latter component has FWHM ~ 44000 km/s and EW ~ 50 eV. Its profile is well described by an emission line originating from an accretion disk viewed with an inclination angle ~ 40^\circ and with the emission arising from within a few tens of gravitational radii of the central black hole. The time-resolved spectral analysis of the XMM EPIC-pn spectrum shows that both the narrow and broad components of the Fe K emission line appear to be constant in time within the errors. We detected a narrow sporadic absorption line at 7.7 keV which appears to be variable on a time-scale of 20 ksec. If associated with Fe XXVI Ly-alpha this absorption is indicative of a possibly variable, high ionization, high velocity outflow. The variability of this absorption feature appears to rule out a local (z=0) origin. The analysis of the XMM RGS spectrum reveals that the soft X-ray emission of MCG-5-23-16 is likely dominated by several emission lines superimposed on an unabsorbed scattered power-law continuum. The lack of strong Fe L shell emission together with the detection of a strong forbidden line in the O VII triplet is consistent with a scenario where the soft X-ray emission lines are produced in a plasma photoionized by the nuclear emission.
When comparing quantum states to each other, it is possible to obtain an unambiguous answer, indicating that the states are definitely different, already after a single measurement. In this paper we investigate comparison of coherent states, which is the simplest example of quantum state comparison for continuous variables. The method we present has a high success probability, and is experimentally feasible to realize as the only required components are beam splitters and photon detectors. An easily realizable method for quantum state comparison could be important for real applications. As examples of such applications we present a "lock and key" scheme and a simple scheme for quantum public key distribution.
We report the discovery, from WASP and CORALIE, of a transiting exoplanet in a 1.54-d orbit. The host star, WASP-36, is a magnitude V = 12.7, metal-poor G2 dwarf (Teff = 5959 \pm 134 K), with [Fe/H] = -0.26 \pm 0.10. We determine the planet to have mass and radius respectively 2.30 \pm 0.07 and 1.28 \pm 0.03 times that of Jupiter. We have eight partial or complete transit light curves, from four different observatories, which allows us to investigate the potential effects on the fitted system parameters of using only a single light curve. We find that the solutions obtained by analysing each of these light curves independently are consistent with our global fit to all the data, despite the apparent presence of correlated noise in at least two of the light curves.
Consistent perturbation theory for thermodynamical quantities in strongly type II superconductors in magnetic field at low temperatures is developed. It is complementary to the existing expansion valid at high temperatures. Magnetization and specific heat are calculated to two loop order and compare well with existing Monte Carlo simulations, other theories and experiments.
We present an efficient end-to-end pipeline for large-scale landmark recognition and retrieval. We show how to combine and enhance concepts from recent research in image retrieval and introduce two architectures especially suited for large-scale landmark identification. We discuss a model with deep orthogonal fusion of local and global features (DOLG) using an EfficientNet backbone, as well as a novel Hybrid-Swin-Transformer, and provide details on how to train both architectures efficiently using a step-wise approach and a sub-center ArcFace loss with dynamic margins. Furthermore, we present a novel discriminative re-ranking methodology for image retrieval. The superiority of our approach was demonstrated by winning the recognition and retrieval tracks of the Google Landmark Competition 2021.
Neural Networks (NNs) have increasingly apparent safety implications commensurate with their proliferation in real-world applications: both unanticipated as well as adversarial misclassifications can result in fatal outcomes. As a consequence, techniques of formal verification have been recognized as crucial to the design and deployment of safe NNs. In this paper, we introduce a new approach to formally verify the most commonly considered safety specifications for ReLU NNs -- i.e. polytopic specifications on the input and output of the network. Like some other approaches, ours uses a relaxed convex program to mitigate the combinatorial complexity of the problem. However, unique in our approach is the way we use a convex solver not only as a linear feasibility checker, but also as a means of penalizing the amount of relaxation allowed in solutions. In particular, we encode each ReLU by means of the usual linear constraints, and combine this with a convex objective function that penalizes the discrepancy between the output of each neuron and its relaxation. This convex function is further structured to force the largest relaxations to appear closest to the input layer; this provides the further benefit that the most problematic neurons are conditioned as early as possible, when conditioning layer by layer. This paradigm can be leveraged to create a verification algorithm that is not only faster in general than competing approaches, but is also able to verify considerably more safety properties; we evaluated PEREGRiNN on a standard MNIST robustness verification suite to substantiate these claims.
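A minimal sketch of this relaxation-penalizing formulation is given below, written with cvxpy. It is not the authors' PEREGRiNN implementation: the triangle relaxation, the assumption that every hidden neuron is unstable (l < 0 < u), and the depth-dependent penalty weights are simplifying choices made here for illustration.

import cvxpy as cp
import numpy as np

def relaxed_verify(Ws, bs, l, u, in_lo, in_hi, A_out, c_out, depth_w):
    # Sound-but-incomplete check of a polytopic output property for a ReLU
    # network via an LP relaxation whose objective penalizes relaxation slack.
    # Ws, bs: layer weights/biases; l, u: pre-activation bounds per hidden
    # layer; (in_lo, in_hi): input box; A_out x <= c_out: the violation region.
    x = cp.Variable(len(in_lo))
    cons = [x >= in_lo, x <= in_hi]
    post, penalty = x, 0
    for k in range(len(Ws) - 1):                      # hidden layers
        pre = Ws[k] @ post + bs[k]
        y = cp.Variable(Ws[k].shape[0])
        slope = u[k] / np.maximum(u[k] - l[k], 1e-9)  # triangle upper face
        cons += [y >= 0, y >= pre, y <= cp.multiply(slope, pre - l[k])]
        # Since y >= max(pre, 0) in the relaxation, minimizing a weighted sum
        # of y penalizes the gap between y and the exact ReLU; larger weights
        # on deeper layers push the remaining slack toward the input side.
        penalty += depth_w[k] * cp.sum(y)
        post = y
    out = Ws[-1] @ post + bs[-1]                      # linear output layer
    cons += [A_out @ out <= c_out]                    # can the spec be violated?
    prob = cp.Problem(cp.Minimize(penalty), cons)
    prob.solve()
    return prob.status == cp.INFEASIBLE               # infeasible => property holds

A return value of True means the relaxation already rules out any counterexample; a feasible relaxation is inconclusive and, in the full algorithm, would trigger refinement guided by the penalized slacks.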
An experimental approach based on design of experiments, process maps, and the analysis of the first stages of deposition, aimed at improving the biocompatibility of High-Velocity Oxygen Fuel (HVOF) hydroxyapatite (HAp) coatings, is presented here. A two-level design of three factors (2^3) was performed using the stand-off distance (SOD), the fuel-oxygen ratio (F/O) and the powder feed rate (PFR). The effect of these experimental factors on the first stages of coating formation was investigated to study the physical state of the particles before and after impacting the substrate. This study allowed the selection of the most suitable deposition parameter combinations to obtain HAp coatings with optimal crystallinity (> 45%), Ca/P ratio (approx. 1.67), and phase content (> 95% HAp), which guarantee the coatings' mechanical stability and biocompatibility. The behavior of the coating within simulated body fluid (SBF) and cell culture (hFOB) was studied to analyze the apatite layer formation and the extracts' cytotoxicity on human osteoblasts, respectively. The results show that the F/O ratio is the most influential factor on the temperature and velocity of the in-flight particles and, therefore, on the coating properties. The SBF results confirmed the formation of an apatite layer after 14 days of immersion. Finally, mitochondrial activity, measured by the MTS assay, and cell membrane integrity, measured by LDH liberation assays, show that the material released by the coating does not induce toxicity in the exposed cells.
We consider smooth random dynamical systems defined by a distribution with a finite moment of the norm of the differential, and prove that under suitable non-degeneracy conditions any stationary measure must be H\"older continuous. The result is a vast generalization of the classical statement on H\"older continuity of stationary measures of random walks on linear groups.
We construct a kinematical analogue of superluminal travel in the ``warped'' space-times curved by gravitation, in the form of ``super-phononic'' travel in the effective space-times of perfect nonrelativistic fluids. These warp-field space-times are most easily generated by considering a solid object that is placed as an obstruction in an otherwise uniform flow. No violation of any condition on the positivity of energy is necessary, because the effective curved space-times for the phonons are ruled by the Euler and continuity equations, and not by the Einstein field equations.
Metallic optical systems can confine light to deep sub-wavelength dimensions, but verifying the level of confinement at these length scales typically requires specialized techniques and equipment for probing the near-field of the structure. We experimentally measured the confinement of a metal-based optical cavity by using the cavity modes themselves as a sensitive probe of the cavity characteristics. By perturbing the cavity modes with conformal dielectric layers of sub-nm thickness using atomic layer deposition, we find the exponential decay length of the modes to be less than 5% of the free-space wavelength (\lambda) and the mode volume to be of order \lambda^3/1000. These results provide experimental confirmation of the deep sub-wavelength confinement capabilities of metal-based optical cavities.
We select 37 of the most common and realistic dense-matter equations of state to integrate the general relativistic stellar structure equations for static, spherically symmetric matter configurations. For all these models, we check compliance with the acceptability conditions that every stellar model should satisfy. We find that some of the non-relativistic equations of state violate causality and/or the dominant energy condition, and that adiabatic instabilities appear in the inner crust for all equations of state considered.
Combining the semiclassical/Nekrasov-Shatashvili limit of the AGT conjecture and the Bethe/gauge correspondence results in a triple correspondence which identifies classical conformal blocks with twisted superpotentials and then with Yang-Yang functions. In this paper the triple correspondence is studied in the simplest, yet not completely understood, case of pure SU(2) super-Yang-Mills gauge theory. A missing element of that correspondence is identified with the classical irregular block. Explicit tests provide convincing evidence that such a function exists. In particular, it has been shown that the classical irregular block can be recovered from classical blocks on the torus and sphere in suitably defined decoupling limits of the classical external conformal weights. These limits are "classical analogues" of known decoupling limits for the corresponding quantum blocks. An exact correspondence between the classical irregular block and the SU(2) gauge theory twisted superpotential has been obtained as the result of another consistency check. The latter determines the spectrum of the 2-particle periodic Toda (sin-Gordon) Hamiltonian in accord with the Bethe/gauge correspondence. An analogue of this statement is found entirely within 2d CFT. Namely, by considering the classical limit of the null-vector decoupling equation for the degenerate irregular block, the celebrated Mathieu equation is obtained, with an eigenvalue determined by the classical irregular block. We have checked that this result reproduces the well-known weak-coupling expansion of the Mathieu eigenvalue. Finally, yet another new formula for the Mathieu eigenvalue, relating the latter to a solution of a certain Bethe-like equation, is found.
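For orientation, the Mathieu equation referred to above can be written in its standard form (the precise identification of the parameters with gauge-theory quantities depends on conventions and is fixed in the paper itself):
\[
\frac{d^{2}\psi}{dx^{2}} + \left(\lambda - 2h^{2}\cos 2x\right)\psi = 0 ,
\]
with the eigenvalue $\lambda$ determined, in the correspondence described above, by the classical irregular block.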
We consider a string wrapped many times around a compact circle in space, and let this string carry a right moving wave which imparts momentum and angular momentum to the string. The angular momentum causes the strands of the `multiwound' string to separate and cover the surface of a torus. We compute the supergravity solution for this string configuration. We map this solution by dualities to the D1-D5 system with angular momentum that has been recently studied. We discuss how constructing this multiwound string solution may help us to relate the microscopic and macroscopic pictures of black hole absorption.
An approach for relating the nucleon excited states extracted from lattice QCD and the nucleon resonances of experimental data has been developed using the Hamiltonian effective field theory (HEFT) method. By formulating HEFT in the finite volume of the lattice, the eigenstates of the Hamiltonian model can be related to the energy eigenstates observed in Lattice simulations. By taking the infinite-volume limit of HEFT, information from the lattice is linked to experiment. The approach opens a new window for the study of experimentally-observed resonances from the first principles of lattice QCD calculations. With the Hamiltonian approach, one not only describes the spectra of lattice-QCD eigenstates through the eigenvalues of the finite-volume Hamiltonian matrix, but one also learns the composition of the lattice-QCD eigenstates via the eigenvectors of the Hamiltonian matrix. One learns the composition of the states in terms of the meson-baryon basis states considered in formulating the effective field theory. One also learns the composition of the resonances observed in Nature. In this paper, we will focus on recent breakthroughs in our understanding of the structure of the $N^*(1535)$, $N^*(1440)$ and $\Lambda^*(1405)$ resonances using this method.
We show that there exists a unique crystal base of a parabolic Verma module over a quantum orthosymplectic superalgebra, which is induced from a $q$-analogue of a polynomial representation of a general linear Lie superalgebra.
Disfluencies in spontaneous speech are known to be associated with prosodic disruptions. However, most algorithms for disfluency detection use only word transcripts. Integrating prosodic cues has proved difficult because of the many sources of variability affecting the acoustic correlates. This paper introduces a new approach to extracting acoustic-prosodic cues using text-based distributional prediction of acoustic cues to derive vector z-score features (innovations). We explore both early and late fusion techniques for integrating text and prosody, showing gains over a high-accuracy text-only model.
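A minimal sketch of the innovation features described above is given below, assuming a hypothetical text-based model that predicts, for each word, the mean and standard deviation of every acoustic-prosodic cue; the array names and example numbers are purely illustrative.

import numpy as np

def prosodic_innovations(observed, predicted_mean, predicted_std, eps=1e-6):
    # Z-score ("innovation") features: how far each observed acoustic cue
    # deviates from what a text-only model predicts for that word.
    # All inputs have shape (num_words, num_cues).
    return (observed - predicted_mean) / (predicted_std + eps)

# Hypothetical per-word cues: duration (s) and mean F0 (Hz).
obs = np.array([[0.42, 180.0], [0.95, 175.0]])
mu  = np.array([[0.40, 182.0], [0.45, 176.0]])
sd  = np.array([[0.08,  15.0], [0.10,  14.0]])
print(prosodic_innovations(obs, mu, sd))
# The second word's duration is far above the text-based prediction, giving a
# large positive innovation -- a candidate disfluency cue that can then be
# fused (early or late) with the text-only model.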
We study the orbital evolution of wide binary stars in the solar neighborhood due to gravitational perturbations from passing stars. We include the effects of the Galactic tidal field and continue to follow the stars after they become unbound. For a wide variety of initial semi-major axes and formation times, we find that the number density (stars per unit logarithmic interval in projected separation) exhibits a minimum at a few times the Jacobi radius r_J, which equals 1.7 pc for a binary of solar-mass stars. The density peak interior to this minimum arises from the primordial distribution of bound binaries, and the exterior density, which peaks at \sim 100--300 pc separation, arises from formerly bound binaries that are slowly drifting apart. The exterior peak gives rise to a significant long-range correlation in the positions and velocities of disk stars that should be detectable in large astrometric surveys such as GAIA that can measure accurate three-dimensional distances and velocities.
In this paper we consider the natural random walk on a planar graph and scale it by a small positive number $\delta$. Given a simply connected domain $D$ and its two boundary points $a$ and $b$, we start the scaled walk at a vertex of the graph nearby $a$ and condition it on its exiting $D$ through a vertex nearby $b$, and prove that the loop erasure of the conditioned walk converges, as $\delta \downarrow 0$, to the chordal SLE$_{2}$ that connects $a$ and $b$ in $D$, provided that an invariance principle is valid for both the random walk and the dual walk of it.
We generalize a new approach to entanglement conditions for light of undefined photon numbers, given in [Phys. Rev. A {\bf 95}, 042113 (2017)] for polarization correlations, to a broader family of interferometric phenomena. Integrated optics allows one to perform experiments based upon multiport beamsplitters. To observe entanglement effects one can use multi-mode parametric down-conversion emissions. When the structure of the Hamiltonian governing the emissions has (infinitely) many equivalent Schmidt decompositions into modes (beams), one can have perfect EPR-like correlations of the numbers of photons emitted into "conjugate modes", which can be monitored at spatially separated detection stations. We provide entanglement conditions for experiments involving three modes on each side and three-input-three-output multiport beamsplitters, and show their violation by bright squeezed vacuum states. We show that a condition expressed in terms of averages of observed rates is a much better entanglement indicator than a related one for the usual intensity variables. Thus the rates seem to emerge as a powerful concept in quantum optics, especially for fields of undefined intensities.
This paper introduces a new (dis)similarity measure for 2D arrays, extending the Average Common Submatrix measure. This is accomplished by: (i) considering the frequency of matching patterns, (ii) restricting the pattern matching to a fixed-size neighborhood, and (iii) computing a distance-based approximate matching. These modifications achieve better performance, with low execution time and a larger amount of retrieved information.
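The sketch below illustrates the three ingredients above under simplifying assumptions made here (square k x k patterns, a +/- w positional search window as the fixed-size neighborhood, and mean absolute difference as the approximate-matching distance); the exact definition and normalization in the paper may differ.

import numpy as np

def approx_common_submatrix_similarity(A, B, k=3, w=2, tol=0.0):
    # For every k x k block of A, look for an approximately matching block of B
    # (mean absolute difference <= tol) within a +/- w neighborhood of the same
    # position, and return the fraction of blocks that find a match.
    matched = total = 0
    for i in range(A.shape[0] - k + 1):
        for j in range(A.shape[1] - k + 1):
            total += 1
            block = A[i:i + k, j:j + k]
            found = False
            for di in range(-w, w + 1):
                for dj in range(-w, w + 1):
                    bi, bj = i + di, j + dj
                    if 0 <= bi <= B.shape[0] - k and 0 <= bj <= B.shape[1] - k:
                        if np.mean(np.abs(block - B[bi:bi + k, bj:bj + k])) <= tol:
                            found = True
                            break
                if found:
                    break
            matched += found
    return matched / max(total, 1)

rng = np.random.default_rng(0)
A = rng.integers(0, 4, size=(12, 12))
print(approx_common_submatrix_similarity(A, A))             # 1.0 for identical arrays
print(approx_common_submatrix_similarity(A, np.flipud(A)))  # lower for unrelated arrays

One possible dissimilarity is simply one minus this score.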
We introduce a linearised form of the square root of the Todd class inside the Verbitsky component of a hyper-K\"ahler manifold using the extended Mukai lattice. This enables us to define a Mukai vector for certain objects in the derived category taking values inside the extended Mukai lattice which is functorial for derived equivalences. As applications, we obtain a structure theorem for derived equivalences between hyper-K\"ahler manifolds as well as an integral lattice associated to the derived category of hyper-K\"ahler manifolds deformation equivalent to the Hilbert scheme of a K3 surface mimicking the surface case.
We consider the interplay between the antiferromagnetic and Kekul\'e valence bond solid orderings in the zero energy Landau levels of neutral monolayer and bilayer graphene. We establish the presence of Wess-Zumino-Witten terms between these orders: this implies that their quantum fluctuations are described by the deconfined critical theories of quantum spin systems. We present implications for experiments, including the possible presence of excitonic superfluidity in bilayer graphene.
We reexamine the constraints on universal extra dimensional models arising from the inclusive radiative Bbar -> X_s gamma decay. We take into account the leading order contributions due to the exchange of Kaluza-Klein modes as well as the available next-to-next-to-leading order corrections to the Bbar -> X_s gamma branching ratio in the standard model. For the case of one large flat universal extra dimension, we obtain a lower bound on the inverse compactification radius 1/R > 600 GeV at 95% confidence level that is independent of the Higgs mass.
In a previous letter, we computed the decay constants of heavy pseudoscalar mesons in the framework of the relativistic (instantaneous) Bethe-Salpeter method (full $0^-$ Salpeter equation). In this letter, we solve the full $1^-$ Salpeter equation and compute the leptonic decay constants of heavy-heavy and heavy-light vector mesons. Theoretical estimates of the mass spectra of these heavy-heavy and heavy-light vector mesons are also presented. Our results for the decay constants and mass spectra include the complete relativistic contributions. We find $F_{D^*_s} \approx 375 \pm 24 $, $F_{D^*} \approx 340 \pm 23 (D^{*0},D^{*\pm})$, $F_{B^*_s} \approx 272 \pm 20 $, $F_{B^*} \approx 238 \pm 18 (B^{*0},B^{*\pm})$, $F_{B^*_c} \approx 418 \pm 24 $, $F_{J/\Psi} \approx 459 \pm 28 $, $F_{\Psi(2S)} \approx 364 \pm 24 $, $F_{\Upsilon} \approx 498 \pm 20 $ and $F_{\Upsilon(2S)} \approx 366 \pm 27 $ MeV.
We discuss an integrable model describing one-dimensional electrons interacting with two-dimensional anharmonic phonons. In the low temperature limit it is possible to decouple phonons and consider one-dimensional excitations separately. They have a trivial two-body scattering matrix and obey fractional statistics. As far as we know the original model presents the first example of a model with local bare interactions generating purely statistical interactions between renormalized particles. As a by-product we obtain non-trivial thermodynamic equations for the interacting system of two-dimensional phonons.
Recent advancements in artificial intelligence have propelled the capabilities of Large Language Models (LLMs), yet their ability to mimic nuanced human reasoning remains limited. This paper introduces a novel conceptual enhancement to LLMs, termed the Artificial Neuron, designed to significantly bolster cognitive processing by integrating external memory systems. This enhancement mimics neurobiological processes, facilitating advanced reasoning and learning through a dynamic feedback-loop mechanism. We propose a framework wherein each LLM interaction, specifically in solving complex math word problems and common-sense reasoning tasks, is recorded and analyzed. Incorrect responses are refined using a higher-capacity LLM or human-in-the-loop corrections, and both the query and the enhanced response are stored in a vector database, structured much like neuronal synaptic connections. This Artificial Neuron thus serves as an external memory aid, allowing the LLM to reference past interactions and apply learned reasoning strategies to new problems. Our experimental setup involves training with the GSM8K dataset for initial model response generation, followed by systematic refinements through feedback loops. Subsequent testing demonstrated a significant improvement in accuracy and efficiency, underscoring the potential of external memory systems to advance LLMs beyond current limitations. This approach not only enhances the LLM's problem-solving precision but also reduces computational redundancy, paving the way for more sophisticated applications of artificial intelligence in cognitive tasks. This paper details the methodology, implementation, and implications of the Artificial Neuron model, offering a transformative perspective on enhancing machine intelligence.
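A minimal sketch of the external-memory loop is given below. The embed and llm callables, the cosine-similarity retrieval, and the in-memory store standing in for a real vector database are all illustrative placeholders, not the paper's implementation.

import numpy as np

class ArtificialNeuronMemory:
    # Toy external memory: stores (query, refined response) pairs as embedding
    # vectors and returns the most similar past interactions for a new query.
    def __init__(self, embed):
        self.embed = embed                 # callable: str -> 1-D np.ndarray
        self.keys, self.pairs = [], []

    def store(self, query, refined_response):
        self.keys.append(self.embed(query))
        self.pairs.append((query, refined_response))

    def retrieve(self, query, top_k=3):
        if not self.keys:
            return []
        q = self.embed(query)
        K = np.stack(self.keys)
        sims = K @ q / (np.linalg.norm(K, axis=1) * np.linalg.norm(q) + 1e-9)
        return [self.pairs[i] for i in np.argsort(-sims)[:top_k]]

def answer_with_memory(llm, memory, query):
    # Retrieved (problem, corrected solution) pairs act as context for the new
    # problem; if the answer is later judged incorrect and refined, the refined
    # version is stored, closing the feedback loop.
    context = "\n".join(f"Q: {q}\nA: {a}" for q, a in memory.retrieve(query))
    return llm(context + "\nQ: " + query + "\nA:")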
We investigate collective nonlinear dynamics in a blue-detuned optomechanical cavity that is mechanically coupled to an undriven mechanical resonator. By controlling the strength of the driving field, we engineer a mechanical gain that balances the losses of the undriven resonator. This gain-loss balance corresponds to the threshold at which both coupled mechanical resonators simultaneously enter a regime of self-sustained limit-cycle oscillations. Rich collective dynamics, such as in-phase and out-of-phase synchronization, therefore emerge, depending on the mechanical coupling rate, the optically induced mechanical gain and spring effect, and the frequency mismatch between the resonators. Moreover, we introduce a quadratic coupling that enhances the in-phase synchronization. This work shows how phonon transport can remotely induce synchronization in a coupled mechanical resonator array and opens up new avenues for metrology, communication, phonon processing, and novel memory concepts.
We present the first combinatorial polynomial time algorithm for computing the equilibrium of the Arrow-Debreu market model with linear utilities.
We constrain the viable models of Horndeski gravity, written in its equivalent Generalised Galileon version, by resorting to the Witten positive energy theorem. We find that the free function $G_3(\phi, X)$ in the Lagrangian is constrained to be a function solely of the scalar field, $G_3(\phi)$, and relations among the free functions are found. Other criteria for stability are also analysed, such as the attractiveness of gravity, the Dolgov-Kawasaki instability and the energy conditions. Some applications to cosmology are discussed.
Structural colors are produced by wavelength-dependent scattering of light from nanostructures. While living organisms often exploit phase separation to directly assemble structurally colored materials from macromolecules, synthetic structural colors are typically produced in a two-step process involving the sequential synthesis and assembly of building blocks. Phase separation is attractive for its simplicity, but applications are limited due to a lack of robust methods for its control. A central challenge is to arrest phase separation at the desired length scale. Here, we show that solid-state polymerization-induced phase separation can produce stable structures at optical length scales. In this process, a polymeric solid is swollen and softened with a second monomer. During its polymerization, the two polymers become immiscible and phase separate. As free monomer is depleted, the host matrix resolidifies and arrests coarsening. The resulting PS-PMMA composites have a blue or white appearance. We compare these biomimetic nanostructures to those in structurally-colored feather barbs, and demonstrate the flexibility of this approach by producing structural color in filaments and large sheets.
Apart from being the gateway for all access to the eukaryotic genome, chromatin has in recent years been identified as carrying an epigenetic code regulating transcriptional activity. The detailed knowledge of this code contrasts the ignorance of the fiber structure which it regulates, and none of the suggested fiber models are capable of predicting the most basic quantities of the fiber (diameter, nucleosome line density, etc.). We address this three-decade-old problem by constructing a simple geometrical model based on the nucleosome shape alone. Without fit parameters we predict the observed properties of the condensed chromatin fiber (e.g. its 30 nm diameter), the structure, and how the fiber changes with varying nucleosome repeat length. Our approach further puts the plethora of previously suggested models within a coherent framework, and opens the door to detailed studies of the interplay between chromatin structure and function.
Pulsar-like compact stars provide us a unique laboratory to explore the properties of dense matter at supra-nuclear densities. One of the models for pulsar-like stars is that they are composed entirely of "strangeons", and in this paper we study pulsar glitches in the strangeon star model. Strangeon stars would solidify during cooling, and such solid stars would naturally exhibit glitches as a result of starquakes. Based on the starquake model established previously, we propose that when a starquake occurs, the inner motion of the star, which changes the moment of inertia and affects the glitch sizes, is divided into plastic flow and elastic motion. The plastic flow, which is induced in the fractured part of the outer layer, moves tangentially to redistribute the matter of the star and is hard to recover. The elastic motion, on the other hand, changes the shape of the star and recovers significantly. Within this scenario, we can understand the behavior of glitches without significant energy release, including those of the Crab and the Vela pulsars, in a uniform model. We derive the recovery coefficient as a function of glitch size, as well as the time interval between two successive glitches as a function of the released stress. Our results are consistent with observational data for reasonable ranges of the parameters. The implications for the oblateness of the Crab and the Vela pulsars are discussed.
We give simpler, sparser, and faster algorithms for differentially private fine-tuning of large-scale pre-trained language models, which achieve the state-of-the-art privacy-versus-utility tradeoffs on many standard NLP tasks. We propose a meta-framework for this problem, inspired by the recent success of highly parameter-efficient methods for fine-tuning. Our experiments show that differentially private adaptations of these approaches outperform previous private algorithms in three important dimensions: utility, privacy, and the computational and memory cost of private training. On many commonly studied datasets, the utility of private models approaches that of non-private models. For example, on the MNLI dataset we achieve an accuracy of $87.8\%$ using RoBERTa-Large and $83.5\%$ using RoBERTa-Base with a privacy budget of $\epsilon = 6.7$. In comparison, absent privacy constraints, RoBERTa-Large achieves an accuracy of $90.2\%$. Our findings are similar for natural language generation tasks. Privately fine-tuning GPT-2-Small, GPT-2-Medium, GPT-2-Large, and GPT-2-XL on the DART dataset achieves BLEU scores of 38.5, 42.0, 43.1, and 43.8, respectively (privacy budget of $\epsilon = 6.8$, $\delta =$ 1e-5), whereas the non-private baseline is $48.1$. All our experiments suggest that larger models are better suited for private fine-tuning: while they are well known to achieve superior accuracy non-privately, we find that they also better maintain their accuracy when privacy is introduced.
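The cost of private training discussed above comes from the DP-SGD recipe of per-example gradient clipping plus Gaussian noise, applied here only to a small set of trainable parameters (for instance adapter or low-rank weights). The PyTorch-style sketch below is a simplification for illustration, not the authors' code, and omits subsampling and privacy accounting.

import torch

def dp_sgd_step(model, loss_fn, xs, ys, params, lr=1e-3, clip_norm=1.0, noise_mult=1.0):
    # One DP-SGD step over the trainable subset `params`: clip each example's
    # gradient to `clip_norm`, sum, add Gaussian noise, and apply the update.
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xs, ys):                                  # per-example grads
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(clip_norm / (total + 1e-9), max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * noise_mult * clip_norm
            p.add_(-(lr / len(xs)) * (s + noise))

Because only the parameter-efficient subset is updated, the clipped per-example gradients and the injected noise live in the much smaller adapter dimension, which is one reason larger backbone models remain practical to fine-tune privately.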
Research on multi-robot coverage path planning (CPP) has been attracting increasing attention. To achieve efficient coverage, this paper proposes an improved DARP coverage algorithm. The improved DARP algorithm, based on the A* algorithm, is used to assign tasks to the robots and is then combined with an STC algorithm based on the Up-First algorithm to achieve full coverage of the task area. Compared with the original DARP algorithm, this algorithm achieves higher efficiency and a higher coverage rate.