Causal disentanglement seeks a representation of data involving latent variables that relate to one another via a causal model. A representation is identifiable if both the latent model and the transformation from latent to observed variables are unique. In this paper, we study observed variables that are a linear transformation of a linear latent causal model. Data from interventions are necessary for identifiability: if one latent variable is missing an intervention, we show that there exist distinct models that cannot be distinguished. Conversely, we show that a single intervention on each latent variable is sufficient for identifiability. Our proof uses a generalization of the RQ decomposition of a matrix that replaces the usual orthogonal and upper triangular conditions with analogues depending on a partial order on the rows of the matrix, with partial order determined by a latent causal model. We corroborate our theoretical results with a method for causal disentanglement that accurately recovers a latent causal model.
While instruction-tuned language models have demonstrated impressive zero-shot generalization, these models often struggle to generate accurate responses when faced with instructions that fall outside their training set. This paper presents Instructive Decoding (ID), a simple yet effective approach that augments the efficacy of instruction-tuned models. Specifically, ID adjusts the logits for next-token prediction in a contrastive manner, utilizing predictions generated from a manipulated version of the original instruction, referred to as a noisy instruction. This noisy instruction aims to elicit responses that could diverge from the intended instruction yet remain plausible. We conduct experiments across a spectrum of such noisy instructions, ranging from those that insert semantic noise via random words to others like 'opposite' that elicit deviated responses. Our approach achieves considerable performance gains across various instruction-tuned models and tasks without necessitating any additional parameter updates. Notably, utilizing 'opposite' as the noisy instruction in ID, which exhibits the maximum divergence from the original instruction, consistently produces the most significant performance gains across multiple models and tasks.
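A minimal sketch of the contrastive logit adjustment described above, assuming a HuggingFace-style causal LM interface; the prompt construction and the coefficient `epsilon` are illustrative assumptions, not the paper's exact formulation:

```python
import torch

def instructive_decoding_step(model, tokenizer, instruction, noisy_instruction,
                              response_so_far, epsilon=0.3):
    """One next-token step of contrastive decoding against a noisy instruction.

    Illustrative sketch: prompt format and epsilon are assumptions, not the
    paper's specification.
    """
    def next_logits(instr):
        ids = tokenizer(instr + response_so_far, return_tensors="pt").input_ids
        with torch.no_grad():
            return model(ids).logits[0, -1]  # logits for the next token

    base = next_logits(instruction)          # prediction under the real instruction
    noisy = next_logits(noisy_instruction)   # prediction under e.g. an 'opposite' instruction

    # Contrastive adjustment: push away from what the noisy instruction favors.
    adjusted = base - epsilon * noisy
    return int(torch.argmax(adjusted))
```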
We check the Vaisman condition of geometric quantization for the R-matrix type Poisson pencil on a coadjoint orbit of a compact Lie group. It is shown that this condition is not satisfied.
It is well established that the notion of min-entropy fails to satisfy the \emph{chain rule} of the form $H(X,Y) = H(X|Y)+H(Y)$, known for Shannon Entropy. Such a property would help to analyze how min-entropy is split among smaller blocks. Problems of this kind arise for example when constructing extractors and dispersers. We show that any sequence of variables exhibits a very strong block-source structure (conditional distributions of blocks are nearly flat) when we \emph{spoil a few correlated bits}. This implies, conditioned on the spoiled bits, that \emph{splitting-recombination properties} hold. In particular, we have many nice properties that min-entropy does not obey in general, for example strong chain rules, "information can't hurt" inequalities, equivalences of average and worst-case conditional entropy definitions and others. Quantitatively, for any sequence $X_1,\ldots,X_t$ of random variables over an alphabet $\mathcal{X}$ we prove that, when conditioned on $m = t\cdot O( \log\log|\mathcal{X}| + \log\log(1/\epsilon) + \log t)$ bits of auxiliary information, all conditional distributions of the form $X_i|X_{<i}$ are $\epsilon$-close to being nearly flat (only a constant factor away). The argument is combinatorial (based on simplex coverings). This result may be used as a generic tool for \emph{exhibiting block-source structures}. We demonstrate this by reproving the fundamental converter due to Nisan and Zuckerman (\emph{J. Computer and System Sciences, 1996}), which shows that sampling blocks from a min-entropy source roughly preserves the entropy rate. Our bound implies, only by straightforward chain rules, an additive loss of $o(1)$ (for sufficiently many samples), which qualitatively meets the first tighter analysis of this problem due to Vadhan (\emph{CRYPTO'03}), obtained by large deviation techniques.
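For reference, the quantities involved are the standard min-entropy and its worst-case conditional version,
$$ H_{\infty}(X) = -\log \max_{x} \Pr[X = x], \qquad H_{\infty}(X \mid Y) = -\log \max_{x,y} \Pr[X = x \mid Y = y], $$
and it is the analogue of the Shannon identity $H(X,Y)=H(X|Y)+H(Y)$ for $H_{\infty}$ that fails in general.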
It is shown that "nonintegrable phases of Wilson line integrals" are not true dynamical variables in Chern-Simons field theory.
User engagement prediction plays a critical role in designing interaction strategies to grow user engagement and increase revenue in online social platforms. Through an in-depth analysis of real-world data from the world's largest professional social platform, i.e., LinkedIn, we find that users exhibit diverse engagement patterns, and a major reason for these differences is that users have different intents. That is, people have different intents when using LinkedIn, e.g., applying for jobs, building connections, or checking notifications, which show quite different engagement patterns. Meanwhile, user intents and the corresponding engagement patterns may change over time. Although such pattern differences and dynamics are essential for user engagement prediction, differentiating user engagement patterns based on dynamic user intents for better user engagement forecasting has not received enough attention in previous works. In this paper, we propose a Dynamic Intent Guided Meta Network (DIGMN), which can explicitly model user intent varying with time and perform differentiated user engagement forecasting. Specifically, we derive interpretable basic user intents as prior knowledge from data mining and introduce these prior intents to explicitly model dynamic user intent. Furthermore, based on the dynamic user intent representations, we propose a meta predictor to perform differentiated user engagement forecasting. Through a comprehensive evaluation on anonymized LinkedIn user data, our method significantly outperforms state-of-the-art baselines, achieving 2.96% and 3.48% absolute error reductions on coarse-grained and fine-grained user engagement prediction tasks, respectively, demonstrating the effectiveness of our method.
The space density of the various classes of cataclysmic variables (CVs) could only be weakly constrained in the past. Reasons were the small number of objects in complete X-ray flux-limited samples and the difficulty of deriving precise distances to CVs. The former limitation still exists. Here the impact of Gaia parallaxes and implied distances on the space density of X-ray selected complete, flux-limited samples is studied. The samples are described in the literature; those of non-magnetic CVs are based on ROSAT (RBS -- ROSAT Bright Survey & NEP -- North Ecliptic Pole), that of the Intermediate Polars stems from Swift/BAT. All CVs appear to be rarer than previously thought, although the new values are all within the errors of past studies. Upper limits at 90\% confidence for the space densities of non-magnetic CVs are $\rho_{\rm RBS} < 1.1 \times 10^{-6}$ pc$^{-3}$ and $\rho_{\rm RBS+NEP} < 5.1 \times 10^{-6}$ pc$^{-3}$, for an assumed scale height of $h=260$ pc, and $\rho_{\rm IPs} < 1.3 \times 10^{-7}$ pc$^{-3}$ for the long-period Intermediate Polars at a scale height of 120 pc. Most of the distances to the IPs were under-estimated in the past. The upper limits to the space densities are only valid in the case where CVs do not have lower X-ray luminosities than the lowest-luminosity member of the sample. These results need consolidation by larger sample sizes, soon to be established through sensitive X-ray all-sky surveys to be performed with eROSITA on the Spektrum-X-Gamma mission.
We construct a simple symplectic map to study the dynamics of eccentric orbits in non-spherical potentials. The map offers a dramatic improvement in speed over traditional integration methods, while accurately representing the qualitative details of the dynamics. We focus attention on planar, non-axisymmetric power-law potentials, in particular the logarithmic potential. We confirm the presence of resonant orbit families (``boxlets'') in this potential and uncover new dynamics such as the emergence of a stochastic web in nearly axisymmetric logarithmic potentials. The map can also be applied to triaxial, lopsided, non-power-law and rotating potentials.
Loopable music generation systems enable diverse applications, but they often lack controllability and customization capabilities. We argue that enhancing controllability can enrich these models, with emotional expression being a crucial aspect for both creators and listeners. Hence, building upon LooperGP, a loopable tablature generation model, this paper explores endowing systems with control over conveyed emotions. To enable such conditional generation, we propose integrating musical knowledge by utilizing multi-granular semantic and musical features during model training and inference. Specifically, we incorporate song-level features (Emotion Labels, Tempo, and Mode) and bar-level features (Tonal Tension) together to guide emotional expression. Through algorithmic and human evaluations, we demonstrate the approach's effectiveness in producing music conveying two contrasting target emotions, happiness and sadness. An ablation study is also conducted to clarify the contributing factors behind our approach's results.
We prove that the BRST complex of a topological conformal field theory is a homotopy Gerstenhaber algebra, as conjectured by Lian and Zuckerman in 1992. We also suggest a refinement of the original conjecture for topological vertex operator algebras. We illustrate the usefulness of our main tools, operads and "string vertices", by obtaining new results on Vassiliev invariants of knots and double loop spaces.
Multi-lead ECG compression (MlEC) has attracted tremendous attention for long-term monitoring of a patient's heart behavior. This paper proposes a method denoted block sparse MlEC (BlS MlEC) that exploits between-lead correlations to compress the signals more efficiently. This is due to the fact that multi-lead ECG signals are multiple observations of the same source (the heart) from different locations. Consequently, they are highly correlated in terms of the support set of their sparse models, which leads them to share a dominant common structure. In order to obtain the block sparse model, the collaborative version of the lasso estimator is applied. In addition, we show that the raised cosine kernel has advantages over the conventional Gaussian and wavelet (Daubechies family) kernels due to its specific properties. It is demonstrated that using the raised cosine kernel to construct the sparsifying basis matrix gives a sparser model, which results in a higher compression ratio and lower reconstruction error. The simulation results show average improvements of 37%, 88% and 90-97% for BlS MlEC compared to the non-collaborative case with the raised cosine kernel, the Gaussian kernel, and the collaborative case with Daubechies wavelet kernels, respectively, in terms of reconstruction error at a fixed compression ratio.
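A minimal sketch of the collaborative (block-sparse) recovery idea, using scikit-learn's MultiTaskLasso as a stand-in for the collaborative lasso: treating the leads as multiple outputs forces a shared support across leads. The raised cosine dictionary is indicated only schematically, with invented sizes:

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

def raised_cosine_atom(t, center, width):
    """Raised cosine kernel atom; schematic stand-in for the paper's basis."""
    u = (t - center) / width
    return np.where(np.abs(u) <= 1.0, 0.5 * (1.0 + np.cos(np.pi * u)), 0.0)

# Dictionary of shifted raised cosine atoms (columns); sizes are hypothetical.
n_samples, n_atoms, n_leads = 512, 128, 12
t = np.arange(n_samples)
D = np.stack([raised_cosine_atom(t, c, 8.0)
              for c in np.linspace(0, n_samples, n_atoms)], axis=1)

Y = np.random.randn(n_samples, n_leads)  # placeholder for the multi-lead ECG matrix

# Joint (collaborative) fit: the L2/L1 penalty forces all leads to select the
# same atoms, i.e., a shared support / block-sparse coefficient matrix.
coef = MultiTaskLasso(alpha=0.1, max_iter=5000).fit(D, Y).coef_  # (n_leads, n_atoms)
```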
A phenomenological extension of the well-known brane-world cosmology of Dvali, Gabadadze and Porrati (eDGP) has recently been proposed. In this model, a cosmological-constant-like term is explicitly present as a non-vanishing tension sigma on the brane, and an extra parameter alpha tunes the cross-over scale r_c, the scale at which higher dimensional gravity effects become non-negligible. Since the Hubble parameter in this cosmology reproduces the same LCDM expansion history, we study how upcoming weak lensing surveys, such as Euclid and DES (Dark Energy Survey), can confirm or rule out this class of models. We perform Markov Chain Monte Carlo simulations to determine the parameters of the model, using Type Ia Supernovae, H(z) data, Gamma Ray Bursts and Baryon Acoustic Oscillations. We also fit the power spectrum of the temperature anisotropies of the Cosmic Microwave Background to obtain the correct normalisation for the density perturbation power spectrum. Then, we compute the matter and the cosmic shear power spectra, both in the linear and non-linear regimes. The latter is calculated with the two different approaches of Hu and Sawicki (2007) (HS) and Khoury and Wyman (2009) (KW). With the eDGP parameters coming from the Markov Chains, KW reproduces the LCDM matter power spectrum at both linear and non-linear scales, and the LCDM and eDGP shear signals are degenerate. This result does not hold with the HS prescription: Euclid can distinguish the eDGP model from LCDM because their expected power spectra differ by roughly the 3 sigma uncertainty in the angular scale range 700<l<3000; on the contrary, the two models differ at most by the 1 sigma uncertainty over the range 500<l<3000 in the DES experiment and they are virtually indistinguishable.
While short range 3D pedestrian detection is sufficient for emergency braking, long range detections are required for smooth braking and gaining trust in autonomous vehicles. The current state-of-the-art on the KITTI benchmark performs suboptimally in detecting the position of pedestrians at long range. Thus, we propose an approach specifically targeting long range 3D pedestrian detection (LRPD), leveraging the density of RGB and the precision of LiDAR. For proposals, RGB instance segmentation and LiDAR point-based proposal generation are combined, followed by a second stage using both sensor modalities symmetrically. This leads to a significant improvement in mAP at long range compared to the current state-of-the-art. The evaluation of our LRPD approach was done on the pedestrians from the KITTI benchmark.
We discuss regularity statements for equidistant decompositions of Riemannian manifolds and for the corresponding quotient spaces. We show that any stratum of the quotient space has curvature locally bounded from both sides.
The demonstration of optical multipath interference from a large number of quantum emitters is essential for the realization of many paradigmatic experiments in quantum optics. However, such interference remains unexplored as it crucially depends on the sub-wavelength positioning accuracy and stability of all emitters. We present the observation of controlled interference of light scattered from strings of up to 53 trapped ions. The light scattered from ions localized in a harmonic trapping potential is collected along the ion crystal symmetry axis, which guarantees the spatial indistinguishability and allows for an efficient scaling of the contributing ion number. We achieve the preservation of the coherence of scattered light for all the measured string sizes and nearly-optimal enhancement of phase sensitivity. The presented results will enable realization and control of directional photon emission, direct detection of enhanced quadrature squeezing of atomic resonance fluorescence, or optical generation of genuine multi-partite entanglement of atoms.
Excess noise from scattered light poses a persistent challenge in the analysis of data from gravitational wave detectors such as LIGO. We integrate a physically motivated model for the behavior of these "glitches" into a standard Bayesian analysis pipeline used in gravitational wave science. This allows for the inference of the free parameters in this model, and subtraction of these models to produce glitch-free versions of the data. We show that this inference is an effective discriminator of the presence of the features of these glitches, even when those features may not be discernible in standard visualizations of the data.
We present a study of the statistical properties of three velocity dispersion and mass estimators, namely biweight, gapper and standard deviation, in the small number of galaxies regime ($N_{\rm gal} \le 75$). Using a set of 73 numerically simulated galaxy clusters, we characterise the statistical bias and the variance for the three estimators, both in the determination of the velocity dispersion and the dynamical mass of the clusters via the $\sigma-M$ relation. The results are used to define a new set of unbiased estimators that are able to correct for those statistical biases with a minimal increase of the associated variance. The numerical simulations are also used to characterise the impact of velocity segregation in the selection of cluster members, and the impact of using cluster members within different physical radii from the cluster centre. The standard deviation is found to be the lowest variance estimator. The selection of galaxies within the sub-sample of the most massive galaxies in the cluster introduces a 2\% bias in the velocity dispersion estimate when calculated using a quarter of the most massive cluster members. We also find a dependence of the velocity dispersion estimate on the aperture radius as a fraction of $R_{200}$, consistent with previous results. The proposed set of unbiased estimators effectively provides a correction of the velocity dispersion and mass estimates from all those effects in the small number of cluster members regime. This is tested by applying the new estimators to a subset of simulated observations. Although for a single galaxy cluster the statistical and physical effects discussed here are comparable or slightly smaller than the bias introduced by interlopers, they will be of relevance when dealing with ensemble properties and scaling relations for large cluster samples (Abridged).
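For concreteness, a minimal implementation of the gapper estimator in its standard form (the biweight and standard deviation are used analogously); this is a generic sketch, not the paper's corrected, unbiased version:

```python
import numpy as np

def gapper_dispersion(velocities):
    """Gapper estimate of the line-of-sight velocity dispersion.

    sigma_G = sqrt(pi)/(N(N-1)) * sum_i i*(N-i)*g_i,
    where g_i are the gaps between consecutive order statistics.
    """
    v = np.sort(np.asarray(velocities, dtype=float))
    n = v.size
    gaps = np.diff(v)                  # g_i = v_(i+1) - v_(i)
    i = np.arange(1, n)                # Gaussian weights i*(N-i)
    return np.sqrt(np.pi) / (n * (n - 1)) * np.sum(i * (n - i) * gaps)
```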
The qualitative difference between the anomalous scaling properties of hadronic final states in soft and hard processes of high energy collisions is studied in some detail. It is pointed out that the experimental data of e^+e^- collisions at $E_{cm}=$91.2 GeV from DELPHI indicate that the dynamical fluctuations in e^+e^- collisions are isotropic, in contrast to the anisotropic fluctuations observed in hadron-hadron collision experiments. This assertion is confirmed by Monte Carlo simulation using the Jetset7.4 event generator.
Realizing faster experimental cycle times is important for the future of quantum simulation. The cycle time determines how often the many-body wave-function can be sampled, defining the rate at which information is extracted from the quantum simulation. We demonstrate a system which can produce a Bose-Einstein condensate of $8 \times 10^4$ $^{168}\text{Er}$ atoms with approximately 85% condensate fraction in 800 ms and a degenerate Fermi gas of $^{167}\text{Er}$ in 4 seconds, which are unprecedented times compared to many existing quantum gas experiments. This is accomplished by several novel cooling techniques and a tunable dipole trap. The methods used here for accelerating the production of quantum degenerate gases should be applicable to a variety of atomic species and are promising for expanding the capabilities of quantum simulation.
Space-time (ST) beams, ultrafast optical wavepackets with customized spatial and temporal characteristics, present a significant contrast to conventional spatial-structured light and hold the potential to revolutionize our understanding and manipulation of light. However, the progress in ST beam research has been constrained by the absence of a universal framework for their analysis and generation. Here, we introduce the concept of "two-dimensional ST duality", establishing a foundational duality between spatial-structured light and ST beams. We show that breaking the exact balance between paraxial diffraction and narrow-band dispersion is crucial for guiding the dynamics of ST wavepackets. Leveraging this insight, we pioneer a versatile complex-amplitude modulation strategy, enabling the precise crafting of ST beams with an exceptional fidelity exceeding 97%. Furthermore, we uncover a new range of ST wavepackets by harnessing the exact one-to-one relationship between scalar spatial-structured light and ST beams. Our findings suggest a paradigm shift opportunity in ST beam research and may apply to a broader range of wave physics systems.
We show that the light-front vacuum is not trivial, and that the Fock space of positive energy quanta solutions is not complete. As an example of this non-triviality, we calculate the electromagnetic current for scalar bosons in the background field method, where covariance is restored by considering the complete Fock space of solutions. We also show that the method of "dislocating the integration pole" is nothing more than a particular case of this, so that such an "ad hoc" prescription can be dispensed with altogether if we deal with the whole Fock space. In this work we construct the electromagnetic current operator for a system composed of two free bosons. The technique employed to deduce these operators is the definition of global propagators on the light front when a background electromagnetic field acts on one of the particles.
Double pants decompositions were introduced in our paper "Double pants decompositions of 2-surfaces" (Mosc. Math. J. 11 (2011), no. 2, 231-258, arXiv:1005.0073), together with a flip-twist groupoid acting on these decompositions. It was shown that the flip-twist groupoid acts transitively on a certain topological class of the decompositions; however, Randich recently discovered a serious mistake in the proof. In this note we present a new proof of the result, accessible without reading the initial paper.
Large Language Models (LLMs) have recently shown impressive abilities in handling various natural language-related tasks. Among different LLMs, current studies have assessed ChatGPT's superior performance across a variety of tasks, especially under the zero/few-shot prompting conditions. Given such successes, the Recommender Systems (RSs) research community has started investigating its potential applications within the recommendation scenario. However, although various methods have been proposed to integrate ChatGPT's capabilities into RSs, current research struggles to comprehensively evaluate such models while considering the peculiarities of generative models. Often, evaluations do not consider hallucinations, duplications, and out-of-the-closed domain recommendations and solely focus on accuracy metrics, neglecting the impact on beyond-accuracy facets. To bridge this gap, we propose a robust evaluation pipeline to assess ChatGPT's ability as an RS and post-process ChatGPT recommendations to account for these aspects. Through this pipeline, we investigate ChatGPT-3.5 and ChatGPT-4 performance in the recommendation task under the zero-shot condition employing the role-playing prompt. We analyze the model's functionality in three settings: the Top-N Recommendation, the cold-start recommendation, and the re-ranking of a list of recommendations, and in three domains: movies, music, and books. The experiments reveal that ChatGPT exhibits higher accuracy than the baselines on the books domain. It also excels in re-ranking and cold-start scenarios while maintaining reasonable beyond-accuracy metrics. Furthermore, we measure the similarity between the ChatGPT recommendations and the other recommenders, providing insights about how ChatGPT could be categorized in the realm of recommender systems. The evaluation pipeline is publicly released for future research.
Phonon measurements in the A15-type superconductors were complicated in the past because of the unavailability of large single crystals for inelastic neutron scattering, e.g., in the case of Nb$_3$Sn, or unfavorable neutron scattering properties in the case of V$_3$Si. Hence, only a few studies of the lattice dynamical properties with momentum-resolved methods were published, in particular below the superconducting transition temperature $T_c$. Here, we overcome these problems by employing inelastic x-ray scattering and report a combined experimental and theoretical investigation of lattice dynamics in V$_3$Si with the focus on the temperature-dependent properties of low-energy acoustic phonon modes in several high-symmetry directions. We paid particular attention to the evolution of the soft phonon mode of the structural phase transition observed in our sample at $T_s=18.9\,\rm{K}$, i.e., just above the measured superconducting phase transition at $T_c=16.8\,\rm{K}$. Theoretically, we predict lattice dynamics including electron-phonon coupling based on density-functional-perturbation theory and discuss the relevance of the soft phonon mode with regard to the value of $T_c$. Furthermore, we explain superconductivity-induced anomalies in the lineshape of several acoustic phonon modes using a model proposed by Allen et al. [Phys. Rev. B 56, 5552 (1997)].
A small number of double-lobed radio galaxies (17 from our own census of the literature) show an additional pair of low surface brightness `wings', thus forming an overall `X'-shaped appearance. The origin of the wings in these radio sources is unclear. They may be the result of back-flowing plasma from the currently active radio lobes into an asymmetric medium surrounding the active nucleus, which would make these ideal systems in which to study thermal/non-thermal plasma interactions in extragalactic radio sources. Another possibility is that the wings are the aging radio lobes left over after a (rapid) realignment of the central supermassive black-hole/accretion disk system due perhaps to a merger. Generally, these models are not well tested; with the small number of known examples, previous works focused on detailed case studies of selected sources with little attempt at a systematic study of a large sample. Using the VLA-FIRST survey database, we are compiling a large sample of winged and X-shaped radio sources for such studies. As a first step toward this goal, an initial sample of 100 new candidate objects of this type are presented in this paper. ...[abridged]
The latent class model is a powerful unsupervised clustering algorithm for categorical data. Many statistics exist to test the fit of the latent class model. However, traditional methods to evaluate those fit statistics are not always useful. Asymptotic distributions are not always known, and empirical reference distributions can be very time consuming to obtain. In this paper we propose a fast resampling scheme with which any type of model fit can be assessed. We illustrate it here on the latent class model, but the methodology can be applied in any situation. The principle behind the lazy bootstrap method is to specify a statistic which captures the characteristics of the data that a model should capture correctly. If those characteristics in the observed data and in model-generated data are very different, we can assume that the model could not have produced the observed data. With this method we achieve the flexibility of tests from the Bayesian framework, while only needing maximum likelihood estimates. We provide a step-wise algorithm with which the fit of a model can be assessed based on the characteristics we as researchers find important. In a Monte Carlo study we show that the method has very low type I errors for all illustrated statistics. Power to reject a model depended largely on the type of statistic that was used and on sample size. We applied the method to an empirical data set on clinical subgroups with risk of myocardial infarction and compared the results directly to the parametric bootstrap. The results of our method were highly similar to those obtained by the parametric bootstrap, while the required computations differed by three orders of magnitude in favour of our method.
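A schematic of the resampling logic as described, assuming placeholder hooks `fit_mle`, `simulate`, and `statistic` supplied by the researcher; this is a minimal sketch of the general idea, not the paper's exact step-wise algorithm:

```python
import numpy as np

def lazy_bootstrap_pvalue(data, fit_mle, simulate, statistic, n_rep=500, rng=None):
    """Generic model-fit check: compare a user-chosen statistic on the observed
    data against its distribution under data generated from the MLE-fitted model.

    fit_mle, simulate, and statistic are user-supplied placeholders.
    """
    rng = rng if rng is not None else np.random.default_rng()
    params = fit_mle(data)                        # fit the model once (MLE only)
    t_obs = statistic(data)
    t_rep = np.array([statistic(simulate(params, len(data), rng))
                      for _ in range(n_rep)])     # model-generated reference distribution
    # Two-sided tail proportion: how extreme is the observed statistic?
    return np.mean(np.abs(t_rep - t_rep.mean()) >= abs(t_obs - t_rep.mean()))
```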
We study transfer principles for upper bounds of motivic exponential functions and for linear combinations of such functions, directly generalizing the transfer principles from [7] by Cluckers-Loeser and [13, Appendix B] by Shin-Templier (appendix B by Cluckers-Gordon-Halupczok). These functions come from rather general oscillatory integrals on local fields, and can be used to describe e.g. Fourier transforms of orbital integrals. One of our techniques consists in reducing to simpler functions where the oscillation only comes from the residue field.
During its first 4 months of taking data, Advanced LIGO has detected gravitational waves from two binary black hole mergers, GW150914 and GW151226, along with the statistically less significant binary black hole merger candidate LVT151012. We use our rapid binary population synthesis code COMPAS to show that all three events can be explained by a single evolutionary channel -- classical isolated binary evolution via mass transfer including a common envelope phase. We show all three events could have formed in low-metallicity environments (Z = 0.001) from progenitor binaries with typical total masses $\gtrsim 160 M_\odot$, $\gtrsim 60 M_\odot$ and $\gtrsim 90 M_\odot$, for GW150914, GW151226, and LVT151012, respectively.
We report the discovery of a new Small Magellanic Cloud Pulsar Wind Nebula (PWN) at the edge of the Supernova Remnant (SNR) DEM S5. The pulsar-powered object has a cometary morphology similar to the Galactic PWN analogs PSR B1951+32 and 'the mouse'. It is travelling supersonically through the interstellar medium. We estimate the pulsar kick velocity to be in the range of 700-2000 km/s for an age between 10 and 28 kyr. The radio spectral index for this SNR-PWN-pulsar system is flat (-0.29 $\pm$ 0.01), consistent with other similar objects. We infer that the putative pulsar has a radio spectral index of -1.8, which is typical for Galactic pulsars. We searched for dispersion measures (DMs) up to 1000 pc cm^-3 but found no convincing candidates with a S/N greater than 8. We produce a polarisation map for this PWN at 5500 MHz and find a mean fractional polarisation of P $\sim 23$ percent. The X-ray power-law spectrum (Gamma $\sim 2$) is indicative of non-thermal synchrotron emission, as is expected from a PWN-pulsar system. Finally, we detect DEM S5 in infrared (IR) bands. Our IR photometric measurements strongly indicate the presence of shocked gas, which is expected for SNRs. However, it is unusual to detect such IR emission in an SNR with a supersonic bow-shock PWN. We also find a low-velocity HI cloud at $\sim 107$ km/s which is possibly interacting with DEM S5. SNR DEM S5 is the first confirmed detection of a pulsar-powered bow shock nebula found outside the Galaxy.
We develop a formalism for evaluation of the transverse momentum dependence of cross sections of the radiation processes in medium. The analysis is based on the light-cone path integral approach to the induced radiation. The results are applicable in both QED and QCD.
We study the Rayleigh scattering induced by a diamond nanocrystal in a whispering-gallery-microcavity--waveguide coupling system, and find that it plays a significant role in the photon transportation. On one hand, this study provides a new insight into future solid-state cavity quantum electrodynamics toward strong coupling physics. On the other hand, benefiting from this Rayleigh scattering, novel photon transportation such as dipole induced transparency and strong photon antibunching can occur simultaneously. As potential applications, this system can function as high-efficiency photon turnstiles. In contrast to [B. Dayan \textit{et al.}, \textrm{Science} \textbf{319}, 1062 (2008)], the photon turnstiles proposed here are highly immune to the nanocrystal's azimuthal position.
The transformations of the sum identities for generalized harmonic and oscillatory numbers, obtained earlier in our recent report [1], enable us to derive new identities expressed in terms of the corresponding square roots of x. At least one of these identities may be applied to prove the Riemann Hypothesis by induction. Additionally, using this approach, a new series for Euler's constant gamma has been found.
We investigate the bias-modulated dynamics of a strongly driven two-level system using the counter-rotating-hybridized rotating-wave (CHRW) method. This CHRW method treats the driving field and the bias on equal footing by a unitary transformation with two parameters $\xi$ and $\zeta$, and is nonperturbative in driving strength, tunneling amplitude or bias. In addition, this CHRW method is beyond the traditional rotating-wave approximation (Rabi-RWA) and yet by properly choosing the two parameters $\xi$ and $\zeta$, the transformed Hamiltonian takes the RWA form with a renormalized energy splitting and a renormalized driving strength. The reformulated CHRW method possesses the same mathematical simplicity as the Rabi-RWA approach and thus allows us to calculate analytically the dynamics and explore explicitly the effect of the bias. We show that the CHRW method gives the accurate driven dynamics for a wide range of parameters as compared to the numerically exact results. When energy scales of the driving are comparable to the intrinsic energy scale of the two-level systems, the counter-rotating interactions and static bias profoundly influence the generalized Rabi frequency. In this regime, where ordinary perturbation approaches fail, the CHRW method works very well and efficiently. We also demonstrate the dynamics of the system in the strong-driving and off-resonance cases for which the Rabi-RWA method breaks down but the CHRW method remains valid. We obtain analytical expressions for the generalized Rabi frequency and bias-modulated Bloch-Siegert shift as functions of the bias, tunneling and driving field parameters. The CHRW approach is a mathematically simple and physically clear method. It can be applied to treat some complicated problems for which a numerical study is difficult to perform.
Nuclear deep inelastic scattering is considered in the framework of a model in which the current operator explicitly satisfies Poincare invariance and current conservation. The results considerably differ from the standard ones at small values of the Bjorken variable $x$. In particular, it is impossible to extract the neutron structure functions from the deuteron data at x<0.01 and we predict that the behavior of the deuteron structure functions at low x and large momentum transfer considerably differs from the behavior of the nucleon structure functions at such conditions. We also argue that for heavier nuclei the effect of the final state interaction is important even in the Bjorken limit.
A critical discussion of the recently published results [Muller, A. and Aschenbach, B. 2007 Class. Quantum Grav. 24, p. 2637; arXiv:0704.3963] on the non-monotonic orbital velocity profiles of the Keplerian motion of test particles and l = const motion of test perfect fluid around K(a)dS black holes is given, and the discrepancies concerning the existence of the non-monotonicity in dependence on the spacetime parameters are corrected. Moreover, a new non-monotonic behaviour of the Keplerian orbital velocity in the Kerr-anti-de Sitter spacetimes is highlighted.
The rules associated with propositional logic programs and the stable model semantics are not expressive enough to let one write concise programs. This problem is alleviated by introducing some new types of propositional rules. Together with a decision procedure that has been used as a base for an efficient implementation, the new rules supplant the standard ones in practical applications of the stable model semantics.
We generalize the Hitchin-Kobayashi correspondence between semistability and the existence of approximate Hermitian-Yang-Mills structures to the case of principal Higgs bundles. We prove that a principal Higgs bundle on a compact Kaehler manifold, with structure group a connected linear algebraic reductive group, is semistable if and only if it admits an approximate Hermitian-Yang-Mills structure.
A well-known challenge in applying deep-learning methods to omnidirectional images is spherical distortion. In dense regression tasks such as depth estimation, where structural details are required, using a vanilla CNN layer on the distorted 360 image results in undesired information loss. In this paper, we propose a 360 monocular depth estimation pipeline, OmniFusion, to tackle the spherical distortion issue. Our pipeline transforms a 360 image into less-distorted perspective patches (i.e. tangent images) to obtain patch-wise predictions via CNN, and then merges the patch-wise results into the final output. To handle the discrepancy between patch-wise predictions, which is a major issue affecting the merging quality, we propose a new framework with the following key components. First, we propose a geometry-aware feature fusion mechanism that combines 3D geometric features with 2D image features to compensate for the patch-wise discrepancy. Second, we employ the self-attention-based transformer architecture to conduct a global aggregation of patch-wise information, which further improves the consistency. Last, we introduce an iterative depth refinement mechanism to further refine the estimated depth based on the more accurate geometric features. Experiments show that our method greatly mitigates the distortion issue and achieves state-of-the-art performance on several 360 monocular depth estimation benchmark datasets.
The effective degrees of freedom of the Quark-Gluon Plasma are studied in the temperature range $\sim 1-2$ $ T_c$. Employing lattice results for the pressure and the energy density, we constrain the quasiparticle chiral invariant mass to be of order 200 MeV and the effective number of bosonic resonant states to be at most of order $\sim 10$. The chiral mass and the effective number of bosonic degrees of freedom decrease with increasing temperature and at $T \sim 2$ $T_c$ only quark and gluon quasiparticles survive. Some remarks regarding the role of the gluon condensation and the baryon number-strangeness correlation are also presented.
The low thermal conductivity of piezoelectric perovskites is a challenge for high power transducer applications. We report first principles calculations of the thermal conductivity of ferroelectric PbTiO$_3$ and the cubic nearly ferroelectric perovskite KTaO$_3$. The calculated thermal conductivity of PbTiO$_3$ is much lower than that of KTaO$_3$ in accord with experiment. Analysis of the results shows that the reason for the low thermal conductivity of PbTiO$_3$ is the presence of low frequency optical phonons associated with the polar modes. These are less dispersive in PbTiO$_3$, leading to a large three phonon scattering phase space. These differences between the two materials are associated with the $A$-site driven ferroelectricity of PbTiO$_3$ in contrast to the $B$-site driven near ferroelectricity of KTaO$_3$. The results are discussed in the context of modification of the thermal conductivity of electroactive materials.
In this paper we investigate spectral properties of Laplacians on Rooms and Passages domains. In the first part, we use Dirichlet-Neumann bracketing techniques to show that for the Neumann Laplacian in certain Rooms and Passages domains the second term of the asymptotic expansion of the counting function is of order $\sqrt{\lambda}$. For the Dirichlet Laplacian our methods only give an upper estimate of the form $\sqrt{\lambda}$. In the second part of the paper, we consider the relationship between Neumann Laplacians on Rooms and Passages domains and Sturm-Liouville operators on the skeleton.
We discuss a (10+2)D N=(1,1) superalgebra and its projections to M-theory, type IIA and IIB algebras. From the complete classification of a second-rank central term valued in the so(10,2) algebra, we find all possible BPS states coming from this term. We show that, among them, there are two types of 1/2-susy BPS configurations; one corresponds to a super (2+2)-brane while another one arises from a nilpotent element in so(10,2).
This paper presents a novel deep architecture for saliency prediction. Current state-of-the-art models for saliency prediction employ Fully Convolutional networks that perform a non-linear combination of features extracted from the last convolutional layer to predict saliency maps. We propose an architecture which, instead, combines features extracted at different levels of a Convolutional Neural Network (CNN). Our model is composed of three main blocks: a feature extraction CNN, a feature encoding network, that weights low and high level feature maps, and a prior learning network. We compare our solution with state-of-the-art saliency models on two public benchmark datasets. Results show that our model outperforms the state of the art under all evaluation metrics on the SALICON dataset, which is currently the largest public dataset for saliency prediction, and achieves competitive results on the MIT300 benchmark.
Since a Poisson structure is a smooth bivector field, we use a ring-valued sheaf $\OO_{X}$ on a manifold with corners $X$ and interpret $\OO_{X}(U)$ as the ring of admissible smooth functions on an open subset $U$ of $X$. In this way, a Poisson structure on $(X, \OO_{X})$ is a sheaf morphism $$ \{-, -\}: \OO_{X} \times \OO_{X} \longrightarrow \OO_{X} $$ which satisfies the Leibniz rule and also the Jacobi identity.
Canada's access to neutron beams for neutron scattering was significantly curtailed in 2018 with the closure of the National Research Universal (NRU) reactor in Chalk River, Ontario, Canada. New sources are needed for the long-term; otherwise, access will only become harder as the global supply shrinks. Compact Accelerator-based Neutron Sources (CANS) offer the possibility of an intense source of neutrons with a capital cost significantly lower than spallation sources. In this paper, we propose a CANS for Canada. The proposal is staged with the first stage offering a medium neutron-flux, linac-based approach for neutron scattering that is also coupled with a boron neutron capture therapy (BNCT) station and a positron emission tomography (PET) isotope station. The first stage will serve as a prototype for a second stage: a higher brightness, higher cost facility that could be viewed as a national centre for neutron applications.
On the one hand, the correctness of routing protocols in networks is an issue of utmost importance for guaranteeing the delivery of messages from any source to any target. On the other hand, a large collection of routing schemes have been proposed during the last two decades, with the objective of transmitting messages along short routes, while keeping the routing tables small. Regrettably, all these schemes share the property that an adversary may modify the content of the routing tables with the objective of, e.g., blocking the delivery of messages between some pairs of nodes, without being detected by any node. In this paper, we present a simple certification mechanism which enables the nodes to locally detect any alteration of their routing tables. In particular, we show how to locally verify the stretch-3 routing scheme by Thorup and Zwick [SPAA 2001] by adding certificates of $\widetilde{O}(\sqrt{n})$ bits at each node in $n$-node networks, that is, by keeping the memory size of the same order of magnitude as the original routing tables. We also propose a new name-independent routing scheme using routing tables of size $\widetilde{O}(\sqrt{n})$ bits. This new routing scheme can be locally verified using certificates of $\widetilde{O}(\sqrt{n})$ bits. Its stretch is 3 if using handshaking, and 5 otherwise.
A concise presentation of Schrodinger's ancilla theorem (1936 Proc. Camb. Phil. Soc. 32, 446) and its several recent rediscoveries.
We describe the latest results obtained by the CMS Collaboration on top quark spin and polarization properties. The top quark spin asymmetry is measured both targeting single-top quark production in the $t$-channel and single-top quark production in association with a Z boson. Additionally, all the independent coefficients of the spin-dependent part of the top quark-antiquark production density matrix are measured and the results are extrapolated to the High-Luminosity LHC scenario.
The precision of the parallax measurements by Gaia is unprecedented. As of Gaia Data Release 2, the number of known nearby open clusters has increased. Some of the clusters appear to be relatively close to each other and form aggregates, which makes them interesting objects to study. We study the aggregates of clusters which share several of the assigned member stars in relatively narrow volumes of the phase space. Using the most recent list of open clusters, we compare the cited central parallaxes with the histograms of parallax distributions of cluster aggregates. The aggregates were chosen based on the member stars which are shared by multiple clusters. Many of the clusters in the aggregates have been assigned parallaxes which coincide with the histograms. However, clusters that share a large number of members in a small volume of the phase space display parallax distributions which do not coincide with the values from the literature. This is the result of ignoring the possibility of assigning multiple probabilities to a single star. We propose that this small number of clusters should be analysed anew.
In this paper we study the Hawking radiation in Reissner-Nordstrom and Kerr-Newman black holes by considering the charge to be a function of the radial coordinate.
As a general rule, it is considered that the global gauge invariance of an action integral does not cause the occurrence of a gauge field. However, in this paper we demonstrate that when the so-called localized assumption is excluded, a gauge field will be induced by the global gauge invariance of the action integral. An example is given to support this conclusion.
We present a pipeline of Image to Vector (Img2Vec) for masked image modeling (MIM) with deep features. To study which type of deep features is appropriate for MIM as a learning target, we propose a simple MIM framework with a series of well-trained self-supervised models to convert an Image to a feature Vector as the learning target of MIM, where the feature extractor is also known as a teacher model. Surprisingly, we empirically find that an MIM model benefits more from image features generated by some lighter models (e.g., ResNet-50, 26M) than from those by a cumbersome teacher like Transformer-based models (e.g., ViT-Large, 307M). To analyze this remarkable phenomenon, we devise a novel attribute, token diversity, to evaluate the characteristics of generated features from different models. Token diversity measures the feature dissimilarity among different tokens. Through extensive experiments and visualizations, we hypothesize that beyond the acknowledgment that a large model can improve MIM, a high token diversity of a teacher model is also crucial. Based on the above discussion, Img2Vec adopts a teacher model with high token diversity to generate image features. Img2Vec pre-trained on ImageNet unlabeled data with ViT-B yields 85.1\% top-1 accuracy on fine-tuning. Moreover, we scale up Img2Vec on larger models, ViT-L and ViT-H, and get $86.7\%$ and $87.5\%$ accuracy respectively. It also achieves state-of-the-art results on other downstream tasks, e.g., 51.8\% mAP on COCO and 50.7\% mIoU on ADE20K. Img2Vec is a simple yet effective framework tailored to deep feature MIM learning, accomplishing superb comprehensive performance on representative vision tasks.
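As one plausible instantiation of the token-diversity attribute (average pairwise feature dissimilarity among tokens; the paper's exact formula may differ), a minimal sketch:

```python
import torch

def token_diversity(features):
    """Average pairwise (1 - cosine similarity) among token features.

    features: (num_tokens, dim) tensor from a teacher model.
    One plausible instantiation of 'token diversity'; the paper's exact
    definition may differ.
    """
    f = torch.nn.functional.normalize(features, dim=-1)
    sim = f @ f.T                                  # (N, N) cosine similarities
    n = sim.size(0)
    off_diag = sim.sum() - sim.diagonal().sum()    # exclude self-similarity
    return 1.0 - off_diag / (n * (n - 1))
```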
Many robot applications call for autonomous exploration and mapping of unknown and unstructured environments. Information-based exploration techniques, such as Cauchy-Schwarz quadratic mutual information (CSQMI) and fast Shannon mutual information (FSMI), have successfully achieved active binary occupancy mapping with range measurements. However, as we envision robots performing complex tasks specified with semantically meaningful objects, it is necessary to capture semantic categories in the measurements, map representation, and exploration objective. This work develops a Bayesian multi-class mapping algorithm utilizing range-category measurements. We derive a closed-form efficiently computable lower bound for the Shannon mutual information between the multi-class map and the measurements. The bound allows rapid evaluation of many potential robot trajectories for autonomous exploration and mapping. We compare our method against frontier-based and FSMI exploration and apply it in a 3-D photo-realistic simulation environment.
Using a model for self-regulated growth of black holes (BHs) in mergers involving gas-rich galaxies, we study the relationship between quasars and the population of merging galaxies and predict the merger-induced star formation rate density of the Universe. Mergers drive nuclear gas inflows, fueling starbursts and 'buried quasars' until accretion feedback expels the gas, rendering a briefly visible optical quasar. Star formation is shut down and accretion declines, leaving a passively evolving remnant with properties typical of red, elliptical galaxies. Based on evolution of these events in our simulations, we demonstrate that the observed statistics of merger rates, luminosity functions (LFs) and mass functions, SFR distributions, specific SFRs, quasar and quasar host galaxy LFs, and elliptical/red galaxy LFs are self-consistent and follow from one another as predicted by the merger hypothesis. We use our simulations to de-convolve both quasar and merging galaxy LFs to determine the birthrate of black holes of a given final mass and merger rates as a function of stellar mass. We use this to predict the merging galaxy LF in several observed wavebands, color-magnitude relations, mass functions, absolute and specific SFR distributions and SFR density, and quasar host galaxy LFs, as a function of redshift from z=0-6. We invert this and predict e.g. quasar LFs from observed merger LFs or SFR distributions. Our results agree well with observations, but idealized models of quasar lightcurves are ruled out by comparison of merger and quasar observations at >99.9% confidence. Using only observations of quasars, we estimate the contribution of mergers to the SFR density of the Universe even to high redshifts z~4.
In this brief note, we show how to apply Kummer's and other quadratic transformation formulas for Gauss' and generalized hypergeometric functions in order to obtain transformation and summation formulas for series with harmonic numbers that contain one or two continuous parameters. We also give a generating function of the sequence $\frac{(a)_n (1-a)_n}{(n!)^2}H_n$ as a combination of Gauss hypergeometric function and elementary functions.
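As an example of the quadratic transformations employed, Kummer's transformation for the Gauss function can be written (in the form given in Abramowitz & Stegun) as
$$ {}_2F_1\!\left(a, b;\, 1+a-b;\, z\right) = (1-z)^{-a}\, {}_2F_1\!\left(\tfrac{a}{2},\, \tfrac{1+a}{2}-b;\, 1+a-b;\, \tfrac{-4z}{(1-z)^{2}}\right). $$
One standard route from such identities to harmonic-number series (not necessarily the one taken here) is differentiation with respect to a parameter, using $\frac{d}{da}(a)_n = (a)_n\left(\psi(a+n)-\psi(a)\right)$ together with $\psi(1+n)-\psi(1)=H_n$.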
Using an effective hadronic Lagrangian with physical hadron masses and coupling constants determined either empirically or from SU(4) flavor symmetry, we study the production cross sections of charm mesons from pion and rho meson interactions with nucleons. With a cutoff parameter of 1 GeV at interaction vertices as usually used in studying the cross sections for $J/\psi$ absorption and charm meson scattering by hadrons, we find that the cross sections for charm meson production have values of a few tenths of a mb and are dominated by the s-channel nucleon pole diagram. Relevance of these reactions to charm meson production in relativistic heavy ion collisions is discussed.
We discuss the gauge dependence of the definitions of physical parameters under the on-shell and pole mass renormalization prescriptions. Through two-loop-level calculations we prove for the first time that the on-shell mass renormalization prescription makes physical results gauge dependent. On the other hand, such gauge dependence does not appear in the results of the pole mass renormalization prescription. Our calculation also implies that the difference in physical results between the two mass renormalization prescriptions cannot be neglected at the two-loop level.
Music genre classification has become increasingly critical with the advent of various streaming applications. Nowadays, it is hard to imagine searching for music in a sophisticated music app using only the artist's name and song title. It is also difficult to classify music correctly because the information linked to music, such as region, artist, album, or non-album, is so variable. This paper presents a study on music genre classification using a combination of Digital Signal Processing (DSP) and Deep Learning (DL) techniques. A novel algorithm is proposed that utilizes both DSP and DL methods to extract relevant features from audio signals and classify them into various genres. The algorithm was tested on the GTZAN dataset and achieved high accuracy. An end-to-end deployment architecture is also proposed for integration into music-related applications. The performance of the algorithm is analyzed and future directions for improvement are discussed. The proposed DSP- and DL-based music genre classification algorithm and deployment architecture demonstrate a promising approach for music genre classification.
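A minimal sketch of such a DSP + DL pipeline on GTZAN-style clips, assuming librosa for the DSP stage; the feature set and classifier are illustrative choices, not the paper's configuration:

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_features(path, sr=22050, n_mfcc=13):
    """DSP stage: summarize a clip by the mean and std of its MFCCs."""
    y, sr = librosa.load(path, sr=sr, duration=30.0)   # GTZAN clips are 30 s
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Learning stage (illustrative): a small MLP over the pooled MFCC features,
# with X of shape (n_clips, 2*n_mfcc) and y the genre labels.
# clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)
```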
We present a theoretical model of the near-surface shear layer (NSSL) of the Sun. Convection cells deeper down are affected by the Sun's rotation, but this is not the case in a layer just below the solar surface due to the smallness of the convection cells there. Based on this idea, we show that the thermal wind balance equation (the basic equation in the theory of the meridional circulation which holds inside the convection zone) can be solved to obtain the structure of the NSSL, matching observational data remarkably well.
There is increasing need to assess the impact and the interpretation of dim = 6 and dim = 8 operators within the context of the Standard Model Effective Field Theory (SMEFT). The observational and mathematical consistency of a construct based on dim = 6 and dim = 8 operators is critically examined in the light of known theoretical results. The discussion is based on a general dim = 4 theory X and its effective extension, XEFT; it includes elimination of redundant operators and their higher order compensation, SMEFT in comparison with ultraviolet completions incorporating a proliferation of scalars and mixings, canonical normalization of effective field theories, gauge invariance and gauge fixing, role of tadpoles when constructing XEFT at NLO, heavy-light contributions to the low energy limit of theories containing bosons and fermions, one-loop matching, EFT fits and their interpretation and effective field theory interpretation of derivative-coupled field theories.
Recent determinations of the radial distributions of mono-metallicity populations (MMPs, i.e., stars in narrow bins in [Fe/H] within wider [$\alpha$/Fe] ranges) by the SDSS-III/APOGEE DR12 survey cast doubts on the classical thin - thick disk dichotomy. The analysis of these observations leads to the non-[$\alpha$/Fe] enhanced populations splitting into MMPs with different surface densities according to their [Fe/H]. By contrast, [$\alpha$/Fe] enhanced (i.e., old) populations show a homogeneous behaviour. We analyze these results in the wider context of disk formation within non-isolated halos embedded in the Cosmic Web, resulting in a two-phase mass assembly. By performing hydrodynamical simulations in the context of the $\rm \Lambda CDM$ model, we have found that the two phases of halo mass assembly (an early, fast phase, followed by a slow one, with low mass assembly rates) are very relevant to determine the radial structure of MMP distributions, while radial mixing has only a secondary role, depending on the coeval dynamical and/or destabilizing events. Indeed, while the frequent violent dynamical events occurring at high redshift remove metallicity gradients, and imply efficient stellar mixing, the relatively quiescent dynamics after the transition keeps [Fe/H] gaseous gradients and prevents newly formed stars from suffering strong radial mixing. By linking the two-component disk concept with the two-phase halo mass assembly scenario, our results set halo virialization (the event marking the transition from the fast to the slow phases) as the separating event marking periods characterized by different physical conditions under which thick and thin disk stars were born.
Simple dynamics, few available decay channels, and highly controlled radiative and loop corrections, make pion and muon decays a sensitive means of exploring details of the underlying symmetries. We review the current status of the rare decays: pi+ -> e+ nu, pi+ -> e+ nu gamma, pi+ -> pi0 e+ nu, and mu+ -> e+ nu nu-bar gamma. For the latter we report new preliminary values for the branching ratio B(E_gamma >10 MeV, theta_(e-gamma) > 30deg) = 4.365 (9)_stat (42)_syst x 10^{-3}, and the decay parameter eta-bar = 0.006 (17)_stat (18)_syst, both in excellent agreement with standard model predictions. We review recent measurements, particularly by the PIBETA and PEN experiments, and near-term prospects for improvement. These and other similar precise low energy studies complement modern collider results materially.
We consider the barotropic instability of shear flows for incompressible fluids with Coriolis effects. For a class of shear flows, we develop a new method to find sharp stability conditions. We study the flow with a sinusoidal profile in detail and obtain the sharp stability boundary in the whole parameter space, which corrects previous results in the fluid literature. Our new results are confirmed by more accurate numerical computation. The addition of the Coriolis force is found to bring fundamental changes to the stability of shear flows. Moreover, we study dynamical behaviors near the shear flows, including the bifurcation of nontrivial traveling wave solutions and linear inviscid damping. The first ingredient of our proof is a careful classification of the neutral modes. The second is to write the linearized fluid equation in a Hamiltonian form and then use an instability index theory for general Hamiltonian PDEs. The last is to study the singular and non-resonant neutral modes using Sturm-Liouville theory and hypergeometric functions.
In this paper, genetic programming reinforcement learning (GPRL) is utilized to generate human-interpretable control policies for a Chylla-Haase polymerization reactor. Such continuously stirred tank reactors (CSTRs) with jacket cooling are widely used in the chemical industry for the production of fine chemicals, pigments, polymers, and medical products. Despite appearing rather simple, controlling CSTRs in real-world applications is quite a challenging problem. GPRL utilizes already existing data from the reactor and fully automatically generates a set of optimized, simple control strategies, so-called policies, from which the domain expert can choose. Note that these policies are white-box models of low complexity, which makes them easy to validate and implement in the target control system, e.g., SIMATIC PCS 7. However, despite its low complexity, the automatically generated policy yields high performance in terms of reactor temperature control deviation, which we evaluate empirically on the original reactor template.
We study the repulsive Fermi polaron in a two-component, two-dimensional system of fermionic atoms inspired by the results of a recent experiment with $^{173}$Yb atoms [N. Darkwah Oppong \textit{et al.}, Phys. Rev. Lett. \textbf{122}, 193604 (2019)]. We use the diffusion Monte Carlo method to report properties such as the polaron energy and the quasi-particle residue that have been measured in that experiment. To provide insight into the quasi-particle character of the problem, we also report results for the effective mass. We show that the effective range, together with the scattering length, is needed to reproduce the experimental results. Using different model potentials for the interaction between the Fermi sea and the impurity, we show that it is possible to establish a regime of universality, in terms of these two parameters, that includes the whole experimental regime. This illustrates the relevance of quantum fluctuations and beyond-mean-field effects for a correct description of the Fermi polaron problem.
We present BlockBERT, a lightweight and efficient BERT model for better modeling long-distance dependencies. Our model extends BERT by introducing sparse block structures into the attention matrix to reduce both memory consumption and training/inference time, which also enables attention heads to capture either short- or long-range contextual information. We conduct experiments on language model pre-training and several benchmark question answering datasets with various paragraph lengths. BlockBERT uses 18.7-36.1% less memory and 12.0-25.1% less time to learn the model. During testing, BlockBERT saves 27.8% inference time, while having comparable and sometimes better prediction accuracy, compared to an advanced BERT-based model, RoBERTa.
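To make the sparse block structure concrete, here is a minimal sketch of blockwise masked attention; the helper names, the toy permutation scheme, and all parameters are illustrative assumptions, not BlockBERT's actual implementation:

```python
# Sketch: attention restricted to blocks of the sequence. An identity
# block assignment gives short-range (local) heads; permuting the blocks
# gives heads that capture longer-range context.
import numpy as np

def block_mask(seq_len, num_blocks, perm=None):
    """Boolean mask allowing attention only between matched blocks.

    perm maps query-block i -> key-block perm[i]; the identity gives
    block-diagonal (short-range) heads, a shifted permutation gives
    longer-range heads."""
    if perm is None:
        perm = list(range(num_blocks))
    size = seq_len // num_blocks
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i, j in enumerate(perm):
        mask[i * size:(i + 1) * size, j * size:(j + 1) * size] = True
    return mask

def masked_attention(Q, K, V, mask):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    scores = np.where(mask, scores, -1e9)   # disallowed positions get ~zero weight
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

rng = np.random.default_rng(0)
L, d, n = 8, 4, 2
Q, K, V = rng.normal(size=(3, L, d))
short = masked_attention(Q, K, V, block_mask(L, n))           # local head
crossed = masked_attention(Q, K, V, block_mask(L, n, [1, 0]))  # long-range head
```

Since each head only stores the permitted blocks of the attention matrix, memory scales with the block size rather than the full sequence length, which is the source of the reported savings.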
Multiple tracers of the same surveyed volume can enhance the signal-to-noise on a measurement of local primordial non-Gaussianity and the relativistic projections. Increasing the number of tracers comparably increases the number of shot noise terms required to describe the stochasticity of the data. Although the shot noise is white on large scales, it is desirable to investigate the extent to which it can degrade constraints on the parameters of interest. In a multi-tracer analysis of the power spectrum, a marginalization over shot noise does not degrade the constraints on $f_\text{NL}$ by more than $\sim 30$% so long as halos of mass $M\lesssim 10^{12}M_\odot$ are resolved. However, ignoring cross shot noise terms induces large systematics on a measurement of $f_\text{NL}$ at redshift $z<1$ when small mass halos are resolved. These effects are less severe for the relativistic projections, especially for the dipole term. In the case of a low and high mass tracer, the optimal sample division maximizes the signal-to-noise on $f_\text{NL}$ and the projection effects simultaneously, reducing the errors to the level of $\sim 10$ consecutive mass bins of equal number density. We also emphasize that the non-Poissonian noise corrections that arise from small-scale clustering effects cannot be measured with random dilutions of the data. Therefore, they must either be properly modeled or marginalized over.
Tensors are multiway arrays of data, and transverse operators are the operators that change the frame of reference. We develop the spectral theory of transverse tensor operators and apply it to problems closely related to classifying quantum states of matter, isomorphism in algebra, clustering in data, and the design of high performance tensor type-systems. We prove the existence and uniqueness of the optimally-compressed tensor product spaces over algebras, called \emph{densors}. This gives structural insights for tensors and improves how we recognize tensors in arbitrary reference frames. Using work of Eisenbud--Sturmfels on binomial ideals, we classify the maximal groups and categories of transverse operators, leading us to general tensor data types and categorical tensor decompositions, amenable to theorems like Jordan--H\"older and Krull--Schmidt. All categorical tensor substructure is detected by transverse operators whose spectra contain a Stanley--Reisner ideal, which can be analyzed with combinatorial and geometrical tools via their simplicial complexes. Underpinning this is a ternary Galois correspondence between tensor spaces, multivariable polynomial ideals, and transverse operators. This correspondence can be computed in polynomial time. We give an implementation in the computer algebra system \textsf{Magma}.
The joint task of Dialog Sentiment Classification (DSC) and Act Recognition (DAR) aims to predict the sentiment label and act label for each utterance in a dialog simultaneously. However, current methods encode the dialog context in only one direction, which limits their ability to thoroughly comprehend the context. Moreover, these methods overlook the explicit correlations between sentiment and act labels, which leads to an insufficient ability to capture rich sentiment and act clues and hinders effective and accurate reasoning. To address these issues, we propose a Bi-directional Multi-hop Inference Model (BMIM) that leverages a feature selection network and a bi-directional multi-hop inference network to iteratively extract and integrate rich sentiment and act clues in a bi-directional manner. We also employ contrastive learning and dual learning to explicitly model the correlations of sentiment and act labels. Our experiments on two widely-used datasets show that BMIM outperforms state-of-the-art baselines by at least 2.6% on F1 score in DAR and 1.4% on F1 score in DSC. Additionally, our proposed model not only improves performance but also enhances the interpretability of the joint sentiment and act prediction task.
In order to fully harness the potential of dielectric elastomer actuators (DEAs) in soft robots, advanced control methods are needed. An important groundwork for this is the development of a control-oriented model that can adequately describe the underlying dynamics of a DEA. A common feature of existing models is that they were always developed for custom-made DEAs. This makes the modelling process easier, as all specifications and the structure of the actuator are well known. In the case of a commercial actuator, however, only the information from the manufacturer is available and must be checked or completed during the modelling process. The aim of this paper is to explore how a commercial stacked silicone-based DEA can be modelled and how complex the model should be to properly replicate the features of the actuator. The static description demonstrates the suitability of Hooke's law. For the dynamic description, it is shown that no viscoelastic model is needed for control-oriented modelling. However, if all features of the DEA are to be captured, the generalized Kelvin-Maxwell model with three Maxwell elements shows good results, stability and computational efficiency.
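As a rough illustration of the model class only, the sketch below simulates a generalized Kelvin-Maxwell (Wiechert) element with three Maxwell branches under a prescribed strain history; the function name, the explicit Euler integration, and the parameter values are our assumptions, not the model identified in the paper:

```python
# Sketch: spring k0 in parallel with Maxwell branches (k_i in series with
# dashpot c_i). Each branch's dashpot strain xi_i relaxes toward the input
# strain, and the total stress is the sum of the branch stresses.
import numpy as np

def kelvin_maxwell_stress(strain, dt, k0, ks, cs):
    """strain: sampled strain history; ks, cs: branch stiffnesses/dampings."""
    ks, cs = np.asarray(ks, float), np.asarray(cs, float)
    xi = np.zeros_like(ks)                    # internal (dashpot) strains
    stress = np.empty_like(strain)
    for n, e in enumerate(strain):
        xi += dt * ks / cs * (e - xi)         # dashpot strain relaxes toward input
        stress[n] = k0 * e + np.dot(ks, e - xi)   # spring + branch stresses
    return stress

# Example: stress relaxation under a step strain with three branches.
t = np.arange(0.0, 5.0, 1e-3)
sigma = kelvin_maxwell_stress(np.ones_like(t), 1e-3, k0=1.0,
                              ks=[0.5, 0.3, 0.2], cs=[0.05, 0.5, 5.0])
```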
We extend the positivity-preserving method of Zhang & Shu (2010, JCP, 229, 3091-3120) to simulate the advection of neutral particles in phase space using curvilinear coordinates. The ability to utilize these coordinates is important for non-equilibrium transport problems in general relativity and also in science and engineering applications with specific geometries. The method achieves high-order accuracy using Discontinuous Galerkin (DG) discretization of phase space and strong stability-preserving Runge-Kutta (SSP-RK) time integration. Special care is taken to ensure that the method preserves strict bounds for the phase space distribution function $f$; i.e., $f\in[0,1]$. The combination of suitable CFL conditions and the use of the high-order limiter proposed in Zhang & Shu (2010) is sufficient to ensure positivity of the distribution function. However, to ensure that the distribution function satisfies the upper bound, the discretization must, in addition, preserve the divergence-free property of the phase space flow. Proofs that highlight the necessary conditions are presented for general curvilinear coordinates, and the details of these conditions are worked out for some commonly used coordinate systems (i.e., spherical polar spatial coordinates in spherical symmetry and cylindrical spatial coordinates in axial symmetry, both with spherical momentum coordinates). Results from numerical experiments --- including one example in spherical symmetry adopting the Schwarzschild metric --- demonstrate that the method achieves high-order accuracy and that the distribution function satisfies the maximum principle.
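For orientation, here is a minimal sketch of a Zhang & Shu-type bound-preserving limiter acting on the nodal values of a DG polynomial within one cell, assuming the cell average already lies in $[m, M]$ (which the scheme's CFL condition is designed to guarantee); variable names are illustrative:

```python
# Sketch: linearly scale the polynomial toward its cell average so that
# all nodal values land in [m, M]. The cell average itself is unchanged,
# so the limiter is conservative, and it preserves high-order accuracy.
import numpy as np

def zhang_shu_limiter(nodal_values, cell_avg, m=0.0, M=1.0, eps=1e-14):
    fmax, fmin = nodal_values.max(), nodal_values.min()
    theta = min(
        1.0,
        abs((M - cell_avg) / (fmax - cell_avg + eps)),   # fix overshoot above M
        abs((cell_avg - m) / (cell_avg - fmin + eps)),   # fix undershoot below m
    )
    return cell_avg + theta * (nodal_values - cell_avg)

vals = np.array([-0.1, 0.5, 1.2])                  # overshoots [0, 1]
limited = zhang_shu_limiter(vals, vals.mean())     # stays in [0, 1], same mean
```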
The redshift-space distortion (RSD) in the observed distribution of galaxies is known as a powerful probe of cosmology. Observations of large-scale RSD have given tight constraints on the linear growth rate of the large-scale structures in the universe. On the other hand, the small-scale RSD, caused by galaxy random motions inside clusters, has not been much used in cosmology, but it also carries cosmological information because universes with different cosmological parameters have different halo mass functions and virialized velocities. We focus on the projected correlation function $w(r_p)$ and the multipole moments $\xi_l$ on small scales ($1.4$ to $30\ h^{-1}\rm{Mpc}$). Using simulated galaxy samples generated from a physically motivated most bound particle (MBP)-galaxy correspondence scheme in the Multiverse Simulation, we examine the dependence of the small-scale RSD on the cosmological matter density parameter $\Omega_m$, the satellite velocity bias with respect to MBPs, $b_v^s$, and the merger-time-scale parameter $\alpha$. We find that $\alpha=1.5$ gives an excellent fit to the $w(r_p)$ and $\xi_l$ measured from the SDSS-KIAS value added galaxy catalog. We also define the ``strength'' of the Fingers-of-God as the ratio of the parallel and perpendicular sizes of the contour in the two-point correlation function set by a specific threshold value and show that the strength parameter helps constrain $(\Omega_m, b_v^s, \alpha)$ by breaking the degeneracy among them. The resulting parameter values from all measurements are $(\Omega_m,b_v^s)=(0.272\pm0.013,0.982\pm0.040)$, indicating a slight reduction of satellite galaxy velocity relative to the MBP. However, considering that the average MBP speed inside haloes is $0.94$ times the dark matter velocity dispersion, the main drivers behind the galaxy velocity bias are gravitational interactions rather than baryonic effects.
The observed small value of the cosmological constant can be naturally related to the supersymmetry-breaking scale, in agreement with other estimates in particle physics.
Analyzing Event-Triggered Control's (ETC) sampling behaviour is of paramount importance, as it enables formal assessment of its sampling performance and prediction of its sampling patterns. In this work, we formally analyze the sampling behaviour of stochastic linear periodic ETC (PETC) systems by computing bounds on associated metrics. Specifically, we consider functions over sequences of state measurements and intersampling times that can be expressed as average, multiplicative or cumulative rewards, and introduce their expectations as metrics on PETC's sampling behaviour. We compute bounds on these expectations, by constructing appropriate Interval Markov Chains equipped with suitable reward structures, that abstract stochastic PETC's sampling behaviour. Our results are illustrated on a numerical example, for which we compute bounds on the expected average intersampling time and on the probability of triggering with the maximum possible intersampling time in a finite horizon.
We extend the graph convolutional network method for deep learning on graph data to higher order in terms of neighboring nodes. In order to construct representations for a node in a graph, in addition to the features of the node and its immediate neighboring nodes, we also include more distant nodes in the calculations. In experiments with a number of publicly available citation graph datasets, we show that visiting higher-order neighbors pays off, outperforming the original model especially when only a limited number of labeled data points is available for training.
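A toy sketch of the idea follows: aggregate k-hop neighbourhoods via powers of the normalized adjacency matrix, one weight matrix per hop. The layer structure and names are our illustration, not the authors' exact architecture:

```python
# Sketch: a higher-order graph convolution layer. Hop k contributes
# A_norm^k X W_k, so nodes beyond the immediate neighbourhood also
# inform each representation.
import numpy as np

def normalized_adjacency(A):
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def higher_order_gcn_layer(A, X, weights):
    """One layer: ReLU( sum_k A_norm^k X W_k ), k = 1..len(weights)."""
    A_norm = normalized_adjacency(A)
    out = np.zeros((X.shape[0], weights[0].shape[1]))
    A_pow = np.eye(A.shape[0])
    for W_k in weights:
        A_pow = A_pow @ A_norm               # next power = next hop
        out += A_pow @ X @ W_k
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                        # symmetric, no self-loops
X = rng.normal(size=(6, 5))
W = [rng.normal(size=(5, 4)) * 0.1 for _ in range(2)]  # 1-hop and 2-hop weights
H = higher_order_gcn_layer(A, X, W)                    # (6, 4) node embeddings
```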
Azimuthal angle correlations of charged hadrons were measured in $\sqrt s_{NN}$ = 2.76 TeV PbPb collisions by the CMS experiment. The distributions exhibit anisotropies that are correlated with the event-by-event orientation of the reaction plane. Several methods were employed to extract the strength of the signal: the event-plane, cumulant and Lee-Yang Zeros methods. These methods have different sensitivity to correlations that are not caused by the collective motion in the system (non-flow correlations due to jets, resonance decays, and quantum correlations). The second Fourier coefficient of the charged hadron azimuthal distributions was measured as a function of transverse momentum, pseudorapidity, and collision centrality in a broad kinematic range: $0.3 < p_T < 12.0$ GeV/c, $|\eta| < 2.4$. In addition, the third through sixth Fourier components were measured at midrapidity using selected methods.
We consider the effect of the optical depth of the 2^3S level on the nebular recombination spectrum of He I for a spherically symmetric nebula with no systematic velocity gradients. These calculations, using many improvements in atomic data, can be used in place of the earlier calculations of Robbins. We give representative Case B line fluxes for UV, optical, and IR emission lines over a range of physical conditions: T=5000-20000 K, n_{e}=1-10^{8} cm^{-3}, and tau_{3889}=0-100. A FORTRAN program for calculating emissivities for all lines arising from quantum levels with n < 11 is also available from the authors. We present a special set of fitting formulae for the physical conditions relevant to low metallicity extragalactic H II regions: T=12,000-20,000 K, n_{e}=1-300 cm^{-3}, and tau_{3889} < 2.0. For this range of physical conditions, the Case B line fluxes of the bright optical lines 4471 A, 5876 A, and 6678 A change by less than 1%, in agreement with previous studies. However, the 7065 A corrections are much smaller than those calculated by Izotov & Thuan based on the earlier calculations by Robbins. This means that the 7065 A line is a better density diagnostic than previously thought. Two corrections to the fitting functions calculated in our previous work are also given.
The Burnett-Kroll (BK) theorem states that, for radiative decays, the interference terms of ${\cal O}(\omega^{-1})$ in the photon energy $\omega$ vanish after summing over the polarizations of the particles involved. Using radiative decays of vector mesons, we show that if the vector meson is polarized, the ${\cal O}(\omega^{-1})$ terms vanish only for the canonical value of the magnetic dipole moment of the vector meson, namely ${\bf g}=2$ in Bohr magneton units. A subtle cancellation of all ${\cal O}(\omega^{-1})$ terms happens when summing over all polarizations, recovering the Burnett-Kroll result. We also show the source of these terms and the corresponding cancellation for the unpolarized case, and exhibit a global structure that can make them individually vanish in a particular kinematical region.
We analyze the structure and connectivity of the distinct morphologies that define the Cosmic Web. With the help of our Multiscale Morphology Filter (MMF), we dissect the matter distribution of a cosmological $\Lambda$CDM N-body computer simulation into clusters, filaments and walls. The MMF is ideally suited to address both the anisotropic morphological character of filaments and sheets, as well as the multiscale nature of the hierarchically evolved cosmic matter distribution. The results of our study may be summarized as follows: i).- While all morphologies occupy a roughly well defined range in density, this alone is not sufficient to differentiate between them given their overlap. Environment defined only in terms of density fails to incorporate the intrinsic dynamics of each morphology. This plays an important role in both linear and non-linear interactions between haloes. ii).- Most of the mass in the Universe is concentrated in filaments, narrowly followed by clusters. In terms of volume, clusters represent only a minute fraction, and filaments not more than 9%. Walls are relatively inconspicuous in terms of mass and volume. iii).- On average, massive clusters are connected to more filaments than low mass clusters. Clusters with $M \sim 10^{14}$ M$_{\odot}$ h$^{-1}$ have on average two connecting filaments, while clusters with $M \geq 10^{15}$ M$_{\odot}$ h$^{-1}$ have on average five connecting filaments. iv).- Density profiles indicate that the typical width of filaments is 2 Mpc h$^{-1}$. Walls have less well defined boundaries, with widths between 5-8 Mpc h$^{-1}$. In their interior, filaments have a power-law density profile with slope ${\gamma}\approx -1$, corresponding to an isothermal density profile.
Distributed surveillance systems have become popular in recent years due to security concerns. However, transmitting high dimensional data in bandwidth-limited distributed systems becomes a major challenge. In this paper, we address this issue by proposing a novel probabilistic algorithm based on the divergence between the probability distributions of the visual features in order to reduce their dimensionality and thus save the network bandwidth in distributed wireless smart camera networks. We demonstrate the effectiveness of the proposed approach through extensive experiments on two surveillance recognition tasks.
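One plausible reading of the approach, sketched below: rank feature dimensions by the divergence between their distributions under the two recognition classes and transmit only the top-scoring ones. The histogramming, the choice of a symmetrized KL divergence, and all names are assumptions for illustration, not the paper's exact algorithm:

```python
# Sketch: keep the feature dimensions whose class-conditional histograms
# are most separated, so a camera node can send a much shorter vector.
import numpy as np

def sym_kl(p, q, eps=1e-12):
    """Symmetrized KL divergence between two (unnormalized) histograms."""
    p = p + eps; q = q + eps
    p = p / p.sum(); q = q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def select_features(X_pos, X_neg, keep, bins=16):
    """Return indices of the `keep` most class-discriminative dimensions."""
    scores = []
    for j in range(X_pos.shape[1]):
        lo = min(X_pos[:, j].min(), X_neg[:, j].min())
        hi = max(X_pos[:, j].max(), X_neg[:, j].max())
        p, _ = np.histogram(X_pos[:, j], bins=bins, range=(lo, hi))
        q, _ = np.histogram(X_neg[:, j], bins=bins, range=(lo, hi))
        scores.append(sym_kl(p.astype(float), q.astype(float)))
    return np.argsort(scores)[::-1][:keep]

rng = np.random.default_rng(0)
X_pos = rng.normal(0.0, 1.0, size=(200, 32))
X_pos[:, 3] += 2.0                            # dimension 3 carries the signal
X_neg = rng.normal(0.0, 1.0, size=(200, 32))
kept = select_features(X_pos, X_neg, keep=4)  # should rank dimension 3 first
```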
We introduce matchmakereft, a fully automated tool to compute the tree-level and one-loop matching of arbitrary models onto arbitrary effective theories. Matchmakereft performs an off-shell matching, using diagrammatic methods and the background field method when gauge theories are involved. The large redundancy inherent to the off-shell matching together with explicit gauge invariance offers a significant number of non-trivial checks of the results provided. These results are given in the physical basis but several intermediate results, including the matching in the Green basis before and after canonical normalization, are given for flexibility and the possibility of further cross-checks. As a non-trivial example we provide the complete matching in the Warsaw basis up to one loop of an extension of the Standard Model with a charge -1 vector-like lepton singlet. Matchmakereft has been built with generality, flexibility and efficiency in mind. These ingredients allow matchmakereft to have many applications beyond the matching between models and effective theories. Some of these applications include the one-loop renormalization of arbitrary theories (including the calculation of the one-loop renormalization group equations for arbitrary theories); the translation between different Green bases for a fixed effective theory or the check of (off-shell) linear independence of the operators in an effective theory. All these applications are performed in a fully automated way by matchmakereft.
In this paper, we characterize all the distributions $F \in \mathcal{D}'(U)$ such that there exists a continuous weak solution $v \in C(U,\mathbb{C}^{n})$ (with $U \subset \Omega$) to the divergence-type equation $$L_{1}^{*}v_{1}+...+L_{n}^{*}v_{n}=F,$$ where $\left\{L_{1},\dots,L_{n}\right\}$ is an elliptic system of linearly independent vector fields with smooth complex coefficients defined on $\Omega \subset \mathbb{R}^{N}$. In the case where $(L_1,\dots, L_n)$ is the usual gradient field on $\mathbb{R}^N$, we recover the classical result for the divergence equation proved by T. De Pauw and W. Pfeffer.
A method, referred to as the principal correlation decomposition (PCD), is proposed in this paper to optimally dissect complex flows into mutually orthogonal modes that are ranked by their correlated energy with an observable. It is particularly suitable for identifying observable-correlated flow structures, while effectively excluding those that are uncorrelated even though they may be highly energetic. Therefore, this method is capable of extracting coherent flow features at very low signal-to-noise ratio (SNR). A numerical validation is conducted and shows that the new method can robustly identify the observable-correlated flow events even when the underlying signal is corrupted by random noise that is four orders of magnitude more energetic. Moreover, the resolution continues to improve as the flow is sampled for a longer duration, which is often readily available in experimental measurements. The method is subsequently used to analyse the unsteady vortex shedding from a cylinder and a subsonic turbulent jet. This new decomposition represents a data-driven method of effective order reduction for highly noisy experimental and numerical data and is very effective in identifying the source and descendent events of a given observable. It is expected to find wide application in the diagnosis of flow observables and their control, such as noise control.
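A hedged sketch of one way to realize an observable-correlated decomposition: project mean-subtracted snapshots onto an orthonormalized observable signal and rank the resulting spatial modes by correlated energy. This is our reading of the idea; the paper's exact PCD construction may differ:

```python
# Sketch: cross-project the snapshot matrix onto the whitened observable,
# then SVD the projection; noise uncorrelated with the observable averages
# out as more snapshots are collected.
import numpy as np

def observable_correlated_modes(snapshots, observable):
    """snapshots: (n_space, n_time); observable: (n_obs, n_time)."""
    U = snapshots - snapshots.mean(axis=1, keepdims=True)
    Y = observable - observable.mean(axis=1, keepdims=True)
    Qy, _ = np.linalg.qr(Y.T)                # orthonormal time basis of Y
    C = U @ Qy                               # cross-projection (n_space, n_obs)
    modes, sing, _ = np.linalg.svd(C, full_matrices=False)
    return modes, sing**2 / U.shape[1]       # spatial modes, correlated energy

# Toy demo: a single coherent structure buried under much stronger noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 400)
y = np.sin(2 * np.pi * t)[None, :]                      # scalar observable
field = np.outer(rng.normal(size=64), np.sin(2 * np.pi * t))
noisy = field + 50 * rng.normal(size=(64, t.size))      # heavy noise
modes, energy = observable_correlated_modes(noisy, y)
```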
Simulation of the dynamics of dust-gas circumstellar discs is crucial for understanding the mechanisms of planet formation. The dynamics of small grains in the disc is stiffly coupled to the gas, while the dynamics of grown solids is decoupled. Moreover, in some parts of the disc the concentration of the dust is low (the dust to gas mass ratio is about 0.01), while in other parts it can be much higher. These factors place high requirements on the numerical methods for disc simulations. In particular, when gas and dust are simulated with two different fluids, explicit methods require a very small timestep (less than the dust stopping time $t_{\rm stop}$, during which the velocity of a solid particle is equalized with the gas velocity) to obtain a solution, while some implicit methods require high temporal resolution to obtain acceptable accuracy. Moreover, recent studies underlined that for smoothed particle hydrodynamics (SPH), when the gas and the dust are simulated with different sets of particles, only high spatial resolution $h<c_{\rm s} t_{\rm stop}$ guarantees suppression of numerical overdissipation due to gas and dust interaction. To address these problems, we developed a fast algorithm based on the ideas of (1) implicit integration of linear (Epstein) drag and (2) exact conservation of local linear momentum. We derived formulas for monodisperse dust-gas in two-fluid SPH and tested the new method on problems with known analytical solutions. We found that our method is a promising alternative to the previously developed two-fluid SPH scheme in the case of stiff linear drag, thanks to the fact that the spatial resolution condition $h<c_{\rm s} t_{\rm stop}$ is no longer required for accurate results.
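A minimal sketch of the two ingredients for a single gas-dust pair: the linear-drag drift decays exactly over the step (hence no stability limit on the timestep), while the barycentric velocity, and with it the local linear momentum, is conserved to machine precision. The SPH bookkeeping is omitted and the names are illustrative, not the paper's formulas:

```python
# Sketch: exact/implicit integration of linear drag for one gas-dust pair.
# Decompose into the conserved barycentric velocity and a decaying drift.
import numpy as np

def drag_update(v_gas, v_dust, eps, t_stop, dt):
    """eps: local dust-to-gas mass ratio; t_stop: Epstein stopping time."""
    v_com = (v_gas + eps * v_dust) / (1.0 + eps)   # conserved barycentric velocity
    w = v_dust - v_gas                              # relative drift
    w = w * np.exp(-(1.0 + eps) * dt / t_stop)      # exact decay, stable for any dt
    # Reconstruct individual velocities; total momentum is unchanged.
    v_gas_new = v_com - eps * w / (1.0 + eps)
    v_dust_new = v_com + w / (1.0 + eps)
    return v_gas_new, v_dust_new

# Even dt >> t_stop is fine: the drift simply relaxes to (near) zero.
vg, vd = drag_update(np.array([0.0]), np.array([1.0]),
                     eps=0.01, t_stop=1e-4, dt=1.0)
```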
The recent LHCb determination of the direct CP asymmetries in the decays $D^0 \to K^+ K^-, \pi^+ \pi^-$ hints at a sizeable breaking of two approximate symmetries of the SM: CP and U-spin. We aim at explaining the data with BSM physics and use the framework of flavorful $Z^\prime$ models. Interestingly, experimental and theoretical constraints very much narrow down the shape of viable models: viable, anomaly-free models are electron- and muon-phobic and feature a light $Z^\prime$ of 10-20 GeV coupling only to right-handed fermions. The $Z^\prime$ can be searched for in low mass dijets at the LHC as well as in dark photon searches. A light $Z^\prime$ of $\sim$ 3 GeV or $\sim$ 5-7 GeV can moreover resolve the longstanding discrepancy in the $J/\psi, \psi^\prime$ branching ratios with pion form factors from fits to $e^+ e^- \to \pi^+ \pi^-$ data, and simultaneously explain the charm CP asymmetries. Smoking gun signatures for this scenario are $\Upsilon$ and charmonium decays into pions, taus or invisibles.
Social networks are usually considered positive sources of social support, a role which has been extensively studied in the context of domestic violence. To victims of abuse, social networks often provide initial emotional and practical help as well as useful information ahead of formal institutions. Recently, however, attention has been paid to the negative responses of social networks. In this article, we advance the theoretical debate on social networks as a source of social support by moving beyond the distinction between positive and negative ties. We do so by proposing the concepts of relational ambivalence and consistency, which describe the interactive processes by which people, intentionally or inadvertently, disregard or align with each other's relational role expectations, thereby undermining or reinforcing individual choices of action. We analyse the qualitative accounts of nineteen female victims of domestic violence in Sweden, who described the responses of their personal networks during and after the abuse. We observe how the relationships embedded in these networks were described in ambivalent and consistent terms, and how they played a role in supporting or undermining women in reframing their loving relationships as abusive; in acknowledging or dismissing perpetrators' responsibility for the abuse; in relieving women from role expectations and obligations or in burdening them with further responsibilities; and in supporting or challenging their pathways out of domestic abuse. Our analysis suggests that social isolation cannot be considered a simple result of a lack of support but of the complex dynamics in which support is offered and accepted, or withdrawn and refused.
In order to better understand the heave observed on the railway roadbed of the French high-speed train (TGV) at Chabrillan in southern France, the swelling behaviour of the expansive clayey marl involved, taken from the site by coring, was investigated. The aim of the study is to analyse the part of the heave induced by soil swelling. First, the swell potential was determined by flooding the soil specimen in an oedometer under its in-situ overburden stress. In addition, in order to assess the swell induced by the excavation undertaken during the construction of the railway, a second method was applied. The soil was first loaded to the in-situ overburden stress existing before the excavation. It was then flooded and unloaded to its current overburden stress (after the excavation). The swell induced by this unloading was considered. Finally, the experimental results obtained were analyzed, together with the results from other laboratory tests performed previously and the data collected from the field monitoring. This study allowed the heave induced by soil swelling to be estimated. Subsequently, the part of the heave due to landslide could be estimated as the difference between the monitored heave and the swelling heave.
Software as a Service (SaaS) is a new software delivery model in which pre-built applications are delivered to customers as a service. SaaS providers aim to attract a large number of tenants (users) with minimal system modifications in order to achieve economies of scale. To this end, SaaS applications have to be customizable to meet the requirements of each tenant. However, due to the rapid growth of SaaS, applications can have thousands of tenants and a huge number of ways to customize them. Modularizing such customizations is still a highly complex task. Additionally, due to the large variation in tenant requirements, no single customization model is appropriate for all tenants. In this paper, we propose a multi-dimensional customization model based on metagraphs. The proposed model addresses the modelling of variability among tenants, describes customizations and their relationships, and guarantees the correctness of SaaS customizations made by tenants.
This paper explores the use of a deformation by a root of unity as a tool to build models with a finite number of states for applications to quantum gravity. The initial motivation for this work was cosmological breaking of supersymmetry. We explain why the project was unsuccessful. What is left are some observations on supersymmetry for q-bosons, an analogy between black holes in de Sitter and properties of quantum groups, and an observation on a noncommutative quantum mechanics model with two degrees of freedom, depending on one parameter. When this parameter is positive, the spectrum has a finite number of states; when it is negative or zero, the spectrum has an infinite number of states. This exhibits a desirable feature of quantum physics in de Sitter space, albeit in a very simple, non-gravitational context.
The Sun exhibits a well-observed modulation in the number of spots on its disk over a period of about 11 years. From the dawn of modern observational astronomy sunspots have presented a challenge to understanding -- their quasi-periodic variation in number, first noted 175 years ago, stimulates community-wide interest to this day. A large number of techniques are able to explain the temporal landmarks, (geometric) shape, and amplitude of sunspot "cycles," however forecasting these features accurately in advance remains elusive. Recent observationally-motivated studies have illustrated a relationship between the Sun's 22-year (Hale) magnetic cycle and the production of the sunspot cycle landmarks and patterns, but not the amplitude of the sunspot cycle. Using (discrete) Hilbert transforms on more than 270 years of (monthly) sunspot numbers we robustly identify the so-called "termination" events that mark the end of the previous 11-yr sunspot cycle, the enhancement/acceleration of the present cycle, and the end of 22-yr magnetic activity cycles. Using these we extract a relationship between the temporal spacing of terminators and the magnitude of sunspot cycles. Given this relationship and our prediction of a terminator event in 2020, we deduce that Sunspot Cycle 25 could have a magnitude that rivals the top few since records began. This outcome would be in stark contrast to the community consensus estimate of sunspot cycle 25 magnitude.
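To illustrate the Hilbert-transform step only (the detrending details and the exact terminator criterion follow the authors' procedure and are not reproduced here), a minimal sketch using scipy:

```python
# Sketch: form the analytic signal of the (mean-removed) monthly sunspot
# number; terminator-type events are then read off from the unwrapped
# instantaneous phase of the activity cycle.
import numpy as np
from scipy.signal import hilbert

def instantaneous_phase(monthly_ssn):
    x = np.asarray(monthly_ssn, dtype=float)
    x = x - x.mean()                   # remove the mean before transforming
    analytic = hilbert(x)              # x + i * H[x]
    return np.unwrap(np.angle(analytic))

# Toy demo: ~270 years of a synthetic 11-year (132-month) cycle.
months = np.arange(3240)
fake_ssn = 80.0 * (1 + np.sin(2 * np.pi * months / 132)) / 2
phase = instantaneous_phase(fake_ssn)  # advances ~2*pi per 11-yr cycle
```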
We study the magnetic susceptibility at large 't Hooft coupling by computing the correlation function of the magnetizations in the strongly coupled Maxwell theory in the large-N limit at finite temperature and chemical potential, within the framework of the AdS/CFT correspondence. We show that in the strong coupling limit the magnetic susceptibility is independent of the temperature and is universal, measured in units of the magnetic permeability of the bulk space. A comparison with the weakly coupled system, the Pauli paramagnetic susceptibility, is also discussed.
We study a two-player, zero-sum, stochastic game with incomplete information on one side in which the players are allowed to play more and more frequently. The informed player observes the realization of a Markov chain on which the payoffs depend, while the non-informed player only observes his opponent's actions. We show the existence of a limit value as the time span between two consecutive stages vanishes; this value is characterized through an auxiliary optimization problem and as the solution of a Hamilton-Jacobi equation.
Given an $n$-vertex graph $G$ with minimum degree at least $d n$ for some fixed $d > 0$, the distribution $G \cup \mathbb{G}(n,p)$ over the supergraphs of $G$ is referred to as a (random) {\sl perturbation} of $G$. We consider the distribution of edge-coloured graphs arising from assigning each edge of the random perturbation $G \cup \mathbb{G}(n,p)$ a colour, chosen independently and uniformly at random from a set of colours of size $r := r(n)$. We prove that such edge-coloured graph distributions a.a.s. admit rainbow Hamilton cycles whenever the edge-density of the random perturbation satisfies $p := p(n) \geq C/n$, for some fixed $C > 0$, and $r = (1 + o(1))n$. The number of colours used is clearly asymptotically best possible. In particular, this improves upon a recent result of Anastos and Frieze (2019) in this regard. As an intermediate result, which may be of independent interest, we prove that randomly edge-coloured sparse pseudo-random graphs a.a.s. admit an almost spanning rainbow path.
Large-scale pretrained language models are surprisingly good at recalling factual knowledge presented in the training corpus. In this paper, we present preliminary studies on how factual knowledge is stored in pretrained Transformers by introducing the concept of knowledge neurons. Specifically, we examine the fill-in-the-blank cloze task for BERT. Given a relational fact, we propose a knowledge attribution method to identify the neurons that express the fact. We find that the activation of such knowledge neurons is positively correlated with the expression of their corresponding facts. In our case studies, we attempt to leverage knowledge neurons to edit (e.g., update or erase) specific factual knowledge without fine-tuning. Our results shed light on understanding the storage of knowledge within pretrained Transformers. The code is available at https://github.com/Hunter-DDM/knowledge-neurons.
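A toy sketch of the attribution idea: score each hidden neuron by integrated gradients of the gold-token logit with respect to its activation, scaled from zero up to its actual value. Here a random linear readout stands in for the rest of BERT; the real method operates on FFN activations at the [MASK] position inside the model:

```python
# Sketch: integrated-gradients-style attribution over a hidden activation
# vector. High-scoring coordinates play the role of "knowledge neurons".
import torch

def neuron_attribution(model_fn, activation, steps=20):
    """activation: hidden vector; model_fn: hidden vector -> gold-token logit."""
    total = torch.zeros_like(activation)
    for k in range(1, steps + 1):
        # Leaf tensor at a point on the straight path from 0 to the activation.
        scaled = ((k / steps) * activation.detach()).clone().requires_grad_(True)
        logit = model_fn(scaled)
        logit.backward()
        total += scaled.grad
    return activation.detach() * total / steps   # Riemann sum of the IG integral

# Toy usage with a random linear "readout" in place of the rest of the model.
torch.manual_seed(0)
W = torch.randn(1, 64)
h = torch.randn(64)
scores = neuron_attribution(lambda v: (W @ v).squeeze(), h)
top_neurons = scores.abs().topk(5).indices       # candidate knowledge neurons
```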
We demonstrate that muon tomography can be used to precisely measure the properties of various materials. The materials considered were extracted from an experimental blast furnace and include carbon (coke) and iron oxides, for which measurements of the linear scattering density relative to the mass density have been performed with an absolute precision of 10%. We report the procedures that are used in order to obtain such precision, and a discussion is presented addressing the expected performance of the technique when applied to heavier materials. The results we obtain do not depend on the specific type of material considered, and therefore they can be extended to any application.
It has been argued that Horava gravity needs to be extended to include terms that mix spatial and time derivatives in order to avoid unacceptable violations of Lorentz invariance in the matter sector. In an earlier paper we have shown that including such mixed derivative terms generically leads to 4th instead of 6th order dispersion relations, which could be (naively) interpreted as a threat to renormalizability. We have also argued that power-counting renormalizability is not actually compromised, but the simplest power-counting renormalizable model is not unitary. In this paper we consider the Lifshitz scalar as a toy theory and generalize our analysis to include higher order operators. We show that models which are power-counting renormalizable and unitary do exist. Our results suggest the existence of a new class of theories that can be thought of as Horava gravity with mixed derivative terms.
Generalized Procrustes Analysis (GPA) is the problem of bringing multiple shapes into a common reference by estimating transformations. GPA has been extensively studied for the Euclidean and affine transformations. We introduce GPA with deformable transformations, which forms a much wider and more difficult problem. We specifically study a class of transformations called the Linear Basis Warps (LBWs), which contains the affine transformation and most of the usual deformation models, such as the Thin-Plate Spline (TPS). GPA with deformations is a nonconvex underconstrained problem. We resolve the fundamental ambiguities of deformable GPA using two shape constraints on the eigenvalues of the shape covariance. These eigenvalues can be computed independently as a prior or posterior. We give a closed-form and optimal solution to deformable GPA based on an eigenvalue decomposition. This solution handles regularization, favoring smooth deformation fields. It requires the transformation model to satisfy a fundamental property of free translations, which asserts that the model can implement any translation. We show that this property fortunately holds true for most common transformation models, including the affine and TPS models. For the other models, we give another closed-form solution to GPA, which agrees exactly with the first solution for models with free translations. We give pseudo-code for computing our solution, leading to the proposed DefGPA method, which is fast, globally optimal and widely applicable. We validate our method and compare it to previous work on six diverse 2D and 3D datasets, with special care taken to choose the hyperparameters by cross-validation.
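For contrast with the deformable case, here is a minimal classical (rigid) GPA loop, which alternates orthogonal Procrustes alignment with mean-shape updates; the paper's contribution is precisely to avoid such alternation for deformable models via a closed-form eigendecomposition. All names are illustrative:

```python
# Sketch: classical rigid GPA baseline. Each shape is repeatedly aligned
# to the running mean shape with an orthogonal Procrustes step.
import numpy as np

def orthogonal_align(X, ref):
    """Least-squares rotation (plus translation) of shape X onto ref."""
    Xc = X - X.mean(axis=0)
    Rc = ref - ref.mean(axis=0)
    U, _, Vt = np.linalg.svd(Xc.T @ Rc)   # solves the orthogonal Procrustes problem
    R = U @ Vt
    return Xc @ R + ref.mean(axis=0)

def rigid_gpa(shapes, iters=10):
    ref = shapes[0].copy()
    for _ in range(iters):
        shapes = [orthogonal_align(S, ref) for S in shapes]
        ref = np.mean(shapes, axis=0)     # update the mean (reference) shape
    return shapes, ref

# Toy demo: three rotated/translated copies of one 2D shape.
rng = np.random.default_rng(0)
base = rng.normal(size=(10, 2))
a = np.pi / 5
R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
aligned, mean_shape = rigid_gpa([base, base @ R + 1.0, base @ R.T - 2.0])
```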
We introduce a weak concept of Morita equivalence, in the birational context, for Poisson modules on complex normal Poisson projective varieties. We show that Poisson modules, on projective varieties with mild singularities, are either rationally Morita equivalent to a flat partial holomorphic sheaf, or to a sheaf with a meromorphic flat connection, or to a co-Higgs sheaf. As an application, we study the geometry of rank two meromorphic $\mathfrak{sl}_2$-Poisson modules, which can be interpreted as a Poisson analogue of transversally projective structures for codimension one holomorphic foliations. Moreover, we describe the geometry of the symplectic foliation induced by the Poisson connection on the projectivization of the Poisson module.
The complementarity principle is one of the central concepts in quantum mechanics; it restricts the joint measurement of certain observables. Of course, later developments showed that joint measurement of such observables is possible with the introduction of a certain degree of unsharpness or fuzziness in the measurement. In this paper, we show that the optimal degree of unsharpness, which guarantees the joint measurement of all possible pairs of dichotomic observables, determines the degree of nonlocality in quantum mechanics as well as in more general no-signaling theories.
We investigate properties of group gradings on matrix rings $M_n(R)$, where $R$ is an associative unital ring and $n$ is a positive integer. More precisely, we introduce very good gradings and show that any very good grading on $M_n(R)$ is necessarily epsilon-strong. We also identify a condition that is sufficient to guarantee that $M_n(R)$ is an epsilon-crossed product, i.e. isomorphic to a crossed product associated with a unital twisted partial action. In the case where $R$ has IBN, we are able to provide a characterization of when $M_n(R)$ is an epsilon-crossed product. Our results are illustrated by several examples.