We consider some conditions under which a smooth projective variety X is actually the projective space. We also extend to the case of positive characteristic some results in the theory of vector bundle adjunction. We use methods and techniques of so-called Mori theory, in particular the study of rational curves on projective manifolds.
Non-local correlations are usually understood through the outcomes of alternative measurements (on two or more parts of a system) that cannot altogether actually be carried out in an experiment. Indeed, a joint input/output -- e.g., measurement-setting/outcome -- behavior is non-local if and only if the outputs for all possible inputs cannot coexist consistently. It has been argued that this counterfactual view is how Bell's inequalities and their violations are to be seen. We propose an alternative perspective which refrains from setting into relation the results of mutually exclusive measurements, but is based solely on data actually available. Our approach uses algorithmic complexity instead of probability, implies that non-locality has consequences similar to those in the probabilistic view, and is conceptually simpler yet at the same time more general than the latter.
This paper is devoted to the complexity of the quantified boolean formula problem. We describe a simple deterministic algorithm that, for a given quantified boolean formula $F$, stops in time bounded by $O(|F|^4)$ and answers yes if $F$ is true and no otherwise.
The equivalence between the Chern-Simons gauge theory on a three-dimensional manifold with boundary and the WZNW model on the boundary is established in a simple and general way using the BRST symmetry. Our approach is based on restoring gauge invariance of the Chern-Simons theory in the presence of a boundary. This gives a correspondence to the WZNW model that does not require solving any constraints, fixing the gauge or specifying boundary conditions.
We investigate the impact between the gas stream from the inner Lagrangian point and the accretion disk in interacting binaries, using three dimensional Smooth Particle Hydrodynamics simulations. We find that a significant fraction of the stream material can ricochet off the disk edge and overflow towards smaller radii, and that this generates pronounced non-axisymmetric structure in the absorption column towards the central object. We discuss the implications of our results for observations and time-dependent models of low-mass X-ray binaries, cataclysmic variables and supersoft X-ray sources.
Let $[\, \cdot\,]$ be the floor function. In the present paper we prove that when $1<c<\frac{12}{11}$ and $\theta>1$ is fixed, there exist infinitely many prime numbers of the form $[n^c \tan^\theta(\log n)]$.
This paper highlights three known identities, each of which involves sums over alternating sign matrices. While proofs of all three are known, the only known derivations are as corollaries of difficult results. The simplicity and natural combinatorial interpretation of these identities, however, suggest that there should be direct, bijective proofs.
The planar graph product structure theorem of Dujmovi\'{c}, Joret, Micek, Morin, Ueckerdt, and Wood [J. ACM 2020] states that every planar graph is a subgraph of the strong product of a graph with bounded treewidth and a path. This result has been the key tool to resolve important open problems regarding queue layouts, nonrepetitive colourings, centered colourings, and adjacency labelling schemes. In this paper, we extend this line of research by utilizing shallow minors to prove analogous product structure theorems for several beyond planar graph classes. The key observation that drives our work is that many beyond planar graphs can be described as a shallow minor of the strong product of a planar graph with a small complete graph. In particular, we show that powers of planar graphs, $k$-planar, $(k,p)$-cluster planar, fan-planar and $k$-fan-bundle planar graphs have such a shallow-minor structure. Using a combination of old and new results, we deduce that these classes have bounded queue-number, bounded nonrepetitive chromatic number, polynomial $p$-centred chromatic numbers, linear strong colouring numbers, and cubic weak colouring numbers. In addition, we show that $k$-gap planar graphs have at least exponential local treewidth and, as a consequence, cannot be described as a subgraph of the strong product of a graph with bounded treewidth and a path.
This study shows the feasibility of an eHealth solution for tackling eating habits and physical activity in the adolescent population. The participants were children from 11 to 15 years old. An intervention was carried out on 139 students in the intervention group and 91 students in the control group, in two schools over 14 weeks. The intervention group had access to the web through a user account and a password. They were able to create friendship relationships, post comments, give likes and interact with other users, as well as receive notifications and information about nutrition and physical activity on a daily basis and get (virtual) rewards for improving their habits. The control group did not have access to any of these features. The homogeneity of the samples in terms of gender, age, body mass index and initial health-related habits was demonstrated. Pre- and post-measurements were collected through self-reports on the application website. After applying multivariate analysis of variance, a significant change in the age-adjusted body mass index percentile was observed in the intervention group versus the control group, as well as in the PAQ-A score and the KIDMED score. It can be concluded that eHealth interventions can help adolescents acquire healthy habits. More research is needed to examine their effectiveness in achieving adherence to these new habits.
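As a rough illustration of the pre/post group comparison described in this abstract, the sketch below runs a multivariate analysis of variance with statsmodels; the file name and column names (`bmi_percentile`, `paq_a`, `kidmed`, `group`) are hypothetical placeholders, not the study's actual data or variables.

```python
# Minimal MANOVA sketch (hypothetical column names, not the study's real data).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# df is assumed to hold one row per student with post-intervention outcomes
# (age-adjusted BMI percentile, PAQ-A score, KIDMED score) and a group label.
df = pd.read_csv("habits_study.csv")  # hypothetical file

# Test whether the outcome vector differs between intervention and control groups.
maov = MANOVA.from_formula("bmi_percentile + paq_a + kidmed ~ group", data=df)
print(maov.mv_test())  # Wilks' lambda, Pillai's trace, etc.
```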
Let $G$ be a group and let $F$ be a field of characteristic different from 2. Denote by $(FG)^+$ the set of symmetric elements and by $\mathcal{U}^+(FG)$ the set of symmetric units, under an oriented classical involution of the group algebra $FG$. We give some lower and upper bounds on the Lie nilpotency index of $(FG)^+$ and the nilpotency class of $\mathcal{U}^+(FG)$.
We study spatio-temporal chaos in the complex Ginzburg-Landau equation in parameter regions of weak amplification and viscosity. Turbulent states involving many soliton-like pulses appear in the parameter range, because the complex Ginzburg-Landau equation is close to the nonlinear Schr\"odinger equation. We find that the distributions of amplitude and wavenumber of pulses depend only on the ratio of the two parameters of the amplification and the viscosity. This implies that a one-parameter family of soliton turbulence states characterized by different distributions of the soliton parameters exists continuously around the completely integrable system.
The renormalization of singular chiral potentials as applied to NN scattering and the structure of the deuteron is discussed. It is shown how zero range theories may be implemented non-perturbatively as constrained from known long range NN forces.
The most usual formulation of the Laws of Thermodynamics turns out to be suitable for local or simple materials, while for non-local systems there are two different ways: either modify this usual formulation by introducing suitable extra fluxes or express the Laws of Thermodynamics in terms of internal powers directly, as we propose in this paper. The first choice is subject to the criticism that the vector fluxes must be introduced a posteriori in order to obtain compatibility with the Laws of Thermodynamics. On the contrary, the formulation in terms of internal powers is more general, because it is a priori defined on the basis of the constitutive equations. Besides, it allows one to highlight, without ambiguity, the contribution of the internal powers to the variation of the thermodynamic potentials. Finally, in this paper, we consider some examples of non-local materials and derive the proper expressions of their internal powers from the power balance laws.
This paper provides a technical companion to M. Aguado and E. Seiler, hep-lat/0406041, in which the fate of perturbation theory in the thermodynamic limit is discussed for the O(N) model on a 2d lattice and different boundary conditions. The techniques used to compute perturbative coefficients are explained, and results for all boundary conditions considered are reviewed in detail.
Many state-of-the-art visualization techniques must be tailored to the specific type of dataset, its modality (CT, MRI, etc.), the recorded object or anatomical region (head, spine, abdomen, etc.) and other parameters related to the data acquisition process. While parts of the information (imaging modality and acquisition sequence) may be obtained from the meta-data stored with the volume scan, there is important information which is not stored explicitly (anatomical region, tracing compound). Also, meta-data might be incomplete, inappropriate or simply missing. This paper presents a novel and simple method of determining the type of a dataset among previously defined categories. A 2D histogram of the dataset, based on intensity and gradient magnitude, is used as input to a neural network, which classifies the dataset into one of the categories it was trained with. The proposed method is an important building block for visualization systems to be used autonomously by non-experts. The method has been tested on 80 datasets, divided into 3 classes and a "rest" class. A significant result is the ability of the system to classify datasets into a specific class after being trained with only one dataset of that class. Other advantages of the method are its easy implementation and its high computational performance.
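A minimal sketch of the ingredients described in this abstract: a 2D intensity/gradient-magnitude histogram used as the input vector of a small neural-network classifier. The bin count, network size and data loading are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch: 2D histogram (intensity vs. gradient magnitude) -> small MLP classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

def hist2d_feature(volume, bins=32):
    """Flattened, normalized 2D histogram of intensity vs. gradient magnitude."""
    gx, gy, gz = np.gradient(volume.astype(np.float32))
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)
    h, _, _ = np.histogram2d(volume.ravel(), gmag.ravel(), bins=bins)
    h = np.log1p(h)                 # compress the large dynamic range of the counts
    return (h / h.sum()).ravel()    # normalize and flatten to a feature vector

# volumes: list of 3D numpy arrays; labels: category index per volume (assumed given).
def train_classifier(volumes, labels):
    X = np.stack([hist2d_feature(v) for v in volumes])
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
    return clf.fit(X, labels)
```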
We show that the integers in the HMM LLL HNF algorithm have bit length O(m log(mB)), where m is the number of rows and B is the maximum square length of a row of the input matrix. This is only a little worse than the estimate O(m log B) in the LLL algorithm.
Telehealth helps to facilitate access to medical professionals by enabling remote medical services for the patients. These services have become gradually popular over the years with the advent of necessary technological infrastructure. The benefits of telehealth have been even more apparent since the beginning of the COVID-19 crisis, as people have become less inclined to visit doctors in person during the pandemic. In this paper, we focus on facilitating the chat sessions between a doctor and a patient. We note that the quality and efficiency of the chat experience can be critical as the demand for telehealth services increases. Accordingly, we develop a smart auto-response generation mechanism for medical conversations that helps doctors respond to consultation requests efficiently, particularly during busy sessions. We explore over 900,000 anonymous, historical online messages between doctors and patients collected over nine months. We implement clustering algorithms to identify the most frequent responses by doctors and manually label the data accordingly. We then train machine learning algorithms using this preprocessed data to generate the responses. The considered algorithm has two steps: a filtering (i.e., triggering) model to filter out infeasible patient messages and a response generator to suggest the top-3 doctor responses for the ones that successfully pass the triggering phase. The method provides an accuracy of 83.28\% for precision@3 and shows robustness to its parameters.
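A toy sketch of the two-step pipeline outlined in this abstract: cluster historical doctor replies to obtain response classes, train a classifier from patient messages to those classes, and use a confidence threshold as the triggering filter before suggesting the top-3 classes. The model choices, threshold and variable names are assumptions for illustration, not the deployed system.

```python
# Toy sketch: clustering-derived labels + top-3 response suggestion with a trigger threshold.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def build_suggester(patient_msgs, doctor_replies, n_classes=50, trigger_threshold=0.3):
    # 1) Derive response classes by clustering historical doctor replies.
    reply_vec = TfidfVectorizer(max_features=5000)
    R = reply_vec.fit_transform(doctor_replies)
    reply_labels = KMeans(n_clusters=n_classes, random_state=0, n_init=10).fit_predict(R)

    # 2) Train a classifier mapping patient message -> response class.
    msg_vec = TfidfVectorizer(max_features=20000, ngram_range=(1, 2))
    X = msg_vec.fit_transform(patient_msgs)
    clf = LogisticRegression(max_iter=1000).fit(X, reply_labels)

    def suggest(msg):
        p = clf.predict_proba(msg_vec.transform([msg]))[0]
        if p.max() < trigger_threshold:       # triggering filter: abstain on unclear messages
            return []
        return list(np.argsort(p)[::-1][:3])  # indices of the top-3 suggested response classes

    return suggest
```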
Understanding human mobility is essential for many fields, including transportation planning. Currently, surveys are the primary source for such analysis. However, in the recent past, many researchers have focused on Call Detail Records (CDR) for identifying travel patterns. CDRs have shown correlation to human mobility behavior. However, one of the main issues in using CDR data is that it is difficult to identify the precise location of the user due to the low spatial resolution of the data and other artifacts such as the load sharing effect. Existing approaches have certain limitations. Previous studies using CDRs do not consider the transmit power of cell towers when localizing the users and use an oversimplified approach to identify load sharing effects. Furthermore, they consider the entire population of users as one group, neglecting the differences in mobility patterns of different segments of users. This research introduces a novel methodology for user position localization from CDRs through improved detection of load sharing effects, by taking the transmit power into account, and by segmenting the users into distinct groups for the purpose of learning any parameters of the model. Moreover, this research uses several methods to address the existing limitations and validates the generated results, based on nearly 4 billion CDR data points, against travel survey data and voluntarily collected mobile data.
We compute, in the MSSM framework, the total electroweak contributions at one loop for the process pp -> tW+X, initiated by the parton process bg -> tW. The supersymmetric effect is analyzed for various choices of the SUSY benchmark points. Choosing realistic unpolarized and polarized experimental quantities, we show the size of the various effects and discuss their dependence on the MSSM parameters.
The rotation rate in pre-supernova cores is an important ingredient which can profoundly affect the post-collapse evolution and associated energy release in supernovae and long gamma ray bursts (LGRBs). Previous work has focused on whether the specific angular momentum is above or below the critical value required for the creation of a centrifugally supported disk around a black hole. Here, we explore the effect of the distribution of angular momentum with radius in the star, and show that qualitative transitions between high and low angular momentum flow, corresponding to high and low luminosity accretion states, can effectively be reflected in the energy output, leading to variability and the possibility of quiescent times in LGRBs.
We consider the problem of broadcast with common messages, and focus on the case that the common message rate $R_{\mathcal{A}}$, i.e., the rate of the message intended for all the receivers in the set $\mathcal{A}$, is the same for all sets $\mathcal{A}$ of the same cardinality. Instead of attempting to characterize the capacity region of general broadcast channels, we only consider the structure of the capacity region that any broadcast channel should bear. The concept of latent capacity region is useful in capturing these underlying constraints, and we provide a complete characterization of the latent capacity region for the symmetric broadcast problem. The converse proof of this tight characterization relies on a deterministic broadcast channel model. The achievability proof generalizes the familiar rate transfer argument to include more involved erasure correction coding among messages, thus revealing an inherent connection between broadcast with common messages and erasure correction codes.
It is shown that there is a simple way to get the quantization equation for the electric and magnetic charges of dyons, $e_ig_j-g_ie_j=m(\hbar c)$, which also sheds light on the origin of such quantization.
We study a sequence of eruptive events including filament eruption, a GOES C4.3 flare and a coronal mass ejection. We aim to identify the possible trigger(s) and precursor(s) of the filament destabilisation; investigate flare kernel characteristics; flare ribbons/kernels formation and evolution; study the interrelation of the filament-eruption/flare/coronal-mass-ejection phenomena as part of the integral active-region magnetic field configuration; determine H\alpha\ line profile evolution during the eruptive phenomena. Multi-instrument observations are analysed including H\alpha\ line profiles, speckle images at H\alpha-0.8 \AA\ and H\alpha+0.8 \AA\ from IBIS at DST/NSO, EUV images and magnetograms from the SDO, coronagraph images from STEREO and the X-ray flux observations from FERMI and GOES. We establish that the filament destabilisation and eruption are the main trigger for the flaring activity. A surge-like event with a circular ribbon in one of the filament footpoints is determined as the possible trigger of the filament destabilisation. Plasma draining in this footpoint is identified as the precursor for the filament eruption. A magnetic flux emergence prior to the filament destabilisation followed by a high rate of flux cancelation of 1.34$\times10^{16}$ Mx s$^{-1}$ is found during the flare activity. The flare X-ray lightcurves reveal three phases that are found to be associated with three different ribbons occurring consecutively. A kernel from each ribbon is selected and analysed. The kernel lightcurves and H alpha line profiles reveal that the emission increase in the line centre is stronger than that in the line wings. A delay of around 5-6 mins is found between the increase in the line centre and the occurrence of red asymmetry. Only red asymmetry is observed in the ribbons during the impulsive phases. Blue asymmetry is only associated with the dynamic filament.
We revisit the general framework introduced by Fazlyab et al. (SIAM J. Optim. 28, 2018) to construct Lyapunov functions for optimization algorithms in discrete and continuous time. For smooth, strongly convex objective functions, we relax the requirements necessary for such a construction. As a result we are able to prove, for Polyak's ordinary differential equation and for a two-parameter family of Nesterov algorithms, rates of convergence that improve on those available in the literature. We analyse the interpretation of Nesterov algorithms as discretizations of the Polyak equation. We show that the algorithms are instances of Additive Runge-Kutta integrators and discuss the reasons why most discretizations of the differential equation do not result in optimization algorithms with acceleration. We also introduce a modification of Polyak's equation and study its convergence properties. Finally we extend the general framework to the stochastic scenario and consider an application to random algorithms with acceleration for overparameterized models; again we are able to prove convergence rates that improve on those in the literature.
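For orientation, one common form of the objects named in this abstract is the following (notation ours, a standard textbook convention rather than the paper's exact parametrization): Polyak's heavy-ball ODE and a Nesterov-type two-step discretization with step size $h$ and momentum parameter $\beta_k$.

```latex
% Heavy-ball ODE (Polyak) and a Nesterov-type discretization -- standard forms, notation ours.
\[
  \ddot{x}(t) + a\,\dot{x}(t) + \nabla f\bigl(x(t)\bigr) = 0, \qquad a > 0,
\]
\[
  y_k = x_k + \beta_k\,(x_k - x_{k-1}), \qquad
  x_{k+1} = y_k - h\,\nabla f(y_k).
\]
```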
The fabrication, characterisation, and superconductivity of MgB2 thick films grown on stainless steel substrate were studied. XRD, SEM, and magnetic measurements were carried out. It was found that the MgB2 thick films can be fast formed by heating samples to 660 °C then immediately cooling down to room temperature. XRD shows above 90% MgB2 phase and less than 10% MgO. However, the samples sintered at 800 °C for 4 h contain both MgB4 and MgO impurities in addition to MgB2. The fast formed MgB2 films appear to have a good grain connectivity that gives a Jc of 8 x 10^4 A/cm^2 at 5 K and 1 T and maintained this value at 20 K in zero field.
There would be a perfect correspondence between the laws of classical thermodynamics and black hole thermodynamics, except for the apparent failure of black hole thermodynamics to correspond to the Third Law. The classical Third Law of Thermodynamics entails that as the absolute temperature, T, approaches zero, the entropy, S, also approaches zero. This discussion is based upon part of the work published by the author in 1995 that demonstrated that the most general form of the classical Third Law of Thermodynamics is satisfied by treating the area of the inner-event horizon as a measure of negative-entropy (negentropy).
The Praesepe cluster contains a number of Delta Sct and Gamma Dor pulsators. Asteroseismology of cluster stars is simplified by the common distance, age and stellar abundances. Since asteroseismology requires a large number of known frequencies, the small pulsation amplitudes of these stars require space satellite campaigns. The present study utilizes photometric MOST satellite measurements in order to determine the pulsation frequencies of two evolved (EP Cnc, BT Cnc) and two main-sequence (BS Cnc, HD 73872) Delta Sct stars in the Praesepe cluster. The frequency analysis of the 2008 and 2009 data detected up to 34 frequencies per star with most amplitudes in the submillimag range. In BS Cnc, two modes showed strong amplitude variability between 2008 and 2009. The frequencies ranged from 0.76 to 41.7 c/d. After considering the different evolutionary states and mean stellar densities of these four stars, the differences and large ranges in frequency remain.
This report presents a novel technique for image encryption and authentication that combines elements of Visual Cryptography and Public Key Cryptography. A prominent attack involving the generation of fake shares to cheat honest users is described, and the proposed system, which employs a centralised server to generate shares and authenticate them on the basis of requests, is demonstrated as a counter to this attack.
A robust route for the biased production of single-handed chiral structures has been found in generating non-spherical, multi-component double emulsions using microfluidics. The specific type of handedness is determined by the final packing geometry of four different inner drops inside an ultra-thin sheath of oil. Before three-dimensional chiral structures are formed, the quasi-one-dimensional chain re-arranges in two dimensions into either checkerboard or stripe patterns. We derive an analytical model predicting which pattern is more likely and assembles in the least amount of time. Moreover, our dimensionless model accurately predicts our experimental results and is based on local bending dynamics, rather than global surface energy minimization. This better reflects the underlying self-assembly process which will not, in general, reach a global energy minimum. In summary, using glass microfluidic techniques for channeling aqueous fluids through narrow orifices of multi-bore injection capillaries while encapsulating these fluids as drops inside an ultra-thin sheath of oil is sufficient to produce single-handed chiral structures.
We analyse the global (rigid) symmetries that are realised on the bosonic fields of the various supergravity actions obtained from eleven-dimensional supergravity by toroidal compactification followed by the dualisation of some subset of fields. In particular, we show how the global symmetries of the action can be affected by the choice of this subset. This phenomenon occurs even with the global symmetries of the equations of motion. A striking regularity is exhibited by the series of theories obtained respectively without any dualisation, with the dualisation of only the Ramond-Ramond fields of the type IIA theory, with full dualisation to lowest degree forms, and finally for certain inverse dualisations (increasing the degrees of some forms) to give the type IIB series. These theories may be called the GL_A, D, E and GL_B series respectively. It turns out that the scalar Lagrangians of the E series are sigma models on the symmetric spaces K(E_{11-D})\backslash E_{11-D} (where K(G) is the maximal compact subgroup of G) and the other three series lead to models on homogeneous spaces K(G) \backslash G \ltimes \mathbb{R}^s. These can be understood from the E series in terms of the deletion of positive roots associated with the dualised scalars, which implies a group contraction. We also propose a constrained Lagrangian version of the even dimensional theories exhibiting the full duality symmetry and begin a systematic analysis of abelian duality subalgebras.
Seminal work by Edmonds and Lovasz shows the strong connection between submodularity and convexity. Submodular functions have tight modular lower bounds, and subdifferentials in a manner akin to convex functions. They also admit poly-time algorithms for minimization and satisfy the Fenchel duality theorem and the Discrete Separation Theorem, both of which are fundamental characteristics of convex functions. Submodular functions also show signs similar to concavity. Submodular maximization, though NP-hard, admits constant factor approximation guarantees. Concave functions composed with modular functions are submodular, and they also satisfy the diminishing returns property. This manuscript provides a more complete picture of the relationship of submodularity with convexity and concavity, by extending many of the results connecting submodularity with convexity to the concave aspects of submodularity. We first show the existence of superdifferentials, and efficiently computable tight modular upper bounds of a submodular function. While we show that it is hard to characterize this polyhedron, we obtain inner and outer bounds on the superdifferential along with certain specific and useful supergradients. We then investigate forms of concave extensions of submodular functions and show interesting relationships to submodular maximization. We next show connections between optimality conditions over the superdifferentials and submodular maximization, and show how forms of approximate optimality conditions translate into approximation factors for maximization. We end this paper by studying versions of the discrete separation theorem and the Fenchel duality theorem when seen from the concave point of view. In every case, we relate our results to the existing results from the convex point of view, thereby improving the analysis of the relationship between submodularity, convexity, and concavity.
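As a concrete instance of the tight modular upper bounds mentioned in this abstract, one standard bound from the literature (often attributed to Nemhauser, Wolsey and Fisher; notation ours, with marginal gain $f(j\mid S) = f(S\cup\{j\}) - f(S)$ for a submodular $f$ on ground set $V$) reads:

```latex
% One standard modular upper bound, tight at X (notation ours; cf. Nemhauser-Wolsey-Fisher).
\[
  f(Y) \;\le\; f(X)
  \;-\; \sum_{j \in X \setminus Y} f\bigl(j \mid X \setminus \{j\}\bigr)
  \;+\; \sum_{j \in Y \setminus X} f\bigl(j \mid \emptyset\bigr),
  \qquad \forall\, Y \subseteq V,
\]
% with equality at Y = X, so the right-hand side is a modular function majorizing f.
```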
We consider the Bayesian detection statistic for a targeted search for continuous gravitational waves, known as the $\mathcal{B}$-statistic. This is a Bayes factor between signal and noise hypotheses, produced by marginalizing over the four amplitude parameters of the signal. We show that by Taylor-expanding to first order in certain averaged combinations of antenna patterns (elements of the parameter space metric), the marginalization integral can be performed analytically, producing a closed-form approximation in terms of confluent hypergeometric functions. We demonstrate using Monte Carlo simulations that this approximation is as powerful as the full $\mathcal{B}$-statistic, and outperforms the traditional maximum-likelihood $\mathcal{F}$-statistic, for several observing scenarios which involve an average over sidereal times. We also show that the approximation does not perform well for a near-instantaneous observation, so the approximation is suited to long-time continuous wave observations rather than transient modelled signals such as compact binary inspiral.
We report the most precise measurement to date of a parity-violating asymmetry in elastic electron-proton scattering. The measurement was carried out with a beam energy of 3.03 GeV and a scattering angle <theta_lab>=6 degrees, with the result A_PV = -1.14 +/- 0.24 (stat) +/- 0.06 (syst) parts per million. From this we extract, at Q^2 = 0.099 GeV^2, the strange form factor combination G_E^s + 0.080 G_M^s = 0.030 +/- 0.025 (stat) +/- 0.006 (syst) +/- 0.012 (FF) where the first two errors are experimental and the last error is due to the uncertainty in the neutron electromagnetic form factor. This result significantly improves current knowledge of G_E^s and G_M^s at Q^2 ~0.1 GeV^2. A consistent picture emerges when several measurements at about the same Q^2 value are combined: G_E^s is consistent with zero while G_M^s prefers positive values though G_E^s=G_M^s=0 is compatible with the data at 95% C.L.
We give a new fusion procedure for the Brauer algebra by showing that all primitive idempotents can be found by evaluating a rational function in several variables which has the form of a product of R-matrix type factors. In particular, this provides a new fusion procedure for the symmetric group involving an arbitrary parameter. The R-matrices are solutions of the Yang--Baxter equation associated with the classical Lie algebras g_N of types B, C and D. Moreover, we construct an evaluation homomorphism from a reflection equation algebra B(g_N) to U(g_N) and show that the fusion procedure provides an equivalence between natural tensor representations of B(g_N) with the corresponding evaluation modules.
Let $K$ be a field and $S=K[x_1,\ldots, x_n]$. Let $I$ be a monomial ideal of $S$ and $u_1,\ldots, u_r$ be monomials in $S$ which form a filter-regular sequence on $S/I$. We show that $S/I$ is pretty clean if and only if $S/(I,u_1,\ldots, u_r)$ is pretty clean.
Quantum algorithms provide an exponential speedup for solving certain classes of linear systems, including those that model geologic fracture flow. However, this revolutionary gain in efficiency does not come without difficulty. Quantum algorithms require that problems satisfy not only algorithm-specific constraints, but also application-specific ones. Otherwise, the quantum advantage carefully attained through algorithmic ingenuity can be entirely negated. Previous work addressing quantum algorithms for geologic fracture flow has illustrated core algorithmic approaches while incrementally removing assumptions. This work addresses two further requirements for solving geologic fracture flow systems with quantum algorithms: efficient system state preparation and efficient information extraction. Our approach to addressing each is consistent with an overall exponential speed-up.
This paper corrects the proof of Theorem 2 from Gower's paper \cite[page 5]{Gower:1982}, as well as Theorem 7 from Gower's paper \cite{Gower:1986}. The first correction is needed in order to establish the existence of the kernel function commonly used in the kernel trick, e.g. for the $k$-means clustering algorithm, on the grounds of a distance matrix. The correction supplies the missing if-part of the proof and drops unnecessary conditions. The second correction deals with the transformation of the kernel matrix into one embeddable in Euclidean space.
An earlier suggestion that scalar fields in gauge theory may be introduced as frame vectors or vielbeins in internal symmetry space, and so endowed with geometric significance, is here sharpened and refined. Applied to a $u(1) \times su(2)$ theory this gives exactly the Higgs structure of the standard electroweak theory. Applied to an $su(3)$ theory, it gives a structure having much in common with a phenomenological model previously constructed to explain fermion mixing and mass hierarchy. The difference in physical outcome for the two theories is here traced to the difference in structure between the two symmetry groups.
The Biot problem of poroelasticity is extended by Signorini contact conditions. The resulting Biot contact problem is formulated and analyzed as a two field variational inequality problem of a perturbed saddle point structure. We present an a priori error analysis for a general as well as for a $hp$-FE discretization including convergence and guaranteed convergence rates for the latter. Moreover, we derive a family of reliable and efficient residual based a posteriori error estimators, and elaborate how a simple and efficient primal-dual active set solver can be applied to solve the discrete Galerkin problem. Numerical results underline our theoretical finding and show that optimal, in particular exponential, convergence rates can be achieved by adaptive schemes for two dimensional problems.
We investigate the sedimentation equilibrium of a charge stabilized colloidal suspension in the regime of low ionic strength. We analyze the asymptotic behaviour of the density profiles on the basis of a simple Poisson--Boltzmann theory and show that the effective mass we can deduce from the barometric law corresponds to the actual mass of the colloidal particles, contrary to previous studies.
The application of graph theory to model the complex structure and function of the brain has shed new light on its organization and function, prompting the emergence of network neuroscience. Despite the tremendous progress that has been achieved in this field, still relatively few methods exploit the topology of brain networks to analyze brain activity. Recent attempts in this direction have leveraged graph spectral analysis and graph signal processing to decompose brain activity into connectivity eigenmodes or gradients. While the results are promising in terms of interpretability and functional relevance, the methodologies and terminology are sometimes confusing. The goals of this paper are twofold. First, we summarize recent contributions related to connectivity gradients and graph signal processing, and attempt a clarification of the terminology and methods used in the field, while pointing out current methodological limitations. Second, we discuss the perspective that the functional relevance of connectivity gradients could be fruitfully exploited by considering them as graph Fourier bases of brain activity.
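To make the terminology concrete, the sketch below builds a graph Fourier basis from a connectivity matrix (eigenvectors of the graph Laplacian) and projects a brain-activity signal onto it. This is the generic graph-signal-processing construction, not any specific pipeline reviewed in the paper, and the random connectivity matrix is only a stand-in for a real connectome.

```python
# Sketch: graph Fourier basis from a connectivity matrix and projection of activity onto it.
import numpy as np

def graph_fourier_basis(A):
    """Eigenvectors of the combinatorial Laplacian L = D - A, ordered by eigenvalue."""
    L = np.diag(A.sum(axis=1)) - A
    eigvals, U = np.linalg.eigh(L)   # columns of U are the graph Fourier modes ("gradients")
    return eigvals, U

def graph_fourier_transform(U, x):
    """Project a signal x (one value per node/region) onto the graph Fourier modes."""
    return U.T @ x

# Usage with a random symmetric connectivity matrix as a placeholder.
rng = np.random.default_rng(0)
A = rng.random((90, 90)); A = (A + A.T) / 2; np.fill_diagonal(A, 0.0)
activity = rng.standard_normal(90)
_, U = graph_fourier_basis(A)
coeffs = graph_fourier_transform(U, activity)
```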
Embedding rigid inclusions into elastic matrix materials is a procedure of high practical relevance, for instance for the fabrication of elastic composite materials. We theoretically analyze the following situation. Rigid spherical inclusions are enclosed by a homogeneous elastic medium under stick boundary conditions. Forces and torques are directly imposed from outside onto the inclusions, or are externally induced between them. The inclusions respond to these forces and torques by translations and rotations against the surrounding elastic matrix. This leads to elastic matrix deformations, and in turn results in mutual long-ranged matrix-mediated interactions between the inclusions. Adapting a well-known approach from low-Reynolds-number hydrodynamics, we explicitly calculate the displacements and rotations of the inclusions from the externally imposed or induced forces and torques. Analytical expressions are presented as a function of the inclusion configuration in terms of displaceability and rotateability matrices. The role of the elastic environment is implicitly included in these relations. That is, the resulting expressions allow a calculation of the induced displacements and rotations directly from the inclusion configuration, without having to explicitly determine the deformations of the elastic environment. In contrast to the hydrodynamic case, compressibility of the surrounding medium is readily taken into account. We present the complete derivation based on the underlying equations of linear elasticity theory. In the future, the method will, for example, be helpful to characterize the behavior of externally tunable elastic composite materials, to accelerate numerical approaches, as well as to improve the quantitative interpretation of microrheological results.
We explore the connections between automata, groups, limit spaces of self-similar actions, and tilings. In particular, we show how a group acting ``nicely'' on a tree gives rise to a self-covering of a topological groupoid, and how the group can be reconstructed from the groupoid and its covering. The connection is via finite-state automata. These define decomposition rules, or self-similar tilings, on leaves of the solenoid associated with the covering.
The spin- and charge-density-wave order parameters of the itinerant antiferromagnet chromium are measured directly with non-resonant x-ray diffraction as the system is driven towards its quantum critical point with high pressure using a diamond anvil cell. The exponential decrease of the spin and charge diffraction intensities with pressure confirms the harmonic scaling of spin and charge, while the evolution of the incommensurate ordering vector provides important insight into the difference between pressure and chemical doping as means of driving quantum phase transitions. Measurement of the charge density wave over more than two orders of magnitude of diffraction intensity provides the clearest demonstration to date of a weakly-coupled, BCS-like ground state. Evidence for the coexistence of this weakly-coupled ground state with high-energy excitations and pseudogap formation above the ordering temperature in chromium, the charge-ordered perovskite manganites, and the blue bronzes, among other such systems, raises fundamental questions about the distinctions between weak and strong coupling.
We perform a full nuclear-network numerical calculation of the $r$-process nuclei in binary neutron-star mergers (NSMs), with the aim of estimating $\gamma$-ray emissions from the remnants of Galactic NSMs up to $10^6$ years old. The nucleosynthesis calculation of 4,070 nuclei is adopted to provide the elemental composition ratios of nuclei with an electron fraction $Y_{\rm e}$ between 0.10 and 0.45. The decay processes of 3,237 unstable nuclei are simulated to extract the $\gamma$-ray spectra. As a result, NSMs younger than $10^5$ years have a spectral color in the $\gamma$-ray band different from that of various other astronomical objects. In addition, we propose a new line-diagnostic method for $Y_{\rm e}$ that uses the line ratios of either $^{137{\rm m}}$Ba/$^{85}$K or $^{243}$Am/$^{60{\rm m}}$Co, which become larger than unity for young and old $r$-process sites, respectively, with a low $Y_{\rm e}$ environment. From an estimation of the distance limit for $\gamma$-ray observations as a function of the age, a high sensitivity in the sub-MeV band, at approximately $10^{-9}$ photons s$^{-1}$ cm$^{-2}$ or $10^{-15}$ erg s$^{-1}$ cm$^{-2}$, is required to cover all the NSM remnants in our Galaxy if we assume the population of NSMs given by \citet{2019ApJ...880...23W}. A $\gamma$-ray survey with sensitivities of $10^{-8}$--$10^{-7}$ photons s$^{-1}$ cm$^{-2}$ or $10^{-14}$--$10^{-13}$ erg s$^{-1}$ cm$^{-2}$ in the 70--4000 keV band is expected to find emissions from at least one NSM remnant under the assumption of an NSM rate of 30 Myr$^{-1}$. The feasibility of $\gamma$-ray missions to observe Galactic NSMs is also studied.
The aim of this work is to introduce a thermo-electromagnetic model for calculating the temperature and the power dissipated in cylindrical pieces whose geometry varies with time and undergoes large deformations; the motion will be known data. The work will be a first step towards building a complete thermo-electromagnetic-mechanical model suitable for simulating electrically assisted forming processes, which is the main motivation of the work. The electromagnetic model will be obtained from the time-harmonic eddy current problem with an in-plane current; the source will be given in terms of currents or voltages defined at some parts of the boundary. Finite element methods based on a Lagrangian weak formulation will be used for the numerical solution. This approach will avoid the need to compute and remesh the thermo-electromagnetic domain over time. The numerical tools will be implemented in FEniCS and validated by using a suitable test also solved in Eulerian coordinates.
Primordial Black Holes (PBHs) might have formed in the early Universe due to the collapse of density fluctuations. PBHs may act as the sources for some of the gravitational waves recently observed. We explored the formation scenarios of PBHs of stellar mass, taking into account the possible influence of the QCD phase transition, for which we considered three different models: Crossover Model (CM), Bag Model (BM), and Lattice Fit Model (LFM). For the fluctuations, we considered a running-tilt power-law spectrum; when these cross the horizon while the Universe is $\sim 10^{-9}$-$10^{-1}$ s old, they give rise to 0.05-500~M$_{\odot}$ PBHs which could: i) provide a population of stellar mass PBHs similar to the ones present in the binaries associated with all known gravitational wave sources; ii) constitute a broad mass spectrum accounting for $\sim 76\%$ of all Cold Dark Matter (CDM) in the Universe.
Despite many efforts, the behavior of a crowd is not fully understood. The advent of modern communication media has made it an even more challenging problem, as crowd dynamics could be driven by both human-to-human and human-technology interactions. Here, we study the dynamics of a crowd-controlled game (Twitch Plays Pok\'emon), in which nearly a million players participated during more than two weeks. We dissect the temporal evolution of the system dynamics along the two distinct phases that characterized the game. We find that players who do not follow the crowd's average behavior are key to succeeding in the game. The latter finding can be well explained by an $n$-th order Markov model that reproduces the observed behavior. Secondly, we analyze a phase of the game in which players were able to decide between two different modes of playing, mimicking a voting system. Our results suggest that under some conditions, the collective dynamics can be better regarded as a swarm-like behavior instead of a crowd. Finally, we discuss our findings in the light of the social identity theory, which appears to describe well the observed dynamics.
We introduce new Langevin-type equations describing the rotational and translational motion of rigid bodies interacting through conservative and non-conservative forces, and hydrodynamic coupling. In the absence of non-conservative forces the Langevin-type equations sample from the canonical ensemble. The rotational degrees of freedom are described using quaternions, the lengths of which are exactly preserved by the stochastic dynamics. For the proposed Langevin-type equations, we construct a weak 2nd order geometric integrator which preserves the main geometric features of the continuous dynamics. The integrator uses Verlet-type splitting for the deterministic part of Langevin equations appropriately combined with an exactly integrated Ornstein-Uhlenbeck process. Numerical experiments are presented to illustrate both the new Langevin model and the numerical method for it, as well as to demonstrate how inertia and the coupling of rotational and translational motion can introduce qualitatively distinct behaviours.
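For reference, the exactly integrated Ornstein-Uhlenbeck substep mentioned in this abstract usually takes the following closed form for a velocity-like variable. This is a generic textbook formula written for a single scalar degree of freedom and is illustrative only; the paper's integrator additionally handles quaternions, rigid-body inertia and hydrodynamic coupling.

```python
# Exact one-step update of an Ornstein-Uhlenbeck velocity process:
# dv = -gamma*v dt + sqrt(2*gamma*kT/m) dW   (scalar, illustrative only).
import numpy as np

def ou_step(v, h, gamma, kT, m, rng):
    c1 = np.exp(-gamma * h)                  # deterministic decay over the step
    c2 = np.sqrt(kT / m * (1.0 - c1**2))     # exact standard deviation of the stochastic part
    return c1 * v + c2 * rng.standard_normal()

rng = np.random.default_rng(1)
v = 0.0
for _ in range(1000):
    v = ou_step(v, h=0.01, gamma=1.0, kT=1.0, m=1.0, rng=rng)
```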
We present the results of a detailed spectral analysis of optically faint hard X-ray sources in the Chandra deep fields selected on the basis of their high X-ray to optical flux ratio (X/O). The stacked spectra of high X/O sources in both Chandra deep fields, fitted with a single power-law model, are much harder than the spectrum of the X-ray background (XRB). The average slope is also insensitive to the 2-8 keV flux, being approximately constant around Gamma~1 over more than two decades, strongly indicating that high X/O sources represent the most obscured component of the XRB. For about half of the sample, a redshift estimate (in most of the cases a photometric redshift) is available from the literature. Individual fits of a few of the brightest objects and of stacked spectra in different redshift bins imply column densities in the range 10^{22-23.5} cm^{-2}. A trend of increasing absorption towards higher redshifts is suggested.
We report the fabrication and characterization of superconducting quantum interference devices (SQUIDs) based on InAs nanowires and vanadium superconducting electrodes. These mesoscopic devices are found to be extremely robust against thermal cycling and to operate up to temperatures of $\sim2.5$~K with reduced power dissipation. We show that our geometry allows us to obtain nearly symmetric devices with very large magnetic-field modulation of the critical current. All these properties make these devices attractive for on-chip quantum-circuit implementation.
We examine theoretically electron paramagnetic resonance (EPR) lineshapes as functions of resonance frequency, energy level, and temperature for single crystals of three different kinds of single-molecule nanomagnets (SMMs): Mn$_{12}$ acetate, Fe$_8$Br, and the $S=9/2$ Mn$_4$ compound. We use a density-matrix equation and consider distributions in the uniaxial (second-order) anisotropy parameter $D$ and the $g$ factor, caused by possible defects in the samples. Additionally, weak intermolecular exchange and electronic dipole interactions are included in a mean-field approximation. Our calculated linewidths are in good agreement with experiments. We find that the distribution in $D$ is common to the three examined single-molecule magnets. This could provide a basis for a proposed tunneling mechanism due to lattice defects or imperfections. We also find that weak intermolecular exchange and dipolar interactions are mainly responsible for the temperature dependence of the lineshapes for all three SMMs, and that the intermolecular exchange interaction is more significant for Mn$_4$ than for the other two SMMs. This finding is consistent with earlier experiments and suggests the role of spin-spin relaxation processes in the mechanism of magnetization tunneling.
A conjecture of Kalai asserts that for $d\geq 4$, the affine type of a prime simplicial $d$-polytope $P$ can be reconstructed from the space of affine $2$-stresses of $P$. We prove this conjecture for all $d\geq 5$. We also prove the following generalization: for all pairs $(i,d)$ with $2\leq i\leq \lceil \frac d 2\rceil-1$, the affine type of a simplicial $d$-polytope $P$ that has no missing faces of dimension $\geq d-i+1$ can be reconstructed from the space of affine $i$-stresses of $P$. A consequence of our proofs is a strengthening of the Generalized Lower Bound Theorem: it was proved by Nagel that for any simplicial $(d-1)$-sphere $\Delta$ and $1\leq k\leq \lceil\frac{d}{2}\rceil-1$, $g_k(\Delta)$ is at least as large as the number of missing $(d-k)$-faces of $\Delta$; here we show that, for $1\leq k\leq \lfloor\frac{d}{2}\rfloor-1$, equality holds if and only if $\Delta$ is $k$-stacked. Finally, we show that for $d\geq 4$, any simplicial $d$-polytope $P$ that has no missing faces of dimension $\geq d-1$ is redundantly rigid, that is, for each edge $e$ of $P$, there exists an affine $2$-stress on $P$ with a non-zero value on $e$.
Quasi-invariant and pseudo-differentiable measures on a Banach space $X$ over a non-Archimedean locally compact infinite field with a non-trivial valuation are defined and constructed. Measures are considered with values in $\bf R$. Theorems and criteria are formulated and proved about quasi-invariance and pseudo-differentiability of measures relative to linear and non-linear operators on $X$. Characteristic functionals of measures are studied. Moreover, the non-Archimedean analogs of the Bochner-Kolmogorov and Minlos-Sazonov theorems are investigated. Infinite products of measures also are considered. Convergence of quasi-invariant and pseudo-differentiable measures in the corresponding spaces of measures is investigated.
In all applications of gamma-ray spectroscopy, one of the most important and delicate parts of the data analysis is the fitting of the gamma-ray spectra, where information such as the number of counts, the position of the centroid and the width, for instance, is associated with each peak of each spectrum. There is a wide choice of computer programs that perform this type of analysis, and the ones most commonly used in routine work are those that automatically locate and fit the peaks; this fit can be made in several different ways -- the most common are to fit a Gaussian function to each peak or simply to integrate the area under the peak, but some packages go further and include several small corrections to the simple Gaussian peak function, in order to compensate for secondary effects. In this work several gamma-ray spectroscopy software packages are compared in the task of finding and fitting the gamma-ray peaks in spectra taken with standard sources of $^{137}$Cs, $^{60}$Co, $^{133}$Ba and $^{152}$Eu. The results show that all of the automatic packages can be properly used in the task of finding and fitting peaks, with the exception of GammaVision; it was also possible to verify that the automatic peak-fitting packages performed as well as -- and sometimes even better than -- a manual peak-fitting package.
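As a minimal example of the simplest fitting strategy mentioned in this abstract (a single Gaussian on a linear background, without the secondary corrections some packages add), a least-squares fit could look like the sketch below; the channel range and starting values are placeholders.

```python
# Sketch: fit one Gaussian peak plus a linear background to a slice of a gamma-ray spectrum.
import numpy as np
from scipy.optimize import curve_fit

def peak_model(x, area, centroid, sigma, a, b):
    gauss = area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - centroid) / sigma) ** 2)
    return gauss + a * x + b          # Gaussian peak + linear background

def fit_peak(channels, counts, p0):
    # Poisson uncertainties (sqrt of counts) as weights; p0 = initial parameter guesses.
    sigma_counts = np.sqrt(np.clip(counts, 1, None))
    popt, pcov = curve_fit(peak_model, channels, counts, p0=p0, sigma=sigma_counts)
    return popt, np.sqrt(np.diag(pcov))   # best-fit parameters and their 1-sigma errors
```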
The quantum spin Hall (QSH) phase is a time reversal invariant electronic state with a bulk electronic band gap that supports the transport of charge and spin in gapless edge states. We show that this phase is associated with a novel $Z_2$ topological invariant, which distinguishes it from an ordinary insulator. The $Z_2$ classification, which is defined for time reversal invariant Hamiltonians, is analogous to the Chern number classification of the quantum Hall effect. We establish the $Z_2$ order of the QSH phase in the two band model of graphene and propose a generalization of the formalism applicable to multi band and interacting systems.
We investigate the influence of the helical compactification of spatial dimension on the local properties of the vacuum state for a charged scalar field with general curvature coupling parameter. A general background geometry is considered with rotational symmetry in the subspace with the coordinates appearing in the helical periodicity condition. It is shown that by a coordinate transformation the problem is reduced to the problem with standard quasiperiodicity condition in the same local geometry and with the effective compactification radius determined by the length of the compact dimension and the helicity parameter. As an application of the general procedure we have considered locally de Sitter spacetime with a helical compact dimension. By using the Hadamard function for the Bunch-Davies vacuum state, the vacuum expectation values of the field squared, current density, and energy-momentum tensor are studied. The topological contributions are explicitly separated and their asymptotics are described at early and late stages of cosmological expansion. An important difference, compared to the problem with quasiperiodic conditions, is the appearance of the nonzero off-diagonal component of the energy-momentum tensor and of the component of the current density along the uncompact dimension.
Video summarization aims at choosing parts of a video that narrate a story as close as possible to the original one. Most of the existing video summarization approaches focus on hand-crafted labels. As the number of videos grows exponentially, there emerges an increasing need for methods that can learn meaningful summarizations without labeled annotations. In this paper, we aim to maximally exploit unsupervised video summarization while concentrating the supervision on a few, personalized labels as an add-on. To do so, we formulate the key requirements for informative video summarization. Then, we propose contrastive learning as the answer to both requirements. To further boost Contrastive video Summarization (CSUM), we propose to contrast top-k features instead of a mean video feature as employed by existing methods, which we implement with a differentiable top-k feature selector. Our experiments on several benchmarks demonstrate that our approach allows for meaningful and diverse summaries when no labeled data is provided.
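A minimal sketch of the aggregation contrast described in this abstract: mean pooling of per-frame features versus pooling over the top-k features. Here torch.topk is used directly (gradients flow only to the selected entries); the paper's differentiable top-k selector may well be a smoother relaxation, so this is illustrative rather than the authors' implementation.

```python
# Sketch: mean pooling vs. top-k pooling of per-frame video features (PyTorch).
import torch

def mean_pool(features):
    # features: (num_frames, feature_dim)
    return features.mean(dim=0)

def topk_pool(features, k=5):
    # Keep, per feature dimension, the k largest responses across frames and average them.
    # Gradients reach only the selected entries; a relaxed selector could replace this step.
    vals, _ = torch.topk(features, k=min(k, features.shape[0]), dim=0)
    return vals.mean(dim=0)

frames = torch.randn(64, 512, requires_grad=True)   # 64 frames, 512-dim features
summary = topk_pool(frames, k=8)
summary.sum().backward()                             # gradients flow back to the top-k features
```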
Quantized Skyrmions with baryon numbers $B=1,2$ and 4 are considered and angularly localized wavefunctions for them are found. By combining a few low angular momentum states, one can construct a quantum state whose spatial density is close to that of the classical Skyrmion, and has the same symmetries. For the B=1 case we find the best localized wavefunction among linear combinations of $j=1/2$ and $j=3/2$ angular momentum states. For B=2, we find that the $j=1$ ground state has toroidal symmetry and a somewhat reduced localization compared to the classical solution. For B=4, where the classical Skyrmion has cubic symmetry, we construct cubically symmetric quantum states by combining the $j=0$ ground state with the lowest rotationally excited $j=4$ state. We use the rational map approximation to compare the classical and quantum baryon densities in the B=2 and B=4 cases.
The CDF collaboration recently reported a measurement of the $W$-boson mass, $M_W$, showing a large positive deviation from the Standard Model (SM) prediction. The question arises whether extensions of the SM exist that can accommodate such large values, and what further phenomenological consequences arise from this. We give a brief review of the implications of the new CDF measurement on the SM, as well as on Higgs-sector extensions. In particular, we review the compatibility of the $M_W$ measurement of CDF with excesses observed in the light Higgs-boson searches at $\sim 95$ GeV, as well as with the Minimal Supersymmetric Standard Model in conjunction with the anomalous magnetic moment of the muon, $(g-2)_\mu$.
Even with the recent advances in convolutional neural networks (CNN) in various visual recognition tasks, the state-of-the-art action recognition system still relies on hand-crafted motion features such as optical flow to achieve the best performance. We propose a multitask learning model ActionFlowNet to train a single-stream network directly from raw pixels to jointly estimate optical flow while recognizing actions with convolutional neural networks, capturing both appearance and motion in a single model. We additionally provide insights into how the quality of the learned optical flow affects the action recognition. Our model significantly improves action recognition accuracy by a large margin of 31% compared to state-of-the-art CNN-based action recognition models trained without external large-scale data and additional optical flow input. Without pretraining on large external labeled datasets, our model, by well exploiting the motion information, achieves competitive recognition accuracy to the models trained with large labeled datasets such as ImageNet and Sport-1M.
We present a new two-parameter family of solutions of Einstein gravity with negative cosmological constant in 2+1 dimensions. These solutions are obtained by squashing the anti-de Sitter geometry along one direction and possess four Killing vectors. Global properties as well as the four-dimensional generalization are discussed, followed by an investigation of the geodesic motion. A simple global embedding of these spaces as the intersection of four quadratic surfaces in a seven-dimensional space is obtained. We argue also that these geometries describe the boundary of a four-dimensional nutty-bubble solution and are relevant in the context of the AdS/CFT correspondence.
Starting from a factorization theorem in effective field theory, we present resummed results for two non-global observables: the invariant-mass distribution of jets and the energy distribution outside jets. Our results include the full next-to-leading-order corrections to the hard, jet and soft functions and are implemented in a parton-shower framework which generates the renormalization-group running in the effective theory. The inclusion of these matching corrections leads to an improved description of the data and reduced theoretical uncertainties. They will have to be combined with two-loop running in the future, but our results are an important first step towards the higher-logarithmic resummation of non-global observables.
Based on the principle of Lorentz covariance, the transition matrix elements from an off-shell photon state to the vacuum are decomposed into the light-cone photon distribution amplitudes (DAs), of which only two transversal DAs survive in the on-shell limit. The eight off-shell light-cone photon DAs, chiral-odd and chiral-even up to twist four, and the corresponding coupling constants are studied systematically in the instanton vacuum model of quantum chromodynamics (QCD). Each individual photon DA multiplied by its corresponding coupling constant is expressed in terms of correlation functions, which are connected with the spectral densities of an effective quark propagator, and then evaluated in the low-energy effective theory derived from the instanton vacuum model of QCD. Explicit analytical expressions and numerical results for the photon DAs and their coupling constants are given.
The Narrow-line Seyfert I galaxy, 1H0707-495, has been well observed in the 0.3-10 keV band, revealing a dramatic drop in flux in the iron K alpha band, a strong soft excess, and short timescale reverberation lags associated with these spectral features. In this paper, we present the first results of a deep 250 ks NuSTAR observation of 1H0707-495, which includes the first sensitive observations above 10 keV. Even though the NuSTAR observations caught the source in an extreme low-flux state, the Compton hump is still significantly detected. NuSTAR, with its high effective area above 7 keV, clearly detects the drop in flux in the iron K alpha band, and by comparing these observations with archival XMM-Newton observations, we find that the energy of this drop increases with increasing flux. We discuss possible explanations for this, the most likely of which is that the drop in flux is the blue wing of the relativistically broadened iron K alpha emission line. When the flux is low, the coronal source height is low, thus enhancing the most gravitationally redshifted emission.
We derive a formalism to describe the scattering of polarized radiation over the full spectral range encompassed by atomic transitions belonging to the same spectral series (e.g., the H I Lyman and Balmer series, the UV multiplets of Fe I and Fe II). This allows us to study the role of radiation-induced coherence among the upper terms of the spectral series, and its contribution to Rayleigh scattering and the polarization of the solar continuum. We rely on previous theoretical results for the emissivity of a three-term atom of the $\Lambda$-type taking into account partially coherent scattering, and generalize its expression in order to describe a "multiple $\Lambda$" atomic system underlying the formation of a spectral series. Our study shows that important polarization effects must be expected because of the combined action of partial frequency redistribution and radiation-induced coherence among the terms of the series. In particular, our model predicts the correct asymptotic limit of 100% polarization in the far wings of a \emph{complete} (i.e., $\Delta L=0,\pm 1$) group of transitions, which must be expected on the basis of the principle of spectroscopic stability.
Modeling groundwater levels continuously across California's Central Valley (CV) hydrological system is challenging due to low-quality well data which is sparsely and noisily sampled across time and space. The lack of consistent well data makes it difficult to evaluate the impact of the 2017 and 2019 wet years on CV groundwater following a severe drought during 2012-2015. A novel machine learning method is formulated for modeling groundwater levels by learning from a 3D lithological texture model of the CV aquifer. The proposed formulation performs multivariate regression by combining Gaussian processes (GP) and deep neural networks (DNN). The hierarchical modeling approach consists of training the DNN to learn a lithologically informed latent space where non-parametric regression with GP is performed. We demonstrate the efficacy of GP-DNN regression for modeling non-stationary features in the well data with fast and reliable uncertainty quantification, as validated to be statistically consistent with the empirical data distribution from 90 blind wells across CV. We show how the model predictions may be used to supplement hydrological understanding of aquifer responses in basins with irregular well data. Our results indicate that on average the 2017 and 2019 wet years in California were largely ineffective in replenishing the groundwater loss caused during previous drought years.
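As a rough illustration of the hierarchical GP-DNN idea (learn a latent space with a network, then perform non-parametric GP regression in that space), the sketch below uses a small scikit-learn MLP whose hidden-layer activations act as the latent features; the synthetic data, network size and kernel choices are placeholders and not the configuration used for the Central Valley model.

```python
# Minimal sketch (not the authors' code): two-stage GP-DNN-style regression.
# A small MLP is fit first; its hidden-layer activations serve as a learned
# latent space, on which a Gaussian process performs non-parametric regression
# with uncertainty estimates. All data here are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 2))          # placeholder inputs (e.g., location, lithology proxy)
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + 0.1 * rng.normal(size=300)

# Stage 1: "DNN" encoder (here a one-hidden-layer MLP trained on the target).
mlp = MLPRegressor(hidden_layer_sizes=(16,), activation="relu",
                   max_iter=2000, random_state=0).fit(X, y)

def latent(Xin):
    """ReLU activations of the hidden layer, used as the learned latent space."""
    return np.maximum(0.0, Xin @ mlp.coefs_[0] + mlp.intercepts_[0])

# Stage 2: GP regression in the latent space, giving a mean and an uncertainty.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(latent(X), y)

X_test = rng.uniform(-3, 3, size=(5, 2))
mean, std = gp.predict(latent(X_test), return_std=True)
print(np.round(mean, 3), np.round(std, 3))
```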
For every positive integer $n$, an infinite family of positive integral solutions of the diophantine equation $x^n - y^n = z^{n+1}$ is constructed.
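For illustration, one elementary construction consistent with this statement (not necessarily the family built in the paper) is the following: for integers $a > b \ge 1$, put $c = a^n - b^n$, $x = ac$, $y = bc$ and $z = c$; then
$$x^n - y^n = c^n\,(a^n - b^n) = c^{n+1} = z^{n+1}.$$
For example, with $n=2$, $a=2$, $b=1$ one gets $c=3$ and $6^2 - 3^2 = 27 = 3^3$.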
We propose a unified model combining the first-order liquid-liquid and the second-order ferroelectric phase transition models and explaining various features of the $\lambda$-point of liquid water within a single theoretical framework. Within the proposed model it becomes clear that the long-range dipole-dipole interaction of water molecules not only yields a large value of the dielectric constant $\epsilon$ at room temperature; our analysis shows that the large dipole moment of the water molecules also leads to a ferroelectric phase transition at a temperature close to the $\lambda$-point. Our more refined model suggests that the phase transition occurs only in the low-density component of the liquid and is the origin of the singularity of the dielectric constant recently observed in experiments with supercooled liquid water at temperature T~233K. This combined model agrees well with nearly every available set of experiments and explains most of the well-known and even recently obtained results of MD simulations.
We consider the statics and dynamics of distinguishable spin-1/2 systems on an arbitrary graph G with N vertices. In particular, we consider systems of quantum spins evolving according to one of two hamiltonians: (i) the XY hamiltonian H_XY, which contains an XY interaction for every pair of spins connected by an edge in G; and (ii) the Heisenberg hamiltonian H_Heis, which contains a Heisenberg interaction term for every pair of spins connected by an edge in G. We find that the action of the XY (respectively, Heisenberg) hamiltonian on state space is equivalent to the action of the adjacency matrix (respectively, combinatorial laplacian) of a sequence G_k, k=0,...,N, of graphs derived from G (with G_1=G). This equivalence of actions demonstrates that the dynamics of these two models is the same as the evolution of a free particle hopping on the graphs G_k. Thus we show how to replace the complicated dynamics of the original spin model with simpler dynamics on a more complicated geometry. A simple corollary of our approach allows us to write an explicit spectral decomposition of the XY model in a magnetic field on the path consisting of N vertices. We also use our approach to apply results from spectral graph theory to solve new spin models: the XY model and the Heisenberg model in a magnetic field on the complete graph.
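The single-excitation case of this equivalence is easy to check numerically. The sketch below assumes the convention H_XY = sum over edges of (X_i X_j + Y_i Y_j)/2 and verifies, for a path graph, that the restriction of H_XY to the one-excitation subspace equals the adjacency matrix; it is an illustration of the statement, not code from the paper.

```python
# Illustrative check (assumed conventions): on the single-excitation subspace,
# H = sum_{(i,j) in E} (X_i X_j + Y_i Y_j)/2 acts exactly as the adjacency matrix of G.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2)

def site_op(op, site, n):
    """Tensor product placing `op` on `site` of an n-spin chain (site 0 leftmost)."""
    mats = [op if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n = 4
edges = [(0, 1), (1, 2), (2, 3)]            # path graph on 4 vertices
H = sum(0.5 * (site_op(X, i, n) @ site_op(X, j, n)
               + site_op(Y, i, n) @ site_op(Y, j, n)) for i, j in edges)

# Basis of the single-excitation sector: |i> = spin flipped at site i only.
basis = []
for i in range(n):
    v = np.zeros(2 ** n)
    v[1 << (n - 1 - i)] = 1.0               # site 0 is the most significant tensor factor
    basis.append(v)
B = np.array(basis).T

A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

print(np.allclose(B.T @ H @ B, A))          # True: one-excitation block of H = adjacency matrix
```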
Kinetics of silicon dry oxidation are investigated theoretically and experimentally at low temperature in the nanometer range, where the limits of the Deal and Grove model become critical. Based on a fine control of the oxidation process conditions, experiments allow the investigation of the growth kinetics of nanometric oxide layers. The theoretical model is formulated using a reaction rate approach. In this framework, the oxide thickness is estimated from the evolution of the various species during the reaction. Standard oxidation models and the reaction rate approach are confronted with these experiments. The value of the reaction rate approach for improving silicon oxidation modeling in the nanometer range is clearly demonstrated.
The Atlantic Meridional Overturning Circulation (AMOC) distributes heat and salt into the Northern Hemisphere via a warm surface current toward the subpolar North Atlantic, where water sinks and returns southwards as a deep cold current. There is substantial evidence that the AMOC has slowed down over the last century. We introduce a conceptual box model for the evolution of salinity and temperature on the surface of the North Atlantic Ocean, subject to the influx of meltwater from the Greenland ice sheet. Our model, which extends a model due to Welander, describes the interaction between a surface box and a deep-water box of constant temperature and salinity, which may be convective or non-convective, depending on the density difference. Its two main parameters $\mu$ and $\eta$ describe the influx of freshwater and the threshold density between the two boxes, respectively. We use bifurcation theory to analyse two cases of the model: instantaneous switching between convective and non-convective interaction, where the system is piecewise-smooth (PWS), and the full smooth model with more gradual switching. For the PWS model we derive analytical expressions for all bifurcations. The resulting bifurcation diagram in the $(\mu,\eta)$-plane identifies all regions of possible dynamics, which we illustrate with phase portraits, both at typical parameter points and at the different transitions between them. We also present the bifurcation diagram for the case of smooth switching and show how it arises from that of the PWS case. In this way, we determine exactly where one finds bistability and self-sustained oscillations of the AMOC in both versions of the model. In particular, our results show that oscillations of temperature and salinity at the surface of the North Atlantic Ocean disappear completely when the transition between the convective and non-convective regimes is too slow.
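For orientation, the sketch below integrates a schematic Welander-type two-box model with instantaneous (piecewise-smooth) switching; the nondimensional form, the switching rule and all parameter values are assumptions chosen for illustration and do not reproduce the equations or parameter ranges analysed in the paper.

```python
# Schematic sketch only: a Welander-type two-box model with piecewise-smooth (PWS)
# switching, in an assumed nondimensional form. The names mu (freshwater flux) and
# eta (density threshold) follow the abstract; the equations are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 1.0, 1.0      # thermal / haline expansion coefficients (nondimensional)
k_conv, k_off = 5.0, 0.05   # mixing rate with the deep box: convective vs non-convective
mu, eta = 0.9, 1.0          # freshwater flux and density threshold (illustrative values)

def rhs(t, y):
    T, S = y
    rho = -alpha * T + beta * S          # surface density anomaly relative to the deep box
    k = k_conv if rho > eta else k_off   # instantaneous (PWS) switching of the exchange
    dT = (1.0 - T) - k * T               # relaxation towards atmospheric T = 1, mixing with deep T = 0
    dS = mu - k * S                      # freshwater forcing, mixing with deep S = 0
    return [dT, dS]

sol = solve_ivp(rhs, (0.0, 200.0), [0.5, 0.5], max_step=0.01)
T, S = sol.y
# Inspect the trajectory: depending on mu and eta the surface box settles into one
# regime, slides along the switching boundary, or alternates between regimes.
print("T range:", T.min().round(3), T.max().round(3))
print("S range:", S.min().round(3), S.max().round(3))
```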
We present a new scheme for a compact rubidium cold-atom clock in which diffuse-light cooling, microwave interrogation and detection of the clock signal are all performed in a cylindrical microwave cavity. The diffuse light is produced by reflection of the laser light at the inner surface of the microwave cavity. The pattern of the injected laser beams is specially designed so that most of the cold atoms accumulate in the center of the microwave cavity. The microwave interrogation of the cold atoms in the cavity leads to Ramsey fringes with a linewidth of 24.5 Hz and a contrast of 95.6% when the free evolution time is 20 ms. A frequency stability of $7.3\times10^{-13}\tau^{-1/2}$ has recently been achieved. The scheme of this physical package greatly reduces the complexity of the cold-atom clock and increases its performance.
We introduce Bayesian Estimation Applied to Multiple Species (BEAMS), an algorithm designed to deal with parameter estimation when using contaminated data. We present the algorithm and demonstrate how it works with the help of a Gaussian simulation. We then apply it to supernova data from the Sloan Digital Sky Survey (SDSS), showing how the resulting confidence contours of the cosmological parameters shrink significantly.
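To make the idea concrete, here is a toy version in the spirit of the Gaussian simulation mentioned above: each data point enters the likelihood through a mixture weighted by its probability of belonging to the population of interest, instead of being accepted or rejected by a hard cut. The populations, their parameters and the type probabilities are invented for illustration; this is not the authors' code.

```python
# Toy illustration of a BEAMS-style likelihood for contaminated data.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, true_mu = 500, 0.0
is_A = rng.random(n) < 0.7                        # 70% belong to population A (the one of interest)
x = np.where(is_A, rng.normal(true_mu, 1.0, n),   # population A: parameter we want to estimate
             rng.normal(3.0, 2.0, n))             # contaminants: shape taken as known here
p_A = np.clip(np.where(is_A, 0.9, 0.1) + 0.05 * rng.normal(size=n), 0, 1)  # noisy type probabilities

mu_grid = np.linspace(-1, 1, 401)
# Mixture log-likelihood: each point is weighted by its probability of being type A.
loglike = np.array([np.sum(np.log(p_A * norm.pdf(x, mu, 1.0)
                                  + (1 - p_A) * norm.pdf(x, 3.0, 2.0)))
                    for mu in mu_grid])
print("posterior peak at mu =", mu_grid[np.argmax(loglike)])
```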
We perform an analysis of the $b\to c\tau\nu$ data, including $R(D^{(*)})$, $R(J/\psi)$, $P_\tau(D^{*})$ and $F_L^{D^*}$, within and beyond the Standard Model (SM). We fit the $B\to D^{(*)}$ hadronic form factors in the HQET parametrization to the lattice and the light-cone sum rule (LCSR) results, applying the general strong unitarity bounds corresponding to $J^P=1^-$, $1^+$, $0^-$ and $0^+$. Using the obtained HQET relations between helicity amplitudes, we give the strong unitarity bounds on individual helicity amplitudes, which can be used in the BGL fits. Using the fitted form factors and taking into account the most recent Belle measurement of $R(D^{(*)})$, we investigate the model-independent and the leptoquark-model explanations of the $b\to c\tau\nu$ anomalies. Specifically, we consider one-operator and two-operator new physics (NP) scenarios as well as NP models with a single $R_2$, $S_1$ or $U_1$ leptoquark that could address the $b\to c\tau\nu$ anomalies; our results show that the $R_2$ leptoquark model is in tension with the limit $\mathcal B(B_c\to \tau\nu)<10\%$. Furthermore, we give predictions for the various observables in the SM and in the NP scenarios/leptoquark models based on the present form-factor study and the analysis of NP.
The Wronski map is a finite, PGL_2(C)-equivariant morphism from the Grassmannian Gr(d,n) to a projective space (the projectivization of a vector space of polynomials). We consider the following problem. If C_r < PGL_2(C) is a cyclic subgroup of order r, how many C_r-fixed points are in a fibre of the Wronski map over a C_r-fixed point in the base? In this paper, we compute a general answer in terms of r-ribbon tableaux. When r=2, this computation gives the number of real points in the fibre of the Wronski map over a real polynomial with purely imaginary roots. More generally, we can compute the number of real points in certain intersections of Schubert varieties. When r divides d(n-d), our main result says that the generic number of C_r-fixed points in the fibre is the number of standard r-ribbon tableaux of rectangular shape (n-d)^d. Computing by a different method, we show that the answer in this case is also given by the number of standard Young tableaux of shape (n-d)^d that are invariant under N/r iterations of jeu de taquin promotion. Together, these two results give a new proof of Rhoades' cyclic sieving theorem for promotion on rectangular tableaux. We prove analogous results for dihedral group actions.
We use tunable-laser ARPES to study the electronic properties of the prototypical multiband BCS superconductor MgB2. Our data reveal a strong renormalization of the dispersion (kink) at ~65 meV, which is caused by coupling of electrons to the E2g phonon mode. In contrast to cuprates, the 65 meV kink in MgB2 does not change significantly across Tc. More interestingly, we observe strong coupling to a second, lower-energy collective mode at a binding energy of 10 meV. This excitation vanishes above Tc and is likely a signature of the elusive Leggett mode.
The Gerda experiment, designed to search for the neutrinoless double beta decay of 76Ge, has successfully completed its first data collection. No signal excess is found, and a lower limit on the half-life of the process is set: T1/2 > 2.1x10^25 yr (90% CL). After a review of the experimental setup and of the main Phase I results, the hardware upgrade for Gerda Phase II is described, and the physics reach of the new data collection is reported.
We introduce SODA, a self-supervised diffusion model, designed for representation learning. The model incorporates an image encoder, which distills a source view into a compact representation, that, in turn, guides the generation of related novel views. We show that by imposing a tight bottleneck between the encoder and a denoising decoder, and leveraging novel view synthesis as a self-supervised objective, we can turn diffusion models into strong representation learners, capable of capturing visual semantics in an unsupervised manner. To the best of our knowledge, SODA is the first diffusion model to succeed at ImageNet linear-probe classification, and, at the same time, it accomplishes reconstruction, editing and synthesis tasks across a wide range of datasets. Further investigation reveals the disentangled nature of its emergent latent space, that serves as an effective interface to control and manipulate the model's produced images. All in all, we aim to shed light on the exciting and promising potential of diffusion models, not only for image generation, but also for learning rich and robust representations.
We review some recent results obtained in studying superspace formulations of 2D N=(4,4) matter-coupled supergravity. For a superspace geometry described by the minimal supergravity multiplet, we first describe how to reduce to components the chiral integral by using ``ectoplasm'' superform techniques as in arXiv:0907.5264 and then we review the bi-projective superspace formalism introduced in arXiv:0911.2546. After that, we elaborate on the curved bi-projective formalism providing a new result: the solution of the covariant type-I twisted multiplet constraints in terms of a weight-(-1,-1) bi-projective superfield.
Magnetic skyrmions are nanoscale spin textures touted as next-generation computing elements. When subjected to lateral currents, skyrmions move at considerable speeds. Their topological charge results in an additional transverse deflection known as the skyrmion Hall effect (SkHE). While promising, their dynamic phenomenology with current, skyrmion size, geometric effects and disorder remain to be established. Here we report on the ensemble dynamics of individual skyrmions forming dense arrays in Pt/Co/MgO wires by examining over 20,000 instances of motion across currents and fields. The skyrmion speed reaches 24 m/s in the plastic flow regime and is surprisingly robust to positional and size variations. Meanwhile, the SkHE saturates at $\sim 22^\circ$, is substantially reshaped by the wire edge, and crucially increases weakly with skyrmion size. Particle model simulations suggest that the SkHE size dependence - contrary to analytical predictions - arises from the interplay of intrinsic and pinning-driven effects. These results establish a robust framework to harness SkHE and achieve high-throughput skyrmion motion in wire devices.
This short note considers the effects of quantum theory on the linear evolution of the magnetic fields during and after inflation. The analysis appears to show that the magnetic fields decay exponentially in the high-temperature radiation era due to a combination of ohmic dissipation and vacuum polarisation.
Sensitivity analysis (SA) is a procedure for studying how sensitive the output results of large-scale mathematical models are to uncertainties in the input data. The models are described as systems of partial differential equations, and such systems often contain a large number of input parameters. It is therefore important to know how sensitive the solution is to uncontrolled variations or uncertainties in the input parameters of the model. Algorithms based on the analysis of variance (ANOVA) technique for calculating numerical indicators of sensitivity, together with computationally efficient Monte Carlo integration techniques, have recently been developed by the authors. They have been successfully applied to sensitivity studies of air pollution levels calculated by the Unified Danish Eulerian Model (UNI-DEM) with respect to several important input parameters. In this paper a comprehensive theoretical and experimental study of the Monte Carlo algorithm based on \textit{symmetrised shaking} of Sobol sequences is carried out. It is proven that this algorithm has an optimal rate of convergence for functions with continuous and bounded second derivatives, in terms of probability and mean square error. Extensive numerical experiments with Monte Carlo, quasi-Monte Carlo (QMC) and scrambled quasi-Monte Carlo algorithms based on Sobol sequences are performed to support the theoretical studies and to analyze the applicability of the algorithms to various classes of problems. The numerical tests show that the Monte Carlo algorithm based on \textit{symmetrised shaking} of Sobol sequences gives reliable results for the multidimensional integration problems under consideration.
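As a point of reference for the kind of comparison described above, the snippet below contrasts crude Monte Carlo with randomized QMC based on Owen-scrambled Sobol points as provided by SciPy, on a simple smooth test integrand; it shows the generic scrambled-Sobol baseline only and is not the symmetrised-shaking algorithm studied in the paper.

```python
# Illustration only: crude Monte Carlo vs randomized quasi-Monte Carlo built on
# Owen-scrambled Sobol points (SciPy's implementation), on a smooth test integrand.
import numpy as np
from scipy.stats import qmc

dim, n = 8, 2 ** 12
f = lambda x: np.prod(x + 0.5, axis=1)       # exact integral over [0,1]^dim is 1

rng = np.random.default_rng(0)
mc_est = f(rng.random((n, dim))).mean()      # crude Monte Carlo estimate

sobol = qmc.Sobol(d=dim, scramble=True, seed=0)
qmc_est = f(sobol.random(n)).mean()          # scrambled Sobol (randomized QMC) estimate

print(f"crude MC error:      {abs(mc_est - 1.0):.2e}")
print(f"scrambled QMC error: {abs(qmc_est - 1.0):.2e}")
```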
The amount of screening of a proton in a metal, migrating under the influence of an applied electric field, is calculated using different theoretical formulations. First, the lowest-order screening expression derived by Sham (1975) is evaluated. In addition, 'exact' expressions derived according to different approaches are evaluated. For a proton in a metal modeled as a jellium the screening appears to be 15 +/- 10 %, which is neither negligible nor reconcilable with the controversial full-screening point of view of Bosvieux and Friedel (1962). In reconsidering the theory of electromigration, a new simplified linear-response expression for the driving force is shown to lead to essentially the same result as found by Sorbello (1985), who used a rather complicated technique. The expressions allow for a reduction such that only the scattering phase shifts of the migrating impurity are required. Finally it is shown that the starting formula for the driving force of Bosvieux and Friedel leads exactly to the zero-temperature limit of well-established linear response descriptions, by which the sting of the controversy has been removed.
We use a recently constructed linearized soliton sector perturbation theory to calculate the form factors relevant to the elastic scattering of ultrarelativistic mesons off of nonrelativistic kinks. Both localized kink wave packets and also delocalized momentum eigenstate kinks are considered. In the delocalized case, the leading term is just the classical kink solution, as was found by Goldstone and Jackiw. The leading delocalized quantum correction agrees with that found by Gervais, Jevicki and Sakita in the $\phi^4$ model and Weisz in the Sine-Gordon model. In the case of localized kink wave packets, some corrections are found which scale with the wave packet width, and so will be relevant for the coherent scattering of mesons off of kink wave packets.
Nuclear Star Clusters (NSCs) are often present in spiral galaxies, as are resolved Stellar Nuclei (SNi) in the centres of elliptical galaxies. Ever-growing observational data indicate the existence of correlations between the properties of these very dense central star aggregates and those of their host galaxies, which constitute a significant constraint on the validity of theoretical models of their origin and formation. In the framework of the well-known 'migratory and merger' model for NSC and SN formation, in this paper we first obtain, by a simple argument, the expected scaling of the NSC/SN mass with both time and parent-galaxy velocity dispersion in the case of dynamical friction as the dominant effect on the globular cluster system evolution. This generalizes previous results by \cite{TrOsSp} and is in good agreement with available observational data showing a shallow correlation between NSC/SN mass and galactic bulge velocity dispersion. Moreover, we give statistical relevance to the predictions of this formation model, obtaining a set of parameters to correlate with the galactic host parameters. We find that the correlations between the masses of NSCs in the migratory model and the global properties of the hosts reproduce the observed correlations quite well, supporting the validity of the migratory-merger model. In particular, one important result is the flattening or even decrease of the NSC/SN mass obtained by the merger model as a function of the galaxy mass for high values of the galactic mass, i.e. $\gtrsim 3\times 10^{11}$M$_\odot$, in agreement with some growing observational evidence.
Consider a quadratic rational self-map of the Riemann sphere such that one critical point is periodic of period 2, and the other critical point lies on the boundary of its immediate basin of attraction. We will give explicit topological models for all such maps. We also discuss the corresponding parameter picture.
Application size and complexity are the underlying cause of numerous security vulnerabilities in code. In order to mitigate the risks arising from such vulnerabilities, various techniques have been proposed to isolate the execution of sensitive code from the rest of the application and from other software on the platform (e.g. the operating system). However, even with these partitioning techniques, it is not immediately clear exactly how they can and should be used to partition applications: what overall partitioning scheme should be followed, and what should the granularity of the partitions be? To some extent, this is dependent on the capabilities and performance of the partitioning technology in use. For this work, we focus on the upcoming Intel Software Guard Extensions (SGX) technology as the state-of-the-art in this field. SGX provides a trusted execution environment, called an enclave, that protects the integrity of the code and the confidentiality of the data inside it from other software, including the operating system. We present a novel framework consisting of four possible schemes under which an application can be partitioned. These schemes range from coarse-grained partitioning, in which the full application is included in a single enclave, to ultra-fine partitioning, in which each application secret is protected in an individual enclave. We explain the specific security benefits provided by each of the partitioning schemes and discuss how the performance of the application would be affected. To compare the different partitioning schemes, we have partitioned OpenSSL using each of the four schemes. We discuss SGX properties together with the implications of our design choices in this paper.
The performance of lithium and sodium ion batteries relies notably on the availability of carbon electrodes with controllable porous structure and chemical composition. This work reports a facile synthesis of well-defined porous N-doped carbons (NPCs) using a poly(ionic liquid) (PIL) as precursor and graphene oxide (GO)-stabilized poly(methyl methacrylate) (PMMA) nanoparticles as sacrificial template. The GO-stabilized PMMA nanoparticles were first prepared and then decorated with a thin PIL coating before carbonization. The resulting NPCs reached a satisfactory specific surface area of up to 561 m2/g and a hierarchically meso- and macroporous structure while keeping a nitrogen content of 2.6 wt %. Such NPCs delivered a high reversible charge/discharge capacity of 1013 mA h/g over 200 cycles at 0.4 A/g for lithium ion batteries (LIBs), and showed a good capacity of 204 mA h/g over 100 cycles at 0.1 A/g for sodium ion batteries (SIBs).
Radar simulation is a promising way to provide data cubes with effectiveness and accuracy for AI-based approaches to radar applications. This paper develops a channel simulator to generate frequency-modulated continuous-wave (FMCW) multiple-input multiple-output (MIMO) radar signals. In the proposed simulation framework, an open-source animation tool called Blender is utilized to model the scenarios and render animations. The embedded ray-tracing (RT) engine traces the radar propagation paths, i.e., the distance and signal strength of each path. The beat-signal models of time-division multiplexing (TDM) MIMO are adapted to the RT outputs. Finally, environment-based models are simulated to validate the framework.
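For reference, a textbook single-target relation that FMCW beat-signal models of this kind are typically built on (a standard result, not specific to this paper's implementation): a chirp of bandwidth $B$ and duration $T_c$ has slope $S = B/T_c$, and a target at range $R$ with radial velocity $v$ produces a beat frequency
$$f_b \approx \frac{2 S R}{c} + \frac{2 v f_c}{c},$$
where $c$ is the speed of light and $f_c$ the carrier frequency. In TDM-MIMO the transmitters chirp in alternating time slots, so the Doppler-induced phase progression across chirps and across the virtual channels encodes velocity and angle.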
The structure and elastic properties of (5,5) and (10,10) nanotubes, as well as barriers for relative rotation of the walls and their relative sliding along the axis in a double-walled (5,5)@(10,10) carbon nanotube, are calculated using the density functional method. The results of these calculations are the basis for estimating the following physical quantities: shear strengths and diffusion coefficients for relative sliding along the axis and rotation of the walls, as well as frequencies of relative rotational and translational oscillations of the walls. The commensurability-incommensurability phase transition is analyzed. The length of the incommensurability defect is estimated on the basis of ab initio calculations. It is proposed that the (5,5)@(10,10) double-walled carbon nanotube be used as a plain bearing. The possibility of experimental verification of the results is discussed.
A nilspace system is a generalization of a nilsystem, consisting of a compact nilspace X equipped with a group of nilspace translations acting on X. Nilspace systems appear in different guises in several recent works, and this motivates the study of these systems per se as well as their relation to more classical types of systems. In this paper we study morphisms of nilspace systems, i.e., nilspace morphisms with the additional property of being consistent with the actions of the given translations. A nilspace morphism does not necessarily have this property, but one of our main results shows that it factors through some other morphism which does have the property. As an application we obtain a strengthening of the inverse limit theorem for compact nilspaces, valid for nilspace systems. This is used in work of the first and third named authors to generalize the celebrated structure theorem of Host and Kra on characteristic factors.
We investigate the spectral and timing signatures of the internal-shock model for blazars. For this purpose, we develop a semi-analytical model for the time-dependent radiative output from internal shocks arising from colliding relativistic shells in a blazar jet. The emission through synchrotron and synchrotron-self Compton (SSC) radiation as well as Comptonization of an isotropic external radiation field are taken into account. We evaluate the discrete correlation function (DCF) of the model light curves in order to evaluate features of photon-energy dependent time lags and the quality of the correlation, represented by the peak value of the DCF. The almost completely analytic nature of our approach allows us to study in detail the influence of various model parameters on the resulting spectral and timing features. This paper focuses on a range of parameters in which the gamma-ray production is dominated by Comptonization of external radiation, most likely appropriate for gamma-ray bright flat-spectrum radio quasars (FSRQs) or low-frequency peaked BL Lac objects (LBLs). In most cases relevant for FSRQs and LBLs, the variability of the optical emission is highly correlated with the X-ray and high-energy (HE: > 100 MeV) gamma-ray emission. Our baseline model predicts a lead of the optical variability with respect to the higher-energy bands by 1 - 2 hours and of the HE gamma-rays before the X-rays by about 1 hour. We show that variations of certain parameters may lead to changing signs of inter-band time lags, potentially explaining the lack of persistent trends of time lags in most blazars.
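For readers unfamiliar with the DCF, the sketch below implements a generic discrete correlation function in the spirit of Edelson & Krolik (1988) and recovers the lag of a toy pair of unevenly sampled light curves; it is illustrative only, omits the measurement-error terms, and is not the implementation used in the paper.

```python
# Generic discrete correlation function (DCF) for unevenly sampled light curves.
import numpy as np

def dcf(t1, a, t2, b, lag_bins):
    """DCF of two light curves a(t1), b(t2), averaged within the given lag bins."""
    a_n = (a - a.mean()) / a.std()
    b_n = (b - b.mean()) / b.std()
    lags = t2[None, :] - t1[:, None]          # pairwise lags
    udcf = a_n[:, None] * b_n[None, :]        # pairwise correlation terms
    centers, values = [], []
    for lo, hi in zip(lag_bins[:-1], lag_bins[1:]):
        mask = (lags >= lo) & (lags < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            values.append(udcf[mask].mean())
    return np.array(centers), np.array(values)

# Toy example: band B lags band A by 1.0 time unit; the DCF should peak near +1.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 400))
a = np.sin(2 * np.pi * t / 20.0) + 0.1 * rng.normal(size=t.size)
b = np.sin(2 * np.pi * (t - 1.0) / 20.0) + 0.1 * rng.normal(size=t.size)

centers, values = dcf(t, a, t, b, np.arange(-10, 10.5, 0.5))
print("peak lag ~", centers[np.argmax(values)])
```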
Many business applications involve adversarial relationships in which both sides adapt their strategies to optimize their opposing benefits. One of the key characteristics of these applications is the wide range of strategies that an adversary may choose as they adapt their strategy dynamically to sustain benefits and evade authorities. In this paper, we present a novel way of approaching these types of applications, in particular in the context of Anti-Money Laundering. We provide a mechanism through which diverse, realistic and new unobserved behavior may be generated to discover potential unobserved adversarial actions to enable organizations to preemptively mitigate these risks. In this regard, we make three main contributions. (a) Propose a novel behavior-based model as opposed to individual transactions-based models currently used by financial institutions. We introduce behavior traces as enriched relational representation to represent observed human behavior. (b) A modelling approach that observes these traces and is able to accurately infer the goals of actors by classifying the behavior into money laundering or standard behavior despite significant unobserved activity. And (c) a synthetic behavior simulator that can generate new previously unseen traces. The simulator incorporates a high level of flexibility in the behavioral parameters so that we can challenge the detection algorithm. Finally, we provide experimental results that show that the learning module (automated investigator) that has only partial observability can still successfully infer the type of behavior, and thus the simulated goals, followed by customers based on traces - a key aspiration for many applications today.
Kirigami-inspired metamaterials are attracting increasing interest because of their ability to achieve extremely large strains and shape changes via out-of-plane buckling. While in flat kirigami sheets the ligaments buckle simultaneously as Euler columns, leading to a continuous phase transition, here we demonstrate that kirigami shells can also support discontinuous phase transitions. Specifically, we show via a combination of experiments, numerical simulations and theoretical analysis that in cylindrical kirigami shells the snapping-induced curvature inversion of the initially bent ligaments results in a pop-up process that first localizes near an imperfection and then, as the deformation is increased, progressively spreads through the structure. Notably, we find that the width of the transition zone as well as the stress at which propagation of the instability is triggered can be controlled by carefully selecting the geometry of the cuts and the curvature of the shell. Our study significantly expands the capabilities of existing kirigami metamaterials and opens avenues for the design of the next generation of responsive surfaces, as demonstrated by the design of a smart skin that significantly enhances the crawling efficiency of a simple linear actuator.
In this paper, we extend a spherical variant of the Kowalski-S{\l}odkowski theorem due to Li, Peralta, Wang and Wang. As a corollary, we prove that every 2-local map in the set of all surjective isometries (without assuming linearity) on a certain function space is in fact a surjective isometry. This gives an affirmative answer to the problem on 2-local isometries posed by Moln\'ar.
This paper presents Generative Adversarial Talking Head (GATH), a novel deep generative neural network that enables fully automatic facial expression synthesis of an arbitrary portrait with continuous action unit (AU) coefficients. Specifically, our model directly manipulates image pixels to make the unseen subject in the still photo express various emotions controlled by values of facial AU coefficients, while maintaining her personal characteristics, such as facial geometry, skin color and hair style, as well as the original surrounding background. In contrast to prior work, GATH is purely data-driven and it requires neither a statistical face model nor image processing tricks to enact facial deformations. Additionally, our model is trained from unpaired data, where the input image, with its auxiliary identity label taken from abundance of still photos in the wild, and the target frame are from different persons. In order to effectively learn such model, we propose a novel weakly supervised adversarial learning framework that consists of a generator, a discriminator, a classifier and an action unit estimator. Our work gives rise to template-and-target-free expression editing, where still faces can be effortlessly animated with arbitrary AU coefficients provided by the user.
It is shown that the generation linewidth of an auto-oscillator with a nonlinear frequency shift (i.e. an auto-oscillator in which frequency depends on the oscillation amplitude) is substantially larger than the linewidth of a conventional quasi-linear auto-oscillator due to the renormalization of the phase noise caused by the nonlinearity of the oscillation frequency. The developed theory, when applied to a spin-torque nano-contact auto-oscillator, predicts a minimum of the generation linewidth when the nano-contact is magnetized at a critical angle to its plane, corresponding to the minimum nonlinear frequency shift, in good agreement with recent experiments.
Non-autoregressive translation (NAT) achieves faster inference speed, but at the cost of worse accuracy compared with autoregressive translation (AT). Since AT and NAT can share model structure and AT is an easier task than NAT due to the explicit dependency on previous target-side tokens, a natural idea is to gradually shift the model training from the easier AT task to the harder NAT task. To smooth the shift from AT training to NAT training, in this paper, we introduce semi-autoregressive translation (SAT) as intermediate tasks. SAT contains a hyperparameter k, and each k value defines a SAT task with a different degree of parallelism. In particular, SAT covers AT and NAT as special cases: it reduces to AT when k = 1 and to NAT when k = N (N is the length of the target sentence). We design curriculum schedules to gradually shift k from 1 to N, with different pacing functions and numbers of tasks trained at the same time. We call our method task-level curriculum learning for NAT (TCL-NAT). Experiments on IWSLT14 De-En, IWSLT16 En-De, WMT14 En-De and De-En datasets show that TCL-NAT achieves significant accuracy improvements over previous NAT baselines and reduces the performance gap between NAT and AT models to 1-2 BLEU points, demonstrating the effectiveness of our proposed method.
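As an illustration of such a schedule, the sketch below maps a training step to a value of k under a few possible pacing functions; the function shapes, the schedule length and the target length N are hypothetical placeholders, not the exact settings used for TCL-NAT.

```python
# Sketch of a pacing schedule for the degree of parallelism k:
# k = 1 corresponds to AT, k = N (full target length) corresponds to NAT.
import math

def paced_k(step, total_steps, N, pacing="linear"):
    """Return the SAT parallelism k for the current training step."""
    progress = min(step / total_steps, 1.0)
    if pacing == "linear":
        frac = progress
    elif pacing == "exponential":             # spend more steps on small k first
        frac = (math.exp(progress) - 1.0) / (math.e - 1.0)
    elif pacing == "logarithmic":             # move to large k quickly
        frac = math.log1p(progress * (math.e - 1.0))
    else:
        raise ValueError(pacing)
    return max(1, min(N, round(1 + frac * (N - 1))))

N = 20                                        # hypothetical target length
for step in range(0, 10001, 2500):
    print(step, [paced_k(step, 10000, N, p) for p in ("linear", "exponential", "logarithmic")])
```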
We obtain many results and solve some problems about feebly compact paratopological groups. We obtain necessary and sufficient conditions for such a group to be topological; one of them is quasiregularity. We prove that each $2$-pseudocompact paratopological group is feebly compact and that each Hausdorff $\sigma$-compact feebly compact paratopological group is a compact topological group. Particular attention is paid to periodic and topologically periodic groups. We construct examples of various compact-like paratopological groups which are not topological groups, among them a $T_0$ sequentially compact group, a $T_1$ $2$-pseudocompact group, a functionally Hausdorff countably compact group (under the axiomatic assumption that there is an infinite torsion-free abelian countably compact topological group without non-trivial convergent sequences), and a functionally Hausdorff second countable sequentially pracompact group. We investigate cone topologies of paratopological groups, which provide a general tool to construct pathological examples, especially examples of compact-like paratopological groups with discontinuous inversion. We find a simple interplay between the algebraic properties of a basic cone subsemigroup $S$ of a group $G$ and compact-like properties of two basic semigroup topologies generated by $S$ on the group $G$. We prove that the product of a family of feebly compact paratopological groups is feebly compact, and that a paratopological group $G$ is feebly compact provided it has a feebly compact normal subgroup $H$ such that the quotient group $G/H$ is feebly compact.