We supply a detailed proof of the result by P.S. Green and T. H\"ubsch that all complete intersection Calabi--Yau 3-folds in products of projective spaces are connected through projective conifold transitions (known as the standard web). We also introduce a subclass of small transitions, which we call primitive small transitions, and study this subclass. More precisely, given a small projective resolution $\pi : \widehat{X} \rightarrow X$ of a Calabi--Yau 3-fold $X$, we show that if the natural closed immersion $Def(\widehat{X}) \hookrightarrow Def(X)$ is an isomorphism then $X$ has only ODPs as singularities.
We replace the Hamiltonian with a modular Hamiltonian in the spectral form factor and the level spacing distribution function. This study establishes a connection between quantities in Quantum Entanglement and Quantum Chaos. To obtain a universal study of Quantum Entanglement, we consider the Gaussian random 2-qubit model. The maximum violation of Bell's inequality shows a positive correlation with the entanglement entropy; the violation therefore plays a role equivalent to that of Quantum Entanglement. We first provide an analytical estimate of the relation between quantum entanglement quantities and the dip when the subregion contains only one qubit. The time of the first dip is a monotone function of the entanglement entropy. At late times, the dynamics in a subregion is independent of the initial state, which is one of the signaling conditions of classical chaos. We extend our analysis to the Gaussian random 3-qubit state and find a similar result. The simulation shows that the level spacing distribution function approaches the GUE at late times. Finally, we develop a technique within QFT for computing the spectral form factor through its relation to an $n$-sheet manifold. We apply this technique to a single interval in CFT$_2$ and to the spherical entangling surface in $\mathcal{N}=4$ super Yang-Mills theory. The result equals one in both cases, while the R\'enyi entropy can depend on the R\'enyi index. For CFT$_2$, this indicates the difference between a continuum and a discrete spectrum.
We report an unexpectedly large rectification in a simple quantum wire with correlated site potentials. The external electric field associated with the voltage bias leads to unequal charge currents for the two polarities of the external bias, and this effect is further enhanced by incorporating asymmetry in the wire-to-electrode coupling. Our calculations suggest that in some cases almost one hundred percent rectification is obtained over a wide bias window. This performance is robust against disorder configurations, and thus we expect an experimental verification of our theoretical analysis in the near future.
We prove that every positive semidefinite matrix over the natural numbers that is eventually 0 in each row and column can be factored as the product of an upper triangular matrix times a lower triangular matrix. We also extend some known results about factorization with respect to tensor products of nest algebras. Our proofs use the theory of reproducing kernel Hilbert spaces.
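As a finite-dimensional illustration of this kind of factorization (a sketch only, not the paper's operator-theoretic argument via reproducing kernel Hilbert spaces; the function name and the strictly positive definite test matrix are assumptions), a positive definite matrix can be split into an upper triangular factor times a lower triangular factor by running a Cholesky decomposition on the index-reversed matrix:

```python
import numpy as np

def ul_factorization(A):
    """Factor a symmetric positive definite matrix A as A = U @ L,
    with U upper triangular and L = U.T lower triangular,
    by running Cholesky on the index-reversed matrix."""
    n = A.shape[0]
    J = np.eye(n)[::-1]                  # reversal (exchange) permutation
    L0 = np.linalg.cholesky(J @ A @ J)   # J A J = L0 L0^T, L0 lower triangular
    U = J @ L0 @ J                       # reversing rows and columns makes it upper triangular
    return U, U.T

# quick check on a random positive definite matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)              # positive definite
U, L = ul_factorization(A)
assert np.allclose(U, np.triu(U)) and np.allclose(L, np.tril(L))
assert np.allclose(U @ L, A)
```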
In this paper we give a new foundational, categorical formulation for operations and relations and objects parameterizing them. This generalizes and unifies the theory of operads and all their cousins including but not limited to PROPs, modular operads, twisted (modular) operads, properads, hyperoperads, their colored versions, as well as algebras over operads and an abundance of other related structures, such as crossed simplicial groups, the augmented simplicial category or FI--modules. The usefulness of this approach is that it allows us to handle all the classical as well as more esoteric structures under a common framework and we can treat all the situations simultaneously. Many of the known constructions simply become Kan extensions. In this common framework, we also derive universal operations, such as those underlying Deligne's conjecture, construct Hopf algebras as well as perform resolutions, (co)bar transforms and Feynman transforms which are related to master equations. For these applications, we construct the relevant model category structures. This produces many new examples.
We first recall Solomon's relations for Welschinger's invariants counting real curves in real symplectic fourfolds, announced in \cite{Jake2} and established in \cite{RealWDVV}, and the WDVV-style relations for Welschinger's invariants counting real curves in real symplectic sixfolds with some symmetry established in \cite{RealWDVV3}. We then explicitly demonstrate that in some important cases (projective spaces with standard conjugations, real blowups of the projective plane, and two- and three-fold products of the one-dimensional projective space with two involutions each), these relations provide complete recursions determining all Welschinger's invariants from basic input. We include extensive tables of Welschinger's invariants in low degrees obtained from these recursions with {\it Mathematica}. These invariants provide lower bounds for counts of real rational curves, including with curve insertions in smooth algebraic threefolds.
We study vacuum structure of N=1 supersymmetric quiver gauge theories which can be realized geometrically by D brane probes wrapping cycles of local Calabi-Yau three-folds. In particular, we show that the A_2 quiver theory with gauge group U(N_1) \times U(N_2) with N_1 / 2 < N_2 < 2N_1 / 3 has a regime with an infrared free description that is partially magnetic and partially electric. Using this dual description, we show that the model has a landscape of inequivalent meta-stable vacua where supersymmetry is dynamically broken and all the moduli are stabilized. Each vacuum has distinct unbroken gauge symmetry. B-terms generated by the supersymmetry breaking give rise to gaugino masses at one-loop, and we are left with the bosonic pure Yang-Mills theory in the infrared. We also identify the supersymmetric vacua in this model using their infrared free descriptions and show that the decay rates of the supersymmetry breaking vacua into the supersymmetric vacua can be made parametrically small.
The main scope of this chapter is metrics defined for coding and decoding purposes, mainly for block codes.
Pion-nucleon elastic scattering in the dominant $P_{33}$ channel is examined in the model in which the interaction is of the form $\pi + N\leftrightarrow N, \Delta(1232)$. New expressions are found for the elastic pion-nucleon scattering amplitude which differ from existing formulas both in the kinematics and in the treatment of the renormalization of the nucleon mass and coupling constant. Fitting the model to the phase shifts in the $P_{33}$ channel does not uniquely fix the parameters of the model. The cutoff for the pion-nucleon form factor is found to lie in the range $\beta = 750\pm350$ MeV/c. The masses of the nucleon and the $\Delta$ which would arise if there were no coupling to mesons are found to be $m_{_N}^{(0)}= 1200\pm 200$ MeV and $m_\Delta^{(0)} = 1500\pm 200$ MeV. The difference in these bare masses, a quantity which would be accounted for by a residual gluon interaction, is found to be $\delta m^{(0)}=350\pm 100$ MeV.
Real topological phases protected by the spacetime inversion (P T) symmetry are a current research focus. The basis is that the P T symmetry endows a real structure in momentum space, which leads to Z2 topological classifications in 1D and 2D. Here, we provide solutions to two outstanding problems in the diagnosis of real topology. First, based on the stable equivalence in K-theory, we clarify that the 2D topological invariant remains well defined in the presence of nontrivial 1D invariant, and we develop a general numerical approach for its evaluation, which was hitherto unavailable. Second, under the unit-cell convention, noncentered P T symmetries assume momentum dependence, which violates the presumption in previous methods for computing the topological invariants. We clarify the classifications for this case and formulate the invariants by introducing a twisted Wilson-loop operator for both 1D and 2D. A simple model on a rectangular lattice is constructed to demonstrate our theory, which can be readily realized using artificial crystals.
The emerging graph Transformers have achieved impressive performance in graph representation learning over graph neural networks (GNNs). In this work, we regard the self-attention mechanism, the core module of graph Transformers, as a two-step aggregation operation on a fully connected graph. Because it generates only positive attention values, the self-attention mechanism is equivalent to a smoothing operation over all nodes and thus preserves only low-frequency information. However, capturing only low-frequency information is insufficient for learning the complex relations of nodes on diverse graphs, such as heterophilic graphs where high-frequency information is crucial. To this end, we propose a Signed Attention-based Graph Transformer (SignGT) to adaptively capture various frequency information from graphs. Specifically, SignGT develops a new signed self-attention mechanism (SignSA) that produces signed attention values according to the semantic relevance of node pairs. In this way, the diverse frequency information between different node pairs is carefully preserved. Besides, SignGT introduces a structure-aware feed-forward network (SFFN) with a neighborhood bias to preserve local topology information. SignGT thus learns informative node representations from both long-range dependencies and local topology. Extensive empirical results on both node-level and graph-level tasks indicate the superiority of SignGT over state-of-the-art graph Transformers as well as advanced GNNs.
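A minimal NumPy sketch of the signed-attention idea (my own illustration with assumed names, not the authors' SignSA implementation): the attention magnitude comes from a softmax over query-key scores, while a separate sign, taken here from feature cosine similarity as a stand-in for semantic relevance, lets the aggregation act as a low-pass or high-pass filter per node pair.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def signed_attention(X, Wq, Wk, Wv):
    """Toy signed self-attention on node features X (n_nodes x d).
    Magnitude: softmax of scaled dot-product scores (non-negative, rows sum to 1).
    Sign: +1 for positively correlated (semantically similar) node pairs,
          -1 otherwise, so high-frequency differences can be preserved."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    magnitude = softmax(scores, axis=-1)
    # cosine similarity of raw features as a stand-in for "semantic relevance"
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    sign = np.sign(Xn @ Xn.T)             # +1 / -1 per node pair
    attn = sign * magnitude               # signed attention values
    return attn @ V

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 8))
W = [rng.standard_normal((8, 8)) for _ in range(3)]
out = signed_attention(X, *W)
print(out.shape)   # (6, 8)
```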
Chiral plasmonic nanostructures will be of increasing importance for future applications in nano-optics and metamaterials. Their sensitivity to incident circularly polarized light, combined with their ability to localize electromagnetic fields extremely, renders them ideal candidates for chiral sensing and for all-optical information processing. Here, the resonant modes of single plasmonic helices are investigated. We find that a single plasmonic helix can be efficiently excited with circularly polarized light of both equal and opposite handedness relative to that of the helix. An analytic model provides resonance conditions matching the results of full-field modeling. The underlying geometric considerations explain the mechanism of excitation and deliver quantitative design rules for plasmonic helices that are resonant in a desired wavelength range. Based on the developed analytical design tool, single silver helices were fabricated and optically characterized. They show the expected strong chiroptical response to both handednesses in the targeted visible range. With a value of 0.45, the experimentally realized dissymmetry factor is the largest obtained to date for single plasmonic helices in the visible range.
We investigate, using mean-field theory and simulation, the effect of asymmetry on the critical behavior and probability density of Bak-Sneppen models. Two kinds of anisotropy are investigated: (i) different numbers of sites to the left and right of the central (minimum) site are updated and (ii) sites to the left and right of the central site are renewed in different ways. Of particular interest is the crossover from symmetric to asymmetric scaling for weakly asymmetric dynamics, and the collapse of data with different numbers of updated sites but the same degree of asymmetry. All non-symmetric rules studied fall, independent of the degree of asymmetry, in the same universality class. Conversely, symmetric variants reproduce the exponents of the original model. Our results confirm the existence of two symmetry-based universality classes for extremal dynamics.
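A minimal sketch of an anisotropic Bak-Sneppen update of type (i), written under the standard formulation of the model (the parameter names are assumptions, not the authors' code): at every step the minimum-fitness site and k_left neighbors to its left plus k_right neighbors to its right receive fresh random fitness values.

```python
import numpy as np

def bak_sneppen_asymmetric(L=200, steps=10_000, k_left=0, k_right=2, seed=0):
    """Anisotropic Bak-Sneppen dynamics on a ring of L sites.
    k_left / k_right control how many neighbours on each side of the
    minimum-fitness site are renewed, implementing anisotropy (i)."""
    rng = np.random.default_rng(seed)
    f = rng.random(L)
    minima = np.empty(steps)
    for t in range(steps):
        i = np.argmin(f)
        minima[t] = f[i]
        idx = (i + np.arange(-k_left, k_right + 1)) % L   # minimum site plus neighbours
        f[idx] = rng.random(idx.size)
    return f, minima

f, minima = bak_sneppen_asymmetric()
print("largest renewed-site minimum observed:", minima.max())
```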
We study the extended supersymmetric quantum mechanics, with supercharges transforming in the fundamental representation of U(N|M), as realized in certain one-dimensional nonlinear sigma models with Kaehler manifolds as target space. We discuss the symmetry algebra characterizing these models and, using operatorial methods, compute the heat kernel in the limit of short propagation time. These models are relevant for studying the quantum properties of a certain class of higher spin field equations in first quantization.
The much debated issue of the transverse single spin asymmetry A_N observed in the inclusive large P_T production of a single hadron in pp interactions, p(transv. polarized) p --> pion X, is considered in a TMD factorization scheme. A previous result [1,2] stating that the maximum contribution of the Collins effect is strongly suppressed, is revisited, correcting a numerical error. New estimates are given, adopting the Collins functions recently extracted from SIDIS and e+e- data, and phenomenological consequences are discussed.
We perform, for the first time, a dynamical system analysis of both the background and perturbation equations of $\Lambda$CDM cosmology and of the quintessence scenario with an exponential potential. In the former case the perturbations do not change the stability of the late-time attractor of the background equations, and the system still results in the dark-energy dominated, de Sitter solution, having passed through the correct dark-matter era with $\gamma\approx6/11$. In the case of quintessence, however, the incorporation of perturbations changes the stability and properties of the background evolution, and the only conditionally stable points present either exponentially growing matter clustering, which is not favored by observations, or Laplacian instabilities, and are thus not physically interesting. This result is a severe disadvantage of quintessence cosmology compared to the $\Lambda$CDM paradigm.
We derive an exact expression for the tachyon $\beta$-function for the Wess-Zumino-Witten model. We check our result up to three loops by calculating the three-loop tachyon $\beta$-function for a general non-linear $\sigma$-model with torsion, and then specialising to the case of the WZW model.
We report a photoinduced change of the coercive field, i.e., a photocoercivity effect (PCE), under very low intensity illumination of a low-doped (Ga,Mn)As ferromagnetic semiconductor. We find a strong correlation between the PCE and the sample resistivity. Spatially resolved dynamics of the magnetization reversal rule out any role of thermal heating in the origin of this PCE, and we propose a mechanism based on the light-induced lowering of the domain wall pinning energy. The PCE is local and reversible, allowing writing and erasing of magnetic images using light.
The conformational vibrations of Z-DNA with counterions are studied in the framework of a phenomenological model developed here. The structure of the left-handed double helix with counterions neutralizing the negatively charged phosphate groups of DNA is treated as an ion-phosphate lattice. The frequencies and Raman intensities of the modes of Z-DNA with Na+ and Mg2+ ions are calculated, and the low-frequency Raman spectra are constructed. In the spectral range around 150 cm-1, a new mode of ion-phosphate vibrations is found, which characterizes the vibrations of Mg2+ counterions. The results of our calculations show that the intensities of the Z-DNA modes are sensitive to the concentration of magnesium counterions. The obtained results describe well the experimental Raman spectra of Z-DNA.
Stochastic processes wherein the size of the state space is changing as a function of time offer models for the emergence of scale-invariant features observed in complex systems. I consider such a sample-space reducing (SSR) stochastic process that results in a random sequence of strictly decreasing integers $\{x(t)\}$, $0\le t \le \tau$, with boundary conditions $x(0) = N$ and $x(\tau)$ = 1. This model is shown to be exactly solvable: $\mathcal{P}_N(\tau)$, the probability that the process survives for time $\tau$ is analytically evaluated. In the limit of large $N$, the asymptotic form of this probability distribution is Gaussian, with mean and variance both varying logarithmically with system size: $\langle \tau \rangle \sim \ln N$ and $\sigma_{\tau}^{2} \sim \ln N$. Correspondence can be made between survival time statistics in the SSR process and record statistics of i.i.d. random variables.
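A minimal simulation sketch of the SSR process described above (an illustration, not the paper's analytical derivation): each realization starts at x(0) = N, jumps uniformly to a strictly smaller integer at every step, and stops at x = 1; averaging the survival time over many runs should reproduce the logarithmic scaling of both the mean and the variance.

```python
import numpy as np

def ssr_survival_time(N, rng):
    """One realization of the SSR process: x(0)=N, each step jumps to a
    uniformly chosen integer in {1, ..., x-1}, stopping at x=1.
    Returns tau, the number of steps taken."""
    x, tau = N, 0
    while x > 1:
        x = rng.integers(1, x)   # uniform on 1..x-1 (upper bound exclusive)
        tau += 1
    return tau

rng = np.random.default_rng(42)
for N in (10**2, 10**3, 10**4):
    taus = np.array([ssr_survival_time(N, rng) for _ in range(20_000)])
    # both sample mean and variance should grow roughly like ln N
    print(N, taus.mean(), taus.var(), np.log(N))
```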
Online conversations include more than just text. Increasingly, image-based responses such as memes and animated gifs serve as culturally recognized and often humorous responses in conversation. However, while NLP has broadened to multimodal models, conversational dialog systems have largely focused only on generating text replies. Here, we introduce a new dataset of 1.56M text-gif conversation turns and a new multimodal conversational model, Pepe the King Prawn, for selecting gif-based replies. We demonstrate that our model produces relevant and high-quality gif responses and, in a large randomized control trial of multiple models replying to real users, we show that our model replies with gifs that are significantly better received by the community.
We estimate differential rapidity cross sections for $\Psi$ and $\Upsilon$ production via p-Pb collisions at 8 TeV. We use the mixed heavy quark hybrid theory in which the $J/\Psi(1S),\Upsilon(1S),\Upsilon(2S)$ are standard mesons while the $\Psi(2S)$ and $\Upsilon(3S)$ are mixed hybrids, approximately 50% standard $|q \bar{q}\rangle$ states and 50% hybrid $|q \bar{q} g\rangle$ states. This is an extension of previous work on heavy-quark state production via A-A collisions at RHIC.
We show that any non-zero Banach space with a separable dual contains a totally disconnected, closed and bounded subset S of Hausdorff dimension 1 such that every Lipschitz function on the space is Fr\'echet differentiable somewhere in S.
We address the problem of identifying individual cetaceans from images showing the trailing edge of their fins. Given the trailing edge from an unknown individual, we produce a ranking of known individuals from a database. The nicks and notches along the trailing edge define an individual's unique signature. We define a representation based on integral curvature that is robust to changes in viewpoint and pose, and captures the pattern of nicks and notches in a local neighborhood at multiple scales. We explore two ranking methods that use this representation. The first uses a dynamic programming time-warping algorithm to align two representations, and interprets the alignment cost as a measure of similarity. This algorithm also exploits learned spatial weights to downweight matches from regions of unstable curvature. The second interprets the representation as a feature descriptor. Feature keypoints are defined at the local extrema of the representation. Descriptors for the set of known individuals are stored in a tree structure, which allows us to perform queries given the descriptors from an unknown trailing edge. We evaluate the top-k accuracy on two real-world datasets to demonstrate the effectiveness of the curvature representation, achieving top-1 accuracy scores of approximately 95% and 80% for bottlenose dolphins and humpback whales, respectively.
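A minimal sketch of the dynamic-programming time-warping step of the first ranking method (assumed names and a plain squared-distance cost; the learned spatial weights of the paper are omitted): two curvature sequences are aligned and the alignment cost is used as a dissimilarity score to rank known individuals.

```python
import numpy as np

def dtw_cost(a, b):
    """Dynamic-programming time-warping cost between two 1-D curvature
    sequences a and b; lower cost means the trailing edges are more similar."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def rank_individuals(query_curvature, database):
    """database: dict mapping individual id -> curvature sequence.
    Returns ids sorted from most to least similar to the query."""
    costs = {name: dtw_cost(query_curvature, curve) for name, curve in database.items()}
    return sorted(costs, key=costs.get)

rng = np.random.default_rng(0)
db = {f"id_{k}": rng.standard_normal(50).cumsum() for k in range(5)}
query = db["id_3"] + 0.05 * rng.standard_normal(50)   # noisy copy of id_3
print(rank_individuals(query, db)[0])                  # expected: id_3
```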
We mainly discuss the Wu classes $v(M)$ and the Steenrod operation $Sq$ of the topological blow-up $\tilde{M}$. Formulas for the Wu class $v(\tilde{M})$ and for the Steenrod operation $Sq$ are given. As an application, we use our results to describe a geometric obstruction.
The Reynolds-Averaged Navier-Stokes (RANS) approach remains a backbone of turbulence modeling due to its high cost-effectiveness. Its accuracy largely depends on a reliable closure model for the Reynolds stress anisotropy tensor. A considerable amount of work has aimed at improving traditional closure models, yet they remain unsatisfactory for some complex flow configurations. In recent years, advances in computing power have opened up a new way to address this problem: machine-learning-assisted turbulence modeling. In this paper, we employ neural networks to fully predict the Reynolds stress anisotropy tensor of turbulent channel flows at different friction Reynolds numbers, for both interpolation and extrapolation scenarios. Several generic neural networks of Multi-Layer Perceptron (MLP) type are trained with different input feature combinations to acquire a complete grasp of the role of each parameter. The best performance is yielded by the model with the dimensionless mean streamwise velocity gradient $\alpha$, the dimensionless wall distance $y^+$ and the friction Reynolds number $\mathrm{Re}_\tau$ as inputs. A deeper theoretical insight into the Tensor Basis Neural Network (TBNN) clarifies some remaining ambiguities found in the literature concerning its application of Pope's general eddy viscosity model. We emphasize the sensitivity of the TBNN to the constant tensor $\textbf{T}^{*(0)}$ on the turbulent channel flow data set, and propose a generalized $\textbf{T}^{*(0)}$, which considerably enhances its performance. Through a comparison between the MLP and the augmented TBNN model, both with $\{\alpha, y^+, \mathrm{Re}_\tau\}$ as the input set, we conclude that the former outperforms the latter and provides excellent interpolation and extrapolation predictions of the Reynolds stress anisotropy tensor in the specific case of turbulent channel flow.
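A minimal sketch of the kind of MLP regression used here, written in plain NumPy on synthetic placeholder data (the actual study trains on channel-flow data with $\{\alpha, y^+, \mathrm{Re}_\tau\}$ as inputs; the network width, learning rate and synthetic targets below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 3 inputs (alpha, y+, Re_tau), 6 outputs standing in
# for the independent components of the symmetric anisotropy tensor b_ij.
X = rng.standard_normal((2000, 3))
Y = np.tanh(X @ rng.standard_normal((3, 6)))   # placeholder target mapping

# One-hidden-layer MLP trained with plain gradient descent on squared error.
W1, b1 = 0.1 * rng.standard_normal((3, 32)), np.zeros(32)
W2, b2 = 0.1 * rng.standard_normal((32, 6)), np.zeros(6)
lr = 1e-2
for epoch in range(500):
    H = np.tanh(X @ W1 + b1)          # hidden layer
    P = H @ W2 + b2                   # predicted anisotropy components
    G = 2.0 * (P - Y) / len(X)        # gradient of the sample-averaged squared error
    gW2, gb2 = H.T @ G, G.sum(0)
    gH = G @ W2.T * (1.0 - H**2)      # backprop through tanh
    gW1, gb1 = X.T @ gH, gH.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE:", np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2))
```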
We consider a well posed SPDE$\colon dZ=(AZ+b(Z)) dt+dW(t),\,Z_0=x, $ on a separable Hilbert space $H$, where $A\colon H\to H$ is self-adjoint, negative and such that $A^{-1+\beta}$ is of trace class for some $\beta>0$, $b\colon H\to H$ is Lipschitz continuous and $W$ is a cylindrical Wiener process on $H$. We denote by $W_A(t)=\int_0^te^{(t-s)A}\,dW(s),\,t\in[0,T],$ the stochastic convolution. We prove, with the help of a formula for nonlinear transformations of Gaussian integrals due to R. Ramer, the following identity $$(P\circ Z_x^{-1})(\Phi) =\int_X\Phi(h+e^{\cdot A}x)\, \exp\left\{ -\tfrac12|\gamma_x(h)|^2_{ H_{Q_T}} + I(\gamma_x)(h)\right\} N_{Q_T}(dh), $$ where $ N_{Q_T}$ is the law of $W_A$ in $C([0,T],H)$, $ H_{Q_T}$ its Cameron--Martin space, $$ [\gamma_x(k)](t)=\int_0^t e^{(t-s)A}b(k(s)+e^{sA}x) ds,\quad t\in[0,T], \; k \in C([0,T],H) $$ and $I(\gamma_x) $ is the It\^o integral of $\gamma_x$. Some applications are discussed; in particular, when $b$ is dissipative we provide an explicit formula for the law of the stationary process and the invariant measure $\nu$ of the Markov semigroup $(P_t)$. Some concluding remarks are devoted to a similar problem with colored noise.
Charmless B decay modes $B \to \pi \pi, \pi K$ and $KK$ are systematically investigated with and without flavor SU(3) symmetry. Independent analyses of the $\pi \pi$ and $\pi K$ modes both favor a large ratio between the color-suppressed tree ($C$) and tree ($T$) diagrams, which suggests that they are more likely to originate from long-distance effects. The sizes of the QCD penguin diagrams extracted individually from the $\pi\pi$, $\pi K$ and $KK$ modes are found to follow a pattern of SU(3) breaking in agreement with naive factorization estimates. Global fits to these modes are done under various scenarios of SU(3) relations. The results show good determinations of the weak phase $\gamma$, consistent with the Standard Model (SM), but a large electro-weak penguin $(P_{\tmop{EW}})$ relative to $T + C$ with a large relative strong phase is favored, which requires a big enhancement of the color-suppressed electro-weak penguin ($P_{\tmop{EW}}^C$), compatible in size with but destructively interfering with $P_{\tmop{EW}}$ within the SM, or implies new physics. The possibility of sizable contributions from nonfactorizable diagrams such as $W$-exchange ($E$), annihilation ($A$) and penguin-annihilation ($P_A$) diagrams is investigated. The implications for the branching ratios and CP violation in the $KK$ modes are discussed.
General relativistic magnetohydrodynamic (GRMHD) simulations represent a fundamental tool to probe various underlying mechanisms at play during binary neutron star (BNS) and neutron star (NS) - black hole (BH) mergers. Contemporary flux-conservative GRMHD codes numerically evolve a set of conservative equations based on `conserved' variables which then need to be converted back into the fundamental (`primitive') variables. The corresponding conservative-to-primitive variable recovery procedure, based on root-finding algorithms, constitutes one of the core elements of such GRMHD codes. Recently, a new robust, accurate and efficient recovery scheme called RePrimAnd was introduced, which has demonstrated the ability to always converge to a unique solution. The scheme provides fine-grained error policies to handle invalid states caused by evolution errors, and also provides analytical bounds for the error of all primitive variables. In this work, we describe the technical aspects of implementing the RePrimAnd scheme into the GRMHD code Spritz. To check our implementation as well as to assess the various features of the scheme, we perform a number of GRMHD tests in three dimensions. Our tests, which include critical cases such as a NS collapse to a BH as well as the early evolution (~50 ms) of a Fishbone-Moncrief BH accretion disk system, show that RePrimAnd is able to support magnetized, low density environments with magnetic-to-fluid pressure ratios as high as 10^4, in situations where the previously used recovery scheme fails.
Optical trapping techniques have been extensively used in physics, biophysics, micro-chemistry, and micro-mechanics to allow trapping and manipulation of materials ranging from particles, cells, biological substances, and polymers to DNA and RNA molecules. In this Letter, we present a convenient and effective way to generate a novel trapping phenomenon, named trap split, in a conventional four-level double-$\Lambda$ atomic system driven by four femtosecond Laguerre-Gaussian laser pulses. We find that trap split can always be achieved when atoms are trapped by such laser pulses, as compared to Gaussian ones. This work would greatly facilitate the trapping and manipulation of particles and the generation of trap split. It may also suggest the possibility of extension into new research fields, such as micro-machining and biophysics.
Graphene is a monoatomic layer of graphite with carbon atoms arranged in a two-dimensional honeycomb lattice configuration. It has been known for more than sixty years that the electronic structure of graphene can be modelled by two-dimensional massless relativistic fermions. This property gives rise to numerous applications, both in applied sciences and in theoretical physics. Electronic circuits made out of graphene could take advantage of its high electron mobility, which is witnessed even at room temperature. In the theoretical domain, the Dirac-like behavior of graphene can simulate high-energy effects, such as the relativistic Klein paradox. Even more surprisingly, topological effects can be encoded in graphene, such as the generation of vortices, charge fractionalization and the emergence of anyons. The impact of these topological effects on graphene's electronic properties can be elegantly described by the Atiyah-Singer index theorem. Here we present a pedagogical account of this theorem and review its various applications to graphene. A direct consequence of the index theorem is charge fractionalization, usually known from the fractional quantum Hall effect. Charge fractionalization gives rise to the exciting possibility of realizing graphene-based anyons that, unlike bosons or fermions, exhibit fractional statistics. Besides being of theoretical interest, anyons are a strong candidate for performing error-free quantum information processing.
We use cosmological simulations of high-redshift minihalos to investigate the effect of dark matter annihilation (DMA) on the collapse of primordial gas. We numerically investigate the evolution of the gas as it assembles in a Population III stellar disk. We find that when DMA effects are neglected, the disk undergoes multiple fragmentation events beginning at ~ 500 yr after the appearance of the first protostar. On the other hand, DMA heating and ionization of the gas speeds the initial collapse of gas to protostellar densities and also affects the stability of the developing disk against fragmentation, depending on the DM distribution. We compare the evolution when we model the DM density with an analytical DM profile which remains centrally peaked, and when we simulate the DM profile using N-body particles (the 'live' DM halo). When utilizing the analytical DM profile, DMA suppresses disk fragmentation for ~ 3500 yr after the first protostar forms, in agreement with earlier work. However, when using a 'live' DM halo, the central DM density peak is gradually flattened due to the mutual interaction between the DM and the rotating gaseous disk, reducing the effects of DMA on the gas, and enabling secondary protostars of mass ~ 1 M_sol to be formed within ~ 900 yr. These simulations demonstrate that DMA is ineffective in suppressing gas collapse and subsequent fragmentation, rendering the formation of long-lived dark stars unlikely. However, DMA effects may still be significant in the early collapse and disk formation phase of primordial gas evolution.
Dodona (dodona.ugent.be) is an intelligent tutoring system for computer programming. It bridges the gap between assessment and learning by providing real-time data and feedback to help students learn better, teachers teach better and educational technology become more effective. We demonstrate how Dodona can be used as a virtual co-teacher to stimulate active learning and support challenge-based education in open and collaborative learning environments. We also highlight some of the opportunities (automated feedback, learning analytics, educational data mining) and challenges (scalable feedback, open internet exams, plagiarism) we faced in practice. Dodona is free for use and has more than 36 thousand registered users across many educational and research institutes, of which 15 thousand new users registered last year. Lowering the barriers for such a broad adoption was achieved by following best practices and extensible approaches for software development, authentication, content management, assessment, security and interoperability, and by adopting a holistic view on computer-assisted learning and teaching that spans all aspects of managing courses that involve programming assignments. The source code of Dodona is available on GitHub under the permissive MIT open-source license.
In spite of the large amount of existing neural models in the literature, there is a lack of a systematic review of the possible effect of choosing different initial conditions on the dynamic evolution of neural systems. In this short review we intend to give insights into this topic by discussing some published examples. First, we briefly introduce the different ingredients of a neural dynamical model. Secondly, we introduce some concepts used to describe the dynamic behavior of neural models, namely phase space and its portraits, time series, spectra, multistability and bifurcations. We end with an analysis of the irreversibility of processes and its implications on the functioning of normal and pathological brains.
We prove a positive mass theorem for spin initial data sets $(M,g,k)$ that contain an asymptotically flat end and a shield of dominant energy (a subset of $M$ on which the dominant energy scalar $\mu-|J|$ has a positive lower bound). In a similar vein, we show that for an asymptotically flat end $\mathcal{E}$ that violates the positive mass theorem (i.e. $\mathrm{E} < |\mathrm{P}|$), there exists a constant $R>0$, depending only on $\mathcal{E}$, such that any initial data set containing $\mathcal{E}$ must violate the hypotheses of Witten's proof of the positive mass theorem in an $R$-neighborhood of $\mathcal{E}$. This implies the positive mass theorem for spin initial data sets with arbitrary ends, and we also prove a rigidity statement. Our proofs are based on a modification of Witten's approach to the positive mass theorem involving an additional independent timelike direction in the spinor bundle.
We give a pedagogical analysis on $K$ matrix models describing the $\pi N$ scattering amplitude, in $S_{11}$ channel at low energies. We show how the correct use of analyticity in the $s$ channel and crossing symmetry in $t$ and $u$ channels leads to a much improved analytic behavior in the negative $s$ region, in agreement with the prediction from chiral perturbation amplitudes in its validity region. The analysis leads again to the conclusion that a genuine $N^*(890)$ resonance exists.
Many molecular "quantum" theories, like "quantum chemistry", conceal that they are actually quantum-classical approaches---they treat one set of molecular degrees of freedom classically while the remaining degrees of freedom follow the laws of quantum mechanics. We show that the prominent "frozen-nuclei approximation", which is often used in molecular control communities, is a further example of such a theory reduction: it treats the nuclei of the molecule as classical particles. Here, we demonstrate that ignoring the quantum nature of the nuclei has far-reaching consequences for the theoretical description of molecules. We analyse the symmetry of oriented and aligned rigid molecules with feasible permutations of identical nuclei and show that the presumption of fixed nuclei corresponds to a localized state that is impossible to create if the existence of stable nuclear spin isomers is a justifiable assumption for the controlled molecule. The results of studies on molecules containing identical nuclei have to be re-evaluated and properly anti-symmetrised, because for such molecules the premise of frozen nuclei is inherently wrong: molecular wave functions have to obey the spin-statistics theorem twice.
Partially observable Markov decision processes (POMDPs) are a useful model for decision-making under partial observability and stochastic actions. Partially Observable Monte-Carlo Planning (POMCP) is an online algorithm for deciding on the next action to perform, using a Monte-Carlo tree search approach based on the UCT (UCB applied to trees) algorithm for fully observable Markov decision processes. POMCP develops an action-observation tree and, at the leaves, uses a rollout policy to provide a value estimate for each leaf. As such, POMCP is highly dependent on the rollout policy to compute good estimates, and hence to identify good actions. Thus, many practitioners who use POMCP are required to create strong, domain-specific heuristics. In this paper, we model POMDPs as stochastic contingent planning problems. This allows us to leverage domain-independent heuristics that were developed in the planning community. We suggest two heuristics: the first is based on the well-known h_add heuristic from classical planning, and the second is computed in belief space, taking the value of information into account.
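A minimal sketch of the delete-relaxation h_add heuristic mentioned above for a STRIPS-like encoding (the encoding and names are assumptions, not the paper's implementation): each proposition is assigned the cheapest cost at which it can be achieved, and the heuristic value of a state is the sum of the costs of the goal propositions.

```python
import math

def h_add(state, goal, actions):
    """actions: list of (preconditions, add_effects, cost) with frozenset
    preconditions/effects.  Returns the additive heuristic value of `state`
    w.r.t. `goal`, or inf if the goal is unreachable under delete relaxation."""
    cost = {p: 0.0 for p in state}
    changed = True
    while changed:                      # Bellman-Ford-style fixpoint
        changed = False
        for pre, add, c in actions:
            if all(p in cost for p in pre):
                pre_cost = c + sum(cost[p] for p in pre)
                for q in add:
                    if pre_cost < cost.get(q, math.inf):
                        cost[q] = pre_cost
                        changed = True
    return sum(cost.get(g, math.inf) for g in goal)

actions = [
    (frozenset({"at_A"}), frozenset({"at_B"}), 1.0),
    (frozenset({"at_B"}), frozenset({"have_key"}), 1.0),
    (frozenset({"have_key", "at_B"}), frozenset({"door_open"}), 1.0),
]
print(h_add({"at_A"}, {"door_open"}, actions))   # -> 4.0
```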
We present PhysioLLM, an interactive system that leverages large language models (LLMs) to provide personalized health understanding and exploration by integrating physiological data from wearables with contextual information. Unlike commercial health apps for wearables, our system offers a comprehensive statistical analysis component that discovers correlations and trends in user data, allowing users to ask questions in natural language and receive generated personalized insights, and guides them to develop actionable goals. As a case study, we focus on improving sleep quality, given its measurability through physiological data and its importance to general well-being. Through a user study with 24 Fitbit watch users, we demonstrate that PhysioLLM outperforms both the Fitbit App alone and a generic LLM chatbot in facilitating a deeper, personalized understanding of health data and supporting actionable steps toward personal health goals.
In this note, we provide a tractable example of a polyhomogeneous solution space for electromagnetism at null infinity in four dimensions. The memory effect for electromagnetism is then derived from the polyhomogeneous solution space. We also comment on the connection between the electromagnetic memories and asymptotic symmetries.
Double barrier GaN/AlN resonant tunneling heterostructures have been grown by molecular beam epitaxy on the (0001) plane of commercially available bulk GaN substrates. Resonant tunneling diodes were fabricated; room temperature current-voltage measurements reveal the presence of a negative differential conductance region under forward bias with peak current densities of ~6.4 $kA/cm^2$ and a peak to valley current ratio of ~1.3. Reverse bias operation presents a characteristic turn-on threshold voltage intimately linked to the polarization fields present in the heterostructure. An analytic electrostatic model is developed to capture the unique features of polar-heterostructure-based resonant tunneling diodes; both the resonant and threshold voltages are derived as a function of the design parameters and polarization fields. Subsequent measurements confirm the repeatability of the negative conductance and demonstrate that III-nitride tunneling heterostructures are capable of robust resonant transport at room temperature.
It has recently been pointed out by Kowalski et al. (arXiv:0804.4142) that there is `an unexpected brightness of the SnIa data at z>1'. We quantify this statement by constructing a new statistic which is applicable directly to the Type Ia Supernova (SnIa) distance moduli. This statistic is designed to pick up systematic brightness trends of SnIa datapoints with respect to a best fit cosmological model at high redshifts. It is based on binning the normalized differences between the SnIa distance moduli and the corresponding best fit values in the context of a specific cosmological model (e.g. LCDM). We then focus on the highest redshift bin and extend its size towards lower redshifts until the Binned Normalized Difference (BND) changes sign (crosses 0) at a redshift z_c (bin size N_c). The bin size N_c of this crossing (the statistical variable) is then compared with the corresponding crossing bin size N_{mc} for Monte Carlo data realizations based on the best fit model. We find that the crossing bin size N_c obtained from the Union08 and Gold06 data with respect to the best fit LCDM model is anomalously large compared to N_{mc} of the corresponding Monte Carlo datasets obtained from the best fit LCDM in each case. In particular, only 2.2% of the Monte Carlo LCDM datasets are consistent with the Gold06 value of N_c, while the corresponding probability for the Union08 value of N_c is 5.3%. Thus, according to this statistic, the probability that the high redshift brightness bias of the Union08 and Gold06 datasets is realized in the context of a (w_0,w_1)=(-1,0) model (LCDM cosmology) is less than 6%. The corresponding realization probability in the context of a (w_0,w_1)=(-1.4,2) model is more than 30% for both the Union08 and the Gold06 datasets.
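A minimal sketch of the Binned Normalized Difference statistic as read from the description above (the toy data and names are assumptions): sort the normalized residuals of the distance moduli by redshift, grow the highest-redshift bin towards lower redshifts, and record the bin size N_c at which the binned mean first changes sign.

```python
import numpy as np

def crossing_bin_size(z, mu_obs, mu_model, sigma):
    """Return N_c: the size of the highest-redshift bin at which the mean
    normalized difference (mu_obs - mu_model)/sigma first crosses zero."""
    order = np.argsort(z)
    r = ((mu_obs - mu_model) / sigma)[order]      # normalized differences, sorted by z
    initial_sign = np.sign(r[-1])                 # sign at the highest-redshift point
    for n in range(1, len(r) + 1):
        if np.sign(r[-n:].mean()) != initial_sign:
            return n
    return len(r)                                 # no crossing within the sample

# Toy usage with synthetic residuals that are biased bright at high z
rng = np.random.default_rng(7)
z = np.sort(rng.uniform(0.01, 1.8, 300))
resid = rng.normal(0.0, 1.0, 300) - 0.5 * (z > 1.0)   # high-z brightness bias
print(crossing_bin_size(z, resid, np.zeros(300), np.ones(300)))
```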
New experimental results from photoproduction of hadrons at HERA are reviewed.
We investigate the magnetic properties of archetypal transition-metal oxides MnO, FeO, CoO and NiO under very high pressure by x-ray emission spectroscopy at the K\beta line. We observe a strong modification of the magnetism in the megabar range in all the samples except NiO. The results are analyzed within a multiplet approach including charge-transfer effects. The pressure dependence of the emission line is well accounted for by changes of the ligand field acting on the d electrons and allows us to extract parameters like local d-hybridization strength, O-2p bandwidth and ionic crystal field across the magnetic transition. This approach allows a first-hand insight into the mechanism of the pressure induced spin transition.
We prove various finiteness and representability results for flat cohomology of finite flat abelian group schemes. In particular, we show that if $f:X\rightarrow \mathrm{Spec} (k)$ is a projective scheme over a field $k$ and $G$ is a finite flat abelian group scheme over $X$ then $R^if_*G$ is an algebraic space for all $i$. More generally, we study the derived pushforwards $R^if_*G$ for $f:X\rightarrow S$ a projective morphism and $G$ a finite flat abelian group scheme over $X$. We also develop a theory of compactly supported cohomology for finite flat abelian group schemes, describe cohomology in terms of the cotangent complex for group schemes of height $1$, and relate the Dieudonn\'e modules of the group schemes $R^if_*\mu _p$ to cohomology generalizing work of Illusie. A higher categorical version of our main representability results is also presented.
Over the years, Isogeometric Analysis has shown to be a successful alternative to the Finite Element Method (FEM). However, solving the resulting linear systems of equations efficiently remains a challenging task. In this paper, we consider a p-multigrid method, in which coarsening is applied in the approximation order p instead of the mesh width h. Since the use of classical smoothers (e.g. Gauss-Seidel) results in a p-multigrid method with deteriorating performance for higher values of p, the use of an ILUT smoother is investigated. Numerical results and a spectral analysis indicate that the resulting p-multigrid method exhibits convergence rates independent of h and p. In particular, we compare both coarsening strategies (i.e. coarsening in h or in p), adopting both smoothers, for a variety of two- and three-dimensional benchmarks.
Amplitudes for the reaction $\pi^-p\to \Lambda K^0$ are reconstructed from data on the differential cross section $d\sigma/d\Omega$, the recoil polarization $P$, and the spin rotation parameter $\beta$. At low energies, no data on $\beta$ exist, resulting in ambiguities. An approximation using $S$ and $P$ waves leads only to a fair description of the data on $d\sigma/d\Omega$ and $P$; in this case, there are two sets of amplitudes. Including $D$ waves, the data on $d\sigma/d\Omega$ and $P$ are well reproduced by the fit, but now there are several distinct solutions which describe the data with identical precision. In the range where the spin rotation parameter $\beta$ was measured, a full and unambiguous reconstruction of the partial wave amplitudes is possible. The energy-independent amplitudes are compared to the energy-dependent amplitudes which resulted from a coupled channel fit (BnGa2011-02) to a large data set including both pion- and photo-induced reactions. Significant deviations are observed. Consistency between energy-dependent and energy-independent solutions is achieved by choosing the energy-independent solution which is closest to the energy-dependent solution. In a second step, the {\it known} energy-dependent solution for low (or high) partial waves is imposed and only the high (or low) partial waves are fitted, leading to smaller uncertainties.
Among the thriving ecosystem of cloud computing and the proliferation of Large Language Model (LLM)-based code generation tools, there is a lack of benchmarking for code generation in cloud-native applications. In response to this need, we present CloudEval-YAML, a practical benchmark for cloud configuration generation. CloudEval-YAML tackles the diversity challenge by focusing on YAML, the de facto standard of numerous cloud-native tools. We develop the CloudEval-YAML benchmark with practicality in mind: the dataset consists of hand-written problems with unit tests targeting practical scenarios. We further enhanced the dataset to meet practical needs by rephrasing questions in a concise, abbreviated, and bilingual manner. The dataset consists of 1011 problems that take more than 1200 human hours to complete. To improve practicality during evaluation, we build a scalable evaluation platform for CloudEval-YAML that achieves a 20 times speedup over a single machine. To the best of our knowledge, the CloudEval-YAML dataset is the first hand-written dataset targeting cloud-native applications. We present an in-depth evaluation of 12 LLMs, leading to a deeper understanding of the problems and LLMs, as well as effective methods to improve task performance and reduce cost.
We study the diffusion and submonolayer spreading of chainlike molecules on surfaces. Using the fluctuating bond model we extract the collective and tracer diffusion coefficients D_c and D_t with a variety of methods. We show that D_c(theta) has unusual behavior as a function of the coverage theta. It first increases but, after a maximum, goes to zero as theta goes to one. We show that the increase is due to entropic repulsion that leads to the steep density profiles of spreading droplets seen in experiments. We also develop an analytic model for D_c(theta) which agrees well with the simulations.
We investigate the metastable repulsive branch of a mobile impurity coupled to a degenerate Fermi gas via short-range interactions. We show that the quasiparticle lifetime of this repulsive Fermi polaron can be experimentally probed by driving Rabi oscillations between weakly and strongly interacting impurity states. Using a time-dependent variational approach, we find that we can accurately model the impurity Rabi oscillations that were recently measured for repulsive Fermi polarons in both two and three dimensions. Crucially, our theoretical description does not include relaxation processes to the lower-lying attractive branch. Thus, the theory-experiment agreement demonstrates that the quasiparticle lifetime is determined by many-body dephasing within the upper repulsive branch rather than by the metastability of the upper branch itself. Our findings shed light on recent experimental observations of persistent repulsive correlations, and have important consequences for the nature and stability of the strongly repulsive Fermi gas.
[Abridged] Star and planet formation are the complex outcomes of gravitational collapse and angular momentum transport mediated by protostellar and protoplanetary disks. In this review we focus on the role of gravitational instability in this process. We begin with a brief overview of the observational evidence for massive disks that might be subject to gravitational instability, and then highlight the diverse ways in which the instability manifests itself in protostellar and protoplanetary disks: the generation of spiral arms, small scale turbulence-like density fluctuations, and fragmentation of the disk itself. We present the analytic theory that describes the linear growth phase of the instability, supplemented with a survey of numerical simulations that aim to capture the non-linear evolution. We emphasize the role of thermodynamics and large scale infall in controlling the outcome of the instability. Despite apparent controversies in the literature, we show a remarkable level of agreement between analytic predictions and numerical results. We highlight open questions related to (1) the development of a turbulent cascade in thin disks, and (2) the role of mode-mode coupling in setting the maximum angular momentum transport rate in thick disks.
Community structure analysis is a powerful tool for social networks, which can simplify their topological and functional analysis considerably. However, since community detection methods involve random factors and real social networks obtained from complex systems always contain spurious edges, evaluating the significance of a detected community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a novel framework for analyzing the significance of social communities. The dynamics of social interactions are modeled by identifying social leaders and the corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node to a leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the positions of nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of a community can be derived. Based on the distribution of community tightness, we establish a new connection between $p$-value theory and network analysis and thus obtain a novel statistical significance measure. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, and comparing the performance of various algorithms.
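A minimal sketch of the leader-based assignment step described above (a simplification with assumed names, not the authors' algorithm): leaders are taken to be nodes whose degree is not exceeded by any neighbor, and every other node joins the community of the leader with which it shares the most common neighbors.

```python
from collections import defaultdict

def detect_communities(adj):
    """adj: dict node -> set of neighbours.
    Leaders: nodes whose degree is not exceeded by any neighbour.
    Other nodes join the leader sharing the most common neighbours
    (ties broken by leader degree)."""
    deg = {u: len(nbrs) for u, nbrs in adj.items()}
    leaders = [u for u, nbrs in adj.items() if all(deg[u] >= deg[v] for v in nbrs)]
    communities = defaultdict(set)
    for u in adj:
        if u in leaders:
            communities[u].add(u)
            continue
        best = max(leaders, key=lambda l: (len(adj[u] & adj[l]), deg[l]))
        communities[best].add(u)
    return dict(communities)

# Two 4-cliques joined by a single bridge edge (3-4)
edges = [(0,1),(0,2),(1,2),(0,3),(1,3),(2,3),(4,5),(4,6),(5,6),(4,7),(5,7),(6,7),(3,4)]
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b); adj[b].add(a)
print(detect_communities(adj))   # expected: one community around 3, one around 4
```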
A novel hybrid-design electronic voting system is proposed, implemented and analyzed. The proposed system uses two voter verification techniques to give better results in comparison to single-identification-based systems. Fingerprint and facial recognition based methods are used for voter identification. Cross verification of a voter during an election process provides better accuracy than a single-parameter identification method. The facial recognition system uses the Viola-Jones algorithm along with rectangular Haar feature selection for detection and extraction of features, both to develop a biometric template and for feature extraction during the voting process. Cascaded machine-learning-based classifiers are used for comparing the features for identity verification, using GPCA (Generalized Principal Component Analysis) and K-NN (K-Nearest Neighbor). This is accomplished by comparing the eigenvectors of the extracted features with the biometric template pre-stored in the election regulatory body's database. The results show that the proposed cascaded design performs better than systems using other classifiers or separate schemes, i.e. facial or fingerprint based schemes alone. The proposed system will be highly useful for real-time applications, as it achieves 91% facial recognition accuracy under nominal lighting.
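A minimal sketch of the eigenfeature-plus-nearest-neighbor verification stage (synthetic vectors stand in for the Haar features returned by the Viola-Jones detector, and ordinary PCA stands in for the GPCA used in the paper; all names are assumptions): enrolled templates are projected onto principal components and a claim is accepted if the majority of the k nearest templates carry the claimed identity.

```python
import numpy as np

def fit_pca(X, n_components):
    """Return (mean, components) of a PCA fit on the rows of X."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def knn_verify(templates, labels, query, claimed_id, mean, comps, k=3):
    """Project into the PCA subspace and accept the claim if the majority of
    the k nearest enrolled templates carry the claimed identity."""
    P = (templates - mean) @ comps.T
    q = (query - mean) @ comps.T
    nearest = np.argsort(np.linalg.norm(P - q, axis=1))[:k]
    votes = (labels[nearest] == claimed_id).sum()
    return votes > k // 2

rng = np.random.default_rng(3)
# 5 enrolled voters, 4 noisy feature vectors each (synthetic stand-ins)
centers = rng.standard_normal((5, 64)) * 3
templates = np.vstack([c + 0.2 * rng.standard_normal((4, 64)) for c in centers])
labels = np.repeat(np.arange(5), 4)
mean, comps = fit_pca(templates, n_components=10)
query = centers[2] + 0.2 * rng.standard_normal(64)
print(knn_verify(templates, labels, query, claimed_id=2, mean=mean, comps=comps))  # True
```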
We develop an effective extended Hubbard model to describe the low-energy electronic properties of the twisted bilayer graphene. By using the Bloch states in the effective continuum model and with the aid of the maximally localized algorithm, we construct the Wannier orbitals and obtain an effective tight-binding model on the emergent honeycomb lattice. We found the Wannier state takes a peculiar three-peak form in which the amplitude maxima are located at the triangle corners surrounding the center. We estimate the direct Coulomb interaction and the exchange interaction between the Wannier states. At the filling of two electrons per super cell, in particular, we find an unexpected coincidence in the direct Coulomb energy between a charge-ordered state and a homogeneous state, which would possibly lead to an unconventional many-body state.
The rapidly developing deep learning (DL) techniques have been applied in software systems with various application scenarios. However, they could also pose new safety threats with potentially serious consequences, especially in safety-critical domains. DL libraries serve as the underlying foundation for DL systems, and bugs in them can have unpredictable impacts that directly affect the behaviors of DL systems. Previous research on fuzzing DL libraries still has limitations in the diversity of test inputs, the construction of test oracles, and the precision of detection. In this paper, we propose MoCo, a novel fuzz testing method for DL libraries via code assembly. MoCo first disassembles the seed code file to obtain a template and code blocks, and then employs code block mutation operators (e.g., API replacement, random generation, and boundary checking) to generate new code blocks adapted to the template. By inserting context-appropriate code blocks into the template step by step, MoCo can generate a tree of code files with intergenerational relations. According to the derivation relations in this tree and the applied mutation operators, we construct the test oracle based on execution state consistency. Since the granularity of code assembly and mutation is controlled rather than randomly divergent, we can quickly pinpoint the lines of code where the bugs are located and the corresponding triggering conditions. We conduct a comprehensive experiment to evaluate the efficiency and effectiveness of MoCo using three widely-used DL libraries (i.e., TensorFlow, PyTorch, and Jittor). During the experiment, MoCo detects 64 new bugs of four types in three DL libraries, where 51 bugs have been confirmed, and 13 bugs have been fixed by developers.
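A minimal illustrative sketch of the template-plus-code-block assembly idea (assumed names and a toy template, not the MoCo implementation): a seed file is split into a template and a code block, an 'API replacement' mutation rewrites the block, and a proxy for the execution-state-consistency oracle compares the captured outputs of parent and child files.

```python
import io
import contextlib

TEMPLATE = ("import numpy as np\n"
            "x = np.arange(6, dtype=np.float64)\n"
            "{BLOCK}\n"
            "print(float(np.asarray(result).sum()))\n")

SEED_BLOCK = "result = np.reshape(x, (2, 3))"

def mutate_api_replacement(block):
    """Toy 'API replacement' operator: swap one call for another that is
    expected to preserve the execution state on this input."""
    return block.replace("np.reshape(x, (2, 3))", "np.resize(x, (2, 3))")

def run(code):
    """Execute a generated file and capture its printed execution state."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(compile(code, "<generated>", "exec"), {})
        return ("ok", buf.getvalue())
    except Exception as e:                     # crashes are recorded, not raised
        return ("crash", type(e).__name__)

parent = TEMPLATE.format(BLOCK=SEED_BLOCK)
child = TEMPLATE.format(BLOCK=mutate_api_replacement(SEED_BLOCK))
state_p, state_c = run(parent), run(child)
# Oracle proxy: parent and child should agree; a mismatch flags a candidate bug
print("consistent" if state_p == state_c else f"inconsistent: {state_p} vs {state_c}")
```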
In an intelligent transportation system, the effects and relations of traffic flow at different points in a network are valuable features which can be exploited for control system design and traffic forecasting. In this paper, we define the notion of causality based on the directed information, a well-established data-driven measure, to represent the effective connectivity among nodes of a vehicular traffic network. This notion indicates whether the traffic flow at any given point affects another point's flow in the future and, more importantly, reveals the extent of this effect. In contrast with conventional methods to express connections in a network, it is not limited to linear models and normality conditions. In this work, directed information is used to determine the underlying graph structure of a network, denoted directed information graph, which expresses the causal relations among nodes in the network. We devise an algorithm to estimate the extent of the effects in each link and build the graph. The performance of the algorithm is then analyzed with synthetic data and real aggregated data of vehicular traffic.
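A minimal sketch of a plug-in, first-order Markov approximation of the directed information rate between two discretized flow series (this simplification and all names are my assumptions, not the paper's estimator): I(X_{t-1}; Y_t | Y_{t-1}) is estimated from empirical joint counts.

```python
import numpy as np
from collections import Counter

def conditional_mi(x_prev, y_curr, y_prev):
    """Plug-in estimate of I(X_{t-1}; Y_t | Y_{t-1}) from discrete samples."""
    n = len(y_curr)
    cxyz = Counter(zip(x_prev, y_curr, y_prev))
    cxz = Counter(zip(x_prev, y_prev))
    cyz = Counter(zip(y_curr, y_prev))
    cz = Counter(y_prev)
    mi = 0.0
    for (a, b, c), n_abc in cxyz.items():
        p_abc = n_abc / n
        mi += p_abc * np.log2(p_abc * (cz[c] / n)
                              / ((cxz[(a, c)] / n) * (cyz[(b, c)] / n)))
    return mi

def directed_information_rate(x, y):
    """First-order Markov approximation of the directed information rate
    from flow series x to flow series y (both discretized to small alphabets)."""
    return conditional_mi(x[:-1], y[1:], y[:-1])

rng = np.random.default_rng(5)
x = rng.integers(0, 3, 5000)
y = np.roll(x, 1)        # y is x delayed by one step: strong causal link x -> y
y[0] = 0
# expected: roughly log2(3) ~ 1.58 for x -> y, near 0 for y -> x
print(directed_information_rate(x, y), directed_information_rate(y, x))
```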
This paper introduces KnowHalu, a novel approach for detecting hallucinations in text generated by large language models (LLMs), utilizing step-wise reasoning, multi-formulation query, multi-form knowledge for factual checking, and fusion-based detection mechanism. As LLMs are increasingly applied across various domains, ensuring that their outputs are not hallucinated is critical. Recognizing the limitations of existing approaches that either rely on the self-consistency check of LLMs or perform post-hoc fact-checking without considering the complexity of queries or the form of knowledge, KnowHalu proposes a two-phase process for hallucination detection. In the first phase, it identifies non-fabrication hallucinations--responses that, while factually correct, are irrelevant or non-specific to the query. The second phase, multi-form based factual checking, contains five key steps: reasoning and query decomposition, knowledge retrieval, knowledge optimization, judgment generation, and judgment aggregation. Our extensive evaluations demonstrate that KnowHalu significantly outperforms SOTA baselines in detecting hallucinations across diverse tasks, e.g., improving by 15.65% in QA tasks and 5.50% in summarization tasks, highlighting its effectiveness and versatility in detecting hallucinations in LLM-generated content.
Astrophysical fluids under the influence of magnetic fields are often subjected to single-fluid or two-fluid approximations. In the case of weakly ionized plasmas however, this can be inappropriate due to distinct responses from the multiple constituent species to both collisional and non-collisional forces. As a result, in dense molecular clouds and proto-stellar accretion discs for instance, the conductivity of the plasma may be highly anisotropic leading to phenomena such as Hall and ambipolar diffusion strongly influencing the dynamics. Diffusive processes are known to restrict the stability of conventional numerical schemes which are not implicit in nature. Furthermore, recent work establishes that a large Hall term can impose an additional severe stability limit on standard explicit schemes. Following a previous paper which presented the one-dimensional case, we describe a fully three-dimensional method which relaxes the normal restrictions on explicit schemes for multifluid processes. This is achieved by applying the little known Super TimeStepping technique to the symmetric (ambipolar) component of the evolution operator for the magnetic field in the local plasma rest-frame, and the new Hall Diffusion Scheme to the skew-symmetric (Hall) component.
If dark energy --- which drives the accelerated expansion of the universe --- consists of a light scalar field, it might be detectable as a "fifth force" between normal-matter objects, in potential conflict with precision tests of gravity. Chameleon fields and other theories with screening mechanisms, however, can evade these tests by suppressing the forces in regions of high density, such as the laboratory. Using a cesium matter-wave interferometer near a spherical mass in an ultra-high vacuum chamber, we reduce the screening mechanism by probing the field with individual atoms rather than bulk matter. Thus, we constrain a wide class of dark energy theories, including a range of chameleon and other theories that reproduce the observed cosmic acceleration.
In this paper, we provide a theoretical description of, and calculate, the nonlinear frequency shift, group velocity and collisionless damping rate, $\nu$, of a driven electron plasma wave (EPW). All these quantities, whose physical content will be discussed, are identified as terms of an envelope equation allowing one to predict how efficiently an EPW may be externally driven. This envelope equation is derived directly from Gauss law and from the investigation of the nonlinear electron motion, provided that the time and space rates of variation of the EPW amplitude, $E_p$, are small compared to the plasma frequency or the inverse of the Debye length. $\nu$ arises within the EPW envelope equation as an operator more complicated than a plain damping rate, and may only be viewed as such because $(\nu E_p)/E_p$ remains nearly constant before abruptly dropping to zero. We provide a practical analytic formula for $\nu$ and show, without resorting to complex contour deformation, that in the limit $E_p \to 0$, $\nu$ is nothing but the Landau damping rate. We then term $\nu$ the "nonlinear Landau damping rate" of the driven plasma wave. As for the nonlinear frequency shift of the EPW, it is also derived theoretically and found to assume values significantly different from previously published ones, which assumed that the wave is freely propagating. Moreover, we find no limitation in $k \lambda_D$, $k$ being the plasma wavenumber and $\lambda_D$ the Debye length, for a solution to the dispersion relation to exist, and we stress here the importance of specifying how an EPW is generated in order to discuss its properties. Our theoretical predictions are in excellent agreement with results inferred from Vlasov simulations of stimulated Raman scattering (SRS), and an application of our theory to the study of SRS is presented.
A conditionally exactly solvable potential, the supersymmetric partner of the harmonic oscillator is investigated in the PT-symmetric setting. It is shown that a number of properties characterizing shape-invariant and Natanzon-class potentials generated by an imaginary coordinate shift $x-{\rm i}\epsilon$ also hold for this potential outside the Natanzon class.
Over the last decade, most of the increase in computing power has been gained by advances in accelerated many-core architectures, mainly in the form of GPGPUs. While accelerators achieve phenomenal performances in various computing tasks, their utilization requires code adaptations and transformations. Thus, OpenMP, the most common standard for multi-threading in scientific computing applications, introduced offloading capabilities between hosts (CPUs) and accelerators in v4.0, with increasing support in the successive v4.5, v5.0, v5.1, and the latest v5.2 versions. Recently, two state-of-the-art GPUs -- the Intel Ponte Vecchio Max 1100 and the NVIDIA A100 GPUs -- were released to the market, with the oneAPI and NVHPC compilers for offloading, respectively. In this work, we present early performance results of OpenMP offloading capabilities to these devices while specifically analyzing the portability of advanced directives (using SOLLVE's OMPVV test suite) and the scalability of the hardware in a representative scientific mini-app (the LULESH benchmark). Our results show that the coverage for version 4.5 is nearly complete in both the latest NVHPC and oneAPI tools. However, we observed a lack of support in versions 5.0, 5.1, and 5.2, which is particularly noticeable when using NVHPC. From the performance perspective, we found that the PVC1100 and A100 are relatively comparable on the LULESH benchmark. While the A100 is slightly better due to faster memory bandwidth, the PVC1100 reaches the next problem size (400^3) scalably due to its larger memory size.
We propose and analyze surface-plasmon-driven electron spin currents in a thin metallic film. The electron gas in the metal follows the transversally rotating electric fields of the surface plasmons (SPs), which leads to a static magnetization gradient. We consider herein SPs in a thin-film insulator-metal-insulator structure and solve the spin diffusion equation in the presence of a magnetization gradient. The results reveal that the SPs at the metal interfaces generate spin currents in the metallic film. For thinner film, the SPs become strongly hybridized, which increases the magnetization gradient and enhances the spin current. We also discuss how the spin current depends on SP wavelength and the spin-diffusion length of the metal. The polarization of the spin current can be controlled by tuning the wavelength of the SPs and/or the spin diffusion length.
Information scrambling refers to the unitary dynamics that quickly spreads and encodes localized quantum information over an entire many-body system and makes the information accessible from any small subsystem. While information scrambling is the key to understanding complex quantum many-body dynamics and is well understood in random unitary models, it has hardly been explored in Hamiltonian systems. In this Letter, we investigate the information recovery in various time-independent Hamiltonian systems, including chaotic spin chains and Sachdev-Ye-Kitaev (SYK) models. We show that information recovery is possible in certain, but not all, chaotic models, which highlights the difference between information recovery and quantum chaos based on the energy spectrum or the out-of-time-ordered correlators. We also show that information recovery probes transitions caused by the change of information-theoretic features of the dynamics.
To advance the neural decoding of Portuguese, in this paper we present a fully open Transformer-based, instruction-tuned decoder model that sets a new state of the art in this respect. To develop this decoder, which we named Gerv\'asio PT*, a strong LLaMA~2 7B model was used as a starting point, and its further improvement through additional training was done over language resources that include new instruction data sets of Portuguese prepared for this purpose, which are also contributed in this paper. All versions of Gerv\'asio are open source and distributed for free under an open license, including for either research or commercial usage, and can be run on consumer-grade hardware, thus seeking to contribute to the advancement of research and innovation in language technology for Portuguese.
Radiative transfer models were developed to understand the optical polarizations in edge-on galaxies, which are observed to occur even outside the geometrically thin dust disk, with a scale height of ~ 0.2 kpc. In order to reproduce the vertically extended polarization structure, we find it is essential to include a geometrically thick dust layer in the radiative transfer model, in addition to the commonly-known thin dust layer. The models include polarizations due to both dust scattering and dichroic extinction which is responsible for the observed interstellar polarization in the Milky Way. We also find that the polarization level is enhanced if the clumpiness of the interstellar medium, and the dichroic extinction by vertical magnetic fields in the outer regions of the dust lane are included in the radiative transfer model. The predicted degree of polarization outside the dust lane was found to be consistent with that (ranging from 1% to 4%) observed in NGC 891.
The trade-off between optimality and complexity has been one of the most important challenges in the field of robust Model Predictive Control (MPC). To address the challenge, we propose a flexible robust MPC scheme by synergizing the multi-stage and tube-based MPC approaches. The key idea is to exploit the non-conservatism of the multi-stage MPC and the simplicity of the tube-based MPC. The proposed scheme provides two options for the user to determine the trade-off depending on the application: the choice of the robust horizon and the classification of the uncertainties. Beyond the robust horizon, the branching of the scenario-tree employed in multi-stage MPC is avoided with the help of tubes. The growth of the problem size with respect to the number of uncertainties is reduced by handling \emph{small} uncertainties via an invariant tube that can be computed offline. This results in linear growth of the problem size beyond the robust horizon and no growth of the problem size with respect to small-magnitude uncertainties. The proposed approach helps to achieve a desired trade-off between optimality and complexity compared to existing robust MPC approaches. We show that the proposed approach is robustly asymptotically stable. Its advantages are demonstrated for a CSTR example.
Large N gauge theories with adjoint matter can be numerically studied using lattice techniques. Eguchi-Kawai reduction holds for this theory and one can reduce the lattice model to a single site. The Hybrid Monte Carlo (HMC) algorithm can be used to simulate this model. One can either perform an exact computation of the "fermionic force" or use pseudofermions as part of the HMC algorithm. The former algorithm is slower than the latter but has the advantage that one can work with any real value of the number of fermion flavors. Some results using both algorithms will be presented.
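For readers unfamiliar with the structure of the algorithm, here is a minimal HMC sketch for a toy single-site scalar action; the action, step size, and trajectory length are illustrative stand-ins, and the `force` routine marks where an exact or pseudofermion "fermionic force" would enter in the reduced model.

```python
import numpy as np

rng = np.random.default_rng(1)

def action(phi, m2=1.0, lam=0.1):
    """Toy single-site action S(phi), standing in for the reduced lattice action."""
    return 0.5 * m2 * phi**2 + lam * phi**4

def force(phi, m2=1.0, lam=0.1):
    """-dS/dphi; in the full model this is where the fermionic force would enter."""
    return -(m2 * phi + 4.0 * lam * phi**3)

def hmc_step(phi, n_md=20, dt=0.05):
    """One HMC trajectory: leapfrog molecular dynamics plus Metropolis accept/reject."""
    p = rng.normal()
    h_old = 0.5 * p**2 + action(phi)
    phi_new, p_new = phi, p
    p_new += 0.5 * dt * force(phi_new)          # initial half kick
    for _ in range(n_md - 1):
        phi_new += dt * p_new                   # drift
        p_new += dt * force(phi_new)            # kick
    phi_new += dt * p_new
    p_new += 0.5 * dt * force(phi_new)          # final half kick
    h_new = 0.5 * p_new**2 + action(phi_new)
    if rng.random() < np.exp(h_old - h_new):    # accept/reject on the energy violation
        return phi_new, True
    return phi, False

phi, acc, samples = 0.0, 0, []
for _ in range(5000):
    phi, accepted = hmc_step(phi)
    acc += accepted
    samples.append(phi)
print("acceptance:", acc / 5000, " <phi^2>:", float(np.mean(np.array(samples)**2)))
```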
In this article, we propose a generalized model for nonequilibrium vibrational energy distribution functions. The model can be used, in place of equilibrium (Boltzmann) distribution functions, when deriving reaction rate constants for high-temperature nonequilibrium flows. The distribution model is derived from recent \textit{ab initio} calculations, carried out on potential energy surfaces developed with accurate computational quantum chemistry techniques for the purpose of studying air chemistry at high temperatures. Immediately behind a strong shock wave, the vibrational energy distribution is non-Boltzmann. Specifically, as the gas internal energy rapidly excites to a high temperature, overpopulation of the high-energy tail (relative to a corresponding Boltzmann distribution) is observed in \textit{ab initio} simulations. As the gas excites further and begins to dissociate, a depletion of the high-energy tail is observed, during a time-invariant quasi-steady state (QSS). Since the probability of dissociation is exponentially related to the vibrational energy of the dissociating molecule, the overall dissociation rate is sensitive to the populations of these high vibrational energy states. The non-Boltzmann effects captured by the new model either enhance or reduce the dissociation rate relative to that obtained assuming a Boltzmann distribution. This article proposes a simple model that is demonstrated to reproduce these non-Boltzmann effects quantitatively when compared to \textit{ab initio} simulations.
The purpose of these lectures is threefold: We first give a short survey of the Hida white noise calculus, and in this context we introduce the Hida-Malliavin derivative as a stochastic gradient with values in the Hida stochastic distribution space $(\mathcal{S})^*$. We show that this Hida-Malliavin derivative defined on $L^2(\mathcal{F}_T,P)$ is a natural extension of the classical Malliavin derivative defined on the subspace $\mathbb{D}_{1,2}$ of $L^2(P)$. The Hida-Malliavin calculus allows us to prove new results under weaker assumptions than could be obtained by the classical theory. In particular, we prove the following: (i) A general integration by parts formula and duality theorem for Skorohod integrals, (ii) a generalised fundamental theorem of stochastic calculus, and (iii) a general Clark-Ocone theorem, valid for all $F \in L^2(\mathcal{F}_T,P)$. As applications of the above theory we prove the following: A general representation theorem for backward stochastic differential equations with jumps, in terms of Hida-Malliavin derivatives; a general stochastic maximum principle for optimal control; backward stochastic Volterra integral equations; optimal control of stochastic Volterra integral equations and other stochastic systems.
We present a probabilistic generative model and efficient algorithm to model reciprocity in directed networks. Unlike other methods that address this problem such as exponential random graphs, it assigns latent variables as community memberships to nodes and a reciprocity parameter to the whole network rather than fitting order statistics. It formalizes the assumption that a directed interaction is more likely to occur if an individual has already observed an interaction towards her. It provides a natural framework for relaxing the common assumption in network generative models of conditional independence between edges, and it can be used to perform inference tasks such as predicting the existence of an edge given the observation of an edge in the reverse direction. Inference is performed using an efficient expectation-maximization algorithm that exploits the sparsity of the network, leading to an efficient and scalable implementation. We illustrate these findings by analyzing synthetic and real data, including social networks, academic citations and the Erasmus student exchange program. Our method outperforms others in both predicting edges and generating networks that reflect the reciprocity values observed in real data, while at the same time inferring an underlying community structure. We provide an open-source implementation of the code online.
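As a simple empirical counterpart to the modeled reciprocity parameter, the following sketch computes the observed reciprocity of a directed network from its adjacency matrix, i.e. the fraction of directed edges whose reverse edge also exists; this is a baseline diagnostic, not the paper's generative model or its expectation-maximization inference.

```python
import numpy as np

def reciprocity(adj):
    """Fraction of directed edges i->j whose reverse edge j->i is also present."""
    a = (np.asarray(adj) > 0).astype(int)
    np.fill_diagonal(a, 0)                       # ignore self-loops
    n_edges = a.sum()
    return float((a * a.T).sum()) / n_edges if n_edges else 0.0

# toy usage: a sparse random graph to which many reciprocal edges are added
rng = np.random.default_rng(5)
base = (rng.random((100, 100)) < 0.05).astype(int)
adj = np.maximum(base, (rng.random((100, 100)) < 0.7) * base.T)
print("observed reciprocity:", round(reciprocity(adj), 3))
```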
In this article, we investigate four-dimensional gradient shrinking Ricci solitons close to a K\"ahler model. The first theorem could be considered as a rigidity result for the K\"ahler-Ricci soliton structure on $\mathbb{S}^2\times \mathbb{R}^2$ (in the sense of Remark 1). Moreover, we show that if the quotient of norm of the self-dual Weyl tensor and scalar curvature is close to that on a K\"ahler metric in a specific sense, then the gradient Ricci soliton must be either half-conformally flat or locally K\"ahler.
Terahertz (THz)-band communications are celebrated as a key enabling technology for next-generation wireless systems that promises to integrate a wide range of data-demanding and delay-sensitive applications. Following recent advancements in optical, electronic, and plasmonic transceiver design, integrated, adaptive, and efficient THz systems are no longer far-fetched. In this paper, we present a progressive vision of how the traditional "THz gap" will transform into a "THz rush" over the next few years. We posit that the breakthrough that the THz band will introduce will not be solely driven by achievable high data rates, but more profoundly by the interaction between THz sensing, imaging, and localization applications. We first detail the peculiarities of each of these applications at the THz band. Then, we illustrate how their coalescence results in enhanced environment-aware system performance in beyond-5G use cases. We further discuss the implementation aspects of this merging of applications in the context of shared and dedicated resource allocation, highlighting the role of machine learning.
Query workloads and database schemas in OLAP applications are becoming increasingly complex. Moreover, the queries and the schemas have to continually \textit{evolve} to address business requirements. During such repetitive transitions, the \textit{order} of index deployment has to be considered while designing the physical schemas such as indexes and MVs. An effective index deployment ordering can produce (1) a prompt query runtime improvement and (2) a reduced total deployment time. Both of these are essential qualities of design tools for quickly evolving databases, but optimizing the problem is challenging because of complex index interactions and a factorial number of possible solutions. We formulate the problem in a mathematical model and study several techniques for solving the index ordering problem. We demonstrate that Constraint Programming (CP) is a more flexible and efficient platform for solving the problem than other methods such as mixed integer programming and A* search. In addition to exact search techniques, we also study local search algorithms to find near-optimal solutions very quickly. Our empirical analysis on the TPC-H dataset shows that our pruning techniques can reduce the size of the search space by tens of orders of magnitude. Using the TPC-DS dataset, we verify that our local search algorithm is a highly scalable and stable method for quickly finding a near-optimal solution.
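To illustrate why the ordering matters even under a toy cost model, the sketch below compares exhaustive search over deployment orders with a greedy ratio heuristic, assuming each index has a fixed deployment time and a per-unit-time runtime saving that becomes available only once it is deployed; this simplified model ignores index interactions and is not the paper's CP formulation.

```python
from itertools import permutations

# illustrative indexes: (name, deployment_time, runtime_saving_per_unit_time)
indexes = [("idx_a", 5.0, 3.0), ("idx_b", 2.0, 2.0), ("idx_c", 8.0, 7.0), ("idx_d", 1.0, 0.5)]

def lost_benefit(order):
    """Benefit foregone while deploying in the given order: each index's saving
    is unavailable until its deployment completes (interactions ignored)."""
    t, total = 0.0, 0.0
    for _name, duration, saving in order:
        t += duration
        total += saving * t
    return total

best = min(permutations(indexes), key=lost_benefit)                    # exhaustive search
greedy = sorted(indexes, key=lambda ix: ix[2] / ix[1], reverse=True)   # ratio (Smith's rule) heuristic
print("exhaustive:", [n for n, _, _ in best], round(lost_benefit(best), 1))
print("greedy    :", [n for n, _, _ in greedy], round(lost_benefit(greedy), 1))
```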
Izawa's gauge-fixing procedure based on BRS symmetry is applied twice to the massive tensor field theory of Fierz-Pauli type. It is shown that the second application can remove massless singularities which remain after the first application. The massless limit of the theory is discussed.
For humanoid robots to live up to their potential utility, they must be able to robustly recover from instabilities. In this work, we propose a number of balance enhancements to enable the robot to both achieve specific, desired footholds in the world and adjust the step positions and times as necessary while leveraging ankle and hip strategies. This includes improving the calculation of capture regions for bipedal locomotion to better consider how step constraints affect the ability to recover. We then explore a new strategy for performing cross-over steps to maintain stability, which greatly enhances the range of tracking errors from which the robot may recover. Our last contribution is a strategy for time adaptation during the transfer phase for recovery. We then present these results on our humanoid robot, Nadia, in both simulation and hardware, showing the robot walking over rough terrain, recovering from external disturbances, and taking cross-over steps to maintain balance.
The problem of existence of arbitrage-free and monotone CDO term structure models is studied. Conditions for positivity and monotonicity of the corresponding Heath-Jarrow-Morton-Musiela equation for the $x$-forward rates are formulated with the use of a Milian-type result. Two state spaces are considered: the space of square-integrable functions and a Sobolev space. For the first, regularity results concerning pointwise monotonicity are proven. Arbitrage-free and monotone models are characterized in terms of the volatility of the model and the characteristics of the driving L\'evy process.
In this paper, the reinforcement learning (RL)-based optimal control problem is studied for multiplicative-noise systems, where input delay is involved and part of the system dynamics is unknown. To solve a variant of the Riccati-ZXL equations, which is a counterpart of the standard Riccati equation and determines the optimal controller, we first develop a necessary and sufficient stabilizing condition in the form of several Lyapunov-type equations, paralleling the classical Lyapunov theory. Based on this condition, we provide an offline, convergent algorithm for the variant of the Riccati-ZXL equations. Building on the convergent algorithm, we propose an RL-based optimal control design approach for solving the linear quadratic regulation problem with partially unknown system dynamics. Finally, a numerical example is used to evaluate the proposed algorithm.
For a space with involutive action, there is a variant of K-theory. Motivated by T-duality in type II orbifold string theory, we establish that a twisted version of the variant enjoys a topological T-duality for Real circle bundles, i.e. circle bundles with real structure.
The positive-parity doublet bands based on the $\pi h_{11/2}\otimes\nu h_{11/2}$ configuration in $^{126}$Cs have been investigated within a model of two quasiparticles coupled to a triaxial rotor. The energy spectra $E(I)$, energy staggering parameter $S(I)=[E(I)-E(I-1)]/2I$, $B(M1)$ and $B(E2)$ values, intraband $B(M1)/B(E2)$ ratios, $B(M1)_{\textrm{in}}/B(M1)_{\textrm{out}}$ ratios, and the orientation of the angular momentum for the rotor as well as the valence proton and neutron are calculated. After including the pairing correlation, good agreement has been obtained between the calculated results and the available data, which supports the interpretation of these positive-parity doublet bands as chiral bands.
Polar molecules are an emerging platform for quantum technologies based on their long-range electric dipole-dipole interactions, which open new possibilities for quantum information processing and the quantum simulation of strongly correlated systems. Here, we use magnetic and microwave fields to design a fast entangling gate with $>0.999$ fidelity that is robust with respect to fluctuations in the trapping and control fields and to small thermal excitations. These results establish the feasibility of building a scalable quantum processor with a broad range of molecular species in optical-lattice and optical-tweezers setups.
New methods for $D$-decomposition analysis are presented. They are based on the topology of real algebraic varieties and computational real algebraic geometry. An estimate of the number of root-invariant regions for polynomial parametric families of polynomials and matrices is given. For the case of a two-parameter family, a sharper estimate is proven. The theoretical results are supported by various numerical simulations that show the higher precision of the presented methods with respect to traditional ones. The presented methods are inherently global and can be applied to study the $D$-decomposition for the space of parameters as a whole instead of some prescribed regions. For symbolic computations, the Maple v.14 software and its RegularChains package are used.
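For contrast, the traditional numerical approach that such methods are compared against can be sketched as a brute-force scan of the parameter plane, counting Hurwitz-stable roots of a parametric polynomial on a grid; the polynomial family, grid extent, and resolution below are purely illustrative.

```python
import numpy as np

def stable_root_count(coeffs):
    """Number of roots with negative real part (Hurwitz-stable roots)."""
    return int(np.sum(np.roots(coeffs).real < 0))

# hypothetical two-parameter family: p(s; k1, k2) = s^3 + k1 s^2 + k2 s + 1
k1_vals = np.linspace(-3.0, 3.0, 121)
k2_vals = np.linspace(-3.0, 3.0, 121)
regions = np.zeros((k1_vals.size, k2_vals.size), dtype=int)
for i, k1 in enumerate(k1_vals):
    for j, k2 in enumerate(k2_vals):
        regions[i, j] = stable_root_count([1.0, k1, k2, 1.0])

# the distinct root counts seen on the grid hint at the root-invariant regions
print("root counts present on the grid:", sorted(set(regions.ravel())))
```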
Magnetic Resonance Fingerprinting (MRF) is an efficient quantitative MRI technique that can extract important tissue and system parameters such as T1, T2, B0, and B1 from a single scan. This property also makes it attractive for retrospectively synthesizing contrast-weighted images. In general, contrast-weighted images like T1-weighted, T2-weighted, etc., can be synthesized directly from parameter maps through spin-dynamics simulation (i.e., Bloch or Extended Phase Graph models). However, these approaches often exhibit artifacts due to imperfections in the mapping, the sequence modeling, and the data acquisition. Here we propose a supervised learning-based method that directly synthesizes contrast-weighted images from the MRF data without going through the quantitative mapping and spin-dynamics simulation. To implement our direct contrast synthesis (DCS) method, we deploy a conditional Generative Adversarial Network (GAN) framework and propose a multi-branch U-Net as the generator. The input MRF data are used to directly synthesize T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) images through supervised training on paired MRF and target spin echo-based contrast-weighted scans. In-vivo experiments demonstrate excellent image quality compared to simulation-based contrast synthesis and previous DCS methods, both visually as well as by quantitative metrics. We also demonstrate cases where our trained model is able to mitigate in-flow and spiral off-resonance artifacts that are typically seen in MRF reconstructions and thus more faithfully represent conventional spin echo-based contrast-weighted images.
It has recently been shown that the thermodynamics of a FRW universe can be fully derived using the generalized uncertainty principle (GUP) in extra dimensions as a primary input. There is a phenomenologically close relation between the GUP and Modified Dispersion Relations (MDR). However, the form of the MDR in theories with extra dimensions is as yet not known. The purpose of this letter is to derive the MDR in extra dimensional scenarios. To achieve this goal, we focus our attention on the thermodynamics of a FRW universe within a proposed MDR in an extra dimensional model universe. We then compare our results with the well-known results for the thermodynamics of a FRW universe in an extra dimensional GUP setup. The result shows that the entropy functionals calculated in these two approaches are the same, pointing to a possible conclusion that these approaches are equivalent. In this way, we derive the MDR form in a model universe with extra dimensions that would have interesting implications on the construction of the ultimate quantum gravity scenario.
We observe matterwave interference of a single cesium atom in free fall. The interferometer is an absolute sensor of acceleration and we show that this technique is sensitive to forces at the level of $3.2\times10^{-27}$ N with a spatial resolution at the micron scale. We observe the build up of the interference pattern one atom at a time in an interferometer where the mean path separation extends far beyond the coherence length of the atom. Using the coherence length of the atom wavepacket as a metric, we directly probe the velocity distribution and measure the temperature of a single atom in free fall.
In general a contractible complex need not be collapsible. Moreover, there exist complexes which are collapsible but even so admit a collapsing sequence where one "gets stuck", that is, one can choose the collapses in such a way that one arrives at a nontrivial complex which admits no collapsing moves. Here we examine this phenomenon in the case of a simplex. In particular we characterize all values of $n$ and $d$ so that the $n$-simplex may collapse to a $d$-complex from which no further collapses are possible. Equivalently, and in the language of high-dimensional generalizations of trees, we construct hypertrees that are anticollapsible, but not collapsible. Furthermore we examine anticollapsibility in random simplicial complexes.
The use of complex analysis for computing one-loop scattering amplitudes is naturally induced by generalised unitarity-cut conditions, fulfilled by complex values of the loop variable. We report on two techniques: cut integration with spinor variables as contour integrals of rational functions, and the use of the Discrete Fourier Transform to optimize the reduction of tensor integrals to master scalar integrals.
We investigate the thermodynamic properties of a stellar self-gravitating system arising from the Tsallis generalized entropy. In particular, the physical interpretation of the thermodynamic instability, as revealed in a previous paper (Taruya & Sakagami, cond-mat/0107494, Physica A 307, 185 (2002)), is discussed in detail based on the non-extensive thermostatistics. Examining the Clausius relation in a quasi-static experiment, we obtain the standard thermodynamic relation that the physical temperature of the equilibrium non-extensive system is identified with the inverse of the Lagrange multiplier, $T_{phys}=1/\beta$. Using this relation, the specific heat of the total system is computed, and we confirm the common feature of self-gravitating systems that the presence of negative specific heat leads to thermodynamic instability. In addition to the gravothermal instability discovered previously, the specific heat shows a curious divergent behavior at polytrope index $n>3$, suggesting another type of thermodynamic instability. Evaluating the second variation of the free energy, we find that the marginal stability condition indicated by the specific heat can be exactly recovered from the second variation of the free energy. Thus, the stellar polytropic system is consistently characterized by the non-extensive thermostatistics as a plausible thermal equilibrium state. We also clarify the non-trivial scaling behavior appearing in the specific heat and address the origin of the non-extensive nature of the stellar polytrope.
We investigate the stochasticity of temperature fluctuations in the cosmic microwave background (CMB) radiation data from the {\it Wilkinson Microwave Anisotropy Probe}. We show that the angular fluctuations of the temperature form a Markov process with a {\it Markov angular scale}, $\Theta_{\rm Markov}=1.01^{+0.09}_{-0.07}$. We characterize the complexity of the CMB fluctuations by means of a Fokker-Planck or Langevin equation and measure the associated Kramers-Moyal coefficients for the fluctuating temperature field $T(\hat n)$ and its increment, $\Delta T =T(\hat n_1) - T(\hat n_2)$. Through this method we show that the temperature fluctuations in the CMB have fat tails compared to a Gaussian distribution.
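A minimal sketch of the underlying estimation step: the first two Kramers-Moyal coefficients of a one-dimensional series obtained from conditional moments of increments, tested here on a simulated Ornstein-Uhlenbeck process; the binning, sample-size cut, and test process are illustrative choices, not the analysis pipeline applied to the WMAP maps.

```python
import numpy as np

def kramers_moyal(x, dt=1.0, n_bins=30):
    """Estimate D1(x) and D2(x) from conditional moments of increments:
        D_k(x) ~ < (x(t+dt) - x(t))^k | x(t) = x > / (k! * dt)."""
    dx = x[1:] - x[:-1]
    bins = np.linspace(x.min(), x.max(), n_bins + 1)
    idx = np.digitize(x[:-1], bins) - 1
    centers, d1, d2 = [], [], []
    for b in range(n_bins):
        sel = idx == b
        if sel.sum() < 10:                       # skip poorly populated bins
            continue
        centers.append(0.5 * (bins[b] + bins[b + 1]))
        d1.append(dx[sel].mean() / dt)
        d2.append((dx[sel] ** 2).mean() / (2.0 * dt))
    return np.array(centers), np.array(d1), np.array(d2)

# toy usage: Ornstein-Uhlenbeck process, for which D1(x) = -x and D2(x) is constant
rng = np.random.default_rng(2)
n, dt = 200_000, 0.01
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = x[t] - x[t] * dt + np.sqrt(dt) * rng.normal()
centers, d1, d2 = kramers_moyal(x, dt)
print("fitted drift slope (expect about -1):", round(float(np.polyfit(centers, d1, 1)[0]), 2))
```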
Kinesin-5, also known as Eg5 in vertebrates is a processive motor with 4 heads, which moves on filamentous tracks called microtubules. The basic function of Kinesin-5 is to slide apart two anti-parallel microtubules by simultaneously walking on both the microtubules. We develop an analytical expression for the steady-state relative velocity of this sliding in terms of the rates of attachments and detachments of motor heads with the ATPase sites on the microtubules. We first analyse the motion of one pair of motor heads on one microtubule and then couple it to the motion of the other pair of motor heads of the same motor on the second microtubule to get the relative velocity of sliding.
We investigate the nature of the low-energy, large-scale excitations in the three-dimensional Edwards-Anderson Ising spin glass with Gaussian couplings and free boundary conditions, by studying the response of the ground state to a coupling-dependent perturbation introduced previously. The ground states are determined exactly for system sizes up to 12^3 spins using a branch and cut algorithm. The data are consistent with a picture where the surface of the excitations is not space-filling, such as the droplet or the ``TNT'' picture, with only minimal corrections to scaling. When allowing for very large corrections to scaling, the data are also consistent with a picture with space-filling surfaces, such as replica symmetry breaking. The energy of the excitations scales with their size with a small exponent $\theta'$, which is compatible with zero if we allow moderate corrections to scaling. We compare the results with data for periodic boundary conditions obtained with a genetic algorithm, and discuss the effects of different boundary conditions on corrections to scaling. Finally, we analyze the performance of our branch and cut algorithm, finding that it is correlated with the existence of large-scale, low-energy excitations.
It has been shown experimentally that contact interactions may influence the migration of cancer cells. Previous works have modeled this using stochastic, discrete models (cellular automata) at the cell level. However, for the study of the growth of real-size tumors with several millions of cells, it is best to use a macroscopic model having the form of a partial differential equation (PDE) for the density of cells. The difficulty is to predict the effect, at the macroscopic scale, of contact interactions that take place at the microscopic scale. To address this we use a multiscale approach: starting from a very simple, yet experimentally validated, microscopic model of migration with contact interactions, we derive a macroscopic model. We show that a diffusion equation arises, as is often postulated in the field of glioma modeling, but it is nonlinear because of the interactions. We give the explicit dependence of the diffusivity on the cell density and on a parameter governing cell-cell interactions. We discuss in detail the conditions of validity of the approximations used in the derivation, and we compare analytic results from our PDE to numerical simulations and to some in vitro experiments. We notice that the family of microscopic models we started from includes as special cases some kinetically constrained models that were introduced for the study of the physics of glasses, supercooled liquids and jamming systems.
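For a concrete sense of how such a density-dependent diffusion PDE behaves, here is a minimal explicit finite-difference sketch for $\partial_t \rho = \partial_x\left(D(\rho)\,\partial_x \rho\right)$ in one dimension with no-flux boundaries; the particular form of $D(\rho)$ below is a hypothetical placeholder, not the diffusivity derived in the paper.

```python
import numpy as np

def D(rho, d0=1.0, alpha=0.5):
    """Hypothetical density-dependent diffusivity; a placeholder for the
    interaction-dependent diffusivity derived from the microscopic model."""
    return d0 * (1.0 - alpha * rho)

def step(rho, dx, dt):
    """One explicit, conservative update of d(rho)/dt = d/dx( D(rho) d(rho)/dx )."""
    d_face = 0.5 * (D(rho[:-1]) + D(rho[1:]))    # diffusivity at interior cell faces
    flux = np.zeros(rho.size + 1)                # face fluxes; zero at both ends (no flux)
    flux[1:-1] = -d_face * (rho[1:] - rho[:-1]) / dx
    return rho - dt * (flux[1:] - flux[:-1]) / dx

nx, L = 200, 1.0
dx = L / nx
x = np.linspace(0.0, L, nx)
rho = np.exp(-((x - 0.5 * L) ** 2) / 0.005)      # initial localized cluster of cells
dt = 0.2 * dx**2                                 # well below the explicit stability limit
mass0 = rho.sum() * dx
for _ in range(2000):
    rho = step(rho, dx, dt)
print("relative mass change:", float(abs(rho.sum() * dx - mass0) / mass0))
```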
With the ongoing debate on 'freedom of speech' vs. 'hate speech' there is an urgent need to carefully understand the consequences of the inevitable culmination of the two, i.e., 'freedom of hate speech' over time. An ideal scenario to understand this would be to observe the effects of hate speech in an (almost) unrestricted environment. Hence, we perform the first temporal analysis of hate speech on Gab.com, a social media site with a very loose moderation policy. We first generate temporal snapshots of Gab from millions of posts and users. Using these temporal snapshots, we compute an activity vector based on the DeGroot model to identify hateful users. The amount of hate speech in Gab is steadily increasing, and new users are becoming hateful at an increased and faster rate. Further, our analysis reveals that the hateful users occupy prominent positions in the Gab network. Also, the language used by the community as a whole seems to correlate more with that of the hateful users than with that of the non-hateful ones. We discuss how many crucial design questions in CSCW open up from our work.
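A minimal sketch of a DeGroot-style activity computation on a directed follower graph: known hateful accounts are seeded with belief 1 and beliefs are repeatedly averaged over followed accounts; the toy graph, seed set, pinning of seeds, iteration count, and threshold are all illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def degroot_activity(adj, seed_hateful, n_iter=5):
    """Iterate b <- W b, where W row-normalizes the adjacency matrix
    (each user averages the beliefs of the accounts they follow).
    adj[i, j] = 1 if user i follows user j; seed_hateful marks known hateful users."""
    row_sums = adj.sum(axis=1, keepdims=True)
    W = np.divide(adj, row_sums, out=np.zeros_like(adj, dtype=float), where=row_sums > 0)
    b = seed_hateful.astype(float)
    for _ in range(n_iter):
        b = W @ b
        b[seed_hateful] = 1.0          # keep seed beliefs pinned (an illustrative choice)
    return b

# toy usage on a small random follower graph with a hypothetical seed set
rng = np.random.default_rng(3)
n = 50
adj = (rng.random((n, n)) < 0.1).astype(float)
np.fill_diagonal(adj, 0.0)
seeds = np.zeros(n, dtype=bool)
seeds[:5] = True
activity = degroot_activity(adj, seeds)
print("users above an illustrative 0.5 activity threshold:", int((activity > 0.5).sum()))
```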
We present a method developed to actively compensate common-mode magnetic disturbances on a multi-sensor device devoted to differential measurements. The system uses a field-programmable gate array (FPGA) card and operates in conjunction with a high-sensitivity magnetometer: compensating the common mode of the magnetic disturbances results in a relevant reduction of the difference-mode noise. The digital nature of the compensation system allows for using a numerical approach aimed at automatically adapting the feedback-loop filter response. A common-mode disturbance attenuation exceeding 50 dB is achieved, resulting in a final improvement of the differential noise floor by a factor of 10 over the whole spectral interval of interest.
The asymmetric simple exclusion process and its analysis by mode coupling theory (MCT) is reviewed. To treat the weakly asymmetric case at large space scale $x\varepsilon^{-1}$ (corresponding to small Fourier momentum at scale $p\varepsilon$), large time scale $t \varepsilon^{-\chi}$ and weak hopping bias $b \varepsilon^{\kappa}$ in the limit $\varepsilon \to 0$, we develop a mesoscale MCT that allows for studying the crossover at $\kappa=1/2$ and $\chi=2$ from Kardar-Parisi-Zhang (KPZ) to Edwards-Wilkinson (EW) universality. The dynamical structure function is shown to satisfy for all $\kappa$ an integral equation that is independent of the microscopic model parameters and has a solution that yields a scale-invariant function with the KPZ dynamical exponent $z=3/2$ at scale $\chi=3/2+\kappa$ for $0\leq\kappa<1/2$ and for $\chi=2$ the exact Gaussian EW solution with $z=2$ for $\kappa>1/2$. At the crossover point it is a function of both scaling variables which converges at macroscopic scale to the conventional MCT approximation of KPZ universality for $\kappa<1/2$. This fluctuation pattern confirms long-standing conjectures for $\kappa \leq 1/2$ and is in agreement with mathematically rigorous results for $\kappa>1/2$ despite the numerous uncontrolled approximations on which MCT is based.
Memristor-based crossbar arrays represent a promising emerging memory technology to replace conventional memories by offering a high density and enabling computing-in-memory (CIM) paradigms. While analog computing provides the best performance, non-idealities and ADC/DAC conversion limit memristor-based CIM. Logic-in-Memory (LIM) presents another flavor of CIM, in which the memristors are used in a binary manner to implement logic gates. Since binary neural networks (BNNs) use binary logic gates as the dominant operation, they can benefit from the massively parallel execution of binary operations and from a better resilience to variations of the memristors. Although conventional neural networks have been thoroughly investigated, the impact of faults on memristor-based BNNs remains unclear. Therefore, we analyze the impact of faults on logic gates in memristor-based crossbar arrays for BNNs. We propose a simulation framework that simulates different traditional faults to examine the accuracy loss of BNNs on memristive crossbar arrays. In addition, we compare different logic families with respect to their robustness and their feasibility for accelerating AI applications.
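As a toy version of the kind of fault injection such a framework performs, the sketch below evaluates a binary (XNOR-popcount) dot product with stuck-at-0/stuck-at-1 faults injected into the stored weight bits; the encoding, fault rates, and error metric are illustrative assumptions, not the paper's fault models or simulation framework.

```python
import numpy as np

rng = np.random.default_rng(4)

def xnor_popcount(w_bits, x_bits):
    """Binary dot product for {0,1}-encoded {-1,+1} values:
    dot = 2 * popcount(XNOR(w, x)) - n."""
    xnor = np.logical_not(np.logical_xor(w_bits, x_bits))
    return 2 * int(xnor.sum()) - w_bits.size

def inject_stuck_at(w_bits, p_sa0=0.02, p_sa1=0.02):
    """Stuck-at-0 / stuck-at-1 faults on the stored (memristive) weight bits."""
    faulty = w_bits.copy()
    r = rng.random(w_bits.shape)
    faulty[r < p_sa0] = 0
    faulty[(r >= p_sa0) & (r < p_sa0 + p_sa1)] = 1
    return faulty

n = 1024
w = rng.integers(0, 2, size=n)          # {0,1} encodes {-1,+1} weights
x = rng.integers(0, 2, size=n)          # {0,1} encodes {-1,+1} activations
clean = xnor_popcount(w, x)
errors = [abs(xnor_popcount(inject_stuck_at(w), x) - clean) for _ in range(200)]
print("clean output:", clean, " mean |error| under faults:", float(np.mean(errors)))
```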
We find three characterizations for a multidimensional (n+1)-web W possessing a reduct reducible subweb: its closed form equations, the integrability of an invariant distribution associated with W, and the relations between the components of its torsion tensor. In the case of codimension one, the latter criterion establishes a relation with solutions of a system of nonlinear second-order PDEs. Some particular cases of this system were considered by Goursat in 1899.
We construct a rational $T^2$-equivariant elliptic cohomology theory for the 2-torus $T^2$, starting from an elliptic curve $C$ over the complex numbers and coordinate data around the identity. The theory is defined by constructing an object $EC_{T^2}$ in the algebraic model category $dA(T^2)$, which by Greenlees and Shipley is Quillen-equivalent to rational $T^2$-spectra. This result is a generalization to the 2-torus of the construction [Gre05] for the circle. The object $EC_{T^2}$ is directly built using geometric inputs coming from the Cousin complex of the structure sheaf of the surface $C \times C$.
Elliott and Halberstam proved that $\sum_{p<n} 2^{\omega(n-p)}$ is asymptotic to $\phi(n)$. In analogy to the Erd\H{o}s--Kac Theorem, Elliott conjectured that if one restricts the summation to primes $p$ such that $\omega(n-p)\le 2 \log \log n+\lambda(2\log \log n)^{1/2}$ then the sum will be asymptotic to $\phi(n)\int_{-\infty}^{\lambda} e^{-t^2/2}\,dt/\sqrt{2\pi}$. We show that this conjecture follows from the Bombieri--Vinogradov Theorem. We further prove a related result involving the Poisson--Dirichlet distribution, employing deeper level-of-distribution results for the primes.
Blazars, radio-loud active galactic nuclei with the relativistic jet closely aligned with the line of sight, dominate the extragalactic sky observed at gamma-ray energies, above 100 MeV. We discuss some of the emission properties of these sources, focusing in particular on the "blazar sequence" and the interpretative models of the high-energy emission of BL Lac objects.