Let K be an algebraically closed field, X a K-scheme, and X(K) the set of closed points in X. A constructible set C in X(K) is a finite union of subsets Y(K) for finite type subschemes Y in X. A constructible function f : X(K) --> Q has f(X(K)) finite and f^{-1}(c) constructible for all nonzero c. Write CF(X) for the Q-vector space of constructible functions on X. Let phi : X --> Y and psi : Y --> Z be morphisms of C-varieties. MacPherson defined a Q-linear "pushforward" CF(phi) : CF(X) --> CF(Y) by "integration" w.r.t. the topological Euler characteristic. It is functorial, that is, CF(psi o phi)=CF(psi) o CF(phi). This was extended to K of characteristic zero by Kennedy. This paper generalizes these results to K-schemes and Artin K-stacks with affine stabilizers. We define notions of Euler characteristic for constructible sets in K-schemes and K-stacks, and pushforwards and pullbacks of constructible functions, with functorial behaviour. Pushforwards and pullbacks commute in Cartesian squares. We also define "pseudomorphisms", a generalization of morphisms well suited to problems involving constructible functions.
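For orientation, the pushforward admits a standard explicit description (the usual formula in characteristic zero, stated here for context rather than quoted from the paper): for $f \in CF(X)$ and a closed point $y \in Y(K)$,
$$ CF(\phi)(f)(y) \;=\; \sum_{c \in f(X(K)),\ c \neq 0} c \cdot \chi\bigl(f^{-1}(c) \cap \phi^{-1}(y)\bigr), $$
where $\chi$ denotes the Euler characteristic of constructible sets; functoriality $CF(\psi \circ \phi) = CF(\psi) \circ CF(\phi)$ then amounts to a Fubini-type property of $\chi$.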
We determine the Seiberg-Witten-Floer homology groups of the three-manifold which is the product of a surface of genus $g \geq 1$ with the circle, together with its ring structure, for spin-c structures which are non-trivial on the three-manifold. We give applications to computing Seiberg-Witten invariants of four-manifolds which are connected sums along surfaces, and we also reprove the higher-type adjunction inequalities previously obtained by Ozsv\'ath and Szab\'o.
We study integral operators related to a regularized version of the classical Poincar\'e path integral and the adjoint class generalizing Bogovski\u{\i}'s integral operator, acting on differential forms in $R^n$. We prove that these operators are pseudodifferential operators of order -1. The Poincar\'e-type operators map polynomials to polynomials and can have applications in finite element analysis. For a domain starlike with respect to a ball, the special support properties of the operators imply regularity for the de Rham complex without boundary conditions (using Poincar\'e-type operators) and with full Dirichlet boundary conditions (using Bogovski\u{\i}-type operators). For bounded Lipschitz domains, the same regularity results hold, and in addition we show that the cohomology spaces can always be represented by $C^\infty$ functions.
The yrast states of even-even vibrational and transitional nuclei are interpreted as a rotating condensate of interacting d-bosons, corresponding to the semi-classical concept of a tidal wave. A simple experimental manifestation of the anharmonicity caused by the boson interaction is found. The interpretation is substantiated by calculations based on the Collective Model and the Cranking Model.
We present results of deep integral field spectroscopy observations of NGC 4696, the dominant galaxy in the Centaurus cluster (Abell 3526), using high-resolution optical (4150-7200 A) VIMOS VLT spectra. After the Virgo cluster, this is the second nearest (z=0.0104) example of a cool core cluster. NGC 4696 is surrounded by a vast, luminous H alpha emission-line nebula (L = 2.2 \times 10^40 ergs per second). We explore the origin and excitation of the emission-line filaments and find their origin consistent with being drawn out, under rising radio bubbles, into the intracluster medium, as in other similar systems. Contrary to previous observations, we do not observe evidence for shock excitation of the outer filaments. Our optical spectra are consistent with the recent particle heating excitation mechanism of Ferland et al.
Let $\{X_j, j\in \mathbb{Z}\}$ be a Gaussian stationary sequence having a spectral function $F$ of infinite type. Then for all $n$ and $z\ge 0$, $$ \mathbb{P}\Big\{\sup_{j=1}^n |X_j|\le z \Big\}\le \Big(\int_{-z/\sqrt{G(f)}}^{z/\sqrt{G(f)}} e^{-x^2/2}\frac{dx}{\sqrt{2\pi}} \Big)^n,$$ where $G(f)$ is the geometric mean of the Radon-Nikodym derivative of the absolutely continuous part $f$ of $F$. The proof uses properties of finite Toeplitz forms. Let $\{X(t), t\in \mathbb{R}\}$ be a sample continuous stationary Gaussian process with covariance function $\gamma(u)$. We also show that there exists an absolute constant $K$ such that for all $T>0$ and $a>0$ with $T\ge \varepsilon(a)$, $$\mathbb{P}\Big\{\sup_{0\le s,t\le T} |X(s)-X(t)|\le a\Big\} \le \exp \Big\{-\frac{KT}{\varepsilon(a)\, p(\varepsilon(a))}\Big\},$$ where $\varepsilon(a)= \min\big\{b>0: \delta(b)\ge a\big\}$, $\delta(b)=\min_{u\ge 1}\sqrt{2(1-\gamma(ub))}$, and $p(b) = 1+\sum_{j=2}^\infty \frac{|2\gamma(jb)-\gamma((j-1)b)-\gamma((j+1)b)|}{2(1-\gamma(b))}$. The proof is based on decoupling inequalities arising from the Brascamp-Lieb inequality. Both approaches are developed and compared on examples. Several other related results are established.
For midrapidity fragments from central 50-200 AMeV Au+Au collisions, temperatures from double ratios of isotopic yields were compared with temperatures from particle-unbound states. Temperatures from particle-unbound states, T = 4-5 MeV, show an increasing difference, as the beam energy grows, from the temperatures from double ratios of isotopic yields, which increase from T = 5 MeV to T = 12 MeV. The lower temperatures extracted from particle-unstable states can be explained by increasing cooling of the decaying system due to expansion. This expansion is driven by the radial flow, and the freeze-out of particle-unstable states might depend on the dynamics of the expanding system. Source sizes from pp-correlation functions were found to be 9 to 11 fm.
This is a sequel to the papers [OW1], [OW2]. In [OW1], the authors introduced a canonical affine connection on $M$ associated to the contact triad $(M,\lambda,J)$. In [OW2], they used the connection to establish a priori $W^{k,p}$-coercive estimates for maps $w: \dot \Sigma \to M$ satisfying $\overline{\partial}^\pi w= 0, \, d(w^*\lambda \circ j) = 0$ \emph{without involving symplectization}. We call such a pair $(w,j)$ a contact instanton. In this paper, we first prove a canonical neighborhood theorem of the locus $Q$ foliated by closed Reeb orbits of a Morse-Bott contact form. Then using a general framework of the three-interval method, we establish exponential decay estimates for contact instantons $(w,j)$ of the triad $(M,\lambda,J)$, with $\lambda$ a Morse-Bott contact form and $J$ a CR-almost complex structure adapted to $Q$, under the condition that the asymptotic charge of $(w,j)$ at the associated puncture vanishes. We also apply the three-interval method to the symplectization case and provide an alternative approach via tensorial calculations to exponential decay estimates in the Morse-Bott case for the pseudoholomorphic curves on the symplectization of contact manifolds. This was previously established by Bourgeois [Bou] (resp. by Bao [Ba]), by using special coordinates, for the cylindrical (resp. for the asymptotically cylindrical) ends. The exponential decay result for the Morse-Bott case is an essential ingredient in the set-up of the moduli space of pseudoholomorphic curves which plays a central role in contact homology and symplectic field theory (SFT).
Quantum theory permits interference between indistinguishable paths but, at the same time, restricts its order. Single-particle interference, for instance, is limited to the second order, that is, to pairs of single-particle paths. To date, all experimental efforts to search for higher-order interferences beyond those compatible with quantum mechanics have been based on such single-particle schemes. However, quantum physics is not limited to single-particle interference. Here, we experimentally study many-particle higher-order interference using a two-photon-five-slit setup. We observe nonzero two-particle interference up to fourth order, corresponding to the interference of two distinct two-particle paths. We further show that fifth-order interference is restricted to $10^{-3}$ in the intensity-correlation regime and to $10^{-2}$ in the photon-correlation regime, thus providing novel bounds on higher-order quantum interference.
Isolated isothermal spheres of N gravitationally interacting points of equal mass are believed to be stable when density contrasts do not exceed 709. That stability limit, however, does not take into account fluctuations of temperature near the onset of instability. These are important when N is finite. Here we correlate {\it global mean quadratic temperature fluctuations} with the onset of instability. We show that such fluctuations trigger instability when the density contrast reaches a value near $709\cdot\exp(-3.3N^{-1/3})$. These limiting density contrasts are significantly smaller than 709 when N is not very big, which suggests (i) that numerical calculations with small N may not reflect correctly the onset of core collapse in clusters with big N, and (ii) that a greater number of globular clusters than is normally believed may already be in an advanced stage of core collapse, because most observed globular clusters whose parameters fit quasi-isothermal configurations are close to marginal stability.
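For illustration, the corrected threshold can be evaluated directly to see how quickly it approaches the classical value 709 as N grows (a minimal sketch; the N values are arbitrary):

```python
import math

# Corrected density-contrast threshold 709 * exp(-3.3 * N**(-1/3))
for N in (100, 10_000, 1_000_000):
    print(N, round(709 * math.exp(-3.3 * N ** (-1 / 3)), 1))
# ~348.2 for N=100, ~608.3 for N=10^4, ~686.0 for N=10^6
```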
We introduce an end-to-end learning framework for image-to-image composition, aiming to plausibly compose an object represented as a cropped patch from an object image into a background scene image. As our approach emphasizes the semantic and structural coherence of the composed images rather than their pixel-level RGB accuracy, we tailor the input and output of our network with structure-aware features and design our network losses accordingly, with ground truth established in a self-supervised setting through the object cropping. Specifically, our network takes the semantic layout features from the input scene image, features encoded from the edges and silhouette in the input object patch, as well as a latent code as inputs, and generates a 2D spatial affine transform defining the translation and scaling of the object patch. The learned parameters are further fed into a differentiable spatial transformer network to transform the object patch into the target image, where our model is trained adversarially using an affine transform discriminator and a layout discriminator. We evaluate our network, coined SAC-GAN, for various image composition scenarios in terms of quality, composability, and generalizability of the composite images. Comparisons are made to state-of-the-art alternatives, including Instance Insertion, ST-GAN, CompGAN and PlaceNet, confirming the superiority of our method.
Dynamic-mode atomic force microscopy (AFM) in liquid remains complicated due to the strong viscous damping of the cantilever resonance. Here we show that a high-quality resonance (Q>20) can be achieved in aqueous solution by attaching a microgram-bead at the end of the nanogram-cantilever. The resulting increase in cantilever mass causes the resonance frequency to drop significantly. However, the force sensitivity --- as expressed via the minimum detectable force gradient --- is hardly affected, because of the enhanced quality factor. Via the enhancement of the quality factor, the attached bead also reduces the relative importance of noise in the deflection detector. It can thus yield an improved signal-to-noise ratio when this detector noise is significant. We describe and analyze these effects for a set-up which includes magnetic actuation of the cantilevers and which can be easily implemented in any AFM system that is compatible with an inverted optical microscope.
Autonomous robotic surgery has advanced significantly based on analysis of visual and temporal cues in surgical workflow, but relational cues from domain knowledge remain under-investigated. Complex relations in surgical annotations can be divided into intra- and inter-relations, both valuable to autonomous systems for comprehending surgical workflows. Intra- and inter-relations describe, respectively, the relevance of various categories within a particular annotation type and the relevance of different annotation types. This paper aims to systematically investigate the importance of relational cues in surgery. First, we contribute the RLLS12M dataset, a large-scale collection of robotic left lateral sectionectomy (RLLS) videos, by curating 50 videos of 50 patients operated on by 5 surgeons and annotating a hierarchical workflow, which consists of 3 inter- and 6 intra-relations, 6 steps, 15 tasks, and 38 activities represented as the triplet of 11 instruments, 8 actions, and 16 objects, totaling 2,113,510 video frames and 12,681,060 annotation entities. Correspondingly, we propose a multi-relation purification hybrid network (MURPHY), which aptly incorporates novel relation modules to augment the feature representation by purifying relational features using the intra- and inter-relations embodied in annotations. The intra-relation module leverages an R-GCN to implant visual features in different graph relations, which are aggregated using a targeted relation purification with affinity information measuring label consistency and feature similarity. The inter-relation module is motivated by attention mechanisms to regularize the influence of relational features based on the hierarchy of annotation types from the domain knowledge. Extensive experimental results on the curated RLLS dataset confirm the effectiveness of our approach, demonstrating that relations matter in surgical workflow analysis.
We propose a conceptual model of participation in viral diffusion processes composed of four stages: awareness, infection, engagement, and action. To verify the model, we applied and studied it in a virtual social chat environment. The study investigates the behavioral paths of actions that reflect the stages of participation in the diffusion, and presents shortcuts that lead to the final action, i.e., attendance at a virtual event. The results show that participation in each stage of the process increases the probability of reaching the final action. Nevertheless, the majority of users involved in the virtual event did not go through every stage of the process but followed the shortcuts. This suggests that the viral diffusion process is not necessarily a linear sequence of human actions but rather a dynamic system.
We prove the Plancherel formula for hypergeometric functions associated to a root system in the situation when the root multiplicities are negative (but close to 0). As a result we obtain a classification of the hypergeometric functions that are square integrable, and we find a closed formula for their square norm as a function of the root multiplicities.
Simulating the dynamics and the non-equilibrium steady state of an open quantum system are hard computational tasks on conventional computers. For the simulation of the time evolution, several efficient quantum algorithms have recently been developed. However, computing the non-equilibrium steady state as the long-time limit of the system dynamics is often not a viable solution, because of exceedingly long transient features or strong quantum correlations in the dynamics. Here, we develop an efficient quantum algorithm for the direct estimation of averaged expectation values of observables on the non-equilibrium steady state, thus bypassing the time integration of the master equation. The algorithm encodes the vectorized representation of the density matrix on a quantum register, and makes use of quantum phase estimation to approximate the eigenvector associated with the zero eigenvalue of the generator of the system dynamics. We show that the output state of the algorithm allows one to estimate expectation values of observables on the steady state. Away from critical points, where the Liouvillian gap scales as a power law of the system size, the quantum algorithm performs with exponential advantage compared to exact diagonalization.
Soil temperature is one of the most significant parameters, playing a crucial role in glacier energy, mass-balance dynamics, surface hydrological processes, glacier-atmosphere interaction, nutrient cycling, ecological stability, and the management of soil, water, and field crops. In this work, we introduce a novel approach using transformer models for soil temperature forecasting. To the best of our knowledge, this work is the first attempt to predict soil temperature with transformer models. Experiments are carried out on six different FLUXNET stations, modeling them with five different transformer models, namely Vanilla Transformer, Informer, Autoformer, Reformer, and ETSformer. To demonstrate the effectiveness of the proposed approach, the experimental results are compared with both deep learning approaches and studies from the literature. The results show that the utilization of transformer models provides a significant contribution to the literature, establishing a new state of the art.
Active motions of a biological membrane can be induced by non-thermal fluctuations that occur in the outer environment of the membrane. We discuss the dynamics of a membrane interacting hydrodynamically with an active wall that exerts random velocities on the ambient fluid. Solving the hydrodynamic equations of a bound membrane, we first derive a dynamic equation for the membrane fluctuation amplitude in the presence of different types of walls. Membrane two-point correlation functions are calculated for three different cases: (i) a static wall, (ii) an active wall, and (iii) an active wall with an intrinsic time scale. We focus on the mean squared displacement (MSD) of a tagged membrane segment, which describes its Brownian motion. For the static wall case, there are two asymptotic regimes of the MSD ($\sim t^{2/3}$ and $\sim t^{1/3}$) when the hydrodynamic decay rate changes monotonically. In the case of an active wall, the MSD grows linearly in time ($\sim t$) in the early stage, which is unusual for a membrane segment. This linear-growth region of the MSD is further extended when the active wall has a finite intrinsic time scale.
A useful finite-dimensional matrix representation of the derivative of periodic functions is obtained by using some elementary facts of trigonometric interpolation. This NxN matrix is a projection of the angular derivative onto polynomial subspaces of finite dimension, and it can be interpreted as a generator of discrete rotations associated with the z-component of the angular momentum operator projected onto such subspaces, thus inheriting some properties of the continuum operator. The group associated with these discrete rotations is the cyclic group of order N. Since the square of the quantum angular momentum L^2 is associated with a partial differential boundary value problem in the angular variables $\theta$ and $\phi$, whose solution is given in terms of the spherical harmonics, we can project this differential equation to obtain a finite-dimensional eigenvalue matrix problem. This is done by extending to several variables a projection technique for solving two-point boundary value problems numerically, and by using the matrix representation of the angular derivative found before. The eigenvalues of the matrix representing L^2 are found to have the exact form n(n+1), counting the degeneracy, and the eigenvectors are found to coincide exactly with the corresponding spherical harmonics evaluated at a certain set of points.
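As an illustration of such a differentiation matrix, here is a minimal sketch (not necessarily the paper's exact construction) of the standard NxN matrix from trigonometric interpolation at equispaced nodes, for odd N; its eigenvalues are the purely imaginary values i*m, m = -(N-1)/2, ..., (N-1)/2:

```python
import numpy as np

# D[j, k] = 0.5 * (-1)**(j-k) / sin((x_j - x_k)/2) off the diagonal, 0 on it.
N = 9
x = 2 * np.pi * np.arange(N) / N
D = np.zeros((N, N))
for j in range(N):
    for k in range(N):
        if j != k:
            D[j, k] = 0.5 * (-1.0) ** (j - k) / np.sin((x[j] - x[k]) / 2)

# D acts exactly as d/dx on trigonometric polynomials of degree <= (N-1)/2:
assert np.allclose(D @ np.sin(x), np.cos(x))
# Its spectrum consists of i*m for m = -(N-1)/2, ..., (N-1)/2:
print(np.sort(np.linalg.eigvals(D).imag.round(6)))  # -4, -3, ..., 3, 4
```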
A key problem in sensor networks is to decide which sensors to query when, in order to obtain the most useful information (e.g., for performing accurate prediction), subject to constraints (e.g., on power and bandwidth). In many applications the utility function is not known a priori, must be learned from data, and can even change over time. Furthermore, for large sensor networks, solving a centralized optimization problem to select sensors is not feasible, and thus we seek a fully distributed solution. In this paper, we present Distributed Online Greedy (DOG), an efficient, distributed algorithm for repeatedly selecting sensors online, only receiving feedback about the utility of the selected sensors. We prove very strong theoretical no-regret guarantees that apply whenever the (unknown) utility function satisfies a natural diminishing returns property called submodularity. Our algorithm has extremely low communication requirements, and scales well to large sensor deployments. We extend DOG to allow observation-dependent sensor selection. We empirically demonstrate the effectiveness of our algorithm on several real-world sensing tasks.
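A centralized, offline caricature of the greedy principle underlying DOG (DOG itself is distributed and online with bandit feedback; this sketch only illustrates the diminishing-returns setting, with the coverage utility and sensor footprints as hypothetical stand-ins):

```python
def greedy_select(sensors, utility, k):
    """Pick k sensors, each time adding the one with the largest marginal
    gain; for monotone submodular utility this is (1 - 1/e)-optimal."""
    chosen = []
    for _ in range(k):
        best = max((s for s in sensors if s not in chosen),
                   key=lambda s: utility(chosen + [s]) - utility(chosen))
        chosen.append(best)
    return chosen

# Example: a coverage utility over hypothetical sensor footprints.
footprints = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {5}, "s4": {1, 5, 6}}
cover = lambda S: len(set().union(*[footprints[s] for s in S]))
print(greedy_select(list(footprints), cover, 2))  # ['s1', 's4']
```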
We consider a typical setup of cavity QED consisting of a two-level atom interacting strongly with a single resonant electromagnetic field mode inside a cavity. The cavity is resonantly driven and the output undergoes continuous homodyne measurements. We derive an explicit expression for the state of the system conditional on a discrete photocount record. This expression takes a particularly simple form if the system is initially in the steady state. As a byproduct, we derive a general formula for the steady state that had been conjectured before in the strong driving limit.
We present the conceptual design and the physics potential of DarkSPHERE, a proposed spherical proportional counter, 3 m in diameter, electroformed underground at the Boulby Underground Laboratory. This effort builds on the R&D performed and the experience acquired by the NEWS-G Collaboration. DarkSPHERE is primarily designed to search for nuclear recoils from light dark matter in the 0.05--10 GeV mass range. Electroforming the spherical shell and implementing a shield based on pure water ensure a background level below 0.01 dru. These measures, combined with the proposed helium-isobutane gas mixture, will provide sensitivity to the spin-independent nucleon cross-section of $2\times 10^{-41} (2\times 10^{-43})$ cm$^2$ for a dark matter mass of $0.1 (1)$ GeV. The use of a hydrogen-rich gas mixture with a natural abundance of $^{13}$C provides sensitivity to spin-dependent nucleon cross-sections more than two orders of magnitude below existing constraints for dark matter lighter than 1 GeV. The characteristics of the detector also make it suitable for searches of other dark matter signatures, including scattering of MeV-scale dark matter with electrons, and super-heavy dark matter with masses around the Planck scale that leaves extended ionisation tracks in the detector.
In a recent paper, by working in the orbifold GUT limit of the Heterotic string, we showed how one could accommodate gauge coupling unification in the "mini-landscape" models of Lebedev et al. Furthermore, it was shown how one of the solutions was consistent with the decoupling of other exotics with F=0. In this short addendum, we show that this solution is also consistent with D=0.
Constructing charges in the covariant phase space formalism often leads to formally divergent expressions, even when the fields satisfy physically acceptable fall-off conditions. These expressions can be rendered finite by corner ambiguities in the definition of the presymplectic potential, which in some cases may be motivated by arguments involving boundary Lagrangians. We show that the necessary corner terms are already present in the variation of the bulk action and can be extracted in a straightforward way. Once these corner terms are included in the presymplectic potential, charges derived from an associated codimension-2 form are automatically finite. We illustrate the procedure with examples in two and three dimensions, working in Bondi gauge and obtaining integrable charges. As a by-product, actions are derived for these theories that admit a well-defined variational principle when the fields satisfy boundary conditions on a timelike surface with corners. An interesting feature of our analysis is that the fields are not required to be fully on-shell.
Recidivism prediction provides decision makers with an assessment of the likelihood that a criminal defendant will reoffend, which can be used in pre-trial decision-making. It can also be used to predict the locations where crimes are most likely to occur and the profiles that are more likely to commit violent crimes. While such instruments are gaining increasing popularity, their use is controversial as they may present potentially discriminatory bias in the risk assessment. In this paper we propose a new fair-by-design approach to predict recidivism. It is prototype-based, learns locally, and empirically extracts the data distribution. The results show that the proposed method is able to reduce the bias and provide human-interpretable rules to assist specialists in the explanation of the given results.
This paper has been withdrawn because the result turned out to be well known.
Complex Event Processing (CEP) has emerged as the unifying field for technologies that require processing and correlating distributed data sources in real-time. CEP finds applications in diverse domains, which has resulted in a large number of proposals for expressing and processing complex events. However, existing CEP languages lack a clear semantics, making them hard to understand and generalize. Moreover, there are no general techniques for evaluating CEP query languages with clear performance guarantees. In this paper we embark on the task of giving a rigorous and efficient framework for CEP. We propose a formal language for specifying complex events, called CEL, that contains the main features used in the literature and has a denotational and compositional semantics. We also formalize the so-called selection strategies, which had only been presented as by-design extensions to existing frameworks. With a well-defined semantics at hand, we study how to efficiently evaluate CEL for processing complex events in the case of unary filters. We start by studying the syntactical properties of CEL and propose rewriting optimization techniques for simplifying the evaluation of formulas. Then, we introduce a formal computational model for CEP, called complex event automata (CEA), and study how to compile CEL formulas into CEA. Furthermore, we provide efficient algorithms for evaluating CEA over event streams using constant time per event followed by constant-delay enumeration of the results. By gathering these results together, we propose a framework for efficiently evaluating CEL with unary filters. Finally, we show experimentally that this framework consistently outperforms the competition, and can be orders of magnitude more efficient even on trivial queries.
In this report, we explore the ability of language model agents to acquire resources, create copies of themselves, and adapt to novel challenges they encounter in the wild. We refer to this cluster of capabilities as "autonomous replication and adaptation" or ARA. We believe that systems capable of ARA could have wide-reaching and hard-to-anticipate consequences, and that measuring and forecasting ARA may be useful for informing measures around security, monitoring, and alignment. Additionally, once a system is capable of ARA, placing bounds on a system's capabilities may become significantly more difficult. We construct four simple example agents that combine language models with tools that allow them to take actions in the world. We then evaluate these agents on 12 tasks relevant to ARA. We find that these language model agents can only complete the easiest tasks from this list, although they make some progress on the more challenging tasks. Unfortunately, these evaluations are not adequate to rule out the possibility that near-future agents will be capable of ARA. In particular, we do not think that these evaluations provide good assurance that the ``next generation'' of language models (e.g. 100x effective compute scaleup on existing models) will not yield agents capable of ARA, unless intermediate evaluations are performed during pretraining. Relatedly, we expect that fine-tuning of the existing models could produce substantially more competent agents, even if the fine-tuning is not directly targeted at ARA.
This study presents an integrated approach for identifying key nodes in information propagation networks using advanced artificial intelligence methods. We introduce a novel technique that combines the Decision-making Trial and Evaluation Laboratory (DEMATEL) method with the Global Structure Model (GSM), creating a synergistic model that effectively captures both local and global influences within a network. This method is applied across various complex networks, such as social, transportation, and communication systems, utilizing the Global Network Influence Dataset (GNID). Our analysis highlights the structural dynamics and resilience of these networks, revealing insights into node connectivity and community formation. The findings demonstrate the effectiveness of our AI-based approach in offering a comprehensive understanding of network behavior, contributing significantly to strategic network analysis and optimization.
Given a pair of positive real numbers $\alpha, \beta$ and a sesqui-analytic function $K$ on a bounded domain $\Omega \subset \mathbb C^m$, in this paper, we investigate the properties of the sesqui-analytic function $\mathbb K^{(\alpha, \beta)}:= K^{\alpha+\beta}\big(\partial_i\bar{\partial}_j\log K\big )_{i,j=1}^ m,$ taking values in $m\times m$ matrices. One of the key findings is that $\mathbb K^{(\alpha, \beta)}$ is non-negative definite whenever $K^\alpha$ and $K^\beta$ are non-negative definite. In this case, a realization of the Hilbert module determined by the kernel $\mathbb K^{(\alpha,\beta)}$ is obtained. Let $\mathcal M_i$, $i=1,2,$ be two Hilbert modules over the polynomial ring $\mathbb C[z_1, \ldots, z_m]$. Then $\mathbb C[z_1, \ldots, z_{2m}]$ acts naturally on the tensor product $\mathcal M_1\otimes \mathcal M_2$. The restriction of this action to the polynomial ring $\mathbb C[z_1, \ldots, z_m]$ obtained using the restriction map $p \mapsto p_{|\Delta}$ leads to a natural decomposition of the tensor product $\mathcal M_1\otimes \mathcal M_2$, which is investigated. Two of the initial pieces in this decomposition are identified.
The Rocket Chip Generator uses a collection of parameterized processor components to produce RISC-V-based SoCs. It is a powerful tool that can produce a wide variety of processor designs ranging from tiny embedded processors to complex multi-core systems. In this paper we extend the features of the Memory Management Unit of the Rocket Chip Generator and specifically the TLB hierarchy. TLBs are essential in terms of performance because they mitigate the overhead of frequent Page Table Walks, but may harm the critical path of the processor due to their size and/or associativity. In the original Rocket Chip implementation the L1 Instruction/Data TLB is fully-associative and the shared L2 TLB is direct-mapped. We lift these restrictions and design and implement configurable, set-associative L1 and L2 TLB templates that can create any organization from direct-mapped to fully-associative to achieve the desired ratio of performance and resource utilization, especially for larger TLBs. We evaluate different TLB configurations and present performance, area, and frequency results of our design using benchmarks from the SPEC2006 suite on the Xilinx ZCU102 FPGA.
We introduce a new variant of the weak optimal transport problem where mass is distributed from one space to the other through unnormalized kernels. We give sufficient conditions for primal attainment and prove a dual formula for this transport problem. We also obtain dual attainment conditions for some specific cost functions. As a byproduct we obtain a transport characterization of the stochastic order defined by convex, positively 1-homogeneous functions, in the spirit of Strassen's theorem for convex domination.
Lately, the three-dimensional (3D) Dirac semimetal, which possesses 3D linear dispersion in its electronic structure as a bulk analogue of graphene, has generated widespread interest in both materials science and condensed matter physics. Very recently, crystalline Cd3As2 has been proposed and proved to be a 3D Dirac semimetal which can survive in atmosphere. Here, by controlled point-contact (PC) measurements, we observe exotic superconductivity around the point-contact region on the surface of a Cd3As2 crystal. The observation of a zero-bias conductance peak (ZBCP) and double conductance peaks (DCPs) symmetric about zero bias further reveals p-wave-like unconventional superconductivity in the Cd3As2 quantum matter. Considering the topological properties of the 3D Dirac semimetal, our findings may indicate that the Cd3As2 crystal under certain conditions is a candidate topological superconductor, which is predicted to support Majorana zero modes or gapless Majorana edge/surface modes at the boundary, depending on the dimensionality of the material.
The beneficial role of noise-injection in learning is a consolidated concept in the field of artificial neural networks, suggesting that even biological systems might take advantage of similar mechanisms to optimize their performance. The training-with-noise algorithm proposed by Gardner and collaborators is an emblematic example of a noise-injection procedure in recurrent networks, which can be used to model biological neural systems. We show how adding structure to noisy training data can substantially improve the algorithm's performance, allowing the network to approach perfect retrieval of the memories and wide basins of attraction, even in the scenario of maximal injected noise. We also prove that the so-called Hebbian Unlearning rule coincides with the training-with-noise algorithm when noise is maximal and the data are stable fixed points of the network dynamics.
Corporate credit ratings issued by third-party rating agencies are quantified assessments of a company's creditworthiness. Credit ratings correlate highly with the likelihood of a company defaulting on its debt obligations. These ratings play critical roles in investment decision-making as one of the key risk factors. They are also central to regulatory frameworks such as Basel II for calculating the necessary capital of financial institutions. Being able to predict rating changes will greatly benefit both investors and regulators alike. In this paper, we consider the corporate credit rating migration early prediction problem, which predicts whether the credit rating of an issuer will be upgraded, unchanged, or downgraded after 12 months, based on its latest financial reporting information at the time. We investigate the effectiveness of different standard machine learning algorithms and conclude that these models deliver inferior performance. As part of our contribution, we propose a new Multi-task Envisioning Transformer-based Autoencoder (META) model to tackle this challenging problem. META consists of Positional Encoding, a Transformer-based Autoencoder, and Multi-task Prediction to learn effective representations for both migration prediction and rating prediction. This enables META to better explore the historical data in the training stage for one-year-later prediction. Experimental results show that META outperforms all baseline models.
Each node in a wireless multi-hop network can adjust the power level at which it transmits and thus change the topology of the network to save energy by choosing the neighbors with which it directly communicates. Many previous algorithms for distributed topology control have assumed an ability at each node to deduce some location-based information such as the direction and the distance of its neighbor nodes with respect to itself. Such a deduction of location-based information, however, cannot be relied upon in real environments where the path loss exponents vary greatly leading to significant errors in distance estimates. Also, multipath effects may result in different signal paths with different loss characteristics, and none of these paths may be line-of-sight, making it difficult to estimate the direction of a neighboring node. In this paper, we present Step Topology Control (STC), a simple distributed topology control algorithm which reduces energy consumption while preserving the connectivity of a heterogeneous sensor network without use of any location-based information. We show that the STC algorithm achieves the same or better order of communication and computational complexity when compared to other known algorithms that also preserve connectivity without the use of location-based information. We also present a detailed simulation-based comparative analysis of the energy savings and interference reduction achieved by the algorithms. The results show that, in spite of not incurring a higher communication or computational complexity, the STC algorithm performs better than other algorithms in uniform wireless environments and especially better when path loss characteristics are non-uniform.
Mobile crowdsourcing has become easier thanks to the widespread adoption of smartphones capable of seamlessly collecting and pushing the desired data to cloud services. However, the success of mobile crowdsourcing relies on balancing supply and demand: first by accurately forecasting the supply-demand gap in space and time, and then by providing efficient incentives to encourage participant movements that maintain the desired balance. In this paper, we propose Deep-Gap, a deep learning approach based on residual learning to predict the gap between mobile crowdsourced service supply and demand at a given time and space. The prediction can drive the incentive model to achieve a geographically balanced service coverage, avoiding the case where some areas are over-supplied while others are under-supplied. This allows anticipating the supply-demand gap and redirecting crowdsourced service providers towards target areas. Deep-Gap relies on historical supply-demand time series data as well as available external data such as weather conditions and day type (e.g., weekday, weekend, holiday). First, we roll and encode the time series of supply and demand as images using the Gramian Angular Summation Field (GASF), the Gramian Angular Difference Field (GADF), and the Recurrence Plot (REC). These images are then used to train deep Convolutional Neural Networks (CNNs) to extract the low- and high-level features and forecast the crowdsourced service gap. We conduct a comprehensive comparative study by establishing two supply-demand gap forecasting scenarios: with and without external data. Compared to state-of-the-art approaches, Deep-Gap achieves the lowest forecasting errors in both scenarios.
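As an illustration of the image-encoding step, here is a minimal sketch of the standard GASF/GADF construction (the paper's preprocessing details may differ):

```python
import numpy as np

def gramian_fields(series):
    """Encode a 1-D series as GASF/GADF images via the polar representation."""
    s = np.asarray(series, dtype=float)
    s = 2 * (s - s.min()) / (s.max() - s.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(s, -1, 1))                # angular coordinate
    gasf = np.cos(phi[:, None] + phi[None, :])        # summation field
    gadf = np.sin(phi[:, None] - phi[None, :])        # difference field
    return gasf, gadf

gasf, gadf = gramian_fields([3.0, 1.0, 4.0, 1.0, 5.0])
print(gasf.shape)  # (5, 5) image, ready to feed a CNN
```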
We present an algorithm that, given finite simplicial sets $X$, $A$, $Y$ with an action of a finite group $G$, computes the set $[X,Y]^A_G$ of homotopy classes of equivariant maps $\ell \colon X \to Y$ extending a given equivariant map $f \colon A \to Y$ under the stability assumption $\dim X^H \leq 2 \operatorname{conn} Y^H$ and $\operatorname{conn} Y^H \geq 1$, for all subgroups $H\leq G$. For fixed $n = \operatorname{dim} X$, the algorithm runs in polynomial time. When the stability condition is dropped, the problem is undecidable already in the non-equivariant setting. The algorithm is obtained as a special case of a more general result: For finite diagrams of simplicial sets $X$, $A$, $Y$, i.e. functors $\mathcal{I}^\mathrm{op} \to \mathsf{sSet}$, in the stable range $\operatorname{dim} X \leq 2 \operatorname{conn} Y$ and $\operatorname{conn} Y > 1$, we give an algorithm that computes the set $[X, Y]^A$ of homotopy classes of maps of diagrams $\ell \colon X \to Y$ extending a given $f \colon A \to Y$. Again, for fixed $n = \dim X$, the running time of the algorithm is polynomial. The algorithm can be utilized to compute homotopy invariants in the equivariant setting -- for example, one can algorithmically compute equivariant stable homotopy groups. Further, one can apply the result to solve problems from computational topology, which we showcase on the following Tverberg-type problem: Given a $k$-dimensional simplicial complex $K$, is there a map $K \to \mathbb{R}^{d}$ without $r$-tuple intersection points? In the metastable range of dimensions, $rd \geq (r+1)k +3$, the result of Mabillard and Wagner shows this problem equivalent to the existence of a particular equivariant map. In this range, our algorithm is applicable and, thus, the $r$-Tverberg problem is algorithmically decidable (in polynomial time when $k$, $d$ and $r$ are fixed).
Consider a finite set $E$. Assume that each $e \in E$ has a "weight" $w \left(e\right) \in \mathbb{R}$ assigned to it, and any two distinct $e, f \in E$ have a "distance" $d \left(e, f\right) = d \left(f, e\right) \in \mathbb{R}$ assigned to them, such that the distances satisfy the ultrametric triangle inequality $d(a,b)\leqslant \max \left\{d(a,c),d(b,c)\right\}$. We look for a subset of $E$ of given size with maximum perimeter (where the perimeter is defined by summing the weights of all elements and their pairwise distances). We show that any such subset can be found by a greedy algorithm (which starts with the empty set, and then adds new elements one by one, maximizing the perimeter at each step). We use this to define numerical invariants, and also to show that the maximum-perimeter subsets of all sizes form a strong greedoid, and the maximum-perimeter subsets of any given size are the bases of a matroid. This essentially generalizes the "$P$-orderings" constructed by Bhargava in order to define his generalized factorials, and is also similar to the strong greedoid of maximum diversity subsets in phylogenetic trees studied by Moulton, Semple and Steel. We further discuss some numerical invariants of $E, w, d$ stemming from this construction, as well as an analogue where maximum-perimeter subsets are replaced by maximum-perimeter tuples (i.e., elements can appear multiple times).
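A direct transcription of the greedy algorithm just described (a minimal sketch; w is a dict of weights and d a symmetric distance function assumed to satisfy the ultrametric inequality):

```python
def greedy_max_perimeter(E, w, d, k):
    """Grow a k-subset of E, at each step adding the element that maximizes
    the perimeter (sum of element weights plus all pairwise distances)."""
    S = []
    for _ in range(k):
        best = max((e for e in E if e not in S),
                   key=lambda e: w[e] + sum(d(e, f) for f in S))
        S.append(best)
    return S

# Toy ultrametric: distance 1 within a "cluster" (same first letter), 2 across.
E = ["aa", "ab", "ba", "bb"]
w = {e: 0.0 for e in E}
d = lambda e, f: 0 if e == f else (1 if e[0] == f[0] else 2)
print(greedy_max_perimeter(E, w, d, 3))  # ['aa', 'ba', 'ab']
```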
Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally.
We construct a theory which can explain the dynamics toward the steady state of self-gravitating systems (SGSs), in which many particles interact via the gravitational force. Real examples of SGSs in the universe are globular clusters and galaxies. The idea is to represent the interaction by which a particle of the system is affected by the others as a special random force. That is, we use a special Langevin equation, just as the normal Langevin equation can unveil the dynamics toward the steady state described by the Maxwell-Boltzmann distribution. However, we cannot introduce randomness into the system without evidence; we must first confirm that each orbit is indeed random. Since it is impossible to determine the orbits of stars in globular clusters from observations, we use numerical simulations. The numerical simulations of SGSs provide clear grounds for using the random noise. The special Langevin equation includes additive and multiplicative noise. By using the random process, we derive the non-Maxwellian distribution of SGSs, especially around the core. The number density can be obtained through the steady-state solution of the Fokker-Planck equation corresponding to the random process. We show that the number density matches the density profiles around the core by adjusting the friction coefficient and the intensity of the multiplicative noise. Moreover, we also show that our model can be applied to a system containing a heavier particle, corresponding to the black hole in a globular cluster.
Demand for Personal Protective Equipment (PPE) such as surgical masks, gloves, and gowns has increased significantly since the onset of the COVID-19 pandemic. In hospital settings, both medical staff and patients are required to wear PPE. As these facilities resume regular operations, staff will be required to wear PPE at all times while additional PPE will be mandated during medical procedures. This will put increased pressure on hospitals which have had problems predicting PPE usage and sourcing its supply. To meet this challenge, we propose an approach to predict demand for PPE. Specifically, we model the admission of patients to a medical department using multiple independent queues. Each queue represents a class of patients with similar treatment plans and hospital length-of-stay. By estimating the total workload of each class, we derive closed-form estimates for the expected amount of PPE required over a specified time horizon using current PPE guidelines. We apply our approach to a data set of 22,039 patients admitted to the general internal medicine department at St. Michael's hospital in Toronto, Canada from April 2010 to November 2019. We find that gloves and surgical masks represent approximately 90% of predicted PPE usage. We also find that while demand for gloves is driven entirely by patient-practitioner interactions, 86% of the predicted demand for surgical masks can be attributed to the requirement that medical practitioners will need to wear them when not interacting with patients.
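In spirit, the closed-form estimate reduces to workload accounting per patient class (a minimal sketch with hypothetical parameters, not the paper's fitted model):

```python
def expected_ppe(classes, horizon_days):
    """Expected PPE units over a horizon, summed over patient classes.

    Each class: arrival rate (patients/day), mean length of stay (days),
    practitioner interactions per patient-day, PPE units per interaction.
    By Little's law, the expected census is arrival_rate * mean_los.
    """
    total = 0.0
    for rate, mean_los, interactions_per_day, units_per_interaction in classes:
        census = rate * mean_los
        total += census * interactions_per_day * units_per_interaction * horizon_days
    return total

# Two hypothetical classes: short-stay and long-stay patients.
classes = [(10.0, 3.0, 4.0, 2.0),   # 10/day, 3-day stay, 4 visits/day, 2 units
           (2.0, 14.0, 6.0, 2.0)]
print(expected_ppe(classes, horizon_days=7))  # 4032.0 units
```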
We attribute the recently discovered cosmic ray electron and cosmic ray positron excess components and their cutoffs to the acceleration in the supernova shock in the polar cap of exploding Wolf Rayet and Red Super Giant stars. Considering a spherical surface at some radius around such a star, the magnetic field is radial in the polar cap as opposed to most of 4 pi (the full solid angle), where the magnetic field is nearly tangential. This difference yields a flatter spectrum, and also an enhanced positron injection for the cosmic rays accelerated in the polar cap. This reasoning naturally explains the observations. Precise spectral measurements will be the test, as this predicts a simple E^-2 spectrum for the new components in the source, steepened to E^-3 in observations with an E^-4 cutoff.
We give an alternative proof of a conjecture of Bollob\'as, Brightwell and Leader, first proved by Peter Allen, stating that the number of boolean functions definable by 2-SAT formulae is $(1+o(1))2^{\binom{n+1}{2}}$. One step in the proof determines the asymptotics of the number of "odd-blue-triangle-free" graphs on $n$ vertices.
Oxygen vacancies (VO's) are of paramount importance in influencing the properties and applications of ceria (CeO2). Yet, comprehending the distribution and nature of the VO's poses a significant challenge due to the vast number of electronic configurations and intricate many-body interactions among VO's and polarons (Ce3+'s). In this study, we employed LASSO regression from machine learning, in conjunction with a cluster expansion model and first-principles calculations, to decouple the interactions among the Ce3+'s and VO's, thereby circumventing the limitations associated with sampling electronic configurations. By separating these interactions, we identified specific electronic configurations characterized by the most favorable VO-Ce3+ attractions and the least Ce3+-Ce3+/VO-VO repulsions, which are crucial in determining the stability of vacancy structures. Through more than 10^8 Metropolis Monte Carlo samplings of VO's and Ce3+'s in the near-surface of CeO2(111), we explored potential configurations within an 8x8 supercell. Our findings revealed that oxygen vacancies tend to aggregate and are most abundant in the third oxygen layer, primarily due to extensive geometric relaxation, an aspect previously overlooked. This behavior depends notably on the concentration of VO's. This work introduces a novel theoretical framework for unraveling the complex vacancy structures in metal oxides, with potential applications in redox and catalytic chemistry.
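The configurational search described above rests on the standard Metropolis acceptance rule; a minimal sketch (energy and propose are hypothetical stand-ins for the cluster-expansion energy and a configuration move):

```python
import math
import random

def metropolis_step(config, energy, propose, beta):
    """One Metropolis move: accept the proposed configuration with
    probability min(1, exp(-beta * dE)), else keep the current one."""
    candidate = propose(config)
    dE = energy(candidate) - energy(config)
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        return candidate
    return config
```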
The number of maximal independent sets of the n-cycle graph C_n is known to be the nth term of the Perrin sequence. The action of the automorphism group of C_n on the family of these maximal independent sets partitions this family into disjoint orbits, which represent the non-isomorphic (i.e., defined up to a rotation and a reflection) maximal independent sets. We provide exact formulas for the total number of orbits and the number of orbits having a given number of isomorphic representatives. We also provide exact formulas for the total number of unlabeled (i.e., defined up to a rotation) maximal independent sets and the number of unlabeled maximal independent sets having a given number of isomorphic representatives. It turns out that these formulas involve both Perrin and Padovan sequences.
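For reference, the Perrin sequence counting these maximal independent sets can be computed by its three-term recurrence (a minimal sketch):

```python
def perrin(n):
    """Perrin numbers: P(0)=3, P(1)=0, P(2)=2, P(n)=P(n-2)+P(n-3)."""
    a, b, c = 3, 0, 2
    for _ in range(n):
        a, b, c = b, c, a + b
    return a

# Number of maximal independent sets of the n-cycle C_n, for n = 3..10:
print([perrin(n) for n in range(3, 11)])  # [3, 2, 5, 5, 7, 10, 12, 17]
```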
Modern time series analysis requires the ability to handle datasets that are inherently high-dimensional; examples include applications in climatology, where measurements from numerous sensors must be taken into account, or inventory tracking of large shops, where the dimension is defined by the number of tracked items. The standard way to mitigate computational issues arising from the high dimensionality of the data is by applying some dimension reduction technique that preserves the structural properties of the ambient space. The dissimilarity between two time series is often measured by ``discrete'' notions of distance, e.g., dynamic time warping or the discrete Fr\'echet distance. Since all these distance functions are computed directly on the points of a time series, they are sensitive to different sampling rates or gaps. The continuous Fr\'echet distance offers a popular alternative which aims to alleviate this by taking into account all points on the polygonal curve obtained by linearly interpolating between any two consecutive points in a sequence. We study the ability of random projections \`a la Johnson and Lindenstrauss to preserve the continuous Fr\'echet distance of polygonal curves by effectively reducing the dimension. In particular, we show that one can reduce the dimension to $O(\epsilon^{-2} \log N)$, where $N$ is the total number of input points, while preserving, within a factor of $1\pm \epsilon$, the continuous Fr\'echet distance between any two polygonal curves determined by the input points. We conclude with applications on clustering.
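The underlying map is the classical Gaussian Johnson-Lindenstrauss projection (a minimal sketch for point sets; the contribution above is that the guarantee extends to the continuous Fr\'echet distance between the interpolated curves):

```python
import numpy as np

def jl_project(points, eps, seed=0):
    """Project n points in R^d to O(eps^-2 log n) dimensions with a random
    Gaussian matrix; pairwise distances are preserved up to 1 +/- eps
    with high probability."""
    n, d = points.shape
    k = int(np.ceil(8 * np.log(n) / eps**2))   # target dimension
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(d, k)) / np.sqrt(k)   # scaled Gaussian projection
    return points @ R
```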
The concept of graph compositions is related to several number theoretic concepts, including partitions of positive integers and the cardinality of the power set of finite sets. This paper examines graph compositions where the total number of components is restricted and illustrates a connection between graph compositions and Stirling numbers of the second kind.
We characterize sufficient conditions on three weight functions $u$ and $v_{1}, v_{2}$ that ensure the boundedness of the Hardy operator with variable limits on product spaces. The corresponding bound is worked out explicitly. Moreover, as an application, we obtain an explicit bound for the P\'{o}lya-Knopp type operator with certain weights.
With a densely defined symmetric semi-bounded operator of nonzero defect indexes $L_0$ in a separable Hilbert space ${\cal H}$ we associate a topological space $\Omega_{L_0}$ (the {\it wave spectrum}) constructed from the reachable sets of a dynamical system governed by the equation $u_{tt}+(L_0)^*u=0$. Wave spectra of unitarily equivalent operators are homeomorphic. In inverse problems, one needs to recover a Riemannian manifold $\Omega$ from dynamical or spectral boundary data. We show that for a generic class of manifolds, $\Omega$ is isometric to the wave spectrum $\Omega_{L_0}$ of the minimal Laplacian $L_0=-\Delta|_{C^\infty_0(\Omega\backslash \partial \Omega)}$ acting in ${\cal H}=L_2(\Omega)$, whereas $L_0$ is determined by the inverse data up to unitary equivalence. Hence, the manifold can be recovered (up to isometry) by the scheme `data $\Rightarrow L_0 \Rightarrow \Omega_{L_0} \overset{\rm isom}= \Omega$'. The wave spectrum is relevant to a wide class of dynamical systems describing finite-speed wave propagation processes. The paper elucidates the operator background of the boundary control method (Belishev, 1986), an approach to inverse problems based on their relations to control theory.
We report the first conformal ultra-wide band (UWB) array on a doubly curved surface for wide angle electronic scanning. We use a quadrilateral mesh as the basis for systematically arraying UWB radiators on arbitrary surfaces. A prototype consisting of a 52 element, dual-polarized Vivaldi array arranged over a 181 mm diameter hemisphere is developed. The antennas and SMP connectors are 3D printed out of titanium to allow for simple fabrication and assembly. We derive the theoretical gain of a hemispherical array based on the antenna size and number of elements. The measured realized gain of the prototype array is within 2 dB of the theoretical value from 2-18 GHz and scan angles out to 120{\deg} from the z-axis. This field of view is twice that of a planar array with the same diameter in agreement with theory. This work provides a baseline performance for larger conformal arrays that have more uniform meshes. Furthermore, the basic concept can be extended to other UWB radiating elements.
We derive the uniqueness of weak solutions to the Shigesada-Kawasaki-Teramoto (SKT) systems using an adjoint problem argument. Combining this with [PT17], we then derive the well-posedness of the SKT systems in space dimension $d\le 4$.
Charge and color breaking minima in SUSY theories might make the standard vacuum unstable. In this talk a brief review of this issue is given. When a complete analysis of all the potentially dangerous directions in the field space of the theory is carried out, imposing that the standard vacuum should be the global minimum, the corresponding constraints turn out to be very strong; in fact, extensive regions in the parameter space of soft SUSY-breaking terms become forbidden. For instance, in the context of the MSSM with universal soft terms, this produces important bounds not only on the value of A, but also on the values of B, M and m. In specific SUSY scenarios, such as fixed-point models, no-scale supergravity, gauge-mediated SUSY breaking and superstrings, the charge and color breaking constraints are also very important. For example, if the dilaton is the source of SUSY breaking in four-dimensional superstrings, the whole parameter space (m_{3/2},B) is excluded on these grounds. Cosmological analyses are also briefly reviewed.
Piecewise-deterministic Markov processes form a general class of non-diffusion stochastic models that involve both deterministic trajectories and random jumps at random times. In this paper, we state a new characterization of the jump rate of such a process with discrete transitions. We deduce from this result a nonparametric technique for estimating this feature of interest. We state the uniform convergence in probability of the estimator. The methodology is illustrated on a numerical example.
A computational paradigm based on neuroscientific concepts is proposed and shown to be capable of online unsupervised clustering. Because it is an online method, it is readily amenable to streaming real-time applications and is capable of dynamically adjusting to macro-level input changes. All operations, both training and inference, are localized and efficient. The paradigm is implemented as a cognitive column that incorporates five key elements: 1) temporal coding, 2) an excitatory neuron model for inference, 3) winner-take-all inhibition, 4) a column architecture that combines excitation and inhibition, and 5) localized training via spike-timing-dependent plasticity (STDP). These elements are described and discussed, and a prototype column is given. The prototype column is simulated with a semi-synthetic benchmark and is shown to have performance characteristics on par with classic k-means. Simulations reveal the inner operation and capabilities of the column, with emphasis on excitatory neuron response functions and STDP implementations.
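Element 5) follows the familiar pair-based STDP rule; a minimal sketch (the column's actual implementation details are in the paper, and the parameter values here are arbitrary):

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: strengthen the synapse when the presynaptic spike
    precedes the postsynaptic one (causal pairing), weaken it otherwise."""
    dt = t_post - t_pre                           # spike-time difference (ms)
    if dt >= 0:
        return w + a_plus * math.exp(-dt / tau)   # potentiation
    return w - a_minus * math.exp(dt / tau)       # depression
```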
Using the general results on the classification of timelike supersymmetric solutions of all 4-dimensional N >1 supergravity theories, we show how to construct all the supersymmetric (single- and multi-) black-hole solutions of N=8 supergravity.
Stepwise inference protocols, such as scratchpads and chain-of-thought, help language models solve complex problems by decomposing them into a sequence of simpler subproblems. Despite the significant gain in performance achieved via these protocols, the underlying mechanisms of stepwise inference have remained elusive. To address this, we propose to study autoregressive Transformer models on a synthetic task that embodies the multi-step nature of problems where stepwise inference is generally most useful. Specifically, we define a graph navigation problem wherein a model is tasked with traversing a path from a start to a goal node on the graph. Despite its simplicity, we find we can empirically reproduce and analyze several phenomena observed at scale: (i) the stepwise inference reasoning gap, the cause of which we find in the structure of the training data; (ii) a diversity-accuracy tradeoff in model generations as sampling temperature varies; (iii) a simplicity bias in the model's output; and (iv) compositional generalization and a primacy bias with in-context exemplars. Overall, our work introduces a grounded, synthetic framework for studying stepwise inference and offers mechanistic hypotheses that can lay the foundation for a deeper understanding of this phenomenon.
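A hedged sketch of how such a synthetic task could be generated (the DAG construction and the token format are our own illustrative choices, not necessarily the paper's exact setup); a model trained on strings like these must emit the intermediate nodes one step at a time:

```python
import random

random.seed(0)

def random_dag(n_nodes=12, p_edge=0.3):
    # Edges only go from lower- to higher-indexed nodes, so the graph is acyclic.
    return {u: [v for v in range(u + 1, n_nodes) if random.random() < p_edge]
            for u in range(n_nodes)}

def sample_path(dag, max_len=6):
    # Random walk along DAG edges, yielding one (start, ..., goal) example.
    node = random.choice([u for u, vs in dag.items() if vs])
    path = [node]
    while dag.get(path[-1]) and len(path) < max_len:
        path.append(random.choice(dag[path[-1]]))
    return path

def to_training_string(path):
    # Stepwise format: the model must produce the intermediate nodes in order.
    start, goal = path[0], path[-1]
    steps = " ".join(str(v) for v in path[1:])
    return f"start {start} goal {goal} : {steps}"

dag = random_dag()
for _ in range(3):
    print(to_training_string(sample_path(dag)))
```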
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
We explore the scattering amplitudes of fluid quanta described by the Navier-Stokes equation and its non-Abelian generalization. These amplitudes exhibit universal infrared structures analogous to the Weinberg soft theorem and the Adler zero. Furthermore, they satisfy on-shell recursion relations which together with the three-point scattering amplitude furnish a pure S-matrix formulation of incompressible fluid mechanics. Remarkably, the amplitudes of the non-Abelian Navier-Stokes equation also exhibit color-kinematics duality as an off-shell symmetry, for which the associated kinematic algebra is literally the algebra of spatial diffeomorphisms. Applying the double copy prescription, we then arrive at a new theory of a tensor bi-fluid. Finally, we present monopole solutions of the non-Abelian and tensor Navier-Stokes equations and observe a classical double copy structure.
We examine, for representative gaugino-higgsino mixing scenarios, sneutrino-neutralino and sneutrino-chargino production in deep inelastic ep-scattering at a centre-of-mass energy of 1.8 TeV. The cross sections for sneutrino-chargino production are more than one order of magnitude larger than those for sneutrino-squark production. Also for zino-like neutralinos we find cross sections at least comparable to those for sneutrino-squark production.
In this paper, we present a generic and robust multimodal synthesis system that produces highly natural speech and facial expression simultaneously. The key component of this system is the Duration Informed Attention Network (DurIAN), an autoregressive model in which the alignments between the input text and the output acoustic features are inferred from a duration model. This is different from the end-to-end attention mechanism used in existing end-to-end speech synthesis systems such as Tacotron, and it avoids the various artifacts that are unavoidable in those systems. Furthermore, DurIAN can be used to generate high quality facial expression which can be synchronized with generated speech with/without parallel speech and face data. To improve the efficiency of speech generation, we also propose a multi-band parallel generation strategy on top of the WaveRNN model. The proposed Multi-band WaveRNN effectively reduces the total computational complexity from 9.8 to 5.5 GFLOPS, and is able to generate audio 6 times faster than real time on a single CPU core. We show that DurIAN can generate highly natural speech that is on par with current state-of-the-art end-to-end systems, while avoiding the word skipping/repeating errors of those systems. Finally, a simple yet effective approach for fine-grained control of expressiveness of speech and facial expression is introduced.
We study the tunneling dynamics of atomic pairs in Bose-Einstein condensates with Feshbach resonances. It is shown that the tunneling of the atomic pairs depends not only on the tunneling coupling between the atomic condensate and the molecular condensate, but also on the inter-atomic nonlinear interactions and the initial number of atoms in these condensates. It is found that in addition to an oscillating tunneling current between the atomic condensate and the molecular condensate, the nonlinear atomic-pair tunneling dynamics sustains a self-locked population imbalance: the macroscopic quantum self-trapping effect. The influence of decoherence induced by non-condensate atoms on the tunneling dynamics is investigated. It is shown that decoherence suppresses atomic-pair tunneling.
Magnetoacoustic oscillations are nowadays routinely observed in various regions of the solar corona. This allows them to be used as means of diagnosing plasma parameters and processes occurring in it. Plasma diagnostics, in turn, requires a sufficiently reliable MHD model to describe the wave evolution. In our paper, we focus on obtaining the exact analytical solution to the problem of the linear evolution of standing slow magnetoacoustic (MA) waves in coronal loops. Our consideration of the properties of slow waves is conducted using the infinite magnetic field assumption. The main contribution to the wave dynamics in this assumption comes from such processes as thermal conduction, unspecified coronal heating, and optically thin radiation cooling. In our consideration, the wave periods are assumed to be short enough so that the thermal misbalance has a weak effect on them. Thus, the main non-adiabatic process affecting the wave dynamics remains thermal conduction. The exact solution of the evolutionary equation is obtained using the Fourier method. This means that it is possible to trace the evolution of any harmonic of the initial perturbation, regardless of whether it belongs to the entropy or slow mode. We show that the fraction of energy between the entropy and slow modes is defined by the thermal conduction and coronal loop parameters. It is shown for which parameters of coronal loops it is reasonable to associate the full solution with a slow wave, and when it is necessary to take into account the entropy wave. Furthermore, we obtain the relationships for the phase shifts of various plasma parameters applicable to any values of the harmonic number and thermal conduction coefficient. In particular, it is shown that the phase shifts between density and temperature perturbations for the second harmonic of the slow wave vary from $\pi/2$ to 0, but are larger than for the fundamental harmonic.
We survey lower-bound results in complexity theory that have been obtained via newfound interconnections between propositional proof complexity, boolean circuit complexity, and query/communication complexity. We advocate for the theory of total search problems (TFNP) as a unifying language for these connections and discuss how this perspective suggests a whole programme for further research.
Recent advancements in integrating large language models (LLMs) with tools have allowed the models to interact with real-world environments. However, these tool-augmented LLMs often encounter incomplete scenarios when users provide partial information or the necessary tools are unavailable. Recognizing and managing such scenarios is crucial for LLMs to ensure their reliability, but this exploration remains understudied. This study examines whether LLMs can identify incomplete conditions and appropriately determine when to refrain from using tools. To this end, we construct a dataset by manipulating instances from two existing datasets, removing the necessary tools or essential information for tool invocation. We confirm that most LLMs struggle to identify the additional information required to utilize specific tools and to recognize the absence of appropriate tools. Our research can contribute to advancing reliable LLMs by addressing scenarios that commonly arise during interactions between humans and LLMs.
The security of models based on new architectures such as MLP-Mixer and ViTs needs to be studied urgently. However, most current research is aimed mainly at adversarial attacks against ViTs, and there is still relatively little adversarial work on MLP-Mixer. We propose an adversarial attack method against MLP-Mixer called Maxwell's demon Attack (MA). MA breaks the channel-mixing and token-mixing mechanisms of MLP-Mixer by controlling part of the input of each Mixer layer, and disturbs MLP-Mixer's extraction of the main information of images. Our method can mask part of the input of the Mixer layer, avoid overfitting of the adversarial examples to the source model, and improve cross-architecture transferability. Extensive experimental evaluation demonstrates the effectiveness and superior performance of the proposed MA. Our method can be easily combined with existing methods and can improve the transferability by up to 38.0% on the MLP-based ResMLP. Adversarial examples produced by our method on MLP-Mixer are able to exceed the transferability of adversarial examples produced using DenseNet against CNNs. To the best of our knowledge, ours is the first work to study the adversarial transferability of MLP-Mixer.
In many multirobot applications, planning trajectories in a way to guarantee that the collective behavior of the robots satisfies a certain high-level specification is crucial. Motivated by this problem, we introduce counting temporal logics---formal languages that enable concise expression of multirobot task specifications over possibly infinite horizons. We first introduce a general logic called counting linear temporal logic plus (cLTL+), and propose an optimization-based method that generates individual trajectories such that satisfaction of a given cLTL+ formula is guaranteed when these trajectories are synchronously executed. We then introduce a fragment of cLTL+, called counting linear temporal logic (cLTL), and show that a solution to the planning problem with cLTL constraints can be obtained more efficiently if all robots have identical dynamics. In the second part of the paper, we relax the synchrony assumption and discuss how to generate trajectories that can be asynchronously executed, while preserving the satisfaction of the desired cLTL+ specification. In particular, we show that when the asynchrony between robots is bounded, the method presented in this paper can be modified to generate robust trajectories. We demonstrate these ideas with an experiment and provide numerical results that showcase the scalability of the method.
Based on the exact relationship to Random Matrix Theory, we derive the probability distribution of the k-th smallest Dirac operator eigenvalue in the microscopic finite-volume scaling regime of QCD and related gauge theories.
Structures seen in idealized numerical experiments on compressible magnetoconvection in an imposed strong vertical magnetic field show important differences from those detected in observations or realistic numerical simulations of sunspot umbrae. To elucidate the origin of these discrepancies, we present a series of idealized 3D compressible magnetoconvection experiments that differ from previous such experiments in several details, bringing them closer to realistic solar conditions. An initially vertical magnetic field $B_0$ is imposed on a time snapshot of fully developed solar-like turbulent convection in a layer bounded by a stable layer from above. Upon relaxation to a statistically steady state, the structure of the flow field and magnetic field is examined. Instead of the vigorous granular convection (GRC) well known to take place in magnetized or weakly magnetized convection, for high values of $B_0$ heat is transported by small-scale convection (SSC) in the form of narrow, persistent convective columns consisting of slender upflows accompanied by adjacent downflow patches, which are reminiscent of the 'convectons' identified in earlier semianalytic models. For moderate field strengths, flux separation (FXS) is observed: isolated field-free inclusions of GRC are embedded in a strongly magnetized plasma with SSC. Between the SSC and FXS regimes, a transitional regime (F/S) is identified where convectons dynamically evolve into multiply segmented granular inclusions and back. Our results agree in some aspects more closely with observed umbral structures than earlier idealized models, because they do reproduce the strong localized, patchy downflows immediately adjacent to the narrow convective columns. Based on recent observations of umbral dots, we suggest that in some cases the conditions in sunspot umbrae correspond to the newly identified F/S transitional regime.
We consider composition orderings for linear functions of one variable. Given $n$ linear functions $f_1,\dots,f_n$ and a constant $c$, the objective is to find a permutation $\sigma$ that minimizes/maximizes $f_{\sigma(n)}\circ\dots\circ f_{\sigma(1)}(c)$. It was first studied in the area of time-dependent scheduling, and known to be solvable in $O(n\log n)$ time if all functions are nondecreasing. In this paper, we present a complete characterization of optimal composition orderings for this case, by regarding linear functions as two-dimensional vectors. We also show several interesting properties on optimal composition orderings such as the equivalence between local and global optimality. Furthermore, by using the characterization above, we provide a fixed-parameter tractable (FPT) algorithm for the composition ordering problem for general linear functions, with respect to the number of decreasing linear functions. We next deal with matrix multiplication orderings as a generalization of composition of linear functions. Given $n$ matrices $M_1,\dots,M_n\in\mathbb{R}^{m\times m}$ and two vectors $w,y\in\mathbb{R}^m$, where $m$ denotes a positive integer, the objective is to find a permutation $\sigma$ that minimizes/maximizes $w^\top M_{\sigma(n)}\dots M_{\sigma(1)} y$. The problem is also viewed as a generalization of flow shop scheduling through a limit. By this extension, we show that the multiplication ordering problem for $2\times 2$ matrices is solvable in $O(n\log n)$ time if all the matrices are simultaneously triangularizable and have nonnegative determinants, and FPT with respect to the number of matrices with negative determinants, if all the matrices are simultaneously triangularizable. As the negative side, we finally prove that three possible natural generalizations are NP-hard: 1) when $m=2$, 2) when $m\geq 3$, and 3) the target version of the problem.
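For intuition, a minimal sketch of the greedy ordering in the strictly increasing-slope case (our own illustration, covering only $a_i > 1$; slopes in $[0,1]$, negative slopes, and ties require the paper's full characterization):

```python
import itertools

def compose_value(order, c):
    # Apply innermost-first: order[0] is applied to c first.
    x = c
    for a, b in order:
        x = a * x + b
    return x

def min_composition_order(funcs, c):
    """Greedy ordering for f_i(x) = a_i*x + b_i with all a_i > 1: sorting by
    b/(a-1) (innermost first) follows from the adjacent-exchange identity
    h(g(x)) - g(h(x)) = b_g*(a_h - 1) - b_h*(a_g - 1)."""
    return sorted(funcs, key=lambda f: f[1] / (f[0] - 1.0))

funcs = [(2.0, 1.0), (3.0, -2.0), (1.5, 4.0)]
greedy = min_composition_order(funcs, c=0.0)
best = min(itertools.permutations(funcs), key=lambda p: compose_value(p, 0.0))
print(compose_value(greedy, 0.0), compose_value(best, 0.0))  # both -0.5
```

The brute-force check over all permutations confirms the greedy order on this toy instance; the paper's contribution is precisely the characterization of when such local exchange arguments yield global optima.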
This paper describes a massively parallel code for a state-of-the-art thermal lattice-Boltzmann method. Our code has been carefully optimized for performance on one GPU and to have a good scaling behavior extending to a large number of GPUs. Versions of this code have already been used for large-scale studies of convective turbulence. GPUs are becoming increasingly popular in HPC applications, as they are able to deliver higher performance than traditional processors. Writing efficient programs for large clusters is not an easy task, as codes must adapt to increasingly parallel architectures and the overheads of node-to-node communications must be properly handled. We describe the structure of our code, discussing several key design choices that were guided by theoretical models of performance and experimental benchmarks. We present an extensive set of performance measurements and identify the corresponding main bottlenecks; finally, we compare the results of our GPU code with those measured on other currently available high performance processors. Our results are a production-grade code able to deliver a sustained performance of several tens of Tflops as well as a design and optimization methodology that can be used for the development of other high performance applications for computational physics.
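For readers unfamiliar with the underlying method, here is a generic single-node D2Q9 lattice-Boltzmann step in Python (purely illustrative and far simpler than the thermal, multi-GPU code described above): collision relaxes the populations toward a local equilibrium, and streaming shifts them along the lattice velocities, the step that turns into node-to-node halo exchanges on a cluster.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
tau = 0.8                                   # BGK relaxation time (placeholder)

def equilibrium(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f):
    # Collision: relax toward local equilibrium (compute-bound, purely local).
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += -(f - equilibrium(rho, ux, uy)) / tau
    # Streaming: shift populations along their velocities; on a cluster the
    # periodic roll at the domain edge becomes a node-to-node halo exchange.
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    return f

f = equilibrium(np.ones((64, 64)), np.zeros((64, 64)), np.zeros((64, 64)))
for _ in range(100):
    f = lbm_step(f)
print("mass conserved:", np.isclose(f.sum(), 64 * 64))
```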
We study the variation of the dark matter mass fraction of elliptical galaxies as a function of their luminosity, stellar mass, and size using a sample of 29,469 elliptical galaxies culled from the Sloan Digital Sky Survey. We model ellipticals as a stellar Hernquist profile embedded in an adiabatically compressed dark matter halo. This model allows us to estimate a dynamical mass ($M_{dynm}$) at the half-light radius from the velocity dispersion of the spectra, and to compare these to the stellar mass estimates ($M_{*}$) from Kauffmann et al (2003). We find that $M_{*}/L$ is independent of luminosity, while $M_{dynm}/L$ increases with luminosity, implying that the dark matter fraction increases with luminosity. We also observe that at a fixed luminosity or stellar mass, the dark matter fraction increases with increasing galaxy size or, equivalently, increases with decreasing surface brightness: high surface brightness galaxies show almost no evidence for dark matter, while in low surface brightness galaxies, the dark matter exceeds the stellar mass at the half light radius. We relate this to the fundamental plane of elliptical galaxies, suggesting that the tilt of this plane from simple virial predictions is due to the dark matter in galaxies. We find that a simple model where galaxies are embedded in dark matter halos and have a star formation efficiency independent of their surface brightness explains these trends. We estimate the virial mass of ellipticals as being approximately 7-30 times their stellar mass, with the lower limit suggesting almost all of the gas within the virial radius is converted into stars.
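For orientation, the kind of estimate involved can be sketched as follows (the structure constant $k$ below is a generic order-unity factor; the paper instead fixes the mass profile self-consistently with the Hernquist-plus-halo model rather than assuming $k$): the dynamical mass within the half-light radius $R_e$ follows from the measured velocity dispersion $\sigma$ as
\[ M_{dynm}(R_e) \simeq k\,\frac{\sigma^2 R_e}{G}, \qquad k = \mathcal{O}(1), \]
and since roughly half of the stellar mass lies within $R_e$ for a constant stellar mass-to-light ratio, the dark matter fraction there is approximately $1 - M_{*}/2M_{dynm}(R_e)$.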
We study the dynamics of a transformation that acts on infinite paths in the graph associated with Pascal's triangle. For each ergodic invariant measure the asymptotic law of the return time to cylinders is given by a step function. We construct a representation of the system by a subshift on a two-symbol alphabet and then prove that the complexity function of this subshift is asymptotic to a cubic, the frequencies of occurrence of blocks behave in a regular manner, and the subshift is topologically weak mixing.
Based upon the mathematical formalism of lattice gauge theory and the differential calculus of non-commutative geometry, we develop an approach to generalized gauge theory on a product of the spacetime lattice and two discrete points (or a $Z_2$ discrete group). We introduce a differentiation for non-nearest-neighbour points and find that this differentiation may lead to the introduction of the Wilson term in the free fermion Lagrangian on the lattice. The Wilson-Yukawa chiral model on the lattice is constructed by the generalized gauge theory, and a toy model and the Smit-Swift model are studied.
We consider the dynamics of atomic and field coherent states in the non-resonant Dicke model. At weak coupling an initial product state evolves into a superposition of multiple field coherent states that are correlated with the atomic configuration. This process is accompanied by the buildup and decay of atom-field entanglement and leads to the periodic collapse and revival of Rabi oscillations. We provide a perturbative derivation of the underlying dynamical mechanism that complements the rotating wave approximation at resonance. The identification of two different time scales explains how the dynamical signatures depend on the sign of detuning between the atomic and field frequency, and predicts the generation of either atomic or field cat states in the two opposite cases. We finally discuss the restrictions that the buildup of atom-field entanglement during the collapse of Rabi oscillations imposes on the validity of semi-classical approximations that neglect entanglement.
We define direct sums and a corresponding notion of connectedness for graph limits. Every graph limit has a unique decomposition as a direct sum of connected components. As is well-known, graph limits may be represented by symmetric functions on a probability space; there are natural definitions of direct sums and connectedness for such functions, and there is a perfect correspondence with the corresponding properties of the graph limit. Similarly, every graph limit determines an infinite random graph, which is a.s. connected if and only if the graph limit is connected. There are also characterizations in terms of the asymptotic size of the largest component in the corresponding finite random graphs, and of minimal cuts in sequences of graphs converging to a given limit.
We improve the upper bound for the consistency strength of stationary reflection at successors of singular cardinals.
Recent work has shown that it is sometimes feasible to significantly reduce the energy usage of some radio-network algorithms by adaptively powering down the radio receiver when it is not needed. Although past work has focused on modifying specific network algorithms in this way, we now ask the question of whether this problem can be solved in a generic way, treating the algorithm as a kind of black box. We are able to answer this question in the affirmative, presenting a new general way to modify arbitrary radio-network algorithms in an attempt to save energy. At the expense of a small increase in the time complexity, we can provably reduce the energy usage to an extent that is provably nearly optimal within a certain class of general-purpose algorithms. As an application, we show that our algorithm reduces the energy cost of breadth-first search in radio networks from the previous best bound of $2^{O(\sqrt{\log n})}$ to $\mathrm{polylog}(n)$, where $n$ is the number of nodes in the network. A key ingredient in our algorithm is hierarchical clustering based on additive Voronoi decomposition done at multiple scales. Similar clustering algorithms have been used in other recent work on energy-aware computation in radio networks, but we believe the specific approach presented here may be of independent interest.
This paper concerns the $\mathbb{Z}_2$ classification of Fermionic Time-Reversal (FTR) symmetric partial differential Hamiltonians on the Euclidean plane. We consider the setting of two insulators separated by an interface. Hamiltonians that are invariant with respect to spatial translations along the interface are classified into two categories depending on whether they may or may not be gapped by continuous deformations. Introducing a related odd-symmetric Fredholm operator, we show that the classification is stable against FTR-symmetric perturbations. The property that non-trivial Hamiltonians cannot be gapped may be interpreted as a topological obstruction to Anderson localization: no matter how much (spatially compactly supported) perturbations are present in the system, a certain amount of transmission in both directions is guaranteed in the nontrivial phase. We present a scattering theory for such systems and show numerically that transmission is indeed guaranteed in the presence of FTR-symmetric perturbations while it no longer is for non-symmetric fluctuations.
The dynamics of inertial particles in Rayleigh-B\'{e}nard convection, where both particles and fluid exhibit thermal expansion, is studied using direct numerical simulations (DNS). We consider the effect of particles with a thermal expansion coefficient larger than that of the fluid, causing particles to become lighter than the fluid near the hot bottom plate and heavier than the fluid near the cold top plate. Because of the opposite directions of the net Archimedes' force on particles and fluid, particles deposited at the plate now experience a relative force towards the bulk. The characteristic time for this motion towards the bulk to happen, quantified as the time particles spend inside the thermal boundary layers (BLs) at the plates, is shown to depend on the thermal response time, $\tau_T$, and the thermal expansion coefficient of particles relative to that of the fluid, $K = \alpha_p / \alpha_f$. In particular, the residence time is constant for small thermal response times, $\tau_T \lesssim 1$, and increasing with $\tau_T$ for larger thermal response times, $\tau_T \gtrsim 1$. Also, the thermal BL residence time is increasing with decreasing $K$. A one-dimensional (1D) model is developed, where particles experience thermal inertia and their motion is purely dependent on the buoyancy force. Although the values do not match one-to-one, this highly simplified 1D model does predict a regime of a constant thermal BL residence time for smaller thermal response times and a regime of increasing residence time with $\tau_T$ for larger response times, thus explaining the trends in the DNS data well.
We study the diffusion of a Brownian probe particle of size $R$ in a dilute dispersion of active Brownian particles (ABPs) of size $a$, characteristic swim speed $U_0$, reorientation time $\tau_R$, and mechanical energy $k_s T_s = \zeta_a U_0^2 \tau_R /6$, where $\zeta_a$ is the Stokes drag coefficient of a swimmer. The probe has a thermal diffusivity $D_P = k_B T/\zeta_P$, where $k_B T$ is the thermal energy of the solvent and $\zeta_P$ is the Stokes drag coefficient for the probe. When the swimmers are inactive, collisions between the probe and the swimmers sterically hinder the probe's diffusive motion. In competition with this steric hindrance is an enhancement driven by the activity of the swimmers. The strength of swimming relative to thermal diffusion is set by $Pe_s = U_0 a /D_P$. The active contribution to the diffusivity scales as $Pe_s^2$ for weak swimming and $Pe_s$ for strong swimming, but the transition between these two regimes is nonmonotonic. When fluctuations in the probe motion decay on the time scale $\tau_R$, the active diffusivity scales as $k_s T_s /\zeta_P$: the probe moves as if it were immersed in a solvent with energy $k_s T_s$ rather than $k_B T$.
We consider the parabolic Anderson model $\partial u/\partial t = \kappa\Delta u + \gamma\xi u$ with $u\colon\,\mathbb{Z}^d\times\mathbb{R}^+\to\mathbb{R}^+$, where $\kappa\in\mathbb{R}^+$ is the diffusion constant, $\Delta$ is the discrete Laplacian, $\gamma\in\mathbb{R}^+$ is the coupling constant, and $\xi\colon\,\mathbb{Z}^d\times\mathbb{R}^+\to\{0,1\}$ is the voter model starting from the Bernoulli product measure $\nu_{\rho}$ with density $\rho\in (0,1)$. The solution of this equation describes the evolution of a "reactant" $u$ under the influence of a "catalyst" $\xi$. In G\"artner, den Hollander and Maillard 2010 the behavior of the \emph{annealed} Lyapunov exponents, i.e., the exponential growth rates of the successive moments of $u$ w.r.t.\ $\xi$, was investigated. It was shown that these exponents exhibit an interesting dependence on the dimension and on the diffusion constant. In the present paper we address some questions left open in G\"artner, den Hollander and Maillard 2010 by determining, in terms of strong transience of the Markov process underlying the voter model, when the Lyapunov exponents attain their a priori maximal value.
Electro-optic modulators provide a key function in optical transceivers and increasingly in photonic programmable Application Specific Integrated Circuits (ASICs) for machine learning and signal processing. However, both foundry-ready silicon-based modulators and devices based on conventional materials such as lithium niobate fall short of simultaneously providing high chip packaging density and fast speed. Current-driven ITO-based modulators have the potential to achieve both, enabled by efficient light-matter interactions. Here, we introduce micrometer-compact Mach-Zehnder interferometer (MZI) based modulators capable of exceeding 100 GHz switching rates. Integrating ITO thin films atop a photonic waveguide yields a spectrally broadband and compact MZI phase shifter. Remarkably, this allows integrating more than 3500 of these modulators within the same chip area as a single silicon MZI modulator. The modulator design introduced here features a holistic photonic, electronic, and RF-based optimization and includes an asymmetric MZI tuning step to optimize the extinction ratio (ER) to insertion loss (IL) ratio, and a dielectric thickness sweep to balance the tradeoffs between ER and speed. Driven by CMOS-compatible bias voltage levels, this device is the first to address next-generation modulator demands for processors of the machine intelligence revolution, in addition to the demands of edge and cloud computing and of optical transceivers alike.
In this paper, soft gluon radiation from partonic interactions of the type $2 \to 2$ + gluon is revisited, and a correction term to the widely used Gunion-Bertsch (GB) formula is obtained.
For a general dark-energy equation of state, we estimate the maximum possible radius of massive structures that are not destabilized by the acceleration of the cosmological expansion. A comparison with known stable structures constrains the equation of state. The robustness of the constraint can be enhanced through the accumulation of additional astrophysical data and a better understanding of the dynamics of bound cosmic structures.
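A hedged reconstruction of the standard estimate presumably at work here: a test particle at radius $R$ from a structure of mass $M$ feels, besides gravity, the acceleration sourced by the dark-energy density $\rho_{DE}$ and pressure $p = w\rho_{DE}$, so the structure can remain bound only out to the radius where the two contributions cancel,
\[ \ddot{R} = -\frac{GM}{R^2} - \frac{4\pi G}{3}\,(1+3w)\,\rho_{DE}\,R = 0 \quad\Longrightarrow\quad R_{max} = \left[\frac{3M}{4\pi\,|1+3w|\,\rho_{DE}}\right]^{1/3}, \qquad w < -\tfrac{1}{3}, \]
which for $w=-1$ reduces to the familiar turnaround radius $\left(3M/8\pi\rho_{\Lambda}\right)^{1/3}$. Requiring $R_{max}$ to exceed the radius of an observed bound structure then constrains $w$.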
Let $\dot{\mathbf{U}}(\widehat{\frak{sl}}_n)$ be the modified quantum affine $\frak{sl}_n$ and let ${\bf U}(\widehat{\frak{sl}}_N)^+$ be the positive part of quantum affine $\frak{sl}_N$. Let $\dot{\mathbf{B}}(n)$ be the canonical basis of $\dot{\mathbf{U}}(\widehat{\frak{sl}}_n)$ and let $\mathbf{B}(N)^{\mathrm{ap}}$ be the canonical basis of ${\bf U}(\widehat{\frak{sl}}_N)^+$. It is proved in \cite{FS} that each structure constant for the multiplication with respect to $\dot{\mathbf{B}}(n)$ coincides with a certain structure constant for the multiplication with respect to $\mathbf{B}(N)^{\mathrm{ap}}$ for $n<N$. In this paper we use the theory of affine quantum Schur algebras to prove that the structure constants for the comultiplication with respect to $\dot{\mathbf{B}}(n)$ are determined by the structure constants for the comultiplication with respect to $\mathbf{B}(N)^{\mathrm{ap}}$ for $n<N$. In particular, the positivity property for the comultiplication of $\dot{\mathbf{U}}(\widehat{\frak{sl}}_n)$ follows from the positivity property for the comultiplication of ${\bf U}(\widehat{\frak{sl}}_N)^+$.
We report on the observation of magnetoresistance oscillations in graphene p-n junctions. The oscillations have been observed for six samples, consisting of single-layer and bilayer graphene, and persist up to temperatures of 30 K, where standard Shubnikov-de Haas oscillations are no longer discernible. The oscillatory magnetoresistance can be reproduced by tight-binding simulations. We attribute this phenomenon to the modulated densities of states in the n- and p- regions.
This work numerically investigates the role of viscosity and resistivity in Rayleigh-Taylor instabilities in magnetized high-energy-density (HED) plasmas in the high-Atwood-number, high-plasma-beta regime, surveying across plasma beta and magnetic Prandtl numbers. The numerical simulations are performed using the visco-resistive magnetohydrodynamic (MHD) equations. Results presented here show that the inclusion of self-consistent viscosity and resistivity in the system drastically changes the growth of the Rayleigh-Taylor instability (RTI) as well as modifies its internal structure at smaller scales. It is seen here that the viscosity has a stabilizing effect on the RTI. Moreover, the viscosity inhibits the development of small-scale structures and also modifies the morphology of the tip of the RTI spikes. On the other hand, the resistivity reduces the magnetic field stabilization, supporting the development of small-scale structures. The morphology of the RTI spikes is seen to be unaffected by the presence of resistivity in the system. An additional novelty of this work is in the disparate viscosity and resistivity profiles that may exist in HED plasmas and their impact on RTI growth, morphology, and the resulting turbulence spectra. Furthermore, this work shows that the dynamics of the magnetic field is independent of viscosity and, likewise, the resistivity does not affect the dissipation of enstrophy and kinetic energy. In addition, power-law scalings of enstrophy, kinetic energy, and magnetic field energy are provided in both the injection range and the inertial sub-range, which could be useful for understanding RTI-induced turbulent mixing in HED laboratory and astrophysical plasmas and could aid in the interpretation of observations of RTI-induced turbulence spectra.
There have been several recent efforts towards developing representations for multivariate time-series in an unsupervised learning framework. Such representations can prove beneficial in tasks such as activity recognition, health monitoring, and anomaly detection. In this paper, we consider a setting where we observe time-series at each node in a dynamic graph. We propose a framework called GraphTNC for unsupervised learning of joint representations of the graph and the time-series. Our approach employs a contrastive learning strategy. Based on an assumption that the time-series and graph evolution dynamics are piecewise smooth, we identify local windows of time where the signals exhibit approximate stationarity. We then train an encoding that allows the distribution of signals within a neighborhood to be distinguished from the distribution of non-neighboring signals. We first demonstrate the performance of our proposed framework using synthetic data, and subsequently we show that it can prove beneficial for the classification task with real-world datasets.
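A minimal sketch of a TNC-style contrastive objective of the kind described above (the encoder, window lengths, and sampling scheme are placeholders of ours, not the GraphTNC architecture, which also encodes the evolving graph):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

D, W, H = 4, 16, 32            # channels, window length, embedding size
encoder = nn.Sequential(nn.Flatten(), nn.Linear(D * W, H), nn.ReLU(), nn.Linear(H, H))
disc = nn.Bilinear(H, H, 1)    # scores whether two windows are "neighbors"
opt = torch.optim.Adam(list(encoder.parameters()) + list(disc.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(D, 4096)       # stand-in for one node's multivariate time-series

def window(t):
    return x[:, t:t + W].unsqueeze(0)

for step in range(200):
    t = step % 1024
    # Anchor, a nearby positive (approximate local stationarity),
    # and a distant negative.
    a = encoder(window(t))
    p = encoder(window(t + W))
    n = encoder(window((t + 10 * W) % (x.shape[1] - W)))
    loss = bce(disc(a, p), torch.ones(1, 1)) + bce(disc(a, n), torch.zeros(1, 1))
    opt.zero_grad(); loss.backward(); opt.step()
print("final contrastive loss:", float(loss))
```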
The flare activity observed in GRBs soon after the prompt emission with the XRT (0.3-10 keV) instrument on board the Swift satellite is leading to important clues about the physical characteristics of the mechanism generating the emission of energy in Gamma Ray Bursts. We briefly review the results obtained in recent analyses and discuss the preliminary results we obtained with a new, larger sample of GRBs [limited to early flares] based on fitting the flares with the Norris 2005 profile. We find, in agreement with previous results, that XRT flares follow the main characteristics observed by Norris 2005 for the prompt emission spikes. The estimate of the flare energy for the subsample with redshift is rather robust, and an attempt is made, using the redshift sample, to estimate how the energy emitted in flares depends on time. We used a $H_0=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$, $\Omega_\Lambda=0.7$, $\Omega_m=0.3$ cosmology.
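For concreteness, the pulse shape of Norris et al. (2005) referred to above has, to the best of our knowledge, the form coded below; the synthetic light curve, noise level, and starting values are invented purely to illustrate the fitting procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

def norris2005(t, A, ts, tau1, tau2):
    # Norris et al. (2005) pulse: A*lam*exp(-tau1/(t-ts) - (t-ts)/tau2) for
    # t > ts and 0 otherwise, with lam = exp(2*sqrt(tau1/tau2)) so that the
    # peak, reached at ts + sqrt(tau1*tau2), equals A.
    lam = np.exp(2.0 * np.sqrt(tau1 / tau2))
    dt = t - ts
    out = np.zeros_like(t)
    pos = dt > 0
    out[pos] = A * lam * np.exp(-tau1 / dt[pos] - dt[pos] / tau2)
    return out

# Synthetic flare light curve (all numbers invented for illustration).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 400.0, 400)
truth = (1.0, 50.0, 40.0, 60.0)                        # A, ts, tau1, tau2
counts = norris2005(t, *truth) + 0.02 * rng.standard_normal(t.size)

popt, _ = curve_fit(norris2005, t, counts, p0=(0.5, 40.0, 20.0, 30.0),
                    bounds=([0.0, 0.0, 1e-3, 1e-3], [10.0, 200.0, 500.0, 500.0]))
print("fitted (A, ts, tau1, tau2):", np.round(popt, 1))
print("peak time ts + sqrt(tau1*tau2):", round(popt[1] + np.sqrt(popt[2] * popt[3]), 1))
```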
The finite element method, finite difference method, finite volume method and spectral method have achieved great success in solving partial differential equations. However, the high accuracy of traditional numerical methods comes at the cost of efficiency. In particular, for high-dimensional problems, traditional numerical methods are often infeasible because of the cost of subdividing high-dimensional meshes and the difficulty of differentiating and integrating high-order terms. In deep learning, a neural network can deal with high-dimensional problems by adding layers or expanding the number of neurons, which gives it a great advantage over traditional numerical methods. In this article, we consider the Deep Galerkin Method (DGM) for solving the general Stokes equations by using a deep neural network without generating a mesh grid. The DGM can reduce the computational complexity and achieve competitive results. Here, based on the L2 error, we construct the objective function to control the performance of the approximate solution. Then, we prove the convergence of the objective function and the convergence of the neural network to the exact solution. Finally, the effectiveness of the proposed framework is demonstrated through some numerical experiments.
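A minimal sketch of a DGM-type least-squares residual loss for the 2D stationary Stokes problem $-\Delta u + \nabla p = f$, $\nabla\cdot u = 0$ with no-slip boundary conditions (the network size, sampling, and forcing are placeholder choices of ours, not the paper's configuration):

```python
import torch

torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 3),              # outputs (u1, u2, p)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def grad(out, x):
    return torch.autograd.grad(out, x, torch.ones_like(out), create_graph=True)[0]

def pde_residual(x):
    x.requires_grad_(True)
    u1, u2, p = net(x).unbind(dim=1)
    g1, g2, gp = grad(u1, x), grad(u2, x), grad(p, x)
    lap1 = grad(g1[:, 0], x)[:, 0] + grad(g1[:, 1], x)[:, 1]
    lap2 = grad(g2[:, 0], x)[:, 0] + grad(g2[:, 1], x)[:, 1]
    f1 = f2 = 1.0                        # placeholder body force
    mom1 = -lap1 + gp[:, 0] - f1         # x-momentum residual
    mom2 = -lap2 + gp[:, 1] - f2         # y-momentum residual
    div = g1[:, 0] + g2[:, 1]            # incompressibility residual
    return (mom1**2 + mom2**2 + div**2).mean()

for step in range(1000):
    interior = torch.rand(256, 2)                    # collocation points in (0,1)^2
    bdry = torch.rand(64, 2)                         # snap one coordinate to a face
    axis = torch.randint(0, 2, (64,))
    bdry[torch.arange(64), axis] = bdry[torch.arange(64), axis].round()
    # L2-type loss: interior PDE residual plus a no-slip velocity penalty
    # (the pressure is only determined up to an additive constant).
    loss = pde_residual(interior) + (net(bdry)[:, :2] ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("final loss:", float(loss))
```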
We review five commonly used quad lens models, each of which has analytical solutions and can produce at most four images. Each lens model has two parameters: one describes the strength of the dimensionless mass density, and the other describes the deviation from the circular lens. In our recent work, we found that the cusp and fold summations are not equal to 0 when a point source infinitely approaches a cusp or a fold from the inner side of the caustic. Based on the magnification invariant theory, which states that the sum of signed magnifications of the total images of a given source is a constant, we calculate the cusp summations for the five lens models. We find that the cusp summations are always larger than 0 for sources on the major cusps, while they can be larger or smaller than 0 for sources on the minor cusps. We also find that as these lenses tend to the circular lens, the major and minor cusp summations take infinite values, with positive and negative signs respectively. The cusp summations do not change significantly if the sources deviate slightly from the cusps. In addition, through the magnification invariants, we derive the analytical signed cusp relations on the axes for three lens models. We find that on both the major and minor axes, the more a lens deviates from the circular lens, the larger the signed cusp relations. The major cusp relations are usually larger than the absolute minor cusp relations, but for some lens models with very large deviations from the circular lens, the minor cusp relations can be larger than the major cusp relations.
Millions of news articles published online daily can overwhelm readers. Headlines and entity (topic) tags are essential for guiding readers to decide if the content is worth their time. While headline generation has been extensively studied, tag generation remains largely unexplored, yet it offers readers better access to topics of interest. The need for conciseness in capturing readers' attention necessitates improved content selection strategies for identifying salient and relevant segments within lengthy articles, thereby guiding language models effectively. To address this, we propose to leverage auxiliary information such as images and captions embedded in the articles to retrieve relevant sentences and utilize instruction tuning with variations to generate both headlines and tags for news articles in a multilingual context. To make use of the auxiliary information, we have compiled a dataset named XL-HeadTags, which includes 20 languages across 6 diverse language families. Through extensive evaluation, we demonstrate the effectiveness of our plug-and-play multimodal-multilingual retrievers for both tasks. Additionally, we have developed a suite of tools for processing and evaluating multilingual texts, significantly contributing to the research community by enabling more accurate and efficient analysis across languages.
Motivated by some questions in Euclidean Ramsey theory, our aim in this note is to show that there exists a cyclic quadrilateral that does not embed into any transitive set (in any dimension). We show that in fact this holds for almost all cyclic quadrilaterals, and we also give explicit examples of such cyclic quadrilaterals. These are the first explicit examples of spherical sets that do not embed into transitive sets.
The present paper is a comment regarding the robustness of chirality in the presence of space-reflection asymmetry, which leads to pairs of interleaved positive- and negative-parity bands. We comment on the recent results reported in Ref. \cite{2006.12062}, which introduced the $chiture$ and $chiplex$ quantum numbers to describe an ideal nuclear system with simultaneous chiral and reflection symmetry breaking.
In many moduli stabilization schemes in string theory, the scale of inflation appears to be of the same order as the scale of supersymmetry breaking. For low-scale supersymmetry breaking, therefore, the scale of inflation should also be low, unless this correlation is avoided in specific models. We explore such a low-scale inflationary scenario in a racetrack model with a single modulus in type IIB string theory. Inflation occurs near a point of inflection in the K\"ahler modulus potential. Obtaining acceptable cosmological density perturbations leads to the introduction of magnetized D7-branes sourcing non-perturbative superpotentials. The gravitino mass, m_{3/2}, is chosen to be around 30 TeV, so that gravitinos that are produced in the inflaton decay do not affect big-bang nucleosynthesis. Supersymmetry is communicated to the visible sector by a mixture of anomaly and modulus mediation. We find that the two sources contribute equally to the gaugino masses, while scalar masses are decided mainly by anomaly contribution. This happens as a result of the low scale of inflation and can be probed at the LHC.
Upstream reciprocity (also called generalized reciprocity) is a putative mechanism for cooperation in social dilemma situations with which players help others when they are helped by somebody else. It is a type of indirect reciprocity. Although upstream reciprocity is often observed in experiments, most theories suggest that it is operative only when players form short cycles such as triangles, implying a small population size, or when it is combined with other mechanisms that promote cooperation on their own. An expectation is that real social networks, which are known to be full of triangles and other short cycles, may accommodate upstream reciprocity. In this study, I extend the upstream reciprocity game proposed for a directed cycle by Boyd and Richerson to the case of general networks. The model is not evolutionary and concerns the conditions under which the unanimity of cooperative players is a Nash equilibrium. I show that an abundance of triangles or other short cycles in a network does little to promote upstream reciprocity. Cooperation is less likely for a larger population size even if triangles are abundant in the network. In addition, in contrast to the results for evolutionary social dilemma games on networks, scale-free networks lead to less cooperation than networks with a homogeneous degree distribution.
Polarized Raman spectra of the epitaxial Ba0.5Sr0.5TiO3 film, bi-color BaTiO3/Ba0.5Sr0.5TiO3 superlattice, and tri-color BaTiO3/Ba0.5Sr0.5TiO3/SrTiO3 superlattice were studied in a broad temperature range of 80-700 K. Based on the temperature dependence of the polar modes we determined the phase transitions temperatures in the studied heterostructures. In the sub-THz frequency range of the Y(XZ)Y spectra, we revealed the coexistence of the Debye-type central peak and soft mode in bi-color BaTiO3/Ba0.5Sr0.5TiO3 superlattice.
The damping of spin waves parametrically excited in the magnetic insulator Yttrium Iron Garnet (YIG) is controlled by a dc current passed through an adjacent normal-metal film. The experiment is performed on a macroscopically sized YIG(100nm)/Pt(10nm) bilayer of 4x2 mm^2 lateral dimensions. The spin-wave relaxation frequency is determined via the threshold of the parametric instability measured by Brillouin light scattering (BLS) spectroscopy. The application of a dc current to the Pt film leads to the formation of a spin-polarized electron current normal to the film plane due to the spin Hall effect (SHE). This spin current exerts a spin transfer torque (STT) in the YIG film and, thus, changes the spin-wave damping. Depending on the polarity of the applied dc current with respect to the magnetization direction, the damping can be increased or decreased. The magnitude of its variation is proportional to the applied current. A variation in the relaxation frequency of +/-7.5% is achieved for an applied dc current density of 5*10^10 A/m^2.
The paper is devoted to the study of two-parametric families of Dirichlet problems for systems of equations with $p,q$-Laplacians and indefinite nonlinearities. Continuous and monotone curves $\Gamma_f$ and $\Gamma_e$ on the parametric plane $\lambda \times \mu$, which are the lower and upper bounds for a maximal domain of existence of weak positive solutions, are introduced. The curve $\Gamma_f$ is obtained by developing our previous work \cite{BobkovIlyasov}, and it determines a maximal domain of applicability of the Nehari manifold and fibering methods. The curve $\Gamma_e$ is derived explicitly via a minimax variational principle of the extended functional method.