Continual Learning (CL) aims to learn a sequence of problems (i.e., tasks and domains) by transferring knowledge acquired on previous problems, whilst avoiding forgetting past ones. Unlike previous approaches, which focus on CL for one NLP task or domain in a specific use case, in this paper we address a more general CL setting, learning from a sequence of problems within a single framework. Our method, HOP, hops across tasks and domains by addressing the CL problem along three directions: (i) we employ a set of adapters to generalize a large pre-trained model to unseen problems, (ii) we compute high-order moments over the distribution of embedded representations to distinguish independent and correlated statistics across different tasks and domains, and (iii) we process this enriched information with auxiliary heads specialized for each end problem. An extensive experimental campaign on 4 NLP applications, 5 benchmarks and 2 CL setups demonstrates the effectiveness of HOP.
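A hedged sketch of direction (ii) (assumed tensor shapes and names, not the authors' released code): high-order moment pooling concatenates the per-dimension mean, variance, skewness, and kurtosis of token embeddings into one enriched representation.

```python
import torch

def moment_pooling(h: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """h: (batch, seq_len, dim) token embeddings -> (batch, 4*dim) features."""
    mu = h.mean(dim=1)                      # first moment
    var = h.var(dim=1, unbiased=False)      # second central moment
    std = (var + eps).sqrt()
    z = (h - mu.unsqueeze(1)) / std.unsqueeze(1)
    skew = z.pow(3).mean(dim=1)             # third standardized moment
    kurt = z.pow(4).mean(dim=1)             # fourth standardized moment
    return torch.cat([mu, var, skew, kurt], dim=-1)

feats = moment_pooling(torch.randn(8, 128, 768))
print(feats.shape)  # torch.Size([8, 3072])
```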
This paper complements our recent works on the semilinear Tricomi equations in [8] and [9].
We propose a new method to extract the light quark mass ratio $m_u/m_d$ using the $\Upsilon(4S)\to h_b\pi^0(\eta)$ bottomonia transitions. The decay amplitudes are dominated by the light quark mass differences, and the corrections from other effects are rather small, allowing for a precise extraction. We also discuss how to reduce the theoretical uncertainty with the help of future experiments. As a by-product, we show that the decay $\Upsilon(4S)\to h_b\eta$ is expected to be a nice channel for searching for the $h_b$ state.
Lectures on Quantum Coulomb gases delivered at the CIME summer school on Quantum Many Body Systems 2010
We present a new conceptual definition of 'productivity' for sustainably developing research software. Existing definitions are flawed as they are short-term biased, thus devaluing long-term impact, which we consider to be the principal goal. Taking a long-term view of productivity helps fix that problem. We view the outputs of the development process as knowledge and user satisfaction. User satisfaction is used as a proxy for effective quality. The explicit emphasis on all knowledge produced, rather than just the operationalizable knowledge (code) implies that human-reusable knowledge, i.e. documentation, should also be greatly valued when producing research software.
A cell method is developed which takes into account the bubble geometry of polyhedral foams and yields a generalized Rayleigh-Plesset equation containing a non-local-in-time term corresponding to heat relaxation. The Rayleigh-Plesset equation, together with the equations of mass and momentum balance for an effective single-phase inviscid fluid, yields a model for foam acoustics. The present calculations reconcile observed sound velocity and attenuation with those predicted under the assumption that thermal dissipation is the dominant damping mechanism, over a range of foam expansions and sound excitation frequencies.
We present a second-order monolithic method for solving incompressible Navier--Stokes equations on irregular domains with quadtree grids. A semi-collocated grid layout is adopted, where velocity variables are located at cell vertices, and pressure variables are located at cell centers. Compact finite difference methods with ghost values are used to discretize the advection and diffusion terms of the velocity. A pressure gradient and divergence operator on the quadtree that use compact stencils are developed. Furthermore, the proposed method is extended to cubical domains with octree grids. Numerical results demonstrate that the method is second-order convergent in $L^\infty$ norms and can handle irregular domains for various Reynolds numbers.
We introduce the Nuclear Electronic All-Particle Density Matrix Renormalization Group (NEAP-DMRG) method for solving the time-independent Schr\"odinger equation simultaneously for electrons and other quantum species. In contrast to existing multicomponent approaches, in this work we construct from the outset a multi-reference trial wave function with stochastically optimized non-orthogonal Gaussian orbitals. By iteratively refining the Gaussians' positions and widths, we obtain a compact multi-reference expansion for the multicomponent wave function. We extend the DMRG algorithm to multicomponent wave functions to take into account inter- and intra-species correlation effects. The efficient parametrization of the total wave function as a matrix product state allows NEAP-DMRG to accurately approximate full configuration interaction energies of molecular systems with more than three nuclei and twelve particles in total, which is currently a major challenge for other multicomponent approaches. We present NEAP-DMRG results for two few-body systems, i.e., H$_2$ and H$_3^+$, and one larger system, namely BH$_3$.
In this note, we study the asymptotic zero distribution of full systems of random multivariable polynomials with independent Bernoulli coefficients. We prove that, with overwhelming probability, their simultaneous zero sets are discrete and the associated normalized empirical measures of zeros are asymptotic to the Haar measure on the unit torus.
We show that for all positive beta the semigroups of beta-Dyson Brownian motions of different dimensions are intertwined. The proof relates beta-Dyson Brownian motions directly to Jack symmetric polynomials, omitting the approximation of the former by discrete-space Markov chains and thereby disposing of the technical assumption beta>1 in [GS]. The corresponding results for beta-Dyson Ornstein-Uhlenbeck processes are also presented.
Nonparametric Bayesian approaches based on Gaussian processes have recently become popular in the empirical learning community. They encompass many classical methods of statistics, like Radial Basis Functions or various splines, and are technically convenient because Gaussian integrals can be calculated analytically. Restricting to Gaussian processes, however, forbids, for example, the implementation of genuinely nonconcave priors. Mixtures of Gaussian process priors, on the other hand, allow the flexible implementation of complex and situation-specific, also nonconcave, "a priori" information. This is essential for tasks with a small number of available training data relative to their complexity. The paper concentrates on the formalism for Gaussian regression problems, where prior mixture models provide a generalisation of classical quadratic, typically smoothness-related, regularisation approaches that is more flexible without having a much larger computational complexity.
This paper analyses the application of artificial intelligence techniques to various areas of archaeology and more specifically: a) The use of software tools as a creative stimulus for the organization of exhibitions; the use of humanoid robots and holographic displays as guides that interact and involve museum visitors; b) The analysis of methods for the classification of fragments found in archaeological excavations and for the reconstruction of ceramics, with the recomposition of the parts of text missing from historical documents and epigraphs; c) The cataloguing and study of human remains to understand the social and historical context of belonging with the demonstration of the effectiveness of the AI techniques used; d) The detection of particularly difficult terrestrial archaeological sites with the analysis of the architectures of the Artificial Neural Networks most suitable for solving the problems presented by the site; the design of a study for the exploration of marine archaeological sites, located at depths that cannot be reached by man, through the construction of a freely explorable 3D version.
We study how to generate in minimum time special unitary transformations for a two-level quantum system under the assumptions that: (i) the system is subject to a constant drift, (ii) its dynamics can be affected by three independent, bounded controls, (iii) the bounds on the controls are asymmetric, that is, the constraint on the control in the direction of the drift is independent of that on the controls in the orthogonal plane. Using techniques recently developed for the analysis of SU(2) transformations, we fully characterize the reachable sets of the system, and the optimal control strategies for any possible target transformation.
Recent constraint logic programming (CLP) languages, such as HAL and Mercury, require type, mode and determinism declarations for predicates. This information allows the generation of efficient target code and the detection of many errors at compile-time. Unfortunately, mode checking in such languages is difficult. One of the main reasons is that, for each predicate mode declaration, the compiler is required to appropriately re-order literals in the predicate's definition. The task is further complicated by the need to handle complex instantiations (which interact with type declarations and higher-order predicates) and automatic initialization of solver variables. Here we define mode checking for strongly typed CLP languages which require reordering of clause body literals. In addition, we show how to handle a simple case of polymorphic modes by using the corresponding polymorphic types.
We demonstrate a silicon-chip biphoton source with an unprecedented quantum cross correlation up to ${\rm g_{si}^{(2)}(0) = (2.58 \pm 0.16) \times 10^4}$. The emitted biphotons are intrinsically single-mode, with self correlations of ${\rm g_{ss}^{(2)}(0) = 1.90 \pm 0.05}$ and ${\rm g_{ii}^{(2)}(0) = 1.87 \pm 0.06}$ for signal and idler photons, respectively. We observe the waveform asymmetry of the cross correlation between signal and idler photons and reveal the identical and non-exponential nature of the self correlations of the individual signal and idler photon modes, which is characteristic of cavity-enhanced nonlinear optical processes. The high efficiency and high purity of the biphoton source allow us to herald single photons with a conditional self correlation $\rm g_{c}^{(2)}(0)$ as low as $\rm 0.0059 \pm 0.0014$ at a pair flux of $\rm 1.95 \times 10^5$ pairs/s, which remains below $\rm 0.026 \pm 0.001$ for a biphoton flux up to $\rm 2.93 \times 10^6$ pairs/s, with a photon preparation efficiency in the single-mode fiber of up to 51%, among the best values ever reported. Our work unambiguously demonstrates that silicon photonic chips are superior material and device platforms for integrated quantum photonics.
Population III (Pop III) stars ended the cosmic Dark Ages and began early cosmological reionization and chemical enrichment. However, in spite of their importance to the evolution of the early Universe, their properties remain uncertain because of limitations to previous numerical simulations and the lack of any observational constraints. Here we investigate Pop III star formation in five primordial halos with 3D radiation-hydrodynamical cosmological simulations. We find that multiple stars form in each minihalo and that their numbers increase over time, with up to 23 stars forming in one of the halos. Radiative feedback from the stars generates strong outflows, deforms the surrounding protostellar disk, and delays star formation for a few thousand years. Star formation rates vary with halo and depend on mass accretion onto the disk, halo spin number, and the fraction of massive stars in the halo. Stellar masses in our models range from 0.1-37 $\rm M_{\odot}$, and of the 55 stars that form in our models twelve are $\rm > 10~ M_{\odot}$ and most of the others are 1-10 $\rm M_{\odot}$. Our simulations thus suggest that Pop III stars have characteristic masses of 1-10 $\rm M_{\odot}$ and a top-heavy IMF with dN/dM $\propto M_*^{-1.18}$. Up to 70\% of the stars are ejected from their disks by three-body interactions which, along with ionizing UV feedback, limits their final masses.
Several approaches to testing the hypothesis that two histograms are drawn from the same distribution are investigated. We note that single-sample continuous distribution tests may be adapted to this two-sample grouped data situation. The difficulty of not having a fully-specified null hypothesis is an important consideration in the general case, and care is required in estimating probabilities with ``toy'' Monte Carlo simulations. The performance of several common tests is compared; no single test performs best in all situations.
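A hedged illustration of the two-sample grouped-data setting (toy data, not the paper's study): treating the pair of histograms as a 2 x nbins contingency table gives a chi-square test whose expected counts are estimated from the pooled sample, precisely because the null hypothesis is not fully specified.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
edges = np.linspace(-4.0, 4.0, 9)                       # 8 bins
h1, _ = np.histogram(rng.normal(0.0, 1.0, 5000), edges)
h2, _ = np.histogram(rng.normal(0.1, 1.0, 4000), edges)

# Expected counts under the null come from the pooled row/column totals;
# sparse bins should be merged, or probabilities estimated with toy MC.
chi2, p, dof, _ = chi2_contingency(np.vstack([h1, h2]))
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```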
Light field deconvolution allows three-dimensional investigations from a single snapshot recording of a plenoptic camera. It is based on a linear image formation model, and iterative volume reconstruction requires defining the backprojection of individual image pixels into object space. This is effectively a reversal of the point spread function (PSF), and backprojection arrays H' can be derived from the shift-variant PSFs H of the optical system, which is a very time-consuming step for high-resolution cameras. This paper illustrates the common structure of backprojection arrays and the significance of their efficient computation. A new algorithm is presented to determine H' from H, based on the distinct relation between the elements' positions within the two multi-dimensional arrays. It permits a pure array re-arrangement, and while the results are identical to those from published codes, computation times are drastically reduced. This is shown by benchmarking the new method with various sample PSF arrays against existing algorithms. The paper is complemented by practical hints for the experimental acquisition of light field PSFs in a photographic setup.
The paper is concerned with estimating the number of integers smaller than $x$ whose largest prime divisor is smaller than $y$, denoted $\psi (x,y)$. Much of the related literature is concerned with approximating $\psi (x,y)$ by Dickman's function $\rho (u)$, where $u=\ln x/\ln y$. A typical such result is that $$ \psi (x,y)=x\rho (u)(1+o(1)) \eqno (1) $$ in a certain domain of the parameters $x$ and $y$. In this paper a different type of approximation of $\psi (x,y)$, using iterated logarithms of $x$ and $y$, is presented. We establish that $$ \ln (\frac {\psi}{x})=-u [\ln ^{(2)}x-\ln ^{(2)}y+\ln ^{(3)}x-\ln ^{(3)}y+\ln ^{(4)}x-a] \eqno (2) $$ where $\underbar{a}<a<\bar{a}$ for some constants $\underbar{a}$ and $\bar{a}$ (denoting by $\ln ^{(k)}x=\ln ...\ln x$ the $k$-fold iterated logarithm). The approximation (2) holds in a domain which is complementary to the one on which the approximation (1) is known to be valid. One consequence of (2) is an asymptotic expression for Dickman's function, which is of the form $\ln \rho (u)=-u[\ln u+\ln ^{(2)}u](1+o(1))$, improving known asymptotic approximations of this type. We employ (2) to establish a version of Bertrand's Conjecture, and indicate how this method may be used to sharpen the result.
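For concreteness, the counting function itself is easy to tabulate for small parameters; this brute-force sketch (toy values, illustration only) counts the $y$-smooth integers up to $x$, i.e. those whose largest prime divisor is at most $y$.

```python
def largest_prime_factor(n: int) -> int:
    p, largest = 2, 1
    while p * p <= n:
        while n % p == 0:
            largest, n = p, n // p
        p += 1
    return n if n > 1 else largest        # leftover n, if > 1, is the largest prime factor

def psi(x: int, y: int) -> int:
    # counts n = 1..x with largest prime factor <= y (1 counts as smooth)
    return sum(1 for n in range(1, x + 1) if largest_prime_factor(n) <= y)

print(psi(10_000, 20))   # number of 20-smooth integers up to 10^4
```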
We propose an orbifold lattice formulation of QCD suitable for quantum simulations. We show explicitly how to encode gauge degrees of freedom into qubits using noncompact variables, and how to write down a simple truncated Hamiltonian in the coordinate basis. We show that SU(3) gauge group variables and quarks in the fundamental representation can be implemented straightforwardly on qubits, for arbitrary truncation of the gauge manifold.
In "Quartic Coincidences and the Singular Value Decomposition" by Clifford and Lachance, Mathematics Magazine, December, 2013, it was shown that if there is a midpoint ellipse(an ellipse inscribed in a quadrilateral, $Q$, which is tangent at the midpoints of all four sides of $Q$), then $Q$ must be a parallelogram. We strengthen this result by showing that if $Q$ is not a parallelogram, then there is no ellipse inscribed in $Q$ which is tangent at the midpoint of three sides of $Q$. Second, the only quadrilaterals which have inscribed ellipses tangent at the midpoint of even two sides of $Q$ are trapezoids or what we call a midpoint diagonal quadrilateral(the intersection point of the diagonals of $Q$ coincides with the midpoint of at least one of the diagonals of $Q$).
Oscillations are ubiquitous in sunspots and the associated higher atmospheres. However, it is still unclear whether these oscillations are driven by the external acoustic waves (p-modes) or generated by the internal magnetoconvection. To obtain clues about the driving source of umbral waves in sunspots, we analyzed the spiral wave patterns (SWPs) in two sunspots registered by IRIS MgII 2796 {\AA} slit-jaw images. By tracking the motion of the SWPs, we find for the first time that two one-armed SWPs coexist in the umbra, and they can rotate either in the same or opposite directions. Furthermore, by analyzing the spatial distribution of the oscillation centers of the one-armed SWPs within the umbra (the oscillation center is defined as the location where the SWP first appears), we find that the chromospheric umbral waves repeatedly originate from the regions with high oscillation power and most of the umbral waves occur in the dark nuclei and strong magnetic field regions of the umbra. Our study results indicate that the chromospheric umbral waves are likely excited by the p-mode oscillations.
Let $X$ be a smooth complex projective algebraic variety. Given a line bundle $\mathcal{L}$ over $X$ and an integer $r>1$ one defines the stack $\sqrt[r]{\mathcal{L}/X}$ of $r$-th roots of $\mathcal{L}$. Motivated by Gromov-Witten theoretic questions, in this paper we analyze the structure of moduli stacks of genus $0$ twisted stable maps to $\sqrt[r]{\mathcal{L}/X}$. Our main results are explicit constructions of moduli stacks of genus $0$ twisted stable maps to $\sqrt[r]{\mathcal{L}/X}$ starting from moduli stack of genus $0$ stable maps to $X$. As a consequence, we prove an exact formula expressing genus $0$ Gromov-Witten invariants of $\sqrt[r]{\mathcal{L}/X}$ in terms of those of $X$.
We study the optimal control problem of maximizing the spread of an information epidemic on a social network. Information propagation is modeled as a Susceptible-Infected (SI) process and the campaign budget is fixed. Direct recruitment and word-of-mouth incentives are the two strategies (controls) used to accelerate information spreading. We allow for multiple controls depending on the degree of the nodes/individuals. The solution optimally allocates the scarce resource over the campaign duration and the degree class groups. We study the impact of the degree distribution of the network on the controls and present results for Erdos-Renyi and scale free networks. Results show that more resource is allocated to high degree nodes in the case of scale free networks but to medium degree nodes in the case of Erdos-Renyi networks. We study the effects of various model parameters on the optimal strategy and quantify the improvement offered by the optimal strategy over the static and bang-bang control strategies. The effect of a time varying spreading rate on the controls is explored, as the interest level of the population in the subject of the campaign may change over time. We show the existence of a solution to the formulated optimal control problem, which has non-linear isoperimetric constraints, using a novel technique that is general and can be used in other similar optimal control problems. This work may be of interest to political, social awareness, or crowdfunding campaigners and product marketing managers, and with some modifications may be used for mitigating biological epidemics.
With the increase in demand for electrical energy storage devices such as batteries and supercapacitors, considerable effort is being put into increasing the efficiency and applications of current technology while keeping it sustainable. With this in mind, we have pursued the preparation and characterization of waste-biomass-derived carbon powders as supercapacitor/battery electrodes. Additionally, we have evaluated the performance of such carbons in the presence of an external magnetic field, as we expect the graphene-like structures to possess an intrinsic magnetic nature. Here, we report the valorization of bougainvillea flower petals and detritus into graphenic carbon and explore a novel electrical double-layer supercapacitor device that uses Zn2+ ions for internal charge transport and shows increased performance upon application of an external magnetic field.
As deep learning models are becoming larger and data-hungrier, there are growing ethical, legal and technical concerns over use of data: in practice, agreements on data use may change over time, rendering previously-used training data impermissible for training purposes. These issues have driven increased attention to machine unlearning: removing "the influence of" a subset of training data from a trained model. In this work, we advocate for a relaxed definition of unlearning that does not address privacy applications but targets a scenario where a data owner withdraws permission of use of their data for training purposes. In this context, we consider the important problem of \emph{transfer unlearning} where a pretrained model is transferred to a target dataset that contains some "non-static" data that may need to be unlearned in the future. We propose a new method that uses a mechanism for selecting relevant examples from an auxiliary "static" dataset, and finetunes on the selected data instead of "non-static" target data; addressing all unlearning requests ahead of time. We also adapt a recent relaxed definition of unlearning to our problem setting and demonstrate that our approach is an exact transfer unlearner according to it, while being highly efficient (amortized). We find that our method outperforms the gold standard "exact unlearning" (finetuning on only the "static" portion of the target dataset) on several datasets, especially for small "static" sets, sometimes approaching an upper bound for test accuracy. We also analyze factors influencing the accuracy boost obtained by data selection.
In stochastic low-rank matrix bandit, the expected reward of an arm is equal to the inner product between its feature matrix and some unknown $d_1$ by $d_2$ low-rank parameter matrix $\Theta^*$ with rank $r \ll d_1\wedge d_2$. While all prior studies assume the payoffs are mixed with sub-Gaussian noises, in this work we loosen this strict assumption and consider the new problem of \underline{low}-rank matrix bandit with \underline{h}eavy-\underline{t}ailed \underline{r}ewards (LowHTR), where the rewards only have finite $(1+\delta)$ moment for some $\delta \in (0,1]$. By utilizing the truncation on observed payoffs and the dynamic exploration, we propose a novel algorithm called LOTUS attaining the regret bound of order $\tilde O(d^\frac{3}{2}r^\frac{1}{2}T^\frac{1}{1+\delta}/\tilde{D}_{rr})$ without knowing $T$, which matches the state-of-the-art regret bound under sub-Gaussian noises~\citep{lu2021low,kang2022efficient} with $\delta = 1$. Moreover, we establish a lower bound of the order $\Omega(d^\frac{\delta}{1+\delta} r^\frac{\delta}{1+\delta} T^\frac{1}{1+\delta}) = \Omega(T^\frac{1}{1+\delta})$ for LowHTR, which indicates our LOTUS is nearly optimal in the order of $T$. In addition, we improve LOTUS so that it does not require knowledge of the rank $r$ with $\tilde O(dr^\frac{3}{2}T^\frac{1+\delta}{1+2\delta})$ regret bound, and it is efficient under the high-dimensional scenario. We also conduct simulations to demonstrate the practical superiority of our algorithm.
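The truncation device can be isolated in a few lines (a simplified, assumed form of the estimator; the paper's LOTUS algorithm combines it with dynamic exploration over the low-rank structure). Rewards are clipped at a threshold growing with the round index, so the truncated mean concentrates even when only a $(1+\delta)$-th moment is finite.

```python
import numpy as np

def truncated_mean(rewards: np.ndarray, delta: float, u: float) -> float:
    """Clip |r_t| at b_t ~ (u * t / log(t+1))^(1/(1+delta)), then average.

    u is an assumed bound on the (1+delta)-th raw moment of the rewards."""
    t = np.arange(1, len(rewards) + 1)
    b = (u * t / np.log(t + 1.0)) ** (1.0 / (1.0 + delta))
    return np.where(np.abs(rewards) <= b, rewards, 0.0).mean()

rng = np.random.default_rng(1)
heavy = rng.standard_t(df=1.8, size=100_000)    # infinite variance, finite (1+delta) moment
print(truncated_mean(heavy, delta=0.5, u=2.0), heavy.mean())
```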
The spin-split Fermi level crossings of the conduction band in Ni are mapped out by high-resolution photoemission and compared to the equivalent crossing in Cu. The area of the quasiparticle peak decreases rapidly below $E_F$ in Ni, but not in Cu. Majority spins have larger spectral weight at $E_F$ than minority spins, thereby enhancing the spin-polarization beyond that expected from the density of states. A large part of the effect can be traced to a rapid variation of the matrix element with {\bf k} at the point where the s,p-band begins to hybridize with the $d_{z^2}$ state. However, it is quite possible that the intensity drop in Ni is reinforced by a transfer of spectral weight from single-particle to many-electron excitations. The results suggest that the matrix element should be considered for explaining the enhanced spin polarization observed for Ni in spin-polarized tunneling.
We consider a dynamic scenario for characterizing the late Universe evolution, aiming to mitigate the Hubble tension. Specifically, we consider a metric $f(R)$ gravity in the Jordan frame which is implemented to the dynamics of a flat isotropic Universe. This cosmological model incorporates a matter creation process, due to the time variation of the cosmological gravitational field. We model particle creation by representing the isotropic Universe (specifically, a given fiducial volume) as an open thermodynamic system. The resulting dynamical model involves four unknowns: the Hubble parameter, the non-minimally coupled scalar field, its potential, and the energy density of the matter component. We impose suitable conditions to derive a closed system for these functions of the redshift. In this model, the vacuum energy density of the present Universe is determined by the scalar field potential, in line with the modified gravity scenario. Hence, we construct a viable model, determining the form of the $f(R)$ theory a posteriori and appropriately constraining the phenomenological parameters of the matter creation process to eliminate tachyon modes. Finally, by analyzing the allowed parameter space, we demonstrate that the Planck evolution of the Hubble parameter can be reconciled with the late Universe dynamics, thus alleviating the Hubble tension.
Motivated by SU(3) structure compactifications, we show explicitly how to construct half--flat topological mirrors to Calabi--Yau manifolds with NS fluxes. Units of flux are exchanged with torsion factors in the cohomology of the mirror; this is the topological complement of previous differential--geometric mirror rules. The construction modifies explicit SYZ fibrations for compact Calabi--Yaus. The results are of independent interest for SU(3) compactifications. For example one can exhibit explicitly which massive forms should be used for Kaluza--Klein reduction, proving previous conjectures. Formality shows that these forms carry no topological information; this is also confirmed by infrared limits and old classification theorems.
The BESIII detector at the BEPCII collider has collected the world's largest datasets at the peaks of $J/\psi$, $\psi(3686)$ and $\psi(3770)$. The use of polarization and entangled states in multidimensional angular distribution analyses provides new probes of the production and decay characteristics of hyperon-antihyperon pairs. In a recent series of studies, significant transverse polarization of hyperons has been observed in $J/\psi$, $\psi(3686)$ decays into the $\Lambda\bar\Lambda$, $\Sigma^+\bar\Sigma^{-}$, $\Xi^0\bar\Xi^0$ and $\Xi^-\bar\Xi^{+}$ final states. The weak decay parameters of hyperons and antihyperons have also been independently determined for the first time. The most precise test of direct $CP$ violation to date has been achieved based on the measured weak decay parameters.
Evaporation of a liquid layer on a substrate is examined without the often-used isothermality assumption -- i.e., temperature variations are accounted for. Qualitative estimates show that nonisothermality makes the evaporation rate depend on the conditions the substrate is maintained at. If it is thermally insulated, evaporative cooling dramatically slows evaporation down; the evaporation rate tends to zero with time and cannot be determined by measuring the external parameters only. If, however, the substrate is maintained at a fixed temperature, the heat flux coming from below sustains evaporation at a finite rate -- deducible from the fluid's characteristics, relative humidity, and the layer's depth (whose importance has not been recognized before). The qualitative predictions are quantified using the diffuse-interface model applied to a liquid evaporating into its own vapor.
The chromatic critical edge theorem of Simonovits states that for a given color critical graph $H$ with $\chi(H)=k+1$, there exists an $n_0(H)$ such that the Tur\'an graph $T_{n,k}$ is the only extremal graph with respect to $ex(n,H)$ provided $n \geq n_0(H)$. Nikiforov's pioneer work on spectral graph theory implies that the color critical edge theorem also holds if $ex(n,H)$ is replaced by the maximum spectral radius and $n_0(H)$ is an exponential function of $|H|$. We want to know which color critical graphs $H$ satisfy that $n_0(H)$ is a linear function of $|H|$. Previous graphs include complete graphs and odd cycles. In this paper, we find two new classes of graphs: books and theta graphs. Namely, we prove that every graph on $n$ vertices with $\rho(G)>\rho(T_{n,2})$ contains a book of size greater than $\frac{n}{6.5}$. This can be seen as a spectral version of a 1962 conjecture by Erd\H{o}s, which states that every graph on $n$ vertices with $e(G)>e(T_{n,2})$ contains a book of size greater than $\frac{n}{6}$. In addition, our result on theta graphs implies that if $G$ is a graph of order $n$ with $\rho(G)>\rho(T_{n,2})$, then $G$ contains a cycle of length $t$ for every $t\leq \frac{n}{7}$. This is related to an open question by Nikiforov which asks to determine the maximum $c$ such that every graph $G$ of large enough order $n$ with $\rho(G)>\rho(T_{n,2})$ contains a cycle of length $t$ for every $t\leq cn$.
In this paper, we present a characterization of optimal entanglement witnesses in terms of positive maps and then provide a general method of checking optimality of entanglement witnesses. Applying it, we obtain new indecomposable optimal witnesses which have no spanning property. These also provide new examples which support a recent conjecture saying that the so-called structural physical approximations to optimal positive maps (optimal entanglement witnesses) give entanglement breaking maps (separable states).
We analyse the spectra of the archival XMM-Newton data of the Seyfert 1 AGN Zw 229.015 in the energy range $0.3 - 10.0$ keV. When fitted with a simple power-law, the spectrum shows signatures of a weak soft excess below 1.0 keV. We find that both thermal comptonisation and relativistically blurred reflection models provide more acceptable spectral fits, with more plausible physical explanations for the origin of the soft excess, than do multicolour disc blackbody and smeared wind absorption models. This motivated us to study the variability properties of the soft and the hard X-ray emissions from the source and the relationship between them, to put further constraints on the above models. Our analysis reveals that the variation in the $3.0 - 10.0$ keV band lags that in the $0.3 - 1.0$ keV band by $600^{+290}_{-280}$ s, while the lag between the $1.0 - 10.0$ keV and $0.3 - 1.0$ keV bands is $980^{+500}_{-500}$ s. This implies that the X-ray emissions possibly emanate from different regions within the system. From these values, we estimate the X-ray emission region to be within $20R_g$ of the central supermassive black hole (where $R_g=GM/c^2$, $M$ is the mass of the black hole, $G$ the Newtonian gravitational constant and $c$ the speed of light). Furthermore, we use XMM-Newton and Kepler photometric lightcurves of the source to search for possible nonlinear signatures in the flux variability. We find evidence that the variability in the system may be dominated by stochasticity rather than deterministic chaos, which has implications for the dynamics of the accretion system.
The complex zeros of the Riemann zeta-function are identical to the zeros of the Riemann xi-function, $\xi(s)$. Thus, if the Riemann Hypothesis is true for the zeta-function, it is true for $\xi(s)$. Since $\xi(s)$ is entire, the zeros of its derivative $\xi'(s)$ would then also satisfy a Riemann Hypothesis. We investigate the pair correlation function of the zeros of $\xi'(s)$ under the assumption that the Riemann Hypothesis is true. We then deduce consequences about the size of gaps between these zeros and the proportion of these zeros that are simple.
The puzzle of computer vision might find new challenging solutions when we realize that most successful methods work at the image level, which is remarkably more difficult than directly processing visual streams, just as happens in nature. In this paper, we claim that processing visual streams naturally leads to the formulation of the motion invariance principle, which enables the construction of a new theory of visual learning based on convolutional features. The theory addresses a number of intriguing questions that arise in natural vision, and offers a well-posed computational scheme for the discovery of convolutional filters over the retina. They are driven by the Euler-Lagrange differential equations derived from the principle of least cognitive action, which parallels the laws of mechanics. Unlike traditional convolutional networks, which need massive supervision, the proposed theory offers a truly new scenario in which feature learning takes place by unsupervised processing of video signals. An experimental report of the theory is presented, where we show that features extracted under motion invariance yield an improvement that can be assessed by measuring information-based indexes.
Measuring the energy loss and mass of highly ionizing particles predicted by theories beyond the Standard Model poses considerable challenges to conventional detection techniques. Such particles are predicted to experience energy loss in the matter they pass through that exceeds the dynamic range specified for most readout chips, leading to saturation of the detectors' electronics. Consequently, precise energy loss and mass measurements become unattainable. We present a new approach to detecting such highly ionizing particles using time projection chambers that overcomes this limitation, and we provide a case study for triggering on magnetic monopoles.
The asynchrony of the polar SDSS~J085414.02+390537.3 was revealed using data from the ZTF photometric survey. The light curves show a beat period $P_{beat} = 24.6 \pm 0.1$~days. During this period the system changes its brightness by $\sim 3^m$. The periodograms show power peaks at the white dwarf's rotation period $P_{spin} = 113.197 \pm 0.001$~min and orbital period $P_{orb} = 113.560 \pm 0.001$~min, with the corresponding polar asynchrony $1-P_{orb}/P_{spin} = 0.3\%$. The photometric behavior of the polar indicates a change of the main accreting pole during the beat period. Based on the Zeeman splitting of the H$\beta$ line, the mean magnetic field strength of the white dwarf is estimated to be $B = 28.5\pm 1.5$~MG. A magnetic field strength of $B \approx 34$~MG near the magnetic pole was found by modeling cyclotron spectra. Doppler tomograms in the H$\beta$ line demonstrate the distribution of emission sources typical for polars.
The nature of the dark components (dark matter and dark energy) that dominate the current cosmic evolution is a completely open question at present. In reality, we do not even know if they really constitute two separated substances. In this paper we use the recent Cosmic All Sky Survey (CLASS) lensing sample to test the predictions of one of the candidates for a unified dark matter/energy scenario, the so-called generalized Chaplygin gas (Cg) which is parametrized by an equation of state $p = -A/\rho_{Cg}^{\alpha}$ where $A$ and $\alpha$ are arbitrary constants. We show that, although the model is in good agreement with this radio source gravitational lensing sample, the limits obtained from CLASS statistics are only marginally compatible with the ones obtained from other cosmological tests. We also investigate the constraints on the free parameters of the model from a joint analysis between CLASS and supernova data.
Atomically thin layered two-dimensional materials, including transition-metal dichalcogenides (TMDCs) and black phosphorus (BP), have been receiving much attention because of their promising physical properties and potential applications in flexible and transparent electronic devices. Here, for the first time, we show non-volatile charge-trap memory devices, based on field-effect transistors with large hysteresis, consisting of a few-layer black phosphorus channel and a three-dimensional (3D) Al2O3/HfO2/Al2O3 charge-trap gate stack. An unprecedented memory window exceeding 12 V is observed, due to the extraordinary trapping ability of HfO2. The device shows high endurance and a stable retention with ~25% charge loss after 10 years, drastically lower than that reported for MoS2 flash memory. The high program/erase current ratio, large memory window, stable retention and high on/off current ratio provide a promising route towards flexible and transparent memory devices utilising atomically thin two-dimensional materials. The combination of 2D materials with traditional high-k charge-trap gate stacks opens up an exciting field of non-volatile memory devices.
The ordering between Wigner--Yanase--Dyson function and logarithmic mean is known. Also bounds for logarithmic mean are known. In this paper, we give two reverse inequalities for Wigner--Yanase--Dyson function and logarithmic mean. We also compare the obtained results with the known bounds of the logarithmic mean. Finally, we give operator inequalities based on the obtained results.
We have developed a model for pulsar polarization that takes into account the viewing geometry, rotation and modulation of the emission region. By solving for the plasma dynamics, we deduce expressions for the curvature radiation electric field in the time as well as frequency domains. We show that both the 'antisymmetric' and 'symmetric' types of circular polarization are possible due to the combined effect of aberration, viewing geometry and modulation. We argue that aberration combined with modulation can introduce a 'kinky' pattern into the PPA traverses.
The method used by senior geodetic engineer Jean-Georges Affholder to determine what can be termed as the 'centre of gravity' of physical Europe in 1989 and 2004 relies on mathematical formulae which, in their only published version, happen to be flawed with typographical errors that do not reflect Mr. Affholder's actual mathematical exactness. This short epistemological paper summarizes the major steps of Affholder's method, provides a corrected version of the general formulae, and briefly recalls some particulars of the specific determination of the centre of gravity of physical Europe.
A systematic analysis of the available data for $\omega$-meson photoproduction is given within the framework of Regge theory. At photon energies above 20 GeV the $\gamma{+}p{\to}\omega{+}p$ reaction is entirely dominated by Pomeron exchange. However, we find that the Pomeron exchange model cannot reproduce the $\gamma{+}p{\to}\rho{+}p$ and $\gamma{+}p{\to}\omega{+}p$ data at high energies simultaneously with the same set of parameters. The comparison between the $\rho$ and $\omega$ data leaves considerable room for a meson exchange contribution to $\omega$-meson photoproduction at low energies, where we find that the dominant contribution comes from $\pi$ and $f_2$-meson exchanges. There is a smooth transition between the meson exchange model at low energies and Regge theory at high energies.
We survey recent developments in the Birational Anabelian Geometry program aimed at the reconstruction of function fields of algebraic varieties over algebraically closed fields from pieces of their absolute Galois groups.
Facial expressions, vital in non-verbal human communication, have found applications in various computer vision fields like virtual reality, gaming, and emotional AI assistants. Despite advancements, many facial expression generation models encounter challenges such as low resolution (e.g., 32x32 or 64x64 pixels), poor quality, and the absence of background details. In this paper, we introduce FacEnhance, a novel diffusion-based approach addressing constraints in existing low-resolution facial expression generation models. FacEnhance enhances low-resolution facial expression videos (64x64 pixels) to higher resolutions (192x192 pixels), incorporating background details and improving overall quality. Leveraging conditional denoising within a diffusion framework, guided by a background-free low-resolution video and a single neutral-expression high-resolution image, FacEnhance generates a video incorporating the facial expression from the low-resolution video, performed by the individual, with the background from the neutral image. By complementing lightweight low-resolution models, FacEnhance strikes a balance between computational efficiency and desirable image resolution and quality. Extensive experiments on the MUG facial expression database demonstrate the efficacy of FacEnhance in enhancing low-resolution model outputs to state-of-the-art quality while preserving content and identity consistency. FacEnhance represents significant progress towards resource-efficient, high-fidelity facial expression generation, renewing outdated low-resolution methods to up-to-date standards.
The weak tensor product was introduced by Snevily as a way to construct new graphs that admit $\alpha$-labelings from a pair of known $\alpha$-graphs. In this article, we show that this product and the application to $\alpha$-labelings can be generalized by considering as the second factor of the product a family $\Gamma$ of bipartite $(p,q)$-graphs, with $p$ and $q$ fixed. The only additional restriction is that for every $F\in \Gamma$, there exists an $\alpha$-labeling $f_F$ with $f_F(V(F))=L\cup H$, where $L,H \subset [0,q]$ are the stable sets induced by the characteristic of $f_F$ and do not depend on $F$. We also obtain analogous applications to near $\alpha$-labelings and bigraceful labelings.
The stability of $q$-Gaussian distributions as particular solutions of the linear diffusion equation and its generalized nonlinear form, $\frac{\partial P(x,t)}{\partial t} = D \frac{\partial^2 [P(x,t)]^{2-q}}{\partial x^2}$, the \emph{porous-medium equation}, is investigated through both numerical and analytical approaches. It is shown that an \emph{initial} $q$-Gaussian, characterized by an index $q_i$, approaches the \emph{final}, asymptotic solution, characterized by an index $q$, in such a way that the relaxation rule for the kurtosis evolves in time according to a $q$-exponential, with a \emph{relaxation} index $q_{\rm rel} \equiv q_{\rm rel}(q)$. In some cases, particularly when one attempts to transform an infinite-variance distribution ($q_i \ge 5/3$) into a finite-variance one ($q<5/3$), the relaxation towards the asymptotic solution may occur very slowly in time. This fact might shed some light on the slow relaxation, for some long-range-interacting many-body Hamiltonian systems, from long-standing quasi-stationary states to the ultimate thermal equilibrium state.
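A minimal explicit finite-difference sketch (toy parameters and boundary handling, not the paper's method) of the porous-medium equation above, evolving an initial Gaussian ($q_i = 1$) towards the asymptotic compact-support profile for $q = 0.5$:

```python
import numpy as np

D, q = 1.0, 0.5                       # slow-diffusion regime, 2 - q > 1
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]
dt = 0.1 * dx**2 / D                  # conservative explicit time step

P = np.exp(-x**2)
P /= P.sum() * dx                     # normalize the initial q_i = 1 Gaussian

for _ in range(20_000):
    F = P ** (2.0 - q)                # nonlinear "flux potential"
    lap = (np.roll(F, -1) - 2.0 * F + np.roll(F, 1)) / dx**2
    P = np.maximum(P + D * dt * lap, 0.0)    # keep the density non-negative

m2 = (x**2 * P).sum() * dx
m4 = (x**4 * P).sum() * dx
print("kurtosis-like ratio m4/m2^2:", m4 / m2**2)   # relaxes towards its asymptotic value
```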
We answer a 15-year-old open question about the exact upper bound for bivariate copulas with a given diagonal section by giving an explicit formula for this bound. As an application, we determine the maximal asymmetry of bivariate copulas with a given diagonal section and construct a copula that attains it. We derive a formula for the maximal asymmetry that is simple enough to be used by practitioners.
We introduce a new interacting particle model with blocking and pushing interactions. Particles evolve on the positive line, jumping of their own volition rightwards or leftwards according to geometric jumps with parameter q. We show that the model involves a Pieri-type formula for the orthogonal group. We prove that the two extreme cases - q=0 and q=1 - lead respectively to a random tiling model studied by Borodin and Kuan and to a random matrix model.
An ultrarelativistic electron beam passing through an intense laser pulse emits radiation around its direction of propagation into a characteristic angular profile. Here we show that measurement of the variances of this profile in the planes parallel and perpendicular to the laser polarization, and the mean initial and final energies of the electron beam, allows the intensity of the laser pulse to be inferred in a way that is independent of the model of the electron dynamics. The method presented applies whether radiation reaction is important or not, and whether it is classical or quantum in nature, with accuracy of a few per cent across three orders of magnitude in intensity. It is tolerant of electron beams with broad energy spread and finite divergence. In laser-electron beam collision experiments, where spatiotemporal fluctuations cause alignment of the beams to vary from shot to shot, this permits inference of the laser intensity at the collision point, thereby facilitating comparisons between theoretical calculations and experimental data.
Recently, Freyhult, Rej and Staudacher (FRS) proposed an integral equation determining the leading logarithmic term of the anomalous dimension of sl(2) twist-operators in N=4 SYM for large Lorentz spin M and twist L at fixed j = L/log(M). We discuss the large j limit of the FRS equation. This limit can be matched with the {\em fast long string} limit of AdS_5 X S^5 superstring perturbation theory at all couplings. In particular, a certain part of the classical and one-loop string result is known to be protected and can be computed in the weakly coupled large-j limit of the FRS equation. We present various analytical and numerical results supporting agreement at one and two loops in the gauge theory.
In this paper we consider the following question: For bounded domains with smooth boundary, can strong pseudoconvexity be characterized in terms of the intrinsic complex geometry of the domain? Our approach to answering this question is based on understanding the dynamical behavior of real geodesics in the Kobayashi metric and allows us to prove a number of results for domains with low regularity. For instance, we show that for convex domains with $C^{2,\epsilon}$ boundary strong pseudoconvexity can be characterized in terms of the behavior of the squeezing function near the boundary, the behavior of the holomorphic sectional curvature of the Bergman metric near the boundary, or any other reasonable measure of the complex geometry near the boundary. The first characterization gives a partial answer to a question of Forn{\ae}ss and Wold. As an application of these characterizations, we show that a convex domain with $C^{2,\epsilon}$ boundary which is biholomorphic to a strongly pseudoconvex domain is also strongly pseudoconvex.
Two-dimensional (2D) palladium ditelluride (PdTe2) and platinum ditelluride (PtTe2) are two Dirac semimetals which demonstrate fascinating quantum properties such as superconductivity, magnetism and topological order, illustrating promising applications in future nanoelectronics and optoelectronics. However, the synthesis of their monolayers is dramatically hindered by strong interlayer coupling and orbital hybridization. In this study, an efficient synthesis method for monolayer PdTe2 and PtTe2 is demonstrated. Taking advantage of the surface reaction, epitaxial growth of large-area and high-quality monolayers of PdTe2 and patterned PtTe2 is achieved by direct tellurization of Pd(111) and Pt(111). A well-ordered PtTe2 pattern with a Kagome lattice formed by Te vacancy arrays is successfully grown. Moreover, multilayer PtTe2 can also be obtained, and potential excitation of Dirac plasmons is observed. The simple and reliable growth procedure of monolayer PdTe2 and patterned PtTe2 gives unprecedented opportunities for investigating new quantum phenomena and facilitating practical applications in optoelectronics.
Determining semantic textual similarity is a core research subject in natural language processing. Since vector-based models for sentence representation often use shallow information, capturing accurate semantics is difficult. By contrast, logical semantic representations capture deeper levels of sentence semantics, but their symbolic nature does not offer graded notions of textual similarity. We propose a method for determining semantic textual similarity by combining shallow features with features extracted from natural deduction proofs of bidirectional entailment relations between sentence pairs. For the natural deduction proofs, we use ccg2lambda, a higher-order automatic inference system, which converts Combinatory Categorial Grammar (CCG) derivation trees into semantic representations and conducts natural deduction proofs. Experiments show that our system was able to outperform other logic-based systems and that features derived from the proofs are effective for learning textual similarity.
A sunflower is a collection of sets $\{U_1,\ldots, U_n\}$ such that the pairwise intersection $U_i\cap U_j$ is the same for all choices of distinct $i$ and $j$. We study sunflowers of convex open sets in $\mathbb R^d$, and provide a Helly-type theorem describing a certain "rigidity" that they possess. In particular we show that if $\{U_1,\ldots, U_{d+1}\}$ is a sunflower in $\mathbb R^d$, then any hyperplane that intersects all $U_i$ must also intersect $\bigcap_{i=1}^{d+1} U_i$. We use our results to describe a combinatorial code $\mathcal C_n$ for all $n\ge 2$ which is on the one hand minimally non-convex, and on the other hand has no local obstructions. Along the way we further develop the theory of morphisms of codes, and establish results on the covering relation in the poset $\mathbf P_{\mathbf{Code}}$.
This paper gives similarity transformations for laminar film condensation on a vertical flat plate with variable temperature distribution and finds analytical solutions for arbitrary Prandtl numbers and condensation rates. The work contrasts with Sparrow and Gregg's assertion that wall temperature variation does not permit similarity solutions. To resolve the long-debated issue regarding heat transfer in the non-isothermal case, some useful formulations are obtained, including a significant dependence on varying Prandtl numbers. Results are compared with the available experimental data.
During sudden onset crisis events, the presence of spam, rumors and fake content on Twitter reduces the value of information contained in its messages (or "tweets"). A possible solution to this problem is to use machine learning to automatically evaluate the credibility of a tweet, i.e. whether a person would deem the tweet believable or trustworthy. This has often been framed and studied as a supervised classification problem in an off-line (post-hoc) setting. In this paper, we present a semi-supervised ranking model for scoring tweets according to their credibility. This model is used in TweetCred, a real-time system that assigns a credibility score to tweets in a user's timeline. TweetCred, available as a browser plug-in, was installed and used by 1,127 Twitter users within a span of three months. During this period, the credibility score for about 5.4 million tweets was computed, allowing us to evaluate TweetCred in terms of response time, effectiveness and usability. To the best of our knowledge, this is the first research work to develop a real-time system for credibility on Twitter, and to evaluate it on a user base of this size.
The study of quantum heat transport in superconducting circuits is significant for further understanding the connection between quantum mechanics and thermodynamics, and for possible applications in quantum information. The first experimental realisations of devices demonstrating photonic heat transport mediated by a qubit have already been designed and measured. Motivated by the analysis of such experimental results, and for future experimental designs, we numerically evaluate the photonic heat transport of qubit-resonator devices in the linear circuit regime through electromagnetic simulations using the Sonnet software, and compare with microwave circuit theory. We show that the method is a powerful tool for calculating heat transport and predicting unwanted parasitic resonances and background.
I compute exact partition function zeros of the Wako-Saito-Mu\~noz-Eaton model for various secondary structural elements and for two proteins, 1BBL and 1I6C, using both analytic and numerical methods. Two-state and barrierless downhill folding transitions can be distinguished by a gap in the distribution of zeros at the positive real axis.
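A hedged toy analogue (integer energy levels and made-up degeneracies; not the WSME model itself): writing $Z(\beta) = \sum_E g(E)\, z^E$ with $z = e^{-\beta}$ turns the partition function into a polynomial in $z$, whose complex roots are the partition function zeros.

```python
import numpy as np

g = np.array([1.0, 0.0, 3.0, 3.0, 1.0])   # hypothetical degeneracies g(E), E = 0..4
zeros = np.roots(g[::-1])                  # np.roots expects highest degree first
print(zeros)
# A gap in the zeros near the positive real z axis is the two-state signature
# discussed above; zeros approaching the axis indicate a sharper transition.
```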
We calculate the fidelity with which an arbitrary state can be encoded into a [7,1,3] CSS quantum error correction code in a non-equiprobable Pauli operator error environment with the goal of determining whether this encoding can be used for practical implementations of quantum computation. This determination is accomplished by applying ideal error correction to the encoded state which demonstrates the correctability of errors that occurred during the encoding process. We then apply single-qubit Clifford gates to the encoded state and determine the accuracy with which these gates can be applied. Finally, fault tolerant noisy error correction is applied to the encoded states in the non-equiprobable Pauli operator error environment allowing us to compare noisy (realistic) and perfect error correction implementations. We note that this maintains the fidelity of the encoded state for certain error-probability values. These results have implications for when non-fault tolerant procedures may be used in practical quantum computation and whether quantum error correction should be applied at every step in a quantum protocol.
Rank-revealing matrix decompositions provide an essential tool in spectral analysis of matrices, including the Singular Value Decomposition (SVD) and related low-rank approximation techniques. QR with Column Pivoting (QRCP) is usually suitable for these purposes, but it can be much slower than the unpivoted QR algorithm. For large matrices, the difference in performance is due to increased communication between the processor and slow memory, which QRCP needs in order to choose pivots during decomposition. Our main algorithm, Randomized QR with Column Pivoting (RQRCP), uses randomized projection to make pivot decisions from a much smaller sample matrix, which we can construct to reside in a faster level of memory than the original matrix. This technique may be understood as trading vastly reduced communication for a controlled increase in uncertainty during the decision process. For rank-revealing purposes, the selection mechanism in RQRCP produces results that are the same quality as the standard algorithm, but with performance near that of unpivoted QR (often an order of magnitude faster for large matrices). We also propose two formulas that facilitate further performance improvements. The first efficiently updates sample matrices to avoid computing new randomized projections. The second avoids large trailing updates during the decomposition in truncated low-rank approximations. Our truncated version of RQRCP also provides a key initial step in our truncated SVD approximation, TUXV. These advances open up a new performance domain for large matrix factorizations that will support efficient problem-solving techniques for challenging applications in science, engineering, and data analysis.
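A minimal one-shot sketch of the sampling idea in RQRCP (toy dimensions; the paper's blocked implementation updates the sample matrix instead of re-sketching): pivots are selected by pivoted QR on a small random sketch B = GA, after which the permuted original matrix is factored with fast unpivoted QR.

```python
import numpy as np
from scipy.linalg import qr

def rqrcp(A: np.ndarray, k: int, p: int = 8):
    """Rank-k column selection via a (k+p) x n Gaussian sketch."""
    rng = np.random.default_rng(0)
    B = rng.normal(size=(k + p, A.shape[0])) @ A      # small sample matrix, cheap to pivot
    _, _, piv = qr(B, mode="economic", pivoting=True)
    Q, R = qr(A[:, piv], mode="economic")             # unpivoted QR, fast kernels
    return Q[:, :k], R[:k, :], piv

A = np.random.default_rng(1).normal(size=(2000, 300)) @ \
    np.random.default_rng(2).normal(size=(300, 500))
Qk, Rk, piv = rqrcp(A, k=50)
err = np.linalg.norm(A[:, piv] - Qk @ Rk) / np.linalg.norm(A)
print(f"relative rank-50 error: {err:.2e}")
```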
In this paper, we consider the wave equation with both viscous Kelvin-Voigt and frictional damping as a model of viscoelasticity, in which we incorporate an internal control with a moving support. We prove null controllability when the control region, driven by the flow of an ODE, covers the whole domain. The proof is based upon the interpretation of the system as, roughly, the coupling of a heat equation with an ordinary differential equation (ODE). The presence of the ODE, for which there is no propagation along the space variable, makes the controllability of the system impossible when the control is confined to a subset in space that does not move. The null controllability of the system with a moving control is established using the observability of the adjoint system and some Carleman estimates for a coupled system of a parabolic equation and an ODE with the same singular weight, adapted to the geometry of the moving support of the control. This extends to the multi-dimensional case the results by P. Martin et al. on the one-dimensional case, which employed 1-d Fourier analysis techniques.
In this paper, we introduce and study the concept of CS-Rickart modules, a module analogue of the concept of ACS rings. A ring $R$ is called a right weakly semihereditary ring if every finitely generated right ideal of $R$ is of the form $P\oplus S,$ where $P_R$ is a projective module and $S_R$ is a singular module. We describe the rings $R$ over which $\mathrm{Mat}_n (R)$ is a right ACS ring for every $n \in \mathbb {N}$. We show that every finitely generated projective right $R$-module is a CS-Rickart module precisely when $R$ is a right weakly semihereditary ring. Also, we prove that if $R$ is a right weakly semihereditary ring, then every finitely generated submodule of a projective right $R$-module has the form $P_{1}\oplus \ldots\oplus P_{n}\oplus S$, where each $P_{i}$ is a projective module isomorphic to a submodule of $R_{R}$, and $S_R$ is a singular module. As corollaries, we obtain some well-known properties of Rickart modules and semihereditary rings.
Post-starburst galaxies are believed to be in a rapid transition between major merger starbursts and quiescent ellipticals, where AGN feedback is suggested as one of the processes responsible for the quenching. To study the role of AGN feedback, we constructed a sample of post-starburst candidates with AGN and indications of ionized outflows. We use MUSE/VLT observations to resolve the properties of the stars and multi-phase gas in five of them. All the galaxies show signatures of interaction/merger in their stellar or gas properties, with some galaxies at an early stage of interaction with companions at distances $\sim$50 kpc, suggesting that optical post-starburst signatures may be present well before the final starburst and coalescence. We detect narrow and broad kinematic components in multiple transitions in all the galaxies. Our detailed analysis of their kinematics and morphology suggests that, contrary to our expectation, the properties of the broad kinematic components are inconsistent with AGN-driven winds in 3 out of 5 galaxies. The two exceptions are also the only galaxies in which spatially-resolved NaID P-Cygni profiles are detected. In some cases, the observations are more consistent with interaction-induced galactic-scale flows, an often overlooked process. These observations raise the question of how to interpret broad kinematic components in interacting and perhaps also in active galaxies, in particular when spatially-resolved observations are not available or cannot rule out merger-induced galactic-scale motions. We suggest that NaID P-Cygni profiles are more effective outflow tracers, and use them to estimate the energy that is carried by the outflow.
Despite its simple crystal structure, layered boron nitride features a surprisingly complex variety of phonon-assisted luminescence peaks. We present a combined experimental and theoretical study on ultraviolet-light emission in hexagonal and rhombohedral bulk boron nitride crystals. Emission spectra of high-quality samples are measured via cathodoluminescence spectroscopy, displaying characteristic differences between the two polytypes. These differences are explained using a fully first-principles computational technique that takes into account radiative emission from ``indirect'', finite-momentum, excitons via coupling to finite-momentum phonons. We show that the differences in peak positions, number of peaks and relative intensities can be qualitatively and quantitatively explained, once a full integration over all relevant momenta of excitons and phonons is performed.
As a result of a non-trivial mixing matrix, neutrinos cannot be simultaneously in a flavor and mass eigenstate. We formulate and discuss information entropic relations that quantify the associated quantum uncertainty. We also formulate a protocol to determine the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix from quantum manipulations and measurements on an entangled lepton-neutrino pair. The entangled state features neutrino oscillations in a conditional probability involving measurements on the lepton and the neutrino. They can be switched off by choosing a specific observable on the lepton side which is determined by the PMNS matrix. The parameters of the latter, including the CP-violating phase $\delta$, can be obtained by guessing them and improving the guess by minimizing the remaining oscillations.
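To make the entropic statement concrete, the following sketch computes the Shannon entropy of the mass-eigenstate content of each flavor state using the standard PMNS parameterization. The mixing angles are approximate global-fit values and the CP phase is an assumed illustrative number, not results from this paper.

```python
import numpy as np

def pmns(th12, th13, th23, delta):
    """Standard parameterization of the PMNS mixing matrix."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    e = np.exp(1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 / e],
        [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13],
    ])

def mass_entropy(U, alpha):
    """Shannon entropy of the mass-eigenstate content of flavor alpha:
    nonzero whenever a flavor state is not a mass eigenstate."""
    p = np.abs(U[alpha]) ** 2
    return -np.sum(p * np.log2(p))

# Approximate global-fit angles in radians; delta is illustrative only.
U = pmns(th12=0.59, th13=0.15, th23=0.84, delta=3.4)
for alpha, name in enumerate(["e", "mu", "tau"]):
    print(f"H(mass | nu_{name}) = {mass_entropy(U, alpha):.3f} bits")
```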
The aim of this paper is to study a conjecture predicting a lower bound on the canonical height on abelian varieties, formulated by S. Lang and generalized by J. H. Silverman. We give here an asymptotic result on the height of Heegner points on the modular jacobian $J_{0}(N)$, and we draw non-trivial conclusions about the conjecture.
We investigate the star formation main sequence (MS) (SFR-M$_{\star}$) down to 10$^{8-9}\,\mathrm{M}_\odot$ using a sample of 34,061 newly-discovered ultra-faint ($27\lesssim i \lesssim 30$ mag) galaxies at $1<z<3$ detected in the GOODS-N field. Virtually none of these galaxies are contained in previous public catalogs, so our sample effectively doubles the number of known sources in the field. The sample was constructed by stacking the optical broad-band observations taken by the HST/GOODS-CANDELS surveys as well as the 25 ultra-deep medium-band images gathered by the GTC/SHARDS project. Our sources are faint (average observed magnitudes $\langle i\rangle\sim28.2$ mag, $\langle H\rangle\sim27.9$ mag), blue (UV slope $\langle\beta\rangle\sim-1.9$), star-forming (rest-frame colors $\langle U-V\rangle\sim0.10$ mag, $\langle V-J\rangle\sim0.17$ mag) galaxies. These observational characteristics are identified with young (mass-weighted age $\langle\mathrm{t_{M-w}}\rangle\sim0.014$ Gyr) stellar populations subject to low attenuations ($\langle\mathrm{A(V)}\rangle\sim0.30$ mag). Our sample allows us to probe the MS down to $10^{8.0}\,\mathrm{M}_\odot$ at $z=1$ and $10^{8.5}\,\mathrm{M}_\odot$ at $z=3$, around 0.6 dex deeper than previous analyses. In the low-mass galaxy regime, we find an average value for the slope of 0.97 at $1<z<2$ and 1.12 at $2<z<3$. Nearly 60% of our sample presents stellar masses in the range $10^{6-8}$ M$_\odot$ at $1<z<3$. If the slope of the MS remained constant in this regime, the sources populating the low-mass tail of our sample would qualify as starburst galaxies.
Spreadsheet manipulation pervades everyday work and significantly improves working efficiency. Large language models (LLMs) have recently been applied to automatic spreadsheet manipulation but have not yet been investigated in complicated and realistic tasks where reasoning challenges exist (e.g., long-horizon manipulation with multi-step reasoning and ambiguous requirements). To bridge the gap with real-world requirements, we introduce $\textbf{SheetRM}$, a benchmark featuring long-horizon and multi-category tasks with reasoning-dependent manipulation caused by real-life challenges. To mitigate the above challenges, we further propose $\textbf{SheetAgent}$, a novel autonomous agent that utilizes the power of LLMs. SheetAgent consists of three collaborative modules: $\textit{Planner}$, $\textit{Informer}$, and $\textit{Retriever}$, achieving both advanced reasoning and accurate manipulation over spreadsheets without human interaction through iterative task reasoning and reflection. Extensive experiments demonstrate that SheetAgent delivers 20-30% pass-rate improvements over baselines on multiple benchmarks, achieving enhanced precision in spreadsheet manipulation and demonstrating superior table reasoning abilities. More details and visualizations are available at https://sheetagent.github.io.
In monolayers of transition metal dichalcogenides the nonlocal nature of the effective dielectric screening leads to large binding energies of excitons. Additional lateral confinement gives rise to exciton localization in quantum dots. By assuming parabolic confinement for both the electron and the hole, we derive model wave functions for the relative and the center-of-mass motions of electron-hole pairs, and investigate theoretically resonant energy transfer among excitons localized in two neighboring quantum dots. We quantify the probability of energy transfer for a direct-gap transition by assuming that the interaction between two quantum dots is described by a Coulomb potential, which allows us to include all relevant multipole terms of the interaction. We demonstrate the structural control of the valley-selective energy transfer between quantum dots.
A matroid $N$ is said to be triangle-rounded in a class of matroids $\mathcal{M}$ if each $3$-connected matroid $M\in \mathcal{M}$ with a triangle $T$ and an $N$-minor has an $N$-minor with $T$ as a triangle. Reid gave a result useful for identifying such matroids, stated next: suppose that $M$ is a binary $3$-connected matroid with a $3$-connected minor $N$, $T$ is a triangle of $M$, and $e\in T\cap E(N)$; then $M$ has a $3$-connected minor $M'$ with an $N$-minor such that $T$ is a triangle of $M'$ and $|E(M')|\le |E(N)|+2$. We strengthen this result by dropping the condition that such an element $e$ exists and proving that there is a $3$-connected minor $M'$ of $M$ with an $N$-minor $N'$ such that $T$ is a triangle of $M'$ and $E(M')-E(N')\subseteq T$. This result is extended to the non-binary case and, as an application, we prove that $M(K_5)$ is triangle-rounded in the class of regular matroids.
As brain-computer interfacing (BCI) systems transition from assistive technology to more diverse applications, their speed, reliability, and user experience become increasingly important. Dynamic stopping methods enhance BCI system speed by deciding at any moment whether to output a result or wait for more information. Such an approach leverages trial variance, allowing good trials to be detected earlier, thereby speeding up the process without significantly compromising accuracy. Existing dynamic stopping algorithms typically optimize measures such as symbols per minute (SPM) and information transfer rate (ITR). However, these metrics may not accurately reflect system performance for specific applications or user types. Moreover, many methods depend on arbitrary thresholds or parameters that require extensive training data. We propose a model-based approach that takes advantage of the analytical knowledge that we have about the underlying classification model. By using a risk minimisation approach, our model allows precise control over the types of errors and the balance between precision and speed. This adaptability makes it ideal for customizing BCI systems to meet the diverse needs of various applications. We validate our proposed method on a publicly available dataset, comparing it with established static and dynamic stopping methods. Our results demonstrate that our approach offers a broad range of accuracy-speed trade-offs and achieves higher precision than baseline stopping methods.
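A generic risk-minimising stopping rule (an illustrative sketch, not the paper's specific model) can be written as: commit as soon as the expected misclassification cost of deciding now drops below an assumed per-window waiting cost. The cost values below are hypothetical.

```python
import numpy as np

def dynamic_stop(posteriors, err_cost=1.0, time_cost=0.02):
    """Generic risk-minimising dynamic stopping (sketch only).
    posteriors: array of shape (T, n_classes); row t is the class
    posterior after t+1 observation windows. Stop at the first t where
    the expected cost of deciding now (error probability * err_cost)
    is lower than the assumed cost of waiting one more window."""
    for t, p in enumerate(posteriors):
        risk_now = (1.0 - p.max()) * err_cost   # expected misclassification cost
        if risk_now < time_cost:                # cheaper to commit than to wait
            return t, int(p.argmax())
    return len(posteriors) - 1, int(posteriors[-1].argmax())  # forced decision

# Usage: posteriors sharpening over 4 windows for a 3-class trial.
post = np.array([[0.50, 0.30, 0.20],
                 [0.70, 0.20, 0.10],
                 [0.90, 0.07, 0.03],
                 [0.97, 0.02, 0.01]])
print(dynamic_stop(post))   # stops once the max posterior exceeds 1 - 0.02
```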
A tidal disruption event (TDE) can launch an ultrafast outflow. If the black hole is surrounded by large amounts of clouds, outflow-cloud interaction will generate bow shocks, accelerate electrons and produce radio emission. Here we investigate the interaction between a non-relativistic outflow and clouds in active galaxies, which manifests as outflow-BLR (broad line region) interaction and can be extended to outflow-torus interaction. This process can generate considerable radio emission, which may account for the radio flares appearing a few months after TDE outbursts. Benefiting from efficient energy conversion from outflow to shocks and the strong magnetic field, outflow-cloud interaction may play a non-negligible, or even dominant, role in generating radio flares in a cloudy circumnuclear environment if the circumnuclear medium (CNM) density is no more than 100 times that of a Sgr A*-like environment. In this case, the evolution of the radio spectra can be used to directly constrain the properties of the outflows.
Generalised matrix-matrix multiplication forms the kernel of many mathematical algorithms. A faster matrix-matrix multiply immediately benefits these algorithms. In this paper we implement efficient matrix multiplication for large matrices using the floating-point Intel Pentium SIMD (Single Instruction Multiple Data) architecture. A description of the issues and our solution is presented, paying attention to all levels of the memory hierarchy. Our results demonstrate an average performance of 2.09 times faster than the leading public domain matrix-matrix multiply routines.
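The memory-hierarchy aspect, independent of the SIMD instruction selection, can be illustrated with a cache-blocked multiply. The sketch below shows only the tiling idea in Python/NumPy; the paper's implementation works at the level of Pentium SIMD registers and assembly and is not reproduced here.

```python
import numpy as np

def blocked_matmul(A, B, tile=64):
    """Cache-blocked matrix multiply (illustrative sketch only).
    Shows the memory-hierarchy idea: operate on tiles small enough to
    stay resident in a fast level of memory while they are reused, so
    each element fetched from slow memory participates in many FLOPs."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):
                # each tile-product reuses the same A and B blocks many times
                C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
    return C

A = np.random.rand(256, 256); B = np.random.rand(256, 256)
assert np.allclose(blocked_matmul(A, B), A @ B)
```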
This paper provides an E-theoretic proof of an exact form, due to E. Troitsky, of the Mischenko-Fomenko Index Theorem for elliptic pseudodifferential operators over a unital C*-algebra. The main ingredients in the proof are the use of asymptotic morphisms of Connes and Higson, vector bundle modification, a Baum-Douglas-type group, and a KK-argument of Kasparov.
Exploring abundance and non-lacunarity of hyperbolic times for endomorphisms preserving an ergodic probability with positive Lyapunov exponents, we obtain that there are periodic points of period growing sublinearly with respect to the length of almost every dynamical ball. In particular, we conclude that any ergodic measure with positive Lyapunov exponents satisfies the nonuniform specification property. As consequences, we (re)obtain estimates on the recurrence to a ball in terms of the Lyapunov exponents, and we prove that any expanding measure is a limit of Dirac measures on periodic points.
We study the effect of MeV-scale asymmetric dark matter annihilation on the effective number of neutrinos $N_{\rm eff}$ at the epoch of big bang nucleosynthesis. Whether the asymmetric dark matter $\chi$ couples more strongly to the neutrinos $\nu$ than to the photons $\gamma$ and electrons $e^-$ ($\Gamma_{\chi\gamma, \chi e} \ll \Gamma_{\chi\nu}$) or more strongly to the photons and electrons ($\Gamma_{\chi\gamma, \chi e} \gg \Gamma_{\chi\nu}$), the lower mass limit on the asymmetric dark matter is about $18$ MeV for $N_{\rm eff}\simeq 3.0$.
Synchronization over networks depends strongly on the structure of the coupling between the oscillators. When the coupling presents certain regularities, the dynamics can be coarse-grained into clusters by means of External Equitable Partitions of the network graph and their associated quotient graphs. We exploit this graph-theoretical concept to study the phenomenon of cluster synchronization, in which different groups of nodes converge to distinct behaviors. We derive conditions and properties of networks in which such clustered behavior emerges, and show that the ensuing dynamics is the result of the localization of the eigenvectors of the associated graph Laplacians linked to the existence of invariant subspaces. The framework is applied to both linear and non-linear models, first for the standard case of networks with positive edges, before being generalized to the case of signed networks with both positive and negative interactions. We illustrate our results with examples of both signed and unsigned graphs for consensus dynamics and for partial synchronization of oscillator networks under the master stability function as well as Kuramoto oscillators.
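The defining property of an External Equitable Partition, namely that every node in a cell has the same number of neighbours in each *other* cell, is easy to check directly. A minimal sketch in Python/NumPy; the ring-graph example is our own illustration, not one from the paper.

```python
import numpy as np

def is_external_equitable(A, cells):
    """Check whether a node partition is an External Equitable Partition:
    every node in cell i must have the same number of neighbours in each
    other cell j (within-cell degrees are unconstrained).
    A: (n, n) adjacency matrix; cells: list of lists of node indices."""
    for i, ci in enumerate(cells):
        for j, cj in enumerate(cells):
            if i == j:
                continue   # 'external': within-cell degrees may differ
            counts = A[np.ix_(ci, cj)].sum(axis=1)
            if not np.all(counts == counts[0]):
                return False
    return True

# 6-node ring: pairs of opposite nodes form an EEP with three cells.
A = np.zeros((6, 6), dtype=int)
for u in range(6):
    A[u, (u + 1) % 6] = A[(u + 1) % 6, u] = 1
print(is_external_equitable(A, [[0, 3], [1, 4], [2, 5]]))   # True
```

Each cell of such a partition spans an invariant subspace of the graph Laplacian, which is why trajectories started uniform on each cell stay clustered.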
Owing to their periodic and intricate configurations, metamaterials engineered for acoustic and elastic wave control inevitably suffer from manufacturing anomalies and deviate from theoretical dispersion predictions. This work exploits Polynomial Chaos theory to quantify the magnitude and extent of these deviations and assess their impact on the desired behavior. It is shown that uncertainties stemming from surface roughness, tolerances, and other inconsistencies in a metamaterial's unit-cell parameters alter the targeted band-gap width and location, as well as the confidence level with which they are guaranteed. The effects of the uncertainties are propagated through a Bloch-wave dispersion analysis of three distinct phononic and resonant cellular configurations and are further confirmed in the frequency response of the finite structures. The analysis concludes with a unique algorithm intended to guide the design of metamaterials in the presence of system uncertainties.
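As a hedged illustration of non-intrusive Polynomial Chaos applied to a band-gap edge, the sketch below uses a 1-D diatomic chain as a stand-in unit cell (not one of the paper's three configurations) with one uniformly distributed uncertain mass, projecting the response onto Legendre polynomials by Gauss-Legendre quadrature.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# Stand-in model: 1-D diatomic chain with stiffness kc and masses m < M.
# Its band gap spans [sqrt(2*kc/M), sqrt(2*kc/m)]; let the light mass be
# uncertain, m = m0 * (1 + 0.05*xi) with xi ~ Uniform(-1, 1).
kc, m0, M = 100.0, 1.0, 2.0
upper_edge = lambda xi: np.sqrt(2.0 * kc / (m0 * (1.0 + 0.05 * xi)))

# Non-intrusive PCE: Legendre polynomials are orthogonal w.r.t. the
# uniform density 1/2 on [-1, 1] with E[P_k^2] = 1/(2k+1), so the
# coefficients are c_k = (2k+1)/2 * integral of f * P_k.
order, nq = 5, 16
x, w = leggauss(nq)
coeffs = [(2 * k + 1) / 2.0 * np.sum(w * upper_edge(x) * Legendre.basis(k)(x))
          for k in range(order + 1)]

mean = coeffs[0]
var = sum(c**2 / (2 * k + 1) for k, c in enumerate(coeffs) if k > 0)
print(f"upper band-gap edge: mean = {mean:.4f}, std = {np.sqrt(var):.4f}")
```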
A knot in a solid torus defines a map on the set of (smooth or topological) concordance classes of knots in $S^3$. This set admits a group structure, but a conjecture of Hedden suggests that satellite maps never induce interesting homomorphisms: we give new evidence for this conjecture in both categories. First, we use Casson-Gordon signatures to give the first obstruction to a slice pattern inducing a homomorphism on the topological concordance group, constructing examples with every winding number besides $\pm 1$. We then provide subtle examples of satellite maps which map arbitrarily deep into the $n$-solvable filtration of [COT03], act like homomorphisms on arbitrary finite sets of knots, and yet which still do not induce homomorphisms. Finally, we verify Hedden's conjecture in the smooth category for all but one small crossing number satellite operator.
In this short note, we come back to the recent proposal put forward by Kharzeev and Levin [PRL 114 (2015) 24, 242001], in which they phenomenologically couple the non-perturbative Veneziano ghost to the perturbative gluon, leading to a modified gluon propagator (the "glost") of the Gribov type, with complex poles. As such, a possible link was made between the QCD topological \theta-vacuum (Veneziano ghost) and color confinement (no physically observable gluons). We discuss some subtleties concerning gauge (BRST) invariance of this proposal, related to the choice of Feynman gauge. We furthermore provide an example in the Landau gauge of a similar phenomenological vertex that also describes the necessary Veneziano ghost but does not affect the Landau gauge gluon propagator.
We show that a quantum dot connected via tunnel barriers to superconducting leads traps a continuously tunable and hence fractional charge. The fractional charge on the island is due to particle-hole symmetry breaking and can be tuned via a gate potential acting on the dot or via changes in the phase difference across the island. We determine the groundstate, equilibrium, and excitation charges and show how to identify these quantities in an experiment.
We present a systematic study of ad blocking - and the associated "arms race" - as a security problem. We model ad blocking as a state space with four states and six state transitions, which correspond to techniques that can be deployed by either publishers or ad blockers. We argue that this is a complete model of the system. We propose several new ad blocking techniques, including ones that borrow ideas from rootkits to prevent detection by anti-ad blocking scripts. Another technique uses the insight that ads must be recognizable by humans to comply with laws and industry self-regulation. We have built prototype implementations of three of these techniques, successfully blocking ads and evading detection. We systematically evaluate our proposed techniques, along with existing ones, in terms of security, practicality, and legality. We characterize the order of growth of the development effort required to create/maintain ad blockers as a function of the growth of the web. Based on our state-space model, our new techniques, and this systematization, we offer insights into the likely "end game" of the arms race. We challenge the widespread assumption that the arms race will escalate indefinitely, and instead identify a combination of evolving technical and legal factors that will determine the outcome.
As machine learning models become more accurate, they typically become more complex and uninterpretable by humans. The black-box character of these models holds back their acceptance in practice, especially in high-risk domains where the consequences of failure could be catastrophic, such as health-care or defense. Providing understandable and useful explanations behind ML models or predictions can increase the trust of the user. Example-based reasoning, which entails leveraging previous experience with analogous tasks to make a decision, is a well-known strategy for problem solving and justification. This work presents a new explanation extraction method called LEAFAGE, for a prediction made by any black-box ML model. The explanation consists of the visualization of similar examples from the training set and the importance of each feature. Moreover, these explanations are contrastive, which aims to take the expectations of the user into account. LEAFAGE is evaluated in terms of fidelity to the underlying black-box model and usefulness to the user. The results showed that LEAFAGE performs better overall than the current state-of-the-art method LIME in terms of fidelity, on ML models with non-linear decision boundaries. A user study was conducted which focused on revealing the differences between example-based and feature importance-based explanations. It showed that example-based explanations performed significantly better than feature importance-based explanations, in terms of perceived transparency, information sufficiency, competence and confidence. Counter-intuitively, when the gained knowledge of the participants was tested, it showed that they learned less about the black-box model after seeing a feature importance-based explanation than after seeing no explanation at all. The participants found the feature importance-based explanation vague and hard to generalize to other instances.
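The two ingredients, similar training examples split into supporting and contrastive cases plus local feature importances, can be sketched generically as below. This is an illustrative reconstruction under our own assumptions, not the published LEAFAGE implementation; `model_predict` is a stand-in for any black-box classifier returning labels for a 2-D array of inputs.

```python
import numpy as np

def explain(model_predict, X_train, y_train, x, k=5, seed=0):
    """Sketch of an example-plus-feature-importance explanation
    (illustrative only; not the published LEAFAGE algorithm)."""
    dist = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dist)[:k]
    pred = model_predict(x[None, :])[0]
    support = nearest[y_train[nearest] == pred]    # examples like the prediction
    contrast = nearest[y_train[nearest] != pred]   # contrastive counter-examples

    # Local linear surrogate: weighted least squares on perturbations of x,
    # predicting agreement with the black-box label near x.
    rng = np.random.default_rng(seed)
    Z = x + 0.1 * rng.standard_normal((500, x.size))
    target = (model_predict(Z) == pred).astype(float)
    sw = np.sqrt(np.exp(-np.linalg.norm(Z - x, axis=1) ** 2))  # locality weights
    Zb = np.hstack([Z, np.ones((len(Z), 1))])                  # intercept column
    beta, *_ = np.linalg.lstsq(Zb * sw[:, None], target * sw, rcond=None)
    return support, contrast, beta[:-1]            # per-feature importances
```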
Possible manipulation of user transactions by miners in permissionless blockchain systems is a growing concern. This pervasive and systemic issue, known as Miner Extractable Value (MEV), incurs high costs on users of decentralised applications. Furthermore, transaction manipulations create other issues in blockchain systems such as congestion, higher fees, and system instability. Detecting transaction manipulations is difficult, even though it is known that they originate in the pre-consensus phase of transaction selection for block building, at the base layer of blockchain protocols. In this paper we summarize known transaction manipulation attacks. We then present L{\O}, an accountable base-layer protocol specifically designed to detect and mitigate transaction manipulations. L{\O} is built around accurate detection of transaction manipulations and assignment of blame at the granularity of a single mining node. L{\O} forces miners to log all the transactions they receive into a secure mempool data structure and to process them in a verifiable manner. Overall, L{\O} quickly and efficiently detects reordering, injection or censorship attempts. Our performance evaluation shows that L{\O} is also practical and only introduces a marginal performance overhead.
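One natural primitive for such an accountable mempool is a hash-chained, append-only log, so that silently reordering, injecting or censoring transactions after the fact breaks verification. The sketch below is a minimal illustration of that primitive only, not the L{\O} protocol itself.

```python
import hashlib
import json
import time

class AccountableMempool:
    """Minimal hash-chained, append-only transaction log (illustrative
    sketch; the actual protocol involves considerably more machinery).
    Each entry commits to its predecessor, so any later tampering with
    order or contents is detectable from the log itself."""
    def __init__(self):
        self.entries = []
        self.head = b"\x00" * 32           # genesis hash

    def receive(self, tx: dict) -> str:
        payload = json.dumps(tx, sort_keys=True).encode()
        digest = hashlib.sha256(self.head + payload).digest()
        self.entries.append((self.head, payload, time.time()))
        self.head = digest
        return digest.hex()                 # receipt the miner can be held to

    def verify(self) -> bool:
        head = b"\x00" * 32
        for prev, payload, _ in self.entries:
            if prev != head:
                return False                # chain broken: evidence of tampering
            head = hashlib.sha256(head + payload).digest()
        return head == self.head

pool = AccountableMempool()
pool.receive({"from": "alice", "to": "bob", "amount": 3})
pool.receive({"from": "carol", "to": "dave", "amount": 7})
print(pool.verify())    # True; mutating or reordering pool.entries makes it False
```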
The photometric analysis of a sample of Am stars is carried out to determine the stellar characteristics and to constrain the stellar dynamics. The spectroscopic analysis of the studied Am stars confirms the general characteristics of Am stars. The available data on elemental abundances for HD 113878 and HD 118660 show different characteristics during different epochs of observation. The basic stellar parameters (mass, luminosity, radius, lifetime, distance, proper motion, etc.) are also determined to identify the habitable zones for Earth-like exoplanets. Such information is important for identifying suitable planets for human settlement in the near future. In this connection, the tidal radius and the boundaries of the habitable zone of each star have been computed to support the search for extra-terrestrial life around them. An asteroseismic mass-scale test indicates stellar masses larger than, though comparable to, the solar mass.
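For the habitable-zone boundaries, a common rough prescription scales the conservative solar values (about 0.95 and 1.37 AU) with the square root of the stellar luminosity. A minimal sketch, assuming this prescription and a hypothetical luminosity, and ignoring the stellar-temperature corrections used in detailed HZ prescriptions:

```python
import math

def habitable_zone(L_over_Lsun):
    """Rough conservative habitable-zone boundaries in AU, scaling the
    solar values ~0.95 AU and ~1.37 AU with sqrt(L/Lsun). A crude sketch
    only; detailed prescriptions also correct for effective temperature."""
    s = math.sqrt(L_over_Lsun)
    return 0.95 * s, 1.37 * s

# Example: a star roughly 10x more luminous than the Sun (hypothetical value).
inner, outer = habitable_zone(10.0)
print(f"HZ: {inner:.2f} - {outer:.2f} AU")
```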
Using the Coulomb gauge formulation of QED we present a lattice QCD procedure to calculate the $\pi^+\pi^+$ scattering phase shift including the effects of the Coulomb potential which appears in this formulation. The approach described here incorporates the effects of relativity and avoids finite-volume corrections that vanish as a power of the volume in which the lattice calculation is performed. This is the first step in developing a complete lattice QCD calculation of the electromagnetic and isospin-breaking light-quark mass contributions to $\varepsilon'$, the parameter describing direct CP violating effects in $K_L\to\pi\pi$ decay.
In the framework of wave packet analysis, finite wavelet systems are particular classes of finite wave packet systems. In this paper, using a scaling matrix on a permuted version of the discrete Fourier transform (DFT) of the system generator, we derive a locally-scaled version of the DFT of the system generator and obtain a finite equal-norm Parseval wavelet frame over prime fields. We also give a characterization of all multiplicative subgroups of the cyclic multiplicative group for which the associated wavelet systems form frames. Finally, we present some concrete examples as applications of our results.
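The Parseval property of a finite frame reduces to a single matrix identity: the columns $f_j$ of the $d\times N$ synthesis matrix $F$ form a Parseval frame iff $FF^* = I$. The sketch below checks this for a generic harmonic frame built from rows of the unitary DFT matrix over $\mathbb{Z}_p$; this is our own illustration of the Parseval check, not the paper's wavelet construction.

```python
import numpy as np

def is_parseval(F, tol=1e-12):
    """Columns f_j of F (d x N) form a Parseval frame for C^d iff
    sum_j f_j f_j^* = F F^* = I_d."""
    d = F.shape[0]
    return np.allclose(F @ F.conj().T, np.eye(d), atol=tol)

# Harmonic frame for prime p: any d rows of the unitary DFT matrix give a
# d x p synthesis matrix whose columns form an equal-norm Parseval frame.
p, d = 7, 3
dft = np.exp(-2j * np.pi * np.outer(np.arange(p), np.arange(p)) / p) / np.sqrt(p)
F = dft[:d, :]                       # keep d orthonormal rows
print(is_parseval(F))                # True
print(np.linalg.norm(F, axis=0))     # equal column norms sqrt(d/p)
```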
This paper reports the results of a series of field experiments designed to investigate how peer effects operate in a real work setting. Workers were hired from an online labor market to perform an image-labeling task and, in some cases, to evaluate the work product of other workers. These evaluations had financial consequences for both the evaluating worker and the evaluated worker. The experiments showed that on average, evaluating high-output work raised an evaluator's subsequent productivity, with larger effects for evaluators that are themselves highly productive. The content of the subject evaluations themselves suggest one mechanism for peer effects: workers readily punished other workers whose work product exhibited low output/effort. However, non-compliance with employer expectations did not, by itself, trigger punishment: workers would not punish non-complying workers so long as the evaluated worker still exhibited high effort. A worker's willingness to punish was strongly correlated with their own productivity, yet this relationship was not the result of innate differences---productivity-reducing manipulations also resulted in reduced punishment. Peer effects proved hard to stamp out: although most workers complied with clearly communicated maximum expectations for output, some workers still raised their production beyond the output ceiling after evaluating highly productive yet non-complying work products.
The puzzling properties of quantum mechanics, wave-particle duality, entanglement and superposition, were dissected experimentally over the past decades. However, hidden-variable (HV) models, based on the three classical assumptions of wave-particle objectivity, determinism and independence, strive to explain or even defeat them. The development of quantum technologies has enabled us to test experimentally the predictions of quantum mechanics and HV theories. Here, we report an experimental demonstration of an entanglement-assisted quantum delayed-choice scheme using a liquid nuclear magnetic resonance quantum information processor. The scheme we realized is based on the recently proposed scheme [Nat. Comms. 5:4997 (2014)], which gives different results for quantum mechanics and HV theories. In our experiments, the intensities and the visibilities of the interference are consistent with the theoretical predictions of quantum mechanics. The results imply that a contradiction appears when all three assumptions of HV models are combined, though any two of the assumptions are compatible with quantum mechanics.
We introduce a Kazhdan--Lusztig-dual quantum group for (1,p) Virasoro logarithmic minimal models as the Lusztig limit of the quantum sl(2) at pth root of unity and show that this limit is a Hopf algebra. We calculate tensor products of irreducible and projective representations of the quantum group and show that these tensor products coincide with the fusion of irreducible and logarithmic modules in the (1,p) Virasoro logarithmic minimal models.
Given a Finsler manifold $(M,F)$, it is proved that the first eigenvalue of the Finslerian $p$-Laplacian is bounded above by a constant depending on $p$, the dimension of $M$, the Busemann-Hausdorff volume and the reversibility constant of $(M,F)$. For a Randers manifold $(M,F:=\sqrt{g}+\beta)$, where $g$ is a Riemannian metric on $M$ and $\beta$ an appropriate $1$-form on $M$, it is shown that the first eigenvalue $\lambda_{1,p}(M,F)$ of the Finslerian $p$-Laplacian defined by the Finsler metric $F$ is controlled by the first eigenvalue $\lambda_{1,p}(M,g)$ of the Riemannian $p$-Laplacian defined on $(M,g)$. Finally, Cheeger's inequality for the Finsler Laplacian is extended to the $p$-Laplacian, with $p > 1$.
We formulate a light-front spectator model for the proton incorporating the gluonic degree of freedom. In this model, at high energy scattering of the proton, the active parton is a gluon and the rest is viewed as a spin-$\frac{1}{2}$ spectator with an effective mass. The light front wave functions of the proton are constructed using a soft wall AdS/QCD prediction and parameterized by fitting the unpolarized gluon distribution function to the NNPDF3.0nlo data set. We investigate the helicity distribution of gluon in this model. We find that our prediction for the gluon helicity asymmetry agrees well with existing experimental data and satisfies the perturbative QCD constraints at small and large longitudinal momentum regions. We also present the transverse momentum dependent distributions (TMDs) for gluon in this model. We further show that the model-independent Mulders-Rodrigues inequalities are obeyed by the TMDs computed in our model.
We give an explicit evaluation, in terms of products of Jacobsthal numbers, of the Hankel determinants of order a power of two for the period-doubling sequence. We also explicitly give the eigenvalues and eigenvectors of the corresponding Hankel matrices. Similar considerations give the Hankel determinants for other orders.
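The claim is easy to explore numerically: generate the period-doubling sequence from its morphism 1 -> 10, 0 -> 11, compute exact Hankel determinants for orders a power of two, and list the Jacobsthal numbers ($J(n) = J(n-1) + 2J(n-2)$) alongside. The script below only prints both quantities for inspection; it does not assert any specific factorization.

```python
from fractions import Fraction

def period_doubling(n):
    """First n terms of the period-doubling sequence via the morphism
    1 -> 10, 0 -> 11."""
    s = [1]
    while len(s) < n:
        s = [b for a in s for b in ((1, 0) if a == 1 else (1, 1))]
    return s[:n]

def hankel_det(seq, order):
    """Exact determinant of the order x order Hankel matrix H[i][j] = seq[i+j],
    via Gaussian elimination over the rationals."""
    H = [[Fraction(seq[i + j]) for j in range(order)] for i in range(order)]
    det = Fraction(1)
    for c in range(order):
        piv = next((r for r in range(c, order) if H[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            H[c], H[piv] = H[piv], H[c]
            det = -det                      # a row swap flips the sign
        det *= H[c][c]
        for r in range(c + 1, order):
            f = H[r][c] / H[c][c]
            for j in range(c, order):
                H[r][j] -= f * H[c][j]      # row operations preserve det
    return int(det)

jacobsthal = [0, 1]
while len(jacobsthal) < 12:
    jacobsthal.append(jacobsthal[-1] + 2 * jacobsthal[-2])

s = period_doubling(64)
for k in [1, 2, 4, 8, 16]:
    print(f"order {k:2d}: Hankel det = {hankel_det(s, k)}")
print("Jacobsthal numbers:", jacobsthal)
```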
Laplace's equation appears frequently in physical applications involving conservable quantities. Among these applications, miniaturized devices have been of interest, in particular those using interdigitated arrays. Therefore, we solved the two-dimensional Laplace's equation for a shallow or finite domain consisting of interdigitated boundaries. We achieved this by using Jacobian elliptic functions to conformally transform the interdigitated domain into a parallel plates domain. The obtained expressions for potential distribution, flux density and flux allow for arbitrary domain height, different band widths and asymmetric potentials at the interdigitated array, besides considering fringing effects at both ends of the array. All these expressions depend only on relative dimensions, instead of absolute ones. With these results we showed that the behavior in shallow or finite domains approaches that of a semi-infinite domain, when its height is greater than the separation between the centers of consecutive bands. We also found that, for any desired but fixed flux, bands of equal width minimize the total surface of the interdigitated array. Finally, we present approximate expressions for the flux, based on elementary functions, which can be applied to ease the calculation of currents (faradaic or non-faradaic), capacitances and resistances among other possible applications.
Using uniform global Carleman estimates for discrete elliptic and semi-discrete hyperbolic equations, we study Lipschitz and logarithmic stability for the inverse problem of recovering a potential in a semi-discrete wave equation, discretized by finite differences in a 2-d uniform mesh, from boundary or internal measurements. The discrete stability results, when compared with their continuous counterparts, include new terms depending on the discretization parameter h. From these stability results, we design a numerical method to compute convergent approximations of the continuous potential.
We investigate the role of Segal's Gamma-spaces in the context of classical and quantum information, based on categories of finite probabilities with stochastic maps and density matrices with quantum channels. The information loss functional extends to the setting of probabilistic Gamma-spaces considered here. The Segal construction of connective spectra from Gamma-spaces can be used in this setting to obtain spectra associated to certain categories of gapped systems.
We, Microsoft Research Asia, made submissions to 11 language directions in the WMT19 news translation tasks. We won first place in 8 of the 11 directions and second place in the other three. Our basic systems are built on Transformer, back translation and knowledge distillation. We integrate several of our recent techniques to enhance the baseline systems: multi-agent dual learning (MADL), masked sequence-to-sequence pre-training (MASS), neural architecture optimization (NAO), and soft contextual data augmentation (SCA).