In this paper, we present a novel unsupervised video summarization model that requires no manual annotation. The proposed model, termed Cycle-SUM, adopts a new cycle-consistent adversarial LSTM architecture that effectively maximizes the information preservation and compactness of the summary video. It consists of a frame selector and a cycle-consistent learning based evaluator. The selector is a bidirectional LSTM network that learns video representations embedding the long-range relationships among video frames. The evaluator defines a learnable information preserving metric between the original video and the summary video and "supervises" the selector to identify the most informative frames to form the summary video. In particular, the evaluator is composed of two generative adversarial networks (GANs): the forward GAN learns to reconstruct the original video from the summary video, while the backward GAN learns to invert this process. The consistency between the outputs of this cycle learning is adopted as the information preserving metric for video summarization. We demonstrate the close relation between mutual information maximization and this cycle learning procedure. Experiments on two video summarization benchmark datasets validate the state-of-the-art performance and superiority of the Cycle-SUM model over previous baselines.
In the present paper we continue the project of systematic construction of invariant differential operators, using the example of the non-compact exceptional algebra $E_{7(-25)}$. Our choice of this particular algebra is motivated by the fact that it belongs to a narrow class of algebras, which we call 'conformal Lie algebras', whose properties are very similar to those of the conformal algebras of $n$-dimensional Minkowski space-time. This class of algebras is identified and summarized in a table. Another motivation is related to the AdS/CFT correspondence. We give the multiplets of indecomposable elementary representations, including the necessary data for all relevant invariant differential operators.
We present a novel framework for dynamic cut aggregation in L-shaped algorithms. The aim is to improve the parallel performance of distributed L-shaped algorithms through reduced communication latency and load imbalance. We show how optimality cuts can be aggregated into arbitrary partitions without affecting convergence of the L-shaped algorithm. Furthermore, we give a worst-case bound for L-shaped algorithms with static cut aggregation and then extend this result for dynamic aggregation. We propose a variety of aggregation schemes that fit into our framework, and evaluate them on a collection of large-scale stochastic programming problems. All methods are implemented in our open-source framework for stochastic programming, StochasticPrograms.jl, written in the Julia programming language. In addition, we propose a granulated strategy that combines the strengths of dynamic and static cut aggregation. Major performance improvements are possible with our approach in distributed settings. Our experimental results suggest that the granulated strategy can consistently yield high performance on a range of test problems. The experimental results are supported by our worst-case bounds.
We report on deep multi-color imaging (R_5-sigma = 26) of the Chandra Deep Field South, obtained with the Wide Field Imager (WFI) at the MPG/ESO 2.2 m telescope on La Silla as part of the COMBO-17 survey. As a result we present a catalog of 63501 objects in a field measuring 31.5' x 30' with astrometry and BVR photometry. A sample of 37 variable objects is selected from two-epoch photometry. We try to give interpretations based on color and variation amplitude.
Multiple phases occurring in a Bose gas with finite-range interaction are investigated. In the vicinity of the onset of Bose-Einstein condensation (BEC), the chemical potential and the pressure show a van der Waals-like behavior, indicating a first-order phase transition although there is no long-range attraction. Furthermore, the equation of state becomes multivalued near the BEC transition. For a Hartree-Fock or Popov (Hartree-Fock-Bogoliubov) approximation, such a multivalued region can be avoided by the Maxwell construction. For sufficiently weak interaction the multivalued region can also be removed using a many-body \mbox{T-matrix} approximation. However, for strong interactions a multivalued region remains even in the \mbox{T-matrix} approximation and after the Maxwell construction, which we interpret as a density hysteresis. This unified treatment of normal and condensed phases becomes possible due to the recently found scheme to eliminate self-interaction in the \mbox{T-matrix} approximation, which makes it possible to calculate properties below and above the critical temperature.
This paper concerns Kalman filtering when the measurements of the process are censored. The censored measurements are addressed by the Tobit model of Type I and are one-dimensional with two censoring limits, while the (hidden) state vectors are multidimensional. For this model, Bayesian estimates for the state vectors are provided through a recursive algorithm of Kalman filtering type. Experiments are presented to illustrate the effectiveness and applicability of the algorithm. The experiments show that the proposed method outperforms other filtering methodologies in minimizing the computational cost as well as the overall Root Mean Square Error (RMSE) for synthetic and real data sets.
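As an aside, the Type I Tobit measurement model with two censoring limits can be sketched in a few lines. The following is an illustrative toy only; the dimensions, limits, and names are assumptions for the example, not taken from the paper:

```python
import numpy as np

def tobit_measure(x, H, c_lower, c_upper, noise_std, rng):
    """Type I Tobit measurement: the latent observation z = Hx + v is
    reported only inside [c_lower, c_upper]; outside it saturates."""
    z = (H @ x)[0] + rng.normal(0.0, noise_std)
    return float(np.clip(z, c_lower, c_upper))

# Example: a 2-D state observed through H = [1, 0] with limits [-1, 1].
rng = np.random.default_rng(0)
H = np.array([[1.0, 0.0]])
x = np.array([3.0, 0.5])       # latent value far above the upper limit
y = tobit_measure(x, H, -1.0, 1.0, 0.1, rng)   # saturates at 1.0
```

A censoring-aware filter replaces the usual Kalman update with one based on this saturating likelihood whenever the reported value sits at a limit.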
Inclusive associated production of a light Higgs boson (m_H < m_t) with one jet in pp collisions is studied in next-to-leading order QCD. Transverse momentum (p_T < 30 GeV) and rapidity distributions of the Higgs boson are calculated for the LHC in the large top-quark mass limit. It is pointed out that, much as in the case of inclusive Higgs production, the K-factor of this process is large (~1.6) and depends weakly on the kinematics over a wide range of transverse momentum and rapidity intervals. Our result confirms previous suggestions that the production channel p+p -> H+jet -> gamma+gamma+jet gives a measurable signal for Higgs production at the LHC in the mass range 100-140 GeV, crucial also for the ultimate test of the Minimal Supersymmetric Standard Model.
Let $\phi: \R^d \longrightarrow \C$ be a compactly supported function which satisfies a refinement equation of the form $\phi(x) = \sum_{k\in\Lambda} c_k \phi(Ax - k),\quad c_k\in\C$, where $\Gamma\subset\R^d$ is a lattice, $\Lambda$ is a finite subset of $\Gamma$, and $A$ is a dilation matrix. We prove, under the hypothesis of linear independence of the $\Gamma$-translates of $\phi$, that there exists a correspondence between the vectors of the Jordan basis of a finite submatrix of $L=[c_{Ai-j}]_{i,j\in\Gamma}$ and a finite dimensional subspace $\mathcal H$ in the shift invariant space generated by $\phi$. We provide a basis of $\mathcal H$ and show that its elements satisfy a property of homogeneity associated to the eigenvalues of $L$. If the function $\phi$ has accuracy $\kappa$, this basis can be chosen to contain a basis for all the multivariate polynomials of degree less than $\kappa$. These latter functions are associated to eigenvalues that are powers of the eigenvalues of $A^{-1}$. Further we show that the dimension of $\mathcal H$ coincides with the local dimension of $\phi$, and hence, every function in the shift invariant space generated by $\phi$ can be written locally as a linear combination of translates of the homogeneous functions.
We describe experiments with an optical frequency standard based on a laser cooled $^{171}$Yb$^+$ ion confined in a radiofrequency Paul trap. The electric-quadrupole transition from the $^2S_{1/2}(F=0)$ ground state to the $^2D_{3/2}(F=2)$ state at the wavelength of 436 nm is used as the reference transition. In order to compare two $^{171}$Yb$^+$ standards, separate frequency servo systems are employed to stabilize two probe laser frequencies to the reference transition line centers of two independently stored ions. The experimental results indicate a relative instability (Allan standard deviation) of the optical frequency difference between the two systems of $\sigma_y(1000 {\rm s})=5\cdot 10^{-16}$ only, so that shifts in the sub-hertz range can be resolved. Shifts of several hertz are observed if a stationary electric field gradient is superimposed on the radiofrequency trap field. The absolute optical transition frequency of Yb$^+$ at 688 THz was measured with a cesium atomic clock at two times separated by 2.8 years. A temporal variation of this frequency can be excluded within a $1\sigma$ relative uncertainty of $4.4\cdot 10^{-15}$ yr$^{-1}$. Combined with recently published values for the constancy of other transition frequencies this measurement provides a limit on the present variability of the fine structure constant $\alpha$ at the level of $2.0\cdot 10^{-15}$ yr$^{-1}$.
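For context, the Allan standard deviation quoted above is a standard instability estimator for frequency standards. A minimal sketch of the non-overlapping estimator for fractional-frequency data might look like this (illustrative only, not the authors' analysis code):

```python
import numpy as np

def allan_deviation(y, m=1):
    """Non-overlapping Allan deviation of fractional-frequency data y
    at averaging factor m: sigma_y^2 = <(ybar_{k+1} - ybar_k)^2> / 2."""
    y = np.asarray(y, dtype=float)
    M = len(y) // m                       # number of averaged samples
    ybar = y[:M * m].reshape(M, m).mean(axis=1)
    d = np.diff(ybar)                     # adjacent-average differences
    return np.sqrt(0.5 * np.mean(d**2))

# Example: alternating fractional-frequency offsets.
sigma = allan_deviation([1.0, -1.0, 1.0, -1.0], m=1)
# sqrt(0.5 * mean([4, 4, 4])) = sqrt(2)
```

Repeating this at increasing averaging times tau = m * tau0 traces out the sigma_y(tau) curve from which a value like sigma_y(1000 s) is read off.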
Stanley associated with a graph G a symmetric function X_G which reduces to G's chromatic polynomial under a certain specialization of variables. He then proved various theorems generalizing results about the chromatic polynomial, as well as new ones that cannot be interpreted at that level. Unfortunately, X_G does not satisfy a Deletion-Contraction Law which makes it difficult to apply induction. We introduce a symmetric function in noncommuting variables which does have such a law and specializes to X_G when the variables are allowed to commute. This permits us to further generalize some of Stanley's theorems and prove them in a uniform and straightforward manner. Furthermore, we make some progress on the (3+1)-free Conjecture of Stanley and Stembridge.
We calculate the dynamic phase transition (DPT) temperatures and present the dynamic phase diagrams in the Blume-Capel model under the presence of a time-dependent oscillating external magnetic field by using the path probability method. We study the time variation of the average order parameters to obtain the phases in the system and the paramagnetic (P), ferromagnetic (F) and the F + P mixed phases are found. We also investigate the thermal behavior of the dynamic order parameters to analyze the nature (continuous and discontinuous) of transitions and to obtain the DPT points. We present the dynamic phase diagrams in three planes, namely (T, h), (d, T) and (k2/k1, T), where T is the reduced temperature, h the reduced magnetic field amplitude, d the reduced crystal-field interaction and the k2, k1 rate constants. The phase diagrams exhibit dynamic tricritical and reentrant behaviors as well as a double critical end point and triple point, strongly depending on the values of the interaction parameters and the rate constants. We compare and discuss the dynamic phase diagrams with dynamic phase diagrams that are obtained within the Glauber-type stochastic dynamics based on the mean-field theory and the effective field theory.
This article introduces tree substitutions as a tool to give explicit geometric representations of the dynamical systems generated by a particular set of automorphisms of the free group.
Neural networks are very powerful learning systems, but they do not readily generalize from one task to another. This is partly because they do not learn in a compositional way, that is, by discovering skills that are shared by different tasks and recombining them to solve new problems. In this paper, we explore the compositional generalization capabilities of recurrent neural networks (RNNs). We first propose the lookup table composition domain as a simple setup for testing compositional behaviour and show that it is theoretically possible for a standard RNN to learn to behave compositionally in this domain when trained with standard gradient descent and provided with additional supervision. We then remove this additional supervision and perform a search over a large number of model initializations to investigate the proportion of RNNs that can still converge to a compositional solution. We discover that a small but non-negligible proportion of RNNs do reach partial compositional solutions even without special architectural constraints. This suggests that a combination of gradient descent and evolutionary strategies directly favouring the minority models that developed more compositional approaches might suffice to lead standard RNNs towards compositional solutions.
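To make the setup concrete, here is a toy sketch of what a lookup table composition task can look like. The 3-bit string domain, the number of tables, and the naming are assumptions for illustration, not the paper's exact protocol:

```python
import itertools
import random

# The 8 three-bit input strings of the assumed toy domain.
inputs = ["".join(bits) for bits in itertools.product("01", repeat=3)]

def random_table(rng):
    """An atomic task: a random bijection over the 8 input strings."""
    outputs = inputs[:]
    rng.shuffle(outputs)
    return dict(zip(inputs, outputs))

def compose(x, first, second):
    """A composed task applies one table, then the other: second(first(x))."""
    return second[first[x]]

rng = random.Random(0)
t1, t2 = random_table(rng), random_table(rng)
y = compose("011", t1, t2)   # still a 3-bit string
```

A model behaves compositionally if, after learning t1 and t2 in isolation, it answers composed queries like "t2 t1 011" correctly without having seen that composition during training.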
Deep reinforcement learning (DRL) has led to a wide range of advances in sequential decision-making tasks. However, the complexity of neural network policies makes them difficult to understand and to deploy with limited computational resources. Currently, employing compact symbolic expressions as symbolic policies is a promising strategy to obtain simple and interpretable policies. Previous symbolic policy methods usually involve complex training processes and pre-trained neural network policies, which are inefficient and limit the application of symbolic policies. In this paper, we propose an efficient gradient-based learning method named Efficient Symbolic Policy Learning (ESPL) that learns the symbolic policy from scratch in an end-to-end way. We introduce a symbolic network as the search space and employ a path selector to find the compact symbolic policy. In this way we represent the policy with a differentiable symbolic expression and train it in an off-policy manner, which further improves efficiency. In addition, in contrast with previous symbolic policies, which only work in single-task RL because of their complexity, we extend ESPL to meta-RL to generate symbolic policies for unseen tasks. Experimentally, we show that our approach generates symbolic policies with higher performance and greatly improves data efficiency for single-task RL. In meta-RL, we demonstrate that, compared with neural network policies, the proposed symbolic policy achieves higher performance and efficiency and shows the potential to be interpretable.
In this paper, we study the turbulence of Galerkin truncations of the 2D Navier-Stokes equation under degenerate, large-scale stochastic forcing. We describe it using a kind of chaotic structure called full-horseshoes. We prove that if the stochastic forcing satisfies a certain hypoellipticity condition, then the system has full-horseshoes.
Let A be a cosemisimple Hopf *-algebra with antipode S and let $\Gamma$ be a left-covariant first order differential *-calculus over A such that $\Gamma$ is self-dual and invariant under the Hopf algebra automorphism S^2. A quantum Clifford algebra $\Cl(\Gamma,\sigma,g)$ is introduced which acts on Woronowicz' external algebra $\Gamma^\wedge$. A minimal left ideal of $\Cl(\Gamma,\sigma,g)$ which is an A-bimodule is called a spinor module. Metrics on spinor modules are investigated. The usual notion of a linear left connection on $\Gamma$ is extended to quantum Clifford algebras and also to spinor modules. The corresponding Dirac operator and connection Laplacian are defined. For the quantum group SL_q(2) and its bicovariant $4D_\pm$-calculi these concepts are studied in detail. A generalization of Bochner's theorem is given. All invariant differential operators over a given spinor module are determined. The eigenvalues of the Dirac operator are computed. Keywords: quantum groups, covariant differential calculus, spin geometry
In this article we prove the absence of relativistic effects, in leading order, in the ground-state energy of the Brown-Ravenhall operator. We obtain this asymptotic result for negative ions and for systems with the number of electrons proportional to the nuclear charge. In the case of neutral atoms, the analogous result was obtained earlier by Cassanas and Siedentop [4].
The luminosity function is a fundamental observable for characterizing how galaxies form and evolve throughout cosmic history. One key ingredient for deriving this measurement from the number counts in a survey is the characterization of the completeness and redshift selection functions of the observations. In this paper we present GLACiAR, an open-source Python tool, available on GitHub, for estimating the completeness and selection functions in galaxy surveys. The code is tailored for multiband imaging surveys aimed at searching for high-redshift galaxies through the Lyman Break technique, but it can be applied broadly. The code generates artificial galaxies that follow Sérsic profiles with different indices and with customizable size, redshift, and spectral energy distribution properties, adds them to input images, and measures the recovery rate. To illustrate this new software tool, we apply it to quantify the completeness and redshift selection functions for J-dropout sources (redshift z~10 galaxies) in the Hubble Space Telescope Brightest of Reionizing Galaxies Survey (BoRG). Our comparison with a previous completeness analysis of the same dataset shows overall agreement, but also highlights how different modelling assumptions for artificial sources can impact completeness estimates.
Web-based human trafficking activity has increased in recent years but it remains sparsely dispersed among escort advertisements and difficult to identify due to its often-latent nature. The use of intelligent systems to detect trafficking can thus have a direct impact on investigative resource allocation and decision-making, and, more broadly, help curb a widespread social problem. Trafficking detection involves assigning a normalized score to a set of escort advertisements crawled from the Web -- a higher score indicates a greater risk of trafficking-related (involuntary) activities. In this paper, we define and study the problem of trafficking detection and present a trafficking detection pipeline architecture developed over three years of research within the DARPA Memex program. Drawing on multi-institutional data, systems, and experiences collected during this time, we also conduct post hoc bias analyses and present a bias mitigation plan. Our findings show that, while automatic trafficking detection is an important application of AI for social good, it also provides cautionary lessons for deploying predictive machine learning algorithms without appropriate de-biasing. This ultimately led to integration of an interpretable solution into a search system that contains over 100 million advertisements and is used by over 200 law enforcement agencies to investigate leads.
Visual tracking (VT) is the process of locating a moving object of interest in a video. It is a fundamental problem in computer vision, with various applications in human-computer interaction, security and surveillance, robot perception, traffic control, etc. In this paper, we address this problem for the first time in the quantum setting, and present a quantum algorithm for VT based on the framework proposed by Henriques et al. [IEEE Trans. Pattern Anal. Mach. Intell., 7, 583 (2015)]. Our algorithm comprises two phases: training and detection. In the training phase, in order to discriminate the object and background, the algorithm trains a ridge regression classifier in the quantum state form where the optimal fitting parameters of ridge regression are encoded in the amplitudes. In the detection phase, the classifier is then employed to generate a quantum state whose amplitudes encode the responses of all the candidate image patches. The algorithm is shown to be polylogarithmic in scaling, when the image data matrices have low condition numbers, and therefore may achieve exponential speedup over the best classical counterpart. However, only quadratic speedup can be achieved when the algorithm is applied to implement the ultimate task of Henriques's framework, i.e., detecting the object position. We also discuss two other important applications related to VT: (1) object disappearance detection and (2) motion behavior matching, where much more significant speedup over the classical methods can be achieved. This work demonstrates the power of quantum computing in solving computer vision problems.
A universal stellar initial mass function (IMF) should not be expected from theoretical models of star formation, but little conclusive observational evidence for a variable IMF has been uncovered. In this paper, a parameterization of the IMF is introduced into photometric template fitting of the COSMOS2015 catalog. The resulting best-fit templates suggest systematic variations in the IMF, with most galaxies exhibiting stellar populations more top-heavy than that of the Milky Way. At fixed redshift, only a small range of IMFs is found, with the typical IMF becoming progressively more top-heavy with increasing redshift. Additionally, subpopulations of ULIRGs, quiescent, and star-forming galaxies are compared with predictions of stellar population feedback and show clear qualitative similarities to the evolution of dust temperatures.
Learning representations that generalize to novel compositions of known concepts is crucial for bridging the gap between human and machine perception. One prominent effort is learning object-centric representations, which are widely conjectured to enable compositional generalization. Yet, it remains unclear when this conjecture will be true, as a principled theoretical or empirical understanding of compositional generalization is lacking. In this work, we investigate when compositional generalization is guaranteed for object-centric representations through the lens of identifiability theory. We show that autoencoders that satisfy structural assumptions on the decoder and enforce encoder-decoder consistency will learn object-centric representations that provably generalize compositionally. We validate our theoretical result and highlight the practical relevance of our assumptions through experiments on synthetic image data.
We study the stability of coherent structures in plane Couette flow against long-wavelength perturbations in wide domains that cover several pairs of coherent structures. For one and two pairs of vortices, the states retain the stability properties of the small domains, but for three pairs new unstable modes are found. They are shown to be connected to bifurcations that break the translational symmetry and drive the coherent structures from the spanwise extended state to a modulated one that is a precursor to spanwise localized states. Tracking the stability of the orbits as functions of the spanwise wave length reveals a rich variety of additional bifurcations.
We discuss conditions for well-posedness of the scalar reaction-diffusion equation $u_{t}=\Delta u+f(u)$ equipped with Dirichlet boundary conditions where the initial data is unbounded. Standard growth conditions are juxtaposed with the no-blow-up condition $\int_{1}^{\infty}1/f(s) \d s=\infty$ that guarantees global solutions for the related ODE $\dot u=f(u)$. We investigate well-posedness of the toy PDE $u_{t}=f(u)$ in $L^{p}$ under this no-blow-up condition. An example is given of a source term $f$ and an initial condition $\psi\in L^{2}(0,1)$ such that $\int_{1}^{\infty}1/f(s)\d s=\infty$ and the toy PDE blows up instantaneously while the reaction-diffusion equation is globally well-posed in $L^{2}(0,1)$.
The possibility of resonance reflection (reaching 100% at maximum) is revealed. The corresponding exactly solvable models, with controllable numbers of resonances and controllable resonance positions and widths, are presented.
A modern version of Monetary Circuit Theory with a particular emphasis on stochastic underpinning mechanisms is developed. It is explained how money is created by the banking system as a whole and by individual banks. The role of central banks as system stabilizers and liquidity providers is elucidated. It is shown how, in the process of money creation, banks become naturally interconnected. A novel Extended Structural Default Model describing the stability of the Interconnected Banking Network is proposed. The purpose of banks' capital and liquidity is explained. A multi-period constrained optimization problem for a bank's balance sheet is formulated and solved in a simple case. Both theoretical and practical aspects are covered.
An MCMC approach is used to estimate the age-specific mortality rate ratio for German men and women with rheumatoid arthritis (RA). For constructing priors, we calculate a range of admissible values from prevalence and incidence data based on about 60 million people in Germany. Using these priors, we estimate mortality via MCMC and compare it to the findings of a recent register study from Denmark. The mortality rate ratio is estimated to be highest at young ages (4.0 and 3.5 for men and women aged 17.5 years, respectively) and to decline towards higher ages (1.0 and 1.2 for men and women aged 92.5 years, respectively). The lengths of the credibility intervals decrease from younger towards older ages.
We consider the heat equation defined by a generalized measure theoretic Laplacian on $[0,1]$. This equation describes heat diffusion in a bar such that the mass distribution of the bar is given by a non-atomic Borel probability measure $\mu$, where we do not assume the existence of a strictly positive mass density. We show that weak measure convergence implies convergence of the corresponding generalized Laplacians in the strong resolvent sense. We prove that strong semigroup convergence with respect to the uniform norm follows, which implies uniform convergence of solutions to the corresponding heat equations. This provides, for example, an interpretation for the mathematical model of heat diffusion on a bar with gaps in that the solution to the corresponding heat equation behaves approximately like the heat flow on a bar with sufficiently small mass on these gaps.
We study various models of random non-crossing configurations consisting of diagonals of convex polygons, and focus in particular on uniform dissections and non-crossing trees. For both these models, we prove convergence in distribution towards Aldous' Brownian triangulation of the disk. In the case of dissections, we also refine the study of the maximal vertex degree and validate a conjecture of Bernasconi, Panagiotou and Steger. Our main tool is the use of an underlying Galton-Watson tree structure.
Dice odds in the board game RISK were first investigated by Tan, corrected by Osborne, and extended by Blatt. We generalize these odds further, varying the number of sides and the number of dice used in a single battle. We show that the attacker needs two armies more than 86% of the number of defending armies to have an over 50% chance of conquering an enemy territory. By normal approximation, we show that the conquer odds transition rapidly from a low to a high chance of conquering around the 86%+2 threshold.
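For reference, the standard single-round battle odds (three attacking versus two defending six-sided dice, ties going to the defender) can be enumerated directly; a minimal sketch:

```python
from itertools import product

def battle_counts(n_att=3, n_def=2, sides=6):
    """Count outcomes of one battle round over all dice rolls,
    indexed by the number of armies the attacker loses (0, 1, or 2)."""
    counts = [0] * (min(n_att, n_def) + 1)
    for roll in product(range(1, sides + 1), repeat=n_att + n_def):
        att = sorted(roll[:n_att], reverse=True)   # attacker's top dice
        dfn = sorted(roll[n_att:], reverse=True)   # defender's top dice
        losses = sum(a <= d for a, d in zip(att, dfn))  # ties -> defender
        counts[losses] += 1
    return counts

print(battle_counts())  # [2890, 2611, 2275] out of 6**5 = 7776 rolls
```

These are the classic values (defender loses two armies in 2890 of 7776 rolls); changing `sides`, `n_att`, and `n_def` gives the generalized odds studied in the paper.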
Raman scattering experiments have been carried out on single crystals of Nd$_{0.5}$Sr$_{0.5}$MnO$_3$ as a function of temperature in the range 320-50 K, covering the paramagnetic insulator-ferromagnetic metal transition at 250 K and the charge-ordering antiferromagnetic transition at 150 K. A diffusive electronic Raman scattering response is seen in the paramagnetic phase and continues to exist even in the ferromagnetic phase, eventually disappearing below 150 K. We attribute the diffusive response in the ferromagnetic phase to the coexistence of different electronic phases. The frequency and linewidth of the phonons across the transitions show significant changes, which cannot be accounted for by anharmonic interactions alone.
We determine the unique hypergraphs with maximum spectral radius among all connected $k$-uniform ($k\geq 3$) unicyclic hypergraphs with matching number at least $z$, and among all connected $k$-uniform ($k\geq 3$) unicyclic hypergraphs with a given matching number, respectively.
Crystalline silicate features appear mainly in infrared bands. The Spitzer Infrared Spectrograph (IRS) collected numerous spectra of various objects, providing a large database in which to investigate crystalline silicates in a wide range of astronomical environments. We apply the manifold ranking algorithm to perform a systematic search for spectra with crystalline silicate features in the available Spitzer IRS Enhanced Products. In total, 868 spectra of 790 sources are found to show crystalline silicate features. These objects are cross-matched with the SIMBAD database as well as with the Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST)/DR2, and are identified optically as early-type stars, evolved stars, galaxies, and so on. The average spectrum of young stellar objects shows a variety of features dominated by forsterite, by enstatite, or by neither, while the average spectra of evolved objects (AGB, OH/IR and post-AGB stars, and planetary nebulae) consistently present dominant forsterite features. In addition, the strength of spectral features in typical silicate complexes is calculated. The results are available through the CDS for the astronomical community to further study crystalline silicates.
Tight wavelet frames (TWFs) in $L^2(\mathbb{R}^n)$ are versatile and practical structures that provide the perfect reconstruction property. Nevertheless, existing TWF construction methods exhibit limitations, including a lack of specific methods for generating mother wavelets in extension-based construction, and the necessity to address the sum of squares (SOS) problem even when specific methods for generating mother wavelets are provided in SOS-based construction. It is a common practice for current TWF constructions to begin with a given refinable function. However, this approach places the entire burden on finding suitable mother wavelets. In this paper, we introduce TWF construction methods that spread the burden between both types of functions: refinable functions and mother wavelets. These construction methods offer an alternative approach to circumvent the SOS problem while providing specific techniques for generating mother wavelets. We present examples to illustrate our construction methods.
Cosmic ray (CR) identification and replacement are critical components of imaging and spectroscopic reduction pipelines involving solid-state detectors. We present deepCR, a deep learning based framework for CR identification and subsequent image inpainting based on the predicted CR mask. To demonstrate the effectiveness of this framework, we train and evaluate models on Hubble Space Telescope ACS/WFC images of sparse extragalactic fields, globular clusters, and resolved galaxies. We demonstrate that at a false positive rate of 0.5%, deepCR achieves close to 100% detection rates in both extragalactic and globular cluster fields, and 91% in resolved galaxy fields, which is a significant improvement over the current state-of-the-art method LACosmic. Compared to a multicore CPU implementation of LACosmic, deepCR CR mask predictions run up to 6.5 times faster on CPU and 90 times faster on a single GPU. For image inpainting, the mean squared errors of deepCR predictions are 20 times lower in globular cluster fields, 5 times lower in resolved galaxy fields, and 2.5 times lower in extragalactic fields, compared to the best performing non-neural technique tested. We present our framework and the trained models as an open-source Python project, with a simple-to-use API. To facilitate reproducibility of the results we also provide a benchmarking codebase.
The top quark cross section close to threshold in $e^+e^-$ annihilation is computed including the summation of logarithms of the velocity at next-to-next-to-leading-logarithmic order in QCD. The remaining theoretical uncertainty in the normalization of the total cross section is at the few percent level, an order of magnitude smaller than in previous next-to-next-to-leading order calculations. This uncertainty is smaller than the effects of a light standard model Higgs boson.
We establish a stochastic maximum principle (SMP) for control problems of partially observed diffusions of mean-field type with risk-sensitive performance functionals.
A new scheme for incorporating radiative cooling in hydrodynamical codes is presented, centered around exact integration of the governing semi-discrete cooling equation. Using benchmark calculations based on the cooling downstream of a radiative shock, I demonstrate that the new scheme outperforms traditional explicit and implicit approaches in terms of accuracy, while remaining competitive in terms of execution speed.
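The exact-integration idea can be illustrated with a deliberately simple model. Assuming a hypothetical power-law cooling term dT/dt = -k T^alpha (an invented stand-in, not the paper's actual cooling function), the semi-discrete equation integrates analytically over an arbitrarily large step, where an explicit update would overshoot:

```python
def exact_cooling_step(T, k, alpha, dt, T_floor=1e-6):
    """One exact-integration update for dT/dt = -k * T**alpha (alpha != 1).
    Separating variables gives T(t)^(1-alpha) = T0^(1-alpha) - (1-alpha)*k*t."""
    base = T ** (1.0 - alpha) - (1.0 - alpha) * k * dt
    if base <= 0.0:  # gas would cool past the floor within this step
        return T_floor
    return max(base ** (1.0 / (1.0 - alpha)), T_floor)

def explicit_euler_step(T, k, alpha, dt, T_floor=1e-6):
    """Naive explicit update, for comparison."""
    return max(T - k * T ** alpha * dt, T_floor)

# One large step with alpha = 1/2: the analytic solution gives
# sqrt(T) = sqrt(100) - 0.5 * 10 = 5, i.e. T = 25; Euler crashes to the floor.
T0, k, alpha, dt = 100.0, 1.0, 0.5, 10.0
T_exact = exact_cooling_step(T0, k, alpha, dt)   # 25.0
T_euler = explicit_euler_step(T0, k, alpha, dt)  # hits T_floor
```

The point of the scheme is exactly this: accuracy is set by how well the cooling function is represented, not by the hydrodynamic time step.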
We show that two-dimensional phononic crystals exhibit Dirac cone dispersion at k=0 by exploiting dipole and quadrupole accidental degeneracy. While the equi-frequency surface of Dirac cone modes is isotropic, such systems exhibit "super-anisotropy", meaning that only transverse waves are allowed along certain directions while only longitudinal waves are allowed along some other directions. Only one mode, not two, is allowed near the Dirac point, and only two effective parameters, not four, are needed to describe the dispersion. Effective medium theory finds that the phononic crystals have effectively zero mass density and zero 1/C44 at the Dirac point. Numerical simulations are used to demonstrate the unusual elastic wave properties near the Dirac point frequency.
Traveling phenomena, frequently observed in a variety of scientific disciplines including atmospheric science, seismography, and oceanography, have long suffered from limitations due to the lack of realistic statistical modeling tools and simulation methods. Our work primarily addresses this gap by introducing more realistic and flexible models for spatio-temporal random fields. We break away from the traditional confines of the classic frozen field by either relaxing the assumption of a single deterministic velocity or rethinking the hypothesis regarding the spectrum shape, thus enhancing the realism of our models. While the proposed models stand out for their realism and flexibility, they are also paired with simulation algorithms that are equally or less computationally complex than the commonly used circulant embedding for Gaussian random fields in $\mathbb{R}^{2+1}$. This combination of realistic modeling with efficient simulation methods creates an effective solution for better understanding traveling phenomena.
In this paper, we study the weak sharpness of the solution set of the variational inequality problem (in short, VIP) and the finite convergence property of sequences generated by algorithms for finding solutions of the VIP. In particular, we give some characterizations of weak sharpness of the solution set of the VIP without considering the primal or dual gap function. We establish an abstract result on the finite convergence property of a sequence generated by some iterative methods. We then apply this abstract result to discuss the finite termination property of the sequences generated by the proximal point method, the exact proximal point method, and the gradient projection method. We also give an estimate of the number of iterates by which the sequence converges to a solution of the VIP.
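As a minimal sketch of the gradient projection method mentioned above (the one-dimensional example is invented for illustration, not taken from the paper), the iteration x_{k+1} = P_C(x_k - step * F(x_k)) can even terminate in finitely many steps on simple problems:

```python
def project_box(x, lo, hi):
    """Euclidean projection onto the interval [lo, hi]."""
    return max(lo, min(hi, x))

def gradient_projection(F, x0, step, lo, hi, tol=1e-10, max_iter=10000):
    """Iterate x_{k+1} = P_C(x_k - step * F(x_k)) for VIP(F, C) with C = [lo, hi].
    Returns the approximate solution and the number of iterations used."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = project_box(x - step * F(x), lo, hi)
        if abs(x_new - x) <= tol:
            return x_new, k
        x = x_new
    return x, max_iter

# VIP with F(x) = x - 1 on C = [0, 2]; the unique solution is x* = 1.
# With step = 1 the map sends every point to 1, so the method stops finitely.
x_star, iters = gradient_projection(lambda x: x - 1.0,
                                    x0=2.0, step=1.0, lo=0.0, hi=2.0)
```

Finite termination, as in this toy case, is precisely the behavior that weak sharpness of the solution set is used to guarantee.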
The average weight distribution of a regular low-density parity-check (LDPC) code ensemble over a finite field is thoroughly analyzed. In particular, a precise asymptotic approximation of the average weight distribution is derived for the small-weight case, and a series of fundamental qualitative properties of the asymptotic growth rate of the average weight distribution are proved. Based on this analysis, a general result, including all previous results as special cases, is established for the minimum distance of individual codes in a regular LDPC code ensemble.
In an effective field theory framework we review the two main mechanisms of quarkonium dissociation in a weakly coupled thermal bath.
The modulation of charge density and spin order in (LaMnO$_3$)$_{2n}$/(SrMnO$_3$)$_n$ ($n$=1-4) superlattices is studied via Monte Carlo simulations of the double-exchange model. G-type antiferromagnetic barriers in the SrMnO$_{3}$ regions with low charge density are found to separate ferromagnetic LaMnO$_{3}$ layers with high charge density. The recently experimentally observed metal-insulator transition with increasing $n$ is reproduced in our studies, and $n=3$ is found to be the critical value.
We performed high-magnetic-field ultrasonic experiments on YbB$_{12}$ up to 59 T to investigate the valence fluctuations in Yb ions. In zero field, the longitudinal elastic constant $C_{11}$, the transverse elastic constants $C_{44}$ and $\left( C_{11} - C_{12} \right)/2$, and the bulk modulus $C_\mathrm{B}$ show a hardening with a change of curvature at around 35 K indicating a small contribution of valence fluctuations to the elastic constants. When high magnetic fields are applied at low temperatures, $C_\mathrm{B}$ exhibits a softening above a field-induced insulator-metal transition signaling field-induced valence fluctuations. Furthermore, at elevated temperatures, the field-induced softening of $C_\mathrm{B}$ takes place at even lower fields and $C_\mathrm{B}$ decreases continuously with field. Our analysis using the multipole susceptibility based on a two-band model reveals that the softening of $C_\mathrm{B}$ originates from the enhancement of multipole-strain interaction in addition to the decrease of the insulator energy gap. This analysis indicates that field-induced valence fluctuations of Yb cause the instability of the bulk modulus $C_\mathrm{B}$.
In his unpublished preprint "Definable Valuations", Koenigsmann shows that every field that admits a t-henselian topology is either real closed or separably closed or admits a definable valuation inducing the t-henselian topology. To show this, Koenigsmann investigates valuation rings induced by certain (definable) subgroups of the field. The aim of this paper, based on the author's PhD thesis, is to examine the methods used in the preprint in greater detail and to correct a mistake in the original work, drawing on a paper of Jahnke and Koenigsmann.
Let C be the union of two general connected, smooth, nonrational curves X and Y intersecting transversally at a point P. Assume that P is a general point of X or of Y. Our main result, in a simplified way, says: Let Q be a point of X. Then Q is the limit of special Weierstrass points on a family of smooth curves degenerating to C if and only if Q is not P and one of the following conditions holds: Q is a special ramification point of the linear system |K_X+(g_Y+1)P|, or Q is a ramification point of the linear system |K_X+(g_Y+1+j)P| for j=-1 or j=1 and P is a Weierstrass point of Y. Above, g_Y stands for the genus of Y and K_X for a canonical divisor of X. As an application, we recover in a unified and conceptually simpler way computations made by Diaz and Cukierman of certain divisor classes in the moduli space of stable curves. In our method there is no need to worry about multiplicities, a usual nuisance of the method of test curves.
Entanglement distillation is a basic task in quantum information, and the distillable entanglement of the three bipartite reduced density matrices of a tripartite pure state has been studied in [Phys. Rev. A 84, 012325 (2011)]. We extend this result to tripartite mixed states by studying a conjectured matrix inequality, namely that $\mathop{\rm rank}(\sum_i R_i \otimes S_i)\leq K \mathop{\rm rank}(\sum_i R_i^T \otimes S_i)$ holds for any bipartite matrix $M=\sum_i R_i \otimes S_i$ of Schmidt rank $K$. We prove that the conjecture holds for $M$ with $K=3$ and for some special $M$ with arbitrary $K$.
A new indium-loaded liquid scintillator (LS) with up to 15 wt% In and high light output promises a breakthrough in the 25-year-old proposal for observing pp solar neutrinos (nue) by tagged nue capture in 115In. Intense background from the natural beta-decay of In, the single obstacle blocking this project until now, can be reduced by more than a factor of 100 with the new In-LS. Only non-In background remains, dramatically relaxing design criteria. Eight tons of In yields ~400 pp nue/y after analysis cuts. With the lowest threshold yet, Q=118 keV, In is the most sensitive detector of the pp nue spectrum, the long-sought touchstone for nue conversion.
We show the appearance of an unconventional Majorana zero mode whose wave function splits into multiple parts located at different ends of different topological superconductors, hereinafter referred to as a multi-locational Majorana zero mode. Specifically, we discuss the multi-locational Majorana zero modes in a three-terminal Josephson junction consisting of topological superconductors, which forms an elemental qubit of fault-tolerant topological quantum computers. We also demonstrate anomalously long-ranged nonlocal resonant transport phenomena caused by the multi-locational Majorana zero mode.
This communication reports the effect of DNA conformation on fluorescence resonance energy transfer (FRET) efficiency between two laser dyes in a layer-by-layer (LbL) self-assembled film. The dyes Acriflavine and Rhodamine B were attached onto the negative phosphate backbones of DNA in the LbL film through electrostatic attraction. FRET between these dyes was then investigated. An increase in pH or temperature causes the denaturation of DNA, followed by coil formation of single-stranded DNA. As a result, the FRET efficiency changed accordingly. These observations demonstrate that, by monitoring the change in FRET efficiency between two laser dyes in the presence of DNA, it is possible to detect altered DNA conformation in a changed environment.
The francium atom is considered a prospective candidate system for searching for the T,P-violating electron electric dipole moment [T. Aoki et al., Quantum Sci. Technol. 6, 044008 (2021)]. We demonstrate that the same experiment can be used for an axionlike-particle (ALP) search. For this, we calculate electronic structure constants of the ALP-mediated interaction for a wide range of ALP masses. Using the recently updated constraints on the ALP-electron and ALP-nucleon coupling constants, we show that the interactions corresponding to these constraints can give a significant contribution to the atomic electric dipole moment. Therefore, stronger restrictions on ALP characteristics can be obtained from the francium atom electric dipole moment experiment.
We show how to construct an ideal triangulation of a mapping torus of a pseudo-Anosov map punctured along the singular fibers. This gives rise to a new conjugacy invariant of mapping classes, and a new proof of a theorem of Farb-Leininger-Margalit. The approach in this paper is based on ideas of Hamenstadt.
We derive factorization identities for a class of preemptive-resume queueing systems, with batch arrivals and catastrophes that, whenever they occur, eliminate multiple customers present in the system. These processes are quite general, as they can be used to approximate Levy processes, diffusion processes, and certain types of growth-collapse processes; thus, all of the processes mentioned above also satisfy similar factorization identities. In the Levy case, our identities simplify to both the well-known Wiener-Hopf factorization, and another interesting factorization of reflected Levy processes starting at an arbitrary initial state. We also show how the ideas can be used to derive transforms for some well-known state-dependent/inhomogeneous birth-death processes and diffusion processes.
We present and begin to explore a collection of social data that represents part of the COVID-19 pandemic's effects on the United States. This data is collected from a range of sources and includes longitudinal trends of news topics, social distancing behaviors, community mobility changes, web searches, and more. This multimodal effort enables new opportunities for analyzing the impacts such a pandemic has on the pulse of society. Our preliminary results show that the number of COVID-19-related news articles published peaked immediately after the World Health Organization declared the pandemic on March 11 and has since steadily decreased---regardless of changes in the number of cases or public policies. Additionally, we found that politically moderate and scientifically-grounded sources have, relative to baselines measured before the beginning of the pandemic, published a lower proportion of COVID-19 news than more politically extreme sources. We suggest that further analysis of these multimodal signals could produce meaningful social insights, and we present an interactive dashboard to aid further exploration.
Age is an important variable for describing the expected status of brain anatomy across the normal aging trajectory. The deviation from that normative aging trajectory may provide some insights into neurological diseases. In neuroimaging, predicted brain age is widely used to analyze different diseases. However, the brain age gap (i.e., the difference between the chronological age and the estimated age) alone may not be informative enough for disease classification problems. In this paper, we propose to extend the notion of global brain age by estimating brain structure ages using structural magnetic resonance imaging. To this end, an ensemble of deep learning models is first used to estimate a 3D aging map (i.e., a voxel-wise age estimation). Then, a 3D segmentation mask is used to obtain the final brain structure ages. This biomarker can be used in several situations. First, it enables accurate estimation of brain age for the purpose of anomaly detection at the population level; in this situation, our approach outperforms several state-of-the-art methods. Second, brain structure ages can be used to compute the deviation of each brain structure from the normal aging process. This feature can be used in a multi-disease classification task for an accurate differential diagnosis at the subject level. Finally, the brain structure age deviations of individuals can be visualized, providing insights about brain abnormality and helping clinicians in real medical contexts.
A more detailed description of the quantum 'ax+b' group of Baaj and Skandalis is presented. In particular, we give generators and present formulae for the action of the comultiplication on them; it is also shown that this group is a quantization of a Poisson-Lie structure on a classical 'ax+b' group.
Crohn's disease, one of the two inflammatory bowel diseases (IBD), affects 200,000 people in the UK alone, or roughly one in every 500. We explore the feasibility of deep learning algorithms for identifying terminal ileal Crohn's disease in Magnetic Resonance Enterography images on a small dataset. We show that they provide performance comparable to the current clinical standard, the MaRIA score, while requiring only a fraction of the preparation and inference time. Moreover, bowels are subject to high variation between individuals due to their complex and free-moving anatomy. Thus, we also explore the effect of the difficulty of the classification task at hand on performance. Finally, we employ soft attention mechanisms to amplify salient local features and add interpretability.
The technique known as Grilliot's trick constitutes a template for explicitly defining the Turing jump functional $(\exists^2)$ in terms of a given effectively discontinuous type-two functional. In this paper, we discuss the standard extensionality trick: a technique similar to Grilliot's trick in Nonstandard Analysis. This nonstandard trick proceeds by deriving, from the existence of certain nonstandard discontinuous functionals, the Transfer principle from Nonstandard Analysis limited to $\Pi_1^0$-formulas; from this (generally ineffective) implication, we obtain an effective implication expressing the Turing jump functional in terms of a discontinuous functional (and no longer involving Nonstandard Analysis). The advantage of our nonstandard approach is that one obtains effective content without paying attention to effective content. We also discuss a new class of functionals which all seem to fall outside the established categories. These functionals directly derive from the Standard Part axiom of Nonstandard Analysis.
The Horava-Lifshitz (HL) theory has recently attracted a lot of interest as a viable solution to some quantum-gravity-related problems and for the presence of an effective cosmological constant able to drive the cosmic speed-up. We show here that, in the weak field limit, the HL proposal leads to a modification of the gravitational potential through two additive terms (scaling as $r^2$ and $r^{-4}$, respectively) to the Newtonian $1/r$ potential. We then derive a general expression to compute the rotation curve of an extended system under the assumption that the mass density depends only on the cylindrical coordinates $(R, z)$, showing that the HL modification induces a dependence of the circular velocity on the mass function, which is a new feature of the theory. As a first exploratory analysis, we then try fitting the Milky Way rotation curve using its visible components only, in order to see whether the HL modified potential can be an alternative to the dark matter framework. This turns out not to be the case, so we argue that dark matter is still needed, but the amount of dark matter and the dark halo density profile have to be revised according to the new HL potential.
A honeycomb detector consisting of a matrix of 96 closely packed hexagonal cells, each working as a proportional counter with a wire readout, was fabricated and tested at the CERN PS. The cell depth and the radial dimensions of the cell were small, in the range of 5-10 mm. The appropriate cell design was arrived at using GARFIELD simulations. Two geometries are described illustrating the effect of field shaping. The charged particle detection efficiency and the preshower characteristics have been studied using pion and electron beams. Average charged particle detection efficiency was found to be 98%, which is almost uniform within the cell volume and also within the array. The preshower data show that the transverse size of the shower is in close agreement with the results of simulations for a range of energies and converter thicknesses.
We introduce a block-online variant of the temporal feature-wise linear modulation (TFiLM) model to achieve bandwidth extension. The proposed architecture simplifies the UNet backbone of the TFiLM to reduce inference time and employs an efficient transformer at the bottleneck to alleviate performance degradation. We also utilize self-supervised pretraining and data augmentation to enhance the quality of bandwidth-extended signals and to reduce sensitivity to the downsampling method. Experimental results on the VCTK dataset show that the proposed method outperforms several recent baselines in both intrusive and non-intrusive metrics. Pretraining and filter augmentation also help stabilize and enhance the overall performance.
We introduce and explore the relation between quivers and 3-manifolds with the topology of the knot complement. This idea can be viewed as an adaptation of the knots-quivers correspondence to Gukov-Manolescu invariants of knot complements (also known as $F_K$ or $\hat{Z}$). Apart from assigning quivers to complements of $T^{(2,2p+1)}$ torus knots, we study the physical interpretation in terms of the BPS spectrum and general structure of 3d $\mathcal{N}=2$ theories associated to both sides of the correspondence. We also make a step towards categorification by proposing a $t$-deformation of all objects mentioned above.
In about half of Seyfert galaxies, the X-ray emission is absorbed by an optically thin, ionized medium, the so-called "Warm Absorber", whose origin and location are still a matter of debate. The aim of this paper is to put more constraints on the warm absorber by studying its variability. We analyzed the X-ray spectra of the Seyfert 1 galaxy Mrk 704, which was observed twice, three years apart, by XMM-Newton. The spectra were well fitted with a two-zone absorber, possibly only partially covering the source. The parameters of the absorbing matter - column density, ionization state, covering factor - changed significantly between the two observations. Possible explanations for the more ionized absorber are a torus wind (the source is a polar-scattering one) or, in the partial covering scenario, an accretion disk wind. The less ionized absorber may be composed of clouds orbiting in the surroundings of the nucleus, similar to what has already been found in other sources, most notably NGC 1365.
We study the thermal transport properties of general conformal field theories (CFTs) on curved spacetimes in the leading order viscous hydrodynamic limit. At the level of linear response, we show that the thermal transport is governed by a system of forced linearised Navier-Stokes equations on a curved space. Our setup includes CFTs in flat spacetime that have been deformed by spatially dependent and periodic local temperature variations or strains that have been applied to the CFT, and hence is relevant to CFTs arising in condensed matter systems at zero charge density. We provide specific examples of deformations which lead to thermal backflow driven by a DC source: that is, the thermal currents locally flow in the opposite direction to the applied DC thermal source. We also consider thermal transport for relativistic quantum field theories that are not conformally invariant.
The weighted and directed network of countries based on the number of overseas banks is analyzed in terms of its fragility to the banking crisis of one country. We use two different models to describe transmission of shocks, one local and the other global. Depending on the original source of the crisis, the overall size of crisis impacts is found to differ country by country. For the two-step local spreading model, it is revealed that the scale of the first impact is determined by the out-strength, the total number of overseas branches of the country at the origin of the crisis, while the second impact becomes more serious if the in-strength at the origin is increased. For the global spreading model, some countries named "triggers" are found to play important roles in shock transmission, and the importance of the feed-forward-loop mechanism is pointed out. We also discuss practical policy implications of the present work.
The Early Universe, together with many nearby dwarf galaxies, is deficient in heavy elements. The evolution of massive stars in such environments is thought to be affected by rotation. Extreme rotators amongst them tend to form decretion disks and manifest themselves as OBe stars. We use a combination of U, B, GAIA, Spitzer, and Hubble Space Telescope photometry to identify the complete populations of massive OBe stars - one hundred to thousands in number - in five nearby dwarf galaxies. This allows us to derive the galaxy-wide fractions of main-sequence stars that are OBe stars (f_OBe), and how this fraction depends on absolute magnitude, mass, and metallicity (Z). We find f_OBe = 0.22 in the Large Magellanic Cloud (0.5 Z_Sun), increasing to f_OBe = 0.31 in the Small Magellanic Cloud (0.2 Z_Sun). In the so far unexplored metallicity regime below 0.2 Z_Sun, in Holmberg I, Holmberg II, and Sextans A, we also obtain high OBe star fractions of 0.27, 0.27, and 0.27, respectively. These high OBe star fractions, and the strong contribution in the stellar mass range which dominates the production of supernovae, shed new light on the formation channel of OBe stars, as well as on the preference of long-duration gamma-ray bursts and superluminous supernovae to occur in metal-poor galaxies.
Large-gap quantum spin Hall insulators are promising materials for room-temperature applications based on Dirac fermions. Key to engineer the topologically non-trivial band ordering and sizable band gaps is strong spin-orbit interaction. Following Kane and Mele's original suggestion, one approach is to synthesize monolayers of heavy atoms with honeycomb coordination accommodated on templates with hexagonal symmetry. Yet, in the majority of cases, this recipe leads to triangular lattices, typically hosting metals or trivial insulators. Here, we conceive and realize "indenene", a triangular monolayer of indium on SiC exhibiting non-trivial valley physics driven by local spin-orbit coupling, which prevails over inversion-symmetry breaking terms. By means of tunneling microscopy of the 2D bulk we identify the quantum spin Hall phase of this triangular lattice and unveil how a hidden honeycomb connectivity emerges from interference patterns in Bloch $p_x \pm ip_y$-derived wave functions.
Based on a variational approach, we propose that there are two kinds of low energy states in the t-J type models at low doping. In a quasi-particle state an unpaired spin bound to a hole with a well-defined momentum can be excited with spin waves. The resulting state shows a suppression of antiferromagnetic order around the hole with the profile of a {\em spin bag}. These spin-bag states with spin and charge or hole separated form a continuum of low-energy excitations. Very different properties predicted by these two kinds of states explain a number of anomalous results observed in the exact diagonalization studies on small clusters up to 32 sites.
Video bronchoscopy is routinely conducted for biopsies of lung tissue suspected for cancer, monitoring of COPD patients and clarification of acute respiratory problems at intensive care units. The navigation within complex bronchial trees is particularly challenging and physically demanding, requiring long-term experience of physicians. This paper addresses the automatic segmentation of bronchial orifices in bronchoscopy videos. Deep learning-based approaches to this task are currently hampered by the lack of readily available ground-truth segmentation data. Thus, we present a data-driven pipeline consisting of k-means clustering followed by a compact marker-based watershed algorithm, which makes it possible to generate airway instance segmentation maps from given depth images. In this way, these traditional algorithms serve as weak supervision for training a shallow CNN directly on RGB images, solely based on a phantom dataset. We evaluate the generalization capabilities of this model on two in-vivo datasets covering 250 frames from 21 different bronchoscopies. We demonstrate that its performance is comparable to that of models trained directly on in-vivo data, reaching an average error of 11 vs. 5 pixels for the detected centers of the airway segmentation at an image resolution of 128x128. Our quantitative and qualitative results indicate that, in the context of video bronchoscopy, phantom data and weak supervision using non-learning-based approaches make it possible to gain a semantic understanding of airway structures.
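The weak-supervision step can be sketched in miniature. This is a toy stand-in for the actual pipeline: a plain two-cluster k-means on depth values followed by a threshold, omitting the paper's marker-based watershed, and the 4x4 `depth` map is invented for illustration:

```python
def kmeans2_1d(values, iters=50):
    """Plain two-cluster Lloyd's algorithm on scalar values.
    Returns the two centroids sorted ascending (near, far)."""
    c = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[0 if abs(v - c[0]) <= abs(v - c[1]) else 1].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return sorted(c)

def orifice_mask(depth_image):
    """Label pixels of the deeper cluster as airway-orifice candidates."""
    flat = [d for row in depth_image for d in row]
    c_near, c_far = kmeans2_1d(flat)
    threshold = 0.5 * (c_near + c_far)
    return [[1 if d > threshold else 0 for d in row] for row in depth_image]

# Toy 4x4 depth map: a deep orifice (values ~8-9) inside a shallow wall (~1).
depth = [[1, 1, 1, 1],
         [1, 9, 8, 1],
         [1, 9, 9, 1],
         [1, 1, 1, 1]]
mask = orifice_mask(depth)  # marks exactly the four deep pixels
```

In the real pipeline such masks, refined by the watershed step, become the pseudo-labels on which the shallow CNN is trained.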
Quantum information processing by liquid-state NMR spectroscopy uses pseudo-pure states to mimic the evolution and observations on true pure states. A new method of preparing pseudo-pure states is described, which involves the selection of the spatially labeled states of an ancilla spin with which the spin system of interest is correlated. This permits a general procedure to be given for the preparation of pseudo-pure states on any number of spins, subject to the limitations imposed by the loss of signal from the selected subensemble. The preparation of a single pseudo-pure state is demonstrated by carbon and proton NMR on 13C-labeled alanine. With a judicious choice of magnetic field gradients, the method further allows encoding of up to 2^N pseudo-pure states in independent spatial modes in an N+1 spin system. Fast encoding and decoding schemes are demonstrated for the preparation of four such spatially labeled pseudo-pure states.
We consider a BRST invariant generalization of the "massive background Landau gauge", resembling the original Curci-Ferrari model that saw a revived interest due to its phenomenological success in modeling infrared Yang-Mills dynamics, including that of the phase transition. Unlike the Curci-Ferrari model, however, the mass parameter is no longer a phenomenological input but it enters as a result of dimensional transmutation via a BRST invariant dimension two gluon condensate. The associated renormalization constant is dealt with using Zimmermann's reduction of constants program which fixes the value of the mass parameter to values close to those obtained within the Curci-Ferrari approach. Using a self-consistent background field, we can include the Polyakov loop and probe the deconfinement transition, including its interplay with the condensate and its electric-magnetic asymmetry. We report a continuous phase transition at Tc ~ 0.230 GeV in the SU(2) case and a first order one at Tc ~ 0.164 GeV in the SU(3) case, values which are again rather close to those obtained within the Curci-Ferrari model at one-loop order.
A simple graph $G$ is \textit{k-ordered} (respectively, \textit{k-ordered hamiltonian}), if for any sequence of $k$ distinct vertices $v_1, ..., v_k$ of $G$ there exists a cycle (respectively, hamiltonian cycle) in $G$ containing these $k$ vertices in the specified order. In 1997 Ng and Schultz introduced these concepts of cycle orderability and posed the question of the existence of 3-regular 4-ordered (hamiltonian) graphs other than $K_4$ and $K_{3, 3}$. Ng and Schultz observed that a 3-regular 4-ordered graph on more than 4 vertices is triangle free. We prove that a 3-regular 4-ordered graph $G$ on more than 6 vertices is square free, and we show that the smallest graph that is triangle and square free, namely the Petersen graph, is 4-ordered. Furthermore, we prove that the smallest graph after $K_4$ and $K_{3, 3}$ that is 3-regular 4-ordered hamiltonian is the Heawood graph, and we exhibit forbidden subgraphs for 3-regular 4-ordered hamiltonian graphs on more than 10 vertices. Finally, we construct an infinite family of 3-regular 4-ordered graphs.
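The structural facts quoted above are easy to check by machine. The following sketch (the BFS-based girth computation and the vertex labeling are my own choices, not from the paper) verifies that the Petersen graph is 3-regular with girth 5, hence triangle- and square-free:

```python
from collections import deque

def girth(adj):
    """Length of a shortest cycle in an unweighted graph, via BFS from every vertex."""
    best = float("inf")
    for src in adj:
        dist, parent = {src: 0}, {src: None}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    queue.append(v)
                elif parent[u] != v:  # non-tree edge closes a cycle
                    best = min(best, dist[u] + dist[v] + 1)
    return best

# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5.
petersen = {i: set() for i in range(10)}
for i in range(5):
    petersen[i] |= {(i + 1) % 5, i + 5}
    petersen[(i + 1) % 5].add(i)
    petersen[i + 5] |= {i, 5 + (i + 2) % 5}
    petersen[5 + (i + 2) % 5].add(i + 5)

assert all(len(nb) == 3 for nb in petersen.values())  # 3-regular
assert girth(petersen) == 5                           # no triangles or squares
```

A girth of 5 is exactly the "triangle and square free" property the abstract attributes to the smallest such graph.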
We are concerned with the existence of regular solutions for non-Newtonian fluids in dimension three. For a certain type of non-Newtonian fluid we prove local existence of unique regular solutions, provided that the initial data are sufficiently smooth. Moreover, if the $H^3$-norm of the initial data is sufficiently small, then the regular solution exists globally in time.
We examine a Type-1 neck pinch singularity in simplicial Ricci flow (SRF) for an axisymmetric piecewise flat 3-dimensional geometry with 3-sphere topology. SRF was recently introduced as an unstructured mesh formulation of Hamilton's Ricci flow (RF). It describes the RF of a piecewise-flat simplicial geometry. In this paper, we apply the SRF equations to a representative double-lobed axisymmetric piecewise flat geometry with mirror symmetry at the neck similar to the geometry studied by Angenent and Knopf (A-K). We choose a specific radial profile and compare the SRF equations with the corresponding finite-difference solution of the continuum A-K RF equations. The piecewise-flat 3-geometries considered here are built of isosceles-triangle-based frustum blocks. The axial symmetry of this model allows us to use frustum blocks instead of tetrahedra. The 2-sphere cross-sectional geometries in our model are regular icosahedra. We demonstrate that, under a suitably-pinched initial geometry, the SRF equations for this relatively low-resolution discrete geometry yield the canonical Type-1 neck pinch singularity found in the corresponding continuum solution. We adaptively remesh during the evolution to keep the circumcentric dual lattice well-centered. Without such remeshing, we cannot evolve the discrete geometry to neck pinch. We conclude with a discussion of future generalizations and tests of this SRF model.
Let P and Q be non-zero integers. The Lucas sequence U_n(P,Q) is defined by U_0=0, U_1=1, U_n = P*U_{n-1} - Q*U_{n-2} for n > 1. The question of when U_n(P,Q) can be a perfect square has generated interest in the literature. We show that for n=2,...,7, U_n is a square for infinitely many pairs (P,Q) with gcd(P,Q)=1; further, for n=8,...,12, the only non-degenerate sequences with gcd(P,Q)=1 and U_n(P,Q) a perfect square are given by U_8(1,-4)=21^2, U_8(4,-17)=620^2, and U_12(1,-1)=12^2.
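The recurrence and the sporadic squares quoted above can be checked directly; the helper names below are illustrative:

```python
from math import isqrt

def lucas_u(P, Q, n):
    """U_n(P, Q) with U_0 = 0, U_1 = 1, U_n = P*U_{n-1} - Q*U_{n-2}."""
    if n == 0:
        return 0
    u_prev, u = 0, 1
    for _ in range(n - 1):
        u_prev, u = u, P * u - Q * u_prev
    return u

def is_square(m):
    """True iff m is a perfect square (m >= 0)."""
    return m >= 0 and isqrt(m) ** 2 == m

# The three sporadic squares for n = 8 and n = 12 quoted in the abstract:
assert lucas_u(1, -4, 8) == 21 ** 2      # 441
assert lucas_u(4, -17, 8) == 620 ** 2    # 384400
assert lucas_u(1, -1, 12) == 12 ** 2     # U_n(1,-1) is Fibonacci; F_12 = 144
```

Note that (P,Q) = (1,-1) reproduces the Fibonacci numbers, where the classical result that 144 is the largest Fibonacci square appears as the n=12 entry.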
Dynamic voltage scaling (DVS) is one of the most effective techniques for reducing energy consumption in embedded and real-time systems. However, traditional DVS algorithms have inherent limitations on their capability in energy saving since they rarely take into account the actual application requirements and often exploit fixed timing constraints of real-time tasks. Taking advantage of application adaptation, an enhanced energy-aware feedback scheduling (EEAFS) scheme is proposed, which integrates feedback scheduling with DVS. To achieve further reduction in energy consumption over pure DVS while not jeopardizing the quality of control, the sampling period of each control loop is adapted to its actual control performance, thus exploring flexible timing constraints on control tasks. Extensive simulation results are given to demonstrate the effectiveness of EEAFS under different scenarios. Compared with the optimal pure DVS scheme, EEAFS saves much more energy while yielding comparable control performance.
In this article, we investigate scalar field cosmology in the coincident $f(Q)$ gravity formalism. We calculate the motion equations of $f(Q)$ gravity under the flat Friedmann-Lema\^{i}tre-Robertson-Walker background in the presence of a scalar field. We consider a non-linear $f(Q)$ model, particularly $f(Q)=-Q+\alpha Q^n$, which is nothing but a polynomial correction to the STEGR case. Further, we assume two well-known specific forms of the potential function, namely the exponential form $V(\phi)= V_0 e^{-\beta \phi}$ and the power-law form $V(\phi)= V_0\phi^{-k}$. We employ some phase-space variables and transform the cosmological field equations into an autonomous system. We calculate the critical points of the corresponding autonomous systems and examine their stability behaviors. We discuss the physical significance corresponding to the exponential case for parameter values $n=2$ and $n=-1$ with $\beta=1$, and $n=-1$ with $\beta=\sqrt{3}$. Moreover, we discuss the same corresponding to the power-law case for the parameter values $n=-2$ and $k=0.16$. We also analyze the behavior of corresponding cosmological parameters such as the scalar field and dark energy density, the deceleration parameter, and the effective equation of state parameter. Corresponding to the exponential case, we find that the results obtained for the parameter constraints in Case III are the best among the three cases, representing the evolution of the universe from a decelerated stiff era to an accelerated de-Sitter era via a matter-dominated epoch. Further, in the power-law case, we find that all trajectories exhibit identical behavior, representing the evolution of the universe from a decelerated stiff era to an accelerated de-Sitter era. Lastly, we conclude that the exponential case shows better evolution as compared to the power-law case.
This paper characterizes and discusses devolutionary genetic algorithms and evaluates their performance in solving the minimum labeling Steiner tree (MLST) problem. We define devolutionary algorithms as the process of reaching a feasible solution by devolving a population of super-optimal infeasible solutions over time. We claim that distinguishing them from the widely used evolutionary algorithms is relevant. The most important distinction lies in the fact that in the former type of process, the value function decreases over successive generations of solutions, thus providing a natural stopping condition for the computation process. We show how classical evolutionary concepts, such as crossover, mutation, and fitness, can be adapted to aim at reaching an optimal or close-to-optimal solution among the first generations of feasible solutions. We additionally introduce a novel integer linear programming formulation of the MLST problem and a valid constraint used for speeding up the devolutionary process. Finally, we conduct an experiment comparing the performance of devolutionary algorithms to that of state-of-the-art approaches on randomly generated instances of the MLST problem. The results of this experiment support the use of devolutionary algorithms for the MLST problem and their development for other NP-hard combinatorial optimization problems.
If modified gravity holds, but the weak lensing analysis is done in the standard way, one finds that dark matter halos have peculiar shapes, not following the standard Navarro-Frenk-White profiles, and are fully predictable from the distribution of baryons. Here we study in detail the distribution of the apparent dark matter around point masses, which approximate galaxies and galaxy clusters, and their pairs for the QUMOND MOND gravity, taking an external gravitational acceleration $g_e$ into account. At large radii, the apparent halo of a point mass $M$ is shifted against the direction of the external field. When averaged over all lines-of-sight, the halo has a hollow center, and denoting by $a_0$ the MOND acceleration constant, its density behaves like $\rho(r)=\sqrt{Ma_0/G}/(4\pi r^2)$ between the galactocentric radii $\sqrt{GM/a_0}$ and $\sqrt{GMa_0}/g_e$, and like $\rho\propto r^{-7}G^2M^3a_0^3/g_e^5$ further away. Between a pair of point masses, there is a region of negative apparent dark matter density, whose mass can exceed the baryonic mass of the system. The density of the combined dark matter halo is not a sum of the densities of the halos of the individual points. The density has a singularity near the zero-acceleration point, but remains finite in projection. We compute maps of the surface density and the lensing shear for several configurations of the problem, and derive formulas to scale them to further configurations. In general, for a large subset of MOND theories in their weak field regime, for any configuration of the baryonic mass $M$ with the characteristic size of $d$, the total lensing density scales as $\rho({\vec{x}})=\sqrt{Ma_0/G}d^{-2}f\left(\vec{\alpha},\vec{x}/d,g_ed/\sqrt{GMa_0}\right)$, where the vector $\vec{\alpha}$ describes the geometry of the system. Distinguishing between QUMOND and cold dark matter seems possible with the existing instruments.
We report low-temperature resistance measurements in a modulation-doped, (311)A GaAs two-dimensional hole system as a function of applied in-plane strain. The data reveal a strong but anisotropic piezoresistance whose magnitude depends on the density as well as the direction along which the resistance is measured. At a density of $1.6\times10^{11}$ cm$^{-2}$ and for a strain of about $2\times10^{-4}$ applied along [01$\bar{1}$], e.g., the resistance measured along this direction changes by nearly a factor of two while the resistance change in the [$\bar{2}$33] direction is less than 10% and has the opposite sign. Our accurate energy band calculations indicate a pronounced and anisotropic deformation of the heavy-hole dispersion with strain, qualitatively consistent with the experimental data. The extremely anisotropic magnitude of the piezoresistance, however, lacks a quantitative explanation.
In this paper, we introduce the so-called $L_p$ $q$-torsional measure for $p\in\mathbb{R}$ and $q>1$ by establishing the $L_p$ variational formula for the $q$-torsional rigidity of convex bodies without smoothness conditions. Moreover, we prove the existence of solutions to the $L_p$ Minkowski problem with respect to the $q$-torsional rigidity for discrete and general measures when $0<p<1$ and $q>1$.
The evolution of semicircular quantum vortex loops in oscillating potential flow emerging from an aperture is simulated in some highly symmetrical cases. As the frequency of potential flow oscillation increases, vortex loops that are evolving so as eventually to cross all of the streamlines of potential flow are drawn back toward the aperture when the flow reverses. As a result, the escape size of the vortex loops, and hence the net energy transferred from potential flow to vortex flow in such 2π phase-slip events, decreases as the oscillation frequency increases. Above some aperture-dependent and flow-dependent threshold frequency, vortex loops are drawn back into the aperture. Simulations are performed using both radial potential flow and oblate-spheroidal potential flow.
We consider identical quantum bosons with weak contact interactions in a two-dimensional isotropic harmonic trap. When the interactions are turned off, the energy levels are equidistant and highly degenerate. At linear order in the coupling parameter, these degenerate levels split, and we study the patterns of this splitting. It turns out that the problem is mathematically identical to diagonalizing the quantum resonant system of the two-dimensional Gross-Pitaevskii equation, whose classical counterpart has been previously studied in the mathematical literature on turbulence. Our purpose is to explore the implications of the symmetries and energy bounds of this resonant system, previously studied for the classical case, for the quantum level splitting. Simplifications in computing the splitting spectrum numerically result from exploiting the symmetries. The highest energy state emanating from each unperturbed level is explicitly described by our analytics. We furthermore discuss the energy level spacing distributions in the spirit of quantum chaos theory. After separating the eigenvalues into blocks with respect to the known conservation laws, we observe the Wigner-Dyson statistics within specific large blocks, which leaves little room for further integrable structures in the problem beyond the symmetries that are already explicitly known.
We study the reconstructability of (d+2)-dimensional bulk spacetime from (d+1)-dimensional boundary data, particularly concentrating on backgrounds which break (d+1)-dimensional Lorentz invariance. For a large class of such spacetimes, there exist null geodesics which do not reach the boundary. Therefore classically we expect some information is trapped in the bulk and thus invisible at the boundary. We show that this classical intuition correctly predicts the quantum situation: whenever there are null geodesics which do not reach the boundary, there are also "trapped scalar modes" whose boundary imprint is exponentially suppressed. We use these modes to show that no smearing function exists for pure Lifshitz spacetime, nor for any flow which includes a Lifshitz region. Indeed, for any (planar) spacetime which breaks (d+1)-dimensional Lorentz invariance at any radius, we show that local boundary data cannot reconstruct complete local bulk data.
The integration of renewable sources poses challenges at the operational and economic levels of the power grid. In terms of keeping the balance between supply and demand, the usual scheme of supply following load may not be appropriate for large penetration levels of uncertain and intermittent renewable supply. In this paper, we focus on an alternative scheme in which the load follows the supply, exploiting the flexibility associated with the demand side. We consider a model of flexible loads that are to be serviced by zero-marginal cost renewable power together with conventional generation if necessary. Each load demands 1 kW for a specified number of time slots within an operational period. The flexibility of a load resides in the fact that the service may be delivered over any slots within the operational period. Loads therefore require flexible energy services that are differentiated by the demanded duration. We focus on two problems associated with duration-differentiated loads. The first problem deals with the operational decisions that a supplier has to make to serve a given set of duration-differentiated loads. The second problem focuses on a market implementation for duration-differentiated services. We give necessary and sufficient conditions under which the available power can service the loads, and we describe an algorithm that constructs an appropriate allocation. In the event the available supply is inadequate, we characterize the minimum amount of power that must be purchased to service the loads. Next we consider a forward market where consumers can purchase duration-differentiated energy services. We first characterize social welfare maximizing allocations in this forward market and then show the existence of an efficient competitive equilibrium.
We study a double mean field-type PDE related to a prescribed curvature problem on compact surfaces with boundary. We provide a general blow-up analysis, then a Moser-Trudinger inequality, which gives energy-minimizing solutions for some range of parameters. Finally, we provide existence of min-max solutions for a wider range of parameters, which is dense in the plane if the surface is not simply connected.
In this work we consider the state estimation problem in nonlinear/non-Gaussian systems. We introduce a framework, called the scaled unscented transform Gaussian sum filter (SUT-GSF), which combines two ideas: the scaled unscented Kalman filter (SUKF) based on the concept of scaled unscented transform (SUT), and the Gaussian mixture model (GMM). The SUT is used to approximate the mean and covariance of a Gaussian random variable which is transformed by a nonlinear function, while the GMM is adopted to approximate the probability density function (pdf) of a random variable through a set of Gaussian distributions. With these two tools, a framework can be set up to assimilate nonlinear systems in a recursive way. Within this framework, one can treat a nonlinear stochastic system as a mixture model of a set of sub-systems, each of which takes the form of a nonlinear system driven by a known Gaussian random process. Then, for each sub-system, one applies the SUKF to estimate the mean and covariance of the underlying Gaussian random variable transformed by the nonlinear governing equations of the sub-system. Incorporating the estimations of the sub-systems into the GMM gives an explicit (approximate) form of the pdf, which can be regarded as a "complete" solution to the state estimation problem, as all of the statistical information of interest can be obtained from the explicit form of the pdf ... This work is on the construction of the Gaussian sum filter based on the scaled unscented transform.
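The scaled unscented transform at the core of the SUT-GSF can be illustrated in one dimension. A minimal sketch (plain Python; the defaults for α, β, κ are the commonly quoted ones, not necessarily those of the paper, and the function name is our own):

```python
import math

def scaled_ut_1d(mean, var, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a 1-D Gaussian (mean, var) through a nonlinear f via the
    scaled unscented transform; returns the UT estimate of (mean, var)
    of f(x)."""
    n = 1  # state dimension
    lam = alpha ** 2 * (n + kappa) - n
    s = math.sqrt((n + lam) * var)
    sigma_pts = [mean, mean + s, mean - s]
    # Mean weights; the central covariance weight carries the beta correction.
    wm = [lam / (n + lam), 1.0 / (2 * (n + lam)), 1.0 / (2 * (n + lam))]
    wc = [wm[0] + 1.0 - alpha ** 2 + beta] + wm[1:]
    y = [f(x) for x in sigma_pts]
    y_mean = sum(w * yi for w, yi in zip(wm, y))
    y_var = sum(w * (yi - y_mean) ** 2 for w, yi in zip(wc, y))
    return y_mean, y_var

# For a quadratic nonlinearity the UT mean is exact: E[x^2] = m^2 + var.
m, v = scaled_ut_1d(2.0, 0.25, lambda x: x * x)
assert abs(m - (2.0 ** 2 + 0.25)) < 1e-6
```

For this quadratic test case the UT reproduces the exact moments of a Gaussian input, which is precisely the second-order accuracy that motivates using the SUT inside each Gaussian component of the mixture.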
We develop a theory of gapped domain wall between topologically ordered systems in two spatial dimensions. We find a new type of superselection sector -- referred to as the parton sector -- that subdivides the known superselection sectors localized on gapped domain walls. Moreover, we introduce and study the properties of composite superselection sectors that are made out of the parton sectors. We explain a systematic method to define these sectors, their fusion spaces, and their fusion rules, by deriving nontrivial identities relating their quantum dimensions and fusion multiplicities. We propose a set of axioms regarding the ground state entanglement entropy of systems that can host gapped domain walls, generalizing the bulk axioms proposed in [B. Shi, K. Kato, and I. H. Kim, Ann. Phys. 418, 168164 (2020)]. Similar to our analysis in the bulk, we derive our main results by examining the self-consistency relations of an object called information convex set. As an application, we define an analog of topological entanglement entropy for gapped domain walls and derive its exact expression.
Analysis of road accidents is crucial to understand the factors involved and their impact. Accidents usually involve multiple variables like time, weather conditions, age of driver, etc., which makes the data challenging to analyze. To address this, we use Multiple Correspondence Analysis (MCA) first to select the variables that can be visualized effectively in two dimensions, and then to study the correlations among these variables in a two-dimensional scatter plot. For the remaining variables, for which MCA cannot capture ample variance in the projected dimensions, we use hypothesis testing and time series analysis.
We characterize the set of functions $u_0\in L^2(\mathbb{R}^n)$ such that the solution of the problem $u_t=\mathcal{L}u$ in $\mathbb{R}^n\times(0,\infty)$ starting from $u_0$ satisfies upper and lower bounds of the form $c(1+t)^{-\gamma}\le \|u(t)\|_2\le c'(1+t)^{-\gamma}$. Here $\mathcal{L}$ belongs to a large class of linear pseudo-differential operators with homogeneous symbol (including the Laplacian, the fractional Laplacian, etc.). Applications to nonlinear PDEs are discussed: in particular, our characterization provides necessary and sufficient conditions on $u_0$ for a solution of the Navier--Stokes system to satisfy sharp upper and lower decay estimates as above. In doing so, we revisit and improve the theory of \emph{decay characters} by C. Bjorland, C. Niche, and M. E. Schonbek, taking advantage of the insight provided by Littlewood--Paley analysis and the use of Besov spaces.
The rise of the Internet of Things (IoT) and mobile internet applications has spurred interest in location-based services (LBS) for commercial, military, and social applications. While the global positioning system (GPS) dominates outdoor localization, its efficacy wanes indoors due to signal challenges. Indoor localization systems leverage wireless technologies such as Wi-Fi, ZigBee, Bluetooth, and UWB, selected according to context. Received signal strength indicator (RSSI) technology, known for its accuracy and simplicity, is widely adopted. This study employs machine learning algorithms in three phases: supervised regressors, supervised classifiers, and ensemble methods for RSSI-based indoor localization. Additionally, it introduces a weighted least squares technique and pseudo-linear solution approach to address non-linear RSSI measurement equations by approximating them with linear equations. An experimental testbed, utilizing diverse wireless technologies and anchor nodes, is designed for data collection, employing IoT cloud architectures. Pre-processing involves investigating filters for data refinement before algorithm training. The study employs machine learning models like linear regression, polynomial regression, support vector regression, random forest regression, and decision tree regressor across various wireless technologies. These models estimate the geographical coordinates of a moving target node, and their performance is evaluated using metrics such as accuracy, root mean square error, precision, recall, sensitivity, coefficient of determination, and the F1-score. The experiment's outcomes provide insights into the effectiveness of different supervised machine learning techniques in terms of localization accuracy and robustness in indoor environments.
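The pseudo-linear idea mentioned above — turning the quadratic range equations into linear ones — can be sketched in a toy 2-D setting. The anchor positions and ranges below are illustrative, not from the paper's testbed; a real RSSI pipeline would first map RSSI to distance via a path-loss model and then apply weighted least squares over many noisy rows:

```python
def pseudo_linear_fix(anchors, dists):
    """Estimate (x, y) from anchor positions and range estimates.
    Subtracting the first range equation d_1^2 = (x-x_1)^2 + (y-y_1)^2
    from each of the others cancels the quadratic terms x^2 + y^2,
    leaving a linear system in (x, y)."""
    (x1, y1), d1 = anchors[0], dists[0]
    rows, rhs = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        rows.append((2 * (xi - x1), 2 * (yi - y1)))
        rhs.append(d1 ** 2 - di ** 2 + xi ** 2 + yi ** 2 - x1 ** 2 - y1 ** 2)
    # Two difference equations, two unknowns: solve by Cramer's rule.
    (a, b), (c, d) = rows[0], rows[1]
    det = a * d - b * c
    x = (rhs[0] * d - b * rhs[1]) / det
    y = (a * rhs[1] - rhs[0] * c) / det
    return x, y

# Three anchors, noiseless ranges to a target at (1, 1):
x, y = pseudo_linear_fix([(0, 0), (4, 0), (0, 3)],
                         [2 ** 0.5, 10 ** 0.5, 5 ** 0.5])
assert abs(x - 1.0) < 1e-9 and abs(y - 1.0) < 1e-9
```

With more than three anchors the system is overdetermined, and the weighted least squares step mentioned in the abstract would weight each row by the reliability of its range estimate.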
By studying the minimum resources required to perform a unitary transformation, families of metrics and pseudo-metrics on unitary matrices that are closely related to a recently reported quantum speed limit by the author are found. Interestingly, this family of metrics can be naturally converted into useful indicators of the degree of non-commutativity between two unitary matrices.
Sterile neutrinos as source of the mass and flavor mixing of active neutrinos as well as genesis of the dark matter (DM) and matter-antimatter asymmetry have gained special interest. Here we study the case of the Standard Model (SM) extended with three right-handed (RH) neutrinos and a dark sector with two extra sterile neutrinos, odd under a discrete $Z_2$ symmetry. The RH neutrinos are responsible for producing the baryon asymmetry via the high-scale unflavored leptogenesis. They are superheavy and their abundance at the electroweak broken stage is vanishingly small, so that they have no impact on the phenomenology at low energies. The two dark neutrinos generate the tiny mass of two active neutrinos through a mechanism similar to the minimal linear seesaw, and saturate the relic abundance as freeze-in DM via decay of the heavy SM bosons. The absence of the dark Majorana mass terms in the dark linear seesaw is explained by invoking a hidden symmetry, the so-called presymmetry, and the DM candidate appears in the form of a quasi-Dirac neutrino. The $Z_2$ symmetry is broken in the dark neutrino sector, but exact in the realm of RH neutrinos. The required coupling weakness for the freeze-in DM neutrino is related to a very small breach of unitarity in the active neutrino mixing matrix. We show how phenomenological constraints on the production and decay of the DM neutrino imply an upper bound around 1 MeV for its mass and unitarity up to $\mathcal{O}(10^{-7})$ for the mixing matrix.
In this paper, we prove that: (1) Let $f:G\rightarrow H$ be a continuous $d$-open surjective homomorphism; if $G$ is an $\mathbb{R}$-factorizable paratopological group, then so is $H$. Peng and Zhang's result \cite[Theorem 1.7]{PZ} is improved. (2) Let $G$ be a regular $\mathbb{R}$-factorizable paratopological group; then a subgroup $H$ of $G$ is $\mathbb{R}$-factorizable if and only if $H$ is $z$-embedded in $G$. This result gives a positive answer to a question of M.~Sanchis and M.~Tkachenko \cite[Problem 5.3]{ST}.
Let $A$ be a square complex matrix and $z$ a complex number. The distance, with respect to the spectral norm, from $A$ to the set of matrices which have $z$ as an eigenvalue is less than or equal to the distance from $z$ to the spectrum of $A$. If these two distances are equal for a sufficiently large finite set of numbers $z$ which are not in the spectrum of $A$, then the matrix $A$ is normal.
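A quick numerical illustration of the inequality, using the standard fact that the spectral-norm distance from $A$ to the set of matrices having $z$ as an eigenvalue equals the smallest singular value of $A - zI$. The 2x2 helper below is our own sketch (real matrices only, closed-form eigenvalues of $M^{\mathsf T}M$):

```python
import math

def sigma_min_2x2(m):
    """Smallest singular value of a real 2x2 matrix: the square root of the
    smaller eigenvalue of M^T M, computed in closed form."""
    (a, b), (c, d) = m
    # Entries of the symmetric matrix M^T M: [[p, q], [q, r]].
    p, q, r = a * a + c * c, a * b + c * d, b * b + d * d
    tr, det = p + r, p * r - q * q
    lam_min = (tr - math.sqrt(max(tr * tr - 4 * det, 0.0))) / 2.0
    return math.sqrt(max(lam_min, 0.0))

# Normal (symmetric) example: A = [[2,1],[1,2]] has spectrum {1, 3}.
# For z = 0: sigma_min(A - z*I) equals dist(z, spectrum) = 1, as the
# result predicts for normal matrices.
A = [[2.0, 1.0], [1.0, 2.0]]
assert abs(sigma_min_2x2(A) - 1.0) < 1e-12

# Non-normal example: A = [[0,1],[0,0]] has spectrum {0}. For z = 1,
# sigma_min(A - I) is about 0.618, strictly below dist(1, {0}) = 1.
B = [[-1.0, 1.0], [0.0, -1.0]]
assert sigma_min_2x2(B) < 1.0
```

The strict gap in the second example is exactly the behavior the result rules out for normal matrices: equality at sufficiently many points off the spectrum forces normality.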
We present a theory of jellyfish swarm formation and exemplify it with simulations of active Brownian particles. The motivation for our analysis is the phenomenon of jellyfish blooms in the ocean and clustering of jellyfish in tank experiments. We argue that such clusters emerge due to an externally induced phase transition of jellyfish density, such as convergent flows, which is then maintained and amplified by self-induced stimuli. Our study introduces three mechanisms, not taken into account before, that are relevant for a better understanding of jellyfish blooming: a signaling tracer, jellyfish-wall interaction, and the ignoring of external stimuli. Our results agree with the biological fact that jellyfish exhibit an extreme sensitivity to stimuli in order to achieve favorable aggregations. Based on our theoretical framework, we are able to provide a clear terminology for future experimental analysis of jellyfish swarming and we pinpoint potential limitations of tank experiments.
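A single update step of the active-Brownian-particle dynamics underlying such simulations can be sketched as follows. Parameter values are illustrative, and the paper's specific ingredients (signaling tracer, wall interaction, stimulus response) as well as translational noise are omitted for brevity:

```python
import math
import random

def abp_step(x, y, theta, v0=1.0, dr=0.1, dt=0.01, rng=random):
    """One Euler-Maruyama step of a 2-D active Brownian particle:
    self-propulsion at speed v0 along the heading theta, plus rotational
    diffusion with coefficient dr."""
    x += v0 * math.cos(theta) * dt
    y += v0 * math.sin(theta) * dt
    theta += math.sqrt(2.0 * dr * dt) * rng.gauss(0.0, 1.0)
    return x, y, theta

# Sanity check: with dr = 0 the particle moves ballistically along its
# initial heading, covering v0 * t in total.
x, y, th = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, th = abp_step(x, y, th, v0=1.0, dr=0.0, dt=0.01)
assert abs(x - 1.0) < 1e-9 and abs(y) < 1e-9
```

In a swarm simulation, many such particles would additionally couple through the tracer field and wall forces described in the abstract.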
We study abrupt changes in the dynamics and/or steady state of fermionic dissipative systems produced by small changes of the system parameters. Specifically, we consider open fermionic systems whose dynamics is described by master equations that are quadratic (and, under certain conditions, quartic) in creation and annihilation operators. We analyze both phase transitions in steady state, as well as "dynamical transitions". The latter are characterized by abrupt changes in the rate at which the system asymptotically approaches the steady state. We illustrate our general findings with relevant examples of fermionic (and, equivalently, spin) systems, and show that they can be realized in ion chains.
We have investigated the temperature dependence of the electrical conductivity sigma(N,B,T) of nominally uncompensated, neutron-transmutation-doped ^{70}Ge:Ga samples in magnetic fields up to B=8 T at low temperatures (T=0.05-0.5 K). In our earlier studies at B=0, the critical exponent mu=0.5 defined by sigma(N,0,0) \propto (N-N_c)^{mu} has been determined for the same series of ^{70}Ge:Ga samples with the doping concentration N ranging from 1.861 \times 10^{17} cm^{-3} to 2.434 \times 10^{17} cm^{-3}. In magnetic fields, the motion of carriers loses time-reversal symmetry, so the universality class, and with it the value of mu, may change. In this work, we show that magnetic fields indeed affect the value of mu (mu changes from 0.5 at B=0 to 1.1 at B \geq 4 T). The same exponent mu'=1.1 is also found in the magnetic-field-induced MIT for three different ^{70}Ge:Ga samples, i.e., sigma(N,B,0) \propto [B_c(N)-B]^{mu'} where B_c(N) is the concentration-dependent critical magnetic induction. We show that sigma(N,B,0) obeys a simple scaling rule on the (N,B) plane. Based on this finding, we derive from a simple mathematical argument that mu=mu' as has been observed in our experiment.
Caching data files directly on mobile user devices combined with device-to-device (D2D) communications has recently been suggested to improve the capacity of wireless networks. We investigate the performance of regenerating codes in terms of the total energy consumption of a cellular network. We show that regenerating codes can offer large performance gains. It turns out that using redundancy against storage node failures is only beneficial if the popularity of the data is between certain thresholds. As our major contribution, we investigate under which circumstances regenerating codes with multiple redundant data fragments outperform uncoded caching.