Ooguri, Vafa, and Verlinde have recently proposed an approach to string cosmology which is based on the idea that cosmological string moduli should be selected by a Hartle-Hawking wave function. They are led to consider a certain Euclidean space which has *two different Lorentzian interpretations*, one of which is a model of an *accelerating cosmology*. We describe in detail how to implement this idea without resorting to a "complex metric". We show that the four-dimensional version of the OVV cosmology is null geodesically incomplete but has no curvature singularity; also that it is [barely] stable against the Seiberg-Witten process [nucleation of brane pairs]. The introduction of matter satisfying the Null Energy Condition has the paradoxical effect of both stabilizing the spacetime and rendering it genuinely singular. We show however that it is possible to arrange for an effective violation of the NEC in such a way that the singularity is avoided and yet the spacetime remains stable. The possible implications for the early history of these cosmologies are discussed.
Singularities in and Stability of Ooguri-Vafa-Verlinde Cosmologies
Adaptive bitrate streaming (ABR) has been widely adopted to support video streaming services over heterogeneous devices and varying network conditions. With ABR, each video is transcoded into multiple representations at different bitrates and resolutions. However, video transcoding is compute-intensive, requiring transcoding service providers to deploy a large number of servers to transcode the video contents published by content producers. A natural question for the transcoding service provider is therefore how to provision computing resources for transcoding while maximizing service profit. To address this problem, we design a cloud video transcoding system that takes advantage of cloud computing technology to elastically allocate computing resources. We propose a method that jointly considers the task scheduling and resource provisioning problems on two timescales, and formulate service profit maximization as a two-timescale stochastic optimization problem. We derive approximate policies for task scheduling and resource provisioning. Based on the proposed methods, we implement our open-source cloud video transcoding system, Morph, and evaluate its performance in a real environment. The experimental results demonstrate that the proposed method reduces resource consumption and achieves higher profit than the baseline schemes.
Resource Provisioning and Profit Maximization for Transcoding in Information Centric Networking
We construct most symmetric saddle towers in Heisenberg space, i.e., periodic minimal surfaces that can be seen as the desingularization of vertical planes intersecting equiangularly. The key point is the construction of a suitable barrier to ensure the convergence of a family of bounded minimal disks. Such a barrier is actually a periodic deformation of a minimal plane with prescribed asymptotic behavior. A consequence of the barrier construction is that the number of disjoint minimal graphs supported on domains is not bounded in Heisenberg space.
Saddle towers in Heisenberg space
We have studied numerically the dynamics of sliding charge-density waves (CDWs) in the presence of impurities in d=1,2. The model considered exhibits a first order dynamical transition at a critical driving force $F_c$ between ``rough'' (disorder dominated) $(F<F_c)$ and ``flat'' $(F>F_c)$ sliding phases where disorder is washed out by the external drive. The effective model for the sliding CDWs in the presence of impurities can be mapped onto that of a magnetic flux line pinned by columnar defects and tilted by an applied field. The dynamical transition of sliding CDWs corresponds to the transverse Meissner effect of the tilted flux line.
Dynamical Transition in Sliding Charge-density Waves with Quenched Disorder
This work revisits, from a geometric perspective, the notion of discrete connection on a principal bundle, introduced by M. Leok, J. Marsden and A. Weinstein. It provides precise definitions of discrete connection, discrete connection form and discrete horizontal lift and studies some of their basic properties and relationships. An existence result for discrete connections on principal bundles equipped with appropriate Riemannian metrics is proved.
A geometric approach to discrete connections on principal bundles
We compute the local Gromov-Witten invariants of the "closed vertex", that is, a configuration of three rational curves meeting in a single triple point in a Calabi-Yau threefold. The method is to express the local invariants of the vertex in terms of ordinary Gromov-Witten invariants of a certain blowup of CP^3 and then to compute those invariants via the geometry of the Cremona transformation.
The closed topological vertex via the Cremona transform
Unbiased learning to rank (ULTR) aims to mitigate the various biases present in user clicks, such as position bias, trust bias, and presentation bias, and to learn an effective ranker. In this paper, we introduce our winning approach for the "Unbiased Learning to Rank" task in WSDM Cup 2023. We find that the provided data is severely biased, so neural models trained directly on the top 10 results with click information are unsatisfactory. We therefore extract multiple heuristic-based features for multiple fields of the results, adjust the click labels, add true negatives, and re-weight the samples during model training. Since the propensities learned by existing ULTR methods are not decreasing with respect to position, we also calibrate the propensities according to the click ratios and ensemble models trained in two different ways. Our method won the 3rd prize with a DCG@10 score of 9.80, which is 1.1% lower than the 2nd and 25.3% higher than the 4th.
Feature-Enhanced Network with Hybrid Debiasing Strategies for Unbiased Learning to Rank
We present a renormalization group analysis of Einstein-Rosen waves, i.e., vacuum spacetimes with whole-cylinder symmetry. It is found that self-similar solutions appear as fixed points of the renormalization group transformation. These solutions correspond to explosive gravitational waves at late times and collapsing gravitational waves at early times. Based on a linear perturbation analysis of the self-similar solutions, we conclude that the self-similar evolution is stable as explosive gravitational waves under the condition of no incoming waves, while it is weakly unstable as collapsing gravitational waves. The result implies that self-similar solutions can describe the asymptotic behavior of more general solutions for exploding gravitational waves, and thus extends the similarity hypothesis in general relativity from spherical symmetry to cylindrical symmetry.
Renormalization group approach to Einstein-Rosen waves
This paper is concerned with the numerical implementation of a formula in the enclosure method as applied to a prototype inverse initial boundary value problem for thermal imaging in one space dimension. A precise error estimate of the formula is given, and the effect of discretizing the integral of the measured data used in the formula is studied. The formula requires a large frequency to converge; however, the number of time-interval divisions grows exponentially as the frequency increases. Therefore, for a given number of divisions, we fix the trusted frequency region of convergence with some given error bound. The trusted frequency region is computed theoretically using theorems provided in this paper and is numerically implemented for various cases.
Trusted frequency region of convergence for the enclosure method in an inverse heat equation
With the advancement of computer architectures, the use of computational models has proliferated for solving complex problems in many scientific applications such as nuclear physics and climate research. However, the potential of such models is often hindered because they tend to be computationally expensive and consequently ill-suited to uncertainty quantification. Furthermore, they are usually not calibrated with real-time observations. We develop a computationally efficient algorithm based on variational Bayes inference (VBI) for the calibration of computer models with Gaussian processes. Unfortunately, the speed and scalability of VBI diminish when applied to the calibration framework with dependent data. To preserve the efficiency of VBI, we adopt a pairwise decomposition of the data likelihood using vine copulas that separates the information on the dependence structure of the data from their marginal distributions. We provide both theoretical and empirical evidence for the computational scalability of our methodology and describe all the details necessary for an efficient implementation of the proposed algorithm. We also demonstrate the opportunities our method offers practitioners on a real data example through the calibration of the Liquid Drop Model of nuclear binding energies.
Variational Inference with Vine Copulas: An efficient Approach for Bayesian Computer Model Calibration
We give an elementary description of the space of formal periods of a mixed motive. This allows for a simplified reformulation of the period conjectures of Grothendieck and Kontsevich-Zagier. Furthermore, we develop machinery which in principle allows one to determine the space of formal periods of an arbitrary mixed motive explicitly.
A note on formal periods
Automated vascular segmentation on optical coherence tomography angiography (OCTA) is important for the quantitative analysis of retinal microvasculature in neuroretinal and systemic diseases. Despite recent improvements, artifacts continue to pose challenges for segmentation. Our study focused on removing the speckle noise artifact from OCTA images when performing segmentation. Speckle noise is common in OCTA and is particularly prominent over large non-perfusion areas, where it may interfere with the proper assessment of the retinal vasculature. In this study, we propose a novel Supervision Vessel Segmentation network (SVS-net) to detect vessels of different sizes. The SVS-net includes a new attention-based module to describe vessel positions and facilitate the understanding of the network learning process. The model is efficient and explainable and could be utilized to reduce the need for manual labeling. Our SVS-net achieved better accuracy, recall, F1 score, and Kappa score than other well-recognized models.
SVS-net: A Novel Semantic Segmentation Network in Optical Coherence Tomography Angiography Images
For intelligent reflecting surface (IRS) aided downlink communication in frequency division duplex (FDD) systems, the overhead for the base station (BS) to acquire channel state information (CSI) is extremely high under the conventional ``estimate-then-quantize'' scheme, where the users first estimate and then feed back their channels to the BS. Recently, [1] revealed a strong correlation in different users' cascaded channels stemming from their common BS-IRS channel component, and leveraged such a correlation to significantly reduce the pilot transmission overhead in IRS-aided uplink communication. In this paper, we aim to exploit the above channel property for reducing the overhead of both pilot transmission and feedback transmission in IRS-aided downlink communication. Different from the uplink counterpart where the BS possesses the pilot signals containing the CSI of all the users, in downlink communication, the distributed users merely receive the pilot signals containing their own CSI and cannot leverage the correlation in different users' channels revealed in [1]. To tackle this challenge, this paper proposes a novel ``quantize-then-estimate'' protocol in FDD IRS-aided downlink communication. Specifically, the users first quantize their received pilot signals, instead of the channels estimated from the pilot signals, and then transmit the quantization bits to the BS. After de-quantizing the pilot signals received by all the users, the BS estimates all the cascaded channels by leveraging the correlation embedded in them, similar to the uplink scenario. Furthermore, we manage to show both analytically and numerically the great overhead reduction in terms of pilot transmission and feedback transmission arising from our proposed ``quantize-then-estimate'' protocol.
A Quantize-then-Estimate Protocol for CSI Acquisition in IRS-Aided Downlink Communication
We present a systematic study of the ballistic electron conductance through sp and 3d transition metal atoms attached to copper and palladium crystalline electrodes. We employ the ab initio screened Korringa-Kohn-Rostoker Green's function method to calculate the electronic structure of the nanocontacts, while the ballistic transmission and conductance eigenchannels are obtained by means of the Kubo approach as formulated by Baranger and Stone. We demonstrate that the conductance of these systems is mainly determined by the electronic properties of the atom bridging the macroscopic leads. We classify the conducting eigenchannels according to the atomic orbitals of the contact atom and the irreducible representations of the symmetry point group of the system, which leads to a microscopic understanding of the conductance. We show that if impurity resonances in the density of states of the contact atom appear at the Fermi energy, additional channels of appropriate symmetry can open. On the other hand, the transmission of the existing channels can be blocked by impurity scattering.
Transport properties of single atoms
Convolution is an integral operation that defines how the shape of one function is modified by another function. This powerful concept forms the basis of hierarchical feature learning in deep neural networks. Although performing convolution in Euclidean geometries is fairly straightforward, its extension to other topological spaces---such as a sphere ($\mathbb{S}^2$) or a unit ball ($\mathbb{B}^3$)---entails unique challenges. In this work, we propose a novel `\emph{volumetric convolution}' operation that can effectively model and convolve arbitrary functions in $\mathbb{B}^3$. We develop a theoretical framework for \emph{volumetric convolution} based on Zernike polynomials and efficiently implement it as a differentiable and easily pluggable layer in deep networks. By construction, our formulation leads to a novel formula for measuring the symmetry of a function in $\mathbb{B}^3$ around an arbitrary axis, which is useful in function analysis tasks. We demonstrate the efficacy of the proposed volumetric convolution operation on a viable use case, i.e., 3D object recognition.
Representation Learning on Unit Ball with 3D Roto-Translational Equivariance
We propose that the recently observed violation of the Wiedemann-Franz law in the normal state of underdoped cuprates is caused by spin-charge separation and dynamical chiral symmetry breaking (CSB) in a (2+1)-dimensional system consisting of massless Dirac fermions, charged bosons, and a gauge field. While the d-wave spinon gap vanishes at the Fermi points, the nodal fermions acquire a finite mass due to strong gauge fluctuations. This mass provides a gap below which no free fermions can be excited, implying that there is no residual linear term in the thermal conductivity, in good agreement with experiments. Other physical implications of the CSB are also discussed.
Chiral symmetry breaking and violation of the Wiedemann-Franz law in underdoped cuprates
We consider a magnetic impurity in the antiferromagnetic spin-1/2 chain which is equivalent to the two-channel Kondo problem in terms of the field theoretical description. Using a modification of the transfer-matrix density matrix renormalization group (DMRG) we are able to determine local and global properties in the thermodynamic limit. The cross-over function for the impurity susceptibility is calculated over a large temperature range, which exhibits universal data-collapse. We are also able to determine the local susceptibilities near the impurity, which show an interesting competition of boundary effects. This results in quantitative predictions for experiments on doped spin-1/2 chains, which could observe two-channel Kondo physics directly.
Universal cross-over behavior of a magnetic impurity and consequences for doping in spin-1/2 chains
Modern computationally heavy applications are often time-sensitive, demanding distributed strategies to accelerate them. On the other hand, distributed computing suffers in practice from the bottleneck of slow workers. Distributed coded computing is an attractive solution that adds redundancy so that a subset of the distributed computations suffices to obtain the final result. However, the final result is still either obtained within the desired time or not, and in the latter case the resources spent are wasted. In this paper, we introduce the novel concept of layered-resolution distributed coded computation, in which lower resolutions of the final result are obtained from the collective results of the workers at an earlier stage than the final result. This innovation makes deadline-based systems more effective, since even if a computational job is terminated because of timing, an approximate version of the final result can be released. Based on our theoretical and empirical results, the average execution delay for the first resolution is notably smaller than that for the final resolution. Moreover, the probability of meeting a deadline is one for the first resolution in a setting where the final resolution exceeds the deadline almost all the time, in contrast to the low success rate of systems with no layering.
Distributed Computations with Layered Resolution
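The layered-resolution idea above can be illustrated with a minimal sketch that is our own simplification, not the paper's coded-computing scheme: a matrix-vector product is split into a coarse first resolution plus a refinement layer, so a usable approximation exists before the exact result is ready.

```python
import numpy as np

# Illustrative sketch (not the paper's scheme): split A @ x into a
# coarse first resolution and a refinement layer.
rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, size=(4, 4))
x = rng.uniform(-1, 1, size=4)

A_coarse = np.round(A, 1)         # low-resolution copy of A
A_detail = A - A_coarse           # what the coarse copy misses

y_first = A_coarse @ x            # resolution 1: available early
y_final = y_first + A_detail @ x  # final resolution: exact

assert np.allclose(y_final, A @ x)
```

Here the first resolution is cheap and already close to the answer, while the second layer restores exactness; in the paper's setting each layer would additionally be distributed and coded across workers.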
Emphasizing the contribution of Professor Roger A. Cowley, FRS, to the theory of Raman scattering from crystals, the development of the theory of Raman scattering since 1928 is briefly reviewed. Some experimental studies of strontium titanate using inelastic neutron scattering, Raman scattering, electron paramagnetic resonance measurements, and X-ray and gamma-ray techniques are also briefly discussed. Using Schwabl's semi-phenomenological theory for the soft mode and central peak, we have developed (a) a one-phonon Green's function exhibiting the three-peaked structure and (b) a two-phonon Green's function involving one hard-mode underdamped quasiharmonic phonon and one three-peaked soft-mode phonon. We have developed the precursor-order-induced Raman scattering near the displacive phase transition in terms of Green's functions. Using group theory, we have predicted the Raman-active modes in strontium titanate contributing to critical Raman scattering near hard-mode frequencies above and below the critical temperature. We have calculated the critical integrated Raman scattering intensity and the two-phonon background Raman scattering intensity near hard-mode frequencies above and below the critical temperature. The results show the same trends as some of the experimental observations.
Critical Integrated Raman Scattering Intensity near the cubic-tetragonal phase transition in Strontium Titanate
The purpose of this paper is to extend the notions of generalised Poincar\'e series and divisorial generalised Poincar\'e series (of motivic nature) introduced by Campillo, Delgado and Gusein-Zade for complex curve singularities to curves defined over perfect fields, as well as to express them in terms of an embedded resolution of curves.
Generalised Poincar\'e series and embedded resolution of curves
We have extended the search for topological insulators to the ternary tetradymite-like compounds M2X2Y (M = Bi or Sb; X and Y = S, Se or Te), which are variations of the well-known binary compounds Bi2Se3 and Bi2Te3. Our first-principles computations suggest that five existing compounds are strong topological insulators with a single Dirac cone on the surface. In particular, stoichiometric Bi2Se2S, Sb2Te2Se and Sb2Te2S are predicted to have an isolated Dirac cone on their naturally cleaved surface. This finding paves the way for the realization of the topological transport regime.
An isolated Dirac cone on the surface of ternary tetradymite-like topological insulators
I prove that a vector bundle on a minuscule homogeneous variety splits into a direct sum of line bundles if and only if its restriction to the union of two-dimensional Schubert subvarieties splits. A case-by-case analysis is done.
Splitting criteria for vector bundles on minuscule homogeneous varieties
We study the uniqueness of steady states of strong-KPP reaction--diffusion equations in general domains under various boundary conditions. We show that positive bounded steady states are unique provided the domain satisfies a certain spectral nondegeneracy condition. We also formulate a number of open problems and conjectures.
The steady states of strong-KPP reactions in general domains
Considering a realistic situation where electron-neutral collisions persist in the plasma, analytical calculations are carried out for THz radiation generation by the beating of two skew cosh-Gaussian (skew ChG) lasers with parameters n and s. The advantage of these lasers over Gaussian lasers is discussed in detail with respect to the effects of collisions and beam width on the THz field amplitude and the efficiency of the mechanism. A critical transverse distance of the peak of the THz field is defined, which depends on the skewness parameter of the skew ChG lasers. Although electron-neutral collisions and larger beam widths lead to a drastic reduction of the THz field when skew ChG lasers are used in the plasma, the efficiency of the mechanism remains much larger than in the case of Gaussian lasers. Moreover, the skew ChG lasers produce stronger and multifocal THz radiation.
Powered Multifocal THz Radiation by Mixing of Two Skew (ChG) Cosh-Gaussian Laser Beams in Collisional Plasma
We solve the wave equations of arbitrary integer spin fields in the BTZ black hole background and obtain exact expressions for their quasinormal modes. We show that these quasinormal modes precisely agree with the location of the poles of the corresponding two point function in the dual conformal field theory as predicted by the AdS/CFT correspondence. We then use these quasinormal modes to construct the one-loop determinant of the higher spin field in the thermal BTZ background. This is shown to agree with that obtained from the corresponding heat kernel constructed recently by group theoretic methods.
Higher spin quasinormal modes and one-loop determinants in the BTZ black hole
Grover's algorithm achieves a quadratic speedup over classical algorithms, but it is considered necessary to know the value of $\lambda$ exactly [Phys. Rev. Lett. 95, 150501 (2005); Phys. Rev. Lett. 113, 210501 (2014)], where $\lambda$ is the fraction of target items in the database. In this paper, we find that Grover's algorithm can in fact be applied when one can only identify, from a given series of disjoint ranges, the range that $\lambda$ belongs to. However, Grover's algorithm still cannot maintain a high success probability when there exist multiple target items. For this problem, we propose a complementary-multiphase quantum search algorithm, in which multiple phases complement each other so that a high overall success probability can be maintained. Compared to existing algorithms, in the case defined above, our algorithm for the first time achieves the following three goals simultaneously: (1) the success probability can be no less than any given value between 0 and 1, (2) the algorithm is applicable to the entire range of $\lambda$, and (3) the number of iterations is almost the same as that of Grover's algorithm. In particular, compared to the optimal fixed-point algorithm [Phys. Rev. Lett. 113, 210501 (2014)], our algorithm uses fewer iterations to achieve a success probability greater than 82.71\%; e.g., when the minimum success probability is required to be 99.25\%, the number of iterations can be reduced by 50\%.
Complementary-multiphase quantum search for all numbers of target items
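Why knowing $\lambda$ matters can be seen from the standard Grover success-probability formula $P = \sin^2\!\big((2j+1)\arcsin\sqrt{\lambda}\big)$ after $j$ iterations. The short sketch below is only this textbook formula, not the proposed multiphase algorithm: an iteration count tuned for one value of $\lambda$ can perform poorly for another.

```python
import math

# Textbook Grover success probability after j iterations
# (not the complementary-multiphase algorithm of the paper).
def grover_success(lam, j):
    theta = math.asin(math.sqrt(lam))
    return math.sin((2 * j + 1) * theta) ** 2

def grover_iterations(lam):
    # iteration count that approximately maximizes the probability
    return max(0, round(math.pi / (4 * math.asin(math.sqrt(lam))) - 0.5))

# j tuned for lam = 0.01 nearly succeeds there, but the same j
# performs badly if the true fraction is lam = 0.04:
j = grover_iterations(0.01)
print(grover_success(0.01, j), grover_success(0.04, j))
```

This sensitivity is exactly the gap that range-based knowledge of $\lambda$, together with complementary phases, is meant to close.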
Large language models (LLMs) have demonstrated dominant performance in many NLP tasks, especially generative tasks. However, they often fall short in some information extraction tasks, particularly those requiring domain-specific knowledge, such as biomedical named entity recognition (NER). In this paper, inspired by chain-of-thought prompting, we leverage the LLM to solve biomedical NER step by step: we break down the NER task into entity span extraction and entity type determination. Additionally, for entity type determination, we inject entity knowledge to address the LLM's lack of domain knowledge when predicting entity categories. Experimental results show a significant improvement of our two-step BioNER approach over the previous few-shot LLM baseline. Additionally, the incorporation of external knowledge significantly enhances entity category determination performance.
Inspire the Large Language Model by External Knowledge on BioMedical Named Entity Recognition
Decision-making under uncertainty is crucial for any decision that is sensitive to perturbations in the observed data. One method of incorporating uncertainty into optimal decision-making is robust optimization, which minimizes the worst-case scenario over some uncertainty set. We connect conformal prediction regions to robust optimization, providing finite-sample valid and conservative ellipsoidal uncertainty sets, aptly named conformal uncertainty sets. In pursuit of this connection we explicitly define the Mahalanobis distance as a potential conformity score in full conformal prediction. We also compare the coverage and optimization performance of conformal uncertainty sets, specifically those generated with the Mahalanobis distance, to traditional ellipsoidal uncertainty sets on a collection of simulated robust optimization examples.
Conformal Uncertainty Sets for Robust Optimization
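A simplified flavor of the construction can be sketched in a few lines: a split-conformal (rather than the paper's full conformal) ellipsoid that uses the Mahalanobis distance as the conformity score. All names and the Gaussian example below are our own assumptions, not the paper's setup.

```python
import numpy as np

# Split-conformal ellipsoidal uncertainty set with Mahalanobis distance
# as the conformity score (a simplification of the full conformal method).
def mahalanobis_ellipsoid(cal, alpha=0.1):
    mu = cal.mean(axis=0)
    prec = np.linalg.inv(np.cov(cal, rowvar=False))
    scores = np.einsum('ij,jk,ik->i', cal - mu, prec, cal - mu)
    n = len(scores)
    k = min(n, int(np.ceil((n + 1) * (1 - alpha))))
    radius = np.sort(scores)[k - 1]
    # the set is {x : (x - mu)^T prec (x - mu) <= radius}
    return mu, prec, radius

rng = np.random.default_rng(0)
cov = [[2.0, 0.5], [0.5, 1.0]]
cal = rng.multivariate_normal([0.0, 0.0], cov, size=500)
mu, prec, radius = mahalanobis_ellipsoid(cal, alpha=0.1)

test = rng.multivariate_normal([0.0, 0.0], cov, size=2000)
inside = np.einsum('ij,jk,ik->i', test - mu, prec, test - mu) <= radius
print(f"empirical coverage: {inside.mean():.3f}")  # close to 1 - alpha
```

The finite-sample quantile index $\lceil (n+1)(1-\alpha) \rceil$ is what makes the coverage guarantee conservative rather than merely asymptotic.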
For two properly intersecting effective cycles X and Y in projective space and their intersection product Z, the metric B\'ezout theorem relates the degrees and heights of X, Y, and Z, as well as their distances and algebraic distances to a given point $\theta$. Applications of this theorem lie in the area of Diophantine approximation, giving estimates for the approximation properties of Z with respect to $\theta$ in terms of those of X and Y.
Diophantine Approximation on projective Varieties I: Algebraic distance and metric B\'ezout Theorem
As traffic congestion becomes a huge problem for most developing and developed countries across the world, intelligent transportation systems (ITS) are becoming a hot topic attracting the attention of researchers and the general public alike. In this paper, we demonstrate a specific implementation of an ITS whereby traffic lights are actuated by DSRC radios installed in vehicles. More specifically, we report the design of a prototype DSRC-Actuated Traffic Lights (DSRC-ATL) system. It is shown that this system can reduce travel and commute times significantly, especially during rush hours. Furthermore, the results reported in this paper do not assume or require all vehicles to be equipped with DSRC radios. Even at low penetration ratios, e.g., when only 20% of all vehicles in a city are equipped with DSRC radios, the overall performance of the designed system is superior to current traffic control systems.
Increasing Traffic Flows with DSRC Technology: Field Trials and Performance Evaluation
In this paper, we prove global in time existence, uniqueness and stability of mild solutions near vacuum for the 4-wave inhomogeneous kinetic wave equation, for Laplacian dispersion relation in dimension $d=2,3$. We also show that for non-negative initial data, the solution remains non-negative. This is achieved by connecting the inhomogeneous kinetic wave equation to the cubic part of a quantum Boltzmann-type equation with moderately hard potential and no collisional averaging.
Global well-posedness of the inhomogeneous kinetic wave equation near vacuum
We propose an approach for annotating object classes using free-form text written by undirected and untrained annotators. Free-form labeling is natural for annotators, they intuitively provide very specific and exhaustive labels, and no training stage is necessary. We first collect 729 labels on 15k images using 124 different annotators. Then we automatically enrich the structure of these free-form annotations by discovering a natural vocabulary of 4020 classes within them. This vocabulary represents the natural distribution of objects well and is learned directly from data, instead of being an educated guess done before collecting any labels. Hence, the natural vocabulary emerges from a large mass of free-form annotations. To do so, we (i) map the raw input strings to entities in an ontology of physical objects (which gives them an unambiguous meaning); and (ii) leverage inter-annotator co-occurrences, as well as biases and knowledge specific to individual annotators. Finally, we also automatically extract natural vocabularies of reduced size that have high object coverage while remaining specific. These reduced vocabularies represent the natural distribution of objects much better than commonly used predefined vocabularies. Moreover, they feature more uniform sample distribution over classes.
Natural Vocabulary Emerges from Free-Form Annotations
We propose a randomized method for solving linear programs with a large number of columns but a relatively small number of constraints. Since enumerating all the columns is usually unrealistic, such linear programs are commonly solved by column generation, which is often still computationally challenging due to the intractability of the subproblem in many applications. Instead of iteratively introducing one column at a time as in column generation, our proposed method involves sampling a collection of columns according to a user-specified randomization scheme and solving the linear program consisting of the sampled columns. While similar methods for solving large-scale linear programs by sampling columns (or, equivalently, sampling constraints in the dual) have been proposed in the literature, in this paper we derive an upper bound on the optimality gap that holds with high probability. This bound converges at a rate $1 / \sqrt{K}$, where $K$ is the number of sampled columns, to the optimality gap of a linear program related to the sampling distribution. We analyze the gap of this latter linear program, which we dub the distributional counterpart, and derive conditions under which this gap will be small. Finally, we numerically demonstrate the effectiveness of the proposed method in the cutting-stock problem and in nonparametric choice model estimation.
Column-Randomized Linear Programs: Performance Guarantees and Applications
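The column-sampling idea can be made concrete with a toy cutting-stock LP. This is our own example, not from the paper: `scipy.optimize.linprog` stands in for a generic LP solver, and the single-size patterns are kept in every sample only to guarantee feasibility of the restricted program.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Toy cutting-stock LP: minimize rolls used subject to piece demand.
rng = np.random.default_rng(1)
sizes, roll, demand = np.array([3, 4, 5]), 10, np.array([25, 20, 15])

# Enumerate all cutting patterns (the full, normally huge, column set).
patterns = [p for p in itertools.product(range(4), range(3), range(3))
            if 0 < np.dot(p, sizes) <= roll]
A = np.array(patterns).T  # rows: piece sizes, columns: patterns

def solve(cols):
    # min 1^T x  s.t.  A[:, cols] x >= demand,  x >= 0
    res = linprog(np.ones(len(cols)), A_ub=-A[:, cols], b_ub=-demand,
                  bounds=(0, None))
    return res.fun

full = solve(list(range(A.shape[1])))

# Sample K extra columns; keep single-size patterns for feasibility.
singles = [i for i, p in enumerate(patterns) if np.count_nonzero(p) == 1]
K = 8
sampled = sorted(set(singles) | set(rng.choice(A.shape[1], K, replace=False)))
approx = solve(sampled)
print(full, approx)  # approx >= full; the gap shrinks as K grows
```

Since the sampled LP optimizes over a subset of the columns, its optimum can only be worse than (or equal to) the full optimum, which is the gap the paper's $1/\sqrt{K}$ bound controls with high probability.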
Bayesian learning using Gaussian processes provides a foundational framework for making decisions in a manner that balances what is known with what could be learned by gathering data. In this dissertation, we develop techniques for broadening the applicability of Gaussian processes. This is done in two ways. Firstly, we develop pathwise conditioning techniques for Gaussian processes, which allow one to express posterior random functions as prior random functions plus a dependent update term. We introduce a wide class of efficient approximations built from this viewpoint, which can be randomly sampled once in advance, and evaluated at arbitrary locations without any subsequent stochasticity. This key property improves efficiency and makes it simpler to deploy Gaussian process models in decision-making settings. Secondly, we develop a collection of Gaussian process models over non-Euclidean spaces, including Riemannian manifolds and graphs. We derive fully constructive expressions for the covariance kernels of scalar-valued Gaussian processes on Riemannian manifolds and graphs. Building on these ideas, we describe a formalism for defining vector-valued Gaussian processes on Riemannian manifolds. The introduced techniques allow all of these models to be trained using standard computational methods. In total, these contributions make Gaussian processes easier to work with and allow them to be used within a wider class of domains in an effective and principled manner. This, in turn, makes it possible to potentially apply Gaussian processes to novel decision-making settings.
Gaussian Processes and Statistical Decision-making in Non-Euclidean Spaces
Organelle size control is a fundamental question in biology that demonstrates the fascinating ability of cells to maintain homeostasis within their highly variable environments. Theoretical models describing cellular dynamics have the potential to help elucidate the principles underlying size control. Here, we perform a detailed study of the active disassembly model proposed in [Fai et al, Length regulation of multiple flagella that self-assemble from a shared pool of components, eLife, 8, (2019): e42599]. We construct a hybrid system which is shown to be well-behaved throughout the domain. We rule out the possibility of oscillations arising in the model and prove global asymptotic stability in the case of two flagella by the construction of a suitable Lyapunov function. Finally, we generalize the model to the case of arbitrary flagellar number in order to study olfactory sensory neurons, which have up to twenty cilia per cell. We show that our theoretical results may be extended to this case and explore the implications of this universal mechanism of size control.
Global asymptotic stability of the active disassembly model of flagellar length control
We show that the joint probability generating function of the stationary measure of a finite state asymmetric exclusion process with open boundaries can be expressed in terms of joint moments of Markov processes called quadratic harnesses. We use our representation to prove the large deviations principle for the total number of particles in the system. We use the generator of the Markov process to show how explicit formulas for the average occupancy of a site arise for special choices of parameters. We also give similar representations for limits of stationary measures as the number of sites tends to infinity.
Asymmetric Simple Exclusion Process with open boundaries and Quadratic Harnesses
This paper proposes a new method for Non-Rigid Structure-from-Motion (NRSfM) from a long monocular video sequence observing a non-rigid object performing recurrent and possibly repetitive dynamic actions. Departing from the traditional idea of using a linear low-order or low-rank shape model for the task of NRSfM, our method exploits the property of shape recurrency (i.e., many deforming shapes tend to repeat themselves in time). We show that recurrency is in fact a generalized rigidity. Based on this, we reduce NRSfM problems to rigid ones provided that a certain recurrency condition is satisfied. Given such a reduction, standard rigid-SfM techniques are directly applicable (without any change) to the reconstruction of non-rigid dynamic shapes. To implement this idea as a practical approach, this paper develops efficient algorithms for automatic recurrency detection, as well as camera view clustering via a rigidity check. Experiments on both simulated sequences and real data demonstrate the effectiveness of the method. Since this paper offers a novel perspective on rethinking structure-from-motion, we hope it will inspire other new problems in the field.
Structure from Recurrent Motion: From Rigidity to Recurrency
We examine the uniform distribution theory of H. Weyl when there is a periodic perturbation present. As opposed to the classical setting, in this case the conditions for (mod 1) density and (mod 1) uniform distribution turn out to be different.
Weyl's uniform distribution under periodic perturbation
In collisions of intense beams, or clouds, coherent dynamics open the way for fast transformations of light, highly relativistic species into one another. Possible applications are suggested for situations each involving two of the three species: photons, neutrinos, or gravitons.
Effective mixing between clouds of different species of particles that are massless, or nearly massless
We introduce the notions of scaling transition and distributional long-range dependence for stationary random fields $Y$ on $\mathbb {Z}^2$ whose normalized partial sums on rectangles with sides growing at rates $O(n)$ and $O(n^{\gamma})$ tend to an operator scaling random field $V_{\gamma}$ on $\mathbb {R}^2$, for any $\gamma>0$. The scaling transition is characterized by the fact that there exists a unique $\gamma_0>0$ such that the scaling limits $V_{\gamma}$ are different and do not depend on $\gamma$ for $\gamma>\gamma_0$ and $\gamma<\gamma_0$. The existence of scaling transition together with anisotropic and isotropic distributional long-range dependence properties is demonstrated for a class of $\alpha$-stable $(1<\alpha\le2)$ aggregated nearest-neighbor autoregressive random fields on $\mathbb{Z}^2$ with a scalar random coefficient $A$ having a regularly varying probability density near the "unit root" $A=1$.
Aggregation of autoregressive random fields and anisotropic long-range dependence
We present measurements of single-photon ionization time delays between valence electrons of argon and neon using a coincidence detection technique that allows for the simultaneous measurement of both species under identical conditions. Taking into account the chirp of the ionizing single attosecond pulse (attochirp) ensures that the clock of our measurement technique is started at the same time for both types of electrons, revealing with high accuracy and resolution energy-dependent time delays of a few tens of attoseconds. By comparing our results with theoretical predictions, we confirm that the so-called Wigner delay correctly describes single-photon ionization delays as long as atomic resonances can be neglected. Our data, however, also reveal that such resonances can greatly affect the measured delays beyond the simple Wigner picture.
Resonance effects in photoemission time delays
This paper provides estimation and inference methods for the best linear predictor (approximation) of a structural function, such as conditional average structural and treatment effects, and structural derivatives, based on modern machine learning (ML) tools. We represent this structural function as a conditional expectation of an unbiased signal that depends on a nuisance parameter, which we estimate by modern machine learning techniques. We first adjust the signal to make it insensitive (Neyman-orthogonal) with respect to the first-stage regularization bias. We then project the signal onto a set of basis functions, growing with sample size, which gives us the best linear predictor of the structural function. We derive a complete set of results for estimation and simultaneous inference on all parameters of the best linear predictor, conducting inference by Gaussian bootstrap. When the structural function is smooth and the basis is sufficiently rich, our estimation and inference result automatically targets this function. When basis functions are group indicators, the best linear predictor reduces to group average treatment/structural effect, and our inference automatically targets these parameters. We demonstrate our method by estimating uniform confidence bands for the average price elasticity of gasoline demand conditional on income.
Debiased Machine Learning of Conditional Average Treatment Effects and Other Causal Functions
Background: There is uncertainty about the role of different age groups in propagating the SARS-CoV-2 epidemics in different countries. Methods: We used the Koch Institute data on COVID-19 cases in Germany. To minimize the effect of changes in healthcare seeking behavior and testing practices, we included the following 5-year age groups in the analyses: 10-14y through 45-49y. For each age group g, we considered the proportion PL(g) of individuals in age group g among all detected cases aged 10-49y during weeks 13-14, 2020 (later period), as well as corresponding proportion PE(g) for weeks 10-11, 2020 (early period), and the relative risk RR(g)=PL(g)/PE(g). For each pair of age groups g1,g2, a higher value of RR(g1) compared to RR(g2) is interpreted as the relative increase in the population incidence of SARS-Cov-2 for g1 compared to g2 for the later vs. early period. Results: The relative risk was highest for individuals aged 20-24y (RR=1.4(95% CI (1.27,1.55))), followed by individuals aged 15-19y (RR=1.14(0.99,1.32)), aged 30-34y (RR= 1.07(0.99,1.16)), aged 25-29y (RR= 1.06(0.98,1.15)), aged 35-39y (RR=0.95(0.87,1.03)), aged 40-44y (RR=0.9(0.83,0.98)), aged 45-49y (RR=0.83(0.77,0.89)) and aged 10-14y (RR=0.78(0.64,0.95)). Conclusions: The observed relative increase with time in the prevalence of individuals aged 15-34y (particularly those aged 20-24y) among COVID-19 cases is unlikely to be explained by increases in the likelihood of seeking medical care/being tested for individuals in those age groups compared to individuals aged 35-49y or 10-14y, suggesting an actual increase in the prevalence of individuals aged 15-34y among SARS-CoV-2 infections in the German population. That increase likely reflects elevated mixing among individuals aged 15-34y (particularly those aged 20-24y) compared to other age groups, possibly due to lesser adherence to social distancing practices.
Temporal rise in the proportion of younger adults and older adolescents among COVID-19 cases in Germany: evidence of lesser adherence to social distancing practices?
In this paper, we study pathologies of Du Val del Pezzo surfaces defined over an algebraically closed field of positive characteristic by relating them to their non-liftability to the ring of Witt vectors. More precisely, we investigate the condition (NB): all the anti-canonical divisors are singular, (ND): there are no Du Val del Pezzo surfaces over the field of complex numbers with the same Dynkin type, Picard rank, and anti-canonical degree, (NK): there exists an ample $\mathbb{Z}$-divisor which violates the Kodaira vanishing theorem for $\mathbb{Z}$-divisors, and (NL): the pair $(Y, E)$ does not lift to the ring of Witt vectors, where $Y$ is the minimal resolution and $E$ is its reduced exceptional divisor. As a result, for each of these conditions, we determine all the Du Val del Pezzo surfaces which satisfy the given one.
Pathologies and liftability of Du Val del Pezzo surfaces in positive characteristic
We consider a sparse deep ReLU network (SDRN) estimator obtained from empirical risk minimization with a Lipschitz loss function in the presence of a large number of features. Our framework can be applied to a variety of regression and classification problems. The unknown target function to estimate is assumed to be in a Sobolev space with mixed derivatives. Functions in this space only need to satisfy a smoothness condition rather than having a compositional structure. We develop non-asymptotic excess risk bounds for our SDRN estimator. We further derive that the SDRN estimator can achieve the same minimax rate of estimation (up to logarithmic factors) as one-dimensional nonparametric regression when the dimension of the features is fixed, and the estimator has a suboptimal rate when the dimension grows with the sample size. We show that the depth and the total number of nodes and weights of the ReLU network need to grow as the sample size increases to ensure a good performance, and also investigate how fast they should increase with the sample size. These results provide an important theoretical guidance and basis for empirical studies by deep neural networks.
Statistical Learning using Sparse Deep Neural Networks in Empirical Risk Minimization
We propose a mechanism of unconventional superconductivity in two-dimensional strongly-correlated electron systems. We consider a two-dimensional Kondo lattice system or double-exchange system with spin-orbit coupling arising from buckling of the plane. We show that a Chern-Simons term is induced for a gauge field describing the phase fluctuations of the localized spins. Through the induced Chern-Simons term, carriers behave like skyrmion excitations that lead to a destruction mechanism of magnetic long-range order by carrier doping. After magnetic long-range order is destroyed by carrier doping, the Chern-Simons term plays a dominant role and the attractive interaction between skyrmions leads to unconventional superconductivity. For the case of the ferromagnetic interaction between the localized spins, the symmetry of the Cooper pair is p-wave ($p_x \pm ip_y$). For the case of the antiferromagnetic interaction between the localized spins, the symmetry of the Cooper pair is d-wave ($d_{x^2-y^2}$). Applications to various systems are discussed, in particular to the high-$T_c$ cuprates.
Mechanism of unconventional superconductivity induced by skyrmion excitations in two-dimensional strongly-correlated electron systems
Von Neumann entropy has a natural extension to the case of an arbitrary semifinite von Neumann algebra, as was considered by I. E. Segal. We relate this entropy to the relative entropy and show that the entropy increase for an inclusion of von Neumann factors is bounded by the logarithm of the Jones index. The bound is optimal if the factors are infinite dimensional.
A note on continuous entropy
Many debris disks seen in scattered light have shapes that imply their dust grains trace highly eccentric, apsidally aligned orbits. Apsidal alignment is surprising, especially for dust. Even when born from an apse-aligned ring of parent bodies, dust grains have their periastra dispersed in all directions by stellar radiation pressure. The periastra cannot be re-oriented by planets within the short dust lifetimes at the bottom of the collisional cascade. We propose that what re-aligns dust orbits is drag exerted by second-generation gas. Gas is largely immune to radiation pressure, and when released by photodesorption or collisions within an eccentric ring of parent bodies should occupy a similarly eccentric, apse-aligned ring. Dust grains launched onto misaligned orbits cross the eccentric gas ring supersonically and can become dragged into alignment within collisional lifetimes. The resultant dust configurations, viewed nearly but not exactly edge-on, with periastra pointing away from the observer, appear moth-like, with kinked wings and even doubled pairs of wings, explaining otherwise mysterious features in HD 61005 ("The Moth") and HD 32297, including their central bulbs when we account for strong forward scattering from irregularly shaped particles. Around these systems we predict gas at Kuiper-belt-like distances to move on highly elliptical streamlines that owe their elongation, ultimately, to highly eccentric planets. Unresolved issues and an alternative explanation for apsidal alignment are outlined.
Sculpting eccentric debris disks with eccentric gas rings
We study Diophantine approximation in completions of function fields over finite fields, and in particular in fields of formal Laurent series over finite fields. We introduce a Lagrange spectrum for the approximation by orbits of quadratic irrationals under the modular group. We give nonarchimedean analogs of various well known results in the real case: the closedness and boundedness of the Lagrange spectrum, the existence of a Hall ray, as well as computations of various Hurwitz constants. We use geometric methods of group actions on Bruhat-Tits trees.
On the nonarchimedean quadratic Lagrange spectra
In this work we provide a triple master action interpolating among three self-dual descriptions of massive spin-3/2 particles in $D=2+1$ dimensions. This result generalizes a master action previously suggested in the literature. We also show that, surprisingly, a shorthand notation in terms of differential operators, applied in the bosonic cases of spins 2 and 3, can also be defined in the fermionic case. With the help of projection operators, we have also obtained the propagator and analyzed the unitarity in $D$ dimensions of a second-order spin-3/2 doublet model. Once we demonstrate that this doublet model is free of ghosts, we provide a master action interpolating between this model and a fourth-order theory which has several similarities with the spin-2 linearized New Massive Gravity theory.
Duality and unitarity of massive spin-3/2 models in $D=2+1$
For decades, a lot of work has been devoted to the problem of constructing a non-trivial quantum field theory in four-dimensional space-time. This letter addresses the attempts to construct an algebraic quantum field theory in the framework of non-standard theories like hyperfunction or ultra-hyperfunction quantum field theory. For this purpose, model theories of formally interacting neutral scalar fields are constructed and some of their characteristic properties, like two-point functions, are discussed. The formal self-couplings are obtained from local normally-ordered analytic redefinitions of the free scalar quantum field, mimicking a non-trivial structure of the resulting Lagrangians and equations of motion.
Scalar models of formally interacting non-standard quantum fields in Minkowski space-time
Different theoretical and phenomenological aspects of the Minimal and Nonminimal Walking Technicolor theories have recently been studied. The goal here is to make the models ready for collider phenomenology. We do this by constructing the low-energy effective theory containing scalars, pseudoscalars, vector mesons and other fields predicted by the minimal walking theory. We construct their self-interactions and interactions with standard model fields. Using the Weinberg sum rules, appropriately modified to take into account the walking behavior of the underlying gauge theory, we find interesting relations for the spin-one spectrum. We derive the electroweak parameters using the newly constructed effective theory and compare the results with the underlying gauge theory. Our analysis is sufficiently general that the resulting model can be used to represent a generic walking technicolor theory not at odds with precision data.
Minimal Walking Technicolor: Set Up for Collider Physics
This paper proposes an original statistical decision theory to accomplish a multi-speaker recognition task in the cocktail party problem. This theory relies on the assumptions that the varied frequencies of speakers obey a Gaussian distribution and that the relationship of their voiceprints can be represented by Euclidean distance vectors. This paper uses Mel-Frequency Cepstral Coefficients to extract the features of a voice in judging whether a speaker is included in a multi-speaker environment and in distinguishing who the speaker should be. Finally, a thirteen-dimensional constellation drawing is established by mapping from Manhattan distances of speakers in order to take thorough account of the main influencing factors.
Multi-speaker Recognition in Cocktail Party Problem
Unmanned Aerial Vehicles (UAVs) have recently shown great performance collecting visual data through autonomous exploration and mapping in building inspection. Yet, only a limited number of studies consider the post-processing of the collected data and its integration with autonomous UAVs, both of which would enable major steps toward fully automated building inspection. In this regard, this work presents a decision-making tool for revisiting tasks in visual building inspection by autonomous UAVs. The tool is an implementation of fine-tuning a pretrained Convolutional Neural Network (CNN) for surface crack detection. It offers an optional mechanism for task planning of revisiting pinpoint locations during inspection. It is integrated into a quadrotor UAV system that can autonomously navigate in GPS-denied environments. The UAV is equipped with onboard sensors and computers for autonomous localization, mapping and motion planning. The integrated system is tested through simulations and real-world experiments. The results show that the system achieves crack detection and autonomous navigation in GPS-denied environments for building inspection.
Transfer Learning-Based Crack Detection by Autonomous UAVs
Belinskii, Khalatnikov, and Lifshitz (BKL) conjectured that the description of the asymptotic behavior of a generic solution of the Einstein equations near a spacelike singularity could be drastically simplified by considering that the time derivatives of the metric asymptotically dominate (except at a sequence of instants, in the `chaotic case') over the spatial derivatives. We present a precise formulation of the BKL conjecture (in the chaotic case) that consists of basically three elements: (i) we parametrize the spatial metric $g_{ij}$ by means of \emph{Iwasawa variables} ($\beta^a$, ${\cal N}^a{}_i$); (ii) we define, at each spatial point, a (chaotic) \emph{asymptotic evolution system} made of ordinary differential equations for the Iwasawa variables; and (iii) we characterize the exact Einstein solutions $\beta, {\cal{N}}$ whose asymptotic behavior is described by a solution $\beta_{[0]}, {\cal N}_{[0]}$ of the previous evolution system by means of a \emph{generalized Fuchsian system} for the differenced variables $\bar \beta = \beta - \beta_{[0]}$, $\bar {\cal N} = {\cal N} - {\cal N}_{[0]}$, and by requiring that $\bar \beta$ and $\bar {\cal N}$ tend to zero on the singularity. We also show that, in spite of the apparently chaotic infinite succession of `Kasner epochs' near the singularity, there exists a well-defined \emph{asymptotic geometrical structure} on the singularity: it is described by a \emph{partially framed flag}. Our treatment encompasses Einstein-matter systems (comprising scalar fields and p-forms), and also shows how the use of Iwasawa variables can simplify the usual (`asymptotically velocity term dominated') description of non-chaotic systems.
Describing general cosmological singularities in Iwasawa variables
We model the abundance gradients in the disk of the Milky Way for several chemical elements (O, Mg, Si, S, Ca, Sc, Ti, Co, V, Fe, Ni, Zn, Cu, Mn, Cr, Ba, La and Eu), and compare our results with the most recent and homogeneous observational data. We adopt a chemical evolution model able to reproduce well the main properties of the solar vicinity. We compute, for the first time, the abundance gradients for all the above-mentioned elements in the galactocentric distance range 4-22 kpc. The comparison with the observed data on Cepheids in the galactocentric distance range 5-17 kpc gives a very good agreement for many of the studied elements. In addition, we fit very well the data for the evolution of Lanthanum in the solar vicinity, for which we present results here for the first time. We explore, also for the first time, the behaviour of the abundance gradients at large galactocentric distances by comparing our results with data relative to distant open clusters and red giants, and select the best chemical evolution model on this basis. We find a very good fit to the observed abundance gradients, as traced by Cepheids, for most of the elements, thus confirming the validity of the inside-out scenario for the formation of the Milky Way disk as well as the adopted nucleosynthesis prescriptions.
Abundance gradients in the Milky Way for alpha elements, Iron peak elements, Barium, Lanthanum and Europium
This paper emphasizes that non-linear rotational or diamagnetic susceptibility is characteristic of Bose fluids above their superfluid Tcs, and for sufficiently slow rotation or weak B-fields amounts to an incompressible response to vorticity. The cause is a missing term in the conventionally accepted model Hamiltonian for quantized vortices in the Bose fluid. The resulting susceptibility can account for recent observations of Chan et al on solid He, and Ong et al on cuprate superconductors.
Bose Fluids Above Tc: Incompressible Vortex Fluids and "Supersolidity"
No-scale supergravity provides a successful framework for Starobinsky-like inflation models. Two classes of models can be distinguished depending on the identification of the inflaton with the volume modulus, $T$ (C-models), or a matter-like field, $\phi$ (WZ-models). When supersymmetry is broken, the inflationary potential may be perturbed, placing restrictions on the form and scale of the supersymmetry breaking sector. We consider both types of inflationary models in the context of high-scale supersymmetry. We further distinguish between models in which the gravitino mass is below and above the inflationary scale. We examine the mass spectra of the inflationary sector. We also consider in detail mechanisms for leptogenesis for each model when a right-handed neutrino sector, used in the seesaw mechanism to generate neutrino masses, is employed. In the case of C-models, reheating occurs via inflaton decay to two Higgs bosons. However, there is a direct decay channel to the lightest right-handed neutrino which leads to non-thermal leptogenesis. In the case of WZ-models, in order to achieve reheating, we associate the matter-like inflaton with one of the right-handed sneutrinos whose decay to the lightest right handed neutrino simultaneously reheats the Universe and generates the baryon asymmetry through leptogenesis.
Inflation and Leptogenesis in High-Scale Supersymmetry
Consider a random $n\times m$ matrix $A$ over the finite field of order $q$ where every column has precisely $k$ nonzero elements, and let $M[A]$ be the matroid represented by $A$. In the case that $q=2$, Cooper, Frieze and Pegden (RS\&A 2019) proved that given a fixed binary matroid $N$, if $k\ge k_N$ and $m/n\ge d_N$, where $k_N$ and $d_N$ are sufficiently large constants depending on $N$, then a.a.s. $M[A]$ contains $N$ as a minor. We improve their result by determining the sharp threshold (of $m/n$) for the appearance of a fixed matroid $N$ as a minor of $M[A]$, for every $k\ge 3$ and every finite field.
Minors of matroids represented by sparse random matrices over finite fields
Many young, massive stars are found in close binaries. Using population synthesis simulations we predict the likelihood of a companion star being present when these massive stars end their lives as core-collapse supernovae (SNe). We focus on stripped-envelope SNe, whose progenitors have lost their outer hydrogen and possibly helium layers before explosion. We use these results to interpret new Hubble Space Telescope observations of the site of the broad-lined Type Ic SN 2002ap, 14 years post-explosion. For a subsolar metallicity consistent with SN 2002ap, we expect a main-sequence companion present in about two thirds of all stripped-envelope SNe and a compact companion (likely a stripped helium star or a white dwarf/neutron star/black hole) in about 5% of cases. About a quarter of progenitors are single at explosion (originating from initially single stars, mergers or disrupted systems). All the latter scenarios require a massive progenitor, inconsistent with earlier studies of SN 2002ap. Our new, deeper upper limits exclude the presence of a main-sequence companion star $>8$-$10$ Msun, ruling out about 40% of all stripped-envelope SN channels. The most likely scenario for SN 2002ap includes nonconservative binary interaction of a primary star initially $\lesssim 23$ Msun. Although unlikely ($<$1% of the scenarios), we also discuss the possibility of an exotic reverse merger channel for broad-lined Type Ic events. Finally, we explore how our results depend on the metallicity and the model assumptions and discuss how additional searches for companions can constrain the physics that governs the evolution of SN progenitors.
Predicting the Presence of Companions for Stripped-Envelope Supernovae: The Case of the Broad-Lined Type Ic SN 2002ap
In many applications, it is necessary to retrieve the sub-signal building blocks of a multi-component signal, which is usually non-stationary in real-world applications. Empirical mode decomposition (EMD), the synchrosqueezing transform (SST), the signal separation operation (SSO), and iterative filtering decomposition (IFD) have been proposed and developed for this purpose. However, these computational methods are restricted by the requirement that the frequency curves of the sub-signals of a multi-component signal be well separated. On the other hand, the chirplet transform-based signal separation scheme (CT3S), which extends SSO from the two-dimensional "time-frequency" plane to the three-dimensional "time-frequency-chirp rate" space, was proposed in our recent work to remove the frequency-separation specification, thereby allowing "frequency crossing". The main objective of the present paper is to carry out an in-depth error analysis of instantaneous frequency estimation and component recovery for the CT3S method.
Analysis of a Direct Separation Method Based on Adaptive Chirplet Transform for Signals with Crossover Instantaneous Frequencies
The nature of Mashhoon's spin-rotation coupling is the interaction between a particle spin (gravitomagnetic moment) and a gravitomagnetic field. Here we consider the coupling of the graviton spin to weak gravitomagnetic fields by analyzing the Lagrangian density of the weak gravitational field, and hence study the purely gravitational generalization of Mashhoon's spin-rotation coupling.
The purely gravitational generalization of spin-rotation couplings
This paper presents a novel time series clustering method, the self-organising eigenspace map (SOEM), based on a generalisation of the well-known self-organising feature map (SOFM). The SOEM operates on the eigenspaces of the embedded covariance structures of time series, which are related directly to modes in those time series. Approximate joint diagonalisation acts as a pseudo-metric across these spaces, allowing us to generalise the SOFM to a neural network with matrix input. The technique is empirically validated against three sets of experiments: univariate and multivariate time series clustering, and application to (clustered) multivariate time series forecasting. Results indicate that the technique performs a valid topologically ordered clustering of the time series. The clustering is superior to standard benchmarks when the data are non-aligned, gives the best clustering stage when used in forecasting, can be used with partial/non-overlapping and multivariate time series, and produces a topological representation of the time series objects.
A self-organising eigenspace map for time series clustering
An expansion of the Nambu-Jona-Lasinio model is considered, and on its basis an effective Lagrangian for mesons, baryons and the baryon-meson interaction is obtained.
Effective Lagrangian for Baryons and Baryon-Meson Interaction
Fluidization of solid particles by an ascending fluid is frequent in industry because of the high rates of mass and heat transfers achieved. However, in some cases blockages occur and hinder the correct functioning of the fluidized bed. In this paper, we investigate the crystallization (defluidization) and refluidization that take place in very-narrow solid-liquid fluidized beds under steady flow conditions. For that, we carried out experiments where either monodisperse or bidisperse beds were immersed in water flows whose velocities were above those necessary for fluidization, and the ratio between the tube and grain diameters was smaller than 6. For monodisperse beds consisting of regular spheres, we observed that crystallization and refluidization alternate successively along time, which we quantify in terms of macroscopic structures and agitation of individual grains. We found the characteristic times for crystallization, and propose a new macroscopic parameter quantifying the degree of bed agitation. The bidisperse beds consisted of less-regular spheres placed on the bottom of a layer of regular spheres (the latter was identical to the monodisperse beds tested). We measured the changes that macroscopic structures and agitation of grains undergo, and show that the higher agitation in the bottom layer hinders crystallization of the top layer. Our results bring new insights into the dynamics of very-narrow beds, in addition to proposing a way of mitigating defluidization.
Crystallization and refluidization in very-narrow fluidized beds
Automatic face recognition is an area with immense practical potential which includes a wide range of commercial and law enforcement applications. Hence it is unsurprising that it continues to be one of the most active research areas of computer vision. Even after over three decades of intense research, the state-of-the-art in face recognition continues to improve, benefitting from advances in a range of different research fields such as image processing, pattern recognition, computer graphics, and physiology. Systems based on visible spectrum images, the most researched face recognition modality, have reached a significant level of maturity with some practical success. However, they continue to face challenges in the presence of illumination, pose and expression changes, as well as facial disguises, all of which can significantly decrease recognition accuracy. Amongst various approaches which have been proposed in an attempt to overcome these limitations, the use of infrared (IR) imaging has emerged as a particularly promising research direction. This paper presents a comprehensive and timely review of the literature on this subject. Our key contributions are: (i) a summary of the inherent properties of infrared imaging which make this modality promising in the context of face recognition, (ii) a systematic review of the most influential approaches, with a focus on emerging common trends as well as key differences between alternative methodologies, (iii) a description of the main databases of infrared facial images available to the researcher, and lastly (iv) a discussion of the most promising avenues for future research.
Infrared face recognition: a comprehensive review of methodologies and databases
The aim of this paper is to show that every representative function of a maximal monotone operator is the Fitzpatrick transform of a bifunction corresponding to the operator. In this way we exhibit the relation between the recent theory of representative functions, and the much older theory of saddle functions initiated by Rockafellar.
Representative functions of maximal monotone operators and bifunctions
In this work we explore the viability of nonminimally coupled matter-curvature gravity theories, namely the conditions required for the absence of tachyonic instabilities and ghost degrees of freedom. We contrast our findings with recent claims of pathological behaviour in this class of models, which resorted to, in our view, an incorrect analogy with k-essence.
Viability of nonminimally coupled f(R) gravity
We study procedures for discriminating combinatorial jets in a high background environment, such as a heavy ion collision, from signal jets arising from a hard-scattering. We investigate a population of jets clustered from a combined PYTHIA+TennGen event, focusing on jets which can unambiguously be classified as signal or combinatorial jets. By selecting jets based on their kinematic properties, we investigate whether it is possible to separate signal and combinatorial jets without biasing the signal population significantly. We find that, after a loose selection on the jet area, surviving combinatorial jets are dominantly imposters, combinatorial jets with properties indistinguishable from signal jets. We also find that, after a loose selection on the leading hadron momentum, surviving combinatorial jets are still dominantly imposters. We use rule extraction, a machine learning technique, to extract an optimal kinematic selection from a random forest trained on our population of jets. In general, this technique found a stricter kinematic selection on the jet's leading hadron momentum to be optimal. We find that it is possible to suppress combinatorial jets significantly using this machine learning based selection, but that some signal is removed as well. Due to this stricter kinematic selection, we find that the surviving signal is biased towards quark-like jets. Since similar selections are used in many measurements, this indicates that those measurements are biased towards quark-like jets as well. These studies should motivate an increased emphasis on assumptions made when suppressing and subtracting combinatorial background and the biases introduced by methods for doing so.
Separating signal from combinatorial jets in a high background environment
The pressure of hot gas in groups and clusters of galaxies is a key physical quantity, which is directly linked to the total mass of the halo and several other thermodynamical properties. In the wake of previous observational works on the hot gas pressure distribution in massive halos, we have investigated a sample of 31 clusters detected in both the Planck survey and the Atacama Cosmology Telescope (ACT) MBAC survey. We made use of an optimised Sunyaev-Zeldovich (SZ) map reconstructed from the two data sets and tailored for the detection of the SZ effect, taking advantage of both Planck coverage of large scales and the ACT higher spatial resolution. Our average pressure profile covers a radial range going from 0.04 R_500 in the central parts to 2.5 R_500 in the outskirts. In this way, it improves upon previous pressure-profile reconstructions based on SZ measurements. It is compatible, as well as competitive, with constraints derived from joint X-ray and SZ analyses. This work demonstrates the possibilities offered by large sky surveys of the SZ effect with multiple experiments of different spatial resolutions and spectral coverages, such as ACT and Planck.
PACT. II. Pressure profiles of galaxy clusters using Planck and ACT
We establish a correspondence between orbifold and singular elliptic genera of a global quotient. While the former is defined in terms of the fixed point set of the action, the latter is defined in terms of the resolution of singularities. As a byproduct, the second quantization formula of Dijkgraaf, Moore, Verlinde and Verlinde is extended to arbitrary Kawamata log-terminal pairs.
McKay correspondence for elliptic genera
The present study deals with a spatially homogeneous and anisotropic locally rotationally symmetric (LRS) Bianchi-II dark energy model in general relativity. Einstein's field equations have been solved exactly by taking into account the proportionality relation between one of the components of the shear scalar $(\sigma^{1}_{1})$ and the expansion scalar $(\vartheta)$, which, for suitable choices of the problem parameters, yields a time-dependent equation of state (EoS) and deceleration parameter (DP), representing a model in which the universe transitions from an early decelerating phase to the present accelerating phase. The physical and geometrical behavior of the universe is discussed in detail.
Dark energy model with variable $q$ and $\omega$ in LRS Bianchi-II space-time
The representation of the Lorentz group by a hypercomplex system of numbers based on the Dirac matrices is investigated. This representation is analogous to the representation of spatial rotations by quaternions. It has several advantages. First, it is a reducible representation; that is why transformations of different geometrical objects (vectors, antisymmetric tensors of the second order, and bispinors) are implemented by the same operators. Second, the rule for composing two arbitrary Lorentz transformations takes a simple form. These advantages strongly simplify the derivation of many results related to the Lorentz group. In particular, they simplify the investigation of the connection of spin with the Pauli-Lubanski pseudovector and the Wigner little group.
Hypercomplex representation of the Lorentz's group
Let $\mm=(m_0,m_1,m_2,n)$ be an almost arithmetic sequence, i.e., a sequence of positive integers with ${\rm gcd}(m_0,m_1,m_2,n) = 1$, such that $m_0<m_1<m_2$ form an arithmetic progression, $n$ is arbitrary, and they minimally generate the numerical semigroup $\Gamma = m_0\N + m_1\N + m_2\N + n\N$. Let $k$ be a field. The homogeneous coordinate ring $k[\Gamma]$ of the affine monomial curve parametrically defined by $X_0=t^{m_0},X_{1}=t^{m_1},X_2=t^{m_2},Y=t^{n}$ is a graded $R$-module, where $R$ is the polynomial ring $k[X_0,X_1,X_2, Y]$ with the grading $\deg{X_i}:=m_i, \deg{Y}:=n$. In this paper, we construct a minimal graded free resolution for $k[\Gamma]$.
Minimal Graded Free Resolution for Monomial Curves in $\mathbb{A}^{4}$ defined by almost arithmetic sequences
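To make the numerical semigroup $\Gamma$ concrete, a small sketch (the generators $(5,7,9,11)$ — with $5<7<9$ in arithmetic progression and $n=11$ — and the `frobenius` helper are hypothetical illustrations, not taken from the paper) computes by dynamic programming the largest integer not in the semigroup:

```python
def frobenius(gens, limit=10_000):
    """Largest integer not representable as a nonnegative integer
    combination of the generators (assumes gcd(gens) = 1)."""
    reachable = [False] * (limit + 1)
    reachable[0] = True               # 0 is always in the semigroup
    for n in range(1, limit + 1):
        reachable[n] = any(n >= g and reachable[n - g] for g in gens)
    return max(n for n in range(limit + 1) if not reachable[n])

# An almost arithmetic sequence: 5 < 7 < 9 in arithmetic progression, n = 11
print(frobenius([5, 7, 9, 11]))  # 13
```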
To satisfy the ever-increasing capacity demand and quality of service (QoS) requirements of users, 5G cellular systems will take the form of heterogeneous networks (HetNets) that consist of macro cells and small cells. To build and operate such systems, mobile operators have given significant attention to cloud radio access networks (C-RANs) due to their beneficial features of performance optimization and cost effectiveness. Along with the architectural enhancement of C-RAN, large-scale antennas (a.k.a. massive MIMO) at cell sites contribute greatly to increased network capacity, either with higher spectral efficiency or through permitting many users at once. In this article, we discuss the challenging issues of C-RAN based HetNets (H-CRAN), especially with respect to large-scale antenna operation. We provide an overview of existing C-RAN architectures in terms of large-scale antenna operation and promote a partially centralized approach. This approach remarkably reduces fronthaul overheads in C-RANs with large-scale antennas. We also provide some insights into its potential and applicability in the fronthaul bandwidth-limited H-CRAN with large-scale antennas.
Large-scale Antenna Operation in Heterogeneous Cloud Radio Access Networks: A Partial Centralization Approach
The Fermat-Weber center of a planar body $Q$ is a point in the plane from which the average distance to the points in $Q$ is minimal. We first show that for any convex body $Q$ in the plane, the average distance from the Fermat-Weber center of $Q$ to the points of $Q$ is larger than ${1/6} \cdot \Delta(Q)$, where $\Delta(Q)$ is the diameter of $Q$. This proves a conjecture of Carmi, Har-Peled and Katz. From the other direction, we prove that the same average distance is at most $\frac{2(4-\sqrt3)}{13} \cdot \Delta(Q) < 0.3490 \cdot \Delta(Q)$. The new bound substantially improves the previous bound of $\frac{2}{3 \sqrt3} \cdot \Delta(Q) \approx 0.3849 \cdot \Delta(Q)$ due to Abu-Affash and Katz, and brings us closer to the conjectured value of ${1/3} \cdot \Delta(Q)$. We also confirm the upper bound conjecture for centrally symmetric planar convex bodies.
New bounds on the average distance from the Fermat-Weber center of a planar convex body
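The conjectured value of 1/3 times the diameter is attained by the disk, whose Fermat-Weber center is its center by symmetry. A quick Monte Carlo sketch (assumptions: unit disk, uniform area sampling; not code from the paper) recovers the ratio, which falls between the paper's lower bound 1/6 and upper bound 0.3490:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
# Uniform samples in the unit disk: random direction, radius ~ sqrt(U)
pts = rng.standard_normal((n, 2))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts *= np.sqrt(rng.random((n, 1)))

avg = np.linalg.norm(pts, axis=1).mean()  # mean distance to the center
ratio = avg / 2.0                         # diameter of the unit disk is 2
print(ratio)  # close to 1/3, the conjectured value for symmetric bodies
```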
We study the quantum order by disorder effect in the $S=1/2$ system, which is programmable in a quantum simulator composed of Rydberg atoms in a triangular optical lattice with a controllable diagonal anisotropy. When the total magnetization is zero, a set of sub-extensively degenerate ground states is present in the classical limit, composed of continuous strings whose configuration enjoys a large degree of freedom. Among all possible configurations, we focus on the stripe (up and down spins align in straight lines) and kinked (up and down spins form a zigzag shape) patterns. Adopting the real space perturbation theory (RSPT), we estimate the leading-order energy correction when the nearest-neighbor ($nn$) spin-flip coupling $J$ term is considered, and the overall model becomes an effective XXZ model with spatial anisotropy. Our calculation demonstrates a lifting of the degeneracy, favoring the stripe configuration. When $J$ becomes larger, we adopt the infinite projected entangled-pair state (iPEPS) method and numerically check the degeneracy lifting. The iPEPS results show that the stripe pattern is still favored even when the spin-flip coupling term is strong. While the above system is realizable with a deformed optical lattice, we finally compute the hard-core bosonic Hamiltonian with the $nn$ dipole-dipole interaction, using cluster mean-field theory, to demonstrate the possible underlying phases. We provide phase diagrams for different tilt angles, exhibiting abundant phases including a supersolid. Our proposal indicates a realizable scenario for studying quantum effects as well as extraordinary phases through a quantum simulator.
Programmable Order by Disorder Effect through Quantum Simulator
We prove that semiclassical gravity in conformally static, globally hyperbolic spacetimes with a massless, conformally coupled Klein-Gordon field is well posed, when viewed as a coupled theory for the dynamical conformal factor of the metric and the Klein-Gordon theory. Namely, it admits unique and stable solutions whenever constrained fourth-order initial data for the conformal factor and suitably defined Hadamard initial data for the Klein-Gordon state are provided on a spacelike Cauchy surface. As no spacetime symmetries are imposed on the conformal factor, the present result implies that, provided constrained initial data exists, there also exist exact solutions to the semiclassical gravity equations beyond the isotropic, homogeneous or static cases.
Semiclassical gravity with a conformally covariant field in globally hyperbolic spacetimes
This work is a brief review of applications of hidden symmetries to black hole physics. Symmetry is one of the most important concepts of science. In physics and mathematics, symmetry allows one to simplify a problem, and often to make it solvable. According to the Noether theorem, symmetries are responsible for conservation laws. Besides evident (explicit) spacetime symmetries, responsible for conservation of energy, momentum, and angular momentum of a system, there also exist so-called hidden symmetries, which are connected with integrals of motion of higher order in momentum. A remarkable fact is that black holes in four and higher dimensions always possess a set (`tower') of explicit and hidden symmetries which make the equations of motion of particles and light completely integrable. The paper gives a general review of recently obtained results. The main focus is on understanding why black holes have something (symmetry) to hide at all.
Applications of hidden symmetries to black hole physics
Computational context understanding refers to an agent's ability to fuse disparate sources of information for decision-making and is, therefore, generally regarded as a prerequisite for sophisticated machine reasoning capabilities, such as in artificial intelligence (AI). Data-driven and knowledge-driven methods are two classical techniques in the pursuit of such machine sense-making capability. However, while data-driven methods seek to model the statistical regularities of events by making observations in the real-world, they remain difficult to interpret and they lack mechanisms for naturally incorporating external knowledge. Conversely, knowledge-driven methods combine structured knowledge bases, perform symbolic reasoning based on axiomatic principles, and are more interpretable in their inferential processing; however, they often lack the ability to estimate the statistical salience of an inference. To combat these issues, we propose the use of hybrid AI methodology as a general framework for combining the strengths of both approaches. Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks. We further ground our discussion in two applications of neuro-symbolism and, in both cases, show that our systems maintain interpretability while achieving comparable performance, relative to the state-of-the-art.
Neuro-symbolic Architectures for Context Understanding
Aims: We study the formation of substellar objects (exoplanets and brown dwarfs) as companions to young nearby stars. Methods: With high contrast AO imaging obtained with NACO at ESO's VLT we search for faint companion-candidates around our targets, whose companionship can be confirmed with astrometry. Results: In the course of our imaging campaign we found a faint substellar companion of the nearby pre-main sequence star PZ Tel, a member of the beta Pic moving group. The companion is 5-6 mag fainter than its host star in JHK and is located at a separation of only 0.3 arcsec (or 15 AU of projected separation) north-east of PZ Tel. Within three NACO observing epochs we could confirm common proper motion (>39 sigma) and detected orbital motion of PZ Tel B around its primary (>37 sigma). The photometry of the newly found companion is consistent with a brown dwarf with a mass of 24 to 40 MJup, at the distance (50 pc) and age (8-20 Myr) of PZ Tel. The effective temperature of the companion, derived from its photometry, ranges between 2500 and 2700 K, which corresponds to a spectral type between M6 and M8. After beta Pic b, PZ Tel B is the second closest substellar companion imaged directly around a young star.
Direct detection of a substellar companion to the young nearby star PZ Telescopii
This work presents the design, hardware realization, autonomous exploration and object detection capabilities of RMF-Owl, a new collision-tolerant aerial robot tailored for resilient autonomous subterranean exploration. The system is custom built for underground exploration with focus on collision tolerance, resilient autonomy with robust localization and mapping, alongside high-performance exploration path planning in confined, obstacle-filled and topologically complex underground environments. Moreover, RMF-Owl offers the ability to search, detect and locate objects of interest which can be particularly useful in search and rescue missions. A series of results from field experiments are presented in order to demonstrate the system's ability to autonomously explore challenging unknown underground environments.
RMF-Owl: A Collision-Tolerant Flying Robot for Autonomous Subterranean Exploration
Previous ground-based observations of the Seyfert 2 galaxy Mrk 78 revealed a double set of emission lines, similar to those seen in several AGN from recent surveys. Are the double lines due to two AGN with different radial velocities in the same galaxy, or are they due to mass outflows from a single AGN? We present a study of the outflowing ionized gas in the resolved narrow-line region (NLR) of Mrk 78 using observations from the Space Telescope Imaging Spectrograph (STIS) and Faint Object Camera (FOC) aboard the Hubble Space Telescope (HST), as part of an ongoing project to determine the kinematics and geometries of active galactic nuclei (AGN) outflows. From the spectroscopic information, we determined the fundamental geometry of the outflow via our kinematics modeling program by recreating radial velocities to fit those seen in four different STIS slit positions. We determined that the double emission lines seen in ground-based spectra are due to an asymmetric distribution of outflowing gas in the NLR. By successfully fitting a model for a single AGN to Mrk 78, we show that it is possible to explain double emission lines with radial velocity offsets seen in AGN similar to Mrk 78 without requiring dual supermassive black holes.
HST Observations of the Double-Peaked Emission Lines in the Seyfert Galaxy Markarian 78: Mass Outflows from a Single AGN
Do online platforms facilitate the consumption of potentially harmful content? Using paired behavioral and survey data provided by participants recruited from a representative sample in 2020 (n=1,181), we show that exposure to alternative and extremist channel videos on YouTube is heavily concentrated among a small group of people with high prior levels of gender and racial resentment. These viewers often subscribe to these channels (prompting recommendations to their videos) and follow external links to them. In contrast, non-subscribers rarely see or follow recommendations to videos from these channels. Our findings suggest YouTube's algorithms were not sending people down "rabbit holes" during our observation window in 2020, possibly due to changes that the company made to its recommender system in 2019. However, the platform continues to play a key role in facilitating exposure to content from alternative and extremist channels among dedicated audiences.
Subscriptions and external links help drive resentful users to alternative and extremist YouTube videos
This paper presents a unified approach for inverse and direct dynamics of constrained multibody systems that can serve as a basis for analysis, simulation, and control. The main advantage of the formulation of the dynamics is that it does not require the constraint equations to be linearly independent. Thus, a simulation may proceed even in the presence of redundant constraints or singular configurations, and a controller does not need to change its structure whenever the mechanical system changes its topology or number of degrees of freedom. A motion control scheme is proposed based on a projected inverse-dynamics scheme which proves to be stable and minimizes the weighted Euclidean norm of the actuation force. The projection-based control scheme is further developed for constrained systems, e.g. parallel manipulators, which have some joints with no actuators (passive joints). This is complemented by the development of constraint force control. A condition on the inertia matrix resulting in a decoupled mechanical system is analytically derived, which simplifies the implementation of the force control.
Control and Simulation of Motion of Constrained Multibody Systems Based on Projection Matrix Formulation
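A minimal sketch of the projection idea, assuming the constraints act on velocities as A qdot = 0 (the specific matrix is a made-up example, not from the paper): the pseudoinverse-based projector stays well defined even when constraint rows are linearly dependent, echoing the paper's point about redundant constraints.

```python
import numpy as np

# Constraint Jacobian A with a deliberately redundant second row
A = np.array([[1.0, 0.0, 1.0],
              [2.0, 0.0, 2.0]])

# Projector onto the null space of A (the feasible motions);
# pinv handles the rank deficiency without any special casing.
P = np.eye(3) - np.linalg.pinv(A) @ A

print(np.allclose(P @ A.T, 0))  # projected directions satisfy the constraints
print(np.allclose(P @ P, P))    # P is idempotent, as a projector must be
```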
We report on the observation of magnetoresistance oscillations in graphene p-n junctions. The oscillations have been observed for six samples, consisting of single-layer and bilayer graphene, and persist up to temperatures of 30 K, where standard Shubnikov-de Haas oscillations are no longer discernible. The oscillatory magnetoresistance can be reproduced by tight-binding simulations. We attribute this phenomenon to the modulated densities of states in the n- and p- regions.
Oscillating magnetoresistance in graphene p-n junctions at intermediate magnetic fields
This paper presents BlendNet, a neural network architecture employing a novel building block called the Blend module, which relies on performing binary and fixed-point convolutions in its main and skip paths, respectively. There is a judicious deployment of batch normalizations on both main and skip paths inside the Blend module and in between consecutive Blend modules. This paper also presents a compiler for mapping various BlendNet models, obtained by replacing some blocks/modules in various vision neural network models with BlendNet modules, to FPGA devices with the goal of minimizing the end-to-end inference latency while achieving high output accuracy. BlendNet-20, derived from ResNet-20 trained on the CIFAR-10 dataset, achieves 88.0% classification accuracy (0.8% higher than the state-of-the-art binary neural network) while it only takes 0.38ms to process each image (1.4x faster than state-of-the-art). Similarly, our BlendMixer model trained on the CIFAR-10 dataset achieves 90.6% accuracy (1.59% less than full precision MLPMixer) while achieving a 3.5x reduction in the model size. Moreover, the reconfigurability of DSP blocks for performing 48-bit bitwise logic operations is utilized to achieve low-power FPGA implementation. Our measurements show that the proposed implementation yields 2.5x lower power consumption.
BlendNet: Design and Optimization of a Neural Network-Based Inference Engine Blending Binary and Fixed-Point Convolutions
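Since the abstract does not spell out the Blend module's kernels, a generic binary-convolution sketch in the style of XNOR-Net (weights binarised to their sign and rescaled by the mean absolute weight — an assumption for illustration, not BlendNet's exact scheme) conveys the idea of the main path:

```python
import numpy as np

def binary_conv2d(x, W):
    """Binary convolution sketch: weights binarised to sign(W) and
    rescaled by alpha = mean(|W|); valid padding, stride 1."""
    alpha = np.abs(W).mean()
    Wb = np.sign(W)
    kh, kw = W.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = alpha * np.sum(x[i:i+kh, j:j+kw] * Wb)
    return out

x = np.ones((4, 4))
W = 0.5 * np.ones((3, 3))   # binarises to all +1 with alpha = 0.5
print(binary_conv2d(x, W))  # each output entry: 0.5 * 9 = 4.5
```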
On April 16th, the White House launched the "Opening up America Again" (OuAA) campaign while many U.S. counties had stay-at-home orders in place. We created a panel data set of 1,563 U.S. counties to study the impact of counties' stay-at-home orders on community mobility before and after the White House's campaign to reopen the country. Our results suggest that before the OuAA campaign, stay-at-home orders brought down time spent in retail and recreation businesses by about 27% for typical conservative and liberal counties alike. However, after the launch of the OuAA campaign, time spent at retail and recreational businesses increased significantly more in a typical conservative county than in liberal counties (a 15% increase in a typical conservative county vs. a 9% increase in a typical liberal county). We also found that in conservative counties with stay-at-home orders in place, time spent at retail and recreational businesses increased less than in conservative counties without such orders. These findings illuminate the extent to which residents' political ideology determines whether they follow local orders, and the extent to which the White House's OuAA campaign polarized compliance between liberal and conservative counties. The silver lining in our study is that even as the federal government was reopening the country, local authorities that enforced stay-at-home restrictions remained effective to some extent.
When Local Governments' Stay-at-Home Orders Meet the White House's "Opening Up America Again"
We define a notion of Hodge modules with rational singularities. A variety has rational singularities in the usual sense if it is normal and the Hodge module related to intersection cohomology has rational singularities in the present sense. Our main result is a generalization of Boutot's theorem: if a reductive group acts on an affine variety with a stable point, and $H$ is an equivariant Hodge module with rational singularities, then the induced module on the GIT quotient also has rational singularities.
Equivariant Hodge modules and rational singularities
This paper deals with the highly challenging problem of reconstructing the shape of a refracting object from a single image of its resulting caustic. Due to the ubiquity of transparent refracting objects in everyday life, reconstruction of their shape entails a multitude of practical applications. The recent Shape from Caustics (SfC) method casts the problem as the inverse of a light propagation simulation for synthesis of the caustic image, that can be solved by a differentiable renderer. However, the inherent complexity of light transport through refracting surfaces currently limits the practicability with respect to reconstruction speed and robustness. To address these issues, we introduce Neural-Shape from Caustics (N-SfC), a learning-based extension that incorporates two components into the reconstruction pipeline: a denoising module, which alleviates the computational cost of the light transport simulation, and an optimization process based on learned gradient descent, which enables better convergence using fewer iterations. Extensive experiments demonstrate the effectiveness of our neural extensions in the scenario of quality control in 3D glass printing, where we significantly outperform the current state-of-the-art in terms of computational speed and final surface error.
N-SfC: Robust and Fast Shape Estimation from Caustic Images
In a companion paper we have shown how the equations describing gas and dust as two fluids coupled by a drag term can be reformulated to describe the system as a single fluid mixture. Here we present a numerical implementation of the one-fluid dusty gas algorithm using Smoothed Particle Hydrodynamics (SPH). The algorithm preserves the conservation properties of the SPH formalism. In particular, the total gas and dust mass, momentum, angular momentum and energy are all exactly conserved. Shock viscosity and conductivity terms are generalised to handle the two-phase mixture accordingly. The algorithm is benchmarked against a comprehensive suite of problems: dustybox, dustywave, dustyshock and dustyoscill, each of them addressing different properties of the method. We compare the performance of the one-fluid algorithm to the standard two-fluid approach. The one-fluid algorithm is found to solve both of the fundamental limitations of the two-fluid algorithm: it is no longer possible to concentrate dust below the resolution of the gas (they have the same resolution by definition), and the spatial resolution criterion h < c_s t_s, required in two-fluid codes to avoid over-damping of kinetic energy, is unnecessary. Implicit time stepping is straightforward. As a result, the algorithm is up to ten billion times more efficient for 3D simulations of small grains. Additional benefits include the use of half as many particles, a single kernel and fewer SPH interpolations. The only limitation is that it does not capture multi-streaming of dust in the limit of zero coupling, suggesting that in this case a hybrid approach may be required.
Dusty gas with one fluid in smoothed particle hydrodynamics
The elastic scattering cross section measured at energies $E\lesssim 10$ MeV/nucleon for some light heavy-ion systems having two identical cores like $^{16}$O+$^{12}$C exhibits an enhanced oscillatory pattern at the backward angles. Such a pattern is known to be due to the transfer of the valence nucleon or cluster between the two identical cores. In particular, the elastic $\alpha$ transfer has been shown to originate directly from the core-exchange symmetry in the elastic $^{16}$O+$^{12}$C scattering. Given the strong transition strength of the $2^+_1$ state of $^{12}$C and its large overlap with the $^{16}$O ground state, it is natural to expect a similar $\alpha$ transfer process (or inelastic $\alpha$ transfer) to take place in the inelastic $^{16}$O+$^{12}$C scattering. The present work provides a realistic coupled channel description of the $\alpha$ transfer in the inelastic $^{16}$O+$^{12}$C scattering at low energies. Based on the results of the 4 coupled reaction-channels calculation, we show a significant contribution of the $\alpha$ transfer to the inelastic $^{16}$O+$^{12}$C scattering cross section at the backward angles. These results suggest that the explicit coupling to the $\alpha$ transfer channels is crucial in the studies of the elastic and inelastic scattering of a nucleus-nucleus system with the core-exchange symmetry.
Elastic and inelastic alpha transfer in the $^{16}$O+$^{12}$C scattering
Massive stars and supernovae (SNe) have a huge impact on their environment. Despite their importance, a comprehensive knowledge of which massive stars produce which SNe is hitherto lacking. We use a Monte Carlo method to predict the mass-loss rates of massive stars in the Hertzsprung-Russell Diagram (HRD), covering all phases from the OB main sequence, through the unstable Luminous Blue Variable (LBV) stage, to the final Wolf-Rayet (WR) phase. Although WR stars produce their own metals, a strong dependence of the mass-loss rate on the initial iron abundance is found at sub-solar metallicities (1/10 -- 1/100 solar). This may present a viable mechanism to prevent the loss of angular momentum by stellar winds, which could inhibit GRBs occurring at solar metallicities -- providing a significant boost to the collapsar model. Furthermore, we discuss recently reported quasi-sinusoidal modulations in the radio lightcurves of SNe 2001ig and 2003bg. We show that both the sinusoidal behaviour and the recurrence timescale of these modulations are consistent with the predicted mass-loss behaviour of LBVs. We discuss potential ramifications for the ``Conti'' scenario for massive star evolution.
Pre-supernova mass loss predictions for massive stars
For a prime number $p$ and a free profinite group $S$ on the basis $X$, let $S^{(n,p)}$, $n=1,2,\ldots$ be the lower $p$-central filtration of $S$. For $p>n$, we give a combinatorial description of $H^2(S/S^{(n,p)},\mathbb{Z}/p)$ in terms of the shuffle algebra on $X$.
The lower $p$-central series of a free profinite group and the shuffle algebra
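The shuffle algebra underlying the combinatorial description can be made concrete with a short sketch (a direct implementation of the standard shuffle product of words, not code from the paper): the product of two words is the formal sum of all their interleavings, counted with multiplicity.

```python
def shuffle(u, v):
    """All interleavings of the words u and v (with multiplicity),
    i.e. the shuffle product of the corresponding basis elements."""
    if not u:
        return [v]
    if not v:
        return [u]
    return [u[0] + w for w in shuffle(u[1:], v)] + \
           [v[0] + w for w in shuffle(u, v[1:])]

print(shuffle("ab", "c"))        # ['abc', 'acb', 'cab']
print(len(shuffle("ab", "cd")))  # binomial(4, 2) = 6 interleavings
```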
Scalar and tensorial cosmological perturbations generated in warm inflationary scenarios whose matter-radiation fluid is endowed with a viscous pressure are considered. Recent observational data from the WMAP experiment are employed to restrict the parameters of the model. Although the effect of this pressure on the matter power spectrum is of the order of a few percent, it may be detected in future experiments.
Cosmological perturbations in warm inflationary models with viscous pressure
A popular model selection approach for generalized linear mixed-effects models is the Akaike information criterion, or AIC. Among others, \cite{vaida05} pointed out the distinction between marginal and conditional inference, depending on the focus of research. The conditional AIC was derived for the linear mixed-effects model and later generalized by \cite{liang08}. We show that a similar strategy extends to Poisson regression with random effects, where the conditional AIC can be obtained based on our observations. Simulation studies demonstrate the use of the criterion.
A note on conditional Akaike information for Poisson regression with random effects
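As a schematic illustration of the conditional AIC discussed above: it penalizes minus twice the *conditional* log-likelihood (random effects plugged in at their predicted values) by an effective number of parameters rather than the raw parameter count. The sketch below is a minimal illustration, not the paper's derivation; the function name, the toy data, and the effective-degrees-of-freedom value are all assumptions for demonstration only.

```python
import numpy as np
from scipy.stats import poisson

def conditional_aic(y, mu_hat, k_effective):
    """Schematic conditional AIC for Poisson responses:
    -2 * conditional log-likelihood + 2 * effective d.o.f.

    y           : observed counts
    mu_hat      : fitted conditional means (random effects plugged in)
    k_effective : effective degrees of freedom of the fit (in a real
                  mixed model this comes from the hat-matrix trace,
                  not a simple parameter count)
    """
    cond_loglik = poisson.logpmf(y, mu_hat).sum()
    return -2.0 * cond_loglik + 2.0 * k_effective

# Toy illustration with made-up "fitted" means (not a real model fit).
rng = np.random.default_rng(0)
mu = np.exp(0.5 + rng.normal(0.0, 0.3, size=50))
y = rng.poisson(mu)
print(conditional_aic(y, mu, k_effective=3.5))
```

In practice `k_effective` would be estimated from the fitted mixed model (e.g. as the trace of the hat matrix), which is precisely the quantity the conditional-AIC literature works out.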
The close packing density of log-normally and bimodally distributed, surface-adsorbed particles (discs) in 2D is studied by numerical simulation. For a small spread in particle size, the system orders into a polycrystalline structure of hexagonal domains. The domain size and the packing density both decrease as the spread in particle size is increased, up to 10.5+/-0.5%. From this point onwards the system becomes amorphous, and the close packing density increases again with the spread in particle size. We argue that the polycrystalline and amorphous regions are separated by a Kosterlitz-Thouless-type phase transition. In the amorphous region we find the close packing density to vary in proportion to the logarithm of the friction factor, or cooling rate. We also studied the fracture behaviour of surface layers of sintered particles. Fracture strength increases with the spread in particle size, but the brittleness of the layers shows a minimum at the polycrystalline-amorphous transition. We further show that mixing distributions of big and small particles generally leads to weaker and more brittle layers, even though the close packing density is higher than for either of the particle types alone. We point out applications to foam stability by the Pickering mechanism.
Close packing density and fracture strength of adsorbed polydisperse particle layers
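To make the setup concrete: the basic ingredient of such simulations is placing non-overlapping discs whose radii follow a polydisperse (here log-normal) distribution and measuring the covered area fraction. The sketch below uses random sequential adsorption, which is a much cruder protocol than the cooled close-packing procedure the paper studies and jams at a lower density; all parameter values are illustrative assumptions.

```python
import numpy as np

def rsa_coverage(n_attempts=5000, mean_r=0.02, sigma=0.105, seed=1):
    """Random sequential adsorption of polydisperse discs in the unit
    square.  Radii are log-normal with relative spread `sigma` (0.105
    mirrors the ~10.5% threshold quoted in the abstract, purely for
    illustration).  Returns the covered area fraction."""
    rng = np.random.default_rng(seed)
    xs, ys, rs = [], [], []
    for _ in range(n_attempts):
        r = mean_r * rng.lognormal(0.0, sigma)
        x, y = rng.uniform(r, 1.0 - r, size=2)
        # Accept the disc only if it overlaps no previously placed disc.
        if all((x - xi) ** 2 + (y - yi) ** 2 >= (r + ri) ** 2
               for xi, yi, ri in zip(xs, ys, rs)):
            xs.append(x); ys.append(y); rs.append(r)
    return float(np.pi * np.sum(np.square(rs)))

coverage = rsa_coverage()
print(coverage)
```

Close packing in the paper's sense additionally lets placed particles relax (the "cooling rate" entering the logarithmic density law), which is what distinguishes the polycrystalline from the amorphous branch.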
We present an analysis of the predictions made by the Galform semi-analytic galaxy formation model for the evolution of the relationship between stellar mass and halo mass. We show that for the standard implementations of supernova feedback and gas reincorporation used in semi-analytic models, this relationship is predicted to evolve weakly over the redshift range 0<z<4. Modest evolution in the median stellar mass versus halo mass (SHM) relationship implicitly requires that, at fixed halo mass, the efficiency of stellar mass assembly must be almost constant with cosmic time. We show that in our model, this behaviour can be understood in simple terms as a result of a constant efficiency of gas reincorporation, and an efficiency of SNe feedback that is, on average, constant at fixed halo mass. We present a simple explanation of how feedback from active galactic nuclei (AGN) acts in our model to introduce a break in the SHM relation whose location is predicted to evolve only modestly. Finally, we show that if modifications are introduced into the model such that, for example, the gas reincorporation efficiency is no longer constant, the median SHM relation is predicted to evolve significantly over 0<z<4. Specifically, we consider modifications that allow the model to better reproduce either the evolution of the stellar mass function or the evolution of average star formation rates inferred from observations.
The evolution of the stellar mass versus halo mass relationship
In this paper, we present a fractional decomposition of the probability generating function of the innovation process of the first-order non-negative integer-valued autoregressive [INAR(1)] process to obtain the corresponding probability mass function. We also provide a comprehensive review of integer-valued time series models with geometric-type marginals, based on the concept of thinning operators. In particular, we develop four fractional approaches to obtain the distribution of the innovation process of the INAR(1) model and show that the innovation sequence has a geometric-type distribution. These approaches are discussed in detail and illustrated through a few examples. Finally, using the methods presented here, we develop four new first-order non-negative integer-valued autoregressive processes for overdispersed, autocorrelated counts with known marginals, and derive some properties of these models.
Fractional approaches for the distribution of innovation sequence of INAR(1) processes
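For readers unfamiliar with thinning-based count models: the INAR(1) recursion is $X_t = \alpha \circ X_{t-1} + \varepsilon_t$, where $\alpha \circ X$ is binomial thinning (each of the $X$ "individuals" survives independently with probability $\alpha$). The sketch below simulates this with Poisson innovations, which give a Poisson rather than geometric marginal; the geometric-type marginals of the paper require the specific innovation laws derived there via the fractional pgf decomposition, which this toy example does not attempt.

```python
import numpy as np

def simulate_inar1(alpha, lam, n, seed=0):
    """Simulate X_t = alpha ∘ X_{t-1} + eps_t with binomial thinning
    and eps_t ~ Poisson(lam).  The stationary marginal is then
    Poisson(lam / (1 - alpha)) and the lag-1 autocorrelation is alpha."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=np.int64)
    x[0] = rng.poisson(lam / (1.0 - alpha))        # start in stationarity
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning
        x[t] = survivors + rng.poisson(lam)        # add innovations
    return x

x = simulate_inar1(alpha=0.5, lam=2.0, n=20000)
print(x.mean())                                    # ~ lam/(1-alpha) = 4
print(np.corrcoef(x[:-1], x[1:])[0, 1])            # ~ alpha = 0.5
```

Swapping the innovation distribution (or the thinning operator, e.g. negative-binomial thinning) is exactly the design axis along which the paper's four new geometric-marginal processes differ.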
We study probe corrections to the Eigenstate Thermalization Hypothesis (ETH) in the context of 2D CFTs with large central charge and a sparse spectrum of low dimension operators. In particular, we focus on observables in the form of non-local composite operators $\mathcal{O}_{obs}(x)=\mathcal{O}_L(x)\mathcal{O}_L(0)$ with $h_L\ll c$. As a light probe, $\mathcal{O}_{obs}(x)$ is constrained by ETH and satisfies $\langle \mathcal{O}_{obs}(x)\rangle_{h_H}\approx \langle \mathcal{O}_{obs}(x)\rangle_{\text{micro}}$ for a high-energy eigenstate $| h_H\rangle$. In the CFTs of interest, $\langle \mathcal{O}_{obs}(x)\rangle_{h_H}$ is related to a Heavy-Heavy-Light-Light (HHLL) correlator, and can be approximated by the vacuum Virasoro block, which we focus on computing. A sharp consequence of ETH for $\mathcal{O}_{obs}(x)$ is the so-called "forbidden singularities", arising from the emergent thermal periodicity in imaginary time. Using the monodromy method, we show that finite probe corrections of the form $\mathcal{O}(h_L/c)$ drastically alter both sides of the ETH equality, replacing each thermal singularity with a pair of branch-cuts. Via the branch-cuts, the vacuum blocks are connected to infinitely many additional "saddles". We discuss and verify how such a violent modification of the analytic structure leads to a natural guess for the blocks at finite $c$: a series of zeros that condense into branch cuts as $c\to\infty$. We also discuss some interesting evidence connecting these to the Stokes phenomenon, i.e. non-perturbative $e^{-c}$ effects. As a related aspect of these probe modifications, we also compute the Renyi entropy $S_n$ in high-energy eigenstates on a circle. For subsystems much larger than the thermal length, we obtain a WKB solution to the monodromy problem, and deduce from it the entanglement spectrum.
Probing beyond ETH at large $c$
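To unpack the "forbidden singularities": at leading order in large $c$, the HHLL vacuum block reproduces the standard thermal two-point function of the light probe,

\[
\langle \mathcal{O}_L(t)\,\mathcal{O}_L(0)\rangle_{\beta}
\;\propto\;
\left[\frac{\beta}{\pi}\,\sinh\!\left(\frac{\pi t}{\beta}\right)\right]^{-2h_L},
\]

which is periodic in imaginary time $t \to t + i\beta$ and therefore singular at $t = i n\beta$ for every integer $n$. In a pure energy eigenstate only the coincident-point singularity at $n=0$ is allowed, so the images at $n \neq 0$ are "forbidden"; the $\mathcal{O}(h_L/c)$ probe corrections described in the abstract resolve this tension by replacing each forbidden singularity with a pair of branch points.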