Can we study hot QCD using nuclear collisions? Can we learn about metallic hydrogen from the impact of comet Shoemaker-Levy 9 on Jupiter? The answer to both questions may surprise you! I summarize progress in relativistic heavy-ion theory reported at DPF '94 in the parallel sessions.
A Euclidean first-passage percolation (FPP) model describing competing growth between $k$ different types of infection is considered. We focus on the long-time behavior of this multi-type growth process and derive multi-type shape results related to its morphology.
The R package IBMPopSim aims to simulate the random evolution of heterogeneous populations using stochastic Individual-Based Models (IBMs). The package enables users to simulate population evolution in which individuals are characterized by their age and some characteristics, and the population is modified by different types of events, including births/arrivals, deaths/exits, or changes of characteristics. The frequency at which an event can occur to an individual can depend on their age and characteristics, but also on the characteristics of other individuals (interactions). Such models have a wide range of applications in fields including actuarial science, biology, ecology and epidemiology. IBMPopSim overcomes the limitations of time-consuming IBM simulations by implementing new efficient algorithms based on thinning methods, which are compiled using the Rcpp package, while providing a user-friendly interface.
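IBMPopSim's compiled algorithms are not reproduced here, but the thinning idea the package builds on can be sketched in a few lines of Python (Lewis-Shedler thinning for a time-dependent event rate bounded by a constant; all names are illustrative):

```python
import random

def simulate_thinning(intensity, bound, t_max, rng):
    """Simulate event times on [0, t_max] for a rate function with
    intensity(t) <= bound, via Lewis-Shedler thinning: propose
    candidates at the constant rate `bound`, then accept each with
    probability intensity(t) / bound."""
    events, t = [], 0.0
    while True:
        t += rng.expovariate(bound)              # next candidate time
        if t > t_max:
            return events
        if rng.random() < intensity(t) / bound:  # accept / reject
            events.append(t)

rng = random.Random(42)
# Linearly increasing rate lambda(t) = t on [0, 10]; the expected
# number of events is the integral of the rate, i.e. 50.
runs = [len(simulate_thinning(lambda t: t, 10.0, 10.0, rng))
        for _ in range(200)]
mean_count = sum(runs) / len(runs)
```

In an IBM, the intensity would additionally depend on the individual's age and characteristics, and possibly on the rest of the population; the accept/reject structure stays the same.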
We consider nonlinear Schrödinger-type equations on $S^1$. In this paper, we obtain polynomial bounds on the growth in time of high Sobolev norms of their solutions. The key is to derive an iteration bound based on a frequency decomposition of the solution. This iteration bound is different from the one used earlier in the work of Bourgain, and is less dependent on the structure of the nonlinearity. We first look at the defocusing NLS equation with nonlinearity of degree $\geq 5$. For the quintic NLS, Bourgain derives stronger bounds using different techniques. However, our approach works for higher nonlinearities, where the techniques of Bourgain do not seem to apply. Furthermore, we study variants of the defocusing cubic NLS in which the complete integrability is broken. Among this class of equations, we consider in particular the Hartree equation, with a sufficiently regular convolution potential. For most of the equations that come from modifying the defocusing cubic NLS, we obtain better bounds than for the other equations, due to the fact that we can use higher modified energies, as in the work of the I-Team.
We investigate Bose-Einstein condensation of noninteracting gases in a harmonic trap with an off-center dimple potential. We specifically consider the case of a tight and deep dimple potential, which is modelled by a point interaction represented by a Dirac delta function. We analyze the atomic density, chemical potential, critical temperature and condensate fraction, as well as the role of the relative depth and position of the dimple potential, by performing numerical calculations.
Research in several fields now requires the analysis of data sets in which multiple high-dimensional types of data are available for a common set of objects. In particular, The Cancer Genome Atlas (TCGA) includes data from several diverse genomic technologies on the same cancerous tumor samples. In this paper we introduce Joint and Individual Variation Explained (JIVE), a general decomposition of variation for the integrated analysis of such data sets. The decomposition consists of three terms: a low-rank approximation capturing joint variation across data types, low-rank approximations for structured variation individual to each data type, and residual noise. JIVE quantifies the amount of joint variation between data types, reduces the dimensionality of the data and provides new directions for the visual exploration of joint and individual structures. The proposed method represents an extension of Principal Component Analysis and has clear advantages over popular two-block methods such as Canonical Correlation Analysis and Partial Least Squares. A JIVE analysis of gene expression and miRNA data on Glioblastoma Multiforme tumor samples reveals gene-miRNA associations and provides better characterization of tumor types. Data and software are available at https://genome.unc.edu/jive/
The quest to identify and observe Majorana fermions in physics and condensed-matter systems remains an important challenge. Here, we introduce a qubit (spin-$1/2$) from the occurrence of two delocalized zero-energy Majorana fermions in a model of two spins-$1/2$ on the Bloch sphere within the fractional one-half topological state. We address specific protocols in time with circularly polarized light and the protection of this delocalized spin-$1/2$ state in relation to quantum information protocols. We also show how disorder can play a positive and important role, allowing singlet-triplet transitions and resulting in an additional elongated region for the fractional phase, demonstrating the potential of this platform for applications in topologically protected quantum information. We generalize our approach to an array with Majorana fermions at the edges in a ring geometry.
We investigate a two-species Fermi gas in which one species is confined in a two-dimensional plane (2D) or one-dimensional line (1D) while the other is free in the three-dimensional space (3D). We discuss the realization of such a system with the interspecies interaction tuned to resonance. When the mass ratio is in the range 0.0351<m_2D/m_3D<6.35 for the 2D-3D mixture or 0.00646<m_1D/m_3D<2.06 for the 1D-3D mixture, the resulting system is stable against the Efimov effect and has universal properties. We calculate key quantities in the many-body phase diagram. Other possible scale-invariant systems with short-range few-body interactions are also elucidated.
In the development of oligodendrocytes in the central nervous system, the inner and outer tongues of the myelin sheath tend to be located within the same quadrant, a phenomenon known as the Peters quadrant mystery. In this study, we conduct in silico investigations to explore the possible mechanisms underlying the Peters quadrant mystery. A biophysically detailed model of oligodendrocytes was used to simulate the effect of the action-potential-induced electric field across the myelin sheath. Our simulation suggests that the paranodal channel connecting the inner and outer tongues forms a low-impedance route, inducing two high-current zones in the area around the inner and outer tongues. When the inner and outer tongues are located within the same quadrant, the interaction of these two high-current zones induces a maximum amplitude and a polarity reversal of the voltage upon the inner tongue, resulting in the same-quadrant phenomenon. This model indicates that the growth of myelin follows a simple principle: an external negative or positive E-field can promote or inhibit the growth of the inner tongue, respectively.
Despite several indirect confirmations of the existence of dark matter, the properties of a new dark matter particle are still largely unknown. Several experiments are currently searching for this particle underground in direct detection, in space and on earth in indirect detection and at the LHC. A confirmed signal could select a model for dark matter among the many extensions of the standard model. In this paper we present a short review of the public codes for computation of dark matter observables.
The discrete direct deconvolution model (D3M) is developed for the large-eddy simulation (LES) of turbulence. The D3M is a discrete approximation of the direct deconvolution model previously studied by Chang et al. ["The effect of sub-filter scale dynamics in large eddy simulation of turbulence," Phys. Fluids 34, 095104 (2022)]. For the first model, D3M-1, the original Gaussian filter is approximated by local discrete formulations of different orders, and the direct inverse of the discrete filter is applied to reconstruct the unfiltered flow field. The inverse of the original Gaussian filter can also be approximated by a local discrete formulation, leading to a fully local model, D3M-2. Compared to traditional models, including the dynamic Smagorinsky model (DSM) and the dynamic mixed model (DMM), D3M-1 and D3M-2 exhibit much larger correlation coefficients and smaller relative errors in a priori studies. In a posteriori validations, both D3M-1 and D3M-2 accurately predict turbulence statistics, including velocity spectra, probability density functions (PDFs) of sub-filter scale (SFS) stresses and SFS energy flux, as well as the time-evolving kinetic energy spectra, momentum thickness, and Reynolds stresses of a turbulent mixing layer. Moreover, the proposed models also capture well the spatial structures of the Q-criterion iso-surfaces. Thus, the D3M holds potential as an effective SFS modeling approach in turbulence simulations.
We propose a representation of Gaussian processes (GPs) based on powers of the integral operator defined by a kernel function; we call these stochastic processes integral Gaussian processes (IGPs). Sample paths from IGPs are functions contained within the reproducing kernel Hilbert space (RKHS) defined by the kernel function; in contrast, sample paths from the standard GP are not functions within the RKHS. We develop computationally efficient non-parametric regression models based on IGPs. The main innovation in our regression algorithm is the construction of a low-dimensional subspace that captures the information most relevant to explaining variation in the response. We use ideas from supervised dimension reduction to compute this subspace. The proposed construction yields significant improvements in the computational complexity of estimating kernel hyper-parameters as well as a reduction in the prediction variance.
The decays $\chi_{c1} \rightarrow J/\psi \mu^+ \mu^-$ and $\chi_{c2} \rightarrow J/\psi \mu^+ \mu^-$ are observed and used to study the resonance parameters of the $\chi_{c1}$ and $\chi_{c2}$ mesons. The masses of these states are measured to be $m(\chi_{c1}) = 3510.71 \pm 0.04(\mathrm{stat}) \pm 0.09(\mathrm{syst})\,\mathrm{MeV}$ and $m(\chi_{c2}) = 3556.10 \pm 0.06(\mathrm{stat}) \pm 0.11(\mathrm{syst})\,\mathrm{MeV}$, where the knowledge of the momentum scale for charged particles dominates the systematic uncertainty. The momentum-scale uncertainties largely cancel in the mass difference $m(\chi_{c2}) - m(\chi_{c1}) = 45.39 \pm 0.07(\mathrm{stat}) \pm 0.03(\mathrm{syst})\,\mathrm{MeV}$. The natural width of the $\chi_{c2}$ meson is measured to be $$\Gamma(\chi_{c2}) = 2.10 \pm 0.20(\mathrm{stat}) \pm 0.02(\mathrm{syst})\,\mathrm{MeV}\,.$$ These results are in good agreement with and have comparable precision to the current world averages.
We report results of muon spin rotation measurements performed on the ferromagnetic semiconductor EuO, which is one of the best approximations to a localized ferromagnet. We argue that implanted muons are sensitive to the internal field primarily through a combination of hyperfine and Lorentz fields. The temperature dependences of the internal field and the relaxation rate have been measured and are compared with previous theoretical predictions.
In this paper we propose a method to solve the Kadomtsev--Petviashvili equation based on splitting the linear part of the equation from the nonlinear part. The linear part is treated using FFTs, while the nonlinear part is approximated using a semi-Lagrangian discontinuous Galerkin approach of arbitrary order. We demonstrate the efficiency and accuracy of the numerical method by providing a range of numerical simulations. In particular, we find that our approach can outperform the numerical methods considered in the literature by up to a factor of five. Although we focus on the Kadomtsev--Petviashvili equation in this paper, the proposed numerical scheme can be extended to a range of related models as well.
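The semi-Lagrangian discontinuous Galerkin component is beyond the scope of a short example, but the splitting principle itself — advancing the linear and nonlinear parts with separate closed-form flows — can be illustrated on a toy ODE $y' = ay - y^3$ whose two subflows are both exact (a sketch under these simplifying assumptions, not the paper's scheme):

```python
import math

def flow_linear(y, h, a=1.0):
    # Exact flow of the linear part y' = a*y over a step h.
    return y * math.exp(a * h)

def flow_cubic(y, h):
    # Exact flow of the nonlinear part y' = -y**3 over a step h:
    # y(h) = y0 / sqrt(1 + 2*y0**2*h).
    return y / math.sqrt(1.0 + 2.0 * y * y * h)

def strang_step(y, h):
    # Strang splitting: half linear step, full nonlinear step,
    # half linear step -- second-order accurate overall.
    y = flow_linear(y, h / 2)
    y = flow_cubic(y, h)
    return flow_linear(y, h / 2)

def integrate(y0, t_end, n_steps):
    y, h = y0, t_end / n_steps
    for _ in range(n_steps):
        y = strang_step(y, h)
    return y

# A fine-step run serves as the reference; halving the step size
# should cut the splitting error by roughly four.
ref = integrate(0.5, 1.0, 1 << 16)
err_coarse = abs(integrate(0.5, 1.0, 16) - ref)
err_fine = abs(integrate(0.5, 1.0, 32) - ref)
```

For the Kadomtsev--Petviashvili equation, the linear subflow is applied in Fourier space via FFTs rather than in closed form, but the composition of substeps follows the same pattern.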
Experiments require human decisions in the design process, which in turn are reformulated and summarized as inputs into a system (computational or otherwise) to generate the experimental design. I leverage this system to promote a language of experimental designs by proposing a novel computational framework, called "the grammar of experimental designs", to specify experimental designs based on an object-oriented programming system that declaratively encapsulates the experimental structure. The framework aims to engage human cognition by building experimental designs with modular functions that modify a targeted singular element of the experimental design object. The syntax and semantics of the framework are built upon considerations from multiple perspectives. While the core framework is language-agnostic, it is implemented in the `edibble` R package. A range of examples demonstrates the utility of the framework.
Liouville field theory on the pseudosphere is considered (Dirichlet conditions). We compute explicitly the bulk-boundary structure constant with two different methods: first we use a suggestion made by Hosomichi in JHEP 0111 (2001) that relates this quantity directly to the bulk-boundary structure constant with Neumann conditions; then we perform a direct computation. Agreement is found.
Let $G$ be a simple connected graph of order $n$. The distance Laplacian matrix $D^{L}(G)$ is defined as $D^L(G)=Diag(Tr)-D(G)$, where $Diag(Tr)$ is the diagonal matrix of vertex transmissions and $D(G)$ is the distance matrix of $G$. The eigenvalues of $D^{L}(G)$ are the distance Laplacian eigenvalues of $G$ and are denoted by $\partial_{1}^{L}(G), \partial_{2}^{L}(G),\dots,\partial_{n}^{L}(G)$. The \textit{distance Laplacian spread} $DLS(G)$ of a connected graph $G$ is the difference between the largest and the second smallest distance Laplacian eigenvalues, that is, $\partial_{1}^{L}(G)-\partial_{n-1}^{L}(G)$. We obtain bounds for $DLS(G)$ in terms of the Wiener index $W(G)$, the order $n$ and the maximum transmission degree $Tr_{max}(G)$ of $G$, and characterize the extremal graphs. We obtain two lower bounds for $DLS(G)$: the first in terms of the order, diameter and Wiener index of the graph, and the second in terms of the order, maximum degree and independence number of the graph. For a connected $k$-partite graph $G$, $k\leq n-1$, with $n$ vertices having disconnected complement, we show that $DLS(G)\geq \Big\lfloor \frac{n}{k}\Big\rfloor$ with equality if and only if $G$ is a complete $k$-partite graph with independent classes of equal cardinality and $n \equiv 0 \pmod k$.
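As a concrete illustration (not part of the paper), the matrix $D^L(G)=Diag(Tr)-D(G)$ can be assembled from BFS distances; a minimal Python sketch:

```python
from collections import deque

def bfs_distances(adj, src):
    """Shortest-path distances from src in an unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def distance_laplacian(adj):
    """D^L(G) = Diag(Tr) - D(G): vertex transmissions on the
    diagonal minus the distance matrix."""
    nodes = sorted(adj)
    D = [[bfs_distances(adj, u)[v] for v in nodes] for u in nodes]
    Tr = [sum(row) for row in D]  # transmission = row sum of D(G)
    n = len(nodes)
    return [[(Tr[i] if i == j else 0) - D[i][j] for j in range(n)]
            for i in range(n)]

# Path graph P3: 0 - 1 - 2
DL = distance_laplacian({0: [1], 1: [0, 2], 2: [1]})
```

Every row of $D^L(G)$ sums to zero, so $0$ is always a distance Laplacian eigenvalue, consistent with the notation $\partial_n^L(G)$ for the smallest one.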
HS0705+6700 (also identified as V470 Cam) is a short period (2.3 h) post common envelope detached eclipsing sdB binary system which exhibits transit time variations (TTVs) of a cyclical nature. We report a further 25 timings of light minima and show that our new TTVs support and extend this cyclical pattern to 1.6 periods. We examine possible causes of the observed TTVs and confirm that the presence of a third, and possibly a fourth, body could provide an elegant explanation of these cyclical variations. However other non-circumbinary mechanisms, e.g. Applegate magnetic dynamo effects, will remain possible contenders until sufficient data has been accumulated to demonstrate that the periodicity of the TTVs is time independent.
With the widespread use of mobile computing devices in contemporary society, our trajectories in the physical space and the virtual world are increasingly closely connected. Using anonymous smartphone data from $1 \times 10^5$ users over 30 days, we construct a mobility network and an attention network to study the correlations between online and offline human behaviours. In the mobility network, nodes are physical locations and edges represent movements between locations; in the attention network, nodes are websites and edges represent users switching between websites. We apply the box-covering method to renormalise the networks. The investigated network properties include the box size $l_B$ and the number of boxes $N(l_B)$. We find two universal classes of behaviour: the mobility network features a small-world property, $N(l_B) \simeq e^{-l_B}$, whereas the attention network is characterised by a self-similar property, $N(l_B) \simeq l_B^{-\gamma}$. In particular, as the box size $l_B$ increases, the degree correlation of the mobility network changes from positive to negative, which indicates that the mobility network has a two-layer structure. We use the results of network renormalisation to detect communities and map the structure of the mobility network. Further, we locate the most relevant websites visited in these communities and identify three typical location-based behaviours: shopping, dating, and taxi-calling. Finally, we offer a revised geometric network model to explain our findings from the perspective of spatially constrained attachment.
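The abstract does not spell out the box-covering implementation; a minimal greedy variant (one of several standard heuristics, names illustrative) that computes $N(l_B)$ might look like:

```python
from collections import deque

def all_pairs_distances(adj):
    # BFS from every node of an unweighted graph.
    dist = {}
    for src in adj:
        d = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        dist[src] = d
    return dist

def greedy_box_count(adj, l_B):
    """Greedily cover the graph with boxes: every pair of nodes in a
    box must be at distance < l_B. Returns the number of boxes N(l_B)."""
    dist = all_pairs_distances(adj)
    boxes = []
    for node in sorted(adj):
        for box in boxes:
            if all(dist[node][m] < l_B for m in box):
                box.append(node)
                break
        else:
            boxes.append([node])
    return len(boxes)

# Path graph with 8 nodes: 0 - 1 - ... - 7
path = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 7] for i in range(8)}
```

Plotting $N(l_B)$ against $l_B$ then distinguishes the exponential (small-world) decay from the power-law (self-similar) decay described above.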
Experiments by several groups during the past decade have shown that a molten polymer nanofilm subject to a large transverse thermal gradient undergoes spontaneous formation of periodic nanopillar arrays. The prevailing explanation is that coherent reflections of acoustic phonons within the film cause a periodic modulation of the radiation pressure which enhances pillar growth. By exploring a deformational instability of particular relevance to nanofilms, we demonstrate that thermocapillary forces play a crucial role in the formation process. Analytic and numerical predictions show good agreement with the pillar spacings obtained in experiment. Simulations of the interface equation further determine the rate of pillar growth of importance to technological applications.
We establish some new existence results for global surfaces of section of dynamically convex Reeb flows on the three-sphere. These sections often have genus, and are the result of a combination of pseudo-holomorphic curve methods with some elementary ergodic methods.
In view of the new (preliminary) search results for instanton-induced events at HERA from the H1 collaboration, we present a brief discussion of (controllable) theoretical uncertainties, both in the event topology and the calculated rate.
Consider a tree $T=(V,E)$ with root $\circ$ and edge length function $\ell:E\to\mathbb{R}_+$. The phylogenetic covariance matrix of $T$ is the matrix $C$ with rows and columns indexed by $L$, the leaf set of $T$, with entries $C(i,j):=\sum_{e\in[i\wedge j,\circ]}\ell(e)$, for each $i,j\in L$. Recent work [15] has shown that the phylogenetic covariance matrix of a large, random binary tree $T$ is significantly sparsified with overwhelmingly high probability under a change-of-basis with respect to the so-called Haar-like wavelets of $T$. This finding notably enables manipulating the spectrum of covariance matrices of large binary trees without the necessity to store them in computer memory but instead performing two post-order traversals of the tree. Building on the methods of [15], this manuscript further advances their sparsification result to encompass the broader class of $k$-regular trees, for any given $k\ge2$. This extension is achieved by refining existing asymptotic formulas for the mean and variance of the internal path length of random $k$-regular trees, utilizing hypergeometric function properties and identities.
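To make the definition concrete (an illustration, not code from [15]), the entry $C(i,j)$ sums the edge lengths shared by the root-to-$i$ and root-to-$j$ paths, i.e. the path from the root down to $i\wedge j$:

```python
def phylo_covariance(parent, length, leaves):
    """parent[v] is v's parent (the root maps to None); length[v] is
    the length of the edge from parent[v] to v. C(i, j) sums the
    lengths of edges on the path from the root to the most recent
    common ancestor of i and j."""
    def path_edges(v):
        edges = set()
        while parent[v] is not None:
            edges.add(v)   # identify each edge by its child endpoint
            v = parent[v]
        return edges

    C = {}
    for i in leaves:
        for j in leaves:
            shared = path_edges(i) & path_edges(j)
            C[i, j] = sum(length[v] for v in shared)
    return C

# Tiny tree: root -> a (length 1), root -> v (length 2),
#            v -> b (length 1), v -> c (length 3).
parent = {"root": None, "a": "root", "v": "root", "b": "v", "c": "v"}
length = {"a": 1.0, "v": 2.0, "b": 1.0, "c": 3.0}
C = phylo_covariance(parent, length, ["a", "b", "c"])
```

The diagonal entry $C(i,i)$ is just the depth of leaf $i$, and leaves in disjoint subtrees of the root have covariance zero.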
We prove some new bounds for the maximum of the Riemann zeta-function on very short segments of the critical line. All theorems assume the Riemann hypothesis.
Biological data mainly comprises deoxyribonucleic acid (DNA) and protein sequences, the biomolecules present in all cells of human beings. Due to its self-replicating property, DNA is a key constituent of the genetic material found in all living creatures, containing the genetic information required for the functioning and development of all living organisms. Storing the DNA data of a single person requires about 10 CD-ROMs. Moreover, this size is increasing constantly, as more and more sequences are added to the public databases. This abundant increase in sequence data raises challenges for precise information extraction, since many data analysis and visualization tools do not support processing of such huge amounts of data. To reduce the size of DNA and protein sequences, many scientists have introduced various types of sequence compression algorithms, such as compress or gzip, Context Tree Weighting (CTW), Lempel-Ziv-Welch (LZW), arithmetic coding, run-length encoding and substitution methods. These techniques have contributed substantially to minimizing the volume of biological datasets. On the other hand, traditional compression techniques are not well suited to the compression of these types of sequential data. In this paper, we explore diverse types of techniques for the compression of large amounts of DNA sequence data. Our analysis reveals that efficient techniques not only reduce the size of the sequence but also avoid any information loss. The review of existing studies also shows that compression of DNA sequences is significant for understanding the critical characteristics of DNA data in addition to improving storage efficiency and data transmission. In addition, the compression of protein sequences remains a challenge for the research community.
The major parameters for the evaluation of these compression algorithms include the compression ratio and running time complexity.
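None of the surveyed algorithms is reproduced here; as a simple baseline, the classic two-bit encoding (each of A, C, G, T mapped to two bits) already achieves a 4:1 compression ratio over 8-bit ASCII text. A Python sketch:

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASES = "ACGT"

def pack(seq):
    """Pack a DNA string into bytes, four bases per byte (2 bits each)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        byte = 0
        for base in chunk:
            byte = (byte << 2) | CODE[base]
        byte <<= 2 * (4 - len(chunk))  # left-align a partial last chunk
        out.append(byte)
    return bytes(out)

def unpack(data, n):
    """Recover the first n bases from packed bytes."""
    bases = []
    for byte in data:
        bases.extend(BASES[(byte >> shift) & 3] for shift in (6, 4, 2, 0))
    return "".join(bases[:n])

seq = "ACGTACGTGATTACA"   # 15 bases -> 4 packed bytes
packed = pack(seq)
```

The specialized algorithms surveyed in the paper improve on this fixed 2-bits-per-base baseline by exploiting repeats and statistical structure in the sequence.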
The ability to learn new concepts continually is necessary in this ever-changing world. However, deep neural networks suffer from catastrophic forgetting when learning new categories. Many works have been proposed to alleviate this phenomenon, whereas most of them either fall into the stability-plasticity dilemma or incur substantial computation or storage overhead. Inspired by the gradient boosting algorithm, which gradually fits the residuals between the target model and the previous ensemble model, we propose a novel two-stage learning paradigm FOSTER, empowering the model to learn new categories adaptively. Specifically, we first dynamically expand new modules to fit the residuals between the target and the output of the original model. Next, we remove redundant parameters and feature dimensions through an effective distillation strategy to maintain the single backbone model. We validate our method FOSTER on CIFAR-100 and ImageNet-100/1000 under different settings. Experimental results show that our method achieves state-of-the-art performance. Code is available at: https://github.com/G-U-N/ECCV22-FOSTER.
Systematic error, which is not determined by chance, often refers to the inaccuracy (involving either the observation or the measurement process) inherent to a system. In this paper, we exhibit some long-neglected but frequently occurring adversarial examples caused by systematic error. More specifically, we find that a trained neural network classifier can be fooled by inconsistent implementations of image decoding and resizing. The tiny differences between these implementations often cause an accuracy drop from training to deployment. To benchmark these real-world adversarial examples, we propose the ImageNet-S dataset, which enables researchers to measure a classifier's robustness to systematic error. For example, we find a normal ResNet-50 trained on ImageNet can have a 1%-5% accuracy difference due to systematic error. Together, our evaluation and dataset may aid future work toward real-world robustness and practical generalization.
In this paper, we study the multi-server setting of the \emph{Private Information Retrieval with Coded Side Information (PIR-CSI)} problem. In this problem, there are $K$ messages replicated across $N$ servers, and there is a user who wishes to download one message from the servers without revealing any information to any server about the identity of the requested message. The user has a side information which is a linear combination of a subset of $M$ messages in the database. The parameter $M$ is known to all servers in advance, whereas the indices and the coefficients of the messages in the user's side information are unknown to any server \emph{a priori}. We focus on a class of PIR-CSI schemes, referred to as \emph{server-symmetric schemes}, in which the queries/answers to/from different servers are symmetric in structure. We define the \emph{rate} of a PIR-CSI scheme as its minimum download rate among all problem instances, and define the \emph{server-symmetric capacity} of the PIR-CSI problem as the supremum of rates over all server-symmetric PIR-CSI schemes. Our main results are as follows: (i) when the side information is not a function of the user's requested message, the capacity is given by ${(1+{1}/{N}+\dots+{1}/{N^{\left\lceil \frac{K}{M+1}\right\rceil -1}})^{-1}}$ for any ${1\leq M\leq K-1}$; and (ii) when the side information is a function of the user's requested message, the capacity is equal to $1$ for $M=2$ and $M=K$, and it is equal to ${N}/{(N+1)}$ for any ${3 \leq M \leq K-1}$. The converse proofs rely on new information-theoretic arguments, and the achievability schemes are inspired by our recently proposed scheme for single-server PIR-CSI as well as the Sun-Jafar scheme for multi-server PIR.
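The capacity expression in case (i) is easy to evaluate numerically; a small illustrative helper (not from the paper):

```python
from math import ceil

def pir_csi_capacity(N, K, M):
    """Server-symmetric capacity when the side information does not
    involve the requested message (case (i) above):
    C = (1 + 1/N + ... + 1/N**(ceil(K/(M+1)) - 1))**(-1)."""
    terms = ceil(K / (M + 1))
    return 1.0 / sum(N ** -i for i in range(terms))

# N=2 servers, K=3 messages, M=1: ceil(3/2) = 2 terms,
# so C = 1/(1 + 1/2) = 2/3.
c1 = pir_csi_capacity(2, 3, 1)
# N=2, K=4, M=3: ceil(4/4) = 1 term, so C = 1 -- the side
# information is large enough that a single download suffices.
c2 = pir_csi_capacity(2, 4, 3)
```

Setting $M=0$ recovers the Sun-Jafar capacity of plain multi-server PIR, consistent with the side information becoming useless.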
The infrared spectrum is used as an experimental target to improve the TIP4P/$\epsilon$ model, adding a harmonic potential $U(r)$ to all bonds and a harmonic potential $U(\theta)$ to the angle formed by the hydrogen and oxygen atoms of the water molecule. This flexibility gives the water molecules the ability to change their structure at different temperatures and pressures, and, in the bulk liquid, the distribution of the dipole moment. This distribution helps to better reproduce experimental data that the rigid models cannot describe. The rigid water models with 3 and 4 sites have limitations in describing all the experimental properties, because they cannot capture how the dipole moment is distributed across the different thermodynamic phases. The new flexible TIP4P/$\epsilon$ Flex water model is compared to improved TIP4P models and the OPC model.
We report on the unveiling of the nature of the unidentified X-ray source 3XMM J005450.3-373849 as a Seyfert-2 galaxy located behind the spiral galaxy NGC 300, using Hubble Space Telescope data, new spectroscopic Gemini observations and available XMM-Newton and Chandra data. We show that the X-ray source is positionally coincident with an extended optical source, composed of a marginally resolved nucleus/bulge surrounded by an elliptical disc-like feature and two symmetrical outer rings. The optical spectrum is typical of a Seyfert-2 galaxy redshifted to z=0.222 +/- 0.001, which confirms that the source is not physically related to NGC 300. At this redshift the source would be located at 909+/-4 Mpc (comoving distance in the standard model). The X-ray spectra of the source are well fitted by an absorbed power-law model. By tying $N_\mathrm{H}$ between the six available spectra, we found a variable index $\Gamma$ running from ~2 in the years 2000-2001 to 1.4-1.6 in the 2005-2014 period. Alternatively, by tying $\Gamma$, we found variable absorption columns of N_H ~ 0.34 x $10^{22}$ cm$^{-2}$ in 2000-2001, and 0.54-0.75 x $10^{22}$ cm$^{-2}$ in the 2005-2014 period. Although we cannot distinguish between a spectral or an absorption origin, from the derived unabsorbed X-ray fluxes we can confirm the presence of long-term X-ray variability. Furthermore, the unabsorbed X-ray luminosities of 0.8-2 x 10$^{43}$ erg s$^{-1}$ derived in the X-ray band are in agreement with a weakly obscured Seyfert-2 AGN at $z \approx 0.22$.
We consider a branching particle system where particles reproduce according to the pure-birth Yule process with birth rate $L$, conditioned on the observed number of particles being equal to $n$. Particles are assumed to move independently on the real line according to Brownian motion with local variance $s^2$. In this paper we treat the $n$ particles as a sample of related species. The spatial Brownian motion of a particle describes the development of a trait value of interest (e.g. log body size). We propose an unbiased estimator $R_n^2$ of the evolutionary rate $r^2 = s^2/L$. The estimator $R_n^2$ is proportional to the sample variance $S_n^2$ computed from the $n$ trait values. We find an approximate formula for the standard error of $R_n^2$ based on a neat asymptotic relation for the variance of $S_n^2$.
In this article, we examine how the structure of soluble groups of infinite torsion-free rank with no section isomorphic to the wreath product of two infinite cyclic groups can be analysed. As a corollary, we obtain that if a finitely generated soluble group has a defined Krull dimension and has no sections isomorphic to the wreath product of two infinite cyclic groups then it is a group of finite torsion-free rank. There are further corollaries including applications to return probabilities for random walks. The paper concludes with constructions of examples that can be compared with recent constructions of Brieussel and Zheng.
We explore the application of generating symmetries, i.e. symmetries that depend on a parameter, to integrable hyperbolic third order equations, and in particular to consistent pairs of such equations as introduced by Adler and Shabat (AS). Our main result is that different infinite hierarchies of symmetries for these equations can arise from a single generating symmetry by expansion about different values of the parameter. We illustrate this, and study in depth the symmetry structure, for two examples. The first is an equation related to the potential KdV equation taken from AS. The second is a more general hyperbolic equation than the kind considered in AS. Both equations depend on a parameter, and when this parameter vanishes they become part of a consistent pair. When this happens, the nature of the expansions of the generating symmetries needed to derive the hierarchies also changes.
We show that the expected order of RNA saturated secondary structures of size $n$ is $\log_4 n\,(1+O(\frac{\log_2 n}{n}))$, if we select the saturated secondary structure uniformly at random. Furthermore, the order of saturated secondary structures is sharply concentrated around its mean. As a consequence, saturated structures and structures in the traditional model behave the same way with respect to the expected order. Thus we may conclude that the traditional model has already drawn the right picture, and conclusions inferred from it with respect to the order (the overall shape) of a structure remain valid even when enforcing saturation (at least in expectation).
We present initial results of a spectroscopic study of the Pistol and of the cocoon stars in the Quintuplet Cluster. From ISOCAM CVF 5--17 micron spectroscopy of the field of the Pistol Star, we have discovered a nearly spherical shell of hot dust surrounding this star, a probable LBV. This shell is most prominent at lambda > 12 micron, and its morphology clearly indicates that the shell is stellar ejecta. Emission line images show that most of the ionised material is along the northern border of this shell, and its morphology is very similar to that of the Pistol HII region (Yusef-Zadeh & Morris, 1987, AJ, 94, 1178). We thus confirm that the ionisation comes from very hot stars in the core of the Quintuplet Cluster. An SWS spectrum of the Pistol Nebula indicates a harder ionising radiation than could be provided by the Pistol Star, but which is consistent with ionisation from Wolf-Rayet stars in the Quintuplet Cluster. The CVF 5--17 micron spectra of the cocoon stars in the Quintuplet do not show any emission feature that could help elucidate their nature.
The mechanism by which critical end points of first-order valence transitions (FOVT) are controlled by a magnetic field is discussed. We demonstrate that the critical temperature is suppressed to a quantum critical point (QCP) by a magnetic field. These results explain the field dependence of the isostructural FOVT observed in Ce metal and YbInCu_4. A magnetic-field scan can lead to re-entry into a critical valence-fluctuation region. Even in intermediate-valence materials, a QCP is induced by applying a magnetic field, at which the magnetic susceptibility also diverges. The driving force of the field-induced QCP is shown to be a cooperative phenomenon of the Zeeman effect and the Kondo effect, which creates an energy scale distinct from the Kondo temperature. The key concept is that the closeness to the QCP of the FOVT is crucial to understanding Ce- and Yb-based heavy fermions. It explains the peculiar magnetic and transport responses in CeYIn_5 (Y=Ir, Rh) and the metamagnetic transition in YbXCu_4 for X=In, as well as the sharp contrast between X=Ag and Cd.
Dynamical similarities are non-standard symmetries found in a wide range of physical systems that identify solutions related by a change of scale. In this paper we show through a series of examples how this symmetry extends to the space of couplings, as measured through observations of a system. This can be exploited to focus on observations that can be used to distinguish between different theories, and to identify those which give rise to identical physical evolutions. These can be reduced to a description which makes no reference to scale. The resultant systems can be derived from Herglotz's principle and generally exhibit friction. Here we demonstrate this through three example systems: the Kepler problem, the N-body system and Friedmann-Lema\^itre-Robertson-Walker cosmology.
The call-by-value lambda calculus can be endowed with permutation rules, arising from linear logic proof-nets, which have the advantage of unblocking some redexes that otherwise get stuck during reduction. We show that such an extension allows us to define a satisfying notion of B\"ohm(-like) tree and a theory of program approximation in the call-by-value setting. We prove that all lambda terms having the same B\"ohm tree are observationally equivalent, and characterize those B\"ohm-like trees arising as actual B\"ohm trees of lambda terms. We also compare this approach with Ehrhard's theory of program approximation based on the Taylor expansion of lambda terms, which translates each lambda term into a possibly infinite set of so-called resource terms. We provide necessary and sufficient conditions for a set of resource terms to be the Taylor expansion of a lambda term. Finally, we show that the normal form of the Taylor expansion of a lambda term can be computed by performing a normalized Taylor expansion of its B\"ohm tree. From this it follows that two lambda terms have the same B\"ohm tree if and only if the normal forms of their Taylor expansions coincide.
A simple, self-calibrating, rotating-waveplate polarimeter is largely insensitive to light intensity fluctuations and is shown to be useful for determining the Stokes parameters of light. This study shows how to minimize the in situ self-calibration time, the measurement time and the measurement uncertainty. The suggested methods are applied to measurements of spatial variations in the linear and circular polarizations of laser light passing through glass plates with a laser-intensity-dependent birefringence. These measurements are crucial for the ACME electron electric dipole moment experiment, requiring accuracies in the circular and linear polarization fractions of about 0.1% and 0.4%, with laser intensities up to 100 $\text{mW/mm}^2$ incident on the polarimeter.
The long-time asymptotics of the field-field correlator for radiation propagated through a medium composed of random point-like scatterers is studied using the Bethe-Salpeter equation. It is shown that for a plane source the fluctuation intensity (the zero spatial moment of the correlator) obeys a power-logarithmic stretched-exponential decay law, with the exponent and pre-exponent depending on the scattering angle. The spatial center of gravity and the dispersion of the correlator (the normalized first and second spatial moments, respectively) prove to diverge weakly as time tends to infinity. A spin analogy of this problem is discussed.
We resolve a number of long-standing open problems in online graph coloring. More specifically, we develop tight lower bounds on the performance of online algorithms for fundamental graph classes. An important contribution is that our bounds also hold for randomized online algorithms, for which hardly any results were known. Technically, we construct lower bounds for chordal graphs. The constructions then allow us to derive results on the performance of randomized online algorithms for the following further graph classes: trees, planar, bipartite, inductive, bounded-treewidth and disk graphs. Our results show that the best competitive ratio of both deterministic and randomized online algorithms is $\Theta(\log n)$, where $n$ is the number of vertices of a graph. Furthermore, we prove that this guarantee cannot be improved if an online algorithm has a lookahead of size $O(n/\log n)$ or access to a reordering buffer of size $n^{1-\epsilon}$, for any $0<\epsilon\leq 1$. A consequence of our results is that, for all of the above-mentioned graph classes except bipartite graphs, the natural $\textit{First Fit}$ coloring algorithm achieves an optimal performance, up to constant factors, among deterministic and randomized online algorithms.
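To make the First Fit algorithm referenced above concrete, here is a minimal sketch (our own illustration, not code from the paper): each arriving vertex greedily receives the smallest color not already used by a previously colored neighbor.

```python
def first_fit_color(vertices, adjacency):
    """Greedy First Fit: color vertices in arrival order with the
    smallest color not used by an already-colored neighbor."""
    color = {}
    for v in vertices:
        used = {color[u] for u in adjacency.get(v, ()) if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# Path 0-1-2-3 presented in order: First Fit uses two colors.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(first_fit_color([0, 1, 2, 3], adj))  # {0: 0, 1: 1, 2: 0, 3: 1}
```

The number of colors used depends on the presentation order, which is exactly what the competitive-ratio lower bounds in the paper exploit.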
Aims: This paper presents 2.5D numerical experiments of Alfv\'en wave phase mixing and aims to assess the effects of nonlinearities on wave behaviour and dissipation. In addition, this paper aims to quantify how effective the model presented in this work is at providing energy to the coronal volume. Methods: The model is presented and explored through the use of several numerical experiments which were carried out using the Lare2D code. The experiments study footpoint driven Alfv\'en waves in the neighbourhood of a two-dimensional x-type null point with initially uniform density and plasma pressure. A continuous sinusoidal driver with a constant frequency is used. Each experiment uses different driver amplitudes to compare weakly nonlinear experiments with linear experiments. Results: We find that the wave trains phase-mix owing to variations in the length of each field line and variations in the field strength. The nonlinearities reduce the amount of energy entering the domain, as they reduce the effectiveness of the driver, but they have relatively little effect on the damping rate (for the range of amplitudes studied). The nonlinearities produce density structures which change the natural frequencies of the field lines and hence cause the resonant locations to move. The shifting of the resonant location causes the Poynting flux associated with the driver to decrease. Reducing the magnetic diffusivity increases the energy build-up on the resonant field lines; however, it has little effect on the total amount of energy entering the system. From an order of magnitude estimate, we show that the Poynting flux in our experiments is comparable to the energy requirements of the quiet Sun corona. However, a (possibly unphysically) large amount of magnetic diffusion was used, and it remains unclear whether the model is able to provide enough energy under actual coronal conditions.
This review describes in detail the essential techniques used in microscopic theories of spintronics. We have investigated the domain wall dynamics induced by electric current based on the $s$-$d$ exchange model. The domain wall is treated as rigid and planar and is described by two collective coordinates: the position and the angle of the wall magnetization. The effect of conduction electrons on the domain wall dynamics is calculated in the case of a slowly varying spin structure (close to the adiabatic limit) by use of a gauge transformation. The spin-transfer torque and the force on the wall are expressed by Feynman diagrams and calculated systematically using non-equilibrium Green's functions, treating electrons fully quantum mechanically. The wall dynamics is discussed based on two coupled equations of motion derived for the two collective coordinates. The force is related to electron transport properties, resistivity, and the Hall effect. The effect of conduction-electron spin relaxation on the torque and wall dynamics is also studied.
We prove that the Navier-Stokes, the Euler and the Stokes equations admit a Lagrangian structure using the stochastic embedding of Lagrangian systems. These equations coincide with extremals of an explicit stochastic Lagrangian functional, i.e. they are stochastic Lagrangian systems in the sense of [Cresson-Darses, J. Math. Phys. 48, 072703 (2007)].
We show that the orbifold Chow ring of a root stack over a well-formed weighted projective space can be naturally seen as the Jacobian algebra of a function on a singular variety given by a partial compactification of its Ginzburg-Landau model.
We experimentally demonstrate control of the rate of spontaneous emission in a tunable hybrid photonic system that consists of two canonical building blocks for spontaneous emission control, an optical antenna and a mirror, each providing a modification of the local density of optical states (LDOS). We couple fluorophores to a plasmonic antenna to create a superemitter with an enhanced decay rate. In a superemitter analog of the seminal Drexhage experiment we probe the LDOS of a nanomechanically approached mirror. Due to the electrodynamic interaction of the antenna with its own mirror image the superemitter traces the inverse LDOS of the mirror, in stark contrast to a bare source, whose decay rate is proportional to the mirror LDOS.
Inspired by a recent work of M. Nakasuji, O. Phuksuwan and Y. Yamasaki we combine interpolated multiple zeta values and Schur multiple zeta values into one object, which we call interpolated Schur multiple zeta values. Our main result will be a Jacobi-Trudi formula for a certain class of these new objects. This generalizes an analogous result for Schur multiple zeta values and implies algebraic relations between interpolated multiple zeta values.
In this short note, we show that the metric of Deligne's pairing is continuous.
A causal set C can describe a discrete spacetime, but this discrete spacetime is not quantum, because C is endowed with Boolean logic, as it does not allow cycles. In a quasi-ordered set Q, cycles are allowed. In this paper, we consider a subset QC of a quasi-ordered set Q, whose elements are all the cycles. In QC, which is endowed with quantum logic, each cycle of maximal outdegree N in a node is associated with N entangled qubits. Then QC describes a quantum computing spacetime. This structure, which is non-local and non-causal, can be understood as a proto-spacetime. Micro-causality and locality can be restored in the subset U of Q whose elements are unentangled qubits, which we interpret as the states of quantum spacetime. The mapping of quantum spacetime into proto-spacetime is given by the action of the XOR gate. Moreover, a mapping is possible from the Boolean causal set into U by the action of the Hadamard gate. In particular, the causal order defined on the elements of U induces the causal evolution of spin networks.
Automatic translation from signed to spoken languages is an interdisciplinary research domain, lying at the intersection of computer vision, machine translation and linguistics. Nevertheless, research in this domain is performed mostly by computer scientists in isolation. As the domain is becoming increasingly popular - the majority of scientific papers on the topic of sign language translation have been published in the past three years - we provide an overview of the state of the art as well as some required background in the different related disciplines. We give a high-level introduction to sign language linguistics and machine translation to illustrate the requirements of automatic sign language translation. We present a systematic literature review to illustrate the state of the art in the domain and then, harking back to the requirements, lay out several challenges for future research. We find that significant advances have been made on the shoulders of spoken language machine translation research. However, current approaches are often not linguistically motivated or are not adapted to the different input modality of sign languages. We explore challenges related to the representation of sign language data, the collection of datasets, the need for interdisciplinary research and requirements for moving beyond research, towards applications. Based on our findings, we advocate for interdisciplinary research and for basing future research on linguistic analysis of sign languages. Furthermore, the inclusion of deaf and hearing end users of sign language translation applications in use case identification, data collection and evaluation is of the utmost importance in the creation of useful sign language translation models. We recommend iterative, human-in-the-loop, design and development of sign language translation models.
The higher order Painleve system of type D^{(1)}_{2n+2} was proposed by Y. Sasano as an extension of the sixth Painleve equation for the affine Weyl group symmetry, with the aid of algebraic geometry for the Okamoto initial value space. In this article, we derive it as the monodromy preserving deformation of a Fuchsian system.
In this work we present measurements of permeability, effective porosity and tortuosity on a variety of rock samples using NMR/MRI of thermal and laser-polarized gas. Permeability and effective porosity are measured simultaneously using MRI to monitor the inflow of laser-polarized xenon into the rock core. Tortuosity is determined from measurements of the time-dependent diffusion coefficient using thermal xenon in sealed samples. The initial results from a limited number of rocks indicate inverse correlations between tortuosity and both effective porosity and permeability. Further studies to widen the number of types of rocks studied may eventually aid in explaining the poorly understood connection between permeability and tortuosity of rock cores.
We give integral representations for multiple Hermite and multiple Hermite polynomials of both type I and II. We also show how these are connected with double integral representations of certain kernels from random matrix theory.
Cellular networks equipped with mobile edge computing (MEC) servers can be beneficial for unmanned aerial vehicles (UAVs) with limited onboard computation power and battery lifetime. In this paper, we compare the energy consumption of a UAV connected to cellular MEC servers in various possible scenarios such as onboard/MEC processing or parallel computation. Using detailed 3GPP-based modeling, we provide a quantitative understanding of the most energy-efficient approach and its relation to communication technologies, computation factors, and mobility parameters. Our findings show that, across different frequencies from sub-6GHz to THz bands, the mmWave cellular MEC network is more energy efficient than the UAV's onboard processing for a broader range of network densities. Secondly, while the UAV propulsion power consumption is a non-decreasing function of velocity, the UAV movement cost can still be optimized to provide remarkable energy savings compared to hovering. Finally, our results show that the most energy-efficient approach is obtained when the mobility of the UAV is combined with efficient parallel onboard-MEC processing.
Galaxy mergers trigger both star formation and accretion onto the central supermassive black hole. As a result of subsequent energetic feedback processes, it has long been proposed that star formation may be promptly extinguished in galaxy merger remnants. However, this prediction of widespread, rapid quenching in late stage mergers has been recently called into question with modern simulations and has never been tested observationally. Here we perform the first empirical assessment of the long-predicted end phase in the merger sequence. Based on a sample of ~500 post-mergers identified from the Ultraviolet Near Infrared Optical Northern Survey (UNIONS), we show that the frequency of post-merger galaxies that have rapidly shut down their star formation following a previous starburst is 30-60 times higher than expected from a control sample of non-merging galaxies. No such excess is found in a sample of close galaxy pairs, demonstrating that mergers can indeed lead to a rapid halt to star formation, but that this process only manifests after coalescence.
Scaling in the dynamical properties of complex many-body systems has been of strong interest since turbulence phenomena became the subject of systematic mathematical studies. In this article, dynamical critical phenomena far from equilibrium are investigated with functional renormalisation group equations. The focus is set on scaling solutions of the stochastic driven-dissipative Burgers equation and their relation to solutions known in the literature for Burgers and Kardar-Parisi-Zhang dynamics. We furthermore relate superfluid as well as acoustic turbulence described by the Gross-Pitaevskii model to known analytic and numerical results for scaling solutions. In this way, the canonical Kolmogorov exponent 5/3 for the energy cascade in superfluid turbulence is obtained analytically. We also get first results for anomalous exponents of acoustic and quantum turbulence. These are consistent with existing experimental data. Our results should be relevant for future experiments with, e.g., exciton-polariton condensates in solid-state systems as well as with ultra-cold atomic gases.
Updating Kormendy & Kennicutt (2004, ARAA, 42, 603), we review internal secular evolution of galaxy disks. One consequence is the growth of pseudobulges that often are mistaken for true (merger-built) bulges. Many pseudobulges are recognizable as cold, rapidly rotating, disky structures. Bulges have Sersic function brightness profiles with index n > 2; most pseudobulges have n <= 2. Recognition of pseudobulges makes the biggest problem with cold dark matter galaxy formation more acute: How can hierarchical clustering make so many pure disk galaxies with no evidence for merger-built bulges? E. g., the giant Scd galaxies M101 and NGC 6946 have rotation velocities of V ~ 200 km/s but nuclear star clusters with velocity dispersions of 25 to 40 km/s. Within 8 Mpc of us, 11 of 19 galaxies with V > 150 km/s show no evidence for a classical bulge, while only 7 are ellipticals or have classical bulges. It is hard to understand how bulgeless galaxies could form as the quiescent tail of a distribution of merger histories. Our second theme is environmental secular evolution. We confirm that spheroidal galaxies have fundamental plane (FP) correlations that are almost perpendicular to those for bulges and ellipticals. Spheroidals are not dwarf ellipticals. Rather, their structural parameters are similar to those of late-type galaxies. We suggest that spheroidals are defunct late-type galaxies transformed by internal processes such as supernova-driven gas ejection and environmental processes such as secular harassment and ram-pressure stripping. Minus spheroidals, the FP of ellipticals and bulges has small scatter. With respect to these, pseudobulges are larger and less dense.
Analyzing job hopping behavior is important for understanding the job preference and career progression of working individuals. When analyzed at the workforce population level, job hop analysis helps to gain insights into talent flow among different jobs and organizations. Traditionally, surveys are conducted on job seekers and employers to study job hop behavior. Beyond surveys, job hop behavior can also be studied in a highly scalable and timely manner using a data-driven approach in response to the fast-changing job landscape. Fortunately, the advent of online professional networks (OPNs) has made it possible to perform a large-scale analysis of talent flow. In this paper, we present a new data analytics framework to analyze the talent flow patterns of close to 1 million working professionals from three different countries/regions using their publicly-accessible profiles in an established OPN. As OPN data are originally generated for professional networking applications, our proposed framework re-purposes the same data for a different analytics task. Prior to performing job hop analysis, we devise a job title normalization procedure to mitigate the amount of noise in the OPN data. We then devise several metrics to measure the amount of work experience required to take up a job, to determine the existence duration of the job (also known as the job age), and the correlation between these metrics and the propensity of hopping. We also study how job hop behavior is related to job promotion/demotion. Lastly, we perform connectivity analysis at the job and organization levels to derive insights on talent flow as well as job and organizational competitiveness.
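As a purely hypothetical illustration of one of the metrics described above (the data, names and exact definition here are ours, not the paper's actual pipeline), the work experience required to take up a job title could be estimated from career stints like this:

```python
from datetime import date

# Hypothetical toy career stints: (person, job_title, start, end).
stints = [
    ("a", "data analyst",   date(2015, 1, 1), date(2017, 1, 1)),
    ("a", "data scientist", date(2017, 1, 1), date(2020, 1, 1)),
    ("b", "data analyst",   date(2016, 6, 1), date(2019, 6, 1)),
]

def years(d1, d2):
    return (d2 - d1).days / 365.25

def required_experience(title):
    """Average years of prior work experience at the moment of taking up
    `title`, averaged over everyone who ever held it."""
    vals = []
    for person, t, start, _ in stints:
        if t != title:
            continue
        prior = sum(years(s, min(e, start))
                    for p, _t, s, e in stints
                    if p == person and s < start)
        vals.append(prior)
    return sum(vals) / len(vals) if vals else 0.0

print(round(required_experience("data scientist"), 2))  # 2.0
```

A real pipeline would first apply the paper's job title normalization step before aggregating stints across profiles.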
We study a possible gauge symmetry breaking pattern in an ${\rm SU}(7)$ grand unified theory, which describes the mass origins of all electrically charged SM fermions of the second and the third generations. Two intermediate gauge symmetries of ${\cal G}_{341}\equiv {\rm SU}(3)_c \otimes {\rm SU}(4)_W \otimes {\rm U}(1)_{X_0}$ and ${\cal G}_{331}\equiv {\rm SU}(3)_c \otimes {\rm SU}(3)_W \otimes {\rm U}(1)_{X_1}$ arise above the electroweak scale. SM fermion mass hierarchies between two generations can be obtained through a generalized seesaw mechanism. The mechanism can be achieved with suppressed symmetry breaking VEVs from multiple Higgs fields that are necessary to avoid tadpole terms in the Higgs potential. Some general features of the ${\rm SU}(7)$ fermion spectrum will be described, which include the existence of vectorlike fermions, the tree-level flavor changing weak currents between the SM fermions and heavy partner fermions, and the flavor non-universality between different SM generations from the extended weak sector.
In this paper, we study the algebraic structure of $(\sigma,\delta)$-polycyclic codes, defined as submodules in the quotient module $S/Sf$, where $S=R[x,\sigma,\delta]$ is the Ore extension ring, $f\in S$, and $R$ is a finite but not necessarily commutative ring. We establish that the Euclidean duals of $(\sigma,\delta)$-polycyclic codes are $(\sigma,\delta)$-sequential codes. By using $(\sigma,\delta)$-pseudo-linear transformations, we define the annihilator dual of $(\sigma,\delta)$-polycyclic codes. Then, we demonstrate that the annihilator duals of $(\sigma,\delta)$-polycyclic codes maintain their $(\sigma,\delta)$-polycyclic nature. Furthermore, we classify when two $(\sigma,\delta)$-polycyclic codes are Hamming isometrically equivalent. By employing Wedderburn polynomials, we introduce simple-root $(\sigma,\delta)$-polycyclic codes. Subsequently, we define the $(\sigma, \delta)$-Mattson-Solomon transform for this class of codes and we address the problem of decomposing these codes by using the properties of Wedderburn polynomials.
In the field of Life Sciences it is very common to deal with extremely complex systems, from both analytical and computational points of view, due to the unavoidable coupling of different interacting structures. As an example, angiogenesis has proved to be a highly complex and extremely interesting biomedical problem, due to the strong coupling between the kinetic parameters of the relevant branching-growth-anastomosis stochastic processes of the capillary network, at the microscale, and the family of interacting underlying biochemical fields, at the macroscale. In this paper an original, revisited conceptual stochastic model of tumor-driven angiogenesis is proposed, for which it is shown that complexity can be reduced by taking advantage of the intrinsic multiscale structure of the system: one may keep the stochasticity of the dynamics of the vessel tips at their natural microscale, whereas the dynamics of the underlying fields is given by a deterministic mean-field approximation obtained by averaging at a suitable mesoscale. While previous papers offered only a heuristic justification of this approach, in this paper a rigorous proof is given of the so-called "propagation of chaos", which leads to a mean-field approximation of the relevant stochastic measures associated with the vessel dynamics, and consequently of the underlying TAF field. As an important side result, the non-extinction of the random process of tips during any finite time interval is also proven.
Medical image segmentation is an increasingly popular area of research in medical imaging processing and analysis. However, many researchers who are new to the field struggle with basic concepts. This tutorial paper aims to provide an overview of the fundamental concepts of medical imaging, with a focus on Magnetic Resonance and Computerized Tomography. We will also discuss deep learning algorithms, tools, and frameworks used for segmentation tasks, and suggest best practices for method development and image analysis. Our tutorial includes sample tasks using public data, and accompanying code is available on GitHub (https://github.com/MICLab-Unicamp/Medical-ImagingTutorial). By sharing our insights gained from years of experience in the field and learning from relevant literature, we hope to assist researchers in overcoming the initial challenges they may encounter in this exciting and important area of research.
This paper is written because I have received several inquiry emails saying it is hard to achieve good results when applying token repetition learning techniques. If REP (proposed by me) or Pointer-Mixture (proposed by Jian Li) is applied directly to source code to decide all token repetitions, the model performance decreases sharply. As we use pre-order traversal to traverse the Abstract Syntax Tree (AST) to generate the token sequence, tokens corresponding to AST grammar are ignored when learning token repetition. For non-grammar tokens, there are many kinds: strings, chars, numbers and identifiers. For each kind of token, we tried to learn its repetition pattern and found that only identifiers have the property of token repetition. Among identifiers, there are also many kinds, such as variables, package names, method names, simple types, qualified types and qualified names. In practice, some kinds of identifiers, such as package names, method names, qualified names and qualified types, are unlikely to be repeated. Thus, we ignore these kinds of identifiers when learning token repetition. This step is crucial, but this important implementation trick was not clearly presented in the paper because we considered it trivial and feared that too many details would bother readers. We give the GitHub address of our model in our conference paper, and readers can check the description and implementation in that repository. Thus, in this paper, we supplement the important implementation optimization details for the already published papers.
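The filtering step described above could look roughly like the following sketch (illustrative token kinds and names; this is not the actual code from the GitHub repository): grammar tokens and literals never receive a repetition label, and neither do the identifier kinds that are unlikely to repeat.

```python
# Hypothetical token kinds; the real tokenizer's labels may differ.
NON_REPEATING = {"grammar", "string", "char", "number",
                 "package_name", "method_name",
                 "qualified_name", "qualified_type"}

def repetition_candidates(tokens):
    """tokens: (text, kind) pairs from a pre-order AST traversal.
    Returns only the tokens for which a repetition pattern is learned."""
    return [text for text, kind in tokens if kind not in NON_REPEATING]

toks = [("for", "grammar"), ("i", "variable"),
        ("System.out", "qualified_name"), ("println", "method_name"),
        ("i", "variable")]
print(repetition_candidates(toks))  # ['i', 'i']
```

Only the variable `i` survives the filter, so the model learns repetition over tokens that actually recur, which is the trick the paragraph above describes.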
For a stable irreducible curve $X$ and a torsion free sheaf $L$ on $X$ of rank one and degree $d$, D.S. Nagaraj and C.S. Seshadri ([NS]) defined a closed subset $\Cal U_X(r,L)$ in the moduli space of semistable torsion free sheaves of rank $r$ and degree $d$ on $X$. We prove that $\Cal U_X(r,L)$ is irreducible, and that when a smooth curve $Y$ specializes to $X$ and a line bundle $\Cal L$ on $Y$ specializes to $L$, the specialization of the moduli space of semistable rank $r$ vector bundles on $Y$ with fixed determinant $\Cal L$ has underlying set $\Cal U_X(r,L)$. For rank 2 and 3, we show that there is a Cohen-Macaulay closed subscheme in the Gieseker space which represents a suitable moduli functor and has good specialization properties.
The radio detection technique of cosmic ray air showers has gained renewed interest in the last two decades. While the radio experiments are very cost-effective to deploy, the Monte-Carlo simulations required to analyse the data are computationally expensive. Here we present a proof of concept for a novel way to synthesise the radio emission from extensive air showers in simulations. It is a hybrid approach which uses a single microscopic Monte-Carlo simulation, called the origin shower, to generate the radio emission from a target shower with a different longitudinal evolution, primary particle type and energy. The method employs semi-analytical relations which only depend on the shower parameters to transform the radio signals in the simulated antennas. We apply this method to vertical air showers with energies ranging from $10^{17}$ eV to $10^{19}$ eV and compare the results with CoREAS simulations in two frequency bands, namely the broad [20, 500] MHz band and a more narrow one at [30, 80] MHz. We gauge the synthesis quality using the maximal amplitude and energy fluence contained in the signal. We observe that the quality depends primarily on the difference in $X_{\text{max}}$ between the origin and target shower. After applying a linear bias correction, we find that for a shift in $X_{\text{max}}$ of less than 150 $\text{g}/\text{cm}^2$ , template synthesis has a bias of less than 2% and a scatter up to 6%, both in amplitude, on the broad frequency range. On the restricted [30, 80] MHz range the bias is similar, but the spread on amplitude drops down to 3%. These fluctuations are on the same level as the intrinsic scatter we observe in Monte-Carlo ensembles. We therefore surmise the observed scatter in amplitude to originate from intrinsic shower fluctuations we do not explicitly account for in template synthesis.
We report the synthesis, structural details and complete superconducting characterization of the very recently discovered superconductor Nb2PdS5. The synthesized compound crystallizes in a monoclinic structure. Bulk superconductivity is seen in both magnetic susceptibility and electrical resistivity measurements, with a superconducting transition temperature (Tc) of 6K. The upper critical field (Hc2), estimated from high-field magneto-transport measurements, is above 240kOe. The estimated Hc2(0) is clearly above the Pauli paramagnetic limit. Heat capacity measurements show a clear transition with a well-defined peak at Tc, but with a lower jump than expected for a BCS-type superconductor. The Sommerfeld constant and Debye temperature, as determined from low-temperature fitting of the heat capacity data, are 32mJ/mol-K2 and 263K respectively. Hall coefficients and resistivity, in conjunction with the electronic heat capacity, indicate signatures of multiple-gap superconductivity in Nb2PdS5. We also studied the impact of hydrostatic pressure on the superconductivity of Nb2PdS5 and found nearly no change in Tc over the given pressure range.
We formulate, using heuristic reasoning, precise conjectures for the range of the number of primes in intervals of length $y$ around $x$, where $y\ll (\log x)^2$. In particular we conjecture that the maximum grows surprisingly slowly as $y$ ranges from $\log x$ to $(\log x)^2$. We will show that our conjectures are somewhat supported by available data, though not so well that there may not be room for some modification.
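For readers who want to explore the data themselves, here is a minimal sketch (ours, not the authors' computation) of the quantity under study, namely prime counts in consecutive intervals of length $y$ near $x$:

```python
def sieve(limit):
    """Byte array where is_p[m] == 1 iff m is prime."""
    is_p = bytearray([1]) * (limit + 1)
    is_p[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if is_p[p]:
            is_p[p * p :: p] = bytearray(len(is_p[p * p :: p]))
    return is_p

def interval_counts(x, y, k):
    """Prime counts in k consecutive intervals of length y starting at x."""
    is_p = sieve(x + k * y)
    return [sum(is_p[x + i * y : x + (i + 1) * y]) for i in range(k)]

counts = interval_counts(10**5, 50, 20)
print(min(counts), max(counts))
```

The conjectures concern how the maximum and minimum of such counts grow as $y$ ranges from $\log x$ to $(\log x)^2$; a serious comparison would of course need far larger $x$ and many more intervals than this toy run.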
The hadron-quark/gluon duality formulated in terms of a topology change at a density $n\gtrsim 2n_0$ ($n_0\simeq 0.16\,{\rm fm}^{-3}$) is found to describe the core of massive compact stars in terms of quasiparticles of fractional baryon charges, behaving neither like pure baryons nor like deconfined quarks. This can be considered as the Cheshire-Cat mechanism~\cite{CC} for the hadron-quark continuity arrived at bottom-up from skyrmions, which is equivalent to the "MIT-bag"-to-skyrmion continuity arrived at top-down from quarks/gluons. Hidden symmetries, both local gauge and pseudo-conformal (or broken scale), emerge and give rise both to the long-standing "effective $g_A^\ast\approx 1$" in nuclear Gamow-Teller transitions at $\lesssim n_0$ and to the pseudo-conformal sound velocity $v_{pcs}^2/c^2\approx 1/3$ at $\gtrsim 3n_0$. It is suggested that what has long been referred to as "quenched $g_A$" in light nuclei reflects what leads to the dilaton limit $g_A^{\rm DL}=1$ near the (putative) infrared fixed point of scale invariance. These properties are confronted with recent observations of Gamow-Teller transitions and with astrophysical observations.
We establish an infinitesimal variant of Guo-Jacquet trace formula for the case of $(GL_{2n,D}, GL_{n,D}\times GL_{n,D})$. It is a kind of Poisson summation formula obtained by an analogue of Arthur's truncation process. It consists in the equality of the sums of two types of distributions which are non-equivariant in general: one type is associated to rational points in the categorical quotient, while the other type is the Fourier transform of the first type. For regular semi-simple points in the categorical quotient, we obtain weighted orbital integrals.
A new theoretical method is presented for future multi-scale aerodynamic optimisation of very large wind farms. The new method combines a recent two-scale coupled momentum analysis of ideal wind turbine arrays with the classical blade-element-momentum (BEM) theory for turbine rotor design, making it possible to explore some potentially important relationships between the design of rotors and their performance in a very large wind farm. The details of the original two-scale momentum model are described first, followed by the new coupling procedure with the classical BEM theory and some example solutions. The example solutions, obtained using a simplified but still realistic NREL S809 airfoil performance curve, illustrate how the aerodynamically optimal rotor design may change depending on the farm density. It is also shown that the peak power of the rotors designed optimally for a given farm (i.e. 'tuned' rotors) could be noticeably higher than that of the rotors designed for a different farm (i.e. 'untuned' rotors) even if the blade pitch angle is allowed to be adjusted optimally during the operation. The results presented are for ideal very large wind farms and a possible future extension of the present work for real large wind farms is also discussed briefly.
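As a point of reference for the rotor-farm coupling described above, the classical single-rotor momentum result can be reproduced in a few lines: the power coefficient $C_P = 4a(1-a)^2$ of an ideal actuator disc is maximised at an axial induction factor $a = 1/3$ (the Betz limit, $C_P = 16/27$), which is the standalone baseline that the two-scale farm-density coupling modifies. A minimal sketch of that classical result only, not of the paper's coupled model:

```python
def power_coefficient(a):
    """Classical 1-D momentum theory: C_P = 4a(1-a)^2 for axial induction a."""
    return 4.0 * a * (1.0 - a) ** 2

# brute-force scan over the physically relevant range 0 <= a < 0.5
best_a = max((i / 10000 for i in range(5000)), key=power_coefficient)
best_cp = power_coefficient(best_a)
```

In a dense farm the effective optimum shifts away from this isolated-rotor value, which is precisely the 'tuned' versus 'untuned' rotor distinction made in the abstract.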
In a non-minimal Higgs framework, we present a novel mechanism in which the CP-violating dark particles interact with the SM only through the gauge bosons, primarily the $Z$ boson. Such $Z$-portal dark CP violation is realised in the regions of parameter space where Higgs-mediated (co)annihilation processes are sub-dominant and have negligible contributions to the DM relic density. We show that such $Z$-portal CP-violating DM can still thermalise and satisfy all experimental and observational bounds, and we discuss the implications of this phenomenon for electroweak baryogenesis.
We present a general-purpose interior-point solver for convex optimization problems with conic constraints. Our method is based on a homogeneous embedding originally developed for general monotone complementarity problems and more recently applied to operator splitting methods, here specialized to an interior-point method for problems with quadratic objectives. We allow for a variety of standard symmetric and non-symmetric cones, and provide support for chordal decomposition methods in the case of semidefinite cones. We describe the implementation of this method in the open-source solver Clarabel, and provide a detailed numerical evaluation of its performance versus several state-of-the-art solvers on a wide range of standard benchmark problems. Clarabel is faster and more robust than competing commercial and open-source solvers across a range of test sets, with a particularly large performance advantage for problems with quadratic objectives. Clarabel is currently distributed as a standard solver for the Python CVXPY optimization suite.
Let $p\ge 5$ be a prime. We show that the space of weight one Eisenstein series defines an embedding into $\mathbb{P}^{(p-3)/2}$ of the modular curve $X_1(p)$ for the congruence group $\Gamma_1(p)$ that is scheme-theoretically cut out by explicit quadratic equations.
A procedure for constructing topological actions from centrally extended Lie groups is introduced. For a Kac-Moody group, this produces three-dimensional Chern-Simons theory, while for the Virasoro group the result is a new three-dimensional topological field theory whose physical states satisfy the Virasoro Ward identities. This topological field theory is shown to be a first order formulation of two dimensional induced gravity in the chiral gauge. The extension to $W_3$-gravity is discussed.
We show that there are many compact subsets of the moduli space $M_g$ of Riemann surfaces of genus $g$ that do not intersect any symmetry locus. This has interesting implications for $\mathcal{N}=2$ supersymmetric conformal field theories in four dimensions.
We describe an algorithm computing an optimal prefix-free code for $n$ unsorted positive weights in time within $O(n(1+\lg \alpha))\subseteq O(n\lg n)$, where the alternation $\alpha\in[1..n-1]$ measures the amount of sorting required by the computation. This asymptotic complexity is within a constant factor of optimal in the algebraic decision tree computational model, in the worst case over all instances of size $n$ and alternation $\alpha$. Such results refine the state-of-the-art complexity of $\Theta(n\lg n)$ in the worst case over instances of size $n$ in the same computational model, a landmark in compression and coding since 1952, by the mere combination of van Leeuwen's algorithm to compute optimal prefix-free codes from sorted weights (known since 1976) with Deferred Data Structures to partially sort a multiset depending on the queries on it (known since 1988).
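The sorted-weights building block mentioned above, van Leeuwen's algorithm, can be sketched with two queues: because merged weights are produced in nondecreasing order, the two smallest items are always at the front of one of the queues, giving linear time once the input is sorted. A minimal illustration (the function name is ours, and it returns the total weighted code length rather than the code itself):

```python
from collections import deque

def optimal_code_cost(sorted_weights):
    """Total weighted code length of an optimal prefix-free code,
    given weights sorted in nondecreasing order (van Leeuwen, O(n))."""
    leaves = deque(sorted_weights)   # original weights, sorted
    internal = deque()               # merged weights, produced in sorted order
    total = 0

    def pop_min():
        # smallest remaining weight is at the front of one of the two queues
        if not internal or (leaves and leaves[0] <= internal[0]):
            return leaves.popleft()
        return internal.popleft()

    while len(leaves) + len(internal) > 1:
        a, b = pop_min(), pop_min()
        internal.append(a + b)       # new internal node
        total += a + b               # each merge adds its weight to the cost
    return total
</antml```

For example, weights [1, 1, 2, 3, 5] yield cost 25, matching the Huffman tree with depths (4, 4, 3, 2, 1).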
I argue that in open-string theory with hierarchically small (or large) extra dimensions, gauge groups can unify naturally with logarithmically-running coupling constants at the high Kaluza-Klein (or string-winding) scale. This opens up the possibility of rescuing the standard logarithmic unification at $M_U\sim 10^{15-18}$ GeV even if the fundamental-string scale is much lower, at intermediate or possibly even electroweak scales. I also explain, however, why a low type-I string scale may not suffice to obliterate the ultraviolet problems usually associated with the gauge hierarchy.
The generation and manipulation of strong entanglement and Einstein-Podolsky-Rosen (EPR) steering in macroscopic systems are outstanding challenges in modern physics. In particular, the observation of asymmetric EPR steering is important both for its fundamental role in interpreting the nature of quantum mechanics and for its application as a resource for tasks where the levels of trust at different parties are highly asymmetric. Here, we study the entanglement and EPR steering between two macroscopic magnons in a hybrid ferrimagnet-light system. In the absence of light, the two types of magnons on the two sublattices can be entangled, but no quantum steering occurs when they are damped with the same rates. In the presence of the cavity field, the entanglement can be significantly enhanced, and strong two-way asymmetric quantum steering appears between two magnons with equal dissipation. This is very different from the conventional protocols that produce asymmetric steering by imposing additional unbalanced losses or noises on the two parties at the cost of reducing steerability. The essential physics is well understood from the unbalanced population of acoustic and optical magnons under the cooling effect of cavity photons. Our findings may provide a novel platform to manipulate quantum steering, and the detection of bi-party steering provides a knob to probe the magnetic damping on each sublattice of a magnet.
Suppose there is a Reinhardt cardinal. Then (1) $M_n(X)$ exists and is fully iterable (above $X$) for every transitive set $X$ and every $n<\omega$ (here $M_n(X)$ denotes the canonical minimal proper class inner model containing $X$ and having $n$ Woodin cardinals above the rank of $X$); and (2) Projective Determinacy holds in every set generic extension.
The history of modern condensed matter physics may be regarded as the competition and reconciliation between Stoner's and Anderson's physical pictures, where the former is based on momentum-space descriptions focusing on long wave-length fluctuations while the latter is based on real-space physics emphasizing emergent localized excitations. In particular, these two viewpoints compete with each other in various nonperturbative phenomena, which range from the problem of high T$_{c}$ superconductivity, quantum spin liquids in organic materials and frustrated spin systems, heavy-fermion quantum criticality, metal-insulator transitions in correlated electron systems such as doped silicons and two-dimensional electron systems, and the fractional quantum Hall effect, to the recently discussed Fe-based superconductors. An approach to reconcile these competing frameworks is to introduce topologically nontrivial excitations into Stoner's description, which appear to be localized in either space or time and sometimes both, where scattering between itinerant electrons and topological excitations such as skyrmions, vortices, various forms of instantons, and emergent magnetic monopoles may capture nonperturbative local physics beyond Stoner's paradigm. In this review article we discuss nonperturbative effects of topological excitations on the dynamics of correlated electrons.
We revisit the Ravine method of Gelfand and Tsetlin from a dynamical system perspective, study its convergence properties, and highlight its similarities and differences with the Nesterov accelerated gradient method. The two methods are closely related. They can be deduced from each other by reversing the order of the extrapolation and gradient operations in their definitions. They benefit from similar fast convergence of values and convergence of iterates for general convex objective functions. We will also establish the high resolution ODE of the Ravine and Nesterov methods, and reveal an additional geometric damping term driven by the Hessian for both methods. This will allow us to prove fast convergence towards zero of the gradients not only for the Ravine method but also for the Nesterov method for the first time. We also highlight connections to other algorithms stemming from more subtle discretization schemes, and finally describe a Ravine version of the proximal-gradient algorithms for general structured smooth + non-smooth convex optimization problems.
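The order-reversal relating the two methods can be made concrete. Below is a minimal sketch of the Nesterov recursion (extrapolate first, then take the gradient step at the extrapolated point) with the standard $(k-1)/(k+2)$ momentum schedule; a Ravine-type iteration swaps the two operations. The function name and the quadratic test objective are illustrative only, not the paper's setup:

```python
def nesterov(grad, x0, step, iters):
    """Nesterov accelerated gradient on a convex objective.

    Order of operations per step: (1) extrapolate to y_k using momentum,
    (2) gradient step taken at y_k.  The Ravine method of Gelfand and
    Tsetlin reverses this order: gradient step first, then extrapolation.
    """
    x_prev = list(x0)
    x = list(x0)
    for k in range(1, iters + 1):
        beta = (k - 1) / (k + 2)  # standard momentum schedule
        y = [xi + beta * (xi - xpi) for xi, xpi in zip(x, x_prev)]
        g = grad(y)
        x_prev, x = x, [yi - step * gi for yi, gi in zip(y, g)]
    return x

# illustrative convex quadratic f(x) = 0.5*x1^2 + 5*x2^2  (L = 10)
grad_f = lambda x: [x[0], 10.0 * x[1]]
x_final = nesterov(grad_f, [1.0, 1.0], step=0.1, iters=500)
```

With step size $1/L$ the classical guarantee $f(x_k) - f^* \le 2L\|x_0 - x^*\|^2/(k+1)^2$ applies, so after 500 iterations the objective above is far below its initial value.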
Classical random matrix ensembles with orthogonal symmetry have the property that the joint distribution of every second eigenvalue is equal to that of a classical random matrix ensemble with symplectic symmetry. These results are shown to be the case $r=1$ of a family of inter-relations between eigenvalue probability density functions for generalizations of the classical random matrix ensembles referred to as $\beta$-ensembles. The inter-relations give that the joint distribution of every $(r+1)$-st eigenvalue in certain $\beta$-ensembles with $\beta = 2/(r+1)$ is equal to that of another $\beta$-ensemble with $\beta = 2(r+1)$. The proof requires generalizing a conditional probability density function due to Dixon and Anderson.
One way to model telecommunication networks is via static Boolean models. However, dynamics such as node mobility have a significant impact on the performance evaluation of such networks. Consider a Boolean model in $\mathbb{R}^d$ and a random direction movement scheme. Given a fixed time horizon $T>0$, we model these movements via cylinders in $\mathbb{R}^d \times [0,T]$. In this work, we derive central limit theorems for functionals of the union of these cylinders. We consider the volume, the number of isolated cylinders, and the Euler characteristic of the random set, which respectively address the achievable throughput, the availability of nodes, and the topological structure of the network.
Lattice QCD simulations with staggered fermions rely on the ``fourth-root trick.'' The validity of this trick has been proved for free staggered fermions using renormalization-group block transformations. I review the elements of the construction and discuss how it might be generalized to the interacting case.
We introduce braided Lie bialgebras as the infinitesimal version of braided groups. They are Lie algebras and Lie coalgebras with the coboundary of the Lie cobracket an infinitesimal braiding. We provide theorems of transmutation, Lie biproduct, bosonisation and double-bosonisation relating braided Lie bialgebras to usual Lie bialgebras. Among the results, the kernel of any split projection of Lie bialgebras is a braided-Lie bialgebra. The Kirillov-Kostant Lie cobracket provides a natural braided-Lie bialgebra on any complex simple Lie algebra $g$, as the transmutation of the Drinfeld-Sklyanin Lie cobracket. Other nontrivial braided-Lie bialgebras are associated to the inductive construction of simple Lie bialgebras along the $C$ and exceptional series.
We consider diffusion-driven processes on small-world networks with distance-dependent random links. The study of diffusion on such networks is motivated by transport on randomly folded polymer chains, synchronization problems in task-completion networks, and gradient-driven transport on networks. Changing the parameters of the distance dependence, we find a rich phase diagram, with different transient and recurrent phases in the context of random walks on networks. We perform the calculations in two limiting cases: the annealed case, where the rearrangement of the random links is fast, and the quenched case, where the link rearrangement is slow compared to the motion of the random walker or the surface. It has been well established that in a large class of interacting systems, adding an arbitrarily small density of, possibly long-range, quenched random links to a regular lattice interaction topology will give rise to mean-field (or annealed) like behavior. In some cases, however, mean-field scaling breaks down, such as in diffusion or in the Edwards-Wilkinson process in "low-dimensional" small-world networks. This breakdown can be understood by treating the random links perturbatively, where the mean-field (or annealed) prediction appears as the lowest-order term of a naive perturbation expansion. The asymptotic analytic results are also confirmed numerically by exact numerical diagonalization of the network Laplacian. Further, we construct a finite-size scaling framework for the relevant observables, capturing the cross-over behaviors in finite networks. This work provides a detailed account of the self-consistent-perturbative and renormalization approaches briefly introduced in two earlier short reports.
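As an illustration of the setup (not of the paper's perturbative or renormalization analysis), the network Laplacian whose exact diagonalization is mentioned above can be assembled directly: a ring of $n$ nodes, plus random links added between nodes at ring distance $l$ with a probability decaying as $l^{-\alpha}$. The function and parameter names are ours:

```python
import random

def small_world_laplacian(n, alpha, plink, seed=0):
    """Graph Laplacian L = D - A of a ring with distance-dependent
    random links: a shortcut between nodes at ring distance l > 1 is
    added with probability plink * l**(-alpha)."""
    rng = random.Random(seed)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):                      # nearest-neighbour ring edges
        adj[i][(i + 1) % n] = adj[(i + 1) % n][i] = 1
    for i in range(n):                      # distance-dependent shortcuts
        for j in range(i + 2, n):
            l = min(j - i, n - (j - i))     # ring distance between i and j
            if l > 1 and rng.random() < plink * l ** (-alpha):
                adj[i][j] = adj[j][i] = 1
    # Laplacian: degree on the diagonal, minus adjacency off it
    return [[(sum(adj[i]) if i == j else 0) - adj[i][j] for j in range(n)]
            for i in range(n)]
```

Diagonalizing this matrix (e.g. with a standard dense eigensolver) gives the relaxation spectrum of the diffusion process; the annealed limit instead resamples the shortcuts at every step.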
We analyze the localization properties for eigenvectors of the Dirac operator in quenched lattice QCD in the vicinity of the deconfinement phase transition. Studying the characteristic differences between the Z_3 sectors above the critical temperature T_c, we find indications for the presence of calorons.
There is a longstanding conjecture, due to Gregory Cherlin and Boris Zilber, that all simple groups of finite Morley rank are simple algebraic groups. One of the major theorems in the area is Borovik's trichotomy theorem. The "trichotomy" here is a case division of the minimal counterexamples within odd type, i.e. groups with a divisible connected component of the Sylow 2-subgroup. We introduce a characteristic zero notion of unipotence which can be used to obtain a connected nilpotent signalizer functor from any sufficiently non-trivial solvable signalizer functor. This result plugs seamlessly into Borovik's work to eliminate the assumption of tameness from his trichotomy theorem for odd type groups. This work also provides us with a form of Borovik's theorem for degenerate type groups.
Discovery of the 6.7-hour periodicity in the X-ray source 1E 161348-5055 in RCW 103 has led to investigations of the nature of this periodicity. We explore a model for 1E 161348-5055 wherein a fast-spinning neutron star with a magnetic field $\sim 10^{12}$ G, in a young pre-Low-Mass X-ray Binary (pre-LMXB) with an eccentric orbit of period 6.7 hr, operates in the "propeller" phase. The 6.7-hr light curve of 1E 161348-5055 can be quantitatively accounted for by a model of orbitally modulated mass transfer through a viscous accretion disk and subsequent propeller emission (of both the Illarionov-Sunyaev type and the Romanova-Lovelace et al. type), and the spectral and other properties are also in agreement. Formation and evolution of model systems are shown to be in accordance with standard theories.
Let A be a pre-defined set of rational numbers. We say a set of natural numbers S is an A-quotient-free set if no ratio of two elements in S belongs to A. We find the maximal asymptotic density and the maximal upper asymptotic density of A-quotient-free sets when A belongs to a particular class. It is known that in the case A = {p, q}, where p, q are coprime integers greater than one, the latter problem reduces to evaluating the largest number of pairwise non-adjacent lattice points in a triangle whose legs lie on the coordinate axes. We prove that this number is achieved by choosing points of the same color in the checkerboard coloring.
We consider the problem of digitalizing Euclidean segments. Specifically, we look for a constructive method to connect any two points in $\mathbb{Z}^d$. The construction must be {\em consistent} (that is, satisfy the natural extension of the Euclidean axioms) while resembling the Euclidean segments as much as possible. Previous work has shown asymptotically tight results in two dimensions with $\Theta(\log N)$ error, where resemblance between segments is measured with the Hausdorff distance, and $N$ is the $L_1$ distance between the two points. This construction was considered tight because of an $\Omega(\log N)$ lower bound that applies to any consistent construction in $\mathbb{Z}^2$. In this paper we observe that the lower bound does not directly extend to higher dimensions. We give an alternative argument showing that any consistent construction in $d$ dimensions must have $\Omega(\log^{1/(d-1)} N)$ error. We tie the error of a consistent construction in high dimensions to the error of similar {\em weak} constructions in two dimensions (constructions for which some points need not satisfy all the axioms). This not only opens the possibility for having constructions with $o(\log N)$ error in high dimensions, but also opens up an interesting line of research in the tradeoff between the number of axiom violations and the error of the construction. In order to show our lower bound, we also consider a colored variation of the concept of discrepancy of a set of points that we find of independent interest.
We show that the well known Kronecker product is a suitable tool for the construction of matrix representations of widely used spin Hamiltonians. In this way we avoid the explicit use of basis sets for the construction of the matrix elements. As illustrative examples we discuss two isotropic models and an anisotropic one.
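The idea is simple to demonstrate: for two spin-1/2 sites, operators on the full 4-dimensional product space are Kronecker products of single-site matrices, so a Heisenberg-type coupling $\vec S_1\cdot\vec S_2$ is assembled without enumerating any basis states explicitly. A minimal pure-Python sketch with $\hbar = 1$ (the specific models treated in the paper may differ):

```python
def kron(A, B):
    """Kronecker product of two matrices given as nested lists:
    kron(A, B)[i*p + k][j*q + l] = A[i][j] * B[k][l]."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def madd(*Ms):
    """Entrywise sum of equally sized matrices."""
    n, m = len(Ms[0]), len(Ms[0][0])
    return [[sum(M[i][j] for M in Ms) for j in range(m)] for i in range(n)]

# single-site spin-1/2 operators (hbar = 1)
Sx = [[0, 0.5], [0.5, 0]]
Sy = [[0, -0.5j], [0.5j, 0]]
Sz = [[0.5, 0], [0, -0.5]]

# two-site Heisenberg coupling S1.S2 on the product space,
# built purely from Kronecker products -- no basis-set bookkeeping
H = madd(kron(Sx, Sx), kron(Sy, Sy), kron(Sz, Sz))
```

In the basis (up-up, up-down, down-up, down-down) this reproduces the familiar matrix with diagonal (1/4, -1/4, -1/4, 1/4) and off-diagonal 1/2 between the mixed states, i.e. triplet eigenvalue 1/4 and singlet eigenvalue -3/4.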
Various methods of constructing an orthonormal set out of a given set of linearly independent vectors are discussed. Particular attention is paid to the Gram-Schmidt and the Schweinler-Wigner orthogonalization procedures. A new orthogonalization procedure which, like the Schweinler-Wigner procedure, is democratic and is endowed with an extremal property is suggested.
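For concreteness, the Gram-Schmidt procedure mentioned above can be sketched in a few lines: each vector is stripped of its components along the previously accepted directions and then normalized. (The Schweinler-Wigner and the proposed democratic procedures treat the input vectors symmetrically instead and are not shown here.)

```python
def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent real vectors,
    processing them in order (hence the non-democratic character)."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:                      # remove projections onto
            c = dot(q, w)                    # previously built directions
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = dot(w, w) ** 0.5
        basis.append([wi / norm for wi in w])
    return basis
```

Reordering the input vectors generally changes the output basis, which is exactly the asymmetry that a democratic procedure avoids.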
The goal of this article is to derive the reciprocity theorem and the mutual energy theorem from the Poynting theorem instead of from the Maxwell equations. The Poynting theorem is generalized to a modified Poynting theorem, in which the electromagnetic field is a superposition of different electromagnetic fields, including the retarded potential, the advanced potential, and time-offset fields. The media epsilon (permittivity) and mu (permeability) can also differ between the fields. The concept of mutual energy is introduced as the difference between the total energy and the self-energy, and a mixed mutual energy theorem is derived. We derive the mutual energy in the Fourier domain, obtaining the time-reversed mutual energy theorem and the mutual energy theorem, and then derive the mutual energy theorem in the time domain. The instantaneous modified mutual energy theorem is derived; applying a time-offset transform and a time integral to it yields the time-correlation modified mutual energy theorem. Assuming there are two electromagnetic fields, one a retarded potential and one an advanced potential, the convolution reciprocity theorem can be derived. Corresponding to the modified time-correlation mutual energy theorem and the time-convolution reciprocity theorem in the Fourier domain, there are the modified mutual energy theorem and the Lorentz reciprocity theorem. Hence all the mutual energy theorems and reciprocity theorems are put in one frame: the concept of mutual energy. Three new complementary theorems are derived. An inner product is introduced for two different electromagnetic fields, in both the time domain and the Fourier domain, for application to wave expansion.
For a permutation $\pi:[k] \to [k]$, a function $f:[n] \to \mathbb{R}$ contains a $\pi$-appearance if there exists $1 \leq i_1 < i_2 < \dots < i_k \leq n$ such that for all $s,t \in [k]$, $f(i_s) < f(i_t)$ if and only if $\pi(s) < \pi(t)$. The function is $\pi$-free if it has no $\pi$-appearances. In this paper, we investigate the problem of testing whether an input function $f$ is $\pi$-free or whether $f$ differs on at least $\varepsilon n$ values from every $\pi$-free function. This is a generalization of the well-studied monotonicity testing and was first studied by Newman, Rabinovich, Rajendraprasad and Sohler (Random Structures and Algorithms 2019). We show that for all constants $k \in \mathbb{N}$, $\varepsilon \in (0,1)$, and permutation $\pi:[k] \to [k]$, there is a one-sided error $\varepsilon$-testing algorithm for $\pi$-freeness of functions $f:[n] \to \mathbb{R}$ that makes $\tilde{O}(n^{o(1)})$ queries. We improve significantly upon the previous best upper bound $O(n^{1 - 1/(k-1)})$ by Ben-Eliezer and Canonne (SODA 2018). Our algorithm is adaptive, while the earlier best upper bound is known to be tight for nonadaptive algorithms.
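For small inputs the defining condition can be checked directly by brute force, which also makes the tested object concrete: an index set is a $\pi$-appearance iff the selected values are order-isomorphic to $\pi$. A naive $O(n^k)$ sketch (nothing like the sublinear testers discussed above; the function name is ours):

```python
from itertools import combinations

def has_pi_appearance(f, pi):
    """Return True iff the sequence f contains the pattern pi, where pi
    is a permutation of 1..k given as a list.  Brute force over all
    k-subsets of indices; ties in f never form an appearance, matching
    the strict-inequality definition."""
    k = len(pi)
    for idx in combinations(range(len(f)), k):
        vals = [f[i] for i in idx]
        # order-isomorphism: relative order of vals matches that of pi
        if all((vals[s] < vals[t]) == (pi[s] < pi[t])
               for s in range(k) for t in range(k)):
            return True
    return False
```

For instance, pattern [1, 2] asks for an increasing pair, so a strictly decreasing sequence is [1, 2]-free; monotonicity testing is exactly the $k = 2$ case of the problem.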
We study the optimal scheduling problem in which n source nodes attempt to transmit updates over L shared wireless on/off fading channels to optimize their age performance under energy and age-violation tolerance constraints. Specifically, we provide a generic formulation of age optimization in the form of a constrained Markov Decision Process (CMDP), and obtain the optimal scheduler as the solution of an associated Linear Programming problem. We investigate the characteristics of the optimal single-user multi-channel scheduler for the important special cases of average-age and violation-rate minimization. This leads to several key insights on the nature of the optimal allocation of the limited energy, where the usual threshold-based policy does not apply, and will be useful in guiding scheduler designers. We then investigate the stability region of the optimal scheduler for the multi-user case. We also develop an online scheduler using Lyapunov-drift-minimization methods that do not require knowledge of the channel statistics. Our numerical studies compare the stability region of our online scheduler to that of the optimal scheduler, revealing that it performs close to the optimal even with unknown channel statistics.
Unrecognized hazards substantially increase the likelihood of workplace fatalities and injuries. However, recent research has demonstrated that a large proportion of hazards remain unrecognized in dynamic construction environments. Recent studies have also suggested a strong correlation between the viewing patterns of workers and their hazard recognition performance. Hence, it is important to study and analyze the viewing patterns of workers to gain a better understanding of their hazard recognition performance. The objective of this exploratory research is to examine hazard recognition as a visual search process and to identify the visual search factors that affect hazard recognition. Further, the study proposes a framework for developing a vision-based tool capable of recording and analyzing the viewing patterns of construction workers and generating feedback for personalized training and proactive safety management.
We construct a holographic dual of the Schwinger-Keldysh effective action for the dissipative low-energy dynamics of relativistic charged matter at strong coupling in a fixed thermal background. To do so, we use a mixed-signature bulk spacetime whereby an eternal asymptotically anti-de Sitter black hole is glued to its Euclidean counterpart along an initial time slice in a way that matches the desired double-time contour of the dual field theory. Our results are consistent with existing literature and can be regarded as a fully ab initio derivation of a Schwinger-Keldysh effective action. In addition, we provide a simple infrared effective action for the near horizon region that drives all the dissipation and can be viewed as an alternative to the membrane paradigm approximation.
Azimuthal-angle two-particle correlations have been shown to be a powerful probe for extracting novel features of jet-induced correlations produced in Au+Au collisions at RHIC. At intermediate $p_T$, 2-5 GeV/c, the jets have been shown to be significantly modified in both their particle composition and their angular distribution compared to p+p collisions. Two-particle angular correlations with identified particles provide sensitive probes of the interactions between hard-scattered partons and the medium. The systematics of these correlations are essential to understanding the physics of intermediate $p_T$ in heavy-ion collisions.