In this work, the role of inelastic processes in the formation of a transient Bose-Einstein condensate (BEC) is investigated based on kinetic theory. We calculate the condensation rate for an overpopulated gluon system that is assumed to be in thermal equilibrium in the presence of a BEC. The matrix elements of the inelastic processes are chosen to be isotropic, and the gluons are considered to have a finite mass. Our calculations indicate that inelastic processes can hinder the formation of a BEC, since the net condensation rate diverges to negative infinity and would destroy any BEC instantly.
We consider the relation between Sion's minimax theorem for a continuous function and a Nash equilibrium in a five-player game with two groups which is zero-sum and symmetric in each group. We show the following results. 1. The existence of a Nash equilibrium that is symmetric in each group implies Sion's minimax theorem for a pair of players in each group. 2. Sion's minimax theorem for a pair of players in each group implies the existence of a Nash equilibrium that is symmetric in each group. Thus, the two are equivalent. An example of such a game is a relative profit maximization game under oligopoly with two groups, in which the firms in each group have the same cost functions and maximize their relative profits within their group, and the demand functions are symmetric for the firms in each group.
This paper proposes a novel approach to creating a unit set for CTC-based speech recognition systems. Using Byte Pair Encoding, we learn a unit set of arbitrary size on a given training text. In contrast to using characters or words as units, this allows us to find a good trade-off between the size of the unit set and the available training data. We evaluate both crossword units, which may span multiple words, and subword units. By combining this approach with decoding methods that use a separate language model, we achieve state-of-the-art results for grapheme-based CTC systems.
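The Byte Pair Encoding step described above can be illustrated with a toy sketch: repeatedly merge the most frequent adjacent symbol pair in the training text until the desired number of merge operations (and hence unit-set size) is reached. This is standard BPE, not the paper's exact implementation; the corpus and merge count are illustrative.

```python
from collections import Counter

def bpe_merges(corpus, num_merges):
    """Learn BPE merge operations on a whitespace-tokenized corpus.

    Minimal sketch of byte-pair encoding: repeatedly merge the most
    frequent adjacent symbol pair, starting from characters.
    """
    # Represent each word as a tuple of symbols (characters initially).
    vocab = Counter(tuple(word) for word in corpus.split())
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = best[0] + best[1]
        new_vocab = Counter()
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges
```

Growing `num_merges` trades a larger unit set against more training examples per unit, which is the trade-off the abstract refers to.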
We give a new, simple, dimension-independent definition of the serendipity finite element family. The shape functions are the span of all monomials that are linear in at least s-r of the variables, where s is the degree of the monomial, or, equivalently, whose superlinear degree (total degree with respect to variables entering at least quadratically) is at most r. The degrees of freedom are given by moments of degree at most r-2d on each face of dimension d. We establish unisolvence and a geometric decomposition of the space.
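The superlinear-degree characterization translates directly into an enumeration of monomial exponent tuples; the following sketch (names are illustrative) counts the monomials of the serendipity space on a dim-cube, and the counts for the square and cube match the familiar serendipity element sizes (8 shape functions for the quadratic square, 20 for the quadratic cube).

```python
from itertools import product

def superlinear_degree(exponents):
    # Total degree counting only variables that enter at least quadratically.
    return sum(e for e in exponents if e >= 2)

def serendipity_monomials(dim, r):
    """Exponent tuples of the monomials spanning the serendipity space
    of order r on a dim-dimensional cube: all monomials whose superlinear
    degree is at most r. Bounding each exponent by r is enough, since an
    exponent >= 2 contributes fully to the superlinear degree.
    """
    return [exps for exps in product(range(r + 1), repeat=dim)
            if superlinear_degree(exps) <= r]
```

For example, on the square with r = 2 the excluded monomial is x^2 y^2 (superlinear degree 4), leaving 8 of the 9 tensor-product monomials.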
Germanium (Ge) detectors with the ability to measure a single electron-hole (e-h) pair are needed to search for light dark matter (LDM) down to the MeV scale. We investigate the feasibility of Ge detectors with amorphous-Ge (a-Ge) contacts achieving the sensitivity to measure a single e-h pair, which requires extremely low leakage current. Three Ge detectors with a-Ge contacts are used to study the charge barrier height for blocking electrons and holes. Using the measured bulk leakage current and the D\"{o}hler-Brodsky model, we obtain the values of the charge barrier height and the density of localized energy states near the Fermi level for the top and bottom contacts, respectively. We predict that the bulk leakage current is extremely small and can be neglected at helium temperature ($\sim$4 K). Thus, Ge detectors with a-Ge contacts possess the potential to measure a single e-h pair for detecting LDM particles.
A thorough backward stability analysis of Hotelling's deflation, an explicit external deflation procedure through low-rank updates for computing many eigenpairs of a symmetric matrix, is presented. Computable upper bounds of the loss of the orthogonality of the computed eigenvectors and the symmetric backward error norm of the computed eigenpairs are derived. Sufficient conditions for the backward stability of the explicit external deflation procedure are revealed. Based on these theoretical results, the strategy for achieving numerical backward stability by dynamically selecting the shifts is proposed. Numerical results are presented to corroborate the theoretical analysis and to demonstrate the stability of the procedure for computing many eigenpairs of large symmetric matrices arising from applications.
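Hotelling's deflation is itself a rank-one update: after a dominant eigenpair (lam, v) is found, the matrix is replaced by A - lam * v v^T so the next eigenpair becomes dominant. A toy sketch with power iteration follows; the paper's stability-aware variant replaces the eigenvalue by a dynamically selected shift, which this sketch does not implement.

```python
import numpy as np

def hotelling_eigenpairs(A, k, iters=500):
    """Compute the k largest-magnitude eigenpairs of symmetric A by
    power iteration with Hotelling's (explicit external) deflation:
    after each converged pair (lam, v), update A <- A - lam * v v^T.
    """
    A = A.astype(float).copy()
    rng = np.random.default_rng(0)
    pairs = []
    for _ in range(k):
        v = rng.standard_normal(A.shape[0])
        v /= np.linalg.norm(v)
        for _ in range(iters):
            w = A @ v
            v = w / np.linalg.norm(w)
        lam = v @ A @ v  # Rayleigh quotient
        pairs.append((lam, v))
        A = A - lam * np.outer(v, v)  # rank-one (Hotelling) deflation
    return pairs
```

In finite precision, errors in each computed v propagate into the deflated matrix, which is exactly the loss of orthogonality and backward error that the analysis above bounds.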
We investigate the Rashba-type spin splitting in the Shockley surface states on Au(111) and Ag(111) surfaces, based on first-principles calculations. By turning the spin-orbit interaction (SOI) on and off channel by channel, we show that although the surface states are mainly of p-orbital character with only a small d-orbital component, the d-channel SOI determines the splitting and the spin direction while the p-channel SOI has minor and negative effects. The small d-orbital character of the surface states, present even without SOI, varies linearly with the crystal momentum k, resulting in the linear k dependence of the splitting, the hallmark of the Rashba type. As a way to perturb the d-orbital character of the surface states, we discuss the effects of electron and hole doping of the Au(111) surface.
The redundancy of convolutional neural networks depends not only on the weights but also on the inputs. Shuffling is an efficient operation for mixing channel information, but the shuffle order is usually pre-defined. To reduce data-dependent redundancy, we devise a dynamic shuffle module that generates data-dependent permutation matrices for shuffling. Since the dimension of a permutation matrix is proportional to the square of the number of input channels, to make the generation process efficient, we divide the channels into groups, generate two shared small permutation matrices for each group, and use the Kronecker product and cross-group shuffle to obtain the final permutation matrices. To make the generation process learnable, based on theoretical analysis, softmax, orthogonal regularization, and binarization are employed to asymptotically approximate the permutation matrix. Dynamic shuffle adaptively mixes channel information with negligible extra computation and memory occupancy. Experimental results on the image classification benchmarks CIFAR-10, CIFAR-100, Tiny ImageNet, and ImageNet show that our method significantly improves the performance of ShuffleNets. By adding the dynamically generated matrix to a learnable static matrix, we further propose static-dynamic shuffle and show that it can serve as a lightweight replacement for ordinary pointwise convolution.
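The efficiency argument rests on a simple algebraic fact: the Kronecker product of two small permutation matrices is again a permutation matrix of product size, so two cheap matrices can parameterize a much larger shuffle. A minimal numerical check (group sizes are illustrative, not the paper's):

```python
import numpy as np

def perm_matrix(perm):
    """Permutation matrix P with P[i, perm[i]] = 1."""
    n = len(perm)
    P = np.zeros((n, n))
    P[np.arange(n), perm] = 1.0
    return P

# Two small shared permutation matrices (sizes chosen for illustration).
P1 = perm_matrix([1, 0])      # 2 x 2: swaps the two groups
P2 = perm_matrix([2, 0, 1])   # 3 x 3: cycles channels within a group
P = np.kron(P1, P2)           # 6 x 6: still a permutation matrix
```

Storing and generating P1 and P2 costs O(2^2 + 3^2) entries instead of O(6^2), which is the quadratic saving the abstract describes.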
Transition of a system between two states is an important but difficult problem in natural science. In this article we study the transition problem in the framework of the transition path ensemble. Using the overdamped Langevin method, we introduce the path integral formulation of the transition probability and obtain the equation for the minimum action path in the transition path space. For effective sampling of the transition path ensemble, we derive a conditional overdamped Langevin equation. In two exactly solvable models, the free particle and the harmonic system, we present expressions for the conditional probability density and explicit solutions for the conditional Langevin equation and the minimum action path. The analytic results demonstrate the consistency of the conditional Langevin equation with the desired probability distribution in the transition. It is confirmed that the conditional Langevin equation is an effective tool to sample the transition path ensemble, and that the minimum action principle indeed leads to the most probable path.
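For the free-particle case, conditioning an overdamped Langevin process on its endpoints gives the well-known Brownian-bridge SDE, dx = (b - x)/(T - t) dt + sqrt(2D) dW. The sketch below (step count and parameters are illustrative, and this is a generic Euler-Maruyama integration, not the paper's code) samples one such transition path pinned near x(0) = a and x(T) = b.

```python
import numpy as np

def conditional_free_particle_path(a, b, T, n_steps, D=1.0, seed=0):
    """Sample a transition path x(0)=a -> x(T)~b for a free overdamped
    particle via the Brownian-bridge SDE
        dx = (b - x) / (T - t) dt + sqrt(2 D) dW,
    integrated with the Euler-Maruyama scheme.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = a
    for i in range(n_steps):
        t = i * dt
        drift = (b - x[i]) / (T - t)  # pulls the path toward b as t -> T
        x[i + 1] = x[i] + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
    return x
```

Every sampled path is a valid member of the transition path ensemble, which is what makes conditioned dynamics an efficient sampler compared with waiting for rare spontaneous transitions.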
We show that magnetic susceptibility can reveal spin entanglement between the individual constituents of a solid, while magnetisation describes their local properties. We then show that these two thermodynamic quantities satisfy a complementarity relation in the quantum-mechanical sense. It describes the sharing of (quantum) information in the solid between spin entanglement and the local properties of its individual constituents. Magnetic susceptibility is shown to be a macroscopic spin entanglement witness that can be applied without complete knowledge of the specific model (Hamiltonian) of the solid.
We derive and justify analytically the dynamics of a small macroscopically modulated amplitude of a single plane wave in a nonlinear diatomic chain with stabilizing on-site potentials including the case where a wave generates another wave via self-interaction. More precisely, we show that in typical chains acoustical waves can generate optical but not acoustical waves, while optical waves are always closed with respect to self-interaction.
Many different methods exist for PV reduction of one-loop amplitudes. Two of them are the unitarity cut method and the generalized unitarity cut method. In this short paper, we present an explicit connection between these two methods, in particular showing how their extractions of triangle and bubble coefficients are equivalent.
We argue that the 3A2 state considered by Oles in Phys. Stat. Sol. (b) 236 (2003) 281 for the d2 system, which occurs in the V3+ ion in V2O3 and LaVO3 as well as in the Ti2+ ion in TiO and in many other oxides, is incorrect. The proper ground state is 3T1g; its 9-fold degeneracy is further split in a crystal by intra-atomic spin-orbit interactions and lattice distortions.
Edge systems promise to bring data and computing closer to the users of time-critical applications. Specifically, edge storage systems are emerging as a new system paradigm, where users can retrieve data from small-scale servers inter-operating at the network's edge. The analysis, design, and optimization of such systems require a tractable model that will reflect their costs and bottlenecks. Alas, most existing mathematical models for edge systems focus on stateless tasks, network performance, or isolated nodes and are inapplicable for evaluating edge-based storage performance. We analyze the capacity-region model - the most promising model proposed so far for edge storage systems. The model addresses the system's ability to serve a set of user demands. Our analysis reveals five inherent gaps between this model and reality, demonstrating the significant remaining challenges in modeling storage service at the edge.
We give an example of a space with the nontrivial composition of the brane product and the brane coproduct, which we introduced in a previous article.
In this paper we report a clustering analysis of upper main-sequence stars in the Small Magellanic Cloud, using data from the VMC survey (the VISTA near-infrared YJKs survey of the Magellanic system). Young stellar structures are identified as surface overdensities on a range of significance levels. They are found to be organized in a hierarchical pattern, such that larger structures at lower significance levels contain smaller ones at higher significance levels. They have very irregular morphologies, with a perimeter-area dimension of 1.44 +/- 0.02 for their projected boundaries. They have a power-law mass-size relation, power-law size/mass distributions, and a lognormal surface density distribution. We derive a projected fractal dimension of 1.48 +/- 0.03 from the mass-size relation, or of 1.4 +/- 0.1 from the size distribution, reflecting significant lumpiness of the young stellar structures. These properties are remarkably similar to those of a turbulent interstellar medium (ISM), supporting a scenario of hierarchical star formation regulated by supersonic turbulence.
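The perimeter-area dimension quoted above is the standard slope-based estimate from the scaling P ~ A^(D/2): twice the slope of log P against log A. A short sketch of the fit (function names are illustrative), sanity-checked on circles, for which the boundary is smooth and D is exactly 1:

```python
import numpy as np

def perimeter_area_dimension(perimeters, areas):
    """Estimate the perimeter-area dimension D from P ~ A**(D/2),
    i.e. twice the slope of log P against log A. Smooth boundaries
    give D = 1; highly irregular boundaries approach D = 2.
    """
    slope, _ = np.polyfit(np.log(areas), np.log(perimeters), 1)
    return 2.0 * slope

# Sanity check on circles of varying radius (D should be exactly 1).
r = np.array([1.0, 2.0, 4.0, 8.0])
D = perimeter_area_dimension(2 * np.pi * r, np.pi * r**2)
```

A value of 1.44, as measured for the young stellar structures, therefore indicates boundaries substantially more convoluted than smooth closed curves.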
Nowadays, the use of advanced sensors, such as terrestrial 3D laser scanners, mobile LiDARs and Unmanned Aerial Vehicle (UAV) photogrammetric imaging, has become the prevalent practice for 3D Reality Modeling and digitization of large-scale monuments of Cultural Heritage (CH). In practice, this process is heavily dependent on the expertise of the surveying team handling the laborious planning and time-consuming execution of the 3D mapping process, which is tailored to the specific requirements and constraints of each site. To minimize human intervention, this paper introduces a novel methodology for autonomous 3D Reality Modeling for CH monuments by employing autonomous biomimetic quadrupedal robotic agents and UAVs equipped with the appropriate sensors. These autonomous robotic agents carry out the 3D RM process in a systematic and repeatable approach. The outcomes of this automated process may find applications in digital twin platforms, facilitating secure monitoring and management of cultural heritage sites and spaces, in both indoor and outdoor environments.
We construct a new class of regular soliton solutions of the gauged planar Skyrme model on the target space $S^2$ with fractional topological charges in the scalar sector. These field configurations represent Skyrmed vortices; they have finite energy and carry topologically quantized magnetic flux $\Phi=2\pi n$, where $n$ is an integer. Using a special version of the product ansatz as a guide, we obtain by numerical relaxation various multimeron solutions and investigate the pattern of interaction between the fractionally charged solitons. We show that, unlike the vortices in the Abelian Higgs model, the gauged merons may combine short-range repulsion with long-range attraction. Considering the strong gauge coupling limit, we demonstrate that the topological quantization of the magnetic flux is determined by the Poincar\'{e} index of the planar components $\phi_\perp = \phi_1+i\phi_2$ of the Skyrme field.
The COMPASS experiment at CERN has collected a large sample of events of inelastic scattering of longitudinally polarised muons off longitudinally polarised protons in the non-perturbative region (four-momentum transfer squared $Q^2<1$ GeV$^2$/$c^2$), with a Bjorken scaling variable in the range $4\times 10^{-5}<x<4\times 10^{-2}$. The data set is two orders of magnitude larger than the similar sample collected by the SMC experiment. These data complement our data for polarised deuterons. They allow the accurate determination of the longitudinal double spin asymmetry $A_1^p$ and of the spin-dependent structure function $g_1^p$ of the proton in the region of low $x$ and low $Q^2$. The preliminary results of the analysis of these data yield nonzero, positive values of the asymmetry $A_1^p$ and of the structure function $g_1^p$. This is the first time that spin effects are observed at such low $x$.
Let $S$ be a closed Riemann surface of genus $g \geq 2$ and $\varphi$ be a conformal automorphism of $S$ of prime order $p$ such that $S/\langle \varphi \rangle$ has genus zero. Let ${\mathbb K} \leq {\mathbb C}$ be a field of definition of $S$. We provide an argument for the existence of a field extension ${\mathbb F}$ of ${\mathbb K}$, of degree at most $2(p-1)$, for which $S$ is definable by a curve of the form $y^{p}=F(x) \in {\mathbb F}[x]$, in which case $\varphi$ corresponds to $(x,y) \mapsto (x,e^{2 \pi i/p} y)$. If, moreover, $\varphi$ is also definable over ${\mathbb K}$, then ${\mathbb F}$ can be chosen to be at most a quadratic extension of ${\mathbb K}$. For $p=2$, that is when $S$ is hyperelliptic and $\varphi$ is its hyperelliptic involution, this fact is due to Mestre (for even genus) and to Huggins and Lercier-Ritzenthaler-Sijsling in the case that ${\rm Aut}(S)/\langle \varphi \rangle$ is non-trivial.
In-process compartmentalization and access control have been actively explored to provide in-place and efficient isolation of in-process security domains. Many works have proposed compartmentalization schemes that leverage hardware features, most notably the new page-based memory isolation feature on x86 called Protection Keys for Userspace (PKU). Unfortunately, the modern ARM architecture has no equivalent feature. Instead, newer ARM architectures introduced Pointer Authentication (PA) and Memory Tagging Extension (MTE), adapting the reference validation model for memory safety and runtime exploit mitigation. We argue that these features have been underexplored in the context of compartmentalization and that they can be retrofitted to implement a capability-based in-process access control scheme. This paper presents Capacity, a novel hardware-assisted intra-process access control design that embraces capability-based security principles. Capacity coherently incorporates the new hardware security features on ARM that already exhibit inherent characteristics of capabilities. It supports life-cycle protection of the domain's sensitive objects -- from their import from the file system to their place in memory. With intra-process domains authenticated with unique PA keys, Capacity transforms file descriptors and memory pointers into cryptographically authenticated references and completely mediates reference usage with its program instrumentation framework and an efficient system call monitor. We evaluate our Capacity-enabled NGINX web server prototype and other common applications in which sensitive resources are isolated into different domains. Our evaluation shows that Capacity incurs a low performance overhead of approximately 17% for the single-threaded and 13.54% for the multi-threaded web server.
In a strongly stratified turbulent layer, a uniform horizontal magnetic field can become unstable and spontaneously form local flux concentrations due to a negative contribution of turbulence to the large-scale (mean-field) magnetic pressure. This mechanism, called the negative effective magnetic pressure instability (NEMPI), is of interest in connection with dynamo scenarios in which most of the magnetic field resides in the bulk of the convection zone, and not at the bottom. Recent work using the mean-field hydromagnetic equations has shown that NEMPI becomes suppressed at rather low rotation rates, with Coriolis numbers as low as 0.1. Here we extend these earlier investigations by studying the effects of rotation both on the development of NEMPI and on the effective magnetic pressure. We also quantify the kinetic helicity from direct numerical simulations (DNS) and compare with earlier work. To calculate the rotational effect on the effective magnetic pressure, we consider both DNS and analytical studies using the $\tau$ approach. To study the effects of rotation on the development of NEMPI, we use both DNS and mean-field calculations of the 3D hydromagnetic equations in a Cartesian domain. We find that the growth rates of NEMPI from earlier mean-field calculations are well reproduced with DNS, provided the Coriolis number is below about 0.06. In that case, kinetic and magnetic helicities are found to be weak. For faster rotation, dynamo action becomes possible. However, there is an intermediate range of rotation rates where dynamo action on its own is not yet possible, but the rotational suppression of NEMPI is alleviated. Production of magnetic flux concentrations through the suppression of turbulent pressure appears to be possible only in the uppermost layers of the Sun, where the convective turnover time is less than two hours.
Asymmetric mode transformation in waveguides is of great significance for on-chip integrated devices with a one-way effect, yet achieving asymmetric nonlinear mode conversion (NMC) is challenging due to the limitations imposed by phase matching. In this letter, we theoretically propose a new scheme for realizing asymmetric NMC by combining a frequency-doubling process with periodic PT-symmetric modulation in an optical waveguide. By engineering the one-way momentum from the PT-symmetric modulation, we demonstrate unidirectional conversion from the pump to the second harmonic with the desired guided modes. Our findings offer new opportunities for manipulating nonlinear optical fields with PT symmetry, which could stimulate further exploration of on-chip nonlinear devices assisted by non-Hermitian optics.
A conjecture going back to the eighties claims that there are no non-trivial self-extensions of irreducible modules over symmetric groups if the characteristic of the ground field is not equal to $2$. We obtain some partial positive results on this conjecture.
We study the asymptotics of large simple graphs constrained by the limiting density of edges and the limiting subgraph density of an arbitrary fixed graph $H$. We prove that, for all but finitely many values of the edge density, if the density of $H$ is constrained to be slightly higher than that for the corresponding Erd\H{o}s-R\'enyi graph, the typical large graph is bipodal with parameters varying analytically with the densities. Asymptotically, the parameters depend only on the degree sequence of $H$.
Intrinsic polar metals are rare, especially among oxides, because free electrons in a metal screen electric fields and eliminate the internal dipoles needed to break inversion symmetry. Here we use first-principles high-throughput structure screening to predict a new polar metal in bulk and thin-film forms. After screening more than 1000 different crystal structures, we find that ordered BiPbTi2O6 can crystallize in three polar and metallic structures, which can be transformed into one another via pressure or strain. In a heterostructure of layered BiPbTi2O6 and PbTiO3, multiple states with different relative orientations of the BiPbTi2O6 polar displacements and the PbTiO3 polarization can be stabilized. At room temperature, the interfacial coupling enables electric fields to first switch the PbTiO3 polarization and subsequently drive a 180{\deg} change of the BiPbTi2O6 polar displacements. At low temperatures, the heterostructure provides a tunable tunnelling barrier and might be used in multi-state memory devices.
We introduce the logistic model of consumption growth, which captures a negative feedback loop preventing unlimited growth of consumption due to the finite biophysical resources of our planet. This simple dynamic model allows for the derivation of an expression describing the declining long-term tail of a social discount curve. The latter plays a critical role in, e.g., climate finance, where the benefits of current investments are deferred to centuries from now. The growth rate of consumption evolves irregularly in time, which makes estimation of the expected term structure of consumption growth and the associated social discount rates a challenging task. Nonetheless, observations show that the problem at hand is perturbative, with the small parameter being the product of the average strength of fluctuations in the growth rate and its autocorrelation time. This fact permits use of the cumulant expansion method to derive remarkably simple expressions for the term structure of expected consumption growth and the associated discount rates in a bounded economy. Comparison with empirical data shows that the dynamic effect related to the planetary resource constraint could become the dominant mechanism responsible for the declining long-term tail of a social discount curve at a time horizon estimated here as about 100 years from now (the lower boundary). The derived results can help shape a more realistic long-term social discounting policy. Furthermore, with an obvious redefinition of the key parameters of the model, the obtained results apply directly to the description of expected long-term population growth in stochastic environments.
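The deterministic backbone of the model is the logistic equation dC/dt = g C (1 - C/K), whose closed-form solution makes the mechanism transparent: as consumption C approaches the biophysical bound K, its growth rate decays to zero, and with it the growth-driven part of the discount rate (in the Ramsey rule r = delta + eta * Cdot/C). The parameter names and the Ramsey link below are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def logistic_consumption(t, c0, g, K):
    """Closed-form solution of dC/dt = g*C*(1 - C/K) with C(0) = c0:
    consumption saturates at the biophysical bound K."""
    return K / (1.0 + (K / c0 - 1.0) * np.exp(-g * t))

def growth_rate(t, c0, g, K):
    """Instantaneous consumption growth rate Cdot/C = g*(1 - C/K);
    this is the term that decays to zero at long horizons."""
    c = logistic_consumption(t, c0, g, K)
    return g * (1.0 - c / K)
```

The declining tail of the discount curve in the full model then comes from averaging this decaying growth rate over the stochastic fluctuations treated with the cumulant expansion.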
Scattering in the ionized interstellar medium is commonly observed to be anisotropic, with theories of magnetohydrodynamic (MHD) turbulence explaining the anisotropy through a preferred magnetic field direction throughout the scattering regions. In particular, the line of sight to the Galactic Center supermassive black hole, Sgr A*, exhibits strong and anisotropic scattering, which dominates its observed size at wavelengths of a few millimeters and longer. Therefore, inferences of the intrinsic structure of Sgr A* at these wavelengths are sensitive to the assumed scattering model. In addition, extrapolations of the scattering model from long wavelengths, at which its parameters are usually estimated, to 1.3 mm, where the Event Horizon Telescope (EHT) seeks to image Sgr A* on Schwarzschild-radius scales, are also sensitive to the assumed scattering model. Past studies of Sgr A* have relied on simple Gaussian models for the scattering kernel that effectively presume an inner scale of turbulence far greater than the diffractive scale; this assumption is likely violated for Sgr A* at 1.3 mm. We develop a physically motivated model for anisotropic scattering, using a simplified model for MHD turbulence with a finite inner scale and a wandering transverse magnetic field direction. We explore several explicit analytic models for this wandering and derive the expected observational properties --- scatter broadening and refractive scintillation --- for each. For expected values of the inner scale, the scattering kernel for all models is markedly non-Gaussian at 1.3 mm but is straightforward to calculate and depends only weakly on the assumed model for the wandering of the magnetic field direction. On the other hand, in all models, the refractive substructure depends strongly on the wandering model and may be an important consideration in imaging Sgr A* with the EHT.
Image-based multi-person reconstruction in wide-field large scenes is critical for crowd analysis and security alerting. However, existing methods cannot deal with large scenes containing hundreds of people, which pose the challenges of a large number of people, large variations in human scale, and complex spatial distributions. In this paper, we propose Crowd3D, the first framework to reconstruct the 3D poses, shapes, and locations of hundreds of people with global consistency from a single large-scene image. The core of our approach is to convert the problem of complex crowd localization into pixel localization with the help of our newly defined concept, the Human-scene Virtual Interaction Point (HVIP). To reconstruct the crowd with global consistency, we propose a progressive reconstruction network based on HVIP that pre-estimates a scene-level camera and a ground plane. To deal with a large number of people and varied human sizes, we also design an adaptive human-centric cropping scheme. In addition, we contribute a benchmark dataset, LargeCrowd, for crowd reconstruction in large scenes. Experimental results demonstrate the effectiveness of the proposed method. The code and datasets will be made public.
Stochastic epidemic models describe the dynamics of an epidemic as a disease spreads through a population. Typically, only a fraction of cases are observed at a set of discrete times. The absence of complete information about the time evolution of an epidemic gives rise to a complicated latent variable problem in which the state space size of the epidemic grows large as the population size increases. This makes analytically integrating over the missing data infeasible for populations of even moderate size. We present a data augmentation Markov chain Monte Carlo (MCMC) framework for Bayesian estimation of stochastic epidemic model parameters, in which measurements are augmented with subject-level disease histories. In our MCMC algorithm, we propose each new subject-level path, conditional on the data, using a time-inhomogeneous continuous-time Markov process with rates determined by the infection histories of other individuals. The method is general, and may be applied, with minimal modifications, to a broad class of stochastic epidemic models. We present our algorithm in the context of multiple stochastic epidemic models in which the data are binomially sampled prevalence counts, and apply our method to data from an outbreak of influenza in a British boarding school.
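A toy sketch of the latent process being augmented may help fix ideas: a Gillespie simulation of a stochastic SIR epidemic, whose subject-level event history is exactly the kind of missing data the MCMC above imputes, together with the binomially sampled prevalence counts that form the observed data. Parameters and function names are illustrative, not the paper's.

```python
import numpy as np

def gillespie_sir(S0, I0, beta, gamma, t_max, seed=0):
    """One realization of the stochastic SIR model via the Gillespie
    algorithm. Event rates: infection beta*S*I/N, recovery gamma*I.
    Returns event times and the (S, I) state after each event.
    """
    rng = np.random.default_rng(seed)
    N = S0 + I0
    t, S, I = 0.0, S0, I0
    times, states = [t], [(S, I)]
    while I > 0 and t < t_max:
        rate_inf = beta * S * I / N
        rate_rec = gamma * I
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)  # time to next event
        if rng.random() < rate_inf / total:
            S, I = S - 1, I + 1            # infection event
        else:
            I -= 1                         # recovery event
        times.append(t)
        states.append((S, I))
    return np.array(times), np.array(states)

def observe_prevalence(states, p, seed=1):
    """Binomially sampled prevalence counts, as in the data model above:
    each infected individual is detected independently with probability p."""
    rng = np.random.default_rng(seed)
    return rng.binomial([i for _, i in states], p)
```

The inference problem is then to recover parameters such as beta and gamma from the sampled counts alone, integrating over all latent event histories consistent with them.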
We study hadronic polarization and the related anisotropy of the dilepton angular distribution for the reaction $\pi N \to Ne^+e^-$. We employ consistent effective interactions for baryon resonances up to spin-5/2 to compute their contribution to the anisotropy coefficient. We show that the spin and parity of the intermediate baryon resonance are reflected in the angular dependence of the anisotropy coefficient. We present results for the anisotropy coefficient including the $N(1520)$ and $N(1440)$ resonances, which are essential at the collision energy of the recent data obtained by the HADES collaboration on this reaction. We conclude that the anisotropy coefficient provides useful constraints for unraveling the resonance contributions to this process.
Relativistic high-energy heavy-ion collision cross sections have been interpreted in terms of almost ideal liquid droplets of nuclear matter. The experimentally observed low viscosity of these nuclear fluids has been of considerable recent interest in quantum chromodynamics. The viscosity is discussed here in terms of string fragmentation models, wherein the temperature dependence of the nuclear fluid viscosity obeys the Vogel-Fulcher-Tammann law.
With the aim of investigating the overall evolution of UIR band features with the hardening of UV radiation (increasing effective temperature of the star), we have analysed ISO spectra for 32 C-rich stars: 20 proto-planetary nebulae and 12 planetary nebulae with Wolf-Rayet central stars. In this contribution we discuss variations in the peak positions of the UIR bands among the analysed objects, and demonstrate that variations in the ``7.7'' to 11.3 micron flux ratio are correlated with the effective temperature (probably due to an increase in the ionization state of their carriers).
In this paper, we study compactifications of the moduli of smooth del Pezzo surfaces using K-stability and line arrangements. We construct K-moduli of log del Pezzo pairs with the sum of lines as boundary divisors, and prove that for $d=2,3,4$ these K-moduli of pairs are isomorphic to the K-moduli spaces of del Pezzo surfaces. For $d=1$, we prove that they are different by exhibiting some walls.
Sequential transitions between metastable states are ubiquitously observed in neural systems and underlie various cognitive functions. Although a number of studies with asymmetric Hebbian connectivity have investigated how such sequences are generated, the sequences considered have been simple Markov ones. Supervised machine learning methods, on the other hand, can generate complex non-Markov sequences, but these sequences are vulnerable to perturbations. Further, concatenating a newly learned sequence to an already learned one is difficult due to catastrophic forgetting, although concatenation is essential for cognitive functions such as inference. How stable complex sequences are generated thus remains unclear. We have developed a neural network with fast and slow dynamics, inspired by experiments. The slow dynamics store the history of inputs and outputs and affect the fast dynamics depending on the stored history. We show that a learning rule requiring only local information can form a network that generates complex and robust sequences in the fast dynamics. The slow dynamics work as bifurcation parameters for the fast dynamics, stabilizing the next pattern of the sequence before the current pattern is destabilized. This coexistence period leads to a stable transition between the current and next patterns in the sequence. We further find that a balance of timescales is critical to this period. Our study provides a novel mechanism for generating robust complex sequences with multiple timescales in neural dynamics. Given that multiple timescales are widely observed in the brain, this mechanism advances our understanding of temporal processing in the neural system.
We consider and compare the structural properties of bulk TIP4P water and of a sodium chloride aqueous solution in TIP4P water with concentration c = 0.67 mol/kg, in the metastable supercooled region. In a previous paper [D. Corradini, M. Rovere and P. Gallo, J. Chem. Phys. 132, 134508 (2010)] we found in both systems the presence of a liquid-liquid critical point (LLCP). The LLCP is believed to be the end point of the coexistence line between a high density liquid (HDL) and a low density liquid (LDL) phase of water. In the present paper we study the different features of the water-water structure in HDL and LDL, both in bulk water and in the solution. We find that the ions are able to modify the bulk LDL structure, rendering the water-water structure more similar to the bulk HDL case. From the study of the hydration structure in HDL and LDL, a possible mechanism for the modification of the bulk LDL structure in the solution is identified in the substitution of oxygen by the chloride ion in the oxygen coordination shells.
We apply the method of QCD sum rules to study the structure $X$ newly observed by the BESIII Collaboration in the $\phi \eta^\prime$ mass spectrum in the 2.0-2.1 GeV region in the $J/\psi \rightarrow \phi \eta \eta^\prime$ decay. We construct all the $s s \bar s \bar s$ tetraquark currents with $J^{PC} = 1^{+-}$, and use them to perform QCD sum rule analyses. One current leads to reliable QCD sum rule results, and the mass is extracted to be $2.00^{+0.10}_{-0.09}$ GeV, suggesting that the structure $X$ can be interpreted as an $s s \bar s \bar s$ tetraquark state with $J^{PC} = 1^{+-}$. The $Y(2175)$ can be interpreted as its $s s \bar s \bar s$ partner having $J^{PC} = 1^{--}$, and we propose to search for the other two partners, the $s s \bar s \bar s$ tetraquark states with $J^{PC} = 1^{++}$ and $1^{-+}$, in the $\eta^\prime f_0(980)$, $\eta^\prime K \bar K$, and $\eta^\prime K \bar K^*$ mass spectra.
We study quasilinear elliptic double obstacle problems with variable exponent growth when the right-hand side is a measure. A global Calder\'{o}n-Zygmund estimate for the gradient of an approximable solution is obtained in terms of the associated double obstacles and a given measure, identifying minimal requirements for the regularity estimate.
An increasing number of studies use gender information to understand phenomena such as gender bias, inequity in access and participation, or the impact of the COVID-19 pandemic response. Unfortunately, most datasets do not include self-reported gender information, making it necessary for researchers to infer gender from other information, such as names, or names plus country information. An important limitation of these tools is that they fail to appropriately capture the fact that gender exists on a non-binary scale; nevertheless, it remains important to evaluate and compare how well they perform in a variety of contexts. In this paper, we compare the performance of a generative artificial intelligence (AI) tool, ChatGPT, with three commercially available list-based and machine-learning-based gender inference tools (Namsor, Gender-API, and genderize.io) on a unique dataset. Specifically, we use a large Olympic athlete dataset and report how variations in the input (e.g., first name only versus first and last name, with and without country information) impact the accuracy of their predictions. We report results for the full set as well as for three subsets: medal versus non-medal winners, athletes from the largest English-speaking countries, and athletes from East Asia. On these sets, we find that Namsor is the best traditional commercially available tool. However, ChatGPT performs at least as well as Namsor and often outperforms it, especially for the female sample when country and/or last-name information is available. All tools perform better on medalists than on non-medalists and on names from English-speaking countries. Although not designed for this purpose, ChatGPT may be a cost-effective tool for gender prediction. In the future, it might even be possible for ChatGPT or other large-scale language models to identify self-reported gender rather than report gender on a binary scale.
We have calculated cross sections and branching ratios for neutrino induced reactions on ^{208}Pb and ^{56}Fe for various supernova and accelerator-relevant neutrino spectra. This is motivated by the fact that lead and iron will be used as target materials in future neutrino detectors, and have been and still are used as shielding materials in accelerator-based experiments. In particular we study the inclusive $^{56}$Fe$(\nu_e,e^-)^{56}$Co and $^{208}$Pb$(\nu_e,e^-)^{208}$Bi cross sections and calculate the neutron energy spectra following the decay of the daughter nuclei. These reactions constitute a potential background signal in the KARMEN and LSND experiments and are discussed as a detection scheme for supernova neutrinos in the proposed OMNIS and LAND detectors. We also study the neutron emission following the neutrino-induced neutral-current excitation of ^{56}Fe and ^{208}Pb.
We review some of the main features of Bilinear R-Parity Violation (BRpV), defined by a quadratic term in the superpotential which mixes lepton and Higgs superfields and is proportional to a mass parameter epsilon. We show how large values of epsilon can induce a small neutrino mass without fine-tuning. We mention the effect on the mass of the lightest Higgs boson. Finally we report on the effect of BRpV on gauge and Yukawa unification, showing that bottom-tau unification can be achieved at any value of tan(beta).
We suggest a generalized definition of self-organized criticality (SOC) systems: SOC is a critical state of a nonlinear energy dissipation system that is slowly and continuously driven towards a critical value of a system-wide instability threshold, producing scale-free, fractal-diffusive, and intermittent avalanches with power-law-like size distributions. We develop here a macroscopic description of SOC systems that provides an equivalent description of the complex microscopic fine structure, in terms of fractal-diffusive transport (FD-SOC). Quantitative values for the size distributions of SOC parameters (length scales $L$, time scales $T$, waiting times $\Delta t$, fluxes $F$, and energies $E$) are derived from first principles, using the scale-free probability conjecture, $N(L) dL \propto L^{-d}$, for Euclidean space dimension $d$. We apply this model to astrophysical SOC systems, such as lunar craters, the asteroid belt, Saturn ring particles, magnetospheric substorms, radiation belt electrons, solar flares, stellar flares, pulsar glitches, soft gamma-ray repeaters, black-hole objects, blazars, and cosmic rays. The FD-SOC model correctly predicts the size distributions of 8 of these 12 astrophysical phenomena, and indicates non-standard scaling laws and measurement biases for the others.
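As a toy illustration (ours, not the paper's derivation) of how the scale-free probability conjecture propagates into the size distributions of derived parameters, the following sketch samples length scales with $N(L)\,dL \propto L^{-d}$ for $d = 3$ and checks that a volume-like energy $E \propto L^d$ inherits a power law with the analytically expected slope $-(2 - 1/d)$:

```python
import numpy as np

# Toy illustration: sample length scales from the scale-free probability
# conjecture N(L) dL ∝ L^(-d) with Euclidean dimension d = 3, and check
# that a derived "avalanche energy" E ∝ L^d inherits a power-law size
# distribution with the expected slope -(2 - 1/d).
rng = np.random.default_rng(0)
d = 3
L_min, L_max = 1.0, 1e3

# Inverse-transform sampling for p(L) ∝ L^(-d) on [L_min, L_max]
a = 1 - d
u = rng.random(200_000)
L = (L_min**a + u * (L_max**a - L_min**a)) ** (1 / a)

E = L**d  # volume-like energy for a space-filling avalanche

# Fit the power-law slope of the energy distribution on a log-log grid
bins = np.logspace(0, 9, 40)
hist, edges = np.histogram(E, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
mask = (hist > 0) & (centers < 1e6)  # drop the noisy, sparsely sampled tail
slope = np.polyfit(np.log(centers[mask]), np.log(hist[mask]), 1)[0]

print(f"fitted slope {slope:.2f}, expected {-(2 - 1/d):.2f}")
```

The change-of-variables step is the essential point: if $p(L) \propto L^{-d}$ and $E \propto L^d$, then $p(E) \propto E^{1/d - 2}$, which for $d = 3$ gives the slope $-5/3$ the fit recovers.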
We present a search for the rare flavour-changing neutral-current decay $B^{-}\to K^{-} \nu \bar{\nu}$ based on a sample of $(86.9 \pm 1.0) \times 10^{6}$ $\Upsilon(4S) \to B\bar{B}$ events collected by the BABAR experiment at the SLAC B-factory. Signal candidate events are selected by fully reconstructing a $B^+ \to \bar{D}^{0} X^+$ decay, where $X^+$ represents a combination of up to three charged pions or kaons and up to two $\pi^0$ candidates. The charged tracks and calorimeter clusters not used in the $B^+$ reconstruction are required to be compatible with a $B^{-}\to K^{-} \nu \bar{\nu}$ decay. We observe a total of three signal candidate events with an expected background of $2.7 \pm 0.8$, resulting in a preliminary limit of $\mathcal{B}(B^{-}\to K^{-} \nu \bar{\nu}) < 1.05 \times 10^{-4}$ at the 90% confidence level. This search is combined with the results of a previous and statistically independent preliminary BABAR search for $B^{-}\to K^{-} \nu \bar{\nu}$ to give a limit of $\mathcal{B}(B^{-}\to K^{-} \nu \bar{\nu}) < 7.0 \times 10^{-5}$ at the 90% confidence level.
We show that randomly choosing the matrices in a completely positive map from the unitary group gives a quantum expander. We consider Hermitian and non-Hermitian cases, and we provide asymptotically tight bounds in the Hermitian case on the typical value of the second largest eigenvalue. The key idea is the use of Schwinger-Dyson equations from lattice gauge theory to efficiently compute averages over the unitary group.
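The expander property is easy to check numerically (our sketch, with arbitrarily chosen parameters): draw $k$ Haar-random unitaries, build the superoperator of the map $\Phi(\rho) = \frac{1}{k}\sum_i U_i \rho U_i^\dagger$, and inspect its spectrum. The largest eigenvalue is 1 (the identity is a fixed point), and the second largest stays well below 1:

```python
import numpy as np

# Numerical illustration: draw k Haar-random unitaries, form the CP map
# Phi(rho) = (1/k) * sum_i U_i rho U_i^dagger, and inspect the spectrum of
# its superoperator (1/k) * sum_i kron(U_i, conj(U_i)).
rng = np.random.default_rng(1)

def haar_unitary(d, rng):
    """Haar-random unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    diag = np.diag(r)
    return q * (diag / np.abs(diag))  # fix the QR phase ambiguity

d, k = 16, 4
S = sum(np.kron(U, U.conj()) for U in (haar_unitary(d, rng) for _ in range(k))) / k

ev = np.sort(np.abs(np.linalg.eigvals(S)))[::-1]
print(f"largest |eigenvalue|:        {ev[0]:.6f}")  # = 1: identity is a fixed point
print(f"second largest |eigenvalue|: {ev[1]:.3f}")  # well below 1 for Haar unitaries
```

For Haar-random unitaries the typical second eigenvalue is of order $2\sqrt{k-1}/k$, so the spectral gap persists even for small $k$.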
Two quantities specific to replica symmetry breaking in the Ising spin glass, the breakpoint x1 of the order parameter function and the Almeida-Thouless line, are calculated in six dimensions (the upper critical dimension of the replicated field theory used), as well as below and above it. The results confirm that replica symmetry breaking does exist below d=6, and that its escalation with decreasing dimension continues. As a new feature, x1 has a nonzero and universal value for d<6 at criticality. Near six dimensions we have x1=3(6-d)+O[(6-d)^2]. A method to expand a generic theory with replica equivalence around the replica symmetric one is also demonstrated.
For fixed g and T we show finiteness of the set of affine equivalence classes of flat surfaces of genus g whose Veech groups contain a cusp of hyperbolic co-area less than T. We obtain new restrictions on Veech groups: we show that any non-elementary Veech group can appear only finitely many times in a fixed stratum, that any non-elementary Veech group is of finite index in its normalizer, and that the quotient of the upper half plane by a non-lattice Veech group contains arbitrarily large embedded disks. These are proved using the finiteness of the set of affine equivalence classes of flat surfaces of genus g whose Veech group contains a hyperbolic element with eigenvalue less than T.
We reproduce apparently complex cellular automaton behaviour with simple partial differential equations as developed in (Keane 09). Our PDE model easily explains behaviour observed in selected scenarios of the cellular automaton wargame ISAAC without resorting to anthropomorphisation of autonomous 'agents'. The insinuation that agents have a reasoning and planning ability is replaced with a deterministic numerical approximation which encapsulates basic motivational factors and demonstrates a variety of spatial behaviours approximating the mean behaviour of the ISAAC scenarios. All scenarios presented here highlight the dangers associated with attributing intelligent reasoning to behaviour shown, when this can be explained quite simply through the effects of the terms in our equations. A continuum of forces is able to behave in a manner similar to a collection of individual autonomous agents, and shows decentralised self-organisation and adaptation of tactics to suit a variety of combat situations.
This review covers selected developments in maser theory since the previous meeting, "Cosmic Masers: From Proto-Stars to Black Holes" (Migenes & Reid 2002). Topics included are time variability of fundamental constants, pumping of OH megamasers and indicators for differentiating disks from bi-directional outflows.
We report physical properties of the 18 brightest ($S_{870\,\mu \rm m}=12.4$-$19.2\,$mJy), not strongly lensed, 870$\,\mu$m-selected dusty star-forming galaxies (DSFGs), also known as submillimeter galaxies (SMGs), in the COSMOS field. This sample is part of an ALMA band$\,$3 spectroscopic survey (AS2COSPEC), and spectroscopic redshifts are measured in 17 of them at $z=2$-$5$. We perform spectral energy distribution analyses and deduce a median total infrared luminosity of $L_{\rm IR}=(1.3\pm0.1)\times10^{13}\,L_{\odot}$, infrared-based star-formation rate of ${\rm SFR}_{\rm IR}=1390\pm150~M_{\odot}\,\rm yr^{-1}$, stellar mass of $M_\ast=(1.4\pm0.6)\times10^{11}\,M_\odot$, dust mass of $M_{\rm dust}=(3.7\pm0.5)\times10^9\,M_\odot$, and molecular gas mass of $M_{\rm gas}= (\alpha_{\rm CO}/0.8)(1.2\pm0.1)\times10^{11}\,M_\odot$, suggesting that they are among the most massive, ISM-enriched, and actively star-forming systems at $z=2$-$5$. In addition, compared to less massive and less active galaxies at similar epochs, SMGs have comparable gas fractions; however, they have much shorter depletion times, possibly caused by more active dynamical interactions. We determine a median dust emissivity index of $\beta=2.1\pm0.1$ for our sample, and by combining our results with those from other DSFG samples, we find no correlation of $\beta$ with redshift or infrared luminosity, indicating similar dust grain compositions across cosmic time for infrared luminous galaxies. We also find that AS2COSPEC SMGs have one of the highest dust-to-stellar mass ratios, with a median of $0.02\pm0.01$, significantly higher than model predictions, possibly due to overly strong AGN feedback implemented in the model. Finally, our complete and uniform survey enables us to put constraints on the most massive end of the dust and molecular gas mass functions.
The task of collaborative human pose forecasting consists in predicting the future poses of multiple interacting people, given those in previous frames. Predicting the two people jointly, instead of each separately, promises better performance due to their body-body motion correlations. But the task has so far remained largely unexplored. In this paper, we review the progress in human pose forecasting and provide an in-depth assessment of the single-person practices that perform best for 2-body collaborative motion forecasting. Our study confirms the positive impact of frequency input representations, of space-time separable and fully learnable interaction adjacencies for the encoding GCN, and of FC decoding. Other single-person practices do not transfer to 2-body, so the proposed best practices do not include hierarchical body modeling or attention-based interaction encoding. We further contribute a novel initialization procedure for the 2-body spatial interaction parameters of the encoder, which benefits performance and stability. Altogether, our proposed 2-body pose forecasting best practices yield a performance improvement of 21.9% over the state-of-the-art on the most recent ExPI dataset, whereby the novel initialization accounts for 3.5%. See our project page at https://www.pinlab.org/bestpractices2body
A method that enables an industrial robot to accomplish the peg-in-hole task for holes in concrete is proposed. The proposed method involves slightly detaching the peg from the wall when moving between search positions, to avoid the negative influence of the concrete's high friction coefficient. It uses a deep neural network (DNN), trained via reinforcement learning, to effectively find holes of variable shape and surface finish (due to the brittle nature of concrete) without analytical modeling or control parameter tuning. The method uses the displacement of the peg toward the wall surface, in addition to force and torque, as one of the inputs of the DNN. Since the displacement increases as the peg gets closer to the hole (due to the chamfered shape of holes in concrete), it is a useful input to the DNN. The proposed method was evaluated by training the DNN on a hole 500 times and attempting to find 12 unknown holes. The results of the evaluation show that the DNN enabled a robot to find the unknown holes with an average success rate of 96.1% and an average execution time of 12.5 seconds. Additional evaluations with random initial positions and a different type of peg demonstrate that the trained DNN generalizes well to different conditions. Analyses of the influence of the peg-displacement input show that the success rate of the DNN is increased by utilizing this parameter. These results validate the proposed method in terms of its effectiveness and applicability to the construction industry.
Steering is a manifestation of quantum correlations that embodies the Einstein-Podolsky-Rosen (EPR) paradox. While there have been recent attempts to quantify steering, continuous variable systems have remained elusive. We introduce a steering measure for two-mode continuous variable systems that is valid for arbitrary states. The measure is based on the violation of an optimized variance test for the EPR paradox, and admits a computable and experimentally friendly lower bound depending only on the second moments of the state, which reduces to a recently proposed quantifier of steerability by Gaussian measurements. We further show that Gaussian states are extremal with respect to our measure, minimizing it among all continuous variable states with fixed second moments. As a byproduct of our analysis, we generalize and relate well-known EPR-steering criteria. Finally, an operational interpretation is provided, as the proposed measure is shown to quantify the optimal guaranteed key rate in semi-device-independent quantum key distribution.
The magnetosphere of a rotating pulsar naturally develops a current sheet beyond the light cylinder (LC). Magnetic reconnection in this current sheet inevitably dissipates a nontrivial fraction of the pulsar spin-down power within a few LC radii. We develop a basic physical picture of reconnection in this environment and discuss its implications for the observed pulsed gamma-ray emission. We argue that reconnection proceeds in the plasmoid-dominated regime, via a hierarchical chain of multiple secondary islands/flux ropes. The inter-plasmoid reconnection layers are subject to strong synchrotron cooling, leading to significant plasma compression. Using the conditions of pressure balance across these current layers, the balance between the heating by magnetic energy dissipation and synchrotron cooling, and Ampere's law, we obtain simple estimates for key parameters of the layers: temperature, density, and layer thickness. In the comoving frame of the relativistic pulsar wind just outside of the equatorial current sheet, these basic parameters are uniquely determined by the strength of the reconnecting upstream magnetic field. For the case of the Crab pulsar, we find them to be of order 10 GeV, $10^{13}\,{\rm cm^{-3}}$, and 10 cm, respectively. After accounting for the bulk Doppler boosting due to the pulsar wind, the synchrotron and inverse-Compton emission from the reconnecting current sheet can explain the observed pulsed high-energy (GeV) and VHE (~100 GeV) radiation, respectively. Also, we suggest that the rapid relative motions of the secondary plasmoids in the hierarchical chain may contribute to the production of the pulsar radio emission.
The distribution of TeV spectral slopes versus redshift for currently known TeV blazars (16 sources with z<0.21, and one with z>0.25) is essentially a scatter plot with hardly any hint of a global trend. We suggest that this is the outcome of two combined effects of intergalactic gamma-gamma absorption, plus an inherent feature of the SSC (synchrotron self-Compton) process of blazar emission. First, flux dimming introduces a bias that favors detection of progressively more flaring sources at higher redshifts. According to mainstream SSC models, more flaring source states imply sources with flatter TeV slopes. This results in a structured relation between intrinsic TeV slope and redshift. The second effect, spectral steepening by intergalactic absorption, affects sources progressively with distance and effectively wipes out the intrinsic slope-redshift correlation.
First-order primal-dual methods are appealing for their low memory overhead, fast iterations, and effective parallelization. However, they are often slow at finding high accuracy solutions, which creates a barrier to their use in traditional linear programming (LP) applications. This paper exploits the sharpness of primal-dual formulations of LP instances to achieve linear convergence using restarts in a general setting that applies to ADMM (alternating direction method of multipliers), PDHG (primal-dual hybrid gradient method) and EGM (extragradient method). In the special case of PDHG, without restarts we show an iteration count lower bound of $\Omega(\kappa^2 \log(1/\epsilon))$, while with restarts we show an iteration count upper bound of $O(\kappa \log(1/\epsilon))$, where $\kappa$ is a condition number and $\epsilon$ is the desired accuracy. Moreover, the upper bound is optimal for a wide class of primal-dual methods, and applies to the strictly more general class of sharp primal-dual problems. We develop an adaptive restart scheme and verify that restarts significantly improve the ability of PDHG, EGM, and ADMM to find high accuracy solutions to LP problems.
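A minimal sketch of the restart idea (ours, with a fixed restart frequency rather than the paper's adaptive scheme) is PDHG on a tiny LP, restarting each cycle from the running average of the iterates:

```python
import numpy as np

# Minimal sketch: PDHG for the LP  min c^T x  s.t.  A x = b, x >= 0,
# with fixed-frequency restarts from the running average of the iterates.
# The paper's scheme chooses restart times adaptively; this only
# illustrates the restart-from-average mechanism.

def restarted_pdhg(c, A, b, iters=2000, restart_every=100):
    m, n = A.shape
    step = 0.9 / np.linalg.norm(A, 2)  # tau = sigma = step, tau*sigma*||A||^2 < 1
    x, y = np.zeros(n), np.zeros(m)
    x_avg, y_avg, count = np.zeros(n), np.zeros(m), 0
    for k in range(1, iters + 1):
        # primal descent with projection onto x >= 0, then dual ascent
        x_new = np.maximum(0.0, x - step * (c - A.T @ y))
        y = y + step * (b - A @ (2 * x_new - x))  # extrapolated primal iterate
        x = x_new
        x_avg += x; y_avg += y; count += 1
        if k % restart_every == 0:  # restart from the average iterate
            x, y = x_avg / count, y_avg / count
            x_avg, y_avg, count = np.zeros(n), np.zeros(m), 0
    return x, y

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = restarted_pdhg(c, A, b)
print(np.round(x, 4))  # the optimal solution is x* = (1, 0)
```

Restarting from the average exploits the sharpness of the LP formulation: each cycle contracts the distance to the optimum by a constant factor, which is the mechanism behind the $O(\kappa \log(1/\epsilon))$ bound.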
Heterogeneous big data poses many challenges in machine learning. Its enormous scale, high dimensionality, and inherent uncertainty make almost every aspect of machine learning difficult, from providing enough processing power to maintaining model accuracy to protecting privacy. However, perhaps the most imposing problem is that big data is often interspersed with sensitive personal data. Hence, we propose a privacy-preserving hierarchical fuzzy neural network (PP-HFNN) to address these technical challenges while also alleviating privacy concerns. The network is trained with a two-stage optimization algorithm, and the parameters at low levels of the hierarchy are learned with a scheme based on the well-known alternating direction method of multipliers, which does not reveal local data to other agents. Coordination at high levels of the hierarchy is handled by the alternating optimization method, which converges very quickly. The entire training procedure is scalable, fast and does not suffer from gradient vanishing problems like the methods based on back-propagation. Comprehensive simulations conducted on both regression and classification tasks demonstrate the effectiveness of the proposed model.
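The privacy mechanism attributed to ADMM above can be illustrated with a toy consensus problem (our sketch, not the paper's hierarchical network): each agent solves a local least-squares problem on its private data and shares only parameter vectors, never raw data:

```python
import numpy as np

# Toy sketch: consensus ADMM in which each agent fits a local
# least-squares model, min (1/2)||Ai x - bi||^2, and agents exchange only
# parameter vectors -- local data never leaves an agent.

def consensus_admm(local_data, rho=1.0, iters=100):
    n = local_data[0][0].shape[1]
    z = np.zeros(n)                          # global consensus variable
    us = [np.zeros(n) for _ in local_data]   # scaled dual variables
    for _ in range(iters):
        xs = []
        for (Ai, bi), ui in zip(local_data, us):
            # local solve: min (1/2)||Ai x - bi||^2 + (rho/2)||x - z + ui||^2
            xi = np.linalg.solve(Ai.T @ Ai + rho * np.eye(n),
                                 Ai.T @ bi + rho * (z - ui))
            xs.append(xi)
        z = np.mean([xi + ui for xi, ui in zip(xs, us)], axis=0)  # gather step
        us = [ui + xi - z for xi, ui in zip(xs, us)]              # dual update
    return z

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
# three agents, each holding a private slice of the data
data = []
for _ in range(3):
    Ai = rng.normal(size=(50, 2))
    data.append((Ai, Ai @ w_true))
w = consensus_admm(data)
print(np.round(w, 3))  # recovers w_true = (2, -1)
```

Only `xi`, `z`, and `ui` cross agent boundaries; the matrices `Ai` and targets `bi` stay local, which is the sense in which the ADMM-based scheme "does not reveal local data to other agents".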
We demonstrate a practical scalable approach to the fabrication of tunable metamaterials. Designed for THz wavelengths, the metamaterial is comprised of polyurethane filled with an array of indium wires using the well-established fiber drawing technique. Modification of the dimensions of the metamaterial provides tunability: by compressing the metamaterial we demonstrated a 50% plasma frequency shift using THz time domain spectroscopy. Releasing the compression allowed the metamaterial to return to its original dimensions and plasma frequency, demonstrating dynamic reversible tunability.
During catastrophic processes of environmental variation of a thermodynamic system, such as a rapid temperature decrease, many novel and complex patterns often form. To understand such phenomena, a general mechanism is proposed based on the competition between heat transfer and conversion of heat to other energy forms. We apply it to the smectic-A filament growth process during the quench-induced isotropic to smectic-A phase transition. Analytical forms for the buckling patterns are derived and we find good agreement with experimental observation [Phys. Rev. {\bf E55} (1997) 1655]. The present work strongly indicates that rapid cooling will lead to structural transitions in the smectic-A filament at the molecular level to optimize heat conversion. The force associated with this pattern formation process is estimated to be on the order of $10^{-1}$ piconewtons.
Raman and Brillouin amplification of laser pulses in plasma have been shown to produce picosecond pulses of petawatt power. In previous studies, filamentation of the probe pulse has been identified as the biggest threat to the amplification process, especially for Brillouin amplification, which employs the highest plasma densities. Therefore it has been proposed to perform Brillouin scattering at densities below $n_{cr}/4$ to reduce the influence of filamentation. However, parasitic Raman scattering can become a problem at such densities, contrary to densities above $n_{cr}/4$, where it is suppressed. In this paper, we investigate the influence of parasitic Raman scattering on Brillouin amplification at densities below $n_{cr}/4$. We expose the specific problems posed by both Raman backward and forward scattering, and show how both types of scattering can be mitigated, improving the performance of the Brillouin amplification process.
We show some improved mapping properties of the Time Domain Electric Field Integral Equation and of its Galerkin semidiscretization in space. We relate the weak distributional framework with a stronger class of solutions using a group of strongly continuous operators. The stability and error estimates we derive are sharper than those in the literature.
We demonstrate a method for efficient coupling of guided light from a single mode optical fiber to nanophotonic devices. Our approach makes use of single-sided conical tapered optical fibers that are evanescently coupled over the last ~10 um to a nanophotonic waveguide. By means of adiabatic mode transfer using a properly chosen taper, single-mode fiber-waveguide coupling efficiencies as high as 97(1)% are achieved. Efficient coupling is obtained for a wide range of device geometries which are either singly-clamped on a chip or attached to the fiber, demonstrating a promising approach for integrated nanophotonic circuits, quantum optical and nanoscale sensing applications.
Motivated by the research on upper bounds on the rate of quantum transport for one-dimensional operators, particularly, the recent works of Jitomirskaya--Liu and Jitomirskaya--Powell and the earlier ones of Damanik--Tcheremchantsev, we propose a method to prove similar bounds in arbitrary dimension. The method applies both to Schr\"odinger and to long-range operators. In the case of ergodic operators, one can use large deviation estimates for the Green function in finite volumes to verify the assumptions of our general theorem. Such estimates have been proved for numerous classes of quasiperiodic operators in one and higher dimension, starting from the works of Bourgain, Goldstein, and Schlag. One of the applications is a power-logarithmic bound on the quantum transport defined by a multidimensional discrete Schr\"odinger (or even long-range) operator associated with an irrational shift, valid for all Diophantine frequencies and uniformly for all phases. To the best of our knowledge, these are the first results on the quantum dynamics for quasiperiodic operators in dimension greater than one that do not require exclusion of a positive measure of phases. Moreover, and in contrast to localisation, the estimates are uniform in the phase. The arguments are also applicable to ergodic operators corresponding to other kinds of base dynamics, such as the skew-shift.
An experiment proposed by Karl Popper to test the standard interpretation of quantum mechanics was realized by Kim and Shih. We use a quantum mechanical calculation to analyze Popper's proposal, and find a surprising result for the location of the virtual slit. We also analyze Kim and Shih's experiment, and demonstrate that although it ingeniously overcomes the problem of temporal spreading of the wave-packet, it is inconclusive about Popper's test. We point out that another experiment which (unknowingly) implements Popper's test in a conclusive way, has actually been carried out. Its results are in contradiction with Popper's prediction, and agree with our analysis.
We report on the experimental observation of bunching dynamics with temporal cavity solitons in a continuously-driven passive fibre resonator. Specifically, we excite a large number of ultrafast cavity solitons with random temporal separations, and observe in real time how the initially random sequence self-organizes into regularly-spaced aggregates. To explain our experimental observations, we develop a simple theoretical model that allows long-range acoustically-induced interactions between a large number of temporal cavity solitons to be simulated. Significantly, results from our simulations are in excellent agreement with our experimental observations, strongly suggesting that the soliton bunching dynamics arise from forward Brillouin scattering. In addition to confirming prior theoretical analyses and unveiling a new cavity soliton self-organization phenomenon, our findings elucidate the manner in which sound interacts with large ensembles of ultrafast pulses of light.
We discuss the form of the string-loop-corrected effective action and the loop-corrected solutions of the equations of motion. At the string-tree level, a solution we consider is the extremal magnetic black hole, in which case the tree-level effective gauge couplings decrease at small r, and in this region string-loop corrections to the gauge couplings become important. The effective 4D theory is the N=2 supergravity interacting with matter. Using the N=2 structure of the theory, we calculate the loop corrections to the effective action and solve the loop-corrected equations of motion. In the resulting perturbative solution for the metric, the singularity at the origin is smeared out by quantum effects.
We have simulated a system of classical particles confined on the surface of a sphere interacting with a repulsive $r^{-12}$ potential. The same system simulated on a plane with periodic boundary conditions has van der Waals loops in pressure-density plots which are usually interpreted as evidence for a first order melting transition, but on the sphere such loops are absent. We also investigated the structure factor, and from the width of its first peak as a function of density we show that the growth of the correlation length is consistent with KTHNY theory. This suggests that simulations of two dimensional melting phenomena are best performed on the surface of a sphere.
We propose a method for detecting structural changes in a city using images captured from vehicular mounted cameras over traversals at two different times. We first generate 3D point clouds for each traversal from the images and approximate GNSS/INS readings using Structure-from-Motion (SfM). A direct comparison of the two point clouds for change detection is not ideal due to inaccurate geo-location information and possible drifts in the SfM. To circumvent this problem, we propose a deep learning-based non-rigid registration on the point clouds which allows us to compare the point clouds for structural change detection in the scene. Furthermore, we introduce a dual thresholding check and post-processing step to enhance the robustness of our method. We collect two datasets for the evaluation of our approach. Experiments show that our method is able to detect scene changes effectively, even in the presence of viewpoint and illumination differences.
The complex Ancient Egyptian (AE) writing system was characterised by widespread use of graphemic classifiers (determinatives): silent (unpronounced) hieroglyphic signs clarifying the meaning or indicating the pronunciation of the host word. The study of classifiers has intensified in recent years with the launch and quick growth of the iClassifier project, a web-based platform for annotation and analysis of classifiers in ancient and modern languages. Thanks to the data contributed by the project participants, it is now possible to formulate the identification of classifiers in AE texts as an NLP task. In this paper, we make first steps towards solving this task by implementing a series of sequence-labelling neural models, which achieve promising performance despite the modest amount of training data. We discuss tokenisation and operationalisation issues arising from tackling AE texts and contrast our approach with frequency-based baselines.
Attribution algorithms are frequently employed to explain the decisions of neural network models. Integrated Gradients (IG) is an influential attribution method due to its strong axiomatic foundation. The algorithm is based on integrating the gradients along a path from a reference image to the input image. Unfortunately, it can be observed that gradients computed from regions where the output logit changes minimally along the path provide poor explanations for the model decision, which is called the saturation effect problem. In this paper, we propose an attribution algorithm called integrated decision gradients (IDG). The algorithm focuses on integrating gradients from the region of the path where the model makes its decision, i.e., the portion of the path where the output logit rapidly transitions from zero to its final value. This is practically realized by scaling each gradient by the derivative of the output logit with respect to the path. The algorithm thereby provides a principled solution to the saturation problem. Additionally, we minimize the errors within the Riemann sum approximation of the path integral by utilizing non-uniform subdivisions determined by adaptive sampling. In the evaluation on ImageNet, it is demonstrated that IDG outperforms IG, Left-IG, Guided IG, and adversarial gradient integration both qualitatively and quantitatively using standard insertion and deletion metrics across three common models.
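The gradient-scaling step can be sketched on a toy saturating "logit" (our model choice, not the paper's implementation): standard IG averages gradients uniformly along the path, while the decision-weighted variant scales each gradient sample by $|df/d\alpha|$, so attribution mass concentrates where the output actually changes:

```python
import numpy as np

# Toy sketch of decision-weighted path attribution vs. plain IG on a
# saturating scalar "logit" f. The weights |df/dalpha| suppress gradient
# samples from the saturated portion of the path.

def f(x):
    return np.tanh(3.0 * (x[0] + x[1]))  # saturates for large inputs

def grad_f(x):
    s = 1.0 - np.tanh(3.0 * (x[0] + x[1])) ** 2
    return np.array([3.0 * s, 3.0 * s])

def attribute(x, baseline, steps=512, decision_weighted=False):
    alphas = (np.arange(steps) + 0.5) / steps        # midpoint rule
    path = baseline + alphas[:, None] * (x - baseline)
    grads = np.stack([grad_f(p) for p in path])
    if decision_weighted:
        dfda = np.abs(grads @ (x - baseline))        # |df/dalpha| per sample
        w = dfda / dfda.sum()                        # normalised weights
        return (x - baseline) * (w[:, None] * grads).sum(axis=0)
    return (x - baseline) * grads.mean(axis=0)       # plain IG

x, baseline = np.array([1.0, 1.0]), np.zeros(2)
ig = attribute(x, baseline)
idg = attribute(x, baseline, decision_weighted=True)
print("IG :", np.round(ig, 3))   # IG sums to f(x) - f(baseline) (completeness)
print("IDG:", np.round(idg, 3))
```

Replacing the uniform midpoint weights with non-uniform, adaptively sampled subdivisions, as the abstract describes, would further reduce the Riemann-sum error of the path integral.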
We consider the problem of hypothesis testing in the situation where the first hypothesis is simple and the second is a local one-sided composite alternative. We describe the choice of the thresholds and the power functions of the Score Function test, the General Likelihood Ratio test, the Wald test, and two Bayes tests in the situation where the intensity function of the observed inhomogeneous Poisson process is smooth with respect to the parameter. It is shown that almost all of these tests are asymptotically uniformly most powerful. The results of numerical simulations are presented.
Electrowetting-on-dielectric (EWOD) is a powerful tool in many droplet-manipulation applications with a notorious weakness caused by contact-angle saturation (CAS), a phenomenon limiting the equilibrium contact angle of an EWOD-actuated droplet at high applied voltage. In this paper, we study the spreading behaviours of droplets on EWOD substrates with applied voltages exceeding the saturation limit. We experimentally find that at the initial stage of spreading, the driving force at the contact line still follows the Young-Lippmann law even if the applied voltage is higher than the CAS voltage. We then theoretically establish the relation between the initial contact-line velocity and the applied voltage using the force balance at the contact line. We also find that the amplitude of capillary waves on the droplet surface generated by the contact line's initial motion increases with the applied voltage. We provide a working framework utilising EWOD with voltages beyond CAS by characterising the capillary waves formed on the droplet surface and their self-similar behaviours. We finally propose a theoretical model of the wave profiles taking into account the viscous effects and verify this model experimentally. Our results provide avenues to utilise the EWOD effect with voltages beyond the CAS threshold and have a strong bearing on emerging applications such as digital microfluidics and ink-jet printing.
Statistical mechanics for states with complex eigenvalues, which are described by the Gel'fand triplet and represent unstable states such as resonances, is discussed on the basis of the principle of equal ${\it a priori}$ probability. A new entropy corresponding to the freedom of the imaginary eigenvalues appears in the theory. In equilibrium it induces a new physical observable which can be identified as a common time scale. It is remarkable that in spaces with more than 2 dimensions we find the existence of stable and quasi-stable systems, even though all constituents are unstable. In such systems all constituents are connected by stationary flows which are generally observable, and we can therefore say that they are semiclassical systems. Examples of such semiclassical systems are constructed in parabolic potential barriers. The flexible structure of these systems is also pointed out.
In this article, we present the results of a series of twelve 3.6-cm radio continuum observations of T Tau Sb, one of the companions of the famous young stellar object T Tauri. The data were collected roughly every two months between September 2003 and July 2005 with the Very Long Baseline Array (VLBA). Thanks to the remarkably accurate astrometry delivered by the VLBA, the absolute position of T Tau Sb could be measured with a precision typically better than about 100 micro-arcseconds at each of the twelve observed epochs. The trajectory of T Tau Sb on the plane of the sky could, therefore, be traced very precisely, and modeled as the superposition of the trigonometric parallax of the source and an accelerated proper motion. The best fit yields a distance to T Tau Sb of 147.6 +/- 0.6 pc. The observed positions of T Tau Sb are in good agreement with recent infrared measurements, but seem to favor a somewhat longer orbital period than that recently reported by Duchene et al. (2006) for the T Tau Sa/T Tau Sb system.
Motivated by many applications (geophysical flows, general relativity), we attempt to set the foundations for a study of entropy solutions to nonlinear hyperbolic conservation laws posed on a (Riemannian or Lorentzian) manifold. The flux of the conservation laws is viewed as a vector-field on the manifold and depends on the unknown function as a parameter. We introduce notions of entropy solutions in the class of bounded measurable functions and in the class of measure-valued mappings. We establish the well-posedness theory for conservation laws on a manifold, by generalizing both Kruzkov's and DiPerna's theories originally developed in the Euclidean setting. The class of {\sl geometry-compatible} (as we call it) conservation laws is singled out as an important case of interest, which leads to robust $L^p$ estimates independent of the geometry of the manifold. On the other hand, general conservation laws enjoy only the $L^1$ contraction property and lead to a unique contractive semi-group of entropy solutions. Our framework allows us to construct entropy solutions on a manifold via the vanishing diffusion method or the finite volume method.
We study $h$-vectors and graded Betti numbers of level modules up to multiplication by a rational number. Assuming a conjecture on the possible graded Betti numbers of Cohen-Macaulay modules we get a description of the possible $h$-vectors of level modules up to multiplication by a rational number. We also determine, again up to multiplication by a rational number, the cancellable $h$-vectors and the $h$-vectors of level modules with the weak Lefschetz property. Furthermore, we prove that level modules of codimension three satisfy the upper bound of the Multiplicity conjecture of Herzog, Huneke and Srinivasan, and that the lower bound holds if the module, in addition, has the weak Lefschetz property.
We propose an RNN-based efficient Ising model solver, the Criticality-ordered Recurrent Mean Field (CoRMF), for forward Ising problems. At its core, a criticality-ordered spin sequence of an $N$-spin Ising model is introduced by sorting mission-critical edges with a greedy algorithm, such that an autoregressive mean-field factorization can be utilized and optimized with Recurrent Neural Networks (RNNs). Our method has two notable characteristics: (i) by leveraging the approximated tree structure of the underlying Ising graph, the newly-obtained criticality order enables the unification between variational mean-field and RNN, allowing the generally intractable Ising model to be efficiently probed with probabilistic inference; (ii) it is well-modularized and model-independent, while at the same time expressive enough, and hence fully applicable to any forward Ising inference problem with minimal effort. Computationally, by using a variance-reduced Monte Carlo gradient estimator, CoRMF solves Ising problems in a self-training fashion without data/evidence, and the inference tasks can be executed by directly sampling from the RNN. Theoretically, we establish a provably tighter error bound than naive mean-field by using matrix cut decomposition machinery. Numerically, we demonstrate the utility of this framework on a series of Ising datasets.
The electronic properties and magnetic susceptibility of Ce$_3$Pd$_3$Bi$_4$ were systematically investigated from 18 K to 290 K for varying cell volumes using dynamical mean-field theory coupled with density functional theory. By extrapolating to zero temperature, the ground state of Ce$_3$Pd$_3$Bi$_4$ at ambient pressure is found to be a correlated semimetal due to insufficient hybridization. Upon applying pressure, the hybridization strength increases and a crossover to a Kondo insulator is observed at finite temperatures. The characteristic temperature signaling the formation of the Kondo singlet, as well as the characteristic temperature associated with the $f$-electron delocalization-localization change, simultaneously vanish around a critical volume of 0.992$V_0$, suggesting that this metal-insulator transition is possibly associated with a quantum critical point. Finally, Wilson loop calculations indicate that the Kondo insulating side is topologically trivial; thus a topological transition also occurs across the quantum critical point.
In this paper, we prove a time-dependent lower bound on the density, of the optimal order $O(1/(1+t))$, for general smooth non-isentropic flows of the compressible Euler equations.
Cellular networks are a promising means of supporting effective wireless communications for unmanned aerial vehicles (UAVs), which will help enable various long-range UAV applications. However, these networks are optimized for terrestrial users, and thus do not guarantee seamless aerial coverage. In this paper, we propose to overcome this difficulty by exploiting the controllable mobility of UAVs, and investigate connectivity-aware UAV path planning. To explicitly impose communication requirements on UAV path planning, we introduce two new metrics to quantify the cellular connectivity quality of a UAV path. Moreover, aerial coverage maps are used to provide accurate locations of scattered coverage holes in the complicated propagation environment. We formulate the UAV path planning problem as finding the shortest path subject to connectivity constraints. Based on graph search methods, a novel connectivity-aware path planning algorithm with low complexity is proposed. The effectiveness and superiority of our proposed algorithm are demonstrated using the aerial coverage map of an urban section in Virginia, which is built by ray tracing. Simulation results also illustrate a tradeoff between the path length and connectivity quality of UAVs.
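The shortest-path-subject-to-connectivity formulation can be sketched with a toy example. The sketch below assumes a hard constraint that every waypoint must lie in a covered cell; the small grid, the Dijkstra search, and the unit edge costs are illustrative stand-ins for the paper's graph-search algorithm and ray-traced coverage map.

```python
import heapq

# Toy 2D grid: 1 = cellular coverage, 0 = coverage hole (illustrative only).
coverage = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
]

def shortest_connected_path(grid, start, goal):
    """Dijkstra over covered cells only: shortest path subject to the
    hard connectivity constraint that every waypoint has coverage."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            path = [(r, c)]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 1:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf"), []  # goal unreachable without leaving coverage

length, path = shortest_connected_path(coverage, (0, 0), (0, 3))
print(length, path)  # the coverage holes force a detour through the bottom row
```

A softer variant of the tradeoff in the abstract could instead penalize, rather than forbid, traversal of coverage holes by assigning them a large edge cost.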
Energy efficiency and transmission delay are very important parameters for wireless multi-hop networks. Previous works that study energy efficiency and delay are based on the assumption of reliable links. However, the unreliability of the channel is inevitable in wireless multi-hop networks. This paper investigates the trade-off between the energy consumption and the end-to-end delay of multi-hop communications in a wireless network using an unreliable link model. It provides a closed form expression of the lower bound on the energy-delay trade-off for different channel models (AWGN, Rayleigh flat fading and Nakagami block-fading) in a linear network. These analytical results are also verified in 2-dimensional Poisson networks using simulations. The main contribution of this work is the use of a probabilistic link model to define the energy efficiency of the system and capture the energy-delay trade-offs. Hence, it provides a more realistic lower bound on both the energy efficiency and the energy-delay trade-off since it does not restrict the study to the set of perfect links as proposed in earlier works.
Although planning is a crucial component of the autonomous driving stack, researchers have yet to develop robust planning algorithms that are capable of safely handling the diverse range of possible driving scenarios. Learning-based planners suffer from overfitting and poor long-tail performance. On the other hand, rule-based planners generalize well, but might fail to handle scenarios that require complex driving maneuvers. To address these limitations, we investigate the possibility of leveraging the common-sense reasoning capabilities of Large Language Models (LLMs) such as GPT4 and Llama2 to generate plans for self-driving vehicles. In particular, we develop a novel hybrid planner that leverages a conventional rule-based planner in conjunction with an LLM-based planner. Guided by the commonsense reasoning abilities of LLMs, our approach navigates complex scenarios that existing planners struggle with, and produces well-reasoned outputs while remaining grounded by working alongside the rule-based approach. Through extensive evaluation on the nuPlan benchmark, we achieve state-of-the-art performance, outperforming all existing pure learning- and rule-based methods across most metrics. Our code will be available at https://llmassist.github.io.
Reconfigurable intelligent surface (RIS) is capable of intelligently manipulating the phases of the incident electromagnetic wave to improve the wireless propagation environment between the base-station (BS) and the users. This paper addresses the joint user scheduling, RIS configuration, and BS beamforming problem in an RIS-assisted downlink network with limited pilot overhead. We show that graph neural networks (GNN) with permutation invariant and equivariant properties can be used to appropriately schedule users and to design RIS configurations to achieve high overall throughput while accounting for fairness among the users. As compared to the conventional methodology of first estimating the channels and then optimizing the user schedule, RIS configuration and the beamformers, this paper shows that an optimized user schedule can be obtained directly from a very short set of pilots using a GNN, then the RIS configuration can be optimized using a second GNN, and finally the BS beamformers can be designed based on the overall effective channel. Numerical results show that the proposed approach can utilize the received pilots more efficiently than the conventional channel estimation based approach, and can generalize to systems with an arbitrary number of users.
Attention-based models, exemplified by the Transformer, can effectively model long-range dependencies, but suffer from the quadratic complexity of the self-attention operation, making them difficult to adopt for high-resolution image generation based on Generative Adversarial Networks (GANs). In this paper, we introduce two key ingredients to Transformer to address this challenge. First, in low-resolution stages of the generative process, standard global self-attention is replaced with the proposed multi-axis blocked self-attention which allows efficient mixing of local and global attention. Second, in high-resolution stages, we drop self-attention while only keeping multi-layer perceptrons reminiscent of the implicit neural function. To further improve the performance, we introduce an additional self-modulation component based on cross-attention. The resulting model, denoted as HiT, has a nearly linear computational complexity with respect to the image size and thus directly scales to synthesizing high definition images. We show in the experiments that the proposed HiT achieves state-of-the-art FID scores of 30.83 and 2.95 on unconditional ImageNet $128 \times 128$ and FFHQ $256 \times 256$, respectively, with a reasonable throughput. We believe the proposed HiT is an important milestone for generators in GANs which are completely free of convolutions. Our code is made publicly available at https://github.com/google-research/hit-gan
We construct new regular black hole solutions by matching the de Sitter solution and the Reissner-Nordstrom solution with a timelike thin shell. The thin shell is assumed to have mass but no pressure and obeys an equation of motion derived from Israel's junction conditions. By investigating the equation of motion for the shell, we obtain stationary solutions of charged regular black holes and examine the stability of these solutions. Stationary solutions are found in the limited range 0.87L < m < 1.99L, and they are stable against small radial displacement of the shell with fixed values of m, M, and Q if M>0, where L is the de Sitter horizon radius, m the black hole mass, M the proper mass of the shell and Q the black hole charge. All the solutions obtained are highly charged in the sense that Q/m > 0.866. By taking the massless limit of the shell in the present regular black hole solutions, we recover the charged regular black hole with a massless shell obtained by Lemos and Zanchin and investigate its stability. It is found that Lemos and Zanchin's regular black hole solutions, given by the massless limit of the present regular black hole solutions, admit stable solutions, which are obtained in the limit M -> 0.
The secular evolution of disk galaxies is largely driven by resonances between the orbits of 'particles' (stars or dark matter) and the rotation of non-axisymmetric features (spiral arms or a bar). Such resonances may also explain kinematic and photometric features observed in the Milky Way and external galaxies. In simplified cases, these resonant interactions are well understood: for instance, the dynamics of a test particle trapped near a resonance of a steadily rotating bar is easily analyzed using the angle-action tools pioneered by Binney, Monari and others. However, such treatments do not address the stochasticity and messiness inherent to real galaxies - effects which have, with few exceptions, been previously explored only with complex N-body simulations. In this paper, we propose a simple kinetic equation describing the distribution function of particles near an orbital resonance with a rigidly rotating bar, allowing for diffusion of the particles' slow actions. We solve this equation for various values of the dimensionless diffusion strength $\Delta$, and then apply our theory to the calculation of bar-halo dynamical friction. For $\Delta = 0$ we recover the classic result of Tremaine & Weinberg that friction ultimately vanishes, owing to the phase-mixing of resonant orbits. However, for $\Delta > 0$ we find that diffusion suppresses phase-mixing, leading to a finite torque. Our results suggest that stochasticity - be it physical or numerical - tends to increase bar-halo friction, and that bars in cosmological simulations might experience significant artificial slowdown, even if the numerical two-body relaxation time is much longer than a Hubble time.
The recent detection of gravitational waves and electromagnetic counterparts emitted during and after the collision of two neutron stars marks a breakthrough in the field of multi-messenger astronomy. Numerical relativity simulations are the only tool to describe the binary's merger dynamics in the regime when speeds are largest and gravity is strongest. In this work we report state-of-the-art binary neutron star simulations for irrotational (non-spinning) and spinning configurations. The main use of these simulations is to model the gravitational-wave signal. Key numerical requirements are the understanding of the convergence properties of the numerical data and a detailed error budget. The simulations have been performed on different HPC clusters, they use multiple grid resolutions, and are based on eccentricity reduced quasi-circular initial data. We obtain convergent waveforms with phase errors of 0.5-1.5 rad accumulated over approximately 12 orbits to merger. The waveforms have been used for the construction of a phenomenological waveform model which has been applied for the analysis of the recent binary neutron star detection. Additionally, we show that the data can also be used to test other state-of-the-art semi-analytical waveform models.
A common goal in statistics and machine learning is to learn models that can perform well against distributional shifts, such as latent heterogeneous subpopulations, unknown covariate shifts, or unmodeled temporal effects. We develop and analyze a distributionally robust stochastic optimization (DRO) framework that learns a model providing good performance against perturbations to the data-generating distribution. We give a convex formulation for the problem, providing several convergence guarantees. We prove finite-sample minimax upper and lower bounds, showing that distributional robustness sometimes comes at a cost in convergence rates. We give limit theorems for the learned parameters, where we fully specify the limiting distribution so that confidence intervals can be computed. On real tasks including generalizing to unknown subpopulations, fine-grained recognition, and providing good tail performance, the distributionally robust approach often exhibits improved performance.
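One simple instance of the distributionally robust idea is a group-DRO sketch (not the paper's convex formulation): minimize the worst subpopulation's loss rather than the pooled average. The two-group 1D regression, the subgradient step, and all constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two latent subpopulations with different slopes: an illustrative stand-in
# for a distributional shift between a majority and a minority group.
x0, x1 = rng.normal(size=200), rng.normal(size=20)
y0 = 1.0 * x0 + 0.1 * rng.normal(size=200)
y1 = 3.0 * x1 + 0.1 * rng.normal(size=20)
groups = [(x0, y0), (x1, y1)]

def group_losses(theta):
    """Per-group mean squared error of the 1D linear model y = theta * x."""
    return np.array([np.mean((y - theta * x) ** 2) for x, y in groups])

# ERM: minimize the pooled average loss; the majority group dominates.
xs, ys = np.concatenate([x0, x1]), np.concatenate([y0, y1])
theta_erm = (xs @ ys) / (xs @ xs)

# Group DRO: minimize the worst group's loss by subgradient descent,
# taking at each step the gradient of the currently worst group's MSE.
theta = 0.0
for _ in range(2000):
    g = int(np.argmax(group_losses(theta)))
    xg, yg = groups[g]
    grad = -2.0 * np.mean(xg * (yg - theta * xg))
    theta -= 0.01 * grad

print(group_losses(theta_erm).max(), group_losses(theta).max())
```

The robust fit sacrifices pooled-average accuracy to improve the worst (minority) group's loss, mirroring the abstract's tail-performance motivation.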
The mechanical properties of biological membranes play an important role in the structure and the functioning of living organisms. One of the most widely used methods for determining the bending elasticity modulus of model lipid membranes (simplified models of biomembranes with similar mechanical properties) is the analysis of the shape fluctuations of nearly spherical lipid vesicles. A theoretical basis for such an analysis was developed by Milner and Safran. In the present study we analyze their results using an approach based on the Bogoljubov inequalities and the approximating Hamiltonian method. This approach is in accordance with the principles of statistical mechanics and is free of contradictions. Our considerations validate the results of Milner and Safran if the stretching elasticity K_s of the membrane tends to zero.
The ATLAS detector at the Large Hadron Collider is used to search for the lepton flavor violating process $Z \rightarrow e \mu$ in pp collisions using 20.3 $fb^{-1}$ of data collected at $\sqrt{s}$ = 8 TeV. An enhancement in the $e \mu$ invariant mass spectrum is searched for at the Z boson mass. The number of Z bosons produced in the data sample is estimated using events of similar topology, $Z \rightarrow ee$ and $\mu \mu$, significantly reducing the systematic uncertainty in the measurement. There is no evidence of an enhancement at the Z boson mass, resulting in an upper limit on the branching fraction, $B(Z \rightarrow e \mu)$ < 7.5 x 10$^{-7}$ at the 95% confidence level.
We analyze a large system of heterogeneous quadratic integrate-and-fire (QIF) neurons with time delayed, all-to-all synaptic coupling. The model is exactly reduced to a system of firing rate equations that is exploited to investigate the existence, stability and bifurcations of fully synchronous, partially synchronous, and incoherent states. In conjunction with this analysis we perform extensive numerical simulations of the original network of QIF neurons, and determine the relation between the macroscopic and microscopic states for partially synchronous states. The results are summarized in two phase diagrams, for homogeneous and heterogeneous populations, which are obtained analytically to a large extent. For excitatory coupling, the phase diagram is remarkably similar to that of the Kuramoto model with time delays, although here the stability boundaries extend to regions in parameter space where the neurons are not self-sustained oscillators. In contrast, the structure of the boundaries for inhibitory coupling is different, and already for homogeneous networks unveils the presence of various partially synchronized states not present in the Kuramoto model: Collective chaos, quasiperiodic partial synchronization (QPS), and a novel state which we call modulated-QPS (M-QPS). In the presence of heterogeneity, partially synchronized states reminiscent of collective chaos, QPS and M-QPS persist. In addition, the presence of heterogeneity greatly amplifies the differences between the incoherence stability boundaries of excitation and inhibition. Finally, we compare our results with those of a traditional (Wilson-Cowan-type) firing rate model with time delays. The oscillatory instabilities of the traditional firing rate model qualitatively agree with our results only for the case of inhibitory coupling with strong heterogeneity.
Modern navigation services often provide multiple paths connecting the same source and destination for users to select. Hence, ranking such paths becomes increasingly important, which directly affects the service quality. We present PathRank, a data-driven framework for ranking paths based on historical trajectories using multi-task learning. If a trajectory used path P from source s to destination d, PathRank considers this as evidence that P is preferred over all other paths from s to d. Thus, a path that is similar to P should have a larger ranking score than a path that is dissimilar to P. Based on this intuition, PathRank models path ranking as a regression problem, where each path is associated with a ranking score. To enable PathRank, we first propose an effective method to generate a compact set of training data: for each trajectory, we generate a small set of diversified paths. Next, we propose a multi-task learning framework to solve the regression problem. In particular, a spatial network embedding is proposed to embed each vertex to a feature vector by considering both road network topology and spatial properties, such as distances and travel times. Since a path is represented by a sequence of vertices, which is now a sequence of feature vectors after embedding, a recurrent neural network is applied to model the sequence. The objective function is designed to consider errors on both ranking scores and spatial properties, making the framework a multi-task learning framework. Empirical studies on a substantial trajectory data set offer insight into the designed properties of the proposed framework and indicate that it is effective and practical.
We transform Tutte-Grothendieck invariants, and thus also Tutte polynomials on matroids, so that the contraction-deletion rule for loops (isthmuses) coincides with the general case.
In this paper we study the local behavior of a solution to the Lam\'e system with \emph{Lipschitz} coefficients in dimension $n\ge 2$. Our main result is a bound on the vanishing order of a nontrivial solution, which immediately implies the strong unique continuation property. This paper solves the open problem of the strong unique continuation property for the Lam\'e system with Lipschitz coefficients in any dimension.
A theoretical model for the radiation linewidth in a multi-fluxon state of a long Josephson junction is presented. Starting from the perturbed sine-Gordon model with a temperature-dependent noise term, we develop a collective coordinate approach which allows us to calculate the finite radiation linewidth due to excitation of the internal degrees of freedom in the moving fluxon chain. At low fluxon density, the radiation linewidth is expected to be substantially larger than that of a lumped Josephson oscillator. With increasing fluxon density, a crossover to a much smaller linewidth corresponding to the lumped oscillator limit is predicted.
We obtain exact travelling wave solutions for three families of stochastic one-dimensional nonequilibrium lattice models with open boundaries. These solutions describe the diffusive motion and microscopic structure of (i) shocks in the partially asymmetric exclusion process with open boundaries, (ii) a lattice Fisher wave in a reaction-diffusion system, and (iii) a domain wall in non-equilibrium Glauber-Kawasaki dynamics with magnetization current. For each of these systems we define a microscopic shock position and calculate the exact hopping rates of the travelling wave in terms of the transition rates of the microscopic model. In the steady state a reversal of the bias of the travelling wave marks a first-order non-equilibrium phase transition, analogous to the Zel'dovich theory of kinetics of first-order transitions. The stationary distributions of the exclusion process with $n$ shocks can be described in terms of $n$-dimensional representations of matrix product states.
In any multiperiod panel, a two-way fixed effects (TWFE) regression is numerically equivalent to a first-difference (FD) regression that pools all possible between-period gaps. Building on this observation, this paper develops numerical and causal interpretations of the TWFE coefficient. At the sample level, the TWFE coefficient is a weighted average of FD coefficients with different between-period gaps. This decomposition is useful for assessing the source of identifying variation for the TWFE coefficient. At the population level, a causal interpretation of the TWFE coefficient requires a common trends assumption for any between-period gap, and the assumption has to be conditional on changes in time-varying covariates. I propose a natural generalization of the TWFE estimator that can relax these requirements.
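The claimed numerical equivalence can be checked directly: on a balanced panel, the TWFE slope from double demeaning equals a first-difference regression pooled over all between-period gaps, with each gap's cross-section demeaned (i.e., gap-pair fixed effects). The simulated panel below is an illustrative sketch of that identity, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 4  # illustrative balanced panel: N units, T periods
x = rng.normal(size=(N, T))
# Outcome with unit effects, time effects, true slope 2, and noise.
y = (2.0 * x + rng.normal(size=(N, 1)) + rng.normal(size=(1, T))
     + 0.1 * rng.normal(size=(N, T)))

def twfe(x, y):
    """Two-way fixed effects slope via double demeaning (balanced panel)."""
    xt = x - x.mean(1, keepdims=True) - x.mean(0, keepdims=True) + x.mean()
    yt = y - y.mean(1, keepdims=True) - y.mean(0, keepdims=True) + y.mean()
    return (xt * yt).sum() / (xt ** 2).sum()

def pooled_fd(x, y):
    """First differences pooled over every between-period gap (t, s), t > s,
    demeaning each gap's cross-section (gap-pair fixed effects)."""
    num = den = 0.0
    for t in range(T):
        for s in range(t):
            dx = x[:, t] - x[:, s]
            dy = y[:, t] - y[:, s]
            dx, dy = dx - dx.mean(), dy - dy.mean()
            num += (dx * dy).sum()
            den += (dx ** 2).sum()
    return num / den

print(twfe(x, y), pooled_fd(x, y))  # identical up to floating-point error
```

Keeping the per-gap terms separate before summing also recovers the decomposition in the abstract: the TWFE coefficient is a weighted average of gap-specific FD coefficients, with weights proportional to each gap's demeaned variance.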
The carriers in the high-Tc cuprates are found to be polaron-like "stripons" carrying charge and located in stripe-like inhomogeneities, "quasi-electrons" carrying charge and spin, and "svivons" carrying spin and some lattice distortion. The anomalous spectroscopic and transport properties of the cuprates are understood. The stripe-like inhomogeneities result from the Bose condensation of the svivon field, and the speed of their dynamics is determined by the width of the double-svivon neutron-resonance peak. The connection of this peak to the peak-dip-hump gap structure observed below Tc emerges naturally. Pairing results from transitions between pair states of stripons and quasi-electrons through the exchange of svivons. The pairing symmetry is of the d_{x^2-y^2} type; however, sign reversal through the charged stripes results in features not characteristic of this symmetry. The phase diagram is determined by pairing and coherence lines within the regime of a Mott transition. Coherence without pairing results in a Fermi-liquid state, and incoherent pairing results in the pseudogap state where localized electron and electron pair states exist within the Hubbard gap. A metal-insulator-transition quantum critical point occurs between these two states at T=0 when the superconducting state is suppressed. An intrinsic heterogeneity is expected of superconducting and pseudogap nanoscale regions.
Environmentally assisted cracking phenomena are widespread across the transport, defence, energy and construction sectors. However, predicting environmentally assisted fractures is a highly cross-disciplinary endeavour that requires resolving the multiple material-environment interactions taking place. In this manuscript, an overview is given of recent breakthroughs in the modelling of environmentally assisted cracking. The focus is on the opportunities created by two recent developments: phase field and multi-physics modelling. The possibilities enabled by the confluence of phase field methods and electro-chemo-mechanics modelling are discussed in the context of three environmental assisted cracking phenomena of particular engineering interest: hydrogen embrittlement, localised corrosion and corrosion fatigue. Mechanical processes such as deformation and fracture can be coupled with chemical phenomena like local reactions, ionic transport and hydrogen uptake and diffusion. Moreover, these can be combined with the prediction of an evolving interface, such as a growing pit or a crack, as dictated by a phase field variable that evolves based on thermodynamics and local kinetics. Suitable for both microstructural and continuum length scales, this new generation of simulation-based, multi-physics phase field models can open new modelling horizons and enable Virtual Testing in harmful environments.
We explore the origin of the colour-magnitude relation (CMR) of early type galaxies in the Virgo cluster using spectra of very high S/N ratio for six elliptical galaxies selected along the CMR. The data are analysed using a new evolutionary stellar population synthesis model to generate galaxy spectra at the resolution given by their velocity dispersions. In particular we use a new age indicator that is virtually free of the effects of metallicity. We find that the luminosity weighted mean ages of Virgo ellipticals are greater than ~8 Gyr, and show no clear trend with galaxy luminosity. We also find a positive correlation of metallicity with luminosity, colour and velocity dispersion. We conclude that the CMR is driven primarily by a luminosity-metallicity correlation. However, not all elements increase equally with the total metallicity and we speculate that the CMR may be driven by both a total metallicity increase and by a systematic departure from solar abundance ratios of some elements along the CMR. A full understanding of the role played by the total metallicity, abundance ratios and age in generating the CMR requires the analysis of spectra of very high quality, such as those reported here, for a larger number of galaxies in Virgo and other clusters.
We propose a data-driven approach for power allocation in the context of federated learning (FL) over interference-limited wireless networks. The power policy is designed to maximize the transmitted information during the FL process under communication constraints, with the ultimate objective of improving the accuracy and efficiency of the global FL model being trained. The proposed power allocation policy is parameterized using a graph convolutional network and the associated constrained optimization problem is solved through a primal-dual algorithm. Numerical experiments show that the proposed method outperforms three baseline methods in both transmission success rate and FL global performance.