We study numerically the one-dimensional ferromagnetic Kondo lattice, a model widely used to describe nickel and manganese perovskites. By including a nearest-neighbor Coulomb interaction (V) and a superexchange interaction between the localized moments (K), we obtain the phase diagram in parameter space for several dopings at T=0. Due to the competition between double exchange and superexchange, we find a region where the formation of magnetic polarons induces a charge-ordered (CO) state which survives even for V=0. This mechanism should be taken into account in theories of charge ordering involving spin degrees of freedom.
New Type of Charge and Magnetic Order in the Ferromagnetic Kondo Lattice
In this paper order estimates for the linear widths of some function classes are obtained; these classes are defined by restrictions on the weighted $L_{p_1}$-norm of the $r$-th derivative and the weighted $L_{p_0}$-norm of the zeroth derivative.
Linear widths of weighted Sobolev classes with conditions on the highest order and zero derivatives
We expand upon a graph-theoretic set of uncertainty principles with tight bounds for difference estimators acting simultaneously in the graph domain and the frequency domain. We show that the eigenfunctions of a modified graph Laplacian and a modified normalized graph Laplacian operator dictate the upper and lower bounds for the inequalities. Finally, we establish the feasibility region of difference estimator values in $\mathbb{R}^2$.
Graph theoretic uncertainty and feasibility
Let F be a nonarchimedean local field and let G = GL(n) = GL(n,F). Let E/F be a finite Galois extension. We investigate base change E/F at two levels: at the level of algebraic varieties, and at the level of K-theory. We put special emphasis on the representations with Iwahori fixed vectors, and the tempered spectrum of GL(1) and GL(2). In this context, the prominent arithmetic invariant is the residue degree f(E/F).
Base change and K-theory for GL(n)
High-quality data accumulation is now becoming ubiquitous in the health domain. There is increasing opportunity to exploit rich data from normal subjects to improve supervised estimators in specific diseases with notorious data scarcity. We demonstrate that low-dimensional embedding spaces can be derived from the UK Biobank population dataset and used to enhance data-scarce prediction of health indicators, lifestyle, and demographic characteristics. Phenotype predictions facilitated by Variational Autoencoder manifolds typically scaled better with increasing unlabeled data than dimensionality reduction by PCA or Isomap. Performance gains from semi-supervision approaches will probably become an important ingredient for various medical data science applications.
Label scarcity in biomedicine: Data-rich latent factor discovery enhances phenotype prediction
This work combines information about the dialogue history, encoded by a pre-trained model, with a meaning representation of the current system utterance to realize contextual language generation in task-oriented dialogues. We utilize the pre-trained multi-context ConveRT model for context representation in a model trained from scratch, and leverage the immediately preceding user utterance for context generation in a model adapted from the pre-trained GPT-2. Experiments with the MultiWOZ dataset show that contextual information encoded by a pre-trained model improves the performance of response generation in both automatic metrics and human evaluation. Our contextual generator produces a higher variety of responses that fit better into the ongoing dialogue. Analysis of the context size shows that a longer context does not automatically lead to better performance, but that the immediately preceding user utterance plays an essential role in contextual generation. In addition, we propose a re-ranker for the GPT-based generation model. The experiments show that the response selected by the re-ranker yields a significant improvement on automatic metrics.
Context Matters in Semantically Controlled Language Generation for Task-oriented Dialogue Systems
We have carried out a study of the interstellar medium (ISM) toward the shell-like supernova remnant (SNR) Puppis A by using the NANTEN CO and ATCA HI data. We synthesized a comprehensive picture of the SNR radiation by combining the ISM data with the gamma-ray and X-ray distributions. The ISM, both atomic and molecular gas, is dense and highly clumpy, and is distributed all around the SNR, but mainly in the north-east. The CO distribution revealed an enhanced intensity ratio of the CO($J$ = 2-1)/($J$ = 1-0) transitions as well as CO line broadening, which indicate shock heating/acceleration. The results support that Puppis A is located at 1.4 kpc, in the local arm. The ISM interacting with the SNR has a large mass of $\sim$10$^{4}$ $M_{\odot}$, which is dominated by HI, showing good spatial correspondence with the Fermi-LAT gamma-ray image. This favors a hadronic origin of the gamma-rays, while an additional contribution of the leptonic component is not excluded. The distribution of the X-ray ionization timescales within the shell suggests that the shock front ionized various parts of the ISM at epochs ranging from a few thousand to ten thousand years ago. We therefore suggest that the age of the SNR is around 10$^{4}$ yr, as given by the largest ionization timescale. We estimate the total cosmic-ray energy $W_{\rm p}$ to be 10$^{47}$ erg, which is well placed in the cosmic-ray escaping phase of an age-$W_{\rm p}$ plot including more than ten SNRs.
Molecular and Atomic Clouds Associated with the Gamma-Ray Supernova Remnant Puppis A
We address various notions of shadowing and expansivity for continuous maps restricted to a proper subset of their domain. We prove new equivalences of shadowing and expansive properties, we demonstrate under what conditions certain expanding maps have shadowing, and generalize some known results in this area. We also investigate the impact of our theory on maps of the interval, in which context some of our results can be extended.
Shadowing and Expansivity in Sub-Spaces
In the Aharonov-Bohm (AB) effect, interference fringes are observed for a charged particle in the absence of local overlap with the external electromagnetic field. This notion of the apparent nonlocality of the interaction, or of the significant role of the potential, has recently been challenged and is under debate. The quantum electrodynamic approach provides a microscopic picture of the characteristics of the interaction between a charge and an external field. We explicitly show the gauge invariance of the local phase shift in the magnetic AB effect, which is in contrast to the results obtained using the usual semiclassical vector potential. Our study can resolve the issue of locality in the magnetic AB effect. However, the problem is not solved in the same way in the electric counterpart, wherein virtual scalar photons play an essential role.
Gauge invariance of the local phase in the Aharonov-Bohm interference: quantum electrodynamic approach
We probe the electrostatic mechanism driving adsorption of polyelectrolytes onto like-charged membranes upon the addition of tri- and tetravalent counterions to a bathing monovalent salt solution. We develop a one-loop-dressed strong coupling theory that treats the monovalent salt at the electrostatic one-loop level and the multivalent counterions within a strong-coupling approach. It is shown that the adhesive force of the multivalent counterions mediating the like-charge adsorption arises from their strong condensation at the charged membrane. The resulting interfacial counterion excess locally maximizes the screening ability of the electrolyte and minimizes the electrostatic polymer grand potential. This translates into an attractive force that pulls the polymer to the similarly charged membrane. We show that the high counterion valency enables this adsorption transition even at weakly charged membranes. Additionally, strongly charged membranes give rise to salt-induced correlations and intensify the interfacial multivalent counterion condensation, strengthening the complexation of the polymer with the like-charged membrane, as well as triggering the orientational transition of the molecule prior to its adsorption. Finally, our theory captures two additional key features evidenced by previous adsorption experiments: first, the critical counterion concentration for polymer adsorption decreases with the rise of the counterion valency, and second, the addition of monovalent salt enhances the screening of the membrane charges and suppresses salt correlations. This weakens the interfacial multivalent counterion condensation and results in the desorption of the polymer from the substrate.
Like-charge polymer-membrane complexation mediated by multivalent cations: one-loop-dressed strong coupling theory
Open-set classification is the problem of handling `unknown' classes that are not contained in the training dataset, whereas traditional classifiers assume that only known classes appear in the test environment. Existing open-set classifiers rely on deep networks trained in a supervised manner on known classes in the training set; this causes specialization of learned representations to known classes and makes it hard to distinguish unknowns from knowns. In contrast, we train networks for joint classification and reconstruction of input data. This enhances the learned representation so as to preserve information useful for separating unknowns from knowns, as well as to discriminate classes of knowns. Our novel Classification-Reconstruction learning for Open-Set Recognition (CROSR) utilizes latent representations for reconstruction and enables robust unknown detection without harming the known-class classification accuracy. Extensive experiments reveal that the proposed method outperforms existing deep open-set classifiers on multiple standard datasets and is robust to diverse outliers. The code is available at https://nae-lab.org/~rei/research/crosr/.
Classification-Reconstruction Learning for Open-Set Recognition
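To make the joint objective concrete, the following is a minimal sketch, assuming a PyTorch setup with hypothetical layer sizes; it illustrates the generic classification-plus-reconstruction training idea rather than the exact CROSR architecture.

```python
# Minimal sketch (hypothetical sizes, not the exact CROSR architecture):
# a network trained jointly for classification and reconstruction, so the
# latent code keeps information useful for separating unknowns from knowns.
import torch
import torch.nn as nn

class JointClassifierReconstructor(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32, num_known=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        self.classifier = nn.Linear(latent_dim, num_known)

    def forward(self, x):
        z = self.encoder(x)
        return self.classifier(z), self.decoder(z), z

model = JointClassifierReconstructor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

x = torch.randn(64, 784)              # dummy batch of known-class inputs
y = torch.randint(0, 10, (64,))       # dummy labels
logits, x_hat, z = model(x)
loss = ce(logits, y) + mse(x_hat, x)  # joint objective: classify + reconstruct
opt.zero_grad(); loss.backward(); opt.step()
```

At test time, a large reconstruction error (or an atypical latent code) can then serve as a signal for flagging an input as 'unknown'.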
The study of low regularity (in-)extendibility of Lorentzian manifolds is motivated by the question whether a given solution to the Einstein equations can be extended (or is maximal) as a weak solution. In this paper we show that a timelike complete and globally hyperbolic Lorentzian manifold is $C^0$-inextendible. For the proof we make use of the result, recently established by S\"amann [17], that even for \emph{continuous} Lorentzian manifolds that are globally hyperbolic, there exists a length-maximizing causal curve between any two causally related points.
Timelike completeness as an obstruction to $C^0$-extensions
For $n\geq 3$, let $(H_n, E)$ denote the $n$-th Henson graph, i.e., the unique countable homogeneous graph with exactly those finite graphs as induced subgraphs that do not embed the complete graph on $n$ vertices. We show that for all structures $\Gamma$ with domain $H_n$ whose relations are first-order definable in $(H_n,E)$ the constraint satisfaction problem for $\Gamma$ is either in P or is NP-complete. We moreover show a similar complexity dichotomy for all structures whose relations are first-order definable in a homogeneous graph whose reflexive closure is an equivalence relation. Together with earlier results, in particular for the random graph, this completes the complexity classification of constraint satisfaction problems of structures first-order definable in countably infinite homogeneous graphs: all such problems are either in P or NP-complete.
Constraint satisfaction problems for reducts of homogeneous graphs
In this paper, we consider the behaviour, when $q$ goes to $1$, of a convenient basis of meromorphic solutions of a family of linear $q$-difference equations. In particular, we show that, under convenient assumptions, such a basis of meromorphic solutions converges, when $q$ goes to $1$, to a basis of meromorphic solutions of a linear differential equation. We also explain that, given a linear differential equation of order at least two whose Newton polygon has only slopes of multiplicity one, and a basis of meromorphic solutions, we may build a family of linear $q$-difference equations that discretizes the linear differential equation, such that a convenient family of bases of meromorphic solutions is a $q$-deformation of the given basis of meromorphic solutions of the linear differential equation.
q-Deformation of Meromorphic Solutions of Linear Differential Equations
Deep reinforcement learning augments the reinforcement learning framework with the powerful representations of deep neural networks. Recent works have demonstrated the remarkable successes of deep reinforcement learning in various domains including finance, medicine, healthcare, video games, robotics, and computer vision. In this work, we provide a detailed review of recent and state-of-the-art research advances of deep reinforcement learning in computer vision. We start by reviewing the theories of deep learning, reinforcement learning, and deep reinforcement learning. We then propose a categorization of deep reinforcement learning methodologies and discuss their advantages and limitations. In particular, we divide deep reinforcement learning into seven main categories according to their applications in computer vision: (i) landmark localization; (ii) object detection; (iii) object tracking; (iv) registration on both 2D image and 3D volumetric data; (v) image segmentation; (vi) video analysis; and (vii) other applications. Each of these categories is further analyzed with respect to reinforcement learning techniques, network design, and performance. Moreover, we provide a comprehensive analysis of the existing publicly available datasets and examine source code availability. Finally, we present some open issues and discuss future research directions on deep reinforcement learning in computer vision.
Deep Reinforcement Learning in Computer Vision: A Comprehensive Survey
We compute finite-volume corrections to nucleon matrix elements of the axial-vector current. We show that knowledge of this finite-volume dependence, as well as that of the nucleon mass, obtained using lattice QCD will allow a clean determination of the chiral-limit values of the nucleon and Delta-resonance axial-vector couplings.
Baryon Axial Charge in a Finite Volume
We prove #P-completeness results for counting edge colorings on simple graphs. These strengthen the corresponding results on multigraphs from [4]. We prove that for any $\kappa \ge r \ge 3$ counting $\kappa$-edge colorings on $r$-regular simple graphs is #P-complete. Furthermore, we show that for planar $r$-regular simple graphs where $r \in \{3, 4, 5\}$ counting edge colorings with $\kappa$ colors for any $\kappa \ge r$ is also #P-complete. As there are no planar $r$-regular simple graphs for any $r > 5$, these statements cover all interesting cases in terms of the parameters $(\kappa, r)$.
The Complexity of Counting Edge Colorings for Simple Graphs
We report a high-resolution numerical study of two-dimensional (2D) miscible Rayleigh-Taylor (RT) incompressible turbulence with the Boussinesq approximation. An ensemble of 100 independent realizations was performed at small Atwood number and unit Prandtl number with a spatial resolution of $2048\times8193$ grid points. Our main focus is on the temporal evolution and the scaling behavior of global quantities and of small-scale turbulence properties. Our results show that the buoyancy force balances the inertial force at all scales below the integral length scale and thus validate the basic force-balance assumption of the Bolgiano-Obukhov scenario in 2D RT turbulence. It is further found that the Kolmogorov dissipation scale $\eta(t)\sim t^{1/8}$, the kinetic-energy dissipation rate $\varepsilon_u(t)\sim t^{-1/2}$, and the thermal dissipation rate $\varepsilon_{\theta}(t)\sim t^{-1}$. All of these scaling properties are in excellent agreement with the theoretical predictions of the Chertkov model [Phys. Rev. Lett. \textbf{91}, 115001 (2003)]. We further discuss the emergence of intermittency and anomalous scaling for high order moments of velocity and temperature differences. The scaling exponents $\xi^r_p$ of the $p$th-order temperature structure functions are shown to saturate to $\xi^r_{\infty}\simeq0.78\pm0.15$ for the highest orders, $p\sim10$. The value of $\xi^r_{\infty}$ and the order at which saturation occurs are compatible with those of turbulent Rayleigh-B\'{e}nard (RB) convection [Phys. Rev. Lett. \textbf{88}, 054503 (2002)], supporting the scenario of universality of buoyancy-driven turbulence with respect to the different boundary conditions characterizing the RT and RB systems.
Temporal evolution and scaling of mixing in two-dimensional Rayleigh-Taylor turbulence
Waves (including acoustic, electromagnetic, and elastic ones) propagating in the presence of a cluster of inhomogeneities undergo multiple interactions. When these inhomogeneities have sub-wavelength sizes, the dominating field due to these multiple interactions is the Foldy-Lax field. This field models the interaction between equivalent point-like scatterers, located at the centers of the small inhomogeneities, with scattering coefficients related to the geometrical/material properties of each inhomogeneity, such as polarization coefficients. One related question left open for a long time is whether we can reconstruct this Foldy-Lax field from the scattered field measured far away from the cluster of small inhomogeneities. This is the Foldy-Lax approximation (or Foldy-Lax paradigm). In this work, we show that this approximation is indeed valid as soon as the inhomogeneities enjoy critical scales between their sizes and contrasts. These critical scales allow them to generate resonances which can be characterized and computed. The main result here is that exciting the cluster with incident frequencies close to the real parts of these resonances allows us to reconstruct the Foldy-Lax field from the scattered waves collected far away from the cluster itself (as the far fields). In short, we show that the Foldy-Lax approximation is valid for nearly resonating incident frequencies. This result is demonstrated by using, as small inhomogeneities, dielectric nanoparticles for the 2D TM model of electromagnetic waves and bubbles for the 3D acoustic waves.
The Foldy-Lax approximation is valid for nearly resonating frequencies
We numerically study the two-dimensional, area-preserving web map. When the map is governed by ergodic behavior, it is, as expected, correctly described by Boltzmann-Gibbs statistics, based on the additive entropic functional $S_{BG}[p(x)] = -k\int dx\,p(x) \ln p(x)$. In contrast, possible ergodicity breakdown and transitory sticky dynamical behavior drag the map into the realm of generalized $q$-statistics, based on the nonadditive entropic functional $S_q[p(x)]=k\frac{1-\int dx\,[p(x)]^q}{q-1}$ ($q \in \mathbb{R}$; $S_1=S_{BG}$). We statistically describe the system (probability distribution of the sum of successive iterates, sensitivity to the initial condition, and entropy production per unit time) for typical values of the parameter that controls the ergodicity of the map. For small (large) values of the external parameter $K$, we observe $q$-Gaussian distributions with $q=1.935\dots$ (Gaussian distributions), as for the standard map. In contrast, for intermediate values of $K$, we observe a different scenario, due to the fractal structure of the trajectories embedded in the chaotic sea. Long-standing non-Gaussian distributions are characterized in terms of the kurtosis and the box-counting dimension of the chaotic sea.
Statistical characterization of discrete conservative systems: The web map
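As a rough, self-contained illustration of the protocol, the sketch below iterates one common four-fold-symmetric form of the web map, $u' = v$, $v' = -u - K\sin v$ (taken modulo $2\pi$), and estimates the kurtosis of the centered, rescaled sums of iterates; the specific map form, parameter values, and the kurtosis diagnostic are illustrative assumptions rather than the paper's exact procedure (a Gaussian gives kurtosis 3, while $q$-Gaussians have heavier tails).

```python
# Illustrative sketch: sums of iterates of a web map and their kurtosis.
# Assumed map form: u' = v, v' = -u - K*sin(v), taken modulo 2*pi.
import numpy as np

def web_map_sums(K, n_init=1000, n_iter=5000, seed=0):
    """Sums of successive iterates of v over many initial conditions."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-np.pi, np.pi, n_init)
    v = rng.uniform(-np.pi, np.pi, n_init)
    s = np.zeros(n_init)
    for _ in range(n_iter):
        u, v = v, -u - K * np.sin(v)
        u = (u + np.pi) % (2 * np.pi) - np.pi   # wrap back onto the torus
        v = (v + np.pi) % (2 * np.pi) - np.pi
        s += v
    return s

for K in (0.1, 5.0):                            # illustrative values only
    s = web_map_sums(K)
    s = (s - s.mean()) / s.std()
    kurt = np.mean(s**4) / np.mean(s**2) ** 2   # equals 3 for a Gaussian
    print(f"K = {K}: kurtosis of rescaled sums = {kurt:.2f}")
```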
A well-known result of W. Ray asserts that if $C$ is an unbounded convex subset of a Hilbert space, then there is a nonexpansive mapping $T: C\to C$ that has no fixed point. In this paper we establish some common fixed point properties for a semitopological semigroup $S$ of nonexpansive mappings acting on a closed convex subset $C$ of a Hilbert space, assuming that there is a point $c\in C$ with a bounded orbit and that a certain subspace of $C_b(S)$ has a left invariant mean. The left invariant mean (or amenability) is an important notion in harmonic analysis of semigroups and groups, introduced by von Neumann in 1929 \cite{Neu} and formalized by Day in 1957 \cite{Day}. In our investigation we use the notion of common attractive points introduced recently by S. Atsushiba and W. Takahashi.
Fixed point properties for semigroups of nonlinear mappings on unbounded sets
We construct, for each irrational number $\alpha$, a minimal $C^1$-diffeomorphism of the circle with rotation number $\alpha$ which admits a measurable fundamental domain.
Minimal C^1 diffeomorphisms of the circle which admit measurable fundamental domains
We derive spectral width estimates for traces of tempered solutions of a large class of multiplier equations in $\mathbf{R}^n$. The estimates are uniform for solutions up to a given order. In the process, we find a rather explicit expression for a tempered fundamental solution of a multiplier. We successfully verify our spectral width estimates against numerical results in several scenarios involving the inhomogeneous Helmholtz equation in $\mathbf{R}^n$ with $n=1,\dots,9$. Our main result is directly applicable in the stability analysis of solutions of inverse source problems.
On the transfer of information in multiplier equations
This article gives an elementary computational proof of the group law for Edwards elliptic curves. The associative law is expressed as a polynomial identity over the integers that is directly checked by polynomial division. Unlike other proofs, no preliminaries such as intersection numbers, Bezout's theorem, projective geometry, divisors, or Riemann-Roch are required. The proof of the group law has been formalized in the Isabelle/HOL proof assistant.
Formal Proof of the Group Law for Edwards Elliptic Curves
Knowledge graph embedding aims at modeling entities and relations with low-dimensional vectors. Most previous methods require that all entities should be seen during training, which is impractical for real-world knowledge graphs with new entities emerging on a daily basis. Recent efforts on this issue suggest training a neighborhood aggregator in conjunction with the conventional entity and relation embeddings, which may help embed new entities inductively via their existing neighbors. However, their neighborhood aggregators neglect the unordered and unequal natures of an entity's neighbors. To this end, we summarize the desired properties that may lead to effective neighborhood aggregators. We also introduce a novel aggregator, namely, Logic Attention Network (LAN), which addresses the properties by aggregating neighbors with both rule- and network-based attention weights. By comparing with conventional aggregators on two knowledge graph completion tasks, we experimentally validate LAN's superiority in terms of the desired properties.
Logic Attention Based Neighborhood Aggregation for Inductive Knowledge Graph Embedding
Keyphrase extraction (KPE) is an important task in Natural Language Processing for many scenarios, aiming to extract keyphrases that are present in a given document. Many existing supervised methods treat KPE as sequential labeling, span-level classification, or a generative task. However, these methods lack the ability to utilize keyphrase information, which may lead to biased results. In this study, we propose Diff-KPE, which leverages the supervised Variational Information Bottleneck (VIB) to guide the text diffusion process for generating enhanced keyphrase representations. Diff-KPE first generates the desired keyphrase embeddings conditioned on the entire document and then injects the generated keyphrase embeddings into each phrase representation. A ranking network and the VIB are then optimized together with a ranking loss and a classification loss, respectively. This design of Diff-KPE allows us to rank each candidate phrase by utilizing the information of both the keyphrases and the document. Experiments show that Diff-KPE outperforms existing KPE methods on a large open-domain keyphrase extraction benchmark, OpenKP, and a scientific-domain dataset, KP20K.
Enhancing Phrase Representation by Information Bottleneck Guided Text Diffusion Process for Keyphrase Extraction
In this paper, a six-cylinder-port hohlraum is proposed to provide a high-symmetry flux on the capsule. It is designed to ignite a capsule with 1.2 mm radius in indirect-drive inertial confinement fusion (ICF). Flux symmetry and laser energy are calculated by using a three-dimensional view-factor method and laser energy balance in hohlraums. Plasma conditions are analyzed based on two-dimensional radiation-hydrodynamic simulations. There is no $Y_{lm}$ ($l \le 4$) asymmetry in the six-cylinder-port hohlraum when the influences of the laser entrance holes (LEHs) and laser spots cancel each other out with suitable target parameters. A radiation drive of 300 eV with good flux symmetry can be achieved using a laser energy of 2.3 MJ and 500 TW peak power. According to the simulations, the electron temperature and the electron density on the wall of the laser cone are high and low, respectively, similar to those of the outer cones in the hohlraums on the National Ignition Facility (NIF). The laser intensity is also as low as that of the NIF outer cones, so the backscattering due to laser-plasma interaction (LPI) is considered to be negligible. The six-cylinder-port hohlraum could be superior to the traditional cylindrical hohlraum and the octahedral hohlraum in both higher symmetry and lower backscattering, without supplementary technology, at acceptable laser energy. The hohlraum will undoubtedly add to the diversity of ICF approaches.
A new ignition hohlraum design for indirect-drive inertial confinement fusion
Transiting circumbinary planets are more easily detected around short-period than long-period binaries, but none have yet been observed by {\it Kepler} orbiting binaries with periods shorter than seven days. In triple systems, secular Kozai-Lidov cycles and tidal friction (KLCTF) have been shown to reduce the inner orbital period from $\sim 10^4$ to a few days. Indeed, the majority of short-period binaries are observed to possess a third stellar companion. Using secular evolution analysis and population synthesis, we show that KLCTF makes it unlikely for circumbinary transiting planets to exist around short-period binaries. We find the following outcomes. (1) Sufficiently massive planets in tight and/or coplanar orbits around the inner binary can quench the KL evolution because they induce precession in the inner binary. The KLCTF process does not take place, preventing the formation of a short-period binary. (2) Secular evolution is not quenched and it drives the planetary orbit into a high eccentricity, giving rise to an unstable configuration, in which the planet is most likely ejected from the system. (3) Secular evolution is not quenched but the planet survives the KLCTF evolution. Its orbit is likely to be much wider than the currently observed inner binary orbit, and is likely to be eccentric and inclined with respect to the inner binary. These outcomes lead to two main conclusions: (1) it is unlikely to find a massive planet on a tight and coplanar orbit around a short-period binary, and (2) the properties of circumbinary planets in short-period binaries are constrained by secular evolution.
A triple origin for the lack of tight coplanar circumbinary planets around short-period binaries
We perform an in-depth analysis of the inequality in 863 Wikimedia projects. We take the complete editing history of 267,304,095 Wikimedia items up to 2016, which not only covers every language edition of Wikipedia, but also embraces the complete versions of Wiktionary, Wikisource, Wikivoyage, etc. Our finding of a common growth pattern, described by the interrelations between four characteristic growth yardsticks, suggests a universal law of communal data formation. In this encyclopaedic data set, we observe the interplay between the number of edits and the degree of inequality. In particular, the rapid increase in the Gini coefficient suggests that this entrenched inequality stems from the nature of such open-editing communal data sets, namely the abiogenesis of the supereditors' oligopoly. We show that these supereditor groups were created at the early stages of these open-editing media and are still active. Furthermore, our model considers both short-term and long-term memories to successfully elucidate the underlying mechanism of the establishment of oligarchy in Wikipedia. Our results anticipate a noticeable prospect for such communal databases in the future: the disparity will not be resolved spontaneously.
Early onset of structural inequality in the formation of collaborative knowledge, Wikipedia
In this paper, we describe the design, construction, and performance of an apparatus installed in the Aberdeen Tunnel laboratory in Hong Kong for studying spallation neutrons induced by cosmic-ray muons under a vertical rock overburden of 611 meter water equivalent (m.w.e.). The apparatus comprises six horizontal layers of plastic-scintillator hodoscopes for determining the direction and position of the incident cosmic-ray muons. Sandwiched between the hodoscope planes is a neutron detector filled with 650 kg of liquid scintillator doped with about 0.06% gadolinium by weight for improving the efficiency of detecting the spallation neutrons. The performance of the apparatus is also presented.
An apparatus for studying spallation neutrons in the Aberdeen Tunnel laboratory
The ability of young stellar clusters to expel or retain the gas left over after a first episode of star formation is a central issue in all models aiming to explain multiple stellar populations and the peculiar light-element abundance patterns in globular clusters. Recent attempts to detect the gas left over from star formation in present-day clusters with masses similar to those of globular clusters did not reveal a significant amount of gas in the majority of them, which strongly restricts the scenarios of multiple stellar population formation. Here the conditions required to retain the gas left over from star formation within the natal star-forming cloud are revisited. It is shown that the usually accepted concept regarding the thermalization of the star cluster kinetic energy due to nearby stellar winds and SNe ejecta collisions must be taken with care in the case of very compact and dense star-forming clouds, where three star formation regimes are possible if one considers different star formation efficiencies and mass concentrations. The three possible regimes are well separated in the half-mass radius and in the natal gas central density vs pre-stellar cloud mass parameter space. The two gas-free clusters in the Antennae galaxies and the gas-rich cluster with a similar mass and age in the galaxy NGC 5253 appear in different zones in these diagrams. The critical lines obtained for clusters with a solar and a primordial gas metallicity are compared.
Gas expulsion vs gas retention: what process dominates in young massive clusters?
At the exit surface of a photonic crystal, the intensity of the diffracted wave can be periodically modulated, showing a maximum in the "positive" (forward-diffracted) or in the "negative" (diffracted) direction, depending on the slab thickness. This thickness dependence is a direct result of the so-called Pendellosung phenomenon, consisting of the periodic exchange, inside the crystal, of energy between the direct and diffracted beams. We report the experimental observation of this effect in the microwave region at about 14 GHz by irradiating 2D photonic crystal slabs of different thickness and detecting the intensity distribution of the electromagnetic field at the exit surface and inside the crystal itself.
Pendellosung effect in photonic crystals
We study the local effects of an external time-dependent magnetic field on axion-like particles assuming they constitute all the dark matter of the universe. We find that under suitable conditions the amplitude of the dark matter field can resonate parametrically. The resonance depends on the velocity of the axion-like particles and scales with the strength of the external magnetic field, $\frac{\rho}{\rho_{DM}} \sim {B_0}^3$. By considering typical experimental benchmark values, we find the resonance could amplify by around two orders of magnitude the local energy density stored in the dark matter condensate.
Parametric Resonance and Dark Matter Axion-Like Particles
We consider the XY spin chain with arbitrary time-dependent magnetic field and anisotropy. We argue that a certain subclass of Gaussian states, called Coherent Ensemble (CE) following [1], provides a natural and unified framework for out-of-equilibrium physics in this model. We show that \emph{all} correlation functions in the CE can be computed using form factor expansion and expressed in terms of Fredholm determinants. In particular, we present exact out-of-equilibrium expressions in the thermodynamic limit for the previously unknown order parameter one-point function, dynamical two-point function and equal-time three-point function.
Out-of-equilibrium dynamics of the XY spin chain from form factor expansion
Revocation of dishonest users is not an easy problem. This paper proposes a new way to manage revocation of pseudonyms in vehicular ad-hoc networks when identity-based authentication is used, increasing efficiency and security through certificateless authentication. In order to improve the performance of revocation lists, this paper proposes the use of a data structure based on authenticated dynamic hash k-ary trees and on the frequency with which revoked pseudonyms are consulted. Using knowledge about the frequency of consultation of revoked pseudonyms allows easier access to the most popular revoked pseudonyms, to the detriment of the least consulted ones. Accordingly, the proposal is especially useful in urban environments where some vehicles, such as public service vehicles, spend more time on the road than others.
Using query frequencies in tree-based revocation for certificateless authentication in VANETs
The expressive capacity of physical systems employed for learning is limited by the unavoidable presence of noise in their extracted outputs. Although noise is present in biological, analog, and quantum systems, its precise impact on learning is not yet fully understood. Focusing on supervised learning, we present a mathematical framework for evaluating the resolvable expressive capacity (REC) of general physical systems under finite sampling noise, and provide a methodology for extracting its extrema, the eigentasks. Eigentasks are a native set of functions that a given physical system can approximate with minimal error. We show that the REC of a quantum system is limited by the fundamental theory of quantum measurement, and obtain a tight upper bound for the REC of any finitely sampled physical system. We then provide empirical evidence that extracting low-noise eigentasks can lead to improved performance for machine learning tasks such as classification, displaying robustness to overfitting. We present analyses suggesting that correlations in the measured quantum system enhance learning capacity by reducing noise in eigentasks. The applicability of these results in practice is demonstrated with experiments on superconducting quantum processors. Our findings have broad implications for quantum machine learning and sensing applications.
Tackling Sampling Noise in Physical Systems for Machine Learning Applications: Fundamental Limits and Eigentasks
The connection between gravity and thermodynamics has attracted much attention recently. We consider a static self-gravitating perfect fluid system in $f(R)$ gravity, an important theory that could explain the accelerated expansion of the universe. We first show that the Tolman-Oppenheimer-Volkoff equation of $f(R)$ theories can be obtained by a thermodynamical method in spherically symmetric spacetime. We then prove that the maximum entropy principle is also valid for $f(R)$ gravity in general static spacetimes beyond spherical symmetry. The result shows that if the constraint equation is satisfied and the temperature of the fluid obeys Tolman's law, the extremum of the total entropy implies the other components of the gravitational equations. Conversely, if the $f(R)$ gravitational equations hold, the total entropy of the fluid is an extremum. Our work suggests a general and solid connection between $f(R)$ gravity and thermodynamics.
General proof of the entropy principle for self-gravitating fluid in f(R) Gravity
Bergweiler and Kotus gave sharp upper bounds for the Hausdorff dimension of the escaping set of a meromorphic function in the Eremenko-Lyubich class, in terms of the order of the function and the maximal multiplicity of the poles. We show that these bounds are also sharp in the Speiser class. We apply this method also to construct meromorphic functions in the Speiser class with preassigned dimensions of the Julia set and the escaping set.
The Hausdorff dimension of escaping sets of meromorphic functions in the Speiser class
Cybercrime investigators face numerous challenges when policing online crimes. Firstly, the methods and processes they use when dealing with traditional crimes do not necessarily apply in the cyber-world. Additionally, cyber criminals are usually technologically aware and constantly adapt and develop new tools that allow them to stay ahead of law enforcement investigations. In order to provide adequate support for cybercrime investigators, there needs to be a better understanding of the challenges they face at both technical and socio-technical levels. In this paper, we investigate this problem through an analysis of the current practices and workflows of investigators. We use interviews with experts from government and private sectors who investigate cybercrimes as our main data gathering process. From an analysis of the collected data, we identify several outstanding challenges faced by investigators. These pertain to practical, technical, and social issues such as system availability, usability, and computer-supported collaborative work. Importantly, we use our findings to highlight research areas where user-centric workflows and tools are desirable. We also define a set of recommendations that can aid in providing a better foundation for future research in the field and allow more effective combating of cybercrimes.
Cybercrime Investigators are Users Too! Understanding the Socio-Technical Challenges Faced by Law Enforcement
Experiments observing the liquid surface in a vertically oscillating container have indicated that modeling the dynamics of such systems requires maps that admit states at infinity. In this paper we investigate the bifurcations in such a map. We show that though such maps in general fall into the category of piecewise smooth maps, the mechanisms of bifurcations are quite different from those in other piecewise smooth maps. We obtain the conditions for the occurrence of infinite states, and show that periodic orbits containing such states are superstable. We observe a period-adding cascade in this system, and obtain the scaling law of the successive periodic windows.
Dynamics of a piecewise smooth map with singularity
This tutorial article presents a "bottom-up" view of electrical resistance, starting from something really small, like a molecule, and then discussing the issues that arise as we move to bigger conductors. Remarkably, no serious quantum mechanics is needed to understand electrical conduction through something really small, except for unusual things like the Kondo effect that are seen only for a special range of parameters. This article starts with energy level diagrams (Section 2), shows that the broadening that accompanies coupling limits the conductance to a maximum of $q^2/h$ per level (Sections 3, 4), describes how a change in the shape of the self-consistent potential profile can turn a symmetric current-voltage characteristic into a rectifying one (Sections 5, 6), shows that many interesting effects in molecular electronics can be understood in terms of a simple model (Section 7), introduces the non-equilibrium Green function (NEGF) formalism as a sophisticated version of this simple model with ordinary numbers replaced by appropriate matrices (Section 8), and ends with a personal view of unsolved problems in the field of nanoscale electron transport (Section 9). Appendix A discusses the Coulomb blockade regime of transport, while Appendix B presents a formal derivation of the NEGF equations. MATLAB codes for numerical examples are listed in Appendix C.
Electrical resistance: an atomistic view
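The "simple model" of Sections 3-7 can be sketched numerically: a single level $\varepsilon$ broadened by couplings $\Gamma_{1,2}$ to two contacts, with current given by the Landauer-style integral $I = (2q/h)\int T(E)\,[f_1(E) - f_2(E)]\,dE$, where $T(E) = \Gamma_1\Gamma_2/[(E-\varepsilon)^2 + ((\Gamma_1+\Gamma_2)/2)^2]$. The Python sketch below uses illustrative parameter values (the article itself provides MATLAB codes); the voltage-division factor `eta` is a simple stand-in for the self-consistent-potential effect of Sections 5-6.

```python
# Sketch of a one-level, two-contact toy model evaluated with the Landauer
# formula. All parameter names and values are illustrative assumptions.
import numpy as np

q, h, kT = 1.602e-19, 6.626e-34, 0.025 * 1.602e-19   # SI units, room temp

def current(V, eps0=0.2, gamma1=0.005, gamma2=0.005, eta=0.5):
    """Current (A) through a single broadened level at bias V (volts).

    eps0, gamma1, gamma2 are in eV; eta is the voltage-division factor
    shifting the level with bias (eta = 0.5 gives a symmetric I-V, while
    eta != 0.5 can produce rectifying behavior)."""
    E = np.linspace(-1.0, 1.0, 4001) * 1.602e-19      # energy grid (J)
    mu1, mu2 = 0.5 * q * V, -0.5 * q * V              # contact potentials
    eps = (eps0 - eta * V) * 1.602e-19                # bias-shifted level
    g1, g2 = gamma1 * 1.602e-19, gamma2 * 1.602e-19
    T = g1 * g2 / ((E - eps) ** 2 + ((g1 + g2) / 2) ** 2)   # Lorentzian
    f1 = 1.0 / (1.0 + np.exp((E - mu1) / kT))         # contact Fermi functions
    f2 = 1.0 / (1.0 + np.exp((E - mu2) / kT))
    return (2 * q / h) * np.trapz(T * (f1 - f2), E)

for V in (0.1, 0.3, 0.5):
    print(f"V = {V:.1f} V  ->  I = {current(V):.3e} A")
```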
For the Lagrangian-DNN relaxation of quadratic optimization problems (QOPs), we propose a Newton-bracketing method to improve the performance of the bisection-projection method implemented in BBCPOP [to appear in ACM Trans. Math. Softw., 2019]. The relaxation problem is converted into the problem of finding the largest zero $y^*$ of a continuously differentiable (except at $y^*$) convex function $g : \mathbb{R} \rightarrow \mathbb{R}$ such that $g(y) = 0$ if $y \leq y^*$ and $g(y) > 0$ otherwise. In theory, the method generates lower and upper bounds of $y^*$, both converging to $y^*$. Their convergence is quadratic if the right derivative of $g$ at $y^*$ is positive. Accurate computation of $g'(y)$ is necessary for the robustness of the method, but it is difficult to achieve in practice. As an alternative, we present a secant-bracketing method. We demonstrate that the method improves the quality of the lower bounds obtained by BBCPOP and SDPNAL+ for binary QOP instances from BIQMAC. Moreover, new lower bounds for the unknown optimal values of large-scale QAP instances from QAPLIB are reported.
A Newton-bracketing method for a simple conic optimization problem
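A simplified sketch of the bracketing idea follows (a hybrid of bisection for the lower bound and Newton steps for the upper bound; an illustration under the stated properties of $g$, not the authors' exact update rules). Convexity guarantees that the tangent at any point where $g > 0$ lies below $g$, so the tangent's zero is always a valid upper bound for $y^*$.

```python
# Minimal sketch (not the authors' exact algorithm): bracket the largest
# zero y* of a convex g with g(y) = 0 for y <= y* and g(y) > 0 otherwise.
def bracket_largest_zero(g, dg, lb, ub, tol=1e-12, max_iter=200):
    """Return (lb, ub) enclosing y* up to the tolerance."""
    for _ in range(max_iter):
        if ub - lb <= tol:
            break
        m = 0.5 * (lb + ub)
        if g(m) <= tol:
            lb = m                  # m is (numerically) at or below y*
        else:
            # Convexity: the tangent at m underestimates g, so its zero
            # still lies at or above y*  ->  a valid, smaller upper bound.
            ub = min(ub, m - g(m) / dg(m))
    return lb, ub

# Toy instance with known answer y* = 2: g(y) = max(y - 2, 0)^2.
g  = lambda y: max(y - 2.0, 0.0) ** 2
dg = lambda y: 2.0 * max(y - 2.0, 0.0)
print(bracket_largest_zero(g, dg, lb=0.0, ub=10.0))  # ~ (2.0, 2.0)
```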
We revisit the decoupling phenomenon of massless modes in noncommutative open string (NCOS) theories. We check the decoupling by explicit computation in (2+1)- or higher-dimensional NCOS theories and recapitulate the validity of the decoupling to all orders in perturbation theory.
On Decoupling of Massless Modes in NCOS Theories
In this paper we consider linearly constrained optimization problems and propose a loopless projection stochastic approximation (LPSA) algorithm. It performs the projection with probability $p_n$ at the $n$-th iteration to ensure feasibility. Considering a specific family of the probability $p_n$ and step size $\eta_n$, we analyze our algorithm from an asymptotic and continuous perspective. Using a novel jump diffusion approximation, we show that the trajectories connecting those properly rescaled last iterates weakly converge to the solution of specific stochastic differential equations (SDEs). By analyzing the SDEs, we identify the asymptotic behaviors of LPSA for different choices of $(p_n, \eta_n)$. We find that the algorithm presents an intriguing asymptotic bias-variance trade-off and yields phase transition phenomena, according to the relative magnitude of $p_n$ w.r.t. $\eta_n$. This finding provides insights into selecting appropriate ${(p_n, \eta_n)}_{n \geq 1}$ to minimize the projection cost. Additionally, we propose the Debiased LPSA (DLPSA) as a practical application of our jump diffusion approximation result. DLPSA is shown to effectively reduce projection complexity compared to vanilla LPSA.
Asymptotic Behaviors and Phase Transitions in Projected Stochastic Approximation: A Jump Diffusion Approach
We consider a general path-dependent version of the hedging problem with price impact of Bouchard et al. (2019), in which a dual formulation for the super-hedging price is obtained by means of PDE arguments, in a Markovian setting and under strong regularity conditions. Using only probabilistic arguments, we prove, in a path-dependent setting and under weak regularity conditions, that any solution to this dual problem actually allows one to construct explicitly a perfect hedging portfolio. From a purely probabilistic point of view, our approach also allows one to exhibit solutions to a specific class of second-order forward-backward stochastic differential equations, in the sense of Cheridito et al. (2007). Existence of a solution to the dual optimal control problem is also addressed in particular settings. As a by-product of our arguments, we prove a version of It{\^o}'s Lemma for path-dependent functionals that are only $C^{0,1}$ in the sense of Dupire.
Understanding the dual formulation for the hedging of path-dependent options with price impact
An efficient numerical method to compute solitary wave solutions to the free surface Euler equations is reported. It is based on the conformal mapping technique combined with an efficient Fourier pseudo-spectral method. The resulting nonlinear equation is solved via the Petviashvili iterative scheme. The computational results are compared to some existing approaches, such as the Tanaka method and Fenton's high-order asymptotic expansion. Several important integral quantities are numerically computed for a large range of amplitudes. The integral representation of the velocity and acceleration fields in the bulk of the fluid is also provided.
Efficient computation of steady solitary gravity waves
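For readers unfamiliar with the Petviashvili scheme, the sketch below applies it to a simpler stand-in problem (an assumption for illustration: the KdV traveling-wave equation $cu - u_{xx} = 3u^2$ on a periodic domain, not the free-surface Euler system of the paper), using the same Fourier pseudo-spectral machinery.

```python
# Petviashvili iteration for the KdV traveling wave c*u - u_xx = 3*u^2,
# whose exact solution is u = (c/2) * sech(sqrt(c)/2 * x)^2. This is an
# illustrative simplification, not the paper's Euler-equation solver.
import numpy as np

N, L, c = 256, 40.0, 1.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)       # spectral wavenumbers
Lop = c + k**2                                   # symbol of c - d^2/dx^2

u = np.exp(-x**2)                                # rough initial guess
for it in range(100):
    Nu = np.fft.fft(3 * u**2)                    # nonlinearity in Fourier space
    uh = np.fft.fft(u)
    # Stabilizing factor; exponent gamma = 2 for a quadratic nonlinearity.
    S = np.sum(Lop * np.abs(uh)**2) / np.real(np.sum(Nu * np.conj(uh)))
    u_new = np.real(np.fft.ifft(S**2 * Nu / Lop))
    if np.max(np.abs(u_new - u)) < 1e-12:        # fixed point reached
        break
    u = u_new

exact = (c / 2) / np.cosh(np.sqrt(c) / 2 * x)**2
print(f"iterations: {it},  max error: {np.max(np.abs(u - exact)):.2e}")
```

The stabilizing factor S tends to 1 at the solution; without it, the naive fixed-point map diverges for this class of equations.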
We study the Wigner distributions of quarks and gluons in the light-front dressed quark model using the overlap of light-front wave functions (LFWFs). We take the target to be a dressed quark, i.e., a composite spin-$1/2$ state of a quark dressed with a gluon. This state allows us to calculate the quark and gluon Wigner distributions analytically in terms of LFWFs using Hamiltonian perturbation theory. We analyze numerically the Wigner distributions of the quark and gluon and report their nature in contour plots. We use an improved numerical technique to remove the cutoff dependence of the Fourier-transformed integral over ${\bf \Delta}_\perp$.
Three Dimensional Imaging of the Nucleon
We reverse-engineer a formal semantics of the Component Definition Language (CDL), which is part of the highly configurable, embedded operating system eCos. This work provides the basis for an analysis and comparison of the two variability-modeling languages Kconfig and CDL. The semantics given in this document are based on analyzing the CDL documentation, inspecting the source code of the toolchain, as well as testing the tools on particular examples.
Formal Semantics of the CDL Language
We study conditions under which $P(S_\tau>x)\sim P(M_\tau>x)\sim E\tau\, P(\xi_1>x)$ as $x\to\infty$, where $S_\tau$ is a sum $\xi_1+...+\xi_\tau$ of random size $\tau$ and $M_\tau$ is the maximum of partial sums $M_\tau=\max_{n\le\tau}S_n$. Here $\xi_n$, $n=1$, 2, ..., are independent identically distributed random variables whose common distribution is assumed to be subexponential. We consider mostly the case where $\tau$ is independent of the summands; in a particular situation, we also deal with a stopping time. We further consider the case where $E\xi>0$ and where the tail of $\tau$ is comparable with, or heavier than, that of $\xi$, and obtain the asymptotics $P(S_\tau>x) \sim E\tau\, P(\xi_1>x)+P(\tau>x/E\xi)$ as $x\to\infty$. This case is of primary interest in branching processes. In addition, we obtain new uniform (in all $x$ and $n$) upper bounds for the ratio $P(S_n>x)/P(\xi_1>x)$ which substantially improve Kesten's bound in the subclass ${\mathcal S}^*$ of subexponential distributions.
Asymptotics of randomly stopped sums in the presence of heavy tails
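A quick Monte Carlo illustration of the first asymptotic, with illustrative assumed choices: Pareto summands with index 1.5 (subexponential) and an independent geometric $\tau$, whose light tail keeps the second term negligible.

```python
# Monte Carlo check of P(S_tau > x) ~ E[tau] * P(xi_1 > x) for subexponential
# summands and an independent, light-tailed random size tau. All parameter
# choices are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
alpha, p, n_sim, x = 1.5, 0.2, 200_000, 100.0

tau = rng.geometric(p, n_sim)                        # E[tau] = 1/p = 5
xi = rng.uniform(size=tau.sum()) ** (-1.0 / alpha)   # P(xi > t) = t^{-alpha}, t >= 1
starts = np.concatenate(([0], np.cumsum(tau)[:-1]))
S = np.add.reduceat(xi, starts)                      # S[i] = sum of tau[i] summands

lhs = np.mean(S > x)
rhs = (1.0 / p) * x ** (-alpha)                      # E[tau] * P(xi_1 > x)
print(f"P(S_tau > x) = {lhs:.2e}   vs   E[tau] P(xi > x) = {rhs:.2e}")
# Agreement improves as x grows, reflecting the single-big-jump principle.
```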
We investigated the complex magnetic properties of the multifunctional LaCrO3-LaFeO3 system. The magnetic measurements substantiate the presence of competing complex magnetic ordering against temperature, showing a paramagnetic to ferrimagnetic transition at 300 K, followed by an antiferromagnetic (AFM) transition near 250 K superimposed on the ferrimagnetic phase. The onset of weak ferrimagnetic ordering is attributed to the competing complex interaction between the two AFM LaCrO3-LaFeO3 sublattices. The low-temperature AFM ordering is also substantiated by temperature-dependent Raman measurements, where the intensity ratio of the 700 cm-1 Raman-active mode showed a clear enhancement with lowering temperature. The non-saturating nature of the magnetic moments in LaCrO3-LaFeO3 suggests predominating AFM ordering in conjunction with ferrimagnetic ordering between 250 K and 300 K up to a 5 T magnetic field. A complex magnetic structure of LaCrO3-LaFeO3 is constructed, emphasizing the metastable magnetic phase near room temperature and the low-temperature antiferromagnetic state.
Anomalous magnetic behavior and complex magnetic structure of the proximate LaCrO3-LaFeO3 system
In this chapter we describe a selection of mathematical techniques and results that suggest interesting links between the theory of gratings and the theory of homogenization, including a brief introduction to the latter. By no means do we purport to imply that homogenization theory is an exclusive method for studying gratings, nor do we hope to be exhaustive in our choice of topics within the subject of homogenization. Our preferences here are motivated most of all by our own latest research, and by our outlook on future interactions between these two subjects. We have also attempted, in what follows, to contrast the "classical" homogenization (Section 11.1.2), which is well suited to the description of composites as we have known them since their advent until about a decade ago, with the "non-standard" approaches, high-frequency homogenization (Section 11.2) and high-contrast homogenization (Section 11.3), which have been developing in close relation to the study of photonic crystals and metamaterials. The latter exhibit properties unseen in conventional composite media, such as negative refraction, allowing for super-lensing through a flat heterogeneous lens, and cloaking, which considerably reduces the scattering by finite-size objects (invisibility) in a certain frequency range. These novel electromagnetic paradigms have renewed the interest of physicists and applied mathematicians alike in the theory of gratings.
Homogenization Techniques for Periodic Structures
We performed high-resolution Fourier-transform infrared (FTIR) spectroscopy of a polymethyl methacrylate (PMMA) sphere of unknown size in the Mie scattering region. Apart from a slow, oscillatory structure (wiggles), which is due to an interference effect, the measured FTIR extinction spectrum exhibits a ripple structure, which is due to electromagnetic resonances. We fully characterize the underlying electromagnetic mode structure of the spectrum by assigning two mode numbers to each of the ripples in the measured spectrum. We show that analyzing the ripple structure in the wavenumber region from about $3000\,$cm$^{-1}$ to $8000\,$cm$^{-1}$ allows us to determine both the unknown radius of the sphere and the PMMA index of refraction, which shows a strong frequency dependence in this near-infrared spectral region. While in this paper we focus on examining a PMMA sphere as an example, our method of determining the refractive index and its dispersion from synchrotron infrared extinction spectra is generally applicable to any transparent substance that can be shaped into micron-sized spheres.
Infrared refractive index dispersion of PMMA spheres from synchrotron extinction spectra
Previous studies modelled the origin of life and the emergence of photosynthesis on the early Earth, i.e. the origin of plants, in terms of biological heat engines that worked on thermal cycling caused by suspension in convecting water. In this new series of studies, heat engines using a more complex mechanism for thermal cycling are invoked to explain the origin of animals as well. Biological exploitation of the thermal gradient above a submarine hydrothermal vent is hypothesized, where a relaxation oscillation in the length of a protein 'thermotether' would have yielded the thermal cycling required for thermosynthesis. Such movement driven by a thermal transition is not impeded by the low Reynolds number of a small scale. In the model, the thermotether together with the protein export apparatus evolved into a 'flagellar proton pump' that turned into today's bacterial flagellar motor after the acquisition of the proton-pumping respiratory chain. The flagellar pump resembles Feynman's ratchet, and the 'flagellar computer' that implements chemotaxis resembles a Turing machine: the stator would have functioned as Turing's paper tape, and the stator's proton-transferring subunits, with their variable conformation, as the symbols on the tape. The existence of a cellular control centre in the cilium of the eukaryotic cell is proposed that would have evolved from the prokaryotic flagellar computer.
Emergence of Animals from Heat Engines. Part 1. Before the Snowball Earths
We show that the recent discovery of a new boson at the LHC, which we assume to be a Higgs boson, and the observed enhancement in its diphoton decays compared to the SM prediction, can be explained by a new doublet of charged vector bosons from an extended electroweak gauge sector model with $SU(3)_C \otimes SU(3)_L \otimes U(1)_X$ symmetry. Our results show a good agreement between our theoretical expected sensitivity to a 125--126 GeV Higgs boson and the experimental significance observed in the diphoton channel at the 8 TeV LHC. Effects of an invisible decay channel for the Higgs boson are also taken into account, in order to anticipate a possible confirmation of deficits in the branching ratios into $ZZ^*$, $WW^*$, bottom quarks, and tau leptons.
Explaining the Higgs Decays at the LHC with an Extended Electroweak Model
In modern databases, transaction processing technology provides the ACID (Atomicity, Consistency, Isolation, Durability) features. Consistency refers to the correctness of databases and is a crucial property for many applications, such as financial and banking services. However, consistency faces typical challenges. Theoretically, the two current definitions of consistency express quite different meanings, which are causal and sometimes controversial. Practically, checking the consistency of databases is notoriously hard, especially in terms of verification cost. This paper proposes Coo, a framework to check the consistency of databases. Specifically, Coo makes the following advances. First, Coo proposes the partial order pair (POP) graph, which more expressively captures transaction conflicts in a schedule by considering stateful information such as Commit and Abort. Requiring the POP graph to have no cycle, Coo defines consistency completely. Second, Coo can construct inconsistent test cases based on POP cycles. These test cases can be used to check the consistency of databases in accurate (all types of anomalies), user-friendly (SQL-based tests), and cost-effective (one-time checking in a few minutes) ways. We evaluate Coo with eleven databases, both centralized and distributed, under all supported isolation levels. The evaluation shows that these databases do not completely follow the ANSI SQL standard (e.g., Oracle claimed to be serializable but appeared in some inconsistent cases), and that they have different implementation methods and behaviors for concurrency control (e.g., PostgreSQL, MySQL, and SQL Server performed quite differently at the Repeatable Read level). Coo helps to comprehend the gap between coarse isolation levels, finding more detailed and complete inconsistent behaviors.
Coo: Consistency Check for Transactional Databases
The Earth contains between one and ten oceans of water, including water within the mantle, where one ocean is the mass of water on the Earth's surface today. With $n$-body simulations we consider how much water could have been delivered from the asteroid belt to the Earth after its formation. Asteroids are delivered from unstable regions near resonances with the giant planets. We compare the relative impact efficiencies from the $\nu_6$ resonance, the 2:1 mean motion resonance with Jupiter and the outer asteroid belt. The $\nu_6$ resonance provides the largest supply of asteroids to the Earth, with about $2\%$ of asteroids from that region colliding with the Earth. Asteroids located in mean motion resonances with Jupiter and in the outer asteroid belt have negligible Earth-collision probabilities. The maximum number of Earth collisions occurs if the asteroids in the primordial asteroid belt are first moved into the $\nu_6$ resonance location (through asteroid-asteroid interactions or otherwise) before their eccentricity is excited sufficiently for Earth collision. A maximum of about eight oceans of water may be delivered to the Earth. Thus, if the Earth contains ten or more oceans of water, the Earth likely formed with a significant fraction of this water.
How much water was delivered from the asteroid belt to the Earth after its formation?
We report on the design and performance of a velocity map imaging (VMI) spectrometer optimized for experiments using high-intensity extreme ultraviolet (XUV) sources such as laser-driven high-order harmonic generation (HHG) sources and free-electron lasers (FELs). Typically exhibiting low repetition rates and high single-shot count rates, such experiments do not easily lend themselves to coincident detection of photo-electrons and -ions. In order to obtain molecular-frame or reaction-channel-specific information, one has to rely on other correlation techniques, such as covariant detection schemes. Our device allows for combining different photo-electron and -ion detection modes for covariance analysis. We discuss the expected performance of the different detection modes and present first results using an intense HHG source.
A Versatile Velocity Map Ion-Electron Covariance Imaging Spectrometer for High-Intensity XUV Experiments
In this note, we establish a compact law of the iterated logarithm under the upper capacity for independent and identically distributed random variables in a sub-linear expectation space. To prove this result, a self-normalized law of the iterated logarithm is established.
A note on the cluster set of the law of the iterated logarithm under sub-linear expectations
Nahm sums are specific $q$-hypergeometric series associated with symmetric positive definite matrices. In this paper we study Nahm sums associated with symmetrizable matrices. We show that one direction of Nahm's conjecture, which was proven by Calegari, Garoufalidis, and Zagier for the symmetric case, also holds for the symmetrizable case. This asserts that the modularity of a Nahm sum implies that a certain element in a Bloch group associated with the Nahm sum is a torsion element. During the proof, we investigate the radial asymptotics of Nahm sums. Finally, we provide lists of candidates of modular Nahm sums for symmetrizable matrices based on numerical experiments.
Remarks on Nahm sums for symmetrizable matrices
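For orientation: a Nahm sum attached to a positive definite $N \times N$ matrix $A$, a vector $B \in \mathbb{Q}^N$, and a scalar $C \in \mathbb{Q}$ is the $q$-hypergeometric series

$$ f_{A,B,C}(q) = \sum_{n=(n_1,\ldots,n_N) \in \mathbb{Z}_{\geq 0}^{N}} \frac{q^{\frac{1}{2} n^{T} A n + B^{T} n + C}}{(q;q)_{n_1} \cdots (q;q)_{n_N}}, \qquad (q;q)_k = \prod_{j=1}^{k} (1-q^j), $$

and Nahm's conjecture asks when such a series is modular; the paper above extends this setup to symmetrizable $A$.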
Let $\mathbf{v}_i$ be vectors in $\mathbb{R}^d$ and $\{\varepsilon_i\}$ be independent Rademacher random variables. Then the Littlewood-Offord problem entails finding the best upper bound for $\sup_{\mathbf{x} \in \mathbb{R}^d} \mathbb{P}(\sum \varepsilon_i \mathbf{v}_i = \mathbf{x})$. Generalizing the uniform bounds of Littlewood-Offord, Erd\H{o}s and Kleitman, a recent result of Dzindzalieta and Ju\v{s}kevi\v{c}ius provides a non-uniform bound that is optimal in its dependence on $\|\mathbf{x}\|_2$. In this short note, we provide a simple alternative proof of their result. Furthermore, our proof demonstrates that the bound applies to any norm on $\mathbb{R}^d$, not just the $\ell_2$ norm. This resolves a conjecture of Dzindzalieta and Ju\v{s}kevi\v{c}ius.
A nonuniform Littlewood-Offord inequality for all norms
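For context, the classical uniform bound referenced above (Erd\H{o}s's sharpening of the Littlewood-Offord inequality) states that for nonzero real numbers $v_i$,

$$ \sup_{x \in \mathbb{R}} \mathbb{P}\Big( \sum_{i=1}^{n} \varepsilon_i v_i = x \Big) \leq \binom{n}{\lfloor n/2 \rfloor} 2^{-n} = O(n^{-1/2}); $$

the nonuniform inequality of Dzindzalieta and Ju\v{s}kevi\v{c}ius improves on this when the target point lies far from the origin, with the optimal dependence on its norm.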
We determine the drag and momentum diffusion coefficients of a heavy fermion in a dense plasma. It is seen that in degenerate matter the leading-order drag coefficient mediated by transverse photon exchange is proportional to $(E-\mu)^2$, while for longitudinal exchange it goes as $(E-\mu)^3$. We also calculate the longitudinal diffusion coefficient to obtain the Einstein relation in a relativistic degenerate plasma. Finally, finite-temperature corrections are included for both the drag and the diffusion coefficients.
Energy and momentum relaxation of heavy fermion in dense and warm plasma
The Nonlinear Schroedinger Equation (NLSE) with a random potential is motivated by experiments in optics and in atom optics, and is a paradigm for the competition between randomness and nonlinearity. The analysis of the NLSE with a random (Anderson-like) potential has been carried out at various levels of control: numerical, analytical, and rigorous. Yet this model equation presents us with a highly inconclusive and often contradictory picture. We describe the main recent results obtained in this field and propose a list of specific problems to focus on, which we hope will enable these outstanding questions to be resolved.
The Nonlinear Schroedinger Equation with a random potential: Results and Puzzles
This article deals with variational optimal-control problems on time scales in the presence of delay in the state variables. The problem is considered on a time scale unifying the discrete, the continuous and the quantum cases. Two examples in the discrete and quantum cases are analyzed to illustrate our results.
Variational Optimal-Control Problems with Delayed Arguments on Time Scales
We study 2D non-linear sigma models on a group manifold with a special form of the metric. We address the question of integrability for this special class of sigma models. We derive two algebraic conditions for the metric on the group manifold. Each solution of these conditions defines an integrable model. Although the algebraic system is overdetermined in general, we give two examples of solutions. We find the Lax field for these models and calculate their Poisson brackets. We also obtain the renormalization group (RG) equations, to first order, for the generic model. We solve the RG equations for the examples we have and show that they are integrable along the RG flow.
Integrable Generalized Principal Chiral Models
The variation in histologic staining between different medical centers is one of the most profound challenges in the field of computer-aided diagnosis. The appearance disparity of pathological whole slide images causes algorithms to become less reliable, which in turn impedes the widespread applicability of downstream tasks like cancer diagnosis. Furthermore, different stainings lead to biases in training which, in the case of domain shifts, negatively affect test performance. Therefore, in this paper we propose MultiStain-CycleGAN, a multi-domain approach to stain normalization based on CycleGAN. Our modifications to CycleGAN allow us to normalize images of different origins without retraining or using different models. We perform an extensive evaluation of our method using various metrics and compare it to commonly used methods that are multi-domain capable. First, we evaluate how well our method fools a domain classifier that tries to assign a medical center to an image. Then, we test our normalization on the tumor classification performance of a downstream classifier. Furthermore, we evaluate the image quality of the normalized images using the structural similarity index and the ability to reduce the domain shift using the Fr\'echet inception distance. We show that our method proves to be multi-domain capable, provides the highest image quality among the compared methods, and can most reliably fool the domain classifier while keeping the tumor classifier performance high. By reducing the domain influence, biases in the data can be removed on the one hand, and the origin of the whole slide image can be disguised on the other, thus enhancing patient data privacy.
Multi-domain stain normalization for digital pathology: A cycle-consistent adversarial network for whole slide images
Present attack methods can make state-of-the-art classification systems based on deep neural networks misclassify every adversarially modified test example. The design of general defense strategies against a wide range of such attacks still remains a challenging problem. In this paper, we draw inspiration from the fields of cybersecurity and multi-agent systems and propose to leverage the concept of Moving Target Defense (MTD) in designing a meta-defense for 'boosting' the robustness of an ensemble of deep neural networks (DNNs) for visual classification tasks against such adversarial attacks. To classify an input image, a trained network is picked randomly from this set of networks by formulating the interaction between a Defender (who hosts the classification networks) and their (Legitimate and Malicious) users as a Bayesian Stackelberg Game (BSG). We empirically show that this approach, MTDeep, reduces misclassification on perturbed images for various datasets such as MNIST, FashionMNIST, and ImageNet while maintaining high classification accuracy on legitimate test images. We then demonstrate that our framework, being the first meta-defense technique, can be used in conjunction with any existing defense mechanism to provide more resilience against adversarial attacks than these defense mechanisms afford on their own. Lastly, to quantify the increase in robustness of an ensemble-based classification system when using MTDeep, we analyze the properties of a set of DNNs and introduce the concept of differential immunity, which formalizes the notion of attack transferability.
MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense
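A minimal sketch of the serving loop that the MTDeep abstract above implies: one trained network is drawn per query from the defender's mixed strategy. The class name, the uniform strategy, and the `.predict` interface are illustrative assumptions; computing the actual equilibrium strategy of the Bayesian Stackelberg Game is beyond this sketch.

```python
import numpy as np

class MovingTargetEnsemble:
    """Hypothetical MTD-style wrapper: sample one classifier per input."""

    def __init__(self, networks, strategy, seed=None):
        self.networks = list(networks)          # trained models exposing .predict(x)
        self.strategy = np.asarray(strategy)    # defender's mixed strategy (sums to 1)
        self.rng = np.random.default_rng(seed)

    def classify(self, x):
        # A fresh draw for every query denies the attacker a fixed target network.
        idx = self.rng.choice(len(self.networks), p=self.strategy)
        return self.networks[idx].predict(x)

# Usage (illustrative): three DNNs with a uniform strategy standing in for the
# Stackelberg-equilibrium probabilities that the paper derives.
# ensemble = MovingTargetEnsemble([net_a, net_b, net_c], [1/3, 1/3, 1/3])
# label = ensemble.classify(image)
```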
Large language models like GPT-4 exhibit emergent capabilities across general-purpose tasks, such as basic arithmetic, when trained on extensive text data, even though these tasks are not explicitly encoded by the unsupervised, next-token prediction objective. This study investigates how small transformers, trained from random initialization, can efficiently learn arithmetic operations such as addition, multiplication, and elementary functions like square root, using the next-token prediction objective. We first demonstrate that conventional training data is not the most effective for arithmetic learning, and simple formatting changes can significantly improve accuracy. This leads to sharp phase transitions as a function of training data scale, which, in some cases, can be explained through connections to low-rank matrix completion. Building on prior work, we then train on chain-of-thought style data that includes intermediate step results. Even in the complete absence of pretraining, this approach significantly and simultaneously improves accuracy, sample complexity, and convergence speed. We also study the interplay between arithmetic and text data during training and examine the effects of few-shot prompting, pretraining, and model scale. Additionally, we discuss length generalization challenges. Our work highlights the importance of high-quality, instructive data that considers the particular characteristics of the next-word prediction objective for rapidly eliciting arithmetic capabilities.
Teaching Arithmetic to Small Transformers
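To make the data-formatting point in the abstract above concrete, here is a hedged sketch of three ways to serialize an addition sample for next-token training. The plain format is the conventional baseline; the reversed output and the spelled-out carries are illustrative instances of "simple formatting changes" and "chain-of-thought style data" respectively, not necessarily the exact formats used in the study.

```python
def plain_sample(a: int, b: int) -> str:
    # Conventional format, e.g. "128+367=495".
    return f"{a}+{b}={a + b}"

def reversed_sample(a: int, b: int) -> str:
    # Emit the sum least-significant digit first, matching the order in
    # which column-wise addition actually produces digits.
    return f"{a}+{b}={str(a + b)[::-1]}"

def cot_sample(a: int, b: int) -> str:
    # Chain-of-thought style: spell out column sums with carries.
    da, db = str(a)[::-1], str(b)[::-1]
    steps, carry = [], 0
    for i in range(max(len(da), len(db))):
        x = int(da[i]) if i < len(da) else 0
        y = int(db[i]) if i < len(db) else 0
        s = x + y + carry
        steps.append(f"{x}+{y}+{carry}={s}")
        carry = s // 10
    return f"{a}+{b}: " + ", ".join(steps) + f" => {a + b}"

print(plain_sample(128, 367))     # 128+367=495
print(reversed_sample(128, 367))  # 128+367=594
print(cot_sample(128, 367))       # 128+367: 8+7+0=15, 2+6+1=9, 1+3+0=4 => 495
```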
The glacial cycles are attributed to the climatic response to orbital changes in the irradiance reaching the Earth. These changes in the forcing are too small to explain the observed climate variations as simple linear responses. Non-linear amplifications are necessary to account for the glacial cycles. Here an empirical model of the non-linear response is presented. From the model it is possible to assess the role of stochastic noise in comparison to the deterministic orbital forcing of the ice ages. The model is based on the bifurcation structure derived from the climate history. It indicates the dynamical origin of the Mid-Pleistocene transition (MPT) from the '41 kyr world' to the '100 kyr world'. The dominant forcing in the latter is still the 41 kyr obliquity cycle, but the bifurcation structure of the climate system has changed. The model indicates that transitions between glacial and interglacial climate are assisted by internal stochastic noise in the period prior to the last five glacial cycles, while the last five cycles are deterministic responses to the orbital forcing.
The bifurcation structure and noise induced transitions in the Pleistocene glacial cycles
This paper presents a transformational approach for model checking two important classes of metric temporal logic (MTL) properties, namely, bounded response and minimum separation, for nonhierarchical object-oriented Real-Time Maude specifications. We prove the correctness of our model checking algorithms, which terminate under reasonable non-Zeno-ness assumptions when the reachable state space is finite. These new model checking features have been integrated into Real-Time Maude, and are used to analyze a network of medical devices and a 4-way traffic intersection system.
Model Checking Classes of Metric LTL Properties of Object-Oriented Real-Time Maude Specifications
Sheep are gregarious animals, and they often aggregate into dense, cohesive flocks, especially under stress. In this paper, we use image processing tools to analyze a publicly available aerial video showing a dense sheep flock moving under the stimulus of a shepherding dog. Inspired by the fluidity of the motion, we implement a hydrodynamics approach, extracting velocity fields, and measuring their propagation and correlations in space and time. We find that while the flock overall is stationary, significant dynamics happens at the edges, notably in the form of fluctuations propagating like waves, and large-scale correlations spanning the entire flock. These observations highlight the importance of incorporating interfacial dynamics, for instance in the form of line tension, when using a hydrodynamics framework to model the dynamics of dense, non-polarized swarms.
Hydrodynamics of a dense flock of sheep: edge motion and long-range correlations
A simple ansatz that is well-motivated by group-theoretical considerations is proposed in the context of the type III neutrino see-saw mechanism. It results in predictions for m_s/m_b and m_b/m_tau that relate these quantities to the masses and mixings of neutrinos.
A Prediction from the Type III See-saw Mechanism
We examine the lepton dipole moments in an extension of the Standard Model (SM), which contains vector-like leptons that couple only to the second-generation SM leptons. The model naturally leads to sizable contributions to the muon $g-2$ and the muon electric dipole moment (EDM). One feature of this model is that a sizable electron EDM is also induced at the two-loop level due to the existence of new vector-like leptons in the loops. We find parameter regions that can explain the muon $g-2$ anomaly and are also consistent with the experimental constraints coming from the electron EDM and the Higgs decay $h\rightarrow \mu^{+}\mu^{-}$. The generated EDMs can be as large as $O(10^{-22})~e \cdot \mathrm{cm}$ for the muon and $O(10^{-30})~e \cdot \mathrm{cm}$ for the electron, respectively, which can be probed in future experiments for the EDM measurements.
Probing New Physics in the Vector-like Lepton Model by Lepton Electric Dipole Moments
The question of how the outer solar atmosphere is heated from solar photospheric temperatures of about 5800 K up to solar chromospheric and coronal temperatures of about 20,000 K and millions of degrees, respectively, has remained without a satisfying answer for centuries. On 4 May 2005, I recorded several time series of Halpha line scans with the GREGOR Fabry-Perot Interferometer, still deployed at the German Vacuum Tower Telescope (VTT), for different solar limb and on-disk positions as well as for quiet sun at solar disk center. The spatially and temporally highly resolved time series of Halpha line parameters reveal the entire and detailed complexity as well as the overwhelming dynamics of spicules covering the entire solar disk, thus apparently confirming spicules as the potential driver of chromospheric heating for both the Sun and sun-like stars, with an expected mass flux larger than 100 times that of the solar wind. Spicules seem to be the result of the interaction of the highly dynamic photospheric quiet-sun or active-region small-scale magnetic field, which is dominated by convective processes and is predominantly located in intergranular lanes and at meso- or supergranular scales.
Spicules and their on-disk counterparts, the main driver for solar chromospheric heating?
We compute the one loop graviton contribution to the self-energy of a very light fermion on a locally de Sitter background. This result can be used to study the effect that a small mass has on the propagation of fermions through the sea of infrared gravitons generated by inflation. We employ dimensional regularization and obtain a fully renormalized result by absorbing all divergences with BPHZ counterterms. An interesting technical aspect of this computation is the need for two noninvariant counterterms owing to the breaking of de Sitter invariance by our gauge condition.
Quantum Gravitational Effects on Massive Fermions during Inflation I
Large-scale asymmetries (i.e. lopsidedness) are a common feature in the stellar density distribution of nearby disk galaxies both in low- and high-density environments. In this work, we characterize the present-day lopsidedness in a sample of 1435 disk-like galaxies selected from the TNG50 simulation. We find that the percentage of lopsided galaxies (10%-30%) is in good agreement with observations if we use similar radial ranges to the observations. However, the percentage (58%) significantly increases if we extend our measurement to larger radii. We find a mild or lack of correlation between lopsidedness amplitude and environment at z=0 and a strong correlation between lopsidedness and galaxy morphology regardless of the environment. Present-day galaxies with more extended disks, flatter inner galactic regions and lower central stellar mass density (i.e. late-type disk galaxies) are typically more lopsided than galaxies with smaller disks, rounder inner galactic regions and higher central stellar mass density (i.e. early-type disk galaxies). Interestingly, we find that lopsided galaxies have, on average, a very distinct star formation history within the last 10 Gyr, with respect to their symmetric counterparts. Symmetric galaxies have typically assembled at early times (~8-6 Gyr ago) with relatively short and intense bursts of central star formation, while lopsided galaxies have assembled on longer timescales and with milder initial bursts of star formation, continuing building up their mass until z=0. Overall, these results indicate that lopsidedness in present-day disk galaxies is connected to the specific evolutionary histories of the galaxies that shaped their distinct internal properties.
Lopsidedness as a tracer of early galactic assembly history
Time-domain astronomy is progressing rapidly with the ongoing and upcoming large-scale photometric sky surveys led by the Vera C. Rubin Observatory project (LSST). Billions of variable sources call for better automatic classification algorithms for light curves. Among them, periodic variable stars are frequently studied. Different categories of periodic variable stars have a high degree of class imbalance and pose a challenge to algorithms, including deep learning methods. We design two kinds of neural network architectures for the classification of periodic variable stars in the Catalina Survey's Data Release 2: a multi-input recurrent neural network (RNN) and a compound network combining the RNN and a convolutional neural network (CNN). To deal with class imbalance, we apply a Gaussian Process to generate synthetic light curves with artificial uncertainties for data augmentation. For better performance, we organize the augmentation and training process in a "bagging-like" ensemble learning scheme. The experimental results show that the better approach is the compound network combining the RNN and CNN, which reaches the best result of 86.2% on the overall balanced accuracy and 0.75 on the macro F1 score. We develop the ensemble augmentation method to address data imbalance when classifying variable stars and demonstrate the effectiveness of combining different representations of light curves in a single model. The proposed methods would help build better classification algorithms for periodic time series data in future sky surveys (e.g., LSST).
Periodic Variable Star Classification with Deep Learning: Handling Data Imbalance in an Ensemble Augmentation Way
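A minimal sketch of the Gaussian-process augmentation step described above, using scikit-learn; the kernel choice, length scale, and uncertainty handling are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def augment_light_curve(phase, mag, mag_err, n_synthetic=5, seed=0):
    """Fit a GP to one phase-folded light curve (1D numpy arrays) and
    sample synthetic copies with artificial uncertainties."""
    kernel = RBF(length_scale=0.1) + WhiteKernel(noise_level=np.mean(mag_err) ** 2)
    gp = GaussianProcessRegressor(kernel=kernel, alpha=mag_err ** 2, normalize_y=True)
    gp.fit(phase[:, None], mag)
    # Each posterior draw is one synthetic light curve.
    draws = gp.sample_y(phase[:, None], n_samples=n_synthetic, random_state=seed)
    errs = np.tile(mag_err, (n_synthetic, 1))   # attach comparable uncertainties
    return draws.T, errs
```

The synthetic curves can then be pooled with the originals, class by class, to rebalance the training set before feeding the ensemble.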
We use the Hubble Space Telescope (HST) to reach the end of the white dwarf (WD) cooling sequence (CS) in the solar-metallicity open cluster NGC 6819. Our photometry and completeness tests show a sharp drop in the number of WDs along the CS at magnitudes fainter than m_F606W = 26.050 +/- 0.075. This implies an age of 2.25 +/- 0.20 Gyr, consistent with the age of 2.25 +/- 0.30 Gyr obtained from fits to the main-sequence turn-off. The use of different WD cooling models and initial-final-mass relations has a minor impact on the WD age estimate, at the level of ~0.1 Gyr. As an important by-product of this investigation we also release, in electronic format, both the catalogue of all the detected sources and the atlases of the region (in two filters). Indeed, this patch of sky studied by HST (of size ~70 arcmin sq.) is entirely within the main Kepler-mission field, so the high-resolution images and deep catalogues will be particularly useful.
Hubble Space Telescope observations of the Kepler-field cluster NGC 6819. I. The bottom of the white dwarf cooling sequence
Deep neural networks often fail catastrophically by relying on spurious correlations. Most prior work assumes a clear dichotomy into spurious and reliable features; however, this is often unrealistic. For example, most of the time we do not want an autonomous car to simply copy the speed of surrounding cars -- we don't want our car to run a red light if a neighboring car does so. However, we cannot simply enforce invariance to next-lane speed, since it could provide valuable information about an unobservable pedestrian at a crosswalk. Thus, universally ignoring features that are sometimes (but not always) reliable can lead to non-robust performance. We formalize a new setting called contextual reliability which accounts for the fact that the "right" features to use may vary depending on the context. We propose and analyze a two-stage framework called Explicit Non-spurious feature Prediction (ENP) which first identifies the relevant features to use for a given context, then trains a model to rely exclusively on these features. Our work theoretically and empirically demonstrates the advantages of ENP over existing methods and provides new benchmarks for contextual reliability.
Contextual Reliability: When Different Features Matter in Different Contexts
We have measured the time dependence of scintillation light from electronic and nuclear recoils in liquid neon, finding a slow time constant of 15.4 +/- 0.2 us. Pulse shape discrimination is investigated as a means of identifying event type in liquid neon. Finally, the nuclear recoil scintillation efficiency is measured to be 0.26 +/- 0.03 for 387 keV nuclear recoils.
Scintillation of liquid neon from electronic and nuclear recoils
Extremely well! In the $\Lambda$CDM model, the spacetime metric, $g_{ab}$, of our universe is approximated by an FLRW metric, $g_{ab}^{(0)}$, to about 1 part in $10^4$ or better on both large and small scales, except in the immediate vicinity of very strong field objects, such as black holes. However, derivatives of $g_{ab}$ are not close to derivatives of $g_{ab}^{(0)}$, so there can be significant differences in the behavior of geodesics and huge differences in curvature. Consequently, observable quantities in the actual universe may differ significantly from the corresponding observables in the FLRW model. Nevertheless, as we shall review here, we have proven general results showing that---within the framework of our approach to treating backreaction---the large matter inhomogeneities that occur on small scales cannot produce significant effects on large scales, so $g_{ab}^{(0)}$ satisfies Einstein's equation with the averaged stress-energy tensor of matter as its source. We discuss the flaws in some other approaches that have suggested that large backreaction effects may occur. As we also will review here, with a suitable "dictionary," Newtonian cosmologies provide excellent approximations to cosmological solutions to Einstein's equation (with dust and a cosmological constant) on all scales. Our results thereby provide strong justification for the mathematical consistency and validity of the $\Lambda$CDM model within the context of general relativistic cosmology.
How well is our universe described by an FLRW model?
Wave transport in a medium with a slow spatial gradient of its characteristics is found to exhibit a universal wave pattern ("gradient marker") in the vicinity of the maxima/minima of the gradient. The pattern is common to optics, quantum mechanics, and any other propagation governed by the same wave equation. Derived analytically in the adiabatic limit, it has an elegantly simple yet nontrivial single-cycle profile, which is found to be in perfect agreement with numerical simulations for specific examples. We also find resonant states in the continuum in the case of quantum wells, and formulate a criterion for their existence.
"Gradient marker" - a universal wave pattern in inhomogeneous continuum
We derive an adjoint method for the Direct Simulation Monte Carlo (DSMC) method for the spatially homogeneous Boltzmann equation with a general collision law. This generalizes our previous results in [Caflisch, R., Silantyev, D. and Yang, Y., 2021. Journal of Computational Physics, 439, p.110404], which was restricted to the case of Maxwell molecules, for which the collision rate is constant. The main difficulty in generalizing the previous results is that a rejection sampling step is required in the DSMC algorithm in order to handle the variable collision rate. We find a new term corresponding to the so-called score function in the adjoint equation and a new adjoint Jacobian matrix capturing the dependence of the collision parameter on the velocities. The new formula works for a much more general class of collision models.
Adjoint DSMC for Nonlinear Spatially-Homogeneous Boltzmann Equation With a General Collision Model
We propose a new type of hidden layer for a multilayer perceptron, and demonstrate that it obtains the best reported performance for an MLP on the MNIST dataset.
Piecewise Linear Multilayer Perceptrons and Dropout
Launched on June 11, 2008, the LAT instrument onboard the $Fermi$ Gamma-ray Space Telescope has provided a rare opportunity to study high energy photon emission from gamma-ray bursts. Although the majority of such events (27) have been identified by the Fermi LAT Collaboration, four were uncovered by using more sensitive statistical techniques (Akerlof et al 2010, Akerlof et al 2011, Zheng et al 2012). In this paper, we continue our earlier work by finding three more GRBs associated with high energy photon emission, GRB 110709A, 111117A and 120107A. To systematize our matched filter approach, a pipeline has been developed to identify these objects in near real time. GRB 120107A is the first product of this analysis procedure. Despite the reduced threshold for identification, the number of GRB events has not increased significantly. This relative dearth of events with low photon number prompted a study of the apparent photon number distribution. We find an extremely good fit to a simple power-law with an exponent of -1.8 $\pm$ 0.3 for the differential distribution. As might be expected, there is a substantial correlation between the number of lower energy photons detected by the GBM and the number observed by the LAT. Thus, high energy photon emission is associated with some but not all of the brighter GBM events. Deeper studies of the properties of the small population of high energy emitting bursts may eventually yield a better understanding of these phenomena.
GRB 110709A, 111117A and 120107A: Faint high-energy gamma-ray photon emission from Fermi/LAT observations and demographic implications
Finding mechanisms to promote the use of face masks is fundamental during the second phase of the COVID-19 pandemic response, when shelter-in-place rules are relaxed and some segments of the population are allowed to circulate more freely. Here we report three pre-registered studies (total N = 1,920), using a heterogeneous sample of people living in the USA, showing that priming people to "rely on their reasoning" rather than to "rely on their emotions" significantly increases their intentions to wear a face covering. Compared to the baseline, priming reasoning promotes intentions to wear a face covering, whereas priming emotion has no significant effect. These findings have theoretical and practical implications. Practically, they offer a simple and scalable intervention to promote intentions to wear a face mask. Theoretically, they shed light on the cognitive basis of intentions to wear a face covering.
Priming reasoning increases intentions to wear a face covering to slow down COVID-19 transmission
The rigid body attitude estimation problem is treated using the discrete-time Lagrange-d'Alembert principle. Three different possibilities are considered for the multi-rate relation between angular velocity measurements and direction vector measurements for attitude: 1) integer relation between sampling rates, 2) time-varying sampling rates, 3) non-integer relation between sampling rates. In all cases, it is assumed that angular velocity measurements are sampled at a higher rate compared to the inertial vectors. The attitude determination problem from two or more vector measurements in the body-fixed frame is formulated as Wahba's problem. At instants when direction vector measurements are absent, a discrete-time model for attitude kinematics is used to propagate past measurements. A discrete-time Lagrangian is constructed as the difference between a kinetic energy-like term that is quadratic in the angular velocity estimation error and an artificial potential energy-like term obtained from Wahba's cost function. An additional dissipation term is introduced and the discrete-time Lagrange-d'Alembert principle is applied to the Lagrangian with this dissipation to obtain an optimal filtering scheme. A discrete-time Lyapunov analysis is carried out to show that the optimal filtering scheme is asymptotically stable in the absence of measurement noise and the domain of convergence is almost global. For a realistic evaluation of the scheme, numerical experiments are conducted with inputs corrupted by bounded measurement noise. These numerical simulations exhibit convergence of the estimated states to a bounded neighborhood of the actual states.
Asymptotically Stable Optimal Multi-rate Rigid Body Attitude Estimation based on Lagrange-d'Alembert Principle
Depth of Field (DoF) in games is usually achieved as a post-process effect by blurring pixels in the sharp rasterized image based on the defined focus plane. This paper describes a novel real-time DoF technique that uses ray tracing with image filtering to achieve more accurate partial occlusion semi-transparencies on edges of blurry foreground geometry. This hybrid rendering technique leverages ray tracing hardware acceleration as well as spatio-temporal reconstruction techniques to achieve interactive frame rates.
Hybrid DoF: Ray-Traced and Post-Processed Hybrid Depth of Field Effect for Real-Time Rendering
We develop a new interior-point method (IPM) for symmetric-cone optimization, a common generalization of linear, second-order-cone, and semidefinite programming. In contrast to classical IPMs, we update iterates with a geodesic of the cone instead of the kernel of the linear constraints. This approach yields a primal-dual-symmetric, scale-invariant, and line-search-free algorithm that uses just half the variables of a standard primal-dual IPM. With elementary arguments, we establish polynomial-time convergence matching the standard square-root-n bound. Finally, we prove global convergence of a long-step variant and provide an implementation that supports all symmetric cones. For linear programming, our algorithms reduce to central-path tracking in the log domain.
A geodesic interior-point method for linear optimization over symmetric cones
This article deals with stabilizing discrete-time switched linear systems. Our contributions are threefold: Firstly, given a family of linear systems possibly containing unstable dynamics, we propose a large class of switching signals that stabilize a switched system generated by the switching signal and the given family of systems. Secondly, given a switched system, a sufficient condition for the existence of the proposed switching signal is derived by expressing the switching signal as an infinite walk on a directed graph representing the switched system. Thirdly, given a family of linear systems, we propose an algorithmic technique to design a switching signal for stabilizing the corresponding switched system.
Stabilizing discrete-time switched linear systems
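As a toy illustration of the graph viewpoint in the abstract above (not the paper's algorithm): a periodic switching signal is a closed walk on the directed graph of admissible mode transitions, and its stability reduces to the spectral radius of the matrix product over one period. The two-mode family below is a made-up example with one unstable mode.

```python
import numpy as np

# Hypothetical two-mode family: mode 1 is unstable, mode 2 is stable.
A = {1: np.array([[1.2, 0.0], [0.0, 0.5]]),
     2: np.array([[0.4, 0.1], [0.0, 0.3]])}

word = [1, 2, 2]             # one period of the switching signal (a closed walk)
P = np.eye(2)
for mode in word:            # x_{k+1} = A_{sigma(k)} x_k, composed over one period
    P = A[mode] @ P

rho = max(abs(np.linalg.eigvals(P)))
print(rho, rho < 1.0)        # spectral radius < 1 -> this periodic signal stabilizes
```

The point of the example: even though mode 1 alone is unstable, dwelling long enough in mode 2 along the walk makes the composed map a contraction.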
Measuring the Higgs couplings accurately at colliders is one of the best routes for finding physics Beyond the Standard Model (BSM). If the measured couplings deviate from the SM predictions, then this would give rise to energy-growing processes that violate tree-level unitarity at some energy scale, indicating new physics. In this paper, we extend previous work on unitarity bounds from the Higgs potential and the Higgs couplings to vector bosons and the top quark; to the Higgs couplings to $\gamma\gamma$ and $\gamma Z$. We find that while the HL-LHC might be able to find new physics in the $\gamma Z$ sector, the scale of new physics in both sectors is mostly beyond its reach. However, accurate measurements of the leading couplings of the two sectors in the HL-LHC can place stringent limits on both the scale of new physics and on other Higgs couplings that are difficult to measure. In addition, the scale of new physics is mostly within the reach of the $100$ TeV collider.
The Scale of New Physics from the Higgs Couplings to $\gamma\gamma$ and $\gamma Z$
The computational complexity of winner determination is a classical and important problem in computational social choice. Previous work based on worst-case analysis has established NP-hardness of winner determination for some classic voting rules, such as Kemeny, Dodgson, and Young. In this paper, we revisit the classical problem of winner determination through the lens of semi-random analysis, which is a worst average-case analysis where the preferences are generated from a distribution chosen by the adversary. Under a natural class of semi-random models that are inspired by recommender systems, we prove that winner determination remains hard for Dodgson, Young, and some multi-winner rules such as the Chamberlin-Courant rule and the Monroe rule. Under another natural class of semi-random models that are extensions of the Impartial Culture, we show that winner determination is hard for Kemeny, but is easy for Dodgson. This illustrates an interesting separation between Kemeny and Dodgson.
Beyond the Worst Case: Semi-Random Complexity Analysis of Winner Determination
In all Friedman models, the cosmological redshift is widely interpreted as a consequence of the general-relativistic phenomenon of EXPANSION OF SPACE. Other commonly believed consequences of this phenomenon are superluminal recession velocities of distant galaxies and a distance to the particle horizon greater than $ct$ (where $t$ is the age of the Universe), in apparent conflict with special relativity. Here, we study a particular Friedman model: the empty universe. This model exhibits cosmological redshift, superluminal velocities and an infinite distance to the horizon. However, we show that the cosmological redshift there is simply a relativistic Doppler shift. Moreover, the apparently superluminal velocities and `acausal' distance to the horizon are in fact a direct consequence of the special-relativistic phenomenon of time dilation, as well as of the adopted definition of distance in cosmology. There is no conflict with special relativity whatsoever. In particular, INERTIAL recession velocities are subluminal. Since in the real Universe sufficiently distant galaxies recede with relativistic velocities, these special-relativistic effects must be at least partly responsible for the cosmological redshift and the aforementioned `superluminalities', commonly attributed to the expansion of space. Let us finish with a question resembling a Zen-Buddhist `koan': in an empty universe, what is expanding?
Is space really expanding? A counterexample
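For reference, the special-relativistic (longitudinal) Doppler formula invoked above relates the observed redshift $z$ to the recession velocity $v$ of the source by

$$ 1 + z = \sqrt{\frac{1 + v/c}{1 - v/c}}, $$

so $z \to \infty$ as $v \to c$, while the inertial recession velocity itself always remains subluminal.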
In this paper we make two observations related to discrete torsion. First, we observe that an old obscure degree of freedom (momentum/translation shifts) in (symmetric) string orbifolds is related to discrete torsion. We point out how our previous derivation of discrete torsion from orbifold group actions on B fields includes these momentum lattice shift phases, and discuss how they are realized in terms of orbifold group actions on D-branes. Second, we describe the M theory dual of IIA discrete torsion, a duality relation to our knowledge not previously understood. We show that IIA discrete torsion is encoded in analogues of the shift orbifolds above for the M theory C field.
Discrete Torsion and Shift Orbifolds
The population of Milky Way (MW) satellites contains the faintest known galaxies and thus provides essential insight into galaxy formation and dark matter microphysics. Here we combine a model of the galaxy--halo connection with newly derived observational selection functions based on searches for satellites in photometric surveys over nearly the entire high Galactic latitude sky. In particular, we use cosmological zoom-in simulations of MW-like halos that include realistic Large Magellanic Cloud (LMC) analogs to fit the position-dependent MW satellite luminosity function. We report decisive evidence for the statistical impact of the LMC on the MW satellite population due to an estimated $6\pm 2$ observed LMC-associated satellites, consistent with the number of LMC satellites inferred from Gaia proper-motion measurements, confirming the predictions of cold dark matter models for the existence of satellites within satellite halos. Moreover, we infer that the LMC fell into the MW within the last $2\ \rm{Gyr}$ at high confidence. Based on our detailed full-sky modeling, we find that the faintest observed satellites inhabit halos with peak virial masses below $3.2\times 10^{8}\ M_{\rm{\odot}}$ at $95\%$ confidence, and we place the first robust constraints on the fraction of halos that host galaxies in this regime. We predict that the faintest detectable satellites occupy halos with peak virial masses above $10^{6}\ M_{\rm{\odot}}$, highlighting the potential for powerful galaxy formation and dark matter constraints from future dwarf galaxy searches.
Milky Way Satellite Census. II. Galaxy--Halo Connection Constraints Including the Impact of the Large Magellanic Cloud
Lomonaco and Kauffman developed a knot mosaic system to introduce a precise and workable definition of a quantum knot system. This definition is intended to represent an actual physical quantum system. A knot (m,n)-mosaic is an $m \times n$ matrix of mosaic tiles ($T_0$ through $T_{10}$ depicted in the introduction) representing a knot or a link by adjoining properly that is called suitably connected. $D^{(m,n)}$ is the total number of all knot (m,n)-mosaics. This value indicates the dimension of the Hilbert space of these quantum knot system. $D^{(m,n)}$ is already found for $m,n \leq 6$ by the authors. In this paper, we construct an algorithm producing the precise value of $D^{(m,n)}$ for $m,n \geq 2$ that uses recurrence relations of state matrices that turn out to be remarkably efficient to count knot mosaics. $$ D^{(m,n)} = 2 \, \| (X_{m-2}+O_{m-2})^{n-2} \| $$ where $2^{m-2} \times 2^{m-2}$ matrices $X_{m-2}$ and $O_{m-2}$ are defined by $$ X_{k+1} = \begin{bmatrix} X_k & O_k \\ O_k & X_k \end{bmatrix} \ \mbox{and } \ O_{k+1} = \begin{bmatrix} O_k & X_k \\ X_k & 4 \, O_k \end{bmatrix} $$ for $k=0,1, \cdots, m-3$, with $1 \times 1$ matrices $X_0 = \begin{bmatrix} 1 \end{bmatrix}$ and $O_0 = \begin{bmatrix} 1 \end{bmatrix}$. Here $\|N\|$ denotes the sum of all entries of a matrix $N$. For $n=2$, $(X_{m-2}+O_{m-2})^0$ means the identity matrix of size $2^{m-2} \times 2^{m-2}$.
Quantum knots and the number of knot mosaics
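The recurrence stated in the abstract above is concrete enough to run directly. Below is a short Python sketch using exact integers (the counts grow quickly); the check values for the smallest mosaics, $D^{(2,2)} = 2$, $D^{(3,3)} = 22$ and $D^{(4,4)} = 2594$, match the counts reported for small mosaics in this line of work.

```python
import numpy as np

def knot_mosaic_count(m, n):
    """D^{(m,n)} for m, n >= 2 via the state-matrix recurrence above."""
    X = np.array([[1]], dtype=object)       # X_0; object dtype keeps exact big ints
    O = np.array([[1]], dtype=object)       # O_0
    for _ in range(m - 2):                  # k = 0, 1, ..., m-3
        X, O = (np.block([[X, O], [O, X]]),
                np.block([[O, X], [X, 4 * O]]))
    S = X + O
    total = np.eye(2 ** (m - 2), dtype=object)   # (X+O)^0 = identity when n = 2
    for _ in range(n - 2):
        total = total.dot(S)                # np.dot supports object arrays
    return 2 * int(total.sum())             # ||.|| = sum of all entries

print(knot_mosaic_count(2, 2))   # 2
print(knot_mosaic_count(3, 3))   # 22
print(knot_mosaic_count(4, 4))   # 2594
```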
In a prior work, the galaxies of the nonstandard enlargements of conventionally infinite graphs and also of transfinite graphs of the first rank of transfiniteness were defined, examined, and illustrated by some examples. In this work it is shown how the results of the prior work extend to graphs of higher ranks.
The Galaxies of Nonstandard Enlargements of Transfinite Graphs of Higher Ranks
The sensitivity of the deuteron and of $pp$ scattering to the $\pi NN$ and $\rho NN$ coupling constants is investigated systematically. We find that the deuteron can be described about equally well with either {\it large $\pi$ and $\rho$} or {\it small $\pi$ and $\rho$} coupling constants. However, $pp$ scattering clearly {\it requires} the strong $\rho$, but favors the weak $\pi$ (particularly, in $^3P_0$ at low energies). This apparent contradiction between bound-state and scattering can be resolved by either assuming charge-dependent $\pi NN$ coupling constants or by adding a heavy pion to the NN model. In both cases, the neutral-pion coupling constant is small ($g^2_{\pi^0}/4\pi= 13.5$).
Constraints on the $\pi NN$ Coupling Constant from the $NN$ System
Given a finite non-cyclic group $G$, call $\sigma(G)$ the smallest number of proper subgroups of $G$ needed to cover $G$. Lucchini and Detomi conjectured that if a nonabelian group $G$ is such that $\sigma(G) < \sigma(G/N)$ for every non-trivial normal subgroup $N$ of $G$ then $G$ is \textit{monolithic}, meaning that it admits a unique minimal normal subgroup. In this paper we show how this conjecture can be attacked by the direct study of monolithic groups.
Covering monolithic groups with proper subgroups
The ALICE experiment at the LHC is dedicated to studying matter formed in heavy-ion collisions, but also has a strong physics program for $pp$ collisions. In these collisions, protons will collide at energies never reached before under laboratory conditions. At these high energies, ALICE will enable us to study jet physics in detail, especially the production of multiple-jet events, setting the baseline for heavy-ion collisions. Three-jet events allow us to examine the properties of quark and gluon jets, providing a suitable tool for testing QCD experimentally. We discuss the selection method and topology of three-jet events in ALICE. The analysis was performed on two PYTHIA data sets, both involving $pp$ collisions at $\sqrt{s} = 14$ TeV with enhanced jet production. The results from the dedicated jet MC production are discussed and compared to previous studies at CDF and D\O. We investigate possibilities for identifying gluon jet candidates.
Topological study of three-jet events in ALICE
Let $\mathcal A$ be an infinite-dimensional C*-algebra. We show that the Szlenk index of $\mathcal A$ is $\Gamma'(i(\mathcal A))$, where $i(\mathcal A)$ is the noncommutative Cantor-Bendixson index, $\Gamma'(\xi)$ is the minimum ordinal number greater than $\xi$ of the form $\omega^\zeta$ for some $\zeta$, and we agree that $\Gamma'(\infty)=\infty$. As an application, we compute the Szlenk index of a C*-tensor product $\mathcal A\otimes_\beta\mathcal B$ of non-zero C*-algebras $\mathcal A$ and $\mathcal B$ in terms of $Sz(\mathcal A)$ and $Sz(\mathcal B)$. When $\mathcal A$ is a separable C*-algebra, we show that there exists some $a\in \mathcal A_h$ such that $Sz(\mathcal A)=Sz(C^\ast(a))$, where $C^\ast(a)$ is the C*-subalgebra of $\mathcal A$ generated by $a$.
The Szlenk index of C*-algebras