We introduce a nonlinear modification of the classical Hawkes process, which allows inhibitory couplings between units without restrictions. The resulting system of interacting point processes provides a useful mathematical model for recurrent networks of spiking neurons with exponential transfer functions. The expected rates of all neurons in the network are approximated by a first-order differential system. We study the stability of the solutions of this system, and use the new formalism to implement a winner-takes-all network that operates robustly for a wide range of parameters. Finally, we discuss relations with the generalised linear model that is widely used for the analysis of spike trains.
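To make the rate approximation concrete, here is a minimal sketch of the first-order differential system for the expected rates with an exponential transfer function; the weight matrix, baselines, and time constant below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# First-order approximation of expected rates for a nonlinear Hawkes
# network with exponential transfer: tau * dr/dt = -r + exp(b + W r).
# W, b, and tau are illustrative choices, not values from the paper.
def simulate_rates(W, b, tau=1.0, r0=None, dt=0.01, T=10.0):
    n = len(b)
    r = np.zeros(n) if r0 is None else np.asarray(r0, dtype=float)
    for _ in range(int(T / dt)):
        r = r + dt / tau * (-r + np.exp(b + W @ r))  # forward Euler step
    return r

# Two units with mutual inhibition (negative off-diagonal couplings),
# the regime the nonlinear model is designed to allow.
W = np.array([[0.0, -2.0], [-2.0, 0.0]])
b = np.log(np.array([5.0, 4.0]))  # baselines slightly favour unit 0
print(simulate_rates(W, b))       # unit 0 suppresses unit 1: winner-takes-all
```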
Space weather is a multidisciplinary research area connecting scientists from across heliophysics domains seeking a coherent understanding of our space environment that can also serve modern life and society's needs. COSPAR's ISWAT (International Space Weather Action Teams) 'clusters' focus attention on different areas of space weather study while ensuring the coupled system is broadly addressed via regular communications and interactions. The ISWAT cluster "H3: Radiation Environment in the Heliosphere" (https://www.iswat-cospar.org/h3) has been working to provide a scientific platform to understand, characterize and predict the energetic particle radiation in the heliosphere, with the practical goal of mitigating radiation risks associated with aerospace activities, the satellite industry and human space exploration. In particular, present approaches help us understand the physical phenomena at large, optimizing the output of multi-viewpoint observations and pushing current models to their limits. In this paper, we review the scientific aspects of the radiation environment in the heliosphere, covering four different radiation types: Solar Energetic Particles (SEPs), Ground Level Enhancements (GLEs, a type of SEP event with energies high enough to trigger the enhancement of ground-level detectors), Galactic Cosmic Rays (GCRs) and Anomalous Cosmic Rays (ACRs). We focus on related advances in the research community in the past 10-20 years and on what we still lack in terms of understanding and predictive capabilities. Finally, we also consider some recommendations related to the improvement of both observational and modeling capabilities in the field of the space radiation environment.
A mean field description of the Dicke model is presented, employing the Holstein-Primakoff realization of the angular momentum algebra. It is shown that, in the thermodynamic limit, when the number of atoms interacting with the photons goes to infinity, the energy surface takes a simple form, allowing for a direct description of many observables.
We study the reduced dynamics of a spin (qubit) coupled to a spin-boson (SB) environment in the case of pure dephasing. We derive formal exact expressions which can be cast in terms of exact integro-differential master equations. We present results for an SB environment with ohmic dissipation at finite temperatures. For the special value of the ohmic damping strength K=1/2 the reduced dynamics is found in analytic form. For K<<1 we discuss the possibility of modulating the effect of the SB environment on the qubit. In particular, we study the effect of the crossover to a slow environment dynamics, which may be triggered by changing both the temperature and the system-environment coupling.
Knapsack is one of the most fundamental problems in theoretical computer science. In the $(1 - \epsilon)$-approximation setting, although there is a fine-grained lower bound of $(n + 1 / \epsilon) ^ {2 - o(1)}$ based on the $(\min, +)$-convolution hypothesis ([K{\"u}nnemann, Paturi and Schneider, ICALP 2017] and [Cygan, Mucha, Wegrzycki and Wlodarczyk, 2017]), the best algorithm is randomized and runs in $\tilde O\left(n + (\frac{1}{\epsilon})^{11/5}/2^{\Omega(\sqrt{\log(1/\epsilon)})}\right)$ time [Deng, Jin and Mao, SODA 2023], and it remains an important open problem whether an algorithm whose running time matches the lower bound (up to a sub-polynomial factor) exists. We answer the question positively by showing a deterministic $(1 - \epsilon)$-approximation scheme for knapsack that runs in $\tilde O(n + (1 / \epsilon) ^ {2})$ time. We first extend a known lemma in a recursive way to reduce the problem to $n \epsilon$-additive approximation for $n$ items with profits in $[1, 2)$. Then we give a simple efficient geometry-based algorithm for the reduced problem.
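For orientation, the sketch below shows the classical profit-scaling FPTAS for knapsack, a much slower textbook scheme than the $\tilde O(n + (1/\epsilon)^2)$ algorithm described above; it only illustrates what a $(1-\epsilon)$-approximation scheme computes.

```python
# Classical profit-scaling FPTAS for 0/1 knapsack -- a textbook baseline,
# NOT the O~(n + (1/eps)^2) algorithm of this paper.
def knapsack_fptas(profits, weights, capacity, eps):
    n = len(profits)
    pmax = max(profits)
    K = eps * pmax / n                      # scaling factor
    scaled = [int(p / K) for p in profits]  # scaled integer profits
    P = sum(scaled)
    INF = float("inf")
    # dp[q] = minimum weight achieving scaled profit exactly q
    dp = [0.0] + [INF] * P
    for p, w in zip(scaled, weights):
        for q in range(P, p - 1, -1):
            if dp[q - p] + w < dp[q]:
                dp[q] = dp[q - p] + w
    best = max(q for q in range(P + 1) if dp[q] <= capacity)
    return best * K  # >= (1 - eps) * OPT

print(knapsack_fptas([60, 100, 120], [10, 20, 30], 50, 0.1))  # OPT = 220
```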
We present several calculations to identify the critical point in the ground state of random spin systems, including spin glasses, on the basis of the duality analysis. The duality analysis is a powerful method to obtain the precise location of the critical point at finite temperature, even for spin glasses. We propose a single equality for identifying the critical point in the ground state, based on several conjectures. The equality indeed gives the exact location of the critical points for the bond-dilution Ising model on several lattices, and provides insight for further analysis of the ground state in spin glasses.
We show that observation of the time-dependent effect of microlensing of relativistically broadened emission lines (such as e.g. the Fe Kalpha line in X-rays) in strongly lensed quasars could provide data on celestial mechanics of circular orbits in the direct vicinity of the horizon of supermassive black holes. This information can be extracted from the observation of the evolution of the red/blue edge of the magnified line just before and just after the period of crossing of the innermost stable circular orbit by the microlensing caustic. The functional form of this evolution is insensitive to numerous astrophysical parameters of the accreting black hole and of the microlensing caustic network (as opposed to the evolution of the full line spectrum). Measurement of the temporal evolution of the red/blue edge could provide a precision measurement of the radial dependence of the gravitational redshift and of the velocity of the circular orbits, down to the innermost stable circular orbit. These measurements could be used to discriminate between General Relativity and alternative models of relativistic gravity in which the dynamics of photons and massive bodies orbiting the gravitating centre is different from that of the geodesics in the Schwarzschild or Kerr space-times.
We report the detection and characterization of two short-period, Neptune-sized planets around the active host star Kepler-210. The host star's parameters derived from those planets are (a) mutually inconsistent and (b) do not conform to the expected host star parameters. We furthermore report the detection of transit timing variations (TTVs) in the O-C diagrams for both planets. We explore various scenarios that explain and resolve those discrepancies. A simple scenario consistent with all data appears to be one that attributes substantial eccentricities to the inner short-period planets and that interprets the TTVs as due to the action of another, somewhat longer period planet. To substantiate our suggestions, we present the results of N-body simulations that modeled the TTVs and that checked the stability of the Kepler-210 system.
Recent AI-assistant agents, such as ChatGPT, predominantly rely on supervised fine-tuning (SFT) with human annotations and reinforcement learning from human feedback (RLHF) to align the output of large language models (LLMs) with human intentions, ensuring they are helpful, ethical, and reliable. However, this dependence can significantly constrain the true potential of AI-assistant agents due to the high cost of obtaining human supervision and the related issues of quality, reliability, diversity, self-consistency, and undesirable biases. To address these challenges, we propose a novel approach called SELF-ALIGN, which combines principle-driven reasoning and the generative power of LLMs for the self-alignment of AI agents with minimal human supervision. Our approach encompasses four stages: first, we use an LLM to generate synthetic prompts, and a topic-guided method to augment the prompt diversity; second, we use a small set of human-written principles for AI models to follow, and guide the LLM through in-context learning from demonstrations (of principle application) to produce helpful, ethical, and reliable responses to users' queries; third, we fine-tune the original LLM with the high-quality self-aligned responses so that the resulting model can generate desirable responses for each query directly, without the principle set and the demonstrations anymore; and finally, we offer a refinement step to address the issues of overly-brief or indirect responses. Applying SELF-ALIGN to the LLaMA-65b base language model, we develop an AI assistant named Dromedary. With fewer than 300 lines of human annotations (including < 200 seed prompts, 16 generic principles, and 5 exemplars for in-context learning), Dromedary significantly surpasses the performance of several state-of-the-art AI systems, including Text-Davinci-003 and Alpaca, on benchmark datasets with various settings.
Bourgain posed the problem of calculating $$ \Sigma = \sup_{n \geq 1} ~\sup_{k_1 <... < k_n} \frac{1}{\sqrt{n}}\| \sum_{j=1}^n e^{2 \pi i k_j \theta}\|_{L^1([0,1])}. $$ It is clear that $\Sigma \leq 1$; beyond that, determining whether $\Sigma < 1$ or $\Sigma=1$ would have some interesting implications, for example concerning the problem of whether all rank one transformations have singular maximal spectral type. In the present paper we prove $\Sigma \geq \sqrt{\pi}/2 \approx 0.886$, thereby improving a result of Karatsuba. For the proof we use a quantitative two-dimensional version of the central limit theorem for lacunary trigonometric series, which in its original form is due to Salem and Zygmund.
We establish dynamical Borel-Cantelli lemmas for nested balls and rectangles centered at generic points in the setting of geometric Lorenz maps. We also establish extreme value statistics for observations maximized at generic points for geometric Lorenz maps and the associated flow.
We explicitly compute the dynamics of closed homogeneous and isotropic universes permeated by a single perfect fluid with a constant equation of state parameter $w$ in the context of a recent reformulation of general relativity, proposed in [1], which prevents the vacuum energy from acting as a gravitational source. This is done using an iterative algorithm, taking as an initial guess the background cosmological evolution obtained using standard general relativity in the absence of a cosmological constant. We show that, in general, the impact of the vacuum energy sequestering mechanism on the dynamics of the universe is significant, except for the $w=1/3$ case where the results are identical to those obtained in the context of general relativity with a null cosmological constant. We also show that there are well behaved models in general relativity that do not have a well behaved counterpart in the vacuum energy sequestering paradigm studied in this paper, highlighting the specific case of a quintessence scalar field with a linear potential.
We investigate the equilibrium of a fluid in contact with a solid boundary through a density-functional theory. Depending on the conditions, the fluid can be in one phase, gas or liquid, or two phases, while the wall induces an external field acting on the fluid particles. We first examine the case of a liquid film in contact with the wall. We construct bifurcation diagrams for the film thickness as a function of the chemical potential. At a specific value of the chemical potential, two equally stable films, a thin one and a thick one, can coexist. As saturation is approached, the thickness of the thick film tends to infinity. This allows the construction of a liquid-gas interface that forms a well-defined contact angle with the wall.
In this paper we present a one-dimensional second-order accurate method to solve elliptic equations with discontinuous coefficients on an arbitrary interface. Second-order accuracy for the first derivative is obtained as well. The method is based on the Ghost Fluid Method, making use of ghost points on which the value is defined by suitable interface conditions. The multi-domain formulation is adopted, where the problem is split into two sub-problems and interface conditions are enforced to close the problem. Interface conditions are relaxed together with the internal equations, leading to an iterative method on the whole set of grid values (inside points and ghost points). A multigrid approach with a suitable definition of the restriction operator is provided. The restriction of the defect is performed separately for both sub-problems, providing a convergence factor close to the one measured in the case of smooth coefficients and independent of the magnitude of the jump in the coefficient. Numerical tests confirm the second-order accuracy. Although the method is proposed in one dimension, the extension to higher dimensions is currently underway.
We study extensions of Sobolev and BV functions on infinite-dimensional domains. Along with some positive results we present a negative solution of the long-standing problem of existence of Sobolev extensions of functions in Gaussian Sobolev spaces from a convex domain to the whole space.
A graph is used to represent data in which the relationships between the objects in the data are at least as important as the objects themselves. Over the last two decades nearly a hundred file formats have been proposed or used to provide portable access to such data. This paper seeks to review these formats, and to provide some insight both to reduce the ongoing creation of unnecessary formats and to guide the development of new formats where needed.
We investigate the dynamics of electron-electron recollisions in the double ionization of atoms in strong laser fields. The statistics of recollisions can be reformulated in terms of an area-preserving map from the observation that the outer electron is driven by the laser field to kick the remaining core electrons periodically. The phase portraits of this map reveal the dynamics of these recollisions in terms of their probability and efficiency.
By some estimates, autonomous vehicles need to travel over 11 billion miles to demonstrate their safety; the importance of simulation testing before real-world testing is therefore self-evident. In recent years, the release of 3D simulators for autonomous driving, represented by Carla and CarSim, marks the transition of autonomous driving simulation testing environments from simple 2D overhead views to complex 3D models. During simulation testing, experimenters need to build static scenes and dynamic traffic flows, pedestrian flows, and other experimental elements to construct experimental scenarios. When building static scenes in 3D simulators, experimenters often need to manually construct 3D models and set parameters and attributes, which is time-consuming and labor-intensive. This thesis proposes an automated program generation framework. Based on deep reinforcement learning, this framework can generate different 2D ground script codes, from which 3D model files and map model files are built. The generated 3D ground scenes are displayed in the Carla simulator, where experimenters can use them for navigation algorithm simulation testing.
We present a general framework to tackle quantum optics problems with giant atoms, i.e. quantum emitters each coupled {\it non-locally} to a structured photonic bath (typically a lattice) of any dimension. The theory encompasses the calculation and general properties of Green's functions, atom-photon bound states (BSs), collective master equations and decoherence-free Hamiltonians (DFHs), and is underpinned by a formalism where a giant atom is formally viewed as a normal atom lying at a fictitious location. As a major application, we provide for the first time a general criterion to predict/engineer DFHs of giant atoms, which can be applied both in and out of the photonic continuum and regardless of the structure or dimensionality of the photonic bath. This is used to show novel DFHs in 2D baths such as a square lattice and photonic graphene.
The groundbreaking work of Rothvo{\ss} [arxiv:1311.2369] established that every linear program expressing the matching polytope has an exponential number of inequalities (formally, the matching polytope has exponential extension complexity). We generalize this result by deriving strong bounds on the polyhedral inapproximability of the matching polytope: for fixed $0 < \varepsilon < 1$, every polyhedral $(1 + \varepsilon / n)$-approximation requires an exponential number of inequalities, where $n$ is the number of vertices. This is sharp given the well-known $\rho$-approximation of size $O(\binom{n}{\rho/(\rho-1)})$ provided by the odd-sets of size up to $\rho/(\rho-1)$. Thus matching is the first problem in $P$ whose natural linear encoding does not admit a fully polynomial-size relaxation scheme (the polyhedral equivalent of an FPTAS); this provides a sharp separation from the polynomial-size relaxation scheme obtained, e.g., via the constant-sized odd-sets mentioned above. Our approach reuses ideas from Rothvo{\ss} [arxiv:1311.2369]; however, the main lower-bounding technique is different. While the original proof is based on the hyperplane separation bound (also called the rectangle corruption bound), we employ the information-theoretic notion of common information as introduced in Braun and Pokutta [http://eccc.hpi-web.de/report/2013/056/], which allows one to analyze perturbations of slack matrices. It turns out that the high extension complexity of the matching polytope stems from the same source of hardness as for the correlation polytope: a direct sum structure.
Let $X$ be a connected, noetherian scheme and $\mathcal{A}$ be a sheaf of Azumaya algebras on $X$ which is a locally free $\mathcal{O}_{X}$-module of rank $a$. We show that the kernel and cokernel of $K_{i}(X) \to K_{i}(\mathcal{A})$ are torsion groups with exponent $a^{m}$ for some $m$ and any $i\geq 0$, when $X$ is regular or $X$ is of dimension $d$ with an ample sheaf (in this case $m\leq d+1$). As a consequence, $K_{i}(X,\mathbb{Z}/m)\cong K_{i}(\mathcal{A},\mathbb{Z}/m)$ for any $m$ relatively prime to $a$.
Self-assembly is one of the prevalent strategies used by living systems to fabricate ensembles of precision nanometer-scale structures and devices. The push for analogous approaches to create synthetic nanomaterials has led to the development of a large class of programmable crystalline structures. However, many applications require `self-limiting' assemblies, which autonomously terminate growth at a well-defined size and geometry. For example, curved architectures such as tubules, vesicles, or capsids can be designed to self-close at a particular size, symmetry, and topology. But developing synthetic strategies for self-closing assembly has been challenging, in part because such structures are prone to polymorphism that arises from thermal fluctuations of their local curvature, a problem that worsens with increased target size. Here we demonstrate a strategy to eliminate this source of polymorphism in self-closing assembly of tubules by increasing the assembly complexity. In the limit of single-component assembly, we find that thermal fluctuations allow the system to assemble nearby, off-target structures with varying widths, helicities, and chirality. By increasing the number of distinct components, we reduce the density of off-target states, thereby increasing the selectivity of a user-specified target structure to nearly 100%. We further show that by reducing the design constraints by targeting either the pitch or the width of tubules, fewer components are needed to reach complete selectivity. Combining experiments with theory, our results reveal an economical limit, which determines the minimum number of components that are required to create arbitrary assembly sizes with full selectivity. In the future, this approach could be extended to more complex self-limited structures, such as shells or triply periodic surfaces.
Many of the input-parameter-to-output-quantity-of-interest maps that arise in computational science admit a surprising low-dimensional structure, where the outputs vary primarily along a handful of directions in the high-dimensional input space. This type of structure is well modeled by a ridge function, which is a composition of a low-dimensional linear transformation with a nonlinear function. If the goal is to compute statistics of the output (e.g., as in uncertainty quantification or robust design) then one should exploit this low-dimensional structure, when present, to accelerate computations. We develop Gaussian quadrature and the associated polynomial approximation for one-dimensional ridge functions. The key elements of our method are (i) approximating the univariate density of the given linear combination of inputs by repeated convolutions and (ii) a Lanczos-Stieltjes method for constructing orthogonal polynomials and Gaussian quadrature.
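A minimal sketch of key element (i), approximating the univariate density of a linear combination of inputs by repeated convolutions; the ridge direction and the uniform input distribution are illustrative assumptions.

```python
import numpy as np

# Key element (i): approximate the density of y = a^T x for independent
# inputs x_i ~ Uniform[-1, 1] by repeated numerical convolution.  The
# ridge direction a is an illustrative stand-in for the learned one.
a = np.array([0.6, 0.3, 0.1])
h = 1e-3                                    # grid spacing
grid = np.arange(-1.0, 1.0 + h, h)

density = None
for ai in a:
    # density of a_i * x_i: uniform on [-|a_i|, |a_i|], height 1/(2|a_i|)
    f = np.where(np.abs(grid) <= abs(ai), 1.0 / (2.0 * abs(ai)), 0.0)
    density = f if density is None else np.convolve(density, f) * h

support = np.arange(density.size) * h       # np.convolve widens the support
support -= support.mean()                   # re-center (the sum is symmetric)
print(density.sum() * h)                    # ~1.0: a valid probability density
```

From this density, the Lanczos-Stieltjes step would build the orthogonal polynomials and Gaussian quadrature nodes; that part is omitted here.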
In this paper, we extend the fundamental theorem for submanifolds to general ambient spaces by viewing it as a higher-codimensional Cartan-Ambrose-Hicks theorem. The key ingredient in obtaining this is a generalization of the development of curves to the positive-codimension case. One advantage of our results is that they also provide a geometric construction of the isometric immersion when it exists.
We investigate the thermodynamical features of two Lorentzian signature backgrounds that arise in string theory as exact CFTs and possess more than two disconnected asymptotic regions: the 2-d charged black hole and the Nappi-Witten cosmological model. We find multiple smooth disconnected Euclidean versions of the charged black hole background. They are characterized by different temperatures and electro-chemical potentials. We show that there is no straightforward analog of the Hartle-Hawking state that would express these thermodynamical features. We also obtain multiple Euclidean versions of the Nappi-Witten cosmological model and study their singularity structure. This suggests associating a non-isotropic temperature with this background.
We survey results produced from the interaction between methods in prime characteristic and combinatorial commutative algebra. We showcase results for edge ideals, toric varieties, Stanley-Reisner rings, and initial ideals that were proven via Frobenius. We also discuss results for monomial ideals obtained using Frobenius-like maps. Finally, we present results for $F$-pure rings that were inspired by work done for Stanley-Reisner rings.
An algorithmic stablecoin is a type of cryptocurrency managed by algorithms (i.e., smart contracts) to dynamically minimize the volatility of its price relative to a specific form of asset, e.g., the US dollar. As algorithmic stablecoins have grown rapidly in recent years, they have become much more volatile than expected. In this paper, we take a deep dive into the core of algorithmic stablecoins and share our answers to two fundamental research questions: Are algorithmic stablecoins volatile by design? Are they volatile in practice? Specifically, we introduce an in-depth study of three popular types of algorithmic stablecoins and develop a modeling framework to formalize their key design protocols. Through formal verification, the framework can identify critical conditions under which stablecoins might become volatile. Furthermore, we perform a systematic empirical analysis of real transaction activities of the Basis Cash stablecoin to relate theoretical possibilities to market observations. Lastly, we highlight key design decisions for future development of algorithmic stablecoins.
Genetic association studies have been a popular approach for assessing the association between common Single Nucleotide Polymorphisms (SNPs) and complex diseases. However, other genomic data involved in the mechanism from SNPs to disease, for example, gene expressions, are usually neglected in these association studies. In this paper, we propose to exploit gene expression information to more powerfully test the association between SNPs and diseases by jointly modeling the relations among SNPs, gene expressions and diseases. We propose a variance component test for the total effect of SNPs and a gene expression on disease risk. We cast the test within the causal mediation analysis framework with the gene expression as a potential mediator. For eQTL SNPs, the use of gene expression information can enhance power to test for the total effect of a SNP-set, which is the combined direct and indirect effects of the SNPs mediated through the gene expression, on disease risk. We show that the test statistic under the null hypothesis follows a mixture of $\chi^2$ distributions, which can be evaluated analytically or empirically using the resampling-based perturbation method. We construct tests for each of three disease models that are determined by SNPs only, SNPs and gene expression, or include also their interactions. As the true disease model is unknown in practice, we further propose an omnibus test to accommodate different underlying disease models. We evaluate the finite sample performance of the proposed methods using simulation studies, and show that our proposed test performs well and that the omnibus test can almost reach the optimal power attained when the disease model is known and correctly specified. We apply our method to reanalyze the overall effect of the SNP-set and expression of the ORMDL3 gene on the risk of asthma.
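The mixture-of-$\chi^2$ null distribution can be evaluated empirically, as described above; the sketch below does this by direct Monte Carlo, with hypothetical mixture weights and an observed statistic standing in for the quantities derived from the data.

```python
import numpy as np

# Null distribution of the variance-component score statistic is a mixture
# sum_i lam_i * chi2_1.  Evaluate a p-value empirically by Monte Carlo;
# the mixture weights lam (eigenvalues of the projected kernel matrix)
# and the observed statistic q_obs are illustrative stand-ins.
rng = np.random.default_rng(0)
lam = np.array([3.2, 1.1, 0.4, 0.1])       # hypothetical eigenvalues
q_obs = 12.7                               # hypothetical observed statistic

draws = rng.chisquare(df=1, size=(200_000, lam.size)) @ lam
p_value = (draws >= q_obs).mean()
print(f"empirical p-value: {p_value:.4f}")
```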
We study diffusion damping of acoustic waves in the photon-baryon fluid due to cosmic strings, and calculate the induced $\mu$- and $y$-type spectral distortions of the cosmic microwave background. For cosmic strings with tension within current bounds, their contribution to the spectral distortions is subdominant compared to the distortions from primordial density perturbations.
Samples for single-emitter spectroscopy are usually prepared by spin-coating a dilute solution of emitters on a microscope cover slip of silicate-based glass (such as quartz). Here, we show that both borosilicate glass and quartz contain intrinsic defect colour centres that fluoresce when excited at 532 nm. In a microscope image the defect emission is indistinguishable from spin-coated emitters. The emission spectrum is characterised by multiple peaks, most likely due to coupling to a silica vibration with an energy of 160-180 meV. The defects are single-photon emitters, do not blink, and have photoluminescence lifetimes of a few nanoseconds. Photoluminescence from such defects may previously have been misinterpreted as originating from single nanocrystal quantum dots.
(Withdrawn) Collaborative security initiatives are increasingly often advocated to improve timeliness and effectiveness of threat mitigation. Among these, collaborative predictive blacklisting (CPB) aims to forecast attack sources based on alerts contributed by multiple organizations that might be targeted in similar ways. Alas, CPB proposals thus far have only focused on improving hit counts, but overlooked the impact of collaboration on false positives and false negatives. Moreover, sharing threat intelligence often prompts important privacy, confidentiality, and liability issues. In this paper, we first provide a comprehensive measurement analysis of two state-of-the-art CPB systems: one that uses a trusted central party to collect alerts [Soldo et al., Infocom'10] and a peer-to-peer one relying on controlled data sharing [Freudiger et al., DIMVA'15], studying the impact of collaboration on both correct and incorrect predictions. Then, we present a novel privacy-friendly approach that significantly improves over previous work, achieving a better balance of true and false positive rates, while minimizing information disclosure. Finally, we present an extension that allows our system to scale to very large numbers of organizations.
QIBSH++ is an object-oriented C++ library for the solution of quasi-interpolation problems. The library is based on a Hermite quasi-interpolating operator, derived as a continuous extension of linear multistep methods applied to the numerical solution of boundary value problems for ordinary differential equations. The library includes the possibility to use Hermite data, or to apply a finite difference scheme for derivative approximation when derivative values are not directly available. The generalization of the quasi-interpolation procedure to surface and volume approximation by means of a tensor product technique is also implemented. The method has also been generalized to one-dimensional vectorial data, periodic data, and two-dimensional data in cylindrical coordinates that are periodic with respect to the angular argument. Numerical tests show that the library can be used efficiently in many practical problems.
Wrist-worn smart devices are providing increased insights into human health, behaviour and performance through sophisticated analytics. However, battery life, device cost and sensor performance in the face of movement-related artefact present challenges which must be further addressed to see effective applications and wider adoption through commoditisation of the technology. We address these challenges by demonstrating, using a simple optical measurement, photoplethysmography (PPG), used conventionally for heart rate detection in wrist-worn sensors, that we can provide improved heart rate and human activity recognition (HAR) simultaneously at low sample rates, without an inertial measurement unit. This simplifies hardware design and reduces costs and power budgets. We apply two deep learning pipelines, one for human activity recognition and one for heart rate estimation. HAR is achieved through the application of a visual classification approach, capable of robust performance at low sample rates. Here, transfer learning is leveraged to retrain a convolutional neural network (CNN) to distinguish characteristics of the PPG during different human activities. For heart rate estimation we use a CNN adapted for regression which maps noisy optical signals to heart rate estimates. In both cases, comparisons are made with leading conventional approaches. Our results demonstrate that a low sampling frequency can achieve good performance without significant degradation of accuracy. Sampling at 5 Hz and 10 Hz yielded 80.2% and 83.0% classification accuracy for HAR, respectively. These same sampling frequencies also yielded a robust heart rate estimation comparable with that achieved at the more energy-intensive rate of 256 Hz.
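A small sketch of the visual-classification front end under stated assumptions: a synthetic signal stands in for a 10 Hz PPG window, and scipy's spectrogram produces the time-frequency image a retrained CNN could classify.

```python
import numpy as np
from scipy.signal import spectrogram

# Sketch of the visual-classification front end: turn a low-rate PPG window
# into a time-frequency image that a retrained CNN could classify.
# The synthetic signal below stands in for a real 10 Hz PPG recording.
fs = 10.0                                   # low sample rate (Hz)
t = np.arange(0, 30, 1 / fs)                # 30 s window
ppg = np.sin(2 * np.pi * 1.2 * t)           # ~72 bpm cardiac component
ppg += 0.5 * np.sin(2 * np.pi * 2.0 * t)    # motion artefact component
ppg += 0.2 * np.random.default_rng(0).standard_normal(t.size)

f, ts, Sxx = spectrogram(ppg, fs=fs, nperseg=64, noverlap=48)
image = 10 * np.log10(Sxx + 1e-12)          # dB image, candidate CNN input
print(image.shape, f"freq bins up to {f[-1]:.1f} Hz")
```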
The following problem is addressed: A $3$-manifold $M$ is endowed with a triple $\Omega = \big(\Omega^1,\Omega^2,\Omega^3\big)$ of closed $2$-forms. One wants to construct a coframing $\omega = \big(\omega^1,\omega^2,\omega^3\big)$ of $M$ such that, first, ${\rm d}\omega^i = \Omega^i$ for $i=1,2,3$, and, second, the Riemannian metric $g=\big(\omega^1\big)^2+\big(\omega^2\big)^2+\big(\omega^3\big)^2$ be flat. We show that, in the 'nonsingular case', i.e., when the three $2$-forms $\Omega^i_p$ span at least a $2$-dimensional subspace of $\Lambda^2(T^*_pM)$ and are real-analytic in some $p$-centered coordinates, this problem is always solvable on a neighborhood of $p\in M$, with the general solution $\omega$ depending on three arbitrary functions of two variables. Moreover, the characteristic variety of the generic solution $\omega$ can be taken to be a nonsingular cubic. Some singular situations are considered as well. In particular, we show that the problem is solvable locally when $\Omega^1$, $\Omega^2$, $\Omega^3$ are scalar multiples of a single 2-form that do not vanish simultaneously and satisfy a nondegeneracy condition. We also show by example that solutions may fail to exist when these conditions are not satisfied.
Compositional synthesis relies on the discovery of assumptions, i.e., restrictions on the behavior of the remainder of the system that allow a component to realize its specification. In order to avoid losing valid solutions, these assumptions should be necessary conditions for realizability. However, because there are typically many different behaviors that realize the same specification, necessary behavioral restrictions often do not exist. In this paper, we introduce a new class of assumptions for compositional synthesis, which we call information flow assumptions. Such assumptions capture an essential aspect of distributed computing, because components often need to act upon information that is available only in other components. The presence of a certain flow of information is therefore often a necessary requirement, while the actual behavior that establishes the information flow is unconstrained. In contrast to behavioral assumptions, which are properties of individual computation traces, information flow assumptions are hyperproperties, i.e., properties of sets of traces. We present a method for the automatic derivation of information-flow assumptions from a temporal logic specification of the system. We then provide a technique for the automatic synthesis of component implementations based on information flow assumptions. This provides a new compositional approach to the synthesis of distributed systems. We report on encouraging first experiments with the approach, carried out with the BoSyHyper synthesis tool.
The cosmological 21 cm signal is one of the most promising avenues to study the Epoch of Reionization. One class of experiments aiming to detect this signal is global signal experiments measuring the sky-averaged 21 cm brightness temperature as a function of frequency. A crucial step in the interpretation and analysis of such measurements is separating foreground contributions from the remainder of the signal, requiring accurate models for both components. Current models for the signal (non-foreground) component, which may contain cosmological and systematic contributions, are incomplete and unable to capture the full signal. We propose two new methods for extracting this component from the data: Firstly, we employ a foreground-orthogonal Gaussian Process to extract the part of the signal that cannot be explained by the foregrounds. Secondly, we use a FlexKnot parameterization to model the full signal component in a free-form manner, not assuming any particular shape or functional form. This method uses Bayesian model selection to find the simplest signal that can explain the data. We test our methods on both synthetic data and publicly available EDGES low-band data. We find that the Gaussian Process can clearly capture the foreground-orthogonal signal component of both data sets. The FlexKnot method correctly recovers the full shape of the input signal used in the synthetic data and yields a multi-modal distribution of different signal shapes that can explain the EDGES observations.
The purpose of this short note is to study dominant rational maps from punctual Hilbert schemes of length $k>1$ of projective K3 surfaces $S$ containing infinitely many rational curves. Precisely, we prove that the image of such a rational map is necessarily rationally connected if the map is not generically finite. As an application, we simplify C. Voisin's proof of the fact that symplectic involutions of any projective K3 surface $S$ act trivially on $\mathrm{CH}_0(S)$.
We consider a class of separately convex phase field energies employed in fracture mechanics, featuring non-interpenetration and a general softening behavior. We analyze the time-discrete evolutions generated by a staggered minimization scheme, where fracture irreversibility is modeled by a monotonicity constraint on the phase field variable. After recasting the staggered scheme by means of gradient flows, we characterize the time-continuous limits of the discrete solutions in terms of balanced viscosity evolutions, parametrized by their arc-length with respect to the L2-norm (for the phase field) and the H1-norm (for the displacement field). By a careful study of the energy balance we deduce that time-continuous evolutions may still exhibit alternating behavior at discontinuity times.
In cooperative game theory, games in partition function form are real-valued functions on the set of so-called embedded coalitions, that is, pairs $(S,\pi)$ where $S$ is a subset (coalition) of the set $N$ of players, and $\pi$ is a partition of $N$ containing $S$. Despite the fact that many studies have been devoted to such games, surprisingly, nobody has clearly defined a structure (i.e., an order) on embedded coalitions, resulting in scattered and divergent works lacking unification and proper analysis. The aim of the paper is to fill this gap, that is, to study the structure of embedded coalitions (called here embedded subsets) and the properties of games in partition function form.
We present a pair of high-resolution smoothed particle hydrodynamics (SPH) simulations that explore the evolution and cooling behavior of hot gas around Milky-Way size galaxies. The simulations contain the same total baryonic mass and are identical other than their initial gas density distributions. The first is initialised with a low entropy hot gas halo that traces the cuspy profile of the dark matter, and the second is initialised with a high-entropy hot halo with a cored density profile as might be expected in models with pre-heating feedback. Galaxy formation proceeds in dramatically different fashion depending on the initial setup. While the low-entropy halo cools rapidly, primarily from the central region, the high-entropy halo is quasi-stable for ~4 Gyr and eventually cools via the fragmentation and infall of clouds from ~100 kpc distances. The low-entropy halo's X-ray surface brightness is ~100 times brighter than current limits and the resultant disc galaxy contains more than half of the system's baryons. The high-entropy halo has an X-ray brightness that is in line with observations, an extended distribution of pressure-confined clouds reminiscent of observed populations, and a final disc galaxy that has half the mass and ~50% more specific angular momentum than the disc formed in the low-entropy simulation. The final high-entropy system retains the majority of its baryons in a low-density hot halo. The hot halo harbours a trace population of cool, mostly ionised, pressure-confined clouds that contain ~10% of the halo's baryons after 10 Gyr of cooling. The covering fraction for HI and MgII absorption clouds in the high-entropy halo is ~0.4 and ~0.6, respectively, although most of the mass that fuels disc growth is ionised, and hence would be under counted in HI surveys.
In their celebrated paper "On Siegel's Lemma", Bombieri and Vaaler found an upper bound on the height of integer solutions of systems of linear Diophantine equations. Calculating the bound directly, however, requires exponential time. In this paper, we present the bound in a different form that can be computed in polynomial time. We also give an elementary (and arguably simpler) proof for the bound.
The variational perturbation theory for wave functions, which has been shown to work well for bound states of the anharmonic oscillator, is applied to resonance states of the anharmonic oscillator with negative coupling constant. We obtain uniformly accurate wave functions starting from the bound states.
Optimal deployment of deep neural networks (DNNs) on state-of-the-art Systems-on-Chips (SoCs) is crucial for tiny machine learning (TinyML) at the edge. The complexity of these SoCs makes deployment non-trivial, as they typically contain multiple heterogeneous compute cores with limited, programmer-managed memory to optimize latency and energy efficiency. We propose HTVM, a compiler that merges TVM with DORY to maximize the utilization of heterogeneous accelerators and minimize data movements. HTVM allows deploying the MLPerf(TM) Tiny suite on DIANA, an SoC with a RISC-V CPU and digital and analog compute-in-memory AI accelerators, at 120x improved performance over plain TVM deployment.
Dark photon dark matter will resonantly convert into visible photons when the dark photon mass is equal to the plasma frequency of the ambient medium. In cosmological contexts, this transition leads to an extremely efficient, albeit short-lived, heating of the surrounding gas. Existing work in this field has been predominantly focused on understanding the implications of these resonant transitions in the limit that the plasma frequency of the Universe can be treated as being perfectly homogeneous, i.e., neglecting inhomogeneities in the electron number density. In this work we focus on the implications of heating from dark photon dark matter in the presence of inhomogeneous structure (which is particularly relevant for dark photons with masses in the range $10^{-15} \, {\rm eV} \, \lesssim m_{A^\prime} \lesssim 10^{-12}$ eV), emphasizing both the importance of inhomogeneous energy injection, as well as the sensitivity of cosmological observations to the inhomogeneities themselves. More specifically, we derive modified constraints on dark photon dark matter from the Ly-$\alpha$ forest, and show that the presence of inhomogeneities allows one to extend constraints to masses outside of the range that would be obtainable in the homogeneous limit, while only slightly relaxing their strength. We then project sensitivity for near-future cosmological surveys that are hoping to measure the 21cm transition in neutral hydrogen prior to reionization, and demonstrate that these experiments will be extremely useful in improving sensitivity to masses near $\sim 10^{-14}$ eV, potentially by several orders of magnitude. Finally, we discuss implications for reionization, early star formation, and late-time $y$-type spectral distortions, and show that probes which are inherently sensitive to the inhomogeneous state of the Universe could resolve signatures unique to the light dark photon...
We investigate local MCMC algorithms, namely the random-walk Metropolis and the Langevin algorithms, and identify the optimal choice of the local step-size as a function of the dimension $n$ of the state space, asymptotically as $n\to\infty$. We consider target distributions defined as a change of measure from a product law. Such structures arise, for instance, in inverse problems or Bayesian contexts when a product prior is combined with the likelihood. We state analytical results on the asymptotic behavior of the algorithms under general conditions on the change of measure. Our theory is motivated by applications on conditioned diffusion processes and inverse problems related to the 2D Navier--Stokes equation.
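As a concrete instance of the local step-size question, the sketch below runs random-walk Metropolis on a product Gaussian target with $\sigma = \ell/\sqrt{n}$; the standard-normal target and $\ell = 2.38$ are the textbook illustrative choices, not the change-of-measure setting analyzed in the paper.

```python
import numpy as np

# Random-walk Metropolis on an n-dimensional product target, with the
# local step size scaled as sigma = l / sqrt(n) -- the classical rate whose
# optimal constant this line of work identifies.  Standard-normal target
# and l = 2.38 are textbook illustrative choices.
def rwm(n, l=2.38, iters=20_000, seed=0):
    rng = np.random.default_rng(seed)
    sigma = l / np.sqrt(n)
    x = np.zeros(n)
    logp = -0.5 * x @ x
    accepted = 0
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(n)
        logp_y = -0.5 * y @ y
        if np.log(rng.random()) < logp_y - logp:
            x, logp = y, logp_y
            accepted += 1
    return accepted / iters

for n in (10, 100, 1000):
    print(n, f"acceptance ~ {rwm(n):.3f}")   # approaches ~0.234 as n grows
```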
We study polynomial deformations of the fuzzy sphere, specifically given by the cubic or the Higgs algebra. We derive the Higgs algebra by quantizing the Poisson structure on a surface in $\mathbb{R}^3$. We find that several surfaces, differing by constants, are described by the Higgs algebra at the fuzzy level. Some of these surfaces have a singularity and we overcome this by quantizing this manifold using coherent states for this nonlinear algebra. This is seen in the measure constructed from these coherent states. We also find the star product for this non-commutative algebra as a first step in constructing field theories on such fuzzy spaces.
The first author's recent unexpected discovery of torsion in the integral cohomology of the T\"ubingen Triangle Tiling has led to a re-evaluation of current descriptions of and calculational methods for the topological invariants associated with aperiodic tilings. The existence of torsion calls into question the previously assumed equivalence of cohomological and K-theoretic invariants as well as the supposed lack of torsion in the latter. In this paper we examine in detail the topological invariants of canonical projection tilings; we extend results of Forrest, Hunton and Kellendonk to give a full treatment of the torsion in the cohomology of such tilings in codimension at most 3, and present the additions and amendments needed to previous results and calculations in the literature. It is straightforward to give a complete treatment of the torsion components for tilings of codimension 1 and 2, but the case of codimension 3 is a good deal more complicated, and we illustrate our methods with the calculations of all four icosahedral tilings previously considered. Turning to the K-theoretic invariants, we show that cohomology and K-theory agree for all canonical projection tilings in (physical) dimension at most 3, thus proving the existence of torsion in, for example, the K-theory of the T\"ubingen Triangle Tiling. The question of the equivalence of cohomology and K-theory for tilings of higher dimensional euclidean space remains open.
Motion retargeting from a human demonstration to a robot is an effective way to reduce the professional requirements and workload of robot programming, but faces the challenges resulting from the differences between humans and robots. Traditional optimization-based methods are time-consuming and rely heavily on good initialization, while recent studies using feedforward neural networks suffer from poor generalization to unseen motions. Moreover, they neglect the topological information in human skeletons and robot structures. In this paper, we propose a novel neural latent optimization approach to address these problems. Latent optimization utilizes a decoder to establish a mapping between the latent space and the robot motion space. Afterward, the retargeting results that satisfy robot constraints can be obtained by searching for the optimal latent vector. Alongside latent optimization, neural initialization exploits an encoder to provide a better initialization for faster and better convergence of optimization. Both the human skeleton and the robot structure are modeled as graphs to make better use of topological information. We perform experiments on retargeting Chinese sign language, which involves two arms and two hands, with additional requirements on the relative relationships among joints. Experiments include retargeting various human demonstrations to YuMi, NAO, and Pepper in the simulation environment and to YuMi in the real-world environment. Both efficiency and accuracy of the proposed method are verified.
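A minimal sketch of the latent-optimization step, with a randomly initialized stand-in for the pretrained decoder and hypothetical joint limits; the real method would warm-start the latent vector from the neural-initialization encoder.

```python
import torch

# Core of the latent-optimization step: freeze a (pretrained) decoder that
# maps a latent vector to robot joint angles, then search the latent space so
# the decoded motion matches the human demonstration under joint limits.
# The decoder, dimensions, and penalty weight are illustrative stand-ins.
latent_dim, n_joints = 16, 14
decoder = torch.nn.Sequential(               # placeholder for the trained decoder
    torch.nn.Linear(latent_dim, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, n_joints))
for p in decoder.parameters():
    p.requires_grad_(False)

target = torch.randn(n_joints) * 0.5         # stand-in for the retargeting reference
lo, hi = -1.5, 1.5                            # hypothetical joint limits (rad)

z = torch.zeros(latent_dim, requires_grad=True)  # neural initialization would go here
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    q = decoder(z)
    loss = ((q - target) ** 2).sum()          # match the demonstration
    loss = loss + 10.0 * (torch.relu(q - hi) ** 2 + torch.relu(lo - q) ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()
print("final retargeting loss:", loss.item())
```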
A novel solve-training framework is proposed to train neural networks to represent low-dimensional solution maps of physical models. The solve-training framework uses the neural network as the ansatz of the solution map and trains the network variationally via loss functions derived from the underlying physical models. The solve-training framework avoids the expensive data preparation of the traditional supervised training procedure, which prepares labels for the input data, and still achieves an effective representation of the solution map adapted to the input data distribution. The efficiency of the solve-training framework is demonstrated by obtaining solution maps for linear and nonlinear elliptic equations, and maps from potentials to ground states of linear and nonlinear Schr\"odinger equations.
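A minimal sketch of solve-training for a linear elliptic problem, assuming PyTorch; the grid size, architecture, and forcing distribution are illustrative choices. The loss is the PDE residual of the network output, so no solution labels are ever prepared.

```python
import torch

# Solve-training sketch: learn the map f -> u for the discrete 1D Poisson
# equation A u = f (zero boundary, second-difference stencil) by penalizing
# the PDE residual of the network output -- no precomputed solution labels.
m = 64
A = (2 * torch.eye(m)
     - torch.diag(torch.ones(m - 1), 1)
     - torch.diag(torch.ones(m - 1), -1))      # discrete -u'' stencil

net = torch.nn.Sequential(                     # ansatz for the solution map
    torch.nn.Linear(m, 128), torch.nn.Tanh(), torch.nn.Linear(128, m))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    f = torch.randn(32, m)                     # sample forcings from the input distribution
    u = net(f)
    residual = u @ A - f                       # batched A u - f (A is symmetric)
    loss = (residual ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("final residual loss:", loss.item())
```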
We have observed YZ Cnc at two-day intervals from 6 to 24 April 1998, covering two full outburst cycles. The 0.1-2.4 keV flux is lower during optical outburst than in quiescence, and lowest at the end of the outburst. The decline of the X-ray flux in the quiescent interval appears to be in contrast to the predictions of simple models for accretion-disk instabilities. Variability on ~1 hour time scales is present, but appears not to be related to the orbital phase. YZ Cnc was less luminous in X-rays during our 1998 observations than in earlier ROSAT observations.
We construct fractional Brownian motion (fBm), sub-fractional Brownian motion (sub-fBm), negative sub-fractional Brownian motion (nsfBm) and the odd part of fBm in the sense of Dzhaparidze and van Zanten (2004) by means of limiting procedures applied to some particle systems. These processes are obtained for full ranges of Hurst parameter. Particle picture interpretations of sub-fBm and nsfBm were known earlier (using a different approach) for narrow ranges of parameters; the odd part of fBm process had not been given any physical interpretation at all. Our approach consists in representing these processes as $\langle X(1),1_{[0,t]}\rangle$, $\langle X(1),1_{[0,t]}-1_{[-t,0]}\rangle$, $\langle X(1),1_{[-t,t]}\rangle$, respectively, where $X(1)$ is an (extended) $S'$-random variable obtained as the fluctuation limit of either the empirical process or the occupation time process of an appropriate particle system. In fact, our construction is more general, permitting us to obtain some new Gaussian processes, as well as multidimensional random fields. In particular, we generalize and presumably simplify some results by Hambly and Jones (2007). We also obtain a new class of $S'$-valued density processes, containing as a particular case the density process of Martin-L\"of (1976).
Oscillatory magnetoresistance measurements on graphene have revealed a wealth of novel physics. These phenomena are typically studied at low currents. At high currents, electrons are driven far from equilibrium with the atomic lattice vibrations so that their kinetic energy can exceed the thermal energy of the phonons. Here, we report three non-equilibrium phenomena in monolayer graphene at high currents: (i) a "Doppler-like" shift and splitting of the frequencies of the transverse acoustic (TA) phonons emitted when the electrons undergo inter-Landau level (LL) transitions; (ii) an intra-LL Mach effect with the emission of TA phonons when the electrons approach supersonic speed, and (iii) the onset of elastic inter-LL transitions at a critical carrier drift velocity, analogous to the superfluid Landau velocity. All three quantum phenomena can be unified in a single resonance equation. They offer avenues for research on out-of-equilibrium phenomena in other two-dimensional fermion systems.
In this brief report we consider a non-local Abelian Higgs model in the presence of a neutralizing uniform background charge. We show that such a system possesses vortices whose key feature is a strong radial electric field. We estimate the basic properties of such an object and characteristic length scales in this model.
This work considers the orthogonal frequency division multiple access (OFDMA) technology that enables multiple unmanned aerial vehicle (multi-UAV) communication systems to provide on-demand services. The main aim of this work is to derive the optimal allocation of radio resources, 3D placement of UAVs, and user association matrices. To achieve the desired objectives, we decoupled the original joint optimization problem into two sub-problems: i) 3D placement and user association and ii) sum-rate maximization for optimal radio resource allocation, which are solved iteratively. The proposed iterative algorithm is shown via numerical results to achieve fast convergence after fewer than 10 iterations. The benefits of the proposed design are demonstrated via superior sum-rate performance compared to existing reference designs. Moreover, the results show that the optimal power and sub-carrier allocation helps mitigate the co-cell interference that directly impacts the system's performance.
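One standard ingredient of such sum-rate sub-problems is water-filling power allocation for a fixed user association; the sketch below (with illustrative channel gains and power budget, not the paper's system model) solves it by bisection on the water level.

```python
import numpy as np

# Water-filling power allocation across sub-carriers: maximize
# sum log2(1 + g_k p_k) subject to sum p_k = p_total, giving
# p_k = max(0, mu - 1/g_k) with the water level mu found by bisection.
# Channel gains and power budget are illustrative stand-ins.
def water_filling(gains, p_total, tol=1e-9):
    lo, hi = 0.0, p_total + 1.0 / gains.min()
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)                       # candidate water level
        p = np.maximum(0.0, mu - 1.0 / gains)      # per-sub-carrier power
        lo, hi = (mu, hi) if p.sum() < p_total else (lo, mu)
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

g = np.array([2.0, 1.0, 0.5, 0.1])                 # normalized channel gains
p = water_filling(g, p_total=4.0)
print(p, "sum-rate:", np.log2(1 + g * p).sum())
```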
Let A be a class of objects, equipped with an integer size such that for all n the number a(n) of objects of size n is finite. We are interested in the case where the generating function sum_n a(n) t^n is rational, or more generally algebraic. This property has a practical interest, since one can usually say a lot on the numbers a(n), but also a combinatorial one: the rational or algebraic nature of the generating function suggests that the objects have a (possibly hidden) structure, similar to the linear structure of words in the rational case, and to the branching structure of trees in the algebraic case. We describe and illustrate this combinatorial intuition, and discuss its validity. While it seems to be satisfactory in the rational case, it is probably incomplete in the algebraic one. We conclude with open questions.
The self-attention mechanism has attracted wide attention for its ability to model long-range dependencies, and its best-known variant in computer vision tasks, the non-local block, models the global dependency of the input feature maps. Gathering global contextual information inevitably requires a tremendous amount of memory and computing resources, a cost that has been extensively studied in the past several years. However, there is a further problem with the self-attention scheme: is all the information gathered from the global scope actually helpful for contextual modelling? To our knowledge, few studies have focused on this problem. Aimed at both questions, this paper proposes SPANet, a salient-positions-based attention scheme, which is inspired by some interesting observations on the attention maps and affinity matrices generated in the self-attention scheme. We believe these observations are beneficial for a better understanding of self-attention. SPANet uses a salient-position selection algorithm to select only a limited number of salient points to attend to when computing the attention map. This approach not only spares a lot of memory and computing resources, but also distills the positive information from the transformation of the input feature maps. In the implementation, considering that the feature maps have high channel dimensions, which are completely different from general visual images, we take the squared power of the feature maps along the channel dimension as the saliency metric of the positions. In general, unlike the non-local block, SPANet models the contextual information using only the selected positions instead of all of them, along the channel dimension instead of the spatial dimension. Our source code is available at https://github.com/likyoo/SPANet.
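A sketch of the salient-position idea under stated assumptions: positions are scored by the squared power of their feature vectors along the channel dimension, and attention is computed over only the top-$k$ positions; shapes and $k$ are illustrative, not SPANet's exact module.

```python
import torch

# Score each spatial position by the squared power of its feature vector
# along the channel dimension, keep only the top-k positions as keys/values.
B, C, H, W = 2, 64, 32, 32
x = torch.randn(B, C, H, W)
k = 100                                        # number of salient positions kept

feat = x.flatten(2)                            # (B, C, H*W)
saliency = (feat ** 2).sum(dim=1)              # squared power per position
idx = saliency.topk(k, dim=1).indices          # (B, k) salient positions
keys = torch.gather(feat, 2, idx.unsqueeze(1).expand(B, C, k))  # (B, C, k)

# Attention of every query position over only the k salient positions:
attn = torch.softmax(feat.transpose(1, 2) @ keys / C ** 0.5, dim=-1)  # (B, H*W, k)
out = (attn @ keys.transpose(1, 2)).transpose(1, 2).reshape(B, C, H, W)
print(out.shape)                               # same shape, k-sparse context
```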
Vortex-induced vibration (VIV) prediction for long flexible cylindrical structures relies on the accuracy of the hydrodynamic database constructed via rigid-cylinder forced-vibration experiments. However, creating a comprehensive hydrodynamic database over tens of input parameters, including vibration amplitude and frequency, Reynolds number, surface roughness, and so forth, is technically challenging and virtually impossible due to the large number of experiments required. The current work presents an alternative approach that approximates the crossflow (CF) hydrodynamic coefficient database in a carefully chosen parameterized form. The learning of the parameters is posed as a constrained optimization, where the objective function is constructed from the error between the experimental response and the theoretical prediction assuming energy balance between fluid and structure. This method yields the optimal estimate of the CF parametric hydrodynamic database and produces VIV response predictions based on the updated database. The method was then tested on several experiments, including a freely mounted rigid cylinder at large Reynolds number with combined crossflow and inline vibrations and a large-scale flexible cylinder test in the Norwegian Deepwater Program, and is shown to robustly and significantly reduce the error in predicting cylinder VIV.
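A heavily simplified sketch of the parameter-learning step: a parametric response curve is fitted to measured amplitudes by constrained optimization. The Gaussian parametric form, synthetic measurements, and bounds are illustrative stand-ins for the real hydrodynamic database and experiments.

```python
import numpy as np
from scipy.optimize import minimize

# Fit a parameterized crossflow response curve to measured amplitudes by
# bounded optimization.  All quantities below are illustrative stand-ins.
Vr = np.linspace(4.0, 9.0, 20)                 # reduced velocities
A_meas = 0.8 * np.exp(-0.5 * ((Vr - 6.0) / 1.2) ** 2)  # synthetic response data

def predicted_amplitude(theta, Vr):
    a0, v0, s = theta                          # peak amplitude, location, width
    return a0 * np.exp(-0.5 * ((Vr - v0) / s) ** 2)

def objective(theta):
    return np.sum((predicted_amplitude(theta, Vr) - A_meas) ** 2)

res = minimize(objective, x0=[0.5, 5.0, 1.0],
               bounds=[(0.0, 1.5), (4.0, 9.0), (0.1, 3.0)])
print(res.x)                                   # recovers ~ (0.8, 6.0, 1.2)
```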
In this paper, we consider coupled forward-backward stochastic differential equations (FBSDEs for short) with parameter $\varepsilon >0$. We study the asymptotic behavior of their solutions and establish a large deviation principle for the corresponding processes.
We study the joint probability distributions of separation, $R$, and radial component of the relative velocity, $V_{\rm R}$, of particles settling under gravity in a turbulent flow. We also obtain the moments of these distributions and analyze their anisotropy using spherical harmonics. We find that the qualitative nature of the joint distributions remains the same as in the no-gravity case. Distributions of $V_{\rm R}$ for fixed values of $R$ show a power-law dependence on $V_{\rm R}$ over a range of $V_{\rm R}$, with an exponent that depends on gravity. The effects of gravity are also manifested in the following ways: (a) the moments of the distributions are anisotropic, with a degree of anisotropy that depends on the particle Stokes number but not on $R$ for small values of $R$; (b) the mean collision velocity between two particles is decreased for particles having equal Stokes numbers but increased for particles having different Stokes numbers. For the latter, the collision velocity is set by the difference in their settling velocities.
Script identification plays a significant role in analysing documents and videos. In this paper, we focus on the problem of script identification in scene text images and video scripts. Owing to low image quality, complex backgrounds, and the similar layout of characters shared by some scripts such as Greek and Latin, text recognition in these cases becomes challenging. We propose a novel method that extracts local and global features using a CNN-LSTM framework and weights them dynamically for script identification. First, we convert the images into patches and feed them into the CNN-LSTM framework. Attention-based patch weights are calculated by applying a softmax layer after the LSTM. Next, we multiply these weights patch-wise with the corresponding CNN features to yield local features. Global features are extracted from the last cell state of the LSTM. We employ a fusion technique that dynamically weights the local and global features for each individual patch; a schematic sketch follows. Experiments have been conducted on four public script identification datasets: SIW-13, CVSI2015, ICDAR-17 and MLe2e. The proposed framework achieves superior results in comparison to conventional methods.
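The following is a schematic sketch of the pipeline just described, assuming PyTorch; the layer sizes, the gating form of the dynamic fusion, and all names are illustrative rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchScriptNet(nn.Module):
    """Schematic CNN-LSTM script identifier: attention-weighted local (patch)
    features fused with a global feature taken from the LSTM cell state."""
    def __init__(self, n_scripts=13, d=256):
        super().__init__()
        self.cnn = nn.Sequential(                     # per-patch encoder
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, d))
        self.lstm = nn.LSTM(d, d, batch_first=True)
        self.att = nn.Linear(d, 1)                    # patch attention score
        self.gate = nn.Linear(2 * d, 1)               # dynamic local/global weight
        self.cls = nn.Linear(d, n_scripts)

    def forward(self, patches):                       # (B, P, 3, h, w)
        B, P = patches.shape[:2]
        f = self.cnn(patches.flatten(0, 1)).view(B, P, -1)  # CNN patch features
        h, (_, c) = self.lstm(f)
        w = F.softmax(self.att(h).squeeze(-1), dim=1)       # softmax after LSTM
        local = (w.unsqueeze(-1) * f).sum(dim=1)            # weighted local feature
        glob = c[-1]                                        # last LSTM cell state
        g = torch.sigmoid(self.gate(torch.cat([local, glob], dim=-1)))
        return self.cls(g * local + (1 - g) * glob)         # dynamic fusion
```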
An $(n, k_1, \dots, k_t)$-cross intersecting system is a set of non-empty pairwise cross-intersecting families $\mathcal{F}_1\subset{[n]\choose k_1}, \mathcal{F}_2\subset{[n]\choose k_2}, \dots, \mathcal{F}_t\subset{[n]\choose k_t}$ with $t\geq 2$ and $k_1\geq k_2\geq \cdots \geq k_t$. If an $(n, k_1, \dots, k_t)$-cross intersecting system contains at least two families which are cross intersecting freely and at least two families which are cross intersecting but not freely, then we say that the cross intersecting system is of mixed type. All previous studies concern the non-mixed type, i.e., the condition $n \ge k_1+k_2$. In this paper, we study the first interesting mixed type: an $(n, k_1, \dots, k_t)$-cross intersecting system with $k_1+k_3\leq n <k_1+k_2$, i.e., families $\mathcal{F}_i\subseteq {[n]\choose k_i}$ and $\mathcal{F}_j\subseteq {[n]\choose k_j}$ are cross intersecting freely if and only if $\{i, j\}=\{1, 2\}$. Let $M(n, k_1, \dots, k_t)$ denote the maximum sum of sizes of families in an $(n, k_1, \dots, k_t)$-cross intersecting system. We determine $M(n, k_1, \dots, k_t)$ and characterize all extremal $(n, k_1, \dots, k_t)$-cross intersecting systems for $k_1+k_3\leq n <k_1+k_2$. Beyond the extremal result itself, we believe the characterization of maximal cross-intersecting L-initial families and the unimodality of the functions appearing in this paper are of independent interest. The most general condition on $n$ is $n\ge k_1+k_t$; this paper provides the foundational work for the solution under that general condition.
Cylindrical Algebraic Decomposition (CAD) is a key proof technique for formal verification of cyber-physical systems. CAD is computationally expensive, with worst-case doubly-exponential complexity. Selecting an optimal variable ordering is paramount to efficient use of CAD. Prior work has demonstrated that machine learning can be useful in determining efficient variable orderings. Much of this work has been driven by CAD problems extracted from applications of the MetiTarski theorem prover. In this paper, we revisit this prior work and consider issues of bias in existing training and test data. We observe that the classical MetiTarski benchmarks are heavily biased towards particular variable orderings. To address this, we apply symmetries to create a new dataset containing more than 41K MetiTarski challenges designed to remove bias. Furthermore, we evaluate issues of information leakage, and test the generalizability of our models on the new dataset.
A Dispersive Wave Equation in 2+1 dimensions (2LDW) widely discussed by different authors is shown to be nothing but the modified version of the Generalized Dispersive Wave Equation (GLDW). Using Singularity Analysis and techniques based upon the Painleve Property leading to the Double Singular Manifold Expansion we shall find the Miura Transformation which converts the 2LDW Equation into the GLDW Equation. Through this Miura Transformation we shall also present the Lax pair of the 2LDW Equation as well as some interesting reductions to several already known integrable systems in 1+1 dimensions.
Rate of period change $\dot{P}$ for a Cepheid is shown to be a parameter that is capable of indicating the instability strip crossing mode for individual objects, and, in conjunction with light amplitude, likely location within the instability strip. Observed rates of period change in over 200 Milky Way Cepheids are demonstrated to be in general agreement with predictions from stellar evolutionary models, although the sample also displays features that are inconsistent with some published models and indicative of the importance of additional factors not fully incorporated in models to date.
In this article we study the inverse problem of thermoacoustic tomography (TAT) on a medium with attenuation represented by a time-convolution (or memory) term, whose consideration is motivated by the modeling of ultrasound waves in heterogeneous tissue via fractional derivatives with spatially dependent parameters. Under the assumption of being able to measure data on the whole boundary, we prove uniqueness and stability, and propose a convergent reconstruction method for a class of smooth variable sound speeds. By a suitable modification of the time reversal technique, we obtain a Neumann series reconstruction formula.
Storing and processing massive numbers of small files is one of the major challenges for the Hadoop Distributed File System (HDFS). In order to provide fast data access, the NameNode (NN) in HDFS maintains the metadata of all files in its main memory. Hadoop performs well with a small number of large files that require relatively little metadata in the NN's memory. But for a large number of small files, Hadoop has problems such as NN memory overload caused by the huge total metadata size of these small files. We present a new type of archive file, Hadoop Perfect File (HPF), to solve HDFS's small-files problem by merging small files into a large file on HDFS. Existing archive files offer limited functionality and perform poorly when accessing a file in the merged file, because the metadata lookup must read and process the entire index file(s). In contrast, HPF can directly access the metadata of a particular file from its index file without processing the index entirely. The HPF index system uses two hash functions: files' metadata are distributed across index files using a dynamic hash function, and, for each index file, we build an order-preserving perfect hash function that preserves the position of each file's metadata within that index file. HPF therefore reads only the part of the index file that contains the metadata of the searched file during access; a sketch of this two-stage lookup follows. HPF also supports appending files after its creation. Our experiments show that file access with HPF can be more than 40% faster than with the original HDFS. Ignoring caching effects, HPF's file access is around 179% faster than MapFile and 11294% faster than HAR; with caching effects, HPF is around 35% faster than MapFile and 105% faster than HAR.
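A toy sketch of the two-stage index lookup just described (all names, the record layout, and the use of MD5 for the dynamic hash are assumptions for illustration; HPF's real index format will differ):

```python
import hashlib
import struct

RECORD_SIZE = 64  # fixed-size metadata record (offset, length, ...); illustrative

def index_file_for(name, num_index_files):
    """Stage 1: a dynamic hash distributes file metadata across index files."""
    h = int.from_bytes(hashlib.md5(name.encode()).digest()[:8], "big")
    return h % num_index_files

def lookup(name, perfect_hash, index_path_fmt, num_index_files):
    """Stage 2: an order-preserving perfect hash gives the exact record slot,
    so only RECORD_SIZE bytes of the index file are read, never the whole file.
    `perfect_hash` stands in for the per-index-file perfect hash function."""
    i = index_file_for(name, num_index_files)
    slot = perfect_hash(i, name)                    # position of the record
    with open(index_path_fmt.format(i), "rb") as f:
        f.seek(slot * RECORD_SIZE)
        record = f.read(RECORD_SIZE)
    offset, length = struct.unpack_from("<QQ", record)  # locate data in merged file
    return offset, length
```

Because the perfect hash is collision-free and order-preserving, the lookup touches exactly one RECORD_SIZE slice of one index file, instead of scanning the whole index as HAR-style archives do.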
Communication compression techniques are of growing interest for solving decentralized optimization problems under limited communication, where the global objective is to minimize the average of local cost functions over a multi-agent network using only local computation and peer-to-peer communication. In this paper, we propose a novel compressed gradient tracking algorithm (C-GT) that combines the gradient tracking technique with communication compression. In particular, C-GT is compatible with a general class of compression operators that unifies both unbiased and biased compressors. We show that C-GT inherits the advantages of gradient tracking-based algorithms and achieves a linear convergence rate for strongly convex and smooth objective functions. Numerical examples complement the theoretical findings and demonstrate the efficiency and flexibility of the proposed algorithm.
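As a rough illustration of the structure of such an algorithm (not the authors' exact C-GT update, which compresses differences with error-feedback terms), the sketch below combines a plain gradient-tracking iteration with a simple biased top-k compressor applied to the communicated quantities:

```python
import numpy as np

def topk_compress(v, k):
    """A simple biased compressor: keep only the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def compressed_gradient_tracking(grad_fns, W, x0, eta=0.05, k=2, iters=500):
    """Schematic gradient tracking with compressed communication.
    grad_fns: per-agent gradient oracles; W: doubly stochastic mixing matrix;
    x0: (n_agents, dim) initial states. The true C-GT algorithm compresses
    state differences with error feedback; here agents simply compress what
    they transmit, to show where compression enters the two recursions."""
    n = len(grad_fns)
    x = x0.copy()
    g_old = np.stack([g(x[i]) for i, g in enumerate(grad_fns)])
    y = g_old.copy()                                # gradient trackers
    for _ in range(iters):
        xc = np.stack([topk_compress(x[i], k) for i in range(n)])
        yc = np.stack([topk_compress(y[i], k) for i in range(n)])
        x = x + (W @ xc - xc) - eta * y             # mix (compressed) + descend
        g_new = np.stack([g(x[i]) for i, g in enumerate(grad_fns)])
        y = y + (W @ yc - yc) + g_new - g_old       # track the average gradient
        g_old = g_new
    return x.mean(axis=0)
```

The second recursion keeps each y_i tracking the network-average gradient, which is the mechanism behind the linear convergence in the strongly convex case.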
Cross-sectional scanning tunnelling microscopy and spectroscopy (XSTM/S) were used to map out the band alignment across the complex oxide interface of La$_{2/3}$Ca$_{1/3}$MnO$_{3}$/Nb-doped SrTiO$_{3}$. Through a controlled cross-sectional fracturing procedure, unit-cell-high steps persist near the interface between the thin film and the substrate in these non-cleavable perovskite materials. The abrupt changes of the mechanical and electronic properties were visualized directly by XSTM/S. Using changes in the density of states (DOS) as probed by STM, the electronic band alignment across the heterointerface was mapped out, providing a new approach for directly measuring the electronic properties at complex oxide interfaces.
We show that a cubic fourfold F that is apolar to a Veronese surface has the property that its variety of power sums VSP(F,10) is singular along a K3 surface of genus 20. We prove that these cubics form a divisor in the moduli space of cubic fourfolds and that this divisor is not a Noether-Lefschetz divisor. We use this result to prove that there is no nontrivial Hodge correspondence between a very general cubic and its VSP.
Admission control schemes and scheduling algorithms are designed to offer QoS services in 802.16/802.16e networks, and a number of studies have investigated these issues. However, the channel condition and the priority of traffic classes are rarely considered in existing scheduling algorithms. Although a number of energy-saving mechanisms have been proposed for IEEE 802.16e, minimizing the power consumption of IEEE 802.16e mobile stations with multiple real-time connections has not yet been investigated; prior work mainly considers non-real-time connections in IEEE 802.16e networks. In this paper, we design an adaptive power-efficient packet scheduling algorithm that provides a minimum fair allocation of the channel bandwidth for each packet flow and additionally minimizes power consumption. In the adaptive scheduling algorithm, packets are transmitted in their allotted slots from the different priority traffic classes adaptively, depending on the channel condition. If the buffer size of a high-priority traffic queue with bad channel condition exceeds a threshold, the priority of that flow is increased by adjusting the sleep duty cycle of existing low-priority traffic, to prevent starvation. Simulation results show that the proposed scheduler achieves better channel utilization while minimizing delay and power consumption.
Photonic crystal waveguides are known to support C-points - point-like polarisation singularities with local chirality. Such points can couple with dipole-like emitters to produce highly directional emission, from which spin-photon entanglers can be built. Much is made of the promise of using slow-light modes to enhance this light-matter coupling. Here we explore the transition from travelling to standing waves for two different photonic crystal waveguide designs. We find that time-reversal symmetry and the reciprocal nature of light places constraints on using C-points in the slow-light regime. We observe two distinctly different mechanisms through which this condition is satisfied in the two waveguides. In the waveguide designs we consider, a modest group-velocity of $v_g \approx c/10$ is found to be the optimum for slow-light coupling to the C-points.
Low-complexity near-optimal signal detection in large-dimensional communication systems is a challenge. In this paper, we present a reactive tabu search (RTS) algorithm, a heuristic-based combinatorial optimization technique, to achieve low-complexity near-maximum-likelihood (ML) signal detection in linear vector channels with large dimensions. Two practically important large-dimension linear vector channels are considered: i) multiple-input multiple-output (MIMO) channels with a large number (tens) of transmit and receive antennas, and ii) severely delay-spread MIMO inter-symbol interference (ISI) channels with a large number (tens to hundreds) of multipath components. These channels are of interest because the former offers the benefit of increased spectral efficiency (several tens of bps/Hz) and the latter offers the benefit of high time-diversity orders. Our simulation results show that, while algorithms including variants of sphere decoding do not scale well to large dimensions, the proposed RTS algorithm scales well while achieving performance increasingly close to ML as the number of dimensions grows.
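A schematic sketch of such a search for the system model y = Hx + n (simplified; the exact tabu bookkeeping, escape mechanism, and stopping criteria of the proposed RTS detector are assumptions here):

```python
import numpy as np

def rts_detect(y, H, symbols, iters=200, tabu0=5):
    """Schematic reactive tabu search for near-ML detection: minimize
    ||y - Hx||^2 over symbol vectors, moving one coordinate at a time.
    The tabu tenure grows when solutions repeat (the 'reactive' rule)."""
    n = H.shape[1]
    x = np.random.choice(symbols, n)              # random initial vector
    best, best_cost = x.copy(), np.linalg.norm(y - H @ x) ** 2
    tabu = np.zeros((n, len(symbols)), dtype=int) # expiry time of each move
    tenure, seen = tabu0, set()
    for t in range(iters):
        cand = None
        for i in range(n):                        # one-coordinate neighborhood
            for j, s in enumerate(symbols):
                if s == x[i]:
                    continue
                xn = x.copy(); xn[i] = s
                c = np.linalg.norm(y - H @ xn) ** 2
                ok = tabu[i, j] <= t or c < best_cost   # aspiration override
                if ok and (cand is None or c < cand[0]):
                    cand = (c, i, j, xn)
        if cand is None:
            break
        c, i, j, x = cand
        tabu[i, j] = t + tenure                   # forbid repeating this move
        key = x.tobytes()
        tenure = tenure + 1 if key in seen else max(tabu0, tenure - 1)
        seen.add(key)                             # reactive tenure adaptation
        if c < best_cost:
            best, best_cost = x.copy(), c
    return best
```

In practice the per-move cost can be updated incrementally from ||y - Hx||^2 rather than recomputed, which is what keeps the per-iteration complexity low in large dimensions.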
Tapered and dispersion-managed (DM) silicon nanophotonic waveguides are investigated for the generation of optimal ultra-broadband supercontinuum (SC). DM waveguides are structures with a longitudinally dependent group velocity dispersion that results from varying the waveguide width along the propagation distance. For the generation of optimal SC, a genetic algorithm has been used to find the best dispersion map; a schematic sketch of such a search is given below. This allows for the generation of highly coherent supercontinua that span 1.14 octaves, from 1300 nm to 2860 nm, and 1.25 octaves, from 1200 nm to 2870 nm, at the -20 dB level for the tapered and DM waveguides respectively, for a 2 $\mu$m, 200 fs, 6.4 pJ input pulse. The comparison of these two structures with the usually considered optimal fixed-width waveguide shows that the SC is broader and flatter in the more elaborate DM waveguide, while high coherence is ensured by the varying dispersion.
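A minimal sketch of a genetic search over dispersion maps (the genome encoding, population sizes, mutation scale, and the `fitness` callback, e.g. the -20 dB SC bandwidth returned by a pulse-propagation solver, are all illustrative assumptions):

```python
import numpy as np

def genetic_dispersion_map(fitness, n_segments=10, pop=30, gens=50,
                           w_min=600, w_max=1200, seed=0):
    """Schematic genetic algorithm for the dispersion-map search: a genome is
    the waveguide width (nm) in each longitudinal segment, and `fitness` is a
    user-supplied score (the propagation solver itself is outside this sketch)."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(w_min, w_max, (pop, n_segments))     # initial population
    for _ in range(gens):
        scores = np.array([fitness(g) for g in P])
        order = np.argsort(scores)[::-1]
        elite = P[order[: pop // 4]]                     # keep the best quarter
        kids = []
        while len(kids) < pop - len(elite):
            a, b = elite[rng.integers(len(elite), size=2)]
            cut = rng.integers(1, n_segments)            # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            m = rng.random(n_segments) < 0.1             # 10% mutation rate
            child[m] += rng.normal(0, 20, m.sum())       # jitter widths ~20 nm
            kids.append(np.clip(child, w_min, w_max))
        P = np.vstack([elite] + kids)
    return P[np.argmax([fitness(g) for g in P])]
```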
We perform a fully self-consistent 3-D numerical simulation for a compressible, dissipative magneto-plasma driven by large-scale perturbations that contain a fairly broad spectrum of characteristic modes, ranging from the largest scales through intermediate scales down to the smallest scales, where the energy of the system is dissipated by collisional (Ohmic) and viscous dissipation. Our simulation includes nonlinear interactions amongst a wide range of fluctuations that are initialized with random spectral amplitudes, leading to a cascade of spectral energy across the inertial range. It accounts for large-scale as well as small-scale perturbations that may be induced by the background plasma fluctuations, and for the non-adiabatic exchange of energy, by which energy injected into the energy-containing modes, or randomly injected by perturbations, migrates toward and is dissipated at the smaller scales. Besides demonstrating the comparative decays of the total energy and the energy dissipation rate, our results show the existence of a perpendicular component of the current, clearly confirming that the self-organized state is not force-free.
We show that an angular analysis of B -> V1 V2 decays yields numerous tests for new physics in the decay amplitudes. Many of these new-physics observables are nonzero even if the strong phase differences vanish. For certain observables, neither time-dependent measurements nor tagging is necessary. Should a signal for new physics be found, one can place a lower limit on the size of the new-physics parameters, as well as bound its effect on the measurement of the B^0--Bbar^0 mixing phase.
We study conformal field theories with Yukawa interactions in dimensions between 2 and 4; they provide UV completions of the Nambu-Jona-Lasinio and Gross-Neveu models which have four-fermion interactions. We compute the sphere free energy and certain operator scaling dimensions using dimensional continuation. In the Gross-Neveu CFT with $N$ fermion degrees of freedom we obtain the first few terms in the $4-\epsilon$ expansion using the Gross-Neveu-Yukawa model, and the first few terms in the $2+\epsilon$ expansion using the four-fermion interaction. We then apply Pade approximants to produce estimates in $d=3$. For $N=1$, which corresponds to one 2-component Majorana fermion, it has been suggested that the Yukawa theory flows to a ${\cal N}=1$ supersymmetric CFT. We provide new evidence that the $4-\epsilon$ expansion of the $N=1$ Gross-Neveu-Yukawa model respects the supersymmetry. Our extrapolations to $d=3$ appear to be in good agreement with the available results obtained using the numerical conformal bootstrap. Continuation of this CFT to $d=2$ provides evidence that the Yukawa theory flows to the tri-critical Ising model. We apply a similar approach to calculate the sphere free energy and operator scaling dimensions in the Nambu-Jona-Lasinio-Yukawa model, which has an additional $U(1)$ global symmetry. For $N=2$, which corresponds to one 2-component Dirac fermion, this theory has an emergent supersymmetry with 4 supercharges, and we provide new evidence for this.
Aiming to understand real-world hierarchical networks whose degree distributions are neither power law nor exponential, we construct a hybrid clique network that includes both homogeneous and inhomogeneous parts, and introduce an inhomogeneity parameter to tune the ratio between the homogeneous part and the inhomogeneous one. We perform Monte-Carlo simulations to study various properties of such a network, including the degree distribution, the average shortest-path-length, the clustering coefficient, the clustering spectrum, and the communicability.
We study the analyticity of the value function in optimal investment with expected utility from terminal wealth and the relation to stochastically dominant financial models. We identify both a class of utilities and a class of semi-martingale models for which we establish analyticity. Specifically, these utilities have completely monotonic inverse marginals, while the market models have a maximal element in the sense of infinite-order stochastic dominance. We construct two counterexamples, themselves of independent interest, which show that analyticity fails if either the utility or the market model does not belong to the respective special class. We also provide explicit formulas for the derivatives, of all orders, of the value functions as well as their optimizers. Finally, we show that for the set of supermartingale deflators, stochastic dominance of infinite order is equivalent to the apparently stronger dominance of second order.
Here we propose a novel transparency effect in cylindrical all-dielectric metamaterials. We show that cancellation of multipole moments of the same kind leads to almost zero radiation losses, owing to the counter-directed multipolar moments in the metamolecule. Nullifying the multipoles, chiefly the dipoles, and suppressing the higher multipoles results in ideal transmission of the incident wave through the designed metamaterial. The observed effect could pave the way to a new generation of light-manipulating transparent metadevices such as filters, waveguides, cloaks and more.
An outstanding image-text retrieval model depends on high-quality labeled data. While the builders of existing image-text retrieval datasets strive to ensure that each caption matches its linked image, they cannot prevent a caption from also fitting other images. We observe that such a many-to-many matching phenomenon is quite common in the widely used retrieval datasets, where one caption can describe up to 178 images. This large amount of unlabeled matches not only confuses the model in training but also weakens the evaluation accuracy. Inspired by visual and textual entailment tasks, we propose a multi-modal entailment classifier to determine whether a sentence is entailed by an image plus its linked captions. Subsequently, we revise the image-text retrieval datasets by adding these entailed captions as additional weak labels of an image, and develop a universal variable learning rate strategy to teach a retrieval model to distinguish the entailed captions from other negative samples. In experiments, we manually annotate an entailment-corrected image-text retrieval dataset for evaluation. The results demonstrate that the proposed entailment classifier achieves about 78% accuracy and consistently improves the performance of image-text retrieval baselines.
We study theoretically the edge transport of a fractional quantum Hall liquid in the presence of a quantum dot inside the Hall bar, with well-controlled electron density and Landau level filling factor \nu, and show that such transport studies can help reveal the nature of the fractional quantum Hall liquid. In our first example we study the \nu=1/3 and \nu=2/3 liquids in the presence of a \nu=1 quantum dot. When the quantum dot becomes large, its edge states join those of the Hall bar to reconstruct the edge-state configuration. Taking randomness around the edges into account, we find that in the disorder-irrelevant phase the two-terminal conductance of the original \nu=1/3 system vanishes at zero temperature, while that of the \nu=2/3 case remains finite. This distinction is rooted in the fact that the \nu=2/3 state is built upon the \nu=1 state. In the disorder-dominated phase, the two-terminal conductance of the \nu=1/3 system is (1/5)\frac{e^2}{h} and that of the \nu=2/3 system is (1/2)\frac{e^2}{h}. We further apply the same idea to the \nu=5/2 system, which realizes either the Pfaffian or the anti-Pfaffian state. In this case we study the edge transport in the presence of a central \nu=3 quantum dot. If the quantum dot is large enough for its edge states to join those of the Hall bar, in the disorder-irrelevant phase the total two-terminal conductance in the Pfaffian case is G^{Pf}_{tot}\rightarrow 2 \frac{e^2}{h} while that in the anti-Pfaffian case is higher but not universal, G^{aPf}_{tot}> 2 \frac{e^2}{h}. This difference can be used to determine which of these two states is realized at \nu=5/2. In the disorder-dominated phase, however, the total two-terminal conductances of these two systems are exactly the same, G^{Pf/aPf}_{tot}=(7/3)\frac{e^2}{h}.
We find that a system of particles interacting through a simple isotropic potential with a softened core is able to exhibit a rich phase behavior including: a liquid-liquid phase transition in the supercooled phase, as has been suggested for water; a gas-liquid-liquid triple point; a freezing line with anomalous reentrant behavior. The essential ingredient leading to these features is that the investigated potential gives rise to two effective core radii.
Faint high-latitude carbon stars are rare objects commonly thought to be distant, luminous giants. For this reason they are often used to probe the structure of the Galactic halo; however, more accurate investigation of photometric and spectroscopic surveys has revealed an increasing percentage of nearby objects with luminosities of main sequence stars. Aiming to clarify the nature of the ten carbon star candidates present in the General Catalog of the Second Byurakan Survey, we analyzed new optical spectra and photometry and used astronomical databases available on the web. We verified that two stars are N-type giants already confirmed by other surveys. We found that four candidates are M-type stars and confirmed the carbon nature of the remaining four; the characteristics of three of them are consistent with an early CH giant type. The fourth candidate, SBS1310+561, identified with a high-proper-motion star, is a rare dwarf carbon star showing emission lines in its optical spectrum. We estimated absolute magnitudes and distances for the dwarf carbon star and the three CH stars. Our limited sample confirms the increasing evidence that spectroscopy or colour alone are not conclusive luminosity discriminants for CH-type carbon stars.
We construct Bridgeland stability conditions on the derived category of smooth quasi-projective Deligne-Mumford surfaces whose coarse moduli spaces have ADE singularities. This unifies the construction for smooth surfaces and Bridgeland's work on Kleinian singularities. The construction hinges on an orbifold version of the Bogomolov-Gieseker inequality for slope semistable sheaves on the stack, and makes use of the To\"en-Hirzebruch-Riemann-Roch theorem.
The Yang-Mills equations generalize Maxwell's equations to nonabelian gauge groups, and a quantity analogous to charge is locally conserved by the nonlinear time evolution. Christiansen and Winther observed that, in the nonabelian case, the Galerkin method with Lie algebra-valued finite element differential forms appears to conserve charge globally but not locally, not even in a weak sense. We introduce a new hybridization of this method, give an alternative expression for the numerical charge in terms of the hybrid variables, and show that a local, per-element charge conservation law automatically holds.
Episodic accretion is an important process in the evolution of young stars and their environment. The observed strong luminosity bursts of young stellar objects likely have a long-lasting impact on the chemical evolution of the disk and envelope structure. We want to investigate observational signatures of the chemical evolution in the post-burst phase for embedded sources; with such signatures it is possible to identify targets that experienced a recent luminosity burst. We present a new model for episodic accretion chemistry based on the 2D radiation thermo-chemical disk code ProDiMo, which we have extended with a proper treatment of envelope structures. For a representative Class I model, we calculated the chemical abundances in the post-burst phase and produced synthetic observables such as intensity maps and radial profiles. During a burst, many chemical species, like CO, sublimate from the dust surfaces. As the burst ends they freeze out again (the post-burst phase). This freeze-out happens from the inside out, owing to the radial density gradient in the disk and envelope structure, and produces clear observational signatures in spectral line emission, such as rings and distinct features in the slope of radial intensity profiles. We fitted synthetic C18O J=2-1 observations with one- and two-component fits and find that post-burst images are matched much better by the latter. Comparing the quality of such fits allows identification of post-burst targets in a model-independent way. Our models confirm that it is possible to identify post-burst objects from spatially resolved CO observations. However, to derive proper statistics, such as burst frequencies, from observations, it is important to consider aspects like the inclination and structure of the target, as well as the dust properties, since these have a significant impact on the freeze-out timescale.
Understanding nonthermal particle generation, transport, and escape in solar flares requires detailed quantification of the particle evolution in the realistic 3D domain where the flare takes place. Rather surprisingly, apart from the standard flare scenario and integral characteristics of the nonthermal electrons, not much is known about the actual evolution of nonthermal electrons in the 3D spatial domain. This paper begins to remedy this situation by creating sets of evolving 3D models whose synthesized emission matches the evolving observed emission. Here we investigate two contrasting flares: a dense, "coronal-thick-target" flare, SOL2002-04-12T17:42, that contained a single flare loop observed in both microwave and X-ray, and a more complex flare, SOL2015-06-22T17:50, that required at least four distinct flaring loops to consistently reproduce the microwave and X-ray emission. Our analysis reveals differing evolution patterns of the nonthermal electrons in the dense and tenuous loops; both, however, imply a central role for resonant wave-particle interaction with turbulence. These results offer new constraints for theories and models of particle acceleration and transport in solar flares.
We report a temperature-dependent Raman scattering investigation of the thin-film rare earth nickelates SmNiO3, NdNiO3 and Sm0.60Nd0.40NiO3, which present a metal-to-insulator (MI) transition at TMI and an antiferromagnetic-paramagnetic Neel transition at TN. Our results provide evidence that all investigated samples present a structural phase transition at TMI, but the Raman signature across TMI is significantly different for NdNiO3 (TMI = TN) compared to SmNiO3 and Sm0.60Nd0.40NiO3 (TMI ≠ TN). In particular, the paramagnetic-insulator phase (TN < T < TMI) in SmNiO3 and Sm0.60Nd0.40NiO3 is characterized by a pronounced softening of one particular phonon band around 420 cm-1. This signature is unusual and points to an important and continuous change in the distortion of the NiO6 octahedra (and thus the Ni-O bonding) which stabilizes upon cooling at the magnetic transition. The observed behaviour might well be a general feature of all rare earth nickelates with TMI ≠ TN and illustrates an intriguing coupling mechanism in the TMI > T > TN regime.
We present the relationship between the Schur function and the projective Schur functions for the general case, where no restrictions are imposed on either the argument or the partition with which the Schur function is labeled. This work is a continuation of our joint works with John Harnad.
As protein folding is an NP-complete problem, artificial intelligence tools such as neural networks and genetic algorithms are used to attempt to predict the 3D shape of an amino-acid sequence. Underlying these attempts is the supposition that the folding process is predictable. However, to the best of our knowledge, this important assumption has neither been proven nor studied. In this paper, the topological dynamics of protein folding is evaluated. It is mathematically established that protein folding in the 2D hydrophobic-hydrophilic (HP) square lattice model is chaotic in the sense of Devaney. Consequences for both structure prediction and biology are then outlined.
We provide a combinatorial algorithm for constructing the stable Auslander-Reiten component containing a given indecomposable module of a symmetric special biserial algebra using only information from its underlying Brauer graph. We also show that the structure of the Auslander-Reiten quiver is closely related to the distinct Green walks around the Brauer graph and detail the relationship between the precise shape of the stable Auslander-Reiten components for domestic Brauer graph algebras and their underlying graph. Furthermore, we show that the specific component containing a given simple or indecomposable projective module for any Brauer graph algebra is determined by the edge in the Brauer graph associated to the module.
We study the evolution of the dynamic properties of the BCS/BEC (Bose-Einstein condensate) crossover in a relativistic superfluid, as well as its thermodynamics. We put particular focus on the change in the soft-mode dynamics throughout the crossover, and find that three different effective theories describe it: the time-dependent Ginzburg-Landau (TDGL) theory in the BCS regime, the Gross-Pitaevskii (GP) theory in the BEC regime, and the relativistic Gross-Pitaevskii (RGP) equation in the relativistic BEC (RBEC) regime. Based on these effective theories, we discuss how the physical nature of the soft mode changes across the crossover. We also discuss some fluid-dynamic aspects of the crossover using these effective theories, with particular focus on the shear viscosity. In addition to the study of soft modes, we show that a ``quantum fluctuation'' is present in the relativistic fermion system, in contrast to the usual Nozi\`eres--Schmitt-Rink (NSR) theory. We clarify the physical meaning of this quantum fluctuation and find that it drastically increases the critical temperature in the weak-coupling BCS regime.
In this paper we develop an artificial initial-boundary value problem for the high-order heat equation in a bounded domain $\Omega$. We find a unique classical solution of this problem in explicit form and show that the solution of the artificial initial-boundary value problem coincides with the solution of the infinite-domain problem (Cauchy problem) in $\Omega$.
We study the dynamics of Abelian gauge fields invariant under transverse diffeomorphisms (TDiff) in cosmological contexts. We show that, in the geometric optics approximation, very much as for Diff-invariant theories, the corresponding massless gauge bosons propagate along null geodesics and particle number is conserved. In addition, the polarization vectors are orthogonal to the propagation direction and the physical (transverse projection) polarization is parallel transported along the geodesics. We also consider TDiff-invariant Dirac spinors, study their coupling to the gauge fields, and analyze the conditions required to avoid violations of Einstein's Equivalence Principle. The contributions of the gauge field to the energy-momentum tensor are also analyzed. We find that, in general, the breaking of Diff invariance makes the electric and magnetic parts of the vector field gravitate in different ways. In the sub-Hubble regime we recover the standard radiation-like behaviour of the energy density; in the super-Hubble regime, however, the behaviour is totally different from the Diff case, opening up a wide range of possibilities for cosmological model building. In particular, possible effects on the evolution of large-scale primordial magnetic fields are discussed.
In this work, we develop a novel Monte Carlo method for solving the electromagnetic scattering problem. The method is based on a formal solution of the scattering problem as a modified Born series whose coefficients are found by a conformal transformation. The terms of the Born series are approximated by sampling random elements of its matrix representation, computed by the Method of Moments. Unlike other techniques such as the Fast Multipole Method, this Monte Carlo method does not require communication between processors, which makes it suitable for large parallel executions.
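As a minimal illustration of the core idea (not the authors' scheme for the Method-of-Moments scattering matrix), the sketch below estimates one component of a Neumann/Born series x = sum_k A^k b by random walks that sample one matrix element per step; the walks are independent, which is the source of the communication-free parallelism:

```python
import numpy as np

def mc_neumann_component(A, b, i, n_walks=20000, p_stop=0.2, seed=0):
    """Monte Carlo estimate of component i of x = sum_k A^k b = (I - A)^-1 b.
    Each walk touches one matrix element per step. Assumes the series
    converges (spectral radius of A below 1); in the abstract, the conformal
    transformation plays the role of enforcing such convergence."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    total = 0.0
    for _ in range(n_walks):
        state, weight, est = i, 1.0, b[i]
        while rng.random() > p_stop:           # geometric series truncation
            nxt = rng.integers(n)              # uniform column proposal
            weight *= A[state, nxt] * n / (1 - p_stop)
            state = nxt
            est += weight * b[state]           # unbiased sample of next term
        total += est
    return total / n_walks

# Sanity check on a small contraction: compare with the direct solve.
# A = 0.3 * np.eye(4) + 0.05; b = np.ones(4)
# print(mc_neumann_component(A, b, 0), np.linalg.solve(np.eye(4) - A, b)[0])
```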
Volumetric crystal structure indexing and orientation mapping are key data processing steps for virtually any quantitative study of spatial correlations between the local chemistry and the microstructure of a material. For electron and X-ray diffraction methods, it is possible to develop indexing tools that compare measured and analytically computed patterns to decode the structure and relative orientation within local regions of interest; consequently, a number of numerically efficient and automated software tools exist to solve these characterisation tasks. For atom probe tomography (APT) experiments, however, the strategy of comparing measured and analytically computed patterns is less robust because many APT datasets contain substantial noise. Given that sufficiently general predictive models for such noise remain elusive, crystallography tools for APT face several limitations: their robustness to noise, and therefore their capability to identify and distinguish different crystal structures and orientations, is limited. In addition, the tools are sequential and demand substantial manual interaction. In combination, this makes robust uncertainty quantification in automated high-throughput studies of the latent crystallographic information a difficult task with APT data. To improve the situation, we review the existing methods and discuss how they link to those in the diffraction communities. On this basis we modify some of the APT methods to yield more robust descriptors of the atomic arrangement. We report how this enables the development of an open-source software tool for strongly scaling, automated identification of crystal structures and mapping of crystal orientations in nanocrystalline APT datasets with multiple phases.
The specific heat $C$ of the single-layer cuprate superconductor HgBa$_2$CuO$_{4 + \delta}$ was measured in an underdoped crystal with $T_{\rm c} = 72$ K at temperatures down to $2$ K in magnetic fields up to $35$ T, a field large enough to suppress superconductivity at that doping ($p \simeq 0.09$). In the normal state at $H = 35$ T, a residual linear term of magnitude $\gamma = 12 \pm 2$ mJ/K$^2$mol is observed in $C/T$ as $T \to 0$, a direct measure of the electronic density of states. This high value of $\gamma$ has two major implications. First, it is significantly larger than the value measured in overdoped cuprates outside the pseudogap phase ($p >p^\star$), such as La$_{2-x}$Sr$_x$CuO$_4$ and Tl$_2$Ba$_2$CuO$_{6 + \delta}$ at $p \simeq 0.3$, where $\gamma \simeq 7$ mJ/K$^2$mol. Given that the pseudogap causes a loss of density of states, and assuming that HgBa$_2$CuO$_{4 + \delta}$ has the same $\gamma$ value as other cuprates at $p \simeq 0.3$, this implies that $\gamma$ in HgBa$_2$CuO$_{4 + \delta}$ must peak between $p \simeq 0.09$ and $p \simeq 0.3$, namely at (or near) the critical doping $p^\star$ where the pseudogap phase is expected to end ($p^\star\simeq 0.2$). Secondly, the high $\gamma$ value implies that the Fermi surface must consist of more than the single electron-like pocket detected by quantum oscillations in HgBa$_2$CuO$_{4 + \delta}$ at $p \simeq 0.09$, whose effective mass $m^\star= 2.7\times m_0$ yields only $\gamma = 4.0$ mJ/K$^2$mol. This missing mass imposes a revision of the current scenario for how pseudogap and charge order respectively transform and reconstruct the Fermi surface of cuprates.
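For reference, the quoted conversion from quasiparticle mass to specific-heat coefficient follows from the standard Sommerfeld relation for a quasi-2D band; a sketch assuming one CuO$_2$ plane per unit cell with in-plane lattice constant $a \approx 3.88$ Å:

```latex
% Sommerfeld coefficient of a single quasi-2D band, per mole of formula
% units (one CuO2 plane per cell, in-plane lattice constant a):
\gamma \;=\; \frac{\pi N_{\mathrm{A}} k_{\mathrm{B}}^{2} a^{2}}{3\hbar^{2}}\, m^{\star}
\;\approx\; 1.5\ \mathrm{\frac{mJ}{K^{2}\,mol}} \times \frac{m^{\star}}{m_{0}},
\qquad
m^{\star} = 2.7\, m_{0} \;\Rightarrow\; \gamma \approx 4.0\ \mathrm{mJ/K^{2}\,mol}.
```

By the same relation, the measured $\gamma = 12$ mJ/K$^2$mol corresponds to a total effective mass of roughly $8\,m_0$, i.e. about three pockets of the observed mass, which is the sense in which mass is "missing".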
Wolf-Rayet (WR) stars have a severe impact on their environments owing to their strong ionizing radiation fields and powerful stellar winds. Since these winds are considered to be driven by radiation pressure, it is theoretically expected that the degree of the wind mass-loss depends on the initial metallicity of WR stars. Following our comprehensive studies of WR stars in the Milky Way, M31, and the LMC, we derive stellar parameters and mass-loss rates for all seven putatively single WN stars known in the SMC. Based on these data, we discuss the impact of a low-metallicity environment on the mass loss and evolution of WR stars. The quantitative analysis of the WN stars is performed with the Potsdam Wolf-Rayet (PoWR) model atmosphere code. The physical properties of our program stars are obtained from fitting synthetic spectra to multi-band observations. In all SMC WN stars, a considerable surface hydrogen abundance is detectable. The majority of these objects have stellar temperatures exceeding 75 kK, while their luminosities range from 10^5.5 to 10^6.1 Lsun. The WN stars in the SMC exhibit on average lower mass-loss rates and weaker winds than their counterparts in the Milky Way, M31, and the LMC. By comparing the mass-loss rates derived for WN stars in different Local Group galaxies, we conclude that a clear dependence of the wind mass-loss on the initial metallicity is evident, supporting the current paradigm that WR winds are driven by radiation. A metallicity effect on the evolution of massive stars is obvious from the HRD positions of the SMC WN stars at high temperatures and high luminosities. Standard evolution tracks are not able to reproduce these parameters and the observed surface hydrogen abundances. Homogeneous evolution might provide a better explanation for their evolutionary past.
This article reviews some recent developments for new cooling technologies in the fields of condensed matter physics and cold gases, both from an experimental and theoretical point of view. The main idea is to make use of distinct many-body interactions of the system to be cooled which can be some cooling stage or the material of interest itself, as is the case in cold gases. For condensed matter systems, we discuss magnetic cooling schemes based on a large magnetocaloric effect as a result of a nearby quantum phase transition and consider effects of geometrical frustration. For ultracold gases, we review many-body cooling techniques, such as spin-gradient and Pomeranchuk cooling, which can be applied in the presence of an optical lattice. We compare the cooling performance of these new techniques with that of conventional approaches and discuss state-of-the-art applications.
Let $q$ be a nontrivial odd prime power, and let $n \ge 2$ be a natural number with $(n,q) \ne (2,3)$. We characterize the groups $PSL_n(q)$ and $PSU_n(q)$ by their $2$-fusion systems. This contributes to a programme of Aschbacher aiming at a simplified proof of the classification of finite simple groups.