We explore the extinction/reddening of ~35,000 uniformly selected quasars with 0<z<5.3 in order to better understand their intrinsic optical/ultraviolet spectral energy distributions. Using rest-frame optical-UV photometry taken from the Sloan Digital Sky Survey's (SDSS) 7th data release, cross-matched to WISE in the mid-infrared, 2MASS and UKIDSS in the near-infrared, and GALEX in the UV, we isolate outliers in the color distribution and find them well described by an SMC-like reddening law. A hierarchical Bayesian model with a Markov Chain Monte Carlo sampling method is used to find distributions of power-law indices and E(B-V) consistent with both the broad absorption line (BAL) and non-BAL samples. We find that, of the ugriz color-selected type 1 quasars in SDSS, 2.5% (13%) of the non-BAL (BAL) sample are consistent with E(B-V)>0.1 and 0.1% (1.3%) with E(B-V)>0.2. Simulations show both populations of quasars are intrinsically bluer than the mean composite, with a mean spectral index (${\alpha}_{\lambda}$) of -1.79 (-1.83). The emission and absorption-line properties of both samples reveal that quasars with intrinsically red continua have narrower Balmer lines and stronger ionizing spectral lines, the latter indicating a harder continuum in the extreme-UV and the former pointing to differences in black hole mass and/or orientation.
Mining for Dust in Type 1 Quasars
This paper presents an efficient Semidefinite Programming (SDP) solution for community detection that incorporates non-graph data, known in this context as side information. SDP is an efficient solution for standard community detection on graphs. We formulate a semidefinite relaxation for the maximum likelihood estimation of node labels, subject to observing both graph and non-graph data. This formulation is distinct from the SDP solution of standard community detection, but maintains its desirable properties. We calculate the exact recovery threshold for three types of side information: partially revealed labels, noisy labels, and multiple observations (features) per node with arbitrary but finite cardinality. We find that SDP has the same exact recovery threshold in the presence of side information as maximum likelihood with side information. Thus, the methods developed herein are computationally efficient as well as asymptotically accurate for community detection in the presence of side information. Simulations show that the asymptotic results of this paper can also shed light on the performance of SDP for graphs of modest size.
Semidefinite Programming for Community Detection with Side Information
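The abstract above does not spell out the relaxation; as a hedged illustration, the standard graph-only SDP relaxation for two balanced communities, which a side-information formulation would extend with extra terms, can be written as follows (here $A$ is the adjacency matrix and $X$ relaxes $xx^{\top}$ for label vectors $x \in \{+1,-1\}^n$):

```latex
% Standard SDP relaxation for two balanced communities
% (graph-only; a side-information variant adds further terms).
\begin{aligned}
\max_{X} \quad & \langle A, X \rangle \\
\text{s.t.} \quad & X \succeq 0, \\
& X_{ii} = 1, \quad i = 1, \dots, n, \\
& \langle \mathbf{1}\mathbf{1}^{\top}, X \rangle = 0 .
\end{aligned}
```

The diagonal constraints encode $x_i^2 = 1$, and the balance constraint $\langle \mathbf{1}\mathbf{1}^{\top}, X \rangle = 0$ encodes $\mathbf{1}^{\top}x = 0$; exact recovery holds when the optimizer is the rank-one matrix $x^{*}x^{*\top}$ of the true labels.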
The main goal of this note is to establish a nonexistence result for global solutions to a semi-linear structurally damped wave equation, and to determine the critical exponent.
A nonexistence result for global solutions to a semi-linear structurally damped wave model
We consider heavy sterile neutrinos $\nu_s$, with mass in the range 10 MeV $\lesssim m_s \lesssim m_{\pi} \sim 135$ MeV, thermally produced in the early universe and freezing out after the QCD phase transition. The existence of these neutrinos may alter the value of the effective number of neutrino species $N_{\rm eff}$, measured by the cosmic microwave background (CMB), and the ${}^4$He production during the BBN. We provide a detailed account of the solution of the relevant Boltzmann equations and we identify the parameter space constrained by current Planck satellite data and forecast the parameter space probed by future Stage-4 ground-based CMB (CMB-S4) observations.
Cosmological constraints on heavy sterile neutrinos
Weinberg et al. calculated the anthropic likelihood of the cosmological constant using a model assuming that the number of observers is proportional to the total mass of gravitationally collapsed objects, with mass greater than a certain threshold, at $t \rightarrow \infty$. We argue that Weinberg's model is biased toward small $\Lambda$, and to try to avoid this bias we modify his model in a way that the number of observers is proportional to the number of collapsed objects, with mass and time equal to certain preferred mass and time scales. Compared to Weinberg's model, this model gives a lower anthropic likelihood of $\Lambda_0$ ($T_+(\Lambda_0) \sim 5\%$). On the other hand, the anthropic likelihood of the primordial density perturbation amplitude from this model is high, while the likelihood from Weinberg's model is low. Furthermore, observers will be affected by the history of the collapsed object, and we introduce a method to calculate the anthropic likelihoods of $\Lambda$ and $Q$ from the mass history using the extended Press-Schechter formalism. The anthropic likelihoods for $\Lambda$ and $Q$ from this method are similar to those from our single mass constraint model, but, unlike models using the single mass constraint, which always have degeneracies between $\Lambda$ and $Q$, the results from models using the mass history are robust even if we allow both $\Lambda$ and $Q$ to vary. In the case of Weinberg's flat prior distribution of $\Lambda$ (pocket-based multiverse measure), our mass history model gives $T_+(\Lambda_0) \sim 10\%$, while the scale factor cutoff measure and the causal patch measure give $T_+(\Lambda_0) \geq 30\%$.
Anthropic Likelihood for the Cosmological Constant and the Primordial Density Perturbation Amplitude
We study the effects of quenched disorder on coupled two-dimensional arrays of Luttinger liquids (LL) as a model for stripes in high-T_c compounds. In the framework of a renormalization-group analysis, we find that weak inter-LL charge-density-wave couplings are always irrelevant, in contrast to the pure system. By varying either the disorder strength or the intra- or inter-LL interactions, the system can undergo a delocalization transition between an insulator and a novel, strongly anisotropic metallic state with LL-like transport. This state is characterized by short-ranged charge-density-wave order, while the superconducting order is quasi-long-ranged along the stripes and short-ranged in the transverse direction.
Delocalization in Coupled Luttinger Liquids with Impurities
Using a first-principles approach we calculate the acoustic electron-phonon couplings in graphene for the transverse (TA) and longitudinal (LA) acoustic phonons. Analytic forms of the coupling matrix elements valid in the long-wavelength limit are found to give an almost quantitative description of the first-principles based matrix elements even at shorter wavelengths. Using the analytic forms of the coupling matrix elements, we study the acoustic phonon-limited carrier mobility for temperatures 0-200 K and high carrier densities of 10^{12}-10^{13} cm^{-2}. We find that the intrinsic effective acoustic deformation potential of graphene is \Xi_eff = 6.8 eV and that the temperature dependence of the mobility \mu ~ T^{-\alpha} increases beyond an \alpha = 4 dependence even in the absence of screening when the full coupling matrix elements are considered. The large disagreement between our calculated deformation potential and those extracted from experimental measurements (18-29 eV) indicates that additional or modified acoustic phonon-scattering mechanisms are at play in experimental situations.
Unraveling the acoustic electron-phonon interaction in graphene
Ceramics of the A_{2}FeReO_{6} double perovskite have been prepared and studied for A=Ba and Ca. Ba_{2}FeReO_{6} has a cubic structure (Fm3m) with $a \approx 8.0854(1)$ \AA, whereas Ca_{2}FeReO_{6} has a distorted monoclinic symmetry (P2_1/n) with $a \approx 5.396(1)$ \AA, $b \approx 5.522(1)$ \AA, $c \approx 7.688(2)$ \AA{} and $\beta = 90.4^{\circ}$. The barium compound is metallic from 5 K to 385 K, i.e. no metal-insulator transition is seen up to 385 K, and the calcium compound is semiconducting from 5 K to 385 K. Magnetization measurements show a ferrimagnetic behavior for both materials, with T_{c}=315 K for Ba_{2}FeReO_{6} and above 385 K for Ca_{2}FeReO_{6}. A specific heat measurement on the barium compound gave an electron density of states at the Fermi level of N(E_{F}) = 6.1$\times 10^{24}$ eV$^{-1}$mole$^{-1}$. At 5 K, we observed a negative magnetoresistance of 10% in a magnetic field of 5 T, but only for Ba_{2}FeReO_{6}. Electrical, thermal and magnetic properties are discussed and compared to the analogous compounds Sr_{2}Fe(Mo,Re)O_{6}.
Properties of the ferrimagnetic double-perovskite A_{2}FeReO_{6} (A=Ba and Ca)
We construct low-energy Goldstone superfield actions describing various patterns of the partial spontaneous breakdown of two-dimensional N=(1,1), N=(2,0) and N=(2,2) supersymmetries, with the main focus on the last case. These nonlinear actions admit a representation in the superspace of the unbroken supersymmetry as well as in a superspace of the full supersymmetry. The natural setup for implementing the partial breaking in a self-consistent way is provided by the appropriate central extensions of D=2 supersymmetries, with the central charges generating shift symmetries on the Goldstone superfields. The Goldstone superfield actions can be interpreted as manifestly world-sheet supersymmetric actions in the static gauge of some superstrings and D1-branes in D=3 and D=4 Minkowski spaces. As an essentially new example, we elaborate on the action representing the 1/4 partial breaking pattern N=(2,2) -> N=(1,0).
Partial spontaneous breaking of two-dimensional supersymmetry
In the literature on projection-based nonlinear model order reduction for fluid dynamics problems, it is often claimed that due to modal truncation, a projection-based reduced-order model (PROM) does not resolve the dissipative regime of the turbulent energy cascade and therefore is numerically unstable. Efforts at addressing this claim have ranged from attempting to model the effects of the truncated modes to enriching the classical subspace of approximation in order to account for the truncated phenomena. This paper challenges this claim. Exploring the relationship between projection-based model order reduction and semi-discretization, and using numerical evidence from three relevant flow problems, it argues that the real culprit behind most if not all reported numerical instabilities of PROMs for turbulence and convection-dominated turbulent flow problems is the Galerkin framework that has been used for constructing the PROMs. The paper also shows that, alternatively, a Petrov-Galerkin framework can be used to construct numerically stable and accurate PROMs for convection-dominated laminar as well as turbulent flow problems, without resorting to additional closure models or tailoring of the subspace of approximation. It also shows that such alternative PROMs deliver significant speedup factors.
On the stability of projection-based model order reduction for convection-dominated laminar and turbulent flows
This PhD thesis mainly deals with deformations of locally anti-de Sitter black holes, focusing in particular on BTZ black holes. We first study the generic rotating and (extended) non-rotating BTZ black holes within a pseudo-Riemannian symmetric space framework, emphasizing the role played by solvable subgroups of SL(2,R) in the black hole structure, and derive their global geometry in a group-theoretical way. We analyse how these observations transpose to the case of higher-dimensional locally AdS black holes. We then show that there exists, in SL(2,R), a family of twisted conjugacy classes which give rise to winding symmetric WZW D1-branes in a BTZ black hole background. The term "deformation" is then considered in two distinct ways. On the one hand, we deform the algebra of functions on the branes in the sense of (strict) deformation quantization, giving rise to a "noncommutative black hole". In the same context, we investigate the question of invariant deformations of the hyperbolic plane and present explicit formulae. On the other hand, we explore the moduli space of the (orbifolded) SL(2,R) WZW model by studying its marginal deformations, yielding namely a new class of exact black string solutions in string theory. These deformations also allow us to relate the D1-branes in BTZ black holes to D0-branes in the 2D black hole. A fair proportion of this thesis consists of (hopefully) pedagogical short introductions to various subjects: deformation quantization, string theory, WZW models, symmetric spaces, symplectic and Poisson geometry.
Deformations of anti-de Sitter black holes
PatentTransformer is our codename for patent text generation based on Transformer models. Our goal is "Augmented Inventing." In this second version, we leverage more of the structural metadata in patents. The structural metadata includes the patent title, abstract, and dependent claims, in addition to the independent claims used previously. The metadata controls what kind of patent text the model generates. We also leverage the relations between metadata fields to build a text-to-text generation flow, for example, from a few words to a title, the title to an abstract, the abstract to an independent claim, and the independent claim to multiple dependent claims. The text flow can also go backward, because the relations are trained bidirectionally. We release our GPT-2 models trained from scratch and our code for inference so that readers can verify and generate patent text on their own. As for generation quality, we measure it by both ROUGE and the Google Universal Sentence Encoder.
PatentTransformer-2: Controlling Patent Text Generation by Structural Metadata
The tension between measurements of the Hubble constant obtained at different redshifts may provide a hint of new physics active in the relatively early universe, around the epoch of matter-radiation equality. A leading paradigm to resolve the tension is a period of early dark energy, in which a scalar field contributes a subdominant part of the energy budget of the universe at this time. This scenario faces significant fine-tuning problems which can be ameliorated by a non-trivial coupling of the scalar to the standard model neutrinos. These become non-relativistic close to the time of matter-radiation equality, resulting in an energy injection into the scalar that kick-starts the early dark energy phase, explaining its coincidence with this seemingly unrelated epoch. We present a minimal version of this neutrino-assisted early dark energy model, and perform a detailed analysis of its predictions and theoretical constraints. We consider both particle physics constraints -- that the model constitute a well-behaved effective field theory for which the quantum corrections are under control, so that the relevant predictions are within its regime of validity -- and the constraints provided by requiring a consistent cosmological evolution from early through to late times. Our work paves the way for testing this scenario using cosmological data sets.
Neutrino-Assisted Early Dark Energy: Theory and Cosmology
Second and third harmonic signals have usually been generated using nonlinear crystals, but that method suffers from low efficiency at small thicknesses. Metamaterials can be used to generate harmonic signals in small thicknesses. Here, we introduce a new method for amplifying second and third harmonic generation from magnetic metamaterials. We show that by placing a grating structure under the metamaterial, the grating and the metamaterial form a resonator and amplify the resonant behavior of the metamaterial. Therefore, we can generate second and third harmonic signals with high efficiency from this metamaterial-based nonlinear medium.
A highly efficient method for second and third harmonic generation from magnetic metamaterials
We study the Fujita-type conjecture proposed by Popa and Schnell. We obtain an effective bound on the global generation of direct images of pluri-adjoint line bundles on the regular locus. We also obtain an effective bound on the generic global generation for a Kawamata log canonical $\mathbb{Q}$-pair. We use analytic methods such as $L^2$ estimates, $L^2$ extensions and injective theorems of cohomology groups.
On the global generation of direct images of pluri-adjoint line bundles
We estimate the energy reservoir available in the deconfinement phase transition induced collapse of a neutron star to its hybrid star mass twin on the "third family" branch, using a recent equation of state of dense matter. The available energy corresponding to the mass-energy difference between configurations is comparable with energies of the most violent astrophysical burst processes. An observational outcome of such a dynamical transition might be fast radio bursts, specifically a recent example of a FRB with a double-peak structure in its light curve.
Energy bursts from deconfinement in high-mass twin stars
The singly charged $SU(2)_L$ singlet scalar, with its necessarily flavour violating couplings to leptons, lends itself particularly well for an explanation of the Cabibbo Angle Anomaly and of hints for lepton flavour universality violation in $\tau \to \mu\bar \nu\nu$. In a setup addressing both anomalies, we predict loop-induced effects in $\tau\to e\gamma$ and in $\tau \to e\mu\mu$. A recast of ATLAS selectron and smuon searches allows us to derive a coupling-independent lower limit of $\approx 200$ GeV on the mass of the singly charged singlet scalar. At a future $e^+e^-$ collider, dark matter mono-photon searches could provide a complementary set of bounds.
Explaining the Cabibbo Angle Anomaly and Lepton Flavour Universality Violation in Tau Decays With a Singly-Charged Scalar Singlet
We show the uniqueness and existence of the Euler form for a simply-laced generalized root system. This enables us to show that the Coxeter element of a simply-laced generalized root system is admissible in the sense of R.~W.~Carter. As an application, the isomorphism classes of simply-laced generalized root systems with positive definite Cartan forms are classified by Carter's admissible diagrams associated to their Coxeter elements.
On simply-laced generalized root systems
A permutation sequence $(\sigma_n)_{n \in \mathbb{N}}$ is said to be convergent if, for every fixed permutation $\tau$, the density of occurrences of $\tau$ in the elements of the sequence converges. We prove that such a convergent sequence has a natural limit object, namely a Lebesgue measurable function $Z:[0,1]^2 \to [0,1]$ with the additional properties that, for every fixed $x \in [0,1]$, the restriction $Z(x,\cdot)$ is a cumulative distribution function and, for every $y \in [0,1]$, the restriction $Z(\cdot,y)$ satisfies a "mass" condition. This limit process is well-behaved: every function in the class of limit objects is a limit of some permutation sequence, and two of these functions are limits of the same sequence if and only if they are equal almost everywhere. An important ingredient in the proofs is a new model of random permutations, which generalizes previous models and is interesting for its own sake.
Limits of permutation sequences through permutation regularity
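To make the notion of occurrence density concrete, here is a minimal brute-force sketch (not from the paper; the function name and the $O(n^k)$ enumeration are our own illustrative choices) that computes the density of occurrences of a pattern tau in a permutation sigma:

```python
from itertools import combinations
from math import comb

def pattern_density(sigma, tau):
    """Fraction of len(tau)-subsets of positions in sigma whose
    relative order matches tau (the occurrence density of tau)."""
    k = len(tau)
    # Argsort of tau: two sequences of distinct values have the same
    # relative order iff their argsorts coincide.
    tau_rank = tuple(sorted(range(k), key=lambda i: tau[i]))
    hits = 0
    for idx in combinations(range(len(sigma)), k):
        vals = [sigma[i] for i in idx]
        if tuple(sorted(range(k), key=lambda i: vals[i])) == tau_rank:
            hits += 1
    return hits / comb(len(sigma), k)

print(pattern_density([2, 4, 1, 3], [1, 2]))  # → 0.5 (3 of 6 pairs increase)
```

Convergence of a sequence $(\sigma_n)$ then means that `pattern_density(sigma_n, tau)` converges for every fixed pattern `tau`.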
Layered hybrid perovskites have attracted much attention in recent years due to their emergent physical properties and exceptional functional performances, but the coexistence of lattice order and structural disorder severely hinders our understanding of these materials. One unsolved problem regards how the lattice dynamics are affected by the dimensional engineering of the inorganic frameworks and the interaction with the molecular moieties. Here, we address this question by using a combination of high-resolution spontaneous Raman scattering, high-field terahertz spectroscopy, and molecular dynamics simulations. This approach enables us to reveal the structural vibrations and disorder in and out of equilibrium and provides surprising observables that differentiate single- and double-layered perovskites. While no distinct vibrational coherence is observed in double-layer perovskites, we discover that an off-resonant terahertz pulse can selectively drive a long-lived coherent phonon mode through a two-photon process in the single-layered system. This difference highlights the dramatic change in the lattice environment as the dimension is reduced. The present findings pave the way for the ultrafast structural engineering of hybrid lattices as well as for developing high-speed optical modulators based on layered perovskites.
Discovery of enhanced lattice dynamics in a single-layered hybrid perovskite
$\eta$ Car is a massive, eccentric binary with a rich observational history. We obtained the first high-cadence, high-precision light curves with the BRITE-Constellation nanosatellites over 6 months in 2016 and 6 months in 2017. The light curve is contaminated by several sources, including the Homunculus nebula and neighboring stars, among them the eclipsing binary CPD$-$59$^\circ$2628. However, we found two coherent oscillations in the light curve. These may represent pulsations that are not yet understood, but we postulate that they are related to tidally excited oscillations of $\eta$ Car's primary star, similar to those detected in lower-mass eccentric binaries. In particular, one frequency was previously detected by van Genderen et al. and Sterken et al. over the period 1974 to 1995 through timing measurements of photometric maxima. Thus, this frequency seems to have been detected for nearly four decades, indicating that it has been stable over this time span. If confirmed and refined with future observations, these pulsations could help provide the first direct constraints on the fundamental parameters of the primary star.
BRITE-Constellation reveals evidence for pulsations in the enigmatic binary $\eta$ Carinae
As the capabilities of language models continue to advance, it is conceivable that a "one-size-fits-all" model will remain the main paradigm. For instance, given the vast number of languages worldwide, many of which are low-resource, the prevalent practice is to pretrain a single model on multiple languages. In this paper, we add to the growing body of evidence that challenges this practice, demonstrating that monolingual pretraining on the target language significantly improves models already extensively trained on diverse corpora. More specifically, we further pretrain GPT-J and LLaMA models on Portuguese texts using 3% or less of their original pretraining budget. Few-shot evaluations on Poeta, a suite of 14 Portuguese datasets, reveal that our models outperform English-centric and multilingual counterparts by a significant margin. Our best model, Sabi\'a-65B, performs on par with GPT-3.5-turbo. By evaluating on datasets originally conceived in the target language as well as translated ones, we study the contributions of language-specific pretraining in terms of 1) capturing linguistic nuances and structures inherent to the target language, and 2) enriching the model's knowledge about a domain or culture. Our results indicate that the majority of the benefits stem from the domain-specific knowledge acquired through monolingual pretraining.
Sabi\'a: Portuguese Large Language Models
As a tool for capturing irregular temporal dependencies (rather than resorting to binning temporal observations to construct time series), Hawkes processes with exponential decay have seen widespread adoption across many application domains, such as predicting the occurrence time of the next earthquake or stock market spike. However, practical applications of Hawkes processes face a noteworthy challenge: There is substantial and often unquantified variance in decay parameter estimations, especially in the case of a small number of observations or when the dynamics behind the observed data suddenly change. We empirically study the cause of these practical challenges and we develop an approach to surface and thereby mitigate them. In particular, our inspections of the Hawkes process likelihood function uncover the properties of the uncertainty when fitting the decay parameter. We thus propose to explicitly capture this uncertainty within a Bayesian framework. With a series of experiments with synthetic and real-world data from domains such as "classical" earthquake modeling or the manifestation of collective emotions on Twitter, we demonstrate that our proposed approach helps to quantify uncertainty and thereby to understand and fit Hawkes processes in practice.
Surfacing Estimation Uncertainty in the Decay Parameters of Hawkes Processes with Exponential Kernels
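As a concrete sketch of the object being fitted, the log-likelihood of a univariate Hawkes process with exponential kernel can be evaluated in linear time with Ozaki's recursion; the parameterization below ($\lambda(t) = \mu + \alpha \sum_{t_i < t} e^{-\beta (t - t_i)}$) is a standard convention assumed for illustration, not taken from the paper:

```python
import math

def hawkes_loglik(times, T, mu, alpha, beta):
    """Log-likelihood of a univariate Hawkes process with intensity
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)),
    observed on [0, T].  Uses Ozaki's recursion for the excitation sum."""
    loglik = 0.0
    A = 0.0   # A_i = sum_{j < i} exp(-beta * (t_i - t_j))
    prev = None
    for t in times:
        if prev is not None:
            A = math.exp(-beta * (t - prev)) * (A + 1.0)
        loglik += math.log(mu + alpha * A)
        prev = t
    # Compensator: integral of lambda(t) over [0, T].
    compensator = mu * T + (alpha / beta) * sum(
        1.0 - math.exp(-beta * (T - t)) for t in times)
    return loglik - compensator
```

A Bayesian treatment of the decay would place a prior on `beta` and explore this likelihood rather than report a single point estimate. A handy sanity check: with `alpha=0` the expression reduces to the homogeneous-Poisson log-likelihood $n \log \mu - \mu T$.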
This paper presents a control design for the one-phase Stefan problem under actuator delay via a backstepping method. The Stefan problem represents a liquid-solid phase change phenomenon which describes the time evolution of a material's temperature profile and the interface position. The actuator delay is modeled by a first-order hyperbolic partial differential equation (PDE), resulting in a cascaded transport-diffusion PDE system defined on a time-varying spatial domain described by an ordinary differential equation (ODE). Two nonlinear backstepping transformations are utilized for the control design. The setpoint restriction is given to guarantee a physical constraint on the proposed controller for the melting process. This constraint ensures the exponential convergence of the moving interface to a setpoint and the exponential stability of the temperature equilibrium profile and the delayed controller in the ${\cal H}_1$ norm. Furthermore, robustness analysis with respect to the delay mismatch between the plant and the controller is studied, which provides analogous results to the exact compensation by restricting the control gain.
Delay Compensated Control of the Stefan Problem and Robustness to Delay Mismatch
A multilayer network is composed of multiple layers, where different layers have the same set of vertices but represent different types of interactions. Some layers are interdependent or structurally similar within the multilayer network. In this paper, we present a maximum a posteriori estimation based model to reconstruct a specific layer in the multilayer network. The SimHash algorithm is used to compute the similarities between layers, and the layers with similar structures are used to determine the parameters of the conjugate prior. With this model, we can also predict missing links and direct experiments toward finding potential links. We test the method on two real multilayer networks, and the results show that maximum a posteriori estimation is promising for reconstructing the layer of interest even with a large number of missing links.
Layer reconstruction and missing link prediction of multilayer network with a Maximum A Posteriori estimation
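As an illustrative sketch of the similarity step only (the paper uses SimHash between layers; treating each layer as its edge set and hashing features with MD5 are our own simplifications, not details from the paper):

```python
import hashlib

def simhash(features, bits=64):
    """64-bit SimHash fingerprint of a set of hashable features
    (here: the edges of one network layer)."""
    v = [0] * bits
    for f in features:
        # Stable 64-bit hash of the feature via MD5 (illustrative choice).
        h = int.from_bytes(hashlib.md5(repr(f).encode()).digest()[:8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def similarity(h1, h2, bits=64):
    """Fraction of fingerprint bits on which two layers agree."""
    return 1.0 - bin(h1 ^ h2).count("1") / bits

# Two toy layers on the same vertex set, differing in one edge.
layer_a = {(1, 2), (2, 3), (3, 4), (1, 4)}
layer_b = {(1, 2), (2, 3), (3, 4), (2, 4)}
```

Layers with high fingerprint similarity would then be the ones allowed to inform the conjugate-prior parameters for the layer being reconstructed.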
In this work, we present a mixed sensorless strategy for Permanent Magnet Synchronous Machines, combining a torque/current controller and an observer for position, speed, flux and stator resistance. The proposed co-design is motivated by the need for an appropriate signal injection technique, in order to guarantee full state observability. Neither the typical constant or slowly-varying speed assumptions, nor a priori mechanical model information are used in the observer design. Instead, the rotor speed is modeled as an unknown input disturbance with constant (unknown) sign and uniformly non-zero magnitude. With the proposed architecture, it is shown that the torque tracking and signal injection tasks can be achieved and asymptotically decoupled. Because of these features, we refer to this strategy as a sensorless controller-observer with no mechanical model. Employing a gradient descent resistance/back-EMF estimation, combined with the unit circle formalism to describe the rotor position, we rigorously prove regional practical asymptotic stability of the overall structure, with a domain of attraction that can be made arbitrarily large, excluding a lower dimensional manifold. The effectiveness of this design is further validated with numerical simulations related to a challenging application: the control of UAV propellers.
A Robust Sensorless Controller-Observer Strategy for PMSMs with Unknown Resistance and Mechanical Model
We present a simple and effective multigrid-based Poisson solver of second-order accuracy in both gravitational potential and forces in terms of the one, two and infinity norms. The method is especially suitable for numerical simulations using nested mesh refinement. The Poisson equation is solved from coarse to fine levels using a one-way interface scheme. We introduce anti-symmetrically linear interpolation for evaluating the boundary conditions across the multigrid hierarchy. The spurious forces commonly observed at the interfaces between refinement levels are effectively suppressed. We validate the method using two- and three-dimensional density-force pairs that are sufficiently smooth for probing the order of accuracy.
Self-gravitational Force Calculation of Second Order Accuracy Using Multigrid Method on Nested Grids
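The paper's solver works on nested mesh refinement with anti-symmetrically linear boundary interpolation; as a much-simplified illustration of the underlying multigrid idea only (1D model problem, two grids, our own smoother and parameter choices, none taken from the paper), one correction cycle looks like:

```python
import math

def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted Jacobi smoothing for -u'' = f on a uniform grid with
    spacing h and homogeneous Dirichlet boundaries."""
    n = len(u)
    for _ in range(sweeps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        u = new
    return u

def residual(u, f, h):
    """r = f - A u for the standard 3-point Laplacian stencil."""
    n = len(u)
    r = [0.0] * n
    for i in range(1, n - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / (h * h)
    return r

def two_grid_cycle(u, f, h):
    """One two-grid cycle: pre-smooth, restrict the residual (full
    weighting), approximately solve the coarse problem, prolong the
    correction (linear interpolation), post-smooth."""
    u = jacobi(u, f, h)
    r = residual(u, f, h)
    nc = (len(u) - 1) // 2 + 1
    rc = [0.0] * nc
    for I in range(1, nc - 1):
        rc[I] = 0.25 * r[2 * I - 1] + 0.5 * r[2 * I] + 0.25 * r[2 * I + 1]
    ec = jacobi([0.0] * nc, rc, 2 * h, sweeps=50)   # coarse "solve"
    for I in range(nc):
        u[2 * I] += ec[I]
    for I in range(nc - 1):
        u[2 * I + 1] += 0.5 * (ec[I] + ec[I + 1])
    return jacobi(u, f, h)

# Demo: -u'' = pi^2 sin(pi x) on [0,1], exact solution u = sin(pi x).
n = 17
h = 1.0 / (n - 1)
f = [math.pi ** 2 * math.sin(math.pi * i * h) for i in range(n)]
u = [0.0] * n
res_before = math.sqrt(sum(v * v for v in residual(u, f, h)))
u = two_grid_cycle(u, f, h)
res_after = math.sqrt(sum(v * v for v in residual(u, f, h)))
```

The one-way coarse-to-fine interface scheme of the paper replaces the single global coarse solve above with a hierarchy of nested levels, with interpolated boundary conditions passed down across the refinement interfaces.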
In this paper, an iterated function system on the space of distribution functions is constructed. The associated inverse problem is introduced and studied via convex optimization. Some applications of this method to the approximation of distribution functions and to estimation theory are given.
Approximating distribution functions by iterated function systems
We present optical and ultraviolet spectra, lightcurves, and Doppler tomograms of the low-mass X-ray binary EXO 0748-676. Using an extensive set of 15 emission line tomograms, we show that, along with the usual emission from the stream and ``hot spot'', there is extended non-axisymmetric emission from the disk rim. Some of the emission features and the H$\alpha$ and H$\beta$ absorption features lend weight to the hypothesis that part of the stream overflows the disk rim and forms a two-phase medium. The data are consistent with a 1.35 M_sun neutron star with a main sequence companion and hence a mass ratio q~0.34.
Multiwavelength Observations of EXO 0748--676 -- II. Emission Line Behavior
This paper presents a geometric microcanonical ensemble perspective on two-dimensional Truncated Euler flows, which contain a finite number of (Fourier) modes and conserve energy and enstrophy. We explicitly perform phase space volume integrals over shells of constant energy and enstrophy. Two applications are considered. In a first part, we determine the average energy spectrum for highly condensed flow configurations and show that the result is consistent with Kraichnan's canonical ensemble description, despite the fact that no thermodynamic limit is invoked. In a second part, we compute the probability density for the largest-scale mode of a free-slip flow in a square, which displays reversals. We test the results against numerical simulations of a minimal model and find excellent agreement with the microcanonical theory, unlike the canonical theory, which fails to describe the bimodal statistics. This article is part of the theme issue "Mathematical problems in physical fluid dynamics".
Geometric microcanonical theory of two-dimensional Truncated Euler flows
In this work, we consider the performance of using a quantum algorithm to predict the result of a binary classification problem when the machine learning model is an ensemble of arbitrary simple classifiers. Such an approach is faster than classical prediction and uses both quantum and classical computing, but it is based on a probabilistic algorithm. Let $N$ be the number of classifiers in an ensemble model and $O(T)$ the running time of prediction for one classifier. In the classical case, an ensemble model gets answers from each classifier and "averages" the result, so the running time is $O\left( N \cdot T \right)$. We propose an algorithm which works in $O\left(\sqrt{N} \cdot T\right)$.
The Quantum Version of Prediction for Binary Classification Problem by Ensemble Methods
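For context, the classical $O(N \cdot T)$ ensemble prediction that the quantum algorithm speeds up can be sketched in a few lines of Python. This is an illustrative baseline only: the quantum $O(\sqrt{N} \cdot T)$ routine itself cannot be reproduced in classical code, and the function name is hypothetical.

```python
def ensemble_predict(classifiers, x):
    """Classical ensemble prediction by majority vote: O(N * T).

    Each of the N classifiers is queried once on input x, so the total
    cost is N times the cost T of a single prediction. The quantum
    algorithm discussed above reduces this averaging step to
    O(sqrt(N) * T); this sketch shows only the classical baseline.
    """
    votes = sum(clf(x) for clf in classifiers)  # count votes for class 1
    return 1 if 2 * votes >= len(classifiers) else 0
```
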
Using results on the $1/n$-expansion of the Verblunsky coefficients for a class of polynomials orthogonal on the unit circle with respect to varying weights, we prove that the local eigenvalue statistics of unitary matrix models are independent of the form of the potential that defines the matrix model. Our proof applies to potentials that are four times differentiable and to supports consisting of one interval.
Universality at the Edge for Unitary Matrix Models
In solid state physics, the electron-phonon interaction (EPI) is central to many phenomena. The theory of the renormalization of electronic properties due to EPIs became well established with the theory of Allen-Heine-Cardona, usually applied to second order in perturbation theory (P2). However, this is only valid in the weak coupling regime, while strong EPIs have been reported in many materials. Although non-perturbative (NP) methods have started to arise in recent years, they are usually not well justified, and it is not clear to what degree they reproduce the exact theory. To address this issue, we present a stochastic approach for the evaluation of the non-perturbative interacting Green's function in the adiabatic limit, and show that it is equivalent to the Feynman expansion to all orders in the perturbation. Also, by defining a self-energy, we can reduce the effect of broadening needed in numerical calculations, improving convergence in the supercell size. In addition, we clarify whether it is better to average the Green's function or the self-energy. Then we apply the method to a graphene tight-binding model, and obtain several interesting results: (i) The Debye-Waller term, which is normally neglected, does affect the change of the Fermi velocity. (ii) The P2 and NP self-energies differ even at room temperature for some k-points, raising the question of how well P2 works in other materials. (iii) Close to the Dirac point, positive and negative energy peaks merge. (iv) In the strong coupling regime, a peak appears at energy E=0, which is consistent with previous works on disorder and localization in graphene. (v) The spectral function becomes more asymmetric at stronger coupling and higher temperatures. Finally, in the Appendix we show that the method has better convergence properties when the coupling is strong than when it is weak, and discuss other technical aspects.
Non-perturbative Green's function method to determine the electronic spectral function due to electron-phonon interactions: Application to a graphene model from weak to strong coupling
We consider the evolution of the one-particle function in the weak-coupling limit in three space dimensions, obtained by truncating the BBGKY hierarchy under a propagation of chaos approximation. For this dynamics, we rigorously show the convergence to a solution of the Landau equation, keeping the full singularity of the Landau kernel. This resolves the issue arising from [10], where the singular region has been removed artificially. Since the singularity appears in the Landau equation due to the geometry of particle interactions, it is an intrinsic physical property of the weak-coupling limit which is crucial to the understanding of the transition from particle systems to the Landau equation.
Convergence to the Landau equation from the truncated BBGKY hierarchy in the weak-coupling limit
Dijkstra's algorithm is one of the most popular classic path-planning algorithms, achieving optimal solutions across a wide range of challenging tasks. However, it only calculates the shortest distance from one vertex to another, which makes it hard to apply directly to the Dynamic Multi-Sources to Single-Destination (DMS-SD) problem. This paper proposes a modified Dijkstra algorithm to address the DMS-SD problem, where the destination can be dynamically changed. Our method deploys the adjacency-matrix concept from Floyd's algorithm and achieves the goal with mathematical calculations. We formally show that the all-pairs shortest-distance information of Floyd's algorithm is not required in our algorithm. Extensive experiments verify the scalability and optimality of the proposed method.
An Efficient Dynamic Multi-Sources To Single-Destination (DMS-SD) Algorithm In Smart City Navigation Using Adjacent Matrix
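A minimal sketch of one standard way to handle many sources and a single, changeable destination, assuming non-negative edge weights: run Dijkstra once from the destination over the reversed graph, which yields the distance from every source in a single pass. This is an illustrative baseline with hypothetical names, not the paper's adjacency-matrix method.

```python
import heapq

def dijkstra_to_destination(adj, dest):
    """Shortest distances from every vertex to `dest`.

    `adj` is a dict: adj[u] = [(v, w), ...] with non-negative weights.
    Running Dijkstra from `dest` over the reversed graph gives the
    distance from each vertex to `dest`; if the destination changes,
    one simply reruns from the new destination.
    """
    # Build the reversed graph.
    rev = {u: [] for u in adj}
    for u, edges in adj.items():
        for v, w in edges:
            rev.setdefault(v, []).append((u, w))
    dist = {u: float("inf") for u in rev}
    dist[dest] = 0
    pq = [(0, dest)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in rev.get(u, []):
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist
```

Every source s can then query its distance as `dist[s]`.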
This paper improves several previously known results. First, the results describing the R-skewsymmetric algebra and the quadratic dual of the R-symmetric algebra as Frobenius algebras are shown to hold with all restrictions on the parameter q of the Hecke relation removed. An even Hecke symmetry gives rise to a pair of graded Frobenius algebras, and we describe the interrelation between the Nakayama automorphisms of the two algebras. As an illustration of the general technique, we give full details of the verification that Artin-Schelter regular algebras of global dimension 3 and elliptic type A are not associated with any quantum GL(3).
Hecke symmetries: an overview of Frobenius properties
Multi-dimensional complex optical potentials with partial parity-time (PT) symmetry are proposed. The usual PT symmetry requires that the potential is invariant under complex conjugation and simultaneous reflection in all spatial directions. However, we show that if the potential is only partially PT-symmetric, i.e., it is invariant under complex conjugation and reflection in a single spatial direction, then it can also possess all-real spectra and continuous families of solitons. These results are established analytically and corroborated numerically.
Partially-PT-symmetric optical potentials with all-real spectra and soliton families in multi-dimensions
In this paper we consider multi-agent coalitional games with uncertain value functions for which we establish distribution-free guarantees on the probability of allocation stability, i.e., agents do not have incentives to defect from the grand coalition to form subcoalitions for unseen realizations of the uncertain parameter. In case the set of stable allocations, the so-called core of the game, is empty, we propose a randomized relaxation of the core. We then show that those allocations that belong to this relaxed set can be accompanied by stability guarantees in a probably approximately correct fashion. Finally, numerical experiments corroborate our theoretical findings.
Probabilistically robust stabilizing allocations in uncertain coalitional games
While programming is one of the most broadly applicable skills in modern society, modern machine learning models still cannot code solutions to basic problems. Despite its importance, there has been surprisingly little work on evaluating code generation, and it can be difficult to assess code generation performance rigorously. To meet this challenge, we introduce APPS, a benchmark for code generation. Unlike prior work in more restricted settings, our benchmark measures the ability of models to take an arbitrary natural language specification and generate satisfactory Python code. Similar to how companies assess candidate software developers, we evaluate models by checking their generated code on test cases. Our benchmark includes 10,000 problems, which range from having simple one-line solutions to being substantial algorithmic challenges. We fine-tune large language models on both GitHub and our training set, and we find that the prevalence of syntax errors decreases exponentially as models improve. Recent models such as GPT-Neo can pass approximately 20% of the test cases of introductory problems, indicating that machine learning models are now beginning to learn how to code. As the social significance of automatic code generation increases over the coming years, our benchmark can provide an important measure for tracking advancements.
Measuring Coding Challenge Competence With APPS
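Evaluation by executing generated code on test cases, as described above, can be sketched as follows. The helper name and interface are hypothetical; real benchmarks sandbox execution with timeouts rather than calling a bare `exec` on untrusted code.

```python
def passes_tests(candidate_src, func_name, test_cases):
    """Score generated code against input/output test cases.

    `candidate_src` is model-generated source defining `func_name`;
    `test_cases` is a list of (args, expected) pairs. Returns the
    fraction of test cases passed.
    """
    namespace = {}
    try:
        exec(candidate_src, namespace)  # run the generated definitions
    except Exception:
        return 0.0  # syntax or runtime error at definition time
    fn = namespace.get(func_name)
    if not callable(fn):
        return 0.0
    passed = 0
    for args, expected in test_cases:
        try:
            if fn(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing case counts as not passed
    return passed / len(test_cases)
```
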
Analyzing and predicting the traffic scene around the ego vehicle has been one of the key challenges in autonomous driving. Datasets including the trajectories of all road users present in a scene, as well as the underlying road topology, are invaluable for analyzing the behavior of the different traffic participants. The interaction between the various traffic participants is especially high in intersection types that are not regulated by traffic lights, the most common one being the roundabout. We introduce the openDD dataset, including 84,774 accurately tracked trajectories and HD map data of seven different roundabouts. The openDD dataset is annotated using images taken by a drone in 501 separate flights, totalling over 62 hours of trajectory data. As of today, openDD is by far the largest publicly available trajectory dataset recorded from a drone perspective, while comparable datasets span 17 hours at most. The data is available, for both commercial and noncommercial use, at: http://www.l3pilot.eu/openDD.
openDD: A Large-Scale Roundabout Drone Dataset
We present gravitational N-body simulations of the secular morphological evolution of disk galaxies induced by density wave modes. In particular, we address the demands collective effects place on the choice of simulation parameters, and show that the common practice of the use of a large gravity softening parameter was responsible for the failure of past simulations to correctly model the secular evolution process in galaxies, even for those simulations where the choice of basic state allows an unstable mode to emerge, a prerequisite for obtaining the coordinated radial mass flow pattern needed for secular evolution of galaxies along the Hubble sequence. We also demonstrate that the secular evolution rates measured in our improved simulations agree to an impressive degree with the corresponding rates predicted by the recently-advanced theories of dynamically-driven secular evolution of galaxies. The results of the current work, besides having direct implications on the cosmological evolution of galaxies, also shed light on the general question of how irreversibility emerges from a nominally reversible physical system.
N-Body Simulations of Collective Effects in Spiral and Barred Galaxies
This thesis concerns sequential-access data compression, i.e., compression by algorithms that read the input one or more times from beginning to end. In one chapter we consider adaptive prefix coding, for which we must read the input character by character, outputting each character's self-delimiting codeword before reading the next one. We show how to encode and decode each character in constant worst-case time while producing an encoding whose length is worst-case optimal. In another chapter we consider one-pass compression with memory bounded in terms of the alphabet size and context length, and prove a nearly tight tradeoff between the amount of memory we can use and the quality of the compression we can achieve. In a third chapter we consider compression in the read/write streams model, which allows a number of passes and an amount of memory that are both polylogarithmic in the size of the input. We first show how to achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for achieving good grammar-based compression. Finally, we show that two streams are necessary and sufficient for achieving entropy-only bounds.
New Algorithms and Lower Bounds for Sequential-Access Data Compression
Data assimilation provides algorithms with widespread applications in various fields; it is of practical use for handling the large amounts of information in complex systems that are hard to estimate. Weather forecasting is one such application, where predictions of meteorological data are corrected given the observations. Data assimilation comprises numerous approaches; one specific sequential method is the Kalman Filter. Its core is to estimate unknown information by combining newly measured data with previously predicted data. There are various improved versions of the Kalman Filter; in this project, the Ensemble Kalman Filter with perturbed observations is considered. It is realized through Monte Carlo simulation: an ensemble enters the calculation in place of the state vectors, and the measurement with perturbation is treated as the observation. Compared with the Linear Kalman Filter, these changes have the advantage that applications are no longer restricted to linear systems, and computation takes less time. The thesis develops the Ensemble Kalman Filter with perturbed observations step by step. Starting from mathematical preliminaries, including an introduction to dynamical systems, the Linear Kalman Filter is built, and its prediction and analysis processes are derived. After that, we argue by analogy to introduce the non-linear Ensemble Kalman Filter with perturbed observations. Lastly, the classic Lorenz 63 model is illustrated in MATLAB. In this example, we experiment with the number of ensemble members and investigate its relationship to the error variance. We conclude that, within a limited range, a larger number of ensemble members yields a smaller prediction error.
Ensemble Kalman Filter with perturbed observations in weather forecasting and data assimilation
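A toy sketch of a single EnKF analysis step with perturbed observations, under simplifying assumptions not stated in the abstract: a scalar state, observation operator H = 1, and Gaussian observation noise. Function and variable names are illustrative.

```python
import numpy as np

def enkf_analysis(forecast, obs, obs_var, rng):
    """One EnKF analysis step with perturbed observations (scalar state).

    Each ensemble member is updated toward its own perturbed copy of
    the observation; the Kalman gain uses the ensemble forecast
    variance in place of the exact error covariance.
    """
    n = forecast.size
    p = forecast.var(ddof=1)             # sample forecast variance
    gain = p / (p + obs_var)             # scalar Kalman gain, in (0, 1)
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=n)
    analysis = forecast + gain * (perturbed - forecast)
    return analysis, gain
```

Cycling forecast and analysis steps over time pulls the ensemble mean from the prediction toward the observations, which is the correction mechanism described above.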
We investigate collective dissipative properties of vibrated granular materials by means of molecular dynamics simulations. Rates of energy loss indicate three different regimes or "phases" in the amplitude-frequency plane of the external forcing, namely solid, convective, and gas-like regimes. The behavior of the effective damping decrement in the solid regime is glassy. Practical applications are discussed.
Dissipative properties of vibrated granular materials
Let $\mathcal{C}$ and $\mathcal{D}$ be hereditary graph classes. Consider the following problem: given a graph $G\in\mathcal{D}$, find a largest, in terms of the number of vertices, induced subgraph of $G$ that belongs to $\mathcal{C}$. We prove that it can be solved in $2^{o(n)}$ time, where $n$ is the number of vertices of $G$, if the following conditions are satisfied: * the graphs in $\mathcal{C}$ are sparse, i.e., they have linearly many edges in terms of the number of vertices; * the graphs in $\mathcal{D}$ admit balanced separators of size governed by their density, e.g., $\mathcal{O}(\Delta)$ or $\mathcal{O}(\sqrt{m})$, where $\Delta$ and $m$ denote the maximum degree and the number of edges, respectively; and * the considered problem admits a single-exponential fixed-parameter algorithm when parameterized by the treewidth of the input graph. This leads, for example, to the following corollaries for specific classes $\mathcal{C}$ and $\mathcal{D}$: * a largest induced forest in a $P_t$-free graph can be found in $2^{\tilde{\mathcal{O}}(n^{2/3})}$ time, for every fixed $t$; and * a largest induced planar graph in a string graph can be found in $2^{\tilde{\mathcal{O}}(n^{3/4})}$ time.
Subexponential-time algorithms for finding large induced sparse subgraphs
The thermal Sunyaev-Zel'dovich (tSZ) effect is one of the recent probes of cosmology and large-scale structure. We update constraints on cosmological parameters from galaxy clusters observed by the Planck satellite in a first attempt to combine cluster number counts and the power spectrum of hot gas, using the new value of the optical depth and sampling simultaneously over cosmological and scaling-relation parameters. We find that in the $\Lambda$CDM model, the addition of the tSZ power spectrum provides only small improvements with respect to number counts alone, leading to the $68\%$ c.l. constraints $\Omega_m = 0.32 \pm 0.02$, $\sigma_8 = 0.77\pm0.03$ and $\sigma_8 (\Omega_m/0.3)^{1/3}= 0.78\pm0.03$, and lowering the discrepancy with CMB primary anisotropy results (updated with the new value of $\tau$) to $\simeq 1.6\, \sigma$ on $\sigma_8$. We analyse extensions of the standard model, considering the effect of massive neutrinos and a varying equation-of-state parameter for dark energy. In the first case, we find that the addition of the tSZ power spectrum strongly improves cosmological constraints with respect to number counts alone, leading to the $95\%$ upper limit $\sum m_{\nu}< 1.53 \, \text{eV}$. For the varying dark energy EoS scenario, we again find no important improvement when adding the tSZ power spectrum, but the combination of tSZ probes is still able to provide constraints, yielding $w = -1.0\pm 0.2$. In all cosmological scenarios the mass bias needed to reconcile CMB and tSZ probes remains low: $(1-b)\lesssim 0.66$, as compared with estimates from weak-lensing and X-ray mass-estimate comparisons or numerical simulations.
Constraints from thermal Sunyaev-Zeldovich cluster counts and power spectrum combined with CMB
We study the face-centered cubic lattice (fcc) in up to six dimensions. In particular, we are concerned with lattice Green's functions (LGF) and return probabilities. Computer algebra techniques, such as the method of creative telescoping, are used for deriving an ODE for a given LGF. For the four- and five-dimensional fcc lattices, we give rigorous proofs of the ODEs that were conjectured by Guttmann and Broadhurst. Additionally, we find the ODE of the LGF of the six-dimensional fcc lattice, a result that was not believed to be achievable with current computer hardware.
Lattice Green's Functions of the Higher-Dimensional Face-Centered Cubic Lattices
We measure the branching ratios of the Cabibbo-suppressed decays $\Lambda^+_c$ $\to$ $\Lambda$ $K^+$ and $\Lambda^+_c$ $\to$ $\Sigma^{0}$ $K^+$ relative to the Cabibbo-favored decay modes $\Lambda^+_c$ $\to$ $\Lambda$ $\pi^+$ and $\Lambda^+_c$ $\to$ $\Sigma^{0}$ $\pi^+$ to be $0.044 \pm 0.004\ (\textnormal{stat.}) \pm 0.003\ (\textnormal{syst.})$ and $0.039 \pm 0.005\ (\textnormal{stat.}) \pm 0.003\ (\textnormal{syst.})$, respectively. We set upper limits on the branching ratios at 90% confidence level of $4.1 \times 10^{-2}$ for $\Lambda^+_c$ $\to$ $\Lambda$ $K^+ \pi^+ \pi^-$ relative to $\Lambda^+_c$ $\to$ $\Lambda$ $\pi^+$ and $2.0 \times 10^{-2}$ for $\Lambda^+_c$ $\to$ $\Sigma^{0}$ $K^+ \pi^+ \pi^-$ relative to $\Lambda^+_c$ $\to$ $\Sigma^{0}$ $\pi^+$. We also measure the branching fraction for the Cabibbo-favored mode $\Lambda^+_c$ $\to$ $\Sigma^{0}$ $\pi^+$ relative to $\Lambda^+_c$ $\to$ $\Lambda$ $\pi^+$ to be $0.977 \pm 0.015\ (\textnormal{stat.}) \pm 0.051\ (\textnormal{syst.})$. This analysis was performed using a data sample with an integrated luminosity of 125 fb$^{-1}$ collected by the $BABAR$ detector at the PEP-II asymmetric-energy $B$ factory at SLAC.
Measurements of $\Lambda^+_c$ Branching Fractions of Cabibbo-Suppressed Decay Modes involving $\Lambda$ and $\Sigma^{0}$
We present new Suzaku and Fermi data, and re-analyzed archival hard X-ray data from the INTEGRAL and Swift-BAT surveys, to investigate the physical properties of the luminous, high-redshift, hard X-ray selected blazar IGR J22517+2217 by modeling its broad-band spectral energy distribution (SED) in two different activity states. Through the analysis of the new Suzaku data and the flux-selected data from archival hard X-ray observations, we build the source SED in two different states: one for the newly discovered flare that occurred in 2005, and one for the following quiescent period. Both SEDs are strongly dominated by the high-energy hump, peaked at 10^20-10^22 Hz, which is at least two orders of magnitude higher than the low-energy (synchrotron) one at 10^11-10^14 Hz and varies by a factor of 10 between the two states. In both states the high-energy hump is modeled as inverse Compton emission between relativistic electrons and seed photons produced externally to the jet, while the synchrotron self-Compton component is found to be negligible. In our model the observed variability can be accounted for by a variation of the total number of emitting electrons and by a dissipation-region radius that moves from within to outside the broad-line region as the luminosity increases. In its flaring activity, IGR J22517+2217 shows one of the most powerful jets among the population of extreme, hard X-ray selected, high-redshift blazars observed so far.
Modeling the flaring activity of the high z, hard X-ray selected blazar IGR J22517+2217
Magnetic field line curvature scattering (FLCS) happens when the particle gyro-radius is comparable to the magnetic field line curvature radius and the conservation of the first adiabatic invariant is broken. As a collisionless particle-scattering mechanism, FLCS plays a vital role in shaping the distribution of energetic electrons in the radiation belt, the precipitation of ring current ions, and the formation of the proton isotropic boundary (IB). In recent years, it has been increasingly used in research on particle dynamics. Previous studies focused on quantifying the effects of curvature scattering in single cases. This paper uses an analytic model to calculate the spatial distribution of diffusion coefficients induced by FLCS in a realistic magnetic field model. Particles of various energies are used to analyze the coverage of curvature scattering in the radial and MLT directions, as well as the influence of the $Kp$ index. The decay time of particles due to curvature scattering is estimated from the diffusion coefficients, which suggests that the time scale of curvature scattering is too long to explain the rapid loss of electrons in the radiation belt and the ring current at MeV or sub-MeV energies. However, the decay time of ring current protons is on the order of hours or even minutes, which can explain the ring current decay during the recovery phase of a magnetic storm. Finally, the effects of resonant wave-particle scattering and FLCS in the vicinity of the midnight equator are briefly compared. It is found that the influence of FLCS on hundred-keV protons in the ring current region is comparable to or even greater than that of EMIC wave scattering. Our results suggest that the effects of FLCS are non-negligible and should be considered in radiation belt and ring current modeling.
Quantifying the Effects of Magnetic Field Line Curvature Scattering on Radiation Belt and Ring Current Particles
Quantum codewords are highly entangled combinations of two-state systems. The standard assumptions of local realism lead to logical contradictions similar to those found by Bell, Kochen and Specker, Greenberger, Horne and Zeilinger, and Mermin. The new contradictions have some noteworthy features that did not appear in the older ones.
Quantum codewords contradict local realism
In this paper we study the sum of powers in the Gaussian integers $\mathbf{G}_k(n):=\sum_{a,b \in [1,n]} (a+b i)^k$. We give an explicit formula for $\mathbf{G}_k(n) \pmod n$ in terms of the prime numbers $p \equiv 3 \pmod 4$ with $p \parallel n$ and $p-1 \mid k$, similar to the well-known one due to von Staudt for $\sum_{i=1}^n i^k \pmod n$. We apply this formula to study the set of integers $n$ which divide $\mathbf{G}_n(n)$ and compute its asymptotic density to six exact digits: $0.971000\ldots$.
A von Staudt-type formula for $\displaystyle{\sum_{z\in\mathbb{Z}_n[i]} z^k }$
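The sums in question are easy to experiment with by brute force; a small Python sketch with exact integer arithmetic (illustration only, not the explicit formula of the paper):

```python
def gauss_power_sum_mod(k, n):
    """Compute G_k(n) = sum over a,b in [1,n] of (a+bi)^k, modulo n.

    Gaussian-integer arithmetic is done exactly with integer pairs
    (re, im); returns (Re, Im) reduced mod n.
    """
    re_tot, im_tot = 0, 0
    for a in range(1, n + 1):
        for b in range(1, n + 1):
            # (a+bi)^k by repeated multiplication
            re, im = 1, 0
            for _ in range(k):
                re, im = re * a - im * b, re * b + im * a
            re_tot += re
            im_tot += im
    return re_tot % n, im_tot % n
```

Checking which small $n$ divide $\mathbf{G}_n(n)$ then amounts to testing `gauss_power_sum_mod(n, n) == (0, 0)`. Note that $\mathbf{G}_1(n) = n \cdot \tfrac{n(n+1)}{2}(1+i)$, so every $n$ divides $\mathbf{G}_1(n)$, which makes a convenient sanity check.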
This volume contains the proceedings of the Eighth International Symposium on Games, Automata, Logics and Formal Verification (GandALF 2017). The symposium took place in Rome, Italy, from the 20th to the 22nd of September 2017. The GandALF symposium was established by a group of Italian computer scientists interested in mathematical logic, automata theory, game theory, and their applications to the specification, design, and verification of complex systems. Its aim is to provide a forum where people from different areas, and possibly with different backgrounds, can fruitfully interact. GandALF has a truly international spirit, as witnessed by the composition of the program and steering committees and by the country distribution of the submitted papers.
Proceedings Eighth International Symposium on Games, Automata, Logics and Formal Verification
A theory of stress fields in two-dimensional granular materials based on directed force chain networks is presented. A general equation for the densities of force chains in different directions is proposed and a complete solution is obtained for a special case in which chains lie along a discrete set of directions. The analysis and results demonstrate the necessity of including nonlinear terms in the equation. A line of nontrivial fixed point solutions is shown to govern the properties of large systems. In the vicinity of a generic fixed point, the response to a localized load shows a crossover from a single, centered peak at intermediate depths to two propagating peaks at large depths that broaden diffusively.
Directed force chain networks and stress response in static granular materials
Purpose. Elevations in initially obtained serum lactate levels are strong predictors of mortality in critically ill patients. Identifying patients whose serum lactate levels are more likely to increase can alert physicians to intensify care and guide them in the frequency of repeating the blood test. We investigate whether machine learning models can predict subsequent serum lactate changes. Methods. We investigated serum lactate change prediction using the MIMIC-III and eICU-CRD datasets, with internal validation as well as external validation of the eICU cohort on the MIMIC-III cohort. Three subgroups were defined based on the initial lactate levels: i) normal group (<2 mmol/L), ii) mild group (2-4 mmol/L), and iii) severe group (>4 mmol/L). Outcomes were defined based on the increase or decrease of serum lactate levels between the groups. We also performed a sensitivity analysis by defining the outcome as a lactate change of >10%, and furthermore investigated the influence of the time interval between subsequent lactate measurements on predictive performance. Results. The LSTM models were able to predict deterioration of serum lactate values of MIMIC-III patients with an AUC of 0.77 (95% CI 0.762-0.771) for the normal group, 0.77 (95% CI 0.768-0.772) for the mild group, and 0.85 (95% CI 0.840-0.851) for the severe group, with slightly lower performance in the external validation. Conclusion. The LSTM demonstrated good discrimination of patients who had deterioration in serum lactate levels. Clinical studies are needed to evaluate whether a clinical decision support tool based on these results could positively impact decision-making and patient outcomes.
Prediction of Blood Lactate Values in Critically Ill Patients: A Retrospective Multi-center Cohort Study
Generative models that can model and predict sequences of future events can, in principle, learn to capture complex real-world phenomena, such as physical interactions. However, a central challenge in video prediction is that the future is highly uncertain: a sequence of past observations of events can imply many possible futures. Although a number of recent works have studied probabilistic models that can represent uncertain futures, such models are either extremely expensive computationally as in the case of pixel-level autoregressive models, or do not directly optimize the likelihood of the data. To our knowledge, our work is the first to propose multi-frame video prediction with normalizing flows, which allows for direct optimization of the data likelihood, and produces high-quality stochastic predictions. We describe an approach for modeling the latent space dynamics, and demonstrate that flow-based generative models offer a viable and competitive approach to generative modelling of video.
VideoFlow: A Conditional Flow-Based Model for Stochastic Video Generation
Graham-Pollak showed that for $D = D_T$ the distance matrix of a tree $T$, det$(D)$ depends only on the number of edges. Several other variants of $D$, including directed/multiplicative/$q$-versions, were studied, and always, det$(D)$ depends only on the edge-data. We introduce a general framework for bi-directed weighted trees, with threefold significance. First, we improve on the state of the art for all known variants, even in the classical Graham-Pollak case: we delete arbitrary pendant nodes (and more general subsets) from the rows/columns of $D$, and show these minors do not depend on the tree-structure. Second, our setting unifies all known variants (with entries in a commutative ring). We further compute $D^{-1}$ in closed form, extending a result of Graham-Lovasz [Adv. Math. 1978] and answering an open question of Bapat-Lal-Pati [Lin. Alg. Appl. 2006] in greater generality. Third, we compute a second function of the matrix $D$: the sum of all its cofactors, cof$(D)$. This was worked out in the simplest setting by Graham-Hoffman-Hosoya (1978), but is relatively unexplored for other variants. We prove a stronger result, in our general setting, by computing cof$(.)$ for minors as above, and showing these too depend only on the edge-data. Finally, we show our setting is the "most general possible", in that with more freedom in the edge-weights, det$(D)$ and cof$(D)$ depend on the tree structure. In a sense, this completes the study of the invariants det$(D_T)$, cof$(D_T)$ for trees $T$ with edge-data in a commutative ring. Moreover: for a bi-directed graph $G$ we prove multiplicative Graham-Hoffman-Hosoya type formulas for det$(D_G)$, cof$(D_G)$, $D_G^{-1}$. We then show how this subsumes their 1978 result. The final section introduces and computes a third, novel invariant for trees and a Graham-Hoffman-Hosoya type result for our "most general" distance matrix $D_T$.
Distance matrices of a tree: two more invariants, and in a unified framework
We thoroughly analyse the novel quantum key distribution protocol introduced recently in quant-ph/0412075, which is based on minimal qubit tomography. We examine the efficiency of the protocol for a whole range of noise parameters and present a general analysis of incoherent eavesdropping attacks with arbitrarily many steps in the iterative key generation process. The comparison with the tomographic 6-state protocol shows that our protocol has a higher efficiency (up to 20%) and ensures the security of the established key even for noise parameters far beyond the 6-state protocol's noise threshold.
The Singapore Protocol: Incoherent Eavesdropping Attacks
This work is devoted to studying the global behavior of viscous flows contained in a symmetric domain with a complete slip boundary. In such a scenario the boundary no longer provides friction, and therefore the perturbation of the angular velocity lacks a decaying structure. In fact, we show the existence of uniformly rotating solutions as steady states of the compressible Navier-Stokes equations. By manipulating the conservation law of angular momentum, we establish a suitable Korn-type inequality to control the perturbation and show the asymptotic stability of uniformly rotating solutions with small angular velocity. In particular, an initial perturbation which preserves the angular momentum decays exponentially in time, and the solution to the Navier-Stokes equations converges to the steady state as time increases.
Compressible Viscous Flows in a Symmetric Domain with Complete Slip Boundary
We homogeneously analyse $\sim 3.2\times 10^5$ photometric measurements for $\sim 1100$ transit lightcurves belonging to $17$ exoplanet hosts. The photometric data cover the $16$ years 2004--2019 and include amateur and professional observations. Old archival lightcurves were reprocessed using up-to-date exoplanetary parameters and empirically debiased limb-darkening models. We also derive self-consistent transit and radial-velocity fits for $13$ targets. We confirm the nonlinear TTV trend in the WASP-12 data at high significance and with a consistent magnitude. However, Doppler data reveal hints of a radial acceleration of about $(-7.5\pm 2.2)$ m/s/yr, indicating the presence of unseen distant companions and suggesting that roughly $10$ per cent of the observed TTV was induced via the light-travel (or Roemer) effect. For WASP-4, a similar TTV trend suspected after the recent TESS observations appears controversial and model-dependent. It is not supported by our homogeneous TTV sample, including $10$ ground-based EXPANSION lightcurves obtained in 2018 simultaneously with TESS. Even if the TTV trend itself does exist in WASP-4, its magnitude and tidal nature are uncertain. Doppler data cannot entirely rule out the Roemer effect induced by possible distant companions.
Homogeneously derived transit timings for 17 exoplanets and reassessed TTV trends for WASP-12 and WASP-4
The orthogonality catastrophe (OC) problem has been considered solved for 50 years. It has important consequences for numerous dynamic phenomena in fermionic systems, including the Kondo effect, X-ray spectroscopy, and the quantum diffusion of impurities, and is often used in the context of metals. However, the key assumptions on which the known solution is based---impurity potentials with finite cross-section and non-interacting fermions---are both highly inaccurate for problems involving charged particles in metals. As far as we know, the OC problem for the "all Coulomb" case has never been addressed systematically, leaving it unsolved for the most relevant practical applications. In this work we include the effects of dynamic screening in a consistent way and demonstrate that for short-range impurity potentials the non-interacting Fermi-sea approximation radically overestimates the power-law decay exponent of the overlap integral. We also find that the dynamically screened Coulomb potential leads to a larger exponent than the often used static Yukawa potential. Finally, by employing the Diagrammatic Monte Carlo technique, we quantify the effects of a finite impurity mass and reveal how OC physics leads to small, but finite, impurity residues.
Orthogonality catastrophe in Coulomb systems
We give an autoequivalence of the derived category of the Ginzburg dg algebra for a mutation loop satisfying the sign stability introduced in [IK21]. We compute the categorical entropies of its restrictions to certain subcategories and conclude that both are given by the logarithm of the cluster stretch factor. Moreover, we discuss their pseudo-Anosovness in the sense of [FFHKL19] and [DHKK14].
Categorical dynamical systems arising from sign-stable mutation loops
Transitions between different types of floating-potential fluctuations in a glow discharge plasma in the toroidal vacuum vessel of the SINP tokamak have been observed. With variation in the strength of the vertical and toroidal magnetic fields, regular and inverted relaxation oscillations as well as sinusoidal oscillations are observed, with the slow and fast time scales of the relaxation oscillations reversing their nature at high values of the vertical magnetic field strength. For small values of the toroidal magnetic field, however, the transitions follow relaxation $\rightarrow$ chaotic oscillations, with the chaotic nature prevailing at higher values of the toroidal field. The evolution of the associated anode fireball dynamics under increasing vertical and toroidal fields, as well as under an increasing vertical field at a fixed toroidal field (mixed field) of different strengths, has been studied. The phase coherence index has been estimated in each case to examine the evidence of finite nonlinear interaction, and the fireball dynamics are found to be closely associated with its values: the index takes maximum values in the toroidal and mixed-field cases, where power/energy is concentrated over a large region of the frequency band. A detailed study of the scaling region using detrended fluctuation analysis (DFA) has been carried out by estimating the scaling exponent for increasing values of the discharge voltage and of the vertical, toroidal, and mixed (toroidal plus vertical) fields. Persistent long-range behaviour associated with the nature of the anode glow is found at higher values of the toroidal and mixed fields, whereas increasing discharge voltage or vertical magnetic field leads to perfectly correlated dynamics with scaling exponents greater than unity.
Interplay of transitions between oscillations with emergence of fireballs and quantification of phase coherence, scaling index in a magnetized glow discharge plasma of toroidal assembly
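The scaling exponent discussed in this abstract comes from detrended fluctuation analysis. A minimal NumPy sketch of standard (first-order) DFA follows; the function name and window scales are illustrative choices, not the authors' implementation:

```python
import numpy as np

def dfa_exponent(signal, scales=(8, 16, 32, 64, 128)):
    """Estimate the DFA scaling exponent of a 1-D signal.

    Steps: integrate the mean-subtracted signal, split the profile into
    non-overlapping windows, remove a linear trend in each window, and
    fit the log-log slope of the RMS fluctuation versus window size.
    """
    x = np.asarray(signal, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated profile
    fluctuations = []
    for s in scales:
        n_windows = len(profile) // s
        segments = profile[: n_windows * s].reshape(n_windows, s)
        t = np.arange(s)
        rms = []
        for seg in segments:
            coeffs = np.polyfit(t, seg, 1)      # local linear detrending
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        fluctuations.append(np.mean(rms))
    # Scaling exponent = slope of log F(s) versus log s.
    slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return slope
```

Uncorrelated noise yields an exponent near 0.5, while the "perfectly correlated dynamics" reported above correspond to exponents greater than unity.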
We present a simple library which equips MPI implementations with truly asynchronous non-blocking point-to-point operations, and which is independent of the underlying communication infrastructure. It utilizes the MPI profiling interface (PMPI) and the MPI_THREAD_MULTIPLE thread compatibility level, and works with current versions of Intel MPI, Open MPI, MPICH2, MVAPICH2, Cray MPI, and IBM MPI. We show performance comparisons on a commodity InfiniBand cluster and two tier-1 systems in Germany, using low-level and application benchmarks. Issues of thread/process placement and the peculiarities of different MPI implementations are discussed in detail. We also identify the MPI libraries that already support asynchronous operations. Finally we show how our ideas can be extended to MPI-IO.
Asynchronous MPI for the Masses
We study the onset of the propagation failure of wave fronts in systems of coupled cells. We introduce a new method to analyze the scaling of the critical external field at which fronts cease to propagate, as a function of intercellular coupling. We find the universal scaling of the field throughout the range of couplings, and show that the field becomes exponentially small for large couplings. Our method is generic and applicable to a wide class of cellular dynamics in chemical, biological, and engineering systems. We confirm our results by direct numerical simulations.
Universal Scaling of Wave Propagation Failure in Arrays of Coupled Nonlinear Cells
The production of charged pion pairs via multiphoton absorption from an intense X-ray laser wave colliding with an ultrarelativistic proton beam is studied. Our calculations include the contributions from both the electromagnetic and hadronic interactions where the latter are described approximately by a phenomenological Yukawa potential. Order-of-magnitude estimates for $\pi^+\pi^-$ production on the proton by two- and three-photon absorption from the high-frequency laser field are obtained and compared with the corresponding rates for $\mu^+\mu^-$ pair creation.
Phenomenological model of multiphoton production of charged pion pairs on the proton
Algebraic space-time coding allows for reliable data exchange across fading multiple-input multiple-output channels. A powerful technique for decoding space-time codes is Maximum-Likelihood (ML) decoding, but well-performing and widely-used codes such as the Golden code often suffer from high ML-decoding complexity. In this article, a recursive algorithm for decoding general algebraic space-time codes of arbitrary dimension is proposed, which reduces the worst-case decoding complexity from $O(|S|^{n^2})$ to $O(|S|^n)$.
Reduced Complexity Decoding of n x n Algebraic Space-Time Codes
Recently, two new parallel algorithms for on-the-fly model checking of LTL properties were presented at the same conference: Automated Technology for Verification and Analysis, 2011. Both approaches extend Swarmed NDFS, which runs several sequential NDFS instances in parallel. While parallel random search already speeds up detection of bugs, the workers must share some global information in order to speed up full verification of correct models. The two algorithms differ considerably in the global information shared between workers, and in the way they synchronize. Here, we provide a thorough experimental comparison between the two algorithms, by measuring the runtime of their implementations on a multi-core machine. Both algorithms were implemented in the same framework of the model checker LTSmin, using similar optimizations, and have been subjected to the full BEEM model database. Because both algorithms have complementary advantages, we constructed an algorithm that combines both ideas. This combination clearly has an improved speedup. We also compare the results with the alternative parallel algorithm for accepting cycle detection OWCTY-MAP. Finally, we study a simple statistical model for input models that do contain accepting cycles. The goal is to distinguish the speedup due to parallel random search from the speedup that can be attributed to clever work sharing schemes.
Variations on Multi-Core Nested Depth-First Search
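The sequential nested depth-first search extended by both parallel algorithms can be sketched as follows. This is a toy single-threaded version of the classic algorithm, not the LTSmin implementation; the graph encoding is an assumption for illustration:

```python
def ndfs(successors, accepting, init):
    """Nested DFS for accepting-cycle (Buechi emptiness) detection.

    `successors`: dict mapping each state to a list of successor states.
    `accepting`: set of accepting states.
    Returns True iff some accepting state lies on a cycle reachable
    from `init`.
    """
    blue, red = set(), set()

    def dfs_red(s, seed):
        # Second (red) search: look for a cycle closing back at `seed`.
        red.add(s)
        for t in successors.get(s, []):
            if t == seed:
                return True
            if t not in red and dfs_red(t, seed):
                return True
        return False

    def dfs_blue(s):
        # First (blue) search; a red search is launched in post-order
        # from every accepting state, which is what makes NDFS sound.
        blue.add(s)
        for t in successors.get(s, []):
            if t not in blue and dfs_blue(t):
                return True
        if s in accepting and dfs_red(s, s):
            return True
        return False

    return dfs_blue(init)
```

The parallel variants compared in the paper run many such instances with permuted successor orders and differ in how the red/blue colourings are shared between workers.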
In vitro fertilization (IVF) comprises a sequence of interventions concerned with the creation and culture of embryos which are then transferred to the patient's uterus. While the clinically important endpoint is birth, the responses to each stage of treatment contain additional information about the reasons for success or failure. As such, the ability to predict not only the overall outcome of the cycle, but also the stage-specific responses, can be useful. This could be done by developing separate models for each response variable, but recent work has suggested that it may be advantageous to use a multivariate approach to model all outcomes simultaneously. Here, joint analysis of the sequential responses is complicated by mixed outcome types defined at two levels (patient and embryo). A further consideration is whether and how to incorporate information about the response at each stage in models for subsequent stages. We develop a case study using routinely collected data from a large reproductive medicine unit in order to investigate the feasibility and potential utility of multivariate prediction in IVF. We consider two possible scenarios. In the first, stage-specific responses are to be predicted prior to treatment commencement. In the second, responses are predicted dynamically, using the outcomes of previous stages as predictors. In both scenarios, we fail to observe benefits of joint modelling approaches compared to fitting separate regression models for each response variable.
Multivariate prediction of mixed, multilevel, sequential outcomes arising from in vitro fertilisation
A general electrodynamic theory of a grating coupled two dimensional electron system (2DES) is developed. The 2DES is treated quantum mechanically, the grating is considered as a periodic system of thin metal strips or as an array of quantum wires, and the interaction of collective (plasma) excitations in the system with electromagnetic field is treated within the classical electrodynamics. It is assumed that a dc current flows in the 2DES. We consider a propagation of an electromagnetic wave through the structure, and obtain analytic dependencies of the transmission, reflection, absorption and emission coefficients on the frequency of light, drift velocity of 2D electrons, and other physical and geometrical parameters of the system. If the drift velocity of 2D electrons exceeds a threshold value, a current-driven plasma instability is developed in the system, and an incident far infrared radiation is amplified. We show that in the structure with a quantum wire grating the threshold velocity of the amplification can be essentially reduced, as compared to the commonly employed metal grating, down to experimentally achievable values. Physically this is due to a considerable enhancement of the grating coupler efficiency because of the resonant interaction of plasma modes in the 2DES and in the grating. We show that tunable far infrared emitters, amplifiers and generators can thus be created at realistic parameters of modern semiconductor heterostructures.
Plasma instability and amplification of electromagnetic waves in low-dimensional electron systems
In the primordial Universe, neutrino decoupling occurs only slightly before electron-positron annihilations. This leads notably to an increased neutrino energy density compared to the standard instantaneous decoupling approximation, parametrized by the effective number of neutrino species $N_{\rm eff}$. A precise calculation of neutrino evolution is needed to assess its consequences during the later cosmological stages, and requires to take into account multiple effects such as neutrino oscillations, which represents a genuine numerical challenge. Recently, several key improvements have allowed such a precise numerical calculation, leading to the new reference value $N_{\rm eff}=3.0440$.
Precision calculation of neutrino evolution in the early Universe
We compare the stellar structure of the isolated, Local Group dwarf galaxy Pegasus (DDO216) with low resolution HI maps from Young et al. (2003). Our comparison reveals that Pegasus displays the characteristic morphology of ram pressure stripping; in particular, the HI has a ``cometary'' appearance which is not reflected in the regular, elliptical distribution of the stars. This is the first time this phenomenon has been observed in an isolated Local Group galaxy. The density of the medium required to ram pressure strip Pegasus is at least $10^{-5}-10^{-6}$ cm$^{-3}$. We conclude that this is strong evidence for an inter-galactic medium associated with the Local Group.
Ram pressure stripping of an isolated Local Group dwarf galaxy: evidence for an intra-group medium
We use the overlap formalism to define a topological index on the lattice. We study the spectral flow of the hermitian Wilson-Dirac operator and identify zero crossings with topological objects. We determine the topological susceptibility and zero mode size distribution, and we comment on the stability of our results.
Topological Susceptibility and Zero Mode Size in Lattice QCD
Birds-eye-view (BEV) semantic segmentation is critical for autonomous driving for its powerful spatial representation ability. It is challenging to estimate the BEV semantic maps from monocular images due to the spatial gap, since it is implicitly required to realize both the perspective-to-BEV transformation and segmentation. We present a novel two-stage Geometry Prior-based Transformation framework named GitNet, consisting of (i) the geometry-guided pre-alignment and (ii) ray-based transformer. In the first stage, we decouple the BEV segmentation into the perspective image segmentation and geometric prior-based mapping, with explicit supervision by projecting the BEV semantic labels onto the image plane to learn visibility-aware features and learnable geometry to translate into BEV space. Second, the pre-aligned coarse BEV features are further deformed by ray-based transformers to take visibility knowledge into account. GitNet achieves the leading performance on the challenging nuScenes and Argoverse Datasets.
GitNet: Geometric Prior-based Transformation for Birds-Eye-View Segmentation
The aim of this paper is to give a proof of the restriction theorems for principal bundles with a reductive algebraic group as structure group in arbitrary characteristic. Let $G$ be a reductive algebraic group over any field $k=\bar{k}$, let $X$ be a smooth projective variety over $k$, let $H$ be a very ample line bundle on $X$ and let $E$ be a semistable (resp. stable) principal $G$-bundle on $X$ w.r.t. $H$. The main result of this paper is that the restriction of $E$ to a general smooth curve which is a complete intersection of ample hypersurfaces of sufficiently high degrees is again semistable (resp. stable).
Restriction Theorems for Principal Bundles and Some Consequences
The oxidation-related issues in controlling Si doping from the Si source material in oxide molecular beam epitaxy (MBE) is addressed by using solid SiO as an alternative source material in a conventional effusion cell. Line-of-sight quadrupole mass spectrometry of the direct SiO-flux ($\Phi_{SiO}$) from the source at different temperatures ($T_{SiO}$) confirmed SiO molecules to sublime with an activation energy of 3.3eV. The $T_{SiO}$-dependent $\Phi_{SiO}$ was measured in vacuum before and after subjecting the source material to an O$_{2}$-background of $10^{-5}$ mbar (typical oxide MBE regime). The absence of a significant $\Phi_{SiO}$ difference indicates negligible source oxidation in molecular O$_{2}$. Mounted in an oxygen plasma-assisted MBE, Si-doped $\beta$-Ga2O3 layers were grown using this source. The $\Phi_{SiO}$ at the substrate was evaluated [from 2.9x10$^{9}$ cm$^{-2}$s$^{-1}$ ($T_{SiO}$=700{\deg}C) to 5.5x10$^{13}$ cm$^{-2}$s$^{-1}$ (T$_{SiO}$=1000{\deg}C)] and Si-concentration in the $\beta$-Ga2O3 layers measured by secondary ion mass spectrometry highlighting unprecedented control of continuous Si-doping for oxide MBE, i.e., $N_{Si}$ from 4x10$^{17}$ cm$^{-3}$ ($T_{SiO}$=700{\deg}C) up to 1.7x10$^{20}$ cm$^{-3}$ ($T_{SiO}$=900{\deg}C). For a homoepitaxial $\beta$-Ga2O3 layer an Hall charge carrier concentration of 3x10$^{19}$ cm$^{-3}$ in line with the provided $\Phi_{SiO}$ ($T_{SiO}$=800{\deg}C) is demonstrated. No SiO-incorporation difference was found between $\beta$-Ga2O3(010) layers homoepitaxially grown at 750{\deg}C and $\beta$-Ga2O3(-201) layers heteroepitaxially grown at 550{\deg}C. The presence of activated oxygen (plasma) resulted in partial source oxidation and related decrease of doping concentration (particularly at $T_{SiO}$<800{\deg}C) which has been tentatively explained with a simple model. Degassing the source at 1100{\deg}C reverted the oxidation.
Towards controllable Si-doping in oxide molecular beam epitaxy using a solid SiO source: Application to $\beta$-Ga2O3
Low-frequency radio surveys allow in-depth studies and new analyses of classes of sources previously known and characterised only in other bands. In recent years, low radio frequency observations of blazars have become available thanks to new surveys, such as the GaLactic and Extragalactic All-sky MWA Survey (GLEAM). We search for gamma-ray blazars in a low frequency (${\nu}$ < 240 MHz) survey to characterise the spectral properties of the spatial components. We cross-correlate GLEAM with the fourth catalogue of active galactic nuclei (4LAC) detected by the Fermi satellite. This improves over previous works by using a low frequency catalogue that is wider and deeper, with better spectral coverage, together with the latest and most sensitive gamma-ray source list. In comparison to the previous study based on the commissioning survey, the detection rate increased from 35% to 70%. We include Australia Telescope 20 GHz (AT20G) Survey data to extract high-frequency, high-angular-resolution information on the radio cores of blazars. We find low radio frequency counterparts for 1274 out of 1827 blazars in the 72-231 MHz range. Blazars have a flat spectrum in the $\sim$100 MHz regime, with a mean spectral index ${\alpha}$ = -0.44 +- 0.01 (assuming S $\propto$ ${\nu}^{\alpha}$). Low synchrotron peaked objects show a more scattered spectrum than high synchrotron peaked objects. Low frequency radio and gamma-ray emission show a significant but scattered correlation. The ratio between lobe and core radio emission in gamma-ray blazars is smaller than previously estimated.
Radio spectral properties of cores and extended regions in blazars in the MHz regime
Question answering (QA) is an important natural language processing (NLP) task and has received much attention in academic research and industry communities. Existing QA studies assume that questions are raised by humans and answers are generated by machines. Nevertheless, in many real applications, machines are also required to determine human needs or perceive human states. In such scenarios, machines may proactively raise questions and humans supply answers. Subsequently, machines should attempt to understand the true meaning of these answers. This new QA approach is called reverse-QA (rQA) throughout this paper. In this work, the human answer understanding problem is investigated and solved by classifying the answers into predefined answer-label categories (e.g., True, False, Uncertain). To explore the relationships between questions and answers, we use the interactive attention network (IAN) model and propose an improved structure called semi-interactive attention network (Semi-IAN). Two Chinese data sets for rQA are compiled. We evaluate several conventional text classification models for comparison, and experimental results indicate the promising performance of our proposed models.
Semi-interactive Attention Network for Answer Understanding in Reverse-QA
The Compact Linear Collider (CLIC) is a TeV-scale high-luminosity linear $e^+e^-$ collider under development by international collaborations hosted by CERN. This document provides an overview of the design, technology, and implementation aspects of the CLIC accelerator. For an optimal exploitation of its physics potential, CLIC is foreseen to be built and operated in stages, at centre-of-mass energies of 380 GeV, 1.5 TeV and 3 TeV, for a site length ranging between 11 km and 50 km. CLIC uses a Two-Beam acceleration scheme, in which normal-conducting high-gradient 12 GHz accelerating structures are powered via a high-current Drive Beam. For the first stage, an alternative with X-band klystron powering is also considered. CLIC accelerator optimisation, technical developments, and system tests have resulted in significant progress in recent years. Moreover, this has led to increased energy efficiency and a reduced power consumption of around 170 MW for the 380 GeV stage, together with a reduced cost estimate of approximately 6 billion CHF. The construction of the first CLIC energy stage could start as early as 2026 and first beams would be available by 2035, marking the beginning of a physics programme spanning 25-30 years and providing excellent sensitivity to Beyond Standard Model physics, through direct searches and via a broad set of precision measurements of Standard Model processes, particularly in the Higgs and top-quark sectors.
The Compact Linear Collider (CLIC) - Project Implementation Plan
Real active distribution networks with associated smart meter (SM) data are critical for power researchers. However, it is practically difficult for researchers to obtain such comprehensive datasets from utilities due to privacy concerns. To bridge this gap, an implicit generative model with Wasserstein GAN objectives, namely unbalanced graph generative adversarial network (UG-GAN), is designed to generate synthetic three-phase unbalanced active distribution system connectivity. The basic idea is to learn the distribution of random walks both over a real-world system and across each phase of line segments, capturing the underlying local properties of an individual real-world distribution network and generating specific synthetic networks accordingly. Then, to create a comprehensive synthetic test case, a network correction and extension process is proposed to obtain time-series nodal demands and standard distribution grid components with realistic parameters, including distributed energy resources (DERs) and capacitor banks. A Midwest distribution system with 1-year SM data has been utilized to validate the performance of our method. Case studies with several power applications demonstrate that synthetic active networks generated by the proposed framework can mimic almost all features of real-world networks while avoiding the disclosure of confidential information.
Synthetic Active Distribution System Generation via Unbalanced Graph Generative Adversarial Network
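The random-walk representation that the generative model learns from can be illustrated with a toy sampler. This is an assumption-laden sketch of the data-preparation step only (the actual UG-GAN trains a GAN on such walk sequences); the function and argument names are hypothetical:

```python
import random

def sample_walks(adjacency, num_walks, walk_length, seed=0):
    """Sample fixed-length uniform random walks over a feeder graph.

    `adjacency`: dict mapping node -> list of neighbouring nodes.
    Returns a list of walks (lists of nodes); sequences like these are
    the training examples a walk-based generative model would consume.
    """
    rng = random.Random(seed)
    nodes = list(adjacency)
    walks = []
    for _ in range(num_walks):
        current = rng.choice(nodes)        # random starting node
        walk = [current]
        for _ in range(walk_length - 1):
            current = rng.choice(adjacency[current])  # uniform next hop
            walk.append(current)
        walks.append(walk)
    return walks
```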
The Mallows measure is a measure on permutations introduced by Mallows in connection with ranking problems in statistics. Under this measure, the probability of a permutation $\pi$ is proportional to $q^{Inv(\pi)}$, where $q$ is a positive parameter and $Inv(\pi)$ is the number of inversions in $\pi$. We consider the length of the longest common subsequence (LCS) of two permutations drawn independently according to $\mu_{n,q}$ and $\mu_{n,q'}$ for some $q,q' >0$. We show that when $0<q,q'<1$, the limiting law of the LCS is Gaussian. In the regime where $n(1-q) \to \infty$ and $n(1-q') \to \infty$ we show a weak law of large numbers for the LCS. These results extend the results of \cite{Basu} and \cite{Naya}, which show weak laws and a limiting law for the distribution of the longest increasing subsequence, to corresponding results for the longest common subsequence.
Limit Theorems for the Length of the Longest Common Subsequence of Mallows Permutations
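The Mallows measure can be sampled exactly by the repeated-insertion construction: inserting element $i$ with $m$ smaller elements to its right adds exactly $m$ inversions, so $m$ is drawn with probability proportional to $q^m$. A short Python sketch (helper names are illustrative, not from the paper):

```python
import random

def inversions(perm):
    """Number of pairs i < j with perm[i] > perm[j]."""
    return sum(perm[i] > perm[j]
               for i in range(len(perm)) for j in range(i + 1, len(perm)))

def sample_mallows(n, q, rng=random):
    """Draw a permutation of 1..n with P(pi) proportional to q**inversions(pi)."""
    perm = []
    for i in range(1, n + 1):
        # Placing i with m elements to its right adds m inversions,
        # since every previously inserted element is smaller than i.
        weights = [q ** m for m in range(i)]
        m = rng.choices(range(i), weights=weights)[0]
        perm.insert(len(perm) - m, i)
    return perm
```

As $q \to 0$ the measure concentrates on the identity permutation (zero inversions), while $q = 1$ recovers the uniform measure.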
Helmut Hofer introduced in '93 a novel technique based on holomorphic curves to prove the Weinstein conjecture. Among the cases where these methods apply are all contact 3--manifolds $(M,\xi)$ with $\pi_2(M) \ne 0$. We modify Hofer's argument to prove the Weinstein conjecture for some examples of higher dimensional contact manifolds. In particular, we are able to show that the connected sum with a real projective space always has a closed contractible Reeb orbit.
The Weinstein conjecture in the presence of submanifolds having a Legendrian foliation
All the coherent pairs of measures associated to linear functionals $c_0$ and $c_1$, introduced by Iserles et al in 1991, have been given by Meijer in 1997. There exist seven kinds of coherent pairs. All these cases are explored in order to give three-term recurrence relations satisfied by the associated polynomials. The smallest zero $\mu_{1,n}$ of the degree-$n$ polynomial in each case is linked to the Markov-Bernstein constant $M_n$ appearing in the following Markov-Bernstein inequalities: $$ c_1((p^\prime)^2) \le M_n^2 c_0(p^2), \quad \forall p \in \mathcal{P}_n, $$ where $M_n=\frac{1}{\sqrt{\mu_{1,n}}}$. The seven kinds of three-term recurrence relations are given. In the case where $c_0 =e^{-x} dx+\delta(0)$ and $c_1 =e^{-x} dx$, explicit upper and lower bounds are given for $\mu_{1,n}$, and the asymptotic behavior of the corresponding Markov-Bernstein constant is stated. Except in a part of one case, $\lim_{n \to \infty} \mu_{1,n}=0$ is proved in all the cases.
Coherent pairs of measures and Markov-Bernstein inequalities
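For polynomials satisfying a monic three-term recurrence $p_{k+1}(x) = (x - a_k)p_k(x) - b_k p_{k-1}(x)$ with $b_k > 0$, the zeros of $p_n$ are the eigenvalues of the symmetric $n\times n$ Jacobi matrix with diagonal $a_k$ and off-diagonal $\sqrt{b_k}$, so $\mu_{1,n}$ and hence $M_n = 1/\sqrt{\mu_{1,n}}$ reduce to a symmetric eigenvalue computation. A sketch for the plain Laguerre weight $e^{-x}dx$ (an illustrative case, not one of the seven coherent-pair recurrences from the paper):

```python
import numpy as np

def smallest_zero(a, b):
    """Smallest zero of the degree-n orthogonal polynomial with monic
    recurrence p_{k+1}(x) = (x - a[k]) p_k(x) - b[k] p_{k-1}(x),
    found as the smallest eigenvalue of the symmetric Jacobi matrix."""
    J = np.diag(np.asarray(a, dtype=float))
    off = np.sqrt(np.asarray(b, dtype=float))   # requires b[k] > 0
    J += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(J).min()

def laguerre_markov_bernstein(n):
    """M_n = 1/sqrt(mu_{1,n}) for monic Laguerre: a_k = 2k+1, b_k = k^2."""
    a = [2 * k + 1 for k in range(n)]
    b = [k * k for k in range(1, n)]
    return 1.0 / np.sqrt(smallest_zero(a, b))
```

For $n=2$ the monic Laguerre polynomial is $x^2 - 4x + 2$, whose smallest zero $2-\sqrt{2}$ the Jacobi matrix reproduces.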
A particularly promising pathway to enhance the efficiency of thermoelectric materials lies in the use of resonant states, as suggested by experimentalists and theorists alike. In this paper, we go over the mechanisms used in the literature to explain how resonant levels affect the thermoelectric properties, and we suggest that the effects of hybridization are crucial yet ill-understood. In order to get a good grasp of the physical picture and to draw guidelines for thermoelectric enhancement, we use a tight-binding model containing a conduction band hybridized with a flat band. We find that the conductivity is suppressed in a wide energy range near the resonance, but that the Seebeck coefficient can be boosted for strong enough hybridization, thus allowing for a significant increase of the power factor. The Seebeck coefficient can also display a sign change as the Fermi level crosses the resonance. Our results suggest that in order to boost the power factor, the hybridization strength must not be too low, the resonant level must not be too close to the conduction (or valence) band edge, and the Fermi level must be located around, but not inside, the resonant peak.
Boosting the power factor with resonant states: a model study
Abnormal electrical activity from the boundaries of ischemic cardiac tissue is recognized as one of the major causes of ischemia-reperfusion arrhythmias. Here we present a theoretical analysis of the waves of electrical activity that can rise on the boundary of a cardiac cell network upon its recovery from ischaemia-like conditions. The main factors included in our analysis are macroscopic gradients of the cell-to-cell coupling and cell excitability and microscopic heterogeneity of individual cells. The interplay between these factors allows one to explain how spirals form, drift together with the moving boundary, get transiently pinned to local inhomogeneities, and finally penetrate into the bulk of the well-coupled tissue where they reach macroscopic scale. The asymptotic theory of the drift of spiral and scroll waves based on response functions provides explanation of the drifts involved in this mechanism, with the exception of effects due to the discreteness of cardiac tissue. In particular, this asymptotic theory allows an extrapolation of 2D events into 3D, which has shown that cells within the border zone can give rise to 3D analogues of spirals, the scroll waves. When and if such scroll waves escape into a better coupled tissue, they are likely to collapse due to the positive filament tension. However, our simulations have shown that such collapse of newly generated scrolls is not inevitable and that under certain conditions filament tension becomes negative, leading scroll filaments to expand and multiply, resulting in a fibrillation-like state within small areas of cardiac tissue.
Evolution of spiral and scroll waves of excitation in a mathematical model of ischaemic border zone
A model for a possible variable cosmic object is presented. The model consists of a massive shell surrounding a compact object. The gravitational and self-gravitational forces tend to collapse the shell, but the internal tangential stresses oppose the collapse. The combined action of the two types of forces is studied and several cases are presented. In particular, we investigate the spherically symmetric case in which the shell oscillates radially around a central compact object.
Oscillating shells: A model for a variable cosmic object
Current interest in nuclear reactions, specifically with rare isotopes, concentrates on their reactions with neutrons, in particular neutron capture. In order to facilitate reactions with neutrons, one must use indirect methods with deuterons as the beam or target of choice. For adding neutrons, the most common reaction is the (d,p) reaction, in which the deuteron breaks up and the neutron is captured by the nucleus. These (d,p) reactions may be viewed as a three-body problem in a many-body context. This contribution reports on a feasibility study for describing phenomenological nucleon-nucleus optical potentials in momentum space in a separable form, so that they may be used for Faddeev calculations of (d,p) reactions.
Nuclear Reactions: A Challenge for Few- and Many-Body Theory
We present 6.5-m MMT and 3.5-m APO spectrophotometry of 69 H II regions in 42 low-metallicity emission-line galaxies, selected from the Data Release 7 of the Sloan Digital Sky Survey to have mostly [O III]4959/Hbeta < 1 and [N II]6583/Hbeta < 0.1. The electron temperature-sensitive emission line [O III] 4363 is detected in 53 H II regions allowing a direct abundance determination. The oxygen abundance in the remaining 16 H II regions is derived using a semi-empirical method. The oxygen abundance of the galaxies in our sample ranges from 12 + log O/H ~ 7.1 to ~ 7.9, with 14 H II regions in 7 galaxies with 12 +log O/H < 7.35. In 5 of the latter galaxies, the oxygen abundance is derived here for the first time. Including other known extremely metal-deficient emission-line galaxies from the literature, e.g. SBS 0335-052W, SBS 0335-052E and I Zw 18, we have compiled a sample of the 17 most metal-deficient (with 12 +log O/H < 7.35) emission-line galaxies known in the local universe. There appears to be a metallicity floor at 12 +log O/H ~ 6.9, suggesting that the matter from which dwarf emission-line galaxies formed was pre-enriched to that level by e.g. Population III stars.
Hunting for extremely metal-poor emission-line galaxies in the Sloan Digital Sky Survey: MMT and 3.5m APO observations
An exact description of integrable spin chains at finite temperature is provided using an elementary algebraic approach in the complete Hilbert space of the system. We focus on spin chain models that admit a description in terms of free fermions, including paradigmatic examples such as the one-dimensional transverse-field quantum Ising and XY models. The exact partition function is derived and compared with the ubiquitous approximation in which only the positive parity sector of the energy spectrum is considered. Errors stemming from this approximation are identified in the neighborhood of the critical point at low temperatures. We further provide the full counting statistics of a wide class of observables at thermal equilibrium and characterize in detail the thermal distribution of the kink number and transverse magnetization in the transverse-field quantum Ising chain.
Exact thermal properties of free-fermionic spin chains
Smart contracts are Turing-complete programs executed across a blockchain. Unlike traditional programs, once deployed they cannot be modified. As smart contracts carry more value, they become an increasingly attractive target for attackers. Over the past years, they have suffered from exploits costing millions of dollars due to simple programming mistakes. As a result, a variety of tools for detecting bugs have been proposed. Most of these tools rely on symbolic execution, which may yield false positives due to over-approximation. Recently, many fuzzers have been proposed to detect bugs in smart contracts. However, these tend to be effective at finding shallow bugs but less effective at finding bugs that lie deep in the execution, resulting in low code coverage and many false negatives. An alternative that has proven to achieve good results on traditional programs is hybrid fuzzing, a combination of symbolic execution and fuzzing. In this work, we study hybrid fuzzing on smart contracts and present ConFuzzius, the first hybrid fuzzer for smart contracts. ConFuzzius uses evolutionary fuzzing to exercise shallow parts of a smart contract and constraint solving to generate inputs that satisfy complex conditions preventing evolutionary fuzzing from exploring deeper parts. Moreover, ConFuzzius leverages dynamic data dependency analysis to efficiently generate sequences of transactions that are more likely to result in contract states in which bugs may be hidden. We evaluate the effectiveness of ConFuzzius by comparing it with state-of-the-art symbolic execution tools and fuzzers for smart contracts. Our evaluation on a curated dataset of 128 contracts and 21K real-world contracts shows that our hybrid approach detects more bugs (up to 23%) while outperforming the state of the art in terms of code coverage (up to 69%), and that data dependency analysis boosts bug detection by up to 18%.
ConFuzzius: A Data Dependency-Aware Hybrid Fuzzer for Smart Contracts
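The shallow-versus-deep tradeoff motivating the hybrid approach above can be illustrated with a toy example. This is not ConFuzzius or any real EVM tooling: `target`, `MAGIC`, and the "symbolic" phase are hypothetical stand-ins, with the magic-value branch playing the role of a complex path condition that random mutation is vanishingly unlikely to satisfy but a constraint solver resolves directly.

```python
import random

MAGIC = 0x5EEDF00D  # hypothetical hard-to-guess guard value

def target(x):
    """Toy contract-like function with a shallow bug and a deep, guarded bug."""
    if x % 7 == 0:
        return "shallow-bug"   # hit often by random inputs
    if x == MAGIC:
        return "deep-bug"      # probability ~2**-32 per random trial
    return "ok"

def random_fuzz(trials=10_000, seed=0):
    """Pure random fuzzing: finds the shallow bug, misses the deep one."""
    rng = random.Random(seed)
    found = set()
    for _ in range(trials):
        found.add(target(rng.randrange(2**32)))
    return found

def hybrid_fuzz(trials=10_000, seed=0):
    """Hybrid sketch: fuzz first, then 'solve' the uncovered branch condition.

    A real hybrid fuzzer would hand the path constraint x == MAGIC to an
    SMT solver; here we simply use the solved value directly.
    """
    found = random_fuzz(trials, seed)
    found.add(target(MAGIC))
    return found
```

The sketch captures why coverage stalls for pure fuzzers: each unexplored deep branch corresponds to a constraint that random sampling satisfies only by luck, whereas constraint solving satisfies it by construction.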
The superconformal central charge is an important quantity for theories arising from string-theoretic geometric realizations of quantum field theories, since it is linked, for example, to the scaling dimension of fields. The Butti-Zaffaroni construction of the central charge for toric Calabi-Yau threefold geometries is a powerful tool, but its implementation can be quite tricky. Here we present an equivalent new construction based on a decomposition of the toric diagram into 2-simplexes.
2-simplexes and superconformal central charges
A quantum annealer heuristically minimizes quadratic unconstrained binary optimization (QUBO) problems, but is limited by the physical hardware in the size and density of the problems it can handle. We have developed a meta-heuristic solver that utilizes D-Wave Systems' quantum annealer (or any other QUBO problem optimizer) to solve larger or denser problems, by iteratively solving subproblems, while keeping the rest of the variables fixed. We present our algorithm, several variants, and the results for the optimization of standard QUBO problem instances from OR-Library of sizes 500 and 2500 as well as the Palubeckis instances of sizes 3000 to 7000. For practical use of the solver, we show the dependence of the time to best solution on the desired gap to the best known solution. In addition, we study the dependence of the gap and the time to best solution on the size of the problems solved by the underlying optimizer.
Building an iterative heuristic solver for a quantum annealer
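The iterative subproblem strategy described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' solver: `solve_subqubo` brute-forces a small subset of variables exactly, standing in for a call to a quantum annealer (or any other QUBO optimizer) on the k-variable subproblem, while the remaining variables stay fixed.

```python
import itertools
import random

import numpy as np

def qubo_energy(Q, x):
    """Energy of binary vector x under QUBO matrix Q: x^T Q x."""
    return float(x @ Q @ x)

def solve_subqubo(Q, x, idx):
    """Exactly optimize the variables in idx with the rest of x held fixed.

    Stand-in for the underlying (quantum) optimizer on the subproblem.
    """
    best_x, best_e = x.copy(), qubo_energy(Q, x)
    for bits in itertools.product([0, 1], repeat=len(idx)):
        trial = x.copy()
        trial[list(idx)] = bits
        e = qubo_energy(Q, trial)
        if e < best_e:
            best_x, best_e = trial, e
    return best_x, best_e

def iterative_solver(Q, k=8, iters=200, seed=0):
    """Meta-heuristic: repeatedly solve random k-variable subproblems.

    Because the current configuration is always a candidate, the energy
    is monotonically non-increasing across iterations.
    """
    rng = random.Random(seed)
    n = Q.shape[0]
    x = np.array([rng.randint(0, 1) for _ in range(n)])
    best_e = qubo_energy(Q, x)
    for _ in range(iters):
        idx = rng.sample(range(n), k)
        x, best_e = solve_subqubo(Q, x, idx)
    return x, best_e
```

The subset-selection rule here is uniform random; the paper's variants (e.g., how subproblems are chosen and how large they are relative to the hardware graph) are exactly where a real implementation would differ from this sketch.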
Although unsupervised domain adaptation (UDA) is a promising direction for alleviating domain shift, UDA methods fall short of their supervised counterparts. In this work, we investigate the relatively less explored semi-supervised domain adaptation (SSDA) setting for medical image segmentation, where access to a few labeled target samples can improve adaptation performance substantially. Specifically, we propose a two-stage training process. First, an encoder is pre-trained in a self-learning paradigm using a novel domain-content disentangled contrastive learning (CL) objective along with a pixel-level feature consistency constraint. The proposed CL enforces the encoder to learn discriminative, content-specific but domain-invariant semantics on a global scale from the source and target images, whereas the consistency regularization enforces the mining of local pixel-level information by maintaining spatial sensitivity. This pre-trained encoder, along with a decoder, is further fine-tuned for the downstream task (i.e., pixel-level segmentation) in a semi-supervised setting. Furthermore, we experimentally validate that our proposed method can easily be extended to UDA settings, adding to the superiority of the proposed strategy. Upon evaluation on two domain adaptive image segmentation tasks, our proposed method outperforms the SoTA methods in both SSDA and UDA settings. Code is available at https://github.com/hritam-98/GFDA-disentangled
Semi-supervised Domain Adaptive Medical Image Segmentation through Consistency Regularized Disentangled Contrastive Learning
Let $R$ be a commutative complex Banach algebra with the involution $\cdot ^\star$ and suppose that $A\in R^{n\times n}$, $B\in R^{n\times m}$, $C\in R^{p\times n}$. The question of when the Riccati equation $$ PBB^\star P-PA-A^\star P-C^\star C=0 $$ has a solution $P\in R^{n\times n}$ is investigated. A counterexample to a previous result in the literature on this subject is given, followed by sufficient conditions on the data guaranteeing the existence of such a $P$. Finally, applications to spatially distributed systems are discussed.
On Riccati equations in Banach algebras
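In the simplest special case R = C^{n x n} with the involution given by the conjugate transpose, the displayed equation is (up to an overall sign) the standard continuous-time algebraic Riccati equation A*P + PA - PBB*P + C*C = 0, which can be checked numerically. A minimal sketch, assuming SciPy's `solve_continuous_are` convention (it solves A^H X + X A - X B R^{-1} B^H X + Q = 0, so we take Q = C*C and R = I); the toy matrices are illustrative, not from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy data over R = C (complex matrices); .conj().T plays the role of the star.
A = np.array([[0.0, 1.0], [-2.0, -3.0]], dtype=complex)
B = np.array([[0.0], [1.0]], dtype=complex)
C = np.array([[1.0, 0.0]], dtype=complex)

# solve_continuous_are solves A^H P + P A - P B R^{-1} B^H P + Q = 0,
# the negative of the abstract's equation with Q = C^H C and R = I.
P = solve_continuous_are(A, B, C.conj().T @ C, np.eye(1))

# Residual of the equation exactly as written in the abstract.
residual = P @ B @ B.conj().T @ P - P @ A - A.conj().T @ P - C.conj().T @ C
```

Of course, the point of the paper is precisely the general Banach-algebra setting, where no such off-the-shelf finite-dimensional solver applies; this check only illustrates the matrix case the equation reduces to.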
In two-dimensional (2D) electron systems, an off-resonant high-frequency circularly polarized electromagnetic field can induce quasi-stationary bound electron states at repulsive scatterers. As a consequence, resonant scattering of conduction electrons through the quasi-stationary states and capture of conduction electrons by the states appear. The present theory describes how these processes modify the transport properties of a 2D electron gas irradiated by circularly polarized light. In particular, it is demonstrated that irradiation of 2D electron systems by the off-resonant field results in a resonant quantum correction to the conductivity.
Light-induced bound electron states in two-dimensional systems: Contribution to electron transport
The crucial observation of subpulses (overtones) in the power spectral density of the August 27 (1998) event from SGR1900+14, as discovered by BeppoSAX (Feroci et al. 1999), has received no consistent explanation in the context of the competing theories for the SGR phenomenology: the magnetar and accretion-driven models. Based on the ultra-relativistic, ultracompact X-ray binary model introduced in the accompanying paper (Mosquera Cuesta 2004a), I present here a self-consistent explanation for this striking feature. I suggest that both the fundamental mode and the overtones observed in SGR1900+14 stem from pulsations of a massive white dwarf (WD). The fundamental mode (and likely some of its harmonics) is excited through the mutual gravitational interaction with its orbital companion (a NS, envisioned here as a point-mass object) whenever the binary Keplerian orbital frequency is an integer multiple ($m$) of that mode frequency (Pons et al. 2002). Besides, a large part of the powerful irradiation from the fireball-like explosion occurring on the NS (after partial accretion of disk material) is absorbed in different regions of the star, driving the excitation of other multipoles (Podsiadlowski 1991, 1995), i.e., the overtones (fluid modes of higher frequency). Part of this energy is then reemitted into space from the WD surface or atmosphere. In this way, the WD lightcurve carries the signature of these pulsations, in much the same way as the Sun's pulsations are imprinted on its lightcurve in helioseismology. It is shown that our theoretical prediction for the pulsation spectrum agrees quite well with the one found by BeppoSAX (Feroci et al. 1999), a feature confirmed by numerical simulations (Montgomery & Winget 2000).
An origin for the main pulsation and overtones of SGR1900+14 during the August 27 (1998) superoutburst
Spectrum scarcity has been a major concern for achieving the desired quality of experience (QoE) in next-generation (5G/6G and beyond) networks supporting a massive volume of mobile and IoT devices with low-latency and seamless connectivity. Hence, spectrum sharing systems have been considered a major enabler for next-generation wireless networks in meeting QoE demands. While most current coexistence solutions and standards focus on performance improvement and QoE optimization, the emerging security challenges of such network environments have been ignored in the literature. The security framework of standalone networks (either 5G or WiFi) assumes ownership of the entire network resources, from spectrum to core functions. Hence, all accesses to the network shall be authenticated and authorized within the intra-network security system and are deemed illegal otherwise. However, coexistence network environments can lead to unprecedented security vulnerabilities and breaches, as the standalone networks must tolerate unknown and out-of-network accesses, specifically in medium access. In this paper, for the first time in the literature, we review some of the critical and emerging security vulnerabilities in the 5G/WiFi coexistence network environment which have not been observed previously in standalone networks. Specifically, independent medium access control (MAC) protocols and the resulting hidden-node issues can result in exploits such as service blocking, deployment of rogue base stations, and eavesdropping attacks. We study potential vulnerabilities from the perspective of physical layer authentication, network access security, and cross-layer authentication mechanisms. This study opens a new direction of research into the analysis and design of a security framework that can address the unique challenges of coexistence networks.
Security and Privacy vulnerabilities of 5G/6G and WiFi 6: Survey and Research Directions from a Coexistence Perspective
Human mobility is one of the key factors at the basis of the spreading of diseases in a population. Containment strategies are usually devised from movement scenarios based on coarse-grained assumptions. Mobile phone data provide a unique opportunity for building models and defining strategies based on very precise information about the movement of people in a region or in a country. Another very important aspect is the underlying social structure of a population, which might play a fundamental role in devising information campaigns to promote vaccination and preventive measures, especially in countries with a strong family (or tribal) structure. In this paper we analyze a large-scale dataset describing the mobility and the call patterns of a large number of individuals in Ivory Coast. We present a model that describes how diseases spread across the country by exploiting mobility patterns of people extracted from the available data. Then, we simulate several epidemic scenarios and we evaluate mechanisms to contain the epidemic spreading of diseases, based on information about people's mobility and social ties, also gathered from the phone call data. More specifically, we find that restricting mobility does not delay the occurrence of an endemic state and that an information campaign based on one-to-one phone conversations among members of social groups might be an effective countermeasure.
Exploiting Cellular Data for Disease Containment and Information Campaigns Strategies in Country-Wide Epidemics
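Mobility-driven spreading of the kind modeled above is commonly captured with a metapopulation compartmental model, where a mobility matrix couples the epidemics of different regions. The following is a deliberately simplified deterministic SIR sketch, not the paper's model: the mobility matrix `M` is random toy data standing in for the phone-derived flows, and `metapop_sir_step` is a hypothetical name.

```python
import numpy as np

def metapop_sir_step(S, I, R, M, beta=0.3, gamma=0.1):
    """One step of a deterministic metapopulation SIR model.

    S, I, R: per-region compartment counts; M: row-stochastic mobility
    matrix, M[i, j] = fraction of region i residents mixing in region j.
    """
    N = S + I + R
    # Prevalence experienced in each destination region, via mobility mixing.
    prevalence = (M.T @ I) / (M.T @ N)
    # Per-capita infection rate of region i residents, averaged over
    # the regions they visit.
    force = beta * (M @ prevalence)
    new_inf = force * S
    new_rec = gamma * I
    return S - new_inf, I + new_inf - new_rec, R + new_rec

rng = np.random.default_rng(0)
n = 5
M = rng.random((n, n))
M /= M.sum(axis=1, keepdims=True)  # normalize rows: everyone goes somewhere
S = np.full(n, 990.0)
I = np.full(n, 10.0)
R = np.zeros(n)
for _ in range(100):
    S, I, R = metapop_sir_step(S, I, R, M)
```

Rescaling the off-diagonal entries of `M` toward zero is the natural way to experiment with mobility restrictions in this sketch; the paper's finding is that such restrictions delay little, which this kind of coupled model can be used to probe.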
Hard spheres in Newtonian fluids serve as paradigms for non-Newtonian phenomena exhibited by colloidal suspensions. A recent experimental study (Cheng et al. 2011, Science, 333, 1276) showed that upon application of shear to such a system, the particles form string-like structures aligned in the vorticity direction. We explore the mechanism underlying this out-of-equilibrium organization with Steered Transition Path Sampling, which allows us to bias the Brownian contribution to rotations of close pairs of particles and alter the dynamics of the suspension in a controlled fashion. Our results show a strong correlation between the string structures and the rotation dynamics. Specifically, the simulations show that accelerating the rotations of close pairs of particles, not increasing their frequency, favors formation of the strings. This insight delineates the roles of hydrodynamics, Brownian motion, and particle packing, and, in turn, informs design strategies for controlling the assembly of large-scale particle structures.
Influence of Inter-Layer Exchanges on Vorticity-Aligned Colloidal String Assembly in a Simple Shear Flow
Answering a question posed by Adam Epstein, we show that the collection of conjugacy classes of polynomials admitting a parabolic fixed point and at most one infinite critical orbit is a set of bounded height in the relevant moduli space. We also apply the methods over function fields to draw conclusions about algebraically parametrized families, and prove an analogous result for quadratic rational maps.
Critical orbits of polynomials with a periodic point of specified multiplier