We introduce canonical correlation forests (CCFs), a new decision tree ensemble method for classification and regression. Individual canonical correlation trees are binary decision trees with hyperplane splits based on local canonical correlation coefficients calculated during training. Unlike axis-aligned alternatives, the decision surfaces of CCFs are not restricted to the coordinate system of the input features and therefore more naturally represent data with correlated inputs. CCFs naturally accommodate multiple outputs, provide a similar computational complexity to random forests, and inherit their impressive robustness to the choice of input parameters. As part of the CCF training algorithm, we also introduce projection bootstrapping, a novel alternative to bagging for oblique decision tree ensembles which maintains use of the full dataset in selecting split points, often leading to improvements in predictive accuracy. Our experiments show that, even without parameter tuning, CCFs outperform axis-aligned random forests and other state-of-the-art tree ensemble methods on both classification and regression problems, delivering both improved predictive accuracy and faster training times. We further show that they outperform all of the 179 classifiers considered in a recent extensive survey.
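The core of a canonical correlation tree split can be illustrated as follows: at a node, a canonical correlation analysis between the features and a one-hot encoding of the labels yields an oblique projection direction, and the split threshold is then chosen on the projected scores. The sketch below is a minimal illustration using scikit-learn's CCA and a Gini-impurity threshold search; the single canonical component, the dropped one-hot column, and the exhaustive threshold search are simplifying assumptions, and projection bootstrapping and the full tree-growing logic of CCFs are omitted.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def gini(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def cca_split(X, y):
    """One oblique split: project onto the first canonical direction between
    features and (one-hot) labels, then pick a Gini-optimal threshold on the
    canonical score. Pure nodes are not handled in this sketch."""
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :-1]).astype(float)  # drop one column to avoid collinearity
    cca = CCA(n_components=1).fit(X, Y)
    z = cca.transform(X).ravel()                           # canonical scores
    best_t, best_imp = None, np.inf
    for t in np.unique(z)[:-1]:
        left, right = y[z <= t], y[z > t]
        imp = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if imp < best_imp:
            best_t, best_imp = t, imp
    return cca, best_t

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2)) @ np.array([[1.0, 0.8], [0.0, 0.6]])  # correlated inputs
y = (X @ np.array([1.0, -1.0]) > 0.0).astype(int)
cca, t = cca_split(X, y)
print("threshold on canonical score:", t)
```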
The Wu invariant is a regular homotopy invariant for immersions of oriented 3-manifolds into the 5-space. In this paper, we present new expressions and vanishing theorems for the invariant, from the viewpoint of almost contact structures and complex tangents. As an application, we determine the regular homotopy classes of inclusion maps of the surface singularity links of type ADE into the 5-sphere.
Bribery in an election is one of the well-studied control problems in computational social choice. In this paper, we propose and study the safe bribery problem. Here the goal of the briber is to ask the bribed voters to vote in such a way that the briber never prefers the original winner (of the unbribed election) to the new winner, even if the bribed voters do not fully follow the briber's advice. Indeed, in many applications of bribery, campaigning for example, the briber often has limited control over whether the bribed voters eventually follow her recommendation, and thus it is conceivable that the bribed voters partially or fully ignore the briber's recommendation. We provide a comprehensive complexity-theoretic landscape of the safe bribery problem for many common voting rules.
We give a new conceptual proof of the classification of cuspidal modules for the solenoidal Lie algebra. This classification was originally published by Y. Su. Our proof is based on the theory of modules for the solenoidal Lie algebras that admit a compatible action of the commutative algebra of functions on a torus.
Results of experiments on the dynamics and kinetic roughening of one-dimensional slow-combustion fronts in three grades of paper are reported. Extensive averaging of the data allows a detailed analysis of the spatial and temporal development of the interface fluctuations. The asymptotic scaling properties, on long length and time scales, are well described by the Kardar-Parisi-Zhang (KPZ) equation with short-range, uncorrelated noise. To obtain a more detailed picture of the strong-coupling fixed point characteristic of the KPZ universality class, universal amplitude ratios and the universal coupling constant are computed from the data and found to be in good agreement with theory. Below the spatial and temporal scales at which a cross-over takes place to the standard KPZ behavior, the fronts display higher apparent exponents and apparent multiscaling. In this regime the interface velocities are spatially and temporally correlated, and the distribution of the magnitudes of the effective noise has a power-law tail. The relation between the observed short-range behavior and the noise, as determined from the local velocity fluctuations, is discussed.
The Thomson effect induces heat release or absorption under the simultaneous application of a charge current and a temperature gradient to conductors. Here, we theoretically investigate the temperature profile due to the Thomson-effect-induced heat release/absorption in junctionless single conductors, which can serve as simple temperature modulators. We also analyze the temperature profile for realistic conductors. As a result, we find that, for a conductor with a large Thomson coefficient, i.e., a large temperature derivative of the Seebeck coefficient, the Thomson-effect-induced heat absorption overcomes the Joule heating, resulting in current-induced cooling in the bulk region. We also elucidate that a feedback effect of the Thomson effect stabilizes the system temperature to one side of the heat bath, which reflects the fact that the Thomson effect is position dependent and proportional to the local temperature gradient. This work will be the basis for thermal management utilizing the Thomson effect.
Transverse momentum spectra of charged particles, including pions, kaons and (anti-)protons, measured by the ALICE experiment in the pT range 0.1-2.5 GeV/c and at pseudorapidity |eta| < 0.5 are studied in pp collisions at 900 GeV center-of-mass energy using a modified Hagedorn function with embedded transverse flow velocity, and are compared to the predictions of the EPOS-LHC, Pythia, QGSJET and Sibyll models. We find that the average transverse flow velocity decreases with increasing particle mass, while the kinetic freeze-out temperature extracted from the function increases with particle mass. The former varies from 0.36 c to 0.25 c from pions to protons, while the latter varies from 76 MeV to 95 MeV, respectively. Fits to the model predictions yield the same values of T0 and beta as the experimental data; the only differences are in the values of n and N0, which vary from model to model. EPOS-LHC, Pythia and QGSJET reproduce the data over most of the pT range for pions, EPOS-LHC and Sibyll for kaons up to 1.5 GeV/c, and EPOS-LHC for protons up to 1.6 GeV/c. The model simulations also reproduce the increase of average transverse momentum with mass reported by the ALICE experiment.
The LHCb Collaboration studied the resonant structure of $B_s\to \overline{D}^0K^-\pi^+$ decays using the Dalitz plot analysis technique, based on a data sample corresponding to an integrated luminosity of $3.0~{\rm fb}^{-1}$ of $pp$ collisions. The $K^-\pi^+$ components have been analyzed in an amplitude model, where the decay amplitude is modeled as a sum of resonant contributions from the intermediate resonances $K^*(892)$, $K_0^*(1430)$ and $K_2^*(1430)$. Motivated by the experimental results, we investigate the color-favored quasi-two-body $B \to \overline{D}^0K\pi$ decays in the framework of the perturbative QCD (PQCD) approach. We calculate the branching fractions by introducing the appropriate wave functions of the $K\pi$ pair. Our results agree well with the available data, and the others can be tested in the LHCb and Belle-II experiments. Using the narrow-width approximation, we also extract the branching fractions of the corresponding two-body $B\to \overline D R$ decays, which agree with previous theoretical calculations and the experimental data within errors. There are no $CP$ asymmetries in these decays in the standard model, because these decays are governed by tree operators only.
The binodals and the non-ergodicity lines of a binary mixture of hard sphere-like particles with a large size ratio are computed in order to study the interplay between dynamic arrest and phase separation in depletion-driven colloidal mixtures. Contrary to the case of a hard core plus short-range effective attraction, physical gelation without competition with the fluid-phase separation can occur in such mixtures. This behavior, due to the oscillations in the depletion potential, should concern all simple mixtures with a non-ideal depletant, justifying further studies of their dynamic properties.
Lately, network sampling has proven to be a promising tool for simplifying large real-world networks and thus enabling their faster and more efficient analysis. Still, our understanding of how network structure and properties change under different sampling methods remains incomplete. In this paper, we analyze the presence of characteristic groups of nodes (i.e., communities, modules and mixtures of the two) in social and information networks. Moreover, we observe the changes of node group structure under two sampling methods, random node selection based on degree and breadth-first sampling. We show that the sampled information networks contain a larger number of mixtures than the original networks, while the structure of sampled social networks exhibits a stronger characterization by communities. The results also reveal no significant differences in the behavior of the two sampling methods. Accordingly, the choice of sampling method affects the node group structure to a much smaller extent than the type and the structure of the analyzed network.
We report the independent discovery of PSR J0027-1956 with the Murchison Widefield Array (MWA) in the ongoing Southern-sky MWA Rapid Two-meter (SMART) pulsar survey. J0027-1956 has a period of ~1.306 s, a dispersion measure (DM) of ~20.869 pc cm^-3 , and a nulling fraction of ~77%. This pulsar highlights the advantages of the survey's long dwell times (~80 min), which, when fully searched, will be sensitive to the expected population of similarly bright, intermittent pulsars with long nulls. A single-pulse analysis in the MWA's 140-170 MHz band also reveals a complex sub-pulse drifting behavior, including both rapid changes of the drift rate characteristic of mode switching pulsars, as well as a slow, consistent evolution of the drift rate within modes. In some longer drift sequences, interruptions in the otherwise smooth drift rate evolution occur preferentially at a particular phase, typically lasting a few pulses. These properties make this pulsar an ideal test bed for prevailing models of drifting behavior such as the carousel model.
Grid-interfacing inverters act as the interface between renewable resources and the electric grid, and have the potential to offer fast and programmable controls compared to synchronous generators. With this flexibility there have been significant research efforts into determining the best way to control these inverters. Inverters are limited in their maximum current output in order to protect semiconductor devices, presenting a nonlinear constraint that needs to be accounted for in their control algorithms. Existing approaches either simply saturate a controller that is designed for unconstrained systems, or assume small perturbations and linearize a saturated system. These approaches can lead to stability issues or to overly conservative control actions. In this paper, we directly focus on a nonlinear system that explicitly accounts for the saturation of the current magnitude. We use a Lyapunov stability approach to determine a stability condition for the system, guaranteeing that a class of controllers is stabilizing if they satisfy a simple SDP condition. With this condition we fit a linear-feedback controller by sampling the output of (offline) model predictive control problems. This learned controller achieves improved performance compared with existing designs.
We grow strained Ge/SiGe heterostructures by reduced-pressure chemical vapor deposition on 100 mm Ge wafers. The use of Ge wafers as substrates for epitaxy enables high-quality Ge-rich SiGe strain-relaxed buffers with a threading dislocation density of (6$\pm$1)$\times$10$^5$ cm$^{-2}$, nearly an order of magnitude improvement compared to control strain-relaxed buffers on Si wafers. The associated reduction in short-range scattering allows for a drastic improvement of the disorder properties of the two-dimensional hole gas, measured in several Ge/SiGe heterostructure field-effect transistors. We measure an average low percolation density of (1.22$\pm$0.03)$\times$10$^{10}$ cm$^{-2}$, and an average maximum mobility of (3.4$\pm$0.1)$\times$10$^{6}$ cm$^2$/Vs and quantum mobility of (8.4$\pm$0.5)$\times$10$^{4}$ cm$^2$/Vs when the hole density in the quantum well is saturated to (1.65$\pm$0.02)$\times$10$^{11}$ cm$^{-2}$. We anticipate immediate application of these heterostructures for next-generation, higher-performance Ge spin-qubits and their integration into larger quantum processors.
We study some convergence issues for a recent approach to the problem of transparent boundary conditions for the Helmholtz equation in unbounded domains. The approach is based on the minimization of an integral functional which arises from an integral formulation of the radiation condition at infinity. In this Letter, we implement a Fourier-Chebyshev collocation method and show that this approach reduces the computational cost significantly. As a consequence, we give numerical evidence of some convergence estimates available in the literature and we study the robustness of the algorithm at low and mid-high frequencies.
In this paper, given a reflexive real Banach space X and two sequentially weakly lower semicontinuous functionals Phi, Psi on X with Psi strongly continuous and coercive, we are mainly interested in the existence of infinitely many local minima of the functional Phi + r Psi for each real r in a suitable range.
We exploit the many-body self-consistent Green's function method to analyze finite-temperature properties of infinite nuclear matter and to explore the behavior of the thermal index used to simulate thermal effects in equations of state for astrophysical applications. We show that the thermal index is both density and temperature dependent, contrary to what is often assumed, and we provide an error estimate based on our ${\it ab~initio}$ calculations. The inclusion of many-body forces is found to be critical for the density dependence of the thermal index. We also compare our results to a parametrization in terms of the density dependence of the nucleon effective mass. Our study questions the validity of predictions made for the gravitational-wave signal from neutron-star merger simulations with a constant thermal index.
The synthesis of stoichiometric and epitaxial pyrochlore iridate thin films presents significant challenges yet is critical for unlocking experimental access to novel topological and magnetic states. Towards this goal, we unveil an in-situ two-stage growth mechanism that facilitates the synthesis of high-quality oriented pyrochlore iridate thin films. The growth starts with the deposition of a pyrochlore titanate as an active iso-structural template, followed by the application of an in-situ solid phase epitaxy technique in the second stage to accomplish the formation of single crystalline, large-area films. This novel protocol ensures the preservation of stoichiometry and structural homogeneity, leading to a marked improvement in surface and interface qualities over previously reported methods. The success of this synthesis approach is attributed to the application of directional laser-heat annealing, which effectively reorganizes the continuous random network of ions into a crystalline structure, as evidenced by our comprehensive analysis of the growth kinetics. This new synthesis approach advances our understanding of pyrochlore iridate film fabrication and opens a new perspective for investigating their unique physical properties.
The scission kinetics of bottle-brush molecules in solution and on an adhesive substrate is modeled by means of Molecular Dynamics simulations with a Langevin thermostat. Our macromolecules comprise a long flexible polymer backbone with $L$ segments, consisting of breakable bonds, along with two side chains of length $N$, tethered to each segment of the backbone. In agreement with recent experiments and theoretical predictions, we find that bond cleavage is significantly enhanced on a strongly attractive substrate even though the chemical nature of the bonds remains thereby unchanged. We find that the mean bond lifetime $\langle\tau\rangle$ decreases upon adsorption by more than an order of magnitude even for brush molecules with comparatively short side chains, $N=1$-$4$. The distribution of scission probability along the bonds of the backbone is found to be rather sensitive to the interplay between length and grafting density of side chains. The lifetime $\langle\tau\rangle$ declines with growing contour length $L$ as $\langle\tau\rangle\propto L^{-0.17}$, and with side chain length as $\langle\tau\rangle\propto N^{-0.53}$. The probability distribution of fragment lengths at different times agrees well with experimental observations. The variation of the mean length $L(t)$ of the fragments with elapsed time confirms the notion of the thermal degradation process as a first-order reaction.
Observations of Halpha emission measures and pulsar dispersion measures at high Galactic latitude (|b| > 10 deg) provide information about the density and distribution of the diffuse warm ionized medium (WIM). The diffuse WIM has a lognormal distribution of EM sin |b|, which is consistent with a density structure established by isothermal turbulence. The H+ responsible for most of the emission along high-EM sin |b| sightlines is clumped in high density (> 0.1 cm^{-3}) regions that occupy only a few parsecs along the line of sight, while the H+ along low-EM sightlines occupies hundreds of parsecs with considerably lower densities.
We present a novel optimised design for a source of cold atomic cadmium, compatible with continuous operation and potentially quantum degenerate gas production. The design is based on spatially segmenting the first and second stages of cooling with the strong dipole-allowed $^1$S$_0$-$^1$P$_1$ transition at 229 nm and the 326 nm $^1$S$_0$-$^3$P$_1$ intercombination transition, respectively. Cooling at 229 nm operates on an effusive atomic beam and takes the form of a compact Zeeman slower ($\sim$5 cm) and a two-dimensional magneto-optical trap (MOT), both based on permanent magnets. This design allows for reduced interaction time with the photoionising 229 nm photons and produces a slow beam of atoms that can be directly loaded into a three-dimensional MOT using the intercombination transition. The efficiency of the above process is estimated across a broad range of experimentally feasible parameters via a Monte Carlo simulation, with loading rates of up to 10$^8$ atoms/s into the 326 nm MOT possible with the oven at only 100 $^\circ$C. The prospects for further cooling in a far-off-resonance optical-dipole trap and atomic launching in a moving optical lattice are also analysed, especially with reference to deployment in a proposed dual-species cadmium-strontium atom interferometer.
The edge isoperimetric inequality in the discrete cube specifies, for each pair of integers $m$ and $n$, the minimum size $g_n(m)$ of the edge boundary of an $m$-element subset of $\{0,1\}^{n}$; the extremal families (up to automorphisms of the discrete cube) are initial segments of the lexicographic ordering on $\{0,1\}^n$. We show that for any $m$-element subset $\mathcal{F} \subset \{0,1\}^n$ and any integer $l$, if the edge boundary of $\mathcal{F}$ has size at most $g_n(m)+l$, then there exists an extremal family $\mathcal{G} \subset \{0,1\}^n$ such that $|\mathcal{F} \Delta \mathcal{G}| \leq Cl$, where $C$ is an absolute constant. This is best-possible, up to the value of $C$. Our result can be seen as a `stability' version of the edge isoperimetric inequality in the discrete cube, and as a discrete analogue of the seminal stability result of Fusco, Maggi and Pratelli concerning the isoperimetric inequality in Euclidean space.
Angular momentum evolution in low-mass stars is determined by initial conditions during star formation, stellar structure evolution, and the behaviour of stellar magnetic fields. Here we show that the empirical picture of angular momentum evolution arises naturally if rotation is related to magnetic field strength instead of to magnetic flux, and formulate a corrected braking law based on this. Angular momentum evolution then becomes a strong function of stellar radius, explaining the main trends observed in open clusters and field stars at a few Gyr: the steep transition in rotation at the boundary to full convection arises primarily from the large change in radius across this boundary, and does not require changes in dynamo mode or field topology. Additionally, the data suggest transient core-envelope decoupling among solar-type stars, and field saturation at longer periods in very low-mass stars. For solar-type stars, our model is also in good agreement with the empirical Skumanich law. Finally, in further support of the theory, we show that the predicted age at which low-mass stars spin down from the saturated to unsaturated field regimes in our model corresponds remarkably well to the observed lifetime of magnetic activity in these stars.
We study the classical dynamics of the collinear positron-hydrogen scattering system below the three-body breakup threshold. Observing the chaotic behavior of scattering time signals, we introduce a code system appropriate to a coarse-grained description of the dynamics. For the purpose of a systematic analysis of the phase space structure, a surface of section is introduced, chosen to match the code system. Partition of the surface of section leads us to a surprising conjecture that the topological structure of the phase space of the system is invariant under exchange of the dynamical variables of the proton with those of the positron. It is also found that there is a finite set of forbidden patterns of symbol sequences. The shortest periodic orbit is found to be stable, around which invariant tori form an island of stability in the chaotic sea. Finally we discuss a possible quantum manifestation of the classical phase space structure relevant to resonances in scattering cross sections.
We introduce a renormalized Jellium model to calculate the equation of state for charged colloidal suspensions. An almost perfect agreement with Monte Carlo simulations is found. Our self-consistent approach naturally allows us to define the effective charge of particles {\em at finite colloidal density}. Although this quantity may differ significantly from its counterpart obtained from the standard Poisson-Boltzmann cell approach, the osmotic pressures for both models are in good agreement. We argue that, by construction, the effective charge obtained using the Jellium approximation is more appropriate to the study of colloidal interactions. We also discuss the possibility of a fluid-fluid critical point and show how the new equation of state can be used to shed light on the surprising results found in recent sedimentation experiments.
The nitrogen-vacancy (NV) center is a promising candidate for realizing practical quantum sensors with high sensitivity and high spatial resolution, even at room temperature and atmospheric pressure. In conventional high-frequency AC magnetometry with NV centers, the setup requires a pulse sequence with appropriate time synchronization and strong microwave power. To avoid these practical difficulties, AC magnetic field sensing using continuous-wave optically detected magnetic resonance (CW-ODMR) was recently demonstrated. That previous study utilized radio frequency (RF) dressed states generated by the coherent interaction between the electron spin of the NV center and the RF wave. However, the drawback of this method is that the detectable frequency of the AC magnetic fields is fixed. Here, we propose and demonstrate frequency-tunable magnetic field sensing based on CW-ODMR. In the new sensing scheme, we obtain RF double-dressed states by irradiation with an RF field at two different frequencies. One creates the RF dressed states and sets the detectable frequency of the target AC field. The other is the target AC field, which induces a change in the CW-ODMR spectrum by generating the RF double-dressed states through coherent interaction with the RF dressed states. The sensitivity of our method is estimated to be comparable to or even higher than that of the conventional method based on an RF field with a single frequency. The estimated bandwidth is 7.45 MHz, higher than that of the conventional method using the RF dressed states. Our frequency-tunable magnetic field sensor based on CW-ODMR paves the way for new applications in diamond devices.
In this paper, we study distributed channel triggering mechanisms for wireless networked control systems (WNCSs) with conventional and smart sensors, i.e., sensors without and with computational power, respectively. We first consider the case of conventional sensors, in which the state estimate is performed based on the intermittent raw measurements received from the sensor, and we show that the priority measure is associated with the statistical properties of the observations, as is the case for the cost of information loss (CoIL) [1]. Next, we consider the case of smart sensors and, despite the fact that CoIL can also be deployed, we deduce that it is more beneficial to use the available measurements, and we propose a function of the value of information (VoI) [2], [3] that also incorporates the channel conditions as the priority measure. The different priority measures are discussed and compared via simulations of simple scenarios.
The response of oxide thin films to polar discontinuities at interfaces and surfaces has generated enormous activity due to the variety of interesting effects it gives rise to. A case in point is the discovery of the electron gas at the interface between LaAlO3 and SrTiO3, which has since been shown to be quasi-two-dimensional, switchable, magnetic and/or superconducting. Despite these findings, the origin of the two-dimensional electron gas is highly debated and several possible mechanisms remain. Here we review the main proposed mechanisms and attempt to model the expected effects in a quantitative way, with the ambition of better constraining which effects can or cannot explain the observed phenomenology. We do so in the framework of a phenomenological model for understanding electronic and/or redox screening of the chemical charge in oxide heterostructures. We also discuss the effect of intermixing, both conserving and non-conserving the total stoichiometry.
In this article we study the linear complementarity problem with a hidden $Z$-matrix. We extend the results of Fiedler and Pt{\'a}k for the linear system in the complementarity problem using a game-theoretic approach. We establish a result related to singular hidden $Z$-matrices. We show that, for a non-degenerate feasible basis, the linear complementarity problem with a hidden $Z$-matrix has a unique non-degenerate solution under some assumptions. The purpose of this paper is to study some properties of hidden $Z$-matrices.
Using methods from coarse topology we show that fundamental classes of closed enlargeable manifolds map non-trivially both to the rational homology of their fundamental groups and to the K-theory of the corresponding reduced C*-algebras. Our proofs do not depend on the Baum--Connes conjecture and provide independent confirmation for specific predictions derived from this conjecture.
Data reuse is a common practice in the social sciences. While published data play an essential role in the production of social science research, they are not consistently cited, which makes it difficult to assess their full scholarly impact and give credit to the original data producers. Furthermore, it can be challenging to understand researchers' motivations for referencing data. Like references to academic literature, data references perform various rhetorical functions, such as paying homage, signaling disagreement, or drawing comparisons. This paper studies how and why researchers reference social science data in their academic writing. We develop a typology to model relationships between the entities that anchor data references, along with their features (access, actions, locations, styles, types) and functions (critique, describe, illustrate, interact, legitimize). We illustrate the use of the typology by coding multidisciplinary research articles (n=30) referencing social science data archived at the Inter-university Consortium for Political and Social Research (ICPSR). We show how our typology captures researchers' interactions with data and purposes for referencing data. Our typology provides a systematic way to document and analyze researchers' narratives about data use, extending our ability to give credit to data that support research.
Category theory has been successfully applied in various domains of science, shedding light on universal principles unifying diverse phenomena and thereby enabling knowledge transfer between them. Applications to machine learning have been pursued recently, and yet there is still a gap between abstract mathematical foundations and concrete applications to machine learning tasks. In this paper we introduce DisCoPyro as a categorical structure learning framework, which combines categorical structures (such as symmetric monoidal categories and operads) with amortized variational inference, and can be applied, e.g., in program learning for variational autoencoders. We provide both mathematical foundations and concrete applications, together with a comparison of experimental performance with other models (e.g., neuro-symbolic models). We speculate that DisCoPyro could ultimately contribute to the development of artificial general intelligence.
We prove a general result concerning cyclic orderings of the elements of a matroid. For each matroid $M$, weight function $\omega:E(M)\rightarrow\mathbb{N}$, and positive integer $D$, the following are equivalent. (1) For all $A\subseteq E(M)$, we have $\sum_{a\in A}\omega(a)\le D\cdot r(A)$. (2) There is a map $\phi$ that assigns to each element $e$ of $E(M)$ a set $\phi(e)$ of $\omega(e)$ cyclically consecutive elements in the cycle $(1,2,...,D)$ so that each set $\{e|i\in\phi(e)\}$, for $i=1,...,D$, is independent. As a first corollary we obtain the following. For each matroid $M$ so that $|E(M)|$ and $r(M)$ are coprime, the following are equivalent. (1) For all non-empty $A\subseteq E(M)$, we have $|A|/r(A)\le|E(M)|/r(M)$. (2) There is a cyclic permutation of $E(M)$ in which all sets of $r(M)$ cyclically consecutive elements are bases of $M$. A second corollary is that the circular arboricity of a matroid is equal to its fractional arboricity. These results generalise classical results of Edmonds, Nash-Williams and Tutte on covering and packing matroids by bases and graphs by spanning trees.
We prove that the volumes determined by the lengths of the non-zero vectors $\pm\vec{x}$ in a random lattice L of covolume 1 define a stochastic process that, as the dimension n tends to infinity, converges weakly to a Poisson process on the positive real line with intensity 1/2. This generalizes earlier results by Rogers and Schmidt.
We perform a systematic search for long-term extreme variability quasars (EVQs) in the overlapping Sloan Digital Sky Survey (SDSS) and 3-Year Dark Energy Survey (DES) imaging, which provide light curves spanning more than 15 years. We identified ~1000 EVQs with a maximum g band magnitude change of more than 1 mag over this period, about 10% of all quasars searched. The EVQs have L_bol~10^45-10^47 erg/s and L/L_Edd~0.01-1. Accounting for selection effects, we estimate an intrinsic EVQ fraction of ~30-50% among all g<~22 quasars over a baseline of ~15 years. These EVQs are good candidates for so-called "changing-look quasars", where a spectral transition between the two types of quasars (broad-line and narrow-line) is observed between the dim and bright states. We performed detailed multi-wavelength, spectral and variability analyses for the EVQs and compared them to their parent quasar sample. We found that EVQs are distinct from a control sample of quasars matched in redshift and optical luminosity: (1) their UV broad emission lines have larger equivalent widths; (2) their Eddington ratios are systematically lower; and (3) they are more variable on all timescales. The intrinsic difference in quasar properties for EVQs suggests that internal processes associated with accretion are the main driver of the observed extreme long-term variability. However, despite their different properties, EVQs seem to be in the tail of a continuous distribution of quasar properties, rather than standing out as a distinct population. We speculate that EVQs are normal quasars accreting at relatively low accretion rates, where the accretion flow is more likely to experience instabilities that drive the factor-of-a-few changes in flux on multi-year timescales.
A topological mechanism is a zero elastic-energy deformation of a mechanical structure that is robust against smooth changes in system parameters. Here, we map the nonlinear elasticity of a paradigmatic class of topological mechanisms onto linear fermionic models using a supersymmetric field theory introduced by Witten and Olive. Heuristically, this approach consists of taking the square root of a non-linear Hamiltonian and generalizes the standard procedure of obtaining two copies of the Dirac equation from the square root of the linear Klein-Gordon equation. Our real space formalism goes beyond topological band theory by incorporating non-linearities and spatial inhomogeneities, such as domain walls, where topological states are typically localized. By viewing the two components of the real fermionic field as site and bond displacements respectively, we determine the relation between the supersymmetry transformations and the Bogomolny-Prasad-Sommerfield (BPS) bound saturated by the mechanism. We show that the mechanical constraint, which enforces a BPS saturated kink into the system, simultaneously precludes an anti-kink. This mechanism breaks the usual kink-antikink symmetry and can be viewed as a manifestation of the underlying supersymmetry being half-broken.
The paper continues previous works which study the behavior of the second correlation function of characteristic polynomials of the special case of $n\times n$ one-dimensional Gaussian Hermitian random band matrices, when the covariance of the elements is determined by the matrix $J=(-W^2\triangle+1)^{-1}$. Applying the transfer matrix approach, we study the case when the bandwidth $W$ is proportional to the threshold $\sqrt{n}$.
A search for the decay of the Standard Model Higgs boson into a $b\bar{b}$ pair when produced in association with a $W$ or $Z$ boson is performed with the ATLAS detector. The analysed data, corresponding to an integrated luminosity of 36.1 fb$^{-1}$, were collected in proton-proton collisions in Run 2 of the Large Hadron Collider at a centre-of-mass energy of 13 TeV. Final states containing zero, one and two charged leptons (electrons or muons) are considered, targeting the decays $Z\to\nu\nu$, $W\to\ell\nu$ and $Z\to\ell\ell$. For a Higgs boson mass of 125 GeV, an excess of events over the expected background from other Standard Model processes is found with an observed significance of 3.5 standard deviations, compared to an expectation of 3.0 standard deviations. This excess provides evidence for the Higgs boson decay into $b$-quarks and for its production in association with a vector boson. The combination of this result with that of the Run 1 analysis yields a ratio of the measured signal events to the Standard Model expectation equal to $0.90 \pm 0.18 \rm{(stat.)} ^{+0.21}_{-0.19} \rm{(syst.)}$. Assuming the Standard Model production cross-section, the results are consistent with the value of the Yukawa coupling to $b$-quarks in the Standard Model.
We present a modification of the standard halo model with the goal of providing an improved description of galaxy clustering. Recent surveys, like the Sloan Digital Sky Survey (SDSS) and the Anglo-Australian Two-degree survey (2dF), have shown that there seems to be a correlation between the clustering of galaxies and their properties such as metallicity and star formation rate, which are believed to be environment-dependent. This environmental dependence is not included in the standard halo model, where the host halo mass is the only variable specifying galaxy properties. In our approach, the halo properties, i.e., the concentration and the halo occupation distribution (HOD) prescription, depend not only on the halo mass (as in the standard halo model) but also on the halo environment. We examine how different environmental dependences of the halo concentration and the HOD prescription affect the correlation function. We see that, at the level of dark matter, the concentration of haloes moderately affects the dark matter correlation function, and only at small scales. However, the galaxy correlation function is extremely sensitive to the HOD details, even when only the HOD of a small fraction of haloes is modified.
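To make the proposed modification concrete, the sketch below evaluates a standard five-parameter HOD (a central step function plus power-law satellites, in the style of Zheng et al. 2005) in which the characteristic masses are shifted with a local environment variable; the functional form of that shift, its amplitude, and all parameter values are illustrative assumptions rather than the parametrization adopted in the paper.

```python
import numpy as np
from scipy.special import erf

def hod_counts(M, delta_env,
               logMmin=12.0, sigma_logM=0.3, logM0=12.2, logM1=13.3, alpha=1.0,
               eps=0.1):
    """Mean central and satellite occupation for a halo of mass M [Msun/h],
    with an illustrative environmental shift of the characteristic masses:
    logMmin -> logMmin + eps*delta_env (delta_env = normalized local overdensity)."""
    logM = np.log10(M)
    logMmin_env = logMmin + eps * delta_env
    logM1_env = logM1 + eps * delta_env
    n_cen = 0.5 * (1.0 + erf((logM - logMmin_env) / sigma_logM))
    M0, M1 = 10.0 ** logM0, 10.0 ** logM1_env
    n_sat = n_cen * (np.clip(M - M0, 0.0, None) / M1) ** alpha
    return n_cen, n_sat

# occupation of a 10^13 Msun/h halo in an underdense vs an overdense region
for denv in (-1.0, +1.0):
    nc, ns = hod_counts(1e13, denv)
    print(f"delta_env={denv:+.0f}: <N_cen>={nc:.2f}, <N_sat>={ns:.2f}")
```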
Higher cluster categories were recently introduced as a generalization of cluster categories. This paper shows that in Dynkin types A and D, half of all higher cluster categories are actually just quotients of cluster categories. The other half can be obtained as quotients of 2-cluster categories, the "lowest" type of higher cluster categories. Hence, in Dynkin types A and D, all higher cluster phenomena are implicit in cluster categories and 2-cluster categories. In contrast, the same is not true in Dynkin type E.
In 1957, Hadwiger made the famous conjecture that any convex body of $n$-dimensional Euclidean space $\mathbb{E}^n$ can be covered by $2^n$ smaller positive homothetic copies. Up to now, this conjecture is still open for all $n\geq 3$. Denote by $\gamma_{m}(K)$ the smallest positive number $\lambda$ such that $K$ can be covered by $m$ translates of $\lambda K$. The values of $\gamma_m(K)$ for some particular $m$ and $K$ have been studied. In this article, we focus on the situation where $K$ is the three-dimensional unit crosspolytope.
Any one measurement with polarized light makes it possible to fix the Mueller matrices of the Lorentz type with up to four arbitrary numeric parameters (x, u; z, w). These parameters are subject to a quadratic condition. It is demonstrated that the quadratic form can be diagonalized; in the case of partially polarized light, four diagonal coefficients turn out to be non-zero and positive; in the case of completely polarized light, two diagonal coefficients are equal to zero.
In this paper we propose two new quasi-boundary value methods for regularizing ill-posed backward heat conduction problems. With a standard finite difference discretization in space and time, the resulting all-at-once nonsymmetric sparse linear systems have the desired block $\omega$-circulant structure, which can be utilized to design an efficient parallel-in-time (PinT) direct solver built upon an explicit FFT-based diagonalization of the time discretization matrix. Convergence analysis is presented to justify the optimal choice of the regularization parameter. Numerical examples are reported to validate our analysis and illustrate the superior computational efficiency of our proposed PinT methods.
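The structural ingredient exploited by the solver is that an omega-circulant matrix is diagonalized by a scaled FFT. The sketch below illustrates this in the scalar (one unknown per time step) case: the system is solved by a diagonal scaling, an FFT, a pointwise division, an inverse FFT and an inverse scaling, and the result is checked against a dense solve. It is a toy illustration of the diagonalization idea only, not the paper's block all-at-once solver, and the real value chosen for omega is arbitrary.

```python
import numpy as np
from scipy.linalg import circulant

def omega_circulant(col, omega):
    """Dense omega-circulant matrix with first column `col`: like a circulant,
    but the wrapped-around (strictly upper triangular) entries carry a factor omega."""
    C = circulant(col).astype(complex)
    C[np.triu_indices(len(col), 1)] *= omega
    return C

def solve_omega_circulant(col, omega, b):
    """Solve C x = b via the diagonalization C = D^{-1} Circ(D col) D,
    with D = diag(omega^(k/n)) and Circ(.) diagonalized by the FFT."""
    n = len(col)
    d = omega ** (np.arange(n) / n)      # diagonal scaling D
    lam = np.fft.fft(d * col)            # eigenvalues of the circulant D C D^{-1}
    return np.fft.ifft(np.fft.fft(d * b) / lam) / d

rng = np.random.default_rng(1)
n, omega = 8, 0.5
col, b = rng.normal(size=n), rng.normal(size=n)
x_fft = solve_omega_circulant(col, omega, b)
x_ref = np.linalg.solve(omega_circulant(col, omega), b.astype(complex))
print("max error:", np.max(np.abs(x_fft - x_ref)))
```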
Protein-stabilised emulsions can be seen as mixtures of unadsorbed proteins and of protein-stabilised droplets. To identify the contributions of these two components to the overall viscosity of sodium caseinate o/w emulsions, the rheological behaviour of pure suspensions of proteins and of droplets was characterised, and their properties were used to model the behaviour of their mixtures. These materials are conveniently studied in the framework developed for soft colloids. Here, the use of viscosity models for the two types of pure suspensions facilitates the development of a semi-empirical model that relates the viscosity of protein-stabilised emulsions to their composition.
Despite their exceptional performance on various tasks after fine-tuning, pre-trained language models (PLMs) face significant challenges due to growing privacy concerns with data in centralized training methods. We consider federated learning (FL) to fine-tune PLMs in this paper. However, the substantial number of parameters in PLMs poses significant difficulties for client devices with limited communication and computational resources. One promising solution is to exploit parameter-efficient fine-tuning (PEFT) in FL, which trains a much smaller set of parameters than full parameter fine-tuning (FFT). Although remarkably improving training efficiency, PEFT methods may lead to degraded performance, especially when data across different clients are non-i.i.d., as revealed by experimental results. To overcome this, we propose FeDeRA, which extends and improves a widely used PEFT method, i.e., low-rank adaptation (LoRA). FeDeRA follows LoRA by decomposing the weight matrices of the PLMs into low-rank matrices, which allows for more efficient computation and parameter updates during fine-tuning. Different from LoRA, which simply initializes these low-rank matrices by random sampling or zeros, the proposed FeDeRA initializes these matrices by the results of performing singular value decomposition (SVD) on the pre-trained weight matrices. Extensive experiments across various tasks and datasets show that FeDeRA outperforms the considered PEFT baselines and is comparable to, or even surpasses, the FFT method within the FL setting in terms of task performance. Moreover, FeDeRA requires only 1% of the trainable parameters of FFT, significantly reducing training time costs by more than 90% to achieve the same task performance level. The experimental results also highlight the robustness of FeDeRA against data heterogeneity, as it maintains stable task performance even as data heterogeneity increases.
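The key idea, initializing the low-rank factors from an SVD of the pre-trained weight rather than from random or zero values, can be sketched in a few lines; the rank, the convention of splitting the square-rooted singular values between the two factors, and the handling of the residual weight are assumptions of this illustration, not details taken from the paper.

```python
import numpy as np

def svd_lora_init(W, rank):
    """Initialize LoRA factors A (d_out x r) and B (r x d_in) from the top-r SVD
    of the pre-trained weight W, so that A @ B captures W's dominant subspace."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    sqrt_s = np.sqrt(S[:rank])
    A = U[:, :rank] * sqrt_s             # d_out x r
    B = sqrt_s[:, None] * Vt[:rank]      # r x d_in
    return A, B

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))           # stand-in for a pre-trained weight matrix
A, B = svd_lora_init(W, rank=8)
print("relative error of the rank-8 initialization:",
      np.linalg.norm(W - A @ B) / np.linalg.norm(W))
```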
We generalize the effective field theory of single clock inflation to include dissipative effects. Working in unitary gauge we couple a set of composite operators in the effective action which is constrained solely by invariance under time-dependent spatial diffeomorphisms. We restrict ourselves to situations where the degrees of freedom responsible for dissipation do not contribute to the density perturbations at late times. The dynamics of the perturbations is then modified by the appearance of `friction' and noise terms, and assuming certain locality properties for the Green's functions of these composite operators, we show that there is a regime characterized by a large friction term \gamma >> H in which the \zeta-correlators are dominated by the noise and the power spectrum can be significantly enhanced. We also compute the three point function <\zeta\zeta\zeta> for a wide class of models and discuss under which circumstances large friction leads to an increased level of non-Gaussianities. In particular, under our assumptions, we show that strong dissipation together with the required non-linear realization of the symmetries implies |f_NL| ~ \gamma/(c_s^2 H) >> 1. As a paradigmatic example we work out a variation of the `trapped inflation' scenario with local response functions and perform the matching with our effective theory. A detection of the generic type of signatures that result from incorporating dissipative effects during inflation, as we describe here, would teach us about the dynamics of the early universe and also extend the parameter space of inflationary models.
Simplicial toric stack bundles are smooth Deligne-Mumford stacks over smooth varieties with fibre a toric Deligne-Mumford stack. We compute the Grothendieck $K$-theory of simplicial toric stack bundles and study the Chern character homomorphism.
This note is the sequel to [A note on secondary K-theory. Algebra and Number Theory 10 (2016), no. 4, 887-906]. Making use of the recent theory of noncommutative motives, we prove that the canonical map from the derived Brauer group to the secondary Grothendieck ring has the following injectivity properties: in the case of a regular integral quasi-compact quasi-separated scheme, it is injective; in the case of an integral normal Noetherian scheme with a single isolated singularity, it distinguishes any two derived Brauer classes whose difference is of infinite order. As an application, we show that the canonical map is injective in the case of affine cones over smooth projective plane complex curves of degree greater than or equal to four as well as in the case of Mumford's (celebrated) singular surface.
Monocular depth estimation is a challenging problem on which deep neural networks have demonstrated great potential. However, depth maps predicted by existing deep models usually lack fine-grained details due to the convolution operations and the down-samplings in networks. We find that increasing input resolution is helpful to preserve more local details, while the estimation at low resolution is more accurate globally. Therefore, we propose a novel depth map fusion module to combine the advantages of estimations with multi-resolution inputs. Instead of merging the low- and high-resolution estimations equally, we adopt the core idea of Poisson fusion, trying to implant the gradient domain of the high-resolution depth into the low-resolution depth. While classic Poisson fusion requires a fusion mask as supervision, we propose a self-supervised framework based on guided image filtering. We demonstrate that this gradient-based composition performs much better in terms of noise immunity, compared with the state-of-the-art depth map fusion method. Our lightweight depth fusion is one-shot and runs in real-time, making our method 80X faster than a state-of-the-art depth fusion method. Quantitative evaluations demonstrate that the proposed method can be integrated into many fully convolutional monocular depth estimation backbones with a significant performance boost, leading to state-of-the-art results of detail enhancement on depth maps.
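As a rough illustration of the gradient-domain idea (implanting high-resolution gradients into the globally more reliable low-resolution estimate), the sketch below solves a screened Poisson problem in the Fourier domain: it seeks a depth map whose gradients match the high-resolution estimate while staying close to the upsampled low-resolution estimate. The periodic boundaries, the uniform data weight lam, and the absence of the paper's guided-filter-based self-supervised mask are all simplifying assumptions.

```python
import numpy as np

def screened_poisson_fuse(d_low_up, d_high, lam=0.05):
    """Solve lam*(u - d_low_up) - lap(u - d_high) = 0 with periodic boundaries:
    u follows d_low_up at low frequencies and the gradients of d_high elsewhere."""
    H, W = d_low_up.shape
    ky = np.fft.fftfreq(H)[:, None]
    kx = np.fft.fftfreq(W)[None, :]
    L = 2.0 * np.cos(2 * np.pi * kx) + 2.0 * np.cos(2 * np.pi * ky) - 4.0  # Laplacian symbol
    u_hat = (lam * np.fft.fft2(d_low_up) - L * np.fft.fft2(d_high)) / (lam - L)
    return np.real(np.fft.ifft2(u_hat))

# toy example: smooth, globally accurate base plus a detailed high-resolution estimate
yy, xx = np.mgrid[0:128, 0:128] / 128.0
base = np.sin(2 * np.pi * xx)                        # stand-in upsampled low-res depth
detail = base + 0.05 * np.sin(2 * np.pi * 16 * yy)   # stand-in high-res depth with fine detail
fused = screened_poisson_fuse(base, detail)
print("mean (DC) follows the base estimate:", np.isclose(fused.mean(), base.mean()))
```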
Evert and Helton proved that real free spectrahedra are the matrix convex hulls of their absolute extreme points. However, this result does not extend to complex free spectrahedra, and we examine multiple ways in which the analogous result can fail. We also develop some local techniques to determine when matrix convex sets are not (duals of) free spectrahedra, as part of a continued study of minimal and maximal matrix convex sets and operator systems. These results apply to both the real and complex cases.
Recently, decentralized optimization over the Stiefel manifold has attracted tremendous attention due to its wide range of applications in various fields. Existing methods rely on gradients to update variables, which are not applicable to objective functions with non-smooth regularizers, such as sparse PCA. In this paper, to the best of our knowledge, we propose the first decentralized algorithm for non-smooth optimization over Stiefel manifolds. Our algorithm approximates the non-smooth part of the objective function by its Moreau envelope, so that existing algorithms for smooth optimization can be deployed. We establish a convergence guarantee with an iteration complexity of $\mathcal{O} (\epsilon^{-4})$. Numerical experiments conducted under the decentralized setting demonstrate the effectiveness and efficiency of our algorithm.
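The smoothing step can be illustrated with the simplest non-smooth regularizer arising in sparse PCA, the entrywise l1 norm: its Moreau envelope is differentiable with gradient (X - prox(X))/mu, and the resulting smooth surrogate can be fed to a Riemannian gradient step on the Stiefel manifold with a QR retraction. The single-agent setting, the fixed step size, the QR retraction, and the sparse-PCA-like objective below are illustrative assumptions, not the paper's decentralized algorithm.

```python
import numpy as np

def prox_l1(X, t):
    """Proximal operator of t*||X||_1 (entrywise soft-thresholding)."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def moreau_env_l1_grad(X, lam, mu):
    """Gradient of the Moreau envelope of lam*||X||_1 with parameter mu."""
    return (X - prox_l1(X, lam * mu)) / mu

def stiefel_step(X, egrad, step):
    """One Riemannian gradient step on St(n, p): project the Euclidean gradient
    onto the tangent space, then retract back to the manifold by QR."""
    G = egrad - X @ ((X.T @ egrad + egrad.T @ X) / 2.0)   # tangent-space projection
    Q, R = np.linalg.qr(X - step * G)
    return Q * np.sign(np.diag(R))                        # fix column signs

# sparse-PCA-like objective: minimize -tr(X^T A X) + lam*||X||_1 over St(n, p)
rng = np.random.default_rng(0)
n, p, lam, mu = 20, 3, 0.1, 0.01
M = rng.normal(size=(n, n))
A = M @ M.T
X, _ = np.linalg.qr(rng.normal(size=(n, p)))
for _ in range(200):
    egrad = -2.0 * A @ X + moreau_env_l1_grad(X, lam, mu)
    X = stiefel_step(X, egrad, step=1e-3)
print("orthogonality error:", np.linalg.norm(X.T @ X - np.eye(p)))
```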
The new complete orthonormal sets of -Laguerre type polynomials (-LTP) are suggested. Using the Schr\"odinger equation for complete orthonormal sets of -exponential type orbitals (-ETO) introduced by the author, it is shown that the origin of these polynomials is the centrally symmetric potential which contains the core attraction potential and the quantum frictional potential of the field produced by the particle itself. The quantum frictional forces are the analog of radiation damping or frictional forces suggested by Lorentz in classical electrodynamics. The new -LTP are complete without the inclusion of the continuum states of hydrogen-like atoms. It is shown that the nonstandard and standard conventions of -LTP and their weight functions are the same. As an application, the sets of infinite expansion formulas in terms of -LTP and L-Generalized Laguerre polynomials (L-GLP) for atomic nuclear attraction integrals of Slater type orbitals (STO) and Coulomb-Yukawa like correlated interaction potentials (CIP) with integer and noninteger indices are obtained. The arranged and rearranged power series of a general power function are also investigated. The convergence of these series is tested by calculating concrete cases for arbitrary values of the parameters of the orbitals and the power function.
A stream of new theta relations is obtained. They follow from the general Thomae formula, which is a new result giving expressions for theta derivatives (the zero values of the lowest non-vanishing derivatives of theta functions with singular half-period characteristics) in terms of branch points and the period matrix of a hyperelliptic Riemann surface. The new theta relations contain (i) linear relations on the vector space of first order theta derivatives which are arranged in gradients, (ii) relations between second order theta derivatives and symmetric bilinear forms on the vector space of the gradients, (iii) relations between third order theta derivatives and symmetric trilinear forms on the vector space of the gradients, and (iv) a conjecture regarding higher order theta derivatives. It is shown how the Schottky identity (in the hyperelliptic case) is derived from the obtained relations.
We investigate the charge-detection-induced dephasing of a charge qubit interacting with an electronic beam collider composed of a quantum point contact. We report that, while the qubit is dephased by the partitioned beam of uncorrelated electrons, the interference of the qubit is fully restored when the two inputs are identically biased so that all the electrons undergo two-electron collisions. This phenomenon is related to Fermi statistics and illustrates the peculiar nonlocality of dephasing. We also describe detection properties for the injection of entangled electron pairs.
This study introduces a new approach to power analysis in the context of estimating a local average treatment effect (LATE), where the study subjects exhibit noncompliance with treatment assignment. As a result of distributional complications in the LATE context, compared to the simple ATE context, there is currently no standard method of power analysis for the LATE. Moreover, existing methods and commonly used substitutes - which include instrumental variable (IV), intent-to-treat (ITT), and scaled ATE power analyses - require specifying generally unknown variance terms and/or rely upon strong and unrealistic assumptions, thus providing unreliable guidance on the power of tests of the LATE. This study develops a new approach that uses standardized effect sizes to place bounds on the power for the most commonly used estimator of the LATE, the Wald IV estimator, whereby variance terms and distributional parameters need not be specified nor assumed. Instead, in addition to the effect size, sample size, and error tolerance parameters, the only other parameter that must be specified by the researcher is the compliance rate. Additional conditions can also be introduced to further narrow the bounds on the power calculation. The result is a generalized approach to power analysis in the LATE context that is simple to implement.
We use solvable two-dimensional gauge theories to illustrate the issues in relating large N gauge theory to string theory. We also give an introduction to recent mathematical work which allows constructing master fields for higher dimensional large N theories. We illustrate this with a new derivation of the Hopf equation governing the evolution of the spectral density in matrix quantum mechanics. Based on lectures given at the 1994 Trieste Spring School on String Theory, Gauge Theory and Quantum Gravity.
This note describes an application of the theory of generalised Burnside rings to algebraic representation theory. Tables of marks are given explicitly for the groups $S_4$ and $S_5$ which are of particular interest in the context of reductive algebraic groups. As an application, the base sets for the nilpotent element $F_4 (a_3)$ are computed.
Compact groups (CGs) of galaxies are defined as isolated and dense galaxy systems that appear to be a unique site of multiple galaxy interactions. Semi-analytical models of galaxy formation (SAMs) are a prime tool to understand CGs. We investigate how the frequency and the three-dimensional nature of CGs depend on the SAM and its underlying cosmological parameters. Extracting 9 lightcones of galaxies from 5 different SAMs and selecting CGs as in observed samples, we find that the frequency and nature of CGs depend strongly on the cosmological parameters. Moving from the WMAP1 to the WMAP7 and Planck cosmologies (increasing density of the Universe and decreasing normalisation of the power spectrum), the space density of CGs is decreased by a factor of 2.5, while the fraction of CGs that are physically dense falls from 50 to 35 percent. The lower $\sigma_8$ leads to fewer dense groups, while the higher $\Omega_{\rm m}$ causes more chance alignments. However, with increased mass and spatial resolution, the fraction of CGs that are physically dense is pushed back up to 50 percent. The intrinsic differences in the SAM recipes also lead to differences in the frequency and nature of CGs, particularly those related to how SAMs treat orphan galaxies. We find no dependence of CG properties on the flux limit of the mock catalogues nor on the waveband in which galaxies are selected. One should thus be cautious when interpreting a particular SAM for the frequency and nature of CGs.
In this work, we provide some novel results that establish both the existence of Henig global proper efficient points and their density in the efficient set for vector optimization problems in arbitrary normed spaces. Our results do not require the assumption of convexity, and in certain cases, can be applied to unbounded sets. However, it is important to note that a weak compactness condition on the set (or on a section of it) and a separation property between the order cone and its conical neighborhoods remain necessary. The weak compactness condition ensures that certain convergence properties hold. The separation property enables the interpolation of a family of Bishop-Phelps cones between the order cone and each of its conical neighborhoods. This interpolation, combined with the proper handling of two distinct types of conic neighborhoods, plays a crucial role in the proofs of our results, which include as a particular case other results that have already been established under more restrictive conditions.
Long Range (LoRa) has become a key enabling technology for low power wide area networks. However, due to its ALOHA-based medium access scheme, LoRa has to cope with collisions that limit the capacity and network scalability. Collisions between randomly overlapping signals modulated with different spreading factors (SFs) result in inter-SF interference, which increases the packet loss likelihood when the signal-to-interference ratio (SIR) is low. This issue cannot be resolved by channel coding, since the error distance distribution is not concentrated around the adjacent symbol. In this paper, we analytically model this interference and propose an interference cancellation method based on the idea of segmentation of the received signal. The scheme has three steps. First, the SF of the interference signal is identified; then the equivalent data symbol and complex amplitude of the interference are estimated. Finally, the estimated interference signal is subtracted from the received signal before demodulation. Unlike conventional serial interference cancellation (SIC), this scheme can directly estimate and reconstruct the non-aligned inter-SF interference without synchronization. Simulation results show that the proposed method can significantly reduce the symbol error rate (SER) under low SIR compared with conventional demodulation. Moreover, it also shows high robustness to the fractional sample timing offset (STO) and carrier frequency offset (CFO) of the interference. The presented results clearly show the effectiveness of the proposed method in terms of SER performance.
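The three estimate-and-subtract steps can be illustrated with an idealized discrete-time chirp-spread-spectrum model: dechirp the received window with the interferer's SF to locate its FFT peak (the data symbol), read a complex-amplitude estimate off that peak, regenerate and subtract the interference, and only then demodulate the desired symbols. The symbol-aligned overlap, noiseless channel, and known candidate SF below are simplifying assumptions; the paper's segmentation of non-aligned interference and its STO/CFO handling are not reproduced.

```python
import numpy as np

def chirp(symbol, sf):
    """Ideal baseband LoRa-style up-chirp for one symbol (N = 2^sf samples)."""
    N = 2 ** sf
    n = np.arange(N)
    return np.exp(2j * np.pi * (n ** 2 / (2 * N) + symbol * n / N))

def dechirp_fft(x, sf):
    """Multiply by the conjugate base chirp and FFT: the peak bin is the symbol."""
    N = 2 ** sf
    n = np.arange(N)
    return np.fft.fft(x * np.exp(-2j * np.pi * n ** 2 / (2 * N)))

# desired user: two SF7 symbols; interferer: one stronger, symbol-aligned SF8 symbol
sf_d, sf_i = 7, 8
desired = np.concatenate([chirp(17, sf_d), chirp(92, sf_d)])
interference = 2.0 * np.exp(1j * 0.7) * chirp(200, sf_i)     # SIR = -6 dB
rx = desired + interference

spec_i = dechirp_fft(rx, sf_i)                 # 1) identify the interferer symbol
k_i = int(np.argmax(np.abs(spec_i)))
amp_i = spec_i[k_i] / (2 ** sf_i)              # 2) estimate its complex amplitude
rx_clean = rx - amp_i * chirp(k_i, sf_i)       # 3) reconstruct and subtract
for m in range(2):                             # then demodulate the desired SF7 symbols
    seg = rx_clean[m * 2 ** sf_d:(m + 1) * 2 ** sf_d]
    print("decoded SF7 symbol:", int(np.argmax(np.abs(dechirp_fft(seg, sf_d)))))
```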
In this paper, we study a generalization of twisted (groupoid) equivariant $\mathrm{K}$-theory in the sense of Freed-Moore for $\mathbb{Z}_2$-graded $\mathrm{C}^*$-algebras. It is defined by using Fredholm operators on Hilbert modules with twisted representations. We compare it with another description using odd symmetries, which is a generalization of van Daele's $\mathrm{K}$-theory for $\mathbb{Z}_2$-graded Banach algebras. In particular, we obtain a simple presentation of the twisted equivariant $\mathrm{K}$-group when the $\mathrm{C}^*$-algebra is trivially graded. It is applied to the bulk-edge correspondence of topological insulators with CT-type symmetries.
For popular websites, the most important concern is to distribute incoming load dynamically among web servers so that they can respond to their clients without delays or failures. Different websites use different strategies to distribute load among web servers, but most schemes concentrate on only one factor, the number of requests. None of them consider that different types of requests require different levels of processing effort to answer, that a status record of all the web servers associated with one domain name should be maintained, or that a mechanism is needed to handle the situation in which one of the servers is not working. Therefore, there is a fundamental need to develop a strategy for dynamic load allocation on the web side. In this paper, an effort has been made to introduce a cluster-based framework to solve the load distribution problem. This framework aims to distribute load among clusters on the basis of their operational capabilities. Moreover, the experimental results are illustrated with the help of an example, an algorithm, and an analysis of the algorithm.
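A minimal sketch of capability-weighted dispatch across clusters with a simple failure check is given below; the cluster names, capability scores, and request weights are hypothetical, and the framework proposed in the paper may differ in its details.

```python
import random

# hypothetical clusters with relative operational capability, a health flag, and current load
clusters = {
    "cluster-a": {"capability": 4.0, "alive": True, "load": 0.0},
    "cluster-b": {"capability": 2.0, "alive": True, "load": 0.0},
    "cluster-c": {"capability": 1.0, "alive": False, "load": 0.0},   # simulated failed cluster
}

def dispatch(request_weight):
    """Send a request to the alive cluster with the lowest load relative to its capability."""
    alive = {name: c for name, c in clusters.items() if c["alive"]}
    if not alive:
        raise RuntimeError("no cluster available to serve the request")
    name = min(alive, key=lambda k: alive[k]["load"] / alive[k]["capability"])
    clusters[name]["load"] += request_weight
    return name

# requests of different types require different processing effort (the request weight)
random.seed(0)
for _ in range(8):
    weight = random.choice([1, 3, 5])
    print(f"request of weight {weight} -> {dispatch(weight)}")
```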
The dynamical responses of the Blume-Capel (S=1) ferromagnet to plane propagating (with fixed frequency and wavelength) and standing magnetic field waves are studied separately in two dimensions by extensive Monte Carlo simulation. Depending on the values of the temperature, the amplitude of the propagating magnetic field and the strength of the anisotropy, two different dynamical phases are observed. For a fixed value of the anisotropy and of the amplitude of the propagating magnetic field, the system undergoes a dynamical phase transition from a driven spin wave propagating phase to a pinned or spin frozen state as the system is cooled down. The time-averaged magnetisation over a full cycle of the propagating magnetic field plays the role of the dynamic order parameter. A comprehensive phase diagram is plotted in the plane formed by the amplitude of the propagating wave and the temperature of the system. It is found that the phase boundary shrinks inward as the anisotropy increases. The phase boundary in the plane described by the strength of the anisotropy and the temperature is also drawn; it likewise shrinks inward as the field amplitude increases.
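The dynamic order parameter can be illustrated with a short Metropolis simulation; the lattice size, couplings, field parameters, and temperature below are assumed, illustrative values rather than those of the study, and a production run would equilibrate over many field cycles before measuring.

```python
import numpy as np

rng = np.random.default_rng(0)
L, J, D = 32, 1.0, 0.5                  # lattice size, exchange coupling, anisotropy (assumed)
h0, lam, period, T = 1.0, 16, 100, 1.0  # field amplitude, wavelength (sites), period (sweeps), temperature

spins = rng.choice([-1, 0, 1], size=(L, L))

def sweep(t):
    """One Metropolis sweep of the S=1 Blume-Capel model, H = -J sum s_i s_j + D sum s_i^2 - sum h_i s_i,
    in a propagating field h(x, t) = h0 * cos(2*pi*(t/period - x/lam))."""
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        s_old = spins[i, j]
        s_new = rng.choice([-1, 0, 1])
        nn = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
        h = h0 * np.cos(2 * np.pi * (t / period - j / lam))
        dE = (-J * nn - h) * (s_new - s_old) + D * (s_new**2 - s_old**2)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = s_new

Q = 0.0
for t in range(period):                 # dynamic order parameter: magnetisation averaged over one cycle
    sweep(t)
    Q += spins.mean() / period
print("time-averaged magnetisation Q =", Q)
```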
Two quadrature-based algorithms for computing the matrix fractional power $A^\alpha$ are presented in this paper. These algorithms are based on the double exponential (DE) formula, which is well-known for its effectiveness in computing improper integrals as well as in treating nearly arbitrary endpoint singularities. The DE formula transforms a given integral into another integral that is suited for the trapezoidal rule; in this process, the integral interval is transformed to the infinite interval. Therefore, it is necessary to truncate the infinite interval into an appropriate finite interval. In this paper, a truncation method, which is based on a truncation error analysis specialized to the computation of $A^\alpha$, is proposed. Then, two algorithms are presented -- one computes $A^\alpha$ with a fixed number of abscissas, and the other computes $A^\alpha$ adaptively. Subsequently, the convergence rate of the DE formula for Hermitian positive definite matrices is analyzed. The convergence rate analysis shows that the DE formula converges faster than the Gaussian quadrature when $A$ is ill-conditioned and $\alpha$ is a non-unit fraction. Numerical results show that our algorithms achieved the required accuracy and were faster than other algorithms in several situations.
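For a rough illustration of the DE-plus-trapezoid idea, the sketch below evaluates one standard integral representation of $A^\alpha$ for a Hermitian positive definite matrix. The particular substitution, the truncation interval $[-L, L]$, and the number of abscissas are illustrative choices made here; they are not the paper's error-analysis-based truncation or its adaptive algorithm.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def frac_power_de(A, alpha, L=4.0, m=60):
    """Approximate A**alpha (A Hermitian positive definite, 0 < alpha < 1) from
    A^alpha = sin(alpha*pi)/pi * integral_0^inf s^(alpha-1) A (sI + A)^(-1) ds,
    using the double exponential substitution s = exp(pi*sinh(u)) and the
    trapezoidal rule on the truncated interval [-L, L] with 2m+1 abscissas."""
    dim = A.shape[0]
    I = np.eye(dim)
    u = np.linspace(-L, L, 2 * m + 1)
    h = u[1] - u[0]
    X = np.zeros_like(A, dtype=float)
    for uk in u:
        s = np.exp(np.pi * np.sinh(uk))
        w = np.pi * np.cosh(uk) * s              # ds/du of the DE substitution
        X += h * w * s ** (alpha - 1) * np.linalg.solve(s * I + A, A)
    return np.sin(alpha * np.pi) / np.pi * X

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)                      # Hermitian positive definite test matrix
err = np.linalg.norm(frac_power_de(A, 0.5) - fractional_matrix_power(A, 0.5))
print("difference from scipy's reference:", err)
```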
The noncritical $D=4$ $W_3$ string is a model of $W_3$ gravity coupled to two free scalar fields. In this paper we discuss its BRST quantization in direct analogy with that of the $D=2$ (Virasoro) string. In particular, we calculate the physical spectrum as a problem in BRST cohomology. The corresponding operator cohomology forms a BV-algebra. We model this BV-algebra on that of the polyderivations of a commutative ring in six variables with a quadratic constraint, or, equivalently, on the BV-algebra of (polynomial) polyvector fields on the base affine space of $SL(3,C)$. We attempt to present a complete summary of the progress made in these studies. [...]
For any factorization domain $\cal A$ and an algebra endomorphism $\sigma$ of $\cal A$, there exists a non-associative algebra $({\cal A},\sigma,[\cdot,\cdot])$ with multiplication satisfying skew-symmetry and generalized (twisted) Jacobi identities, called a $\sigma$-deformed Witt algebra. In this paper, we obtain the necessary and sufficient conditions for the algebra $({\cal A},\sigma,[\cdot,\cdot])$ to be simple.
The many-body Hamiltonians and other fermionic physical observables are expressed in terms of fermionic creation and annihilation operators, which form the algebra of canonical anti-commutation relations (CAR). In this work we use a canonical isomorphism between CAR and $\mathcal M_{2^\infty}$ algebras to derive analytic matrix representations of many-fermion operators. Code-lines implementing these matrix representations are supplied and Hubbard-type Hamiltonians are worked out explicitly.
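The construction can be sketched with the standard Jordan-Wigner-type Kronecker-product representation on $(\mathbb{C}^2)^{\otimes M}$, here checked against the CAR relations and used to assemble a two-site Hubbard Hamiltonian; the hopping and interaction values are assumed, and this sketch is not the code supplied with the paper.

```python
import numpy as np
from functools import reduce

def annihilation_ops(M):
    """Matrix representations of M fermionic annihilation operators on (C^2)^{tensor M},
    built from Kronecker products so that they satisfy the CAR algebra."""
    a = np.array([[0.0, 1.0], [0.0, 0.0]])        # local annihilation |1> -> |0>
    Z = np.diag([1.0, -1.0])                      # parity string enforcing anti-commutation
    I = np.eye(2)
    return [reduce(np.kron, [Z] * j + [a] + [I] * (M - j - 1)) for j in range(M)]

c = annihilation_ops(4)

def anti(A, B):
    return A @ B + B @ A

# check the canonical anti-commutation relations on a few pairs (real matrices: adjoint = transpose)
assert np.allclose(anti(c[0], c[2].T), np.zeros((16, 16)))   # {c_0, c_2^dag} = 0
assert np.allclose(anti(c[1], c[1].T), np.eye(16))           # {c_1, c_1^dag} = 1

# two-site Hubbard Hamiltonian, modes ordered (site0 up, site0 down, site1 up, site1 down)
t, U = 1.0, 4.0                                   # assumed hopping and on-site repulsion
up0, dn0, up1, dn1 = c
num = lambda op: op.T @ op                        # number operator n = c^dag c
H = (-t * (up0.T @ up1 + up1.T @ up0 + dn0.T @ dn1 + dn1.T @ dn0)
     + U * (num(up0) @ num(dn0) + num(up1) @ num(dn1)))
print("lowest eigenvalues:", np.linalg.eigvalsh(H)[:3])
```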
We study the production of the spin partner of the $X(3872)$, which is a $D^{*}\bar D^{*}$ bound state with quantum numbers $J^{PC}=2^{++}$ and named $X_2(4012)$ here, with the associated emission of a photon in electron--positron collisions. The results show that the ideal energy region to observe the $X_2(4012)$ in $e^+e^-$ annihilations is from 4.4~GeV to 4.5~GeV, due to the presence of the S-wave $\bar D^{*} D_1(2420)$ and $\bar D^{*} D_2(2460)$ thresholds. We also point out that it will be difficult to observe the $\gamma X_2(4012)$ signal at an $e^+e^-$ center-of-mass energy around 4.26~GeV.
We present a study to detect friendship, its strength, and its change from smartphone location data collected among members of a fraternity. We extract a rich set of co-location features and build classifiers that detect friendships and close friendships at 30% above a random baseline. We design cross-validation schemes to test our model performance in specific application settings, finding it robust to seeing new dyads and to temporal variance.
The Hubble expansion of galaxies, the 2.73 K blackbody radiation background and the cosmic abundances of the light elements argue for a hot, dense origin of the universe --- the standard Big Bang cosmology --- and enable its evolution to be traced back fairly reliably to the nucleosynthesis era when the temperature was of $O(1)$ MeV, corresponding to an expansion age of $O(1)$ sec. All particles, known and hypothetical, would have been created at higher temperatures in the early universe and analyses of their possible effects on the abundances of the synthesized elements enable many interesting constraints to be obtained on particle properties. These arguments have usefully complemented laboratory experiments in guiding attempts to extend physics beyond the Standard $SU(3)_{\rm C}\otimes SU(2)_{\rm L}\otimes U(1)_{Y}$ Model, incorporating ideas such as supersymmetry, compositeness and unification. We first present a pedagogical account of relativistic cosmology and primordial nucleosynthesis, discussing both theoretical and observational aspects, and then proceed to examine such constraints in detail, in particular those pertaining to new massless particles and massive unstable particles. Finally, in a section aimed at particle physicists, we illustrate applications of such constraints to models of new physics.
We report on a search for direct scalar bottom quark (sbottom) pair production in $p \bar{p}$ collisions at $\sqrt{s}=1.96$~TeV, in events with large missing transverse energy and two jets of hadrons in the final state, where at least one of the jets is required to be identified as originating from a $b$ quark. The study uses a CDF Run~II data sample corresponding to 2.65~fb${}^{-1}$ of integrated luminosity. The data are in agreement with the standard model. In an R-parity conserving minimal supersymmetric scenario, and assuming that the sbottom decays exclusively into a bottom quark and a neutralino, 95$\%$ confidence-level upper limits on the sbottom pair production cross section of 0.1~pb are obtained. For neutralino masses below 70~GeV/$c^2$, sbottom masses up to 230~GeV/$c^2$ are excluded at 95$\%$ confidence level.
We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, $\epsilon$, of the product is \emph{less} than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm, $P_\epsilon$. We argue that, in the limit as $\epsilon\to 0$, $P_\epsilon\sim (\ln (1/\epsilon))^\mu \epsilon^\gamma$, where $\mu $ and $\gamma$ are two real parameters. Our motivation for analysing this \emph{matrix contraction process} is that it serves as a model for describing the fine-structure of strange attractors, where a dense concentration of trajectories results from the differential of the flow being contracting in some region. We exhibit a matrix-product model for the differential of the flow in a random velocity field, and show that there is a phase transition, with the parameter $\mu$ changing abruptly from $\mu=0$ to $\mu=-\frac{3}{2}$ as a parameter of the flow field model is varied.
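For intuition, a direct Monte Carlo sketch of such a matrix contraction process is given below. The choice of random matrices (near-identity, exponential-like steps), the reset value, and the run length are illustrative assumptions; since small norms are rare events when the Lyapunov exponent is positive, long runs are needed to resolve the $\epsilon\to 0$ tail.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_step(dt=0.5):
    # random 2x2 matrix close to exp(dt*G) with Gaussian G; products of such matrices
    # typically have a positive Lyapunov exponent for this (assumed) parameter choice
    G = rng.standard_normal((2, 2))
    return np.eye(2) + dt * G + 0.5 * dt**2 * (G @ G)

norms = []
M = 0.5 * np.eye(2)                       # start from a multiple of the identity
for _ in range(200_000):
    M = random_step() @ M
    eps = np.linalg.norm(M, 2)            # operator 2-norm of the product
    norms.append(eps)
    if eps > 1.0:                         # reset rule of the matrix contraction process
        M = 0.5 * np.eye(2)

norms = np.array(norms)
for x in (0.5, 0.25, 0.1, 0.05):
    print(f"empirical P(norm < {x}): {np.mean(norms < x):.2e}")
```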
Fairness in graph neural networks has been actively studied recently. However, existing works often do not explicitly consider the role of message passing in introducing or amplifying the bias. In this paper, we first investigate the problem of bias amplification in message passing. We empirically and theoretically demonstrate that message passing could amplify the bias when the 1-hop neighbors from different demographic groups are unbalanced. Guided by such analyses, we propose BeMap, a fair message passing method that leverages a balance-aware sampling strategy to balance the number of 1-hop neighbors of each node among different demographic groups. Extensive experiments on node classification demonstrate the efficacy of BeMap in mitigating bias while maintaining classification accuracy. The code is available at https://github.com/xiaolin-cs/BeMap.
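The balance-aware sampling idea can be sketched as follows for a single node, with equal per-group budgets set by the smallest group; this is a simplified, hypothetical version, and the released BeMap code linked above is the authoritative implementation.

```python
import random
from collections import defaultdict

rng = random.Random(0)

def balance_aware_sample(neighbors, group_of):
    """Sample the same number of 1-hop neighbors from each demographic group of one node."""
    by_group = defaultdict(list)
    for v in neighbors:
        by_group[group_of[v]].append(v)
    budget = min(len(members) for members in by_group.values())   # per-group budget
    sampled = []
    for members in by_group.values():
        sampled.extend(rng.sample(members, budget))
    return sampled

# toy example: a node with six neighbors from two demographic groups
group_of = {1: "a", 2: "a", 3: "a", 4: "a", 5: "b", 6: "b"}
print(balance_aware_sample([1, 2, 3, 4, 5, 6], group_of))
```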
In this paper, we study two important metrics in multiple-input multiple-output (MIMO) time-varying Rayleigh flat fading channels. One is the eigen-mode, and the other is the instantaneous mutual information (IMI). Their second-order statistics, such as the correlation coefficient, level crossing rate (LCR), and average fade/outage duration, are investigated, assuming a general nonisotropic scattering environment. Exact closed-form expressions are derived and Monte Carlo simulations are provided to verify the accuracy of the analytical results. For the eigen-modes, we find that they tend to be spatio-temporally uncorrelated in large MIMO systems. For the IMI, the results show that its correlation coefficient can be well approximated by the squared amplitude of the correlation coefficient of the channel, under certain conditions. Moreover, we also find that the LCR of the IMI is much more sensitive to the scattering environment than that of each eigen-mode.
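A quick Monte Carlo check of the IMI statement can be sketched for an i.i.d. Rayleigh channel with a simple Gauss-Markov temporal correlation; the antenna numbers, SNR, and correlation value below are assumed, and how closely the approximation holds depends on the conditions identified in the paper (which, moreover, treats nonisotropic scattering).

```python
import numpy as np

rng = np.random.default_rng(0)
nr, nt, snr, rho = 2, 2, 10.0, 0.9        # antennas, linear SNR, channel time correlation (assumed)
n_samples = 50_000

def imi(H):
    # instantaneous mutual information with equal power allocation across transmit antennas
    return np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * H @ H.conj().T).real)

I1 = np.empty(n_samples)
I2 = np.empty(n_samples)
for i in range(n_samples):
    H1 = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    W = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    H2 = rho * H1 + np.sqrt(1 - rho**2) * W   # channel at a later time, correlation rho
    I1[i], I2[i] = imi(H1), imi(H2)

print(f"IMI correlation: {np.corrcoef(I1, I2)[0, 1]:.3f}, |rho|^2 = {rho**2:.3f}")
```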
For $f$ analytic on the unit disc let $r_t(f)(z)=f(e^{it}z)$ and $f_r(z)=f(rz)$, rotations and dilations respectively. We show that for $f$ in the Bergman space $A^p$ and $0<\alpha\leq 1$ the following are equivalent. \begin{itemize} \item[(i)] $\|r_t(f)-f\|_{A^p}=O(|t|^{\alpha}), \quad t\to 0$, \item[(ii)] $\|(f')_r\|_{A^p} =O\left((1-r)^{\alpha-1}\right), \quad r\to 1^{-}$, \item[(iii)] $\|f_r-f\|_{A^p}=O((1-r)^{\alpha}),\quad r\to 1^{-}$. \end{itemize} The Hardy space analogues of these conditions are known to be equivalent by results of Hardy and Littlewood and of E. Storozhenko, and in that setting they describe the mean Lipschitz spaces $\Lambda(p, \alpha)$. On the way, we provide an elementary proof of the equivalence of (ii) and (iii) in Hardy spaces, and show that similar assertions are valid for certain weighted mean Lipschitz spaces.
A redshift survey has been carried out in the region of the Hubble Deep Field North using the Low Resolution Imaging Spectrograph at the Keck Observatory. The resulting redshift catalog, which contains 671 entries, is a compendium of our own data together with published LRIS/Keck data. It is more than 92% complete for objects, irrespective of morphology, to $R = 24$ mag in the HDF itself and to $R = 23$ mag in the Flanking Fields within a diameter of 8 arcmin centered on the HDF, an unusually high completeness for a magnitude-limited survey performed with a large telescope. A median redshift $z = 1.0$ is reached at $R \sim 23.8$. Strong peaks in the redshift distribution, which arise when a group or poor cluster of galaxies intersects the area surveyed, can be identified to $z \sim 1.2$ in this dataset. More than 68% of the galaxies are members of these redshift peaks. In a few cases, closely spaced peaks in $z$ can be resolved into separate groups of galaxies that can be distinguished in both velocity and location on the sky. The radial separation of these peaks in the pencil-beam survey is consistent with a characteristic length scale for their separation of $\approx$70 Mpc in our adopted cosmology ($h = 0.6, \Omega_M = 0.3$, $\Lambda = 0$). Strong galaxy clustering is in evidence at all epochs back to $z \le 1.1$. (abstract abridged)
From geometry and conservation we derive two nonlinear evolution equations for sand ripples. In the case of a strong wind leading to a net erosion of the sand bed, ripples obey the Benney equation. This leads either to order or to disorder, depending on whether dispersion is strong or weak. In the most frequent case, where erosion is counterbalanced by deposition, we derive a new one-parameter nonlinear equation. It reveals ripple structures which undergo a coarsening process at long times, a process which slows down dramatically as the ripple wavelength grows.
The derivative discontinuity in the exact exchange-correlation potential of ensemble Density Functional Theory (DFT) is investigated at the specific integer number that corresponds to the maximum number of bound electrons, $J_{max}$. A recently developed complex-scaled analog of DFT is extended to fractional particle numbers and used to study ensembles of both bound and metastable states. It is found that the exact exchange-correlation potential experiences discontinuous jumps at integer particle numbers including $J_{max}$. For integers below $J_{max}$ the jump is purely real because of the real shift in the chemical potential. At $J_{max}$, the jump has a non-zero imaginary component reflecting the finite lifetime of the $(J_{max}+1)$ state.
We present the first observational evidence for a collimated jet in a cataclysmic variable system: the recurrent nova T Pyxidis. Optical spectra show bipolar components of H$\alpha$ with velocities $\sim 1400$ km/s, very similar to those observed in the supersoft X-ray sources and in SS 433. We argue that a key ingredient of the formation of jets in the supersoft X-ray sources and T Pyx (in addition to an accretion disk threaded by a vertical magnetic field) is the presence of nuclear burning on the surface of the white dwarf.
The electrical properties of superconducting tapes and coatings in the direction transverse to the long dimension of the composite have rarely been studied. However, transverse dissipation can eventually determine the behavior of a transmission line in the case of failure due to the presence of transverse cracks, and it is also fundamental in the AC regime. In this paper we present a preliminary experimental study of the electrical transport properties along the transverse direction of BSCCO-metal tapes, and compare them with those measured along the long axis of the material. In spite of the fact that the tapes under study are not multi-filamentary, our experiments suggest that there is a measurable anisotropy of the transport properties between the longitudinal and transverse directions.
In the present paper, questions about the local behavior of mappings $f:D\rightarrow \overline{{\Bbb R}^n}$, $n\ge 2$, in $\overline{D}$ are studied. Under some conditions on a measurable function $Q(x)$, $Q:D\rightarrow [0, \infty]$, and on the boundaries of $D$ and $D^{\,\prime}=f(D)$, it is shown that a family of open discrete mappings $f:D\rightarrow \overline{{\Bbb R}^n}$, $n\ge 2$, with characteristic of quasiconformality $Q(x)$ is equicontinuous in $\overline{D}$.
We investigate the boson star with a self-interacting scalar field as a model of galactic halos. The model has slightly increasing rotation curves and allows wider ranges of the mass ($m$) and coupling ($\lambda$) of the halo dark matter particle than the non-interacting model previously suggested (ref.~\cite{sin1}). The two quantities are related by $\lambda^{\frac{1}{2}} (m_p/m)^2 \gtrsim 10^{50}$.
We present a comprehensive spectral analysis of all INTEGRAL data obtained so far for the X-ray--bright Seyfert galaxy NGC 4151. We also use all contemporaneous data from RXTE, XMM, Swift and Suzaku. We find a linear correlation between the medium- and hard-energy X-ray fluxes measured by INTEGRAL, which indicates an almost constant spectral index over six years. The majority of the INTEGRAL observations were made when the source was either in a very bright or in a very dim hard X-ray state. We find that thermal Comptonization models applied to the bright state yield a plasma temperature of 50--70 keV and an optical depth of 1.3--2.6, depending on the assumed source geometry. For the dim state, these parameters are in the ranges of 180--230 keV and 0.3--0.7, respectively. The Compton parameter is y = 1 for all the spectra, indicating a stable geometry. Using this result, we can determine the reflection effective solid angles associated with the close and distant reprocessing media as 0.3 x 2pi and 0.2 x 2pi, respectively. The plasma energy balance, the weak disc reflection and a comparison of the UV fluxes illuminating the plasma to the observed ones are all consistent with an inner hot accretion flow surrounded by an outer cold disc. The disc truncation radius can be determined, from an approximate equipartition between the observed UV and X-ray emission and from the fitted disc blackbody model, as 15 gravitational radii. Alternatively, our results can be explained by a mildly relativistic coronal outflow.
In this paper we address the problem of interpolating a spline developable patch bounded by a given spline curve and the first and last rulings of the developable surface. In order to complete the boundary of the patch, a second spline curve is to be given. Up to now, this interpolation problem could be solved, but without the possibility of choosing both endpoints for the rulings. We circumvent this difficulty here by resorting to degree elevation of the developable surface. This is useful not only for solving this problem, but also for other problems dealing with triangular developable patches.
Most large web-scale applications are now built by composing collections (from a few up to 100s or 1000s) of microservices. Operators need to decide how many resources are allocated to each microservice, and these allocations can have a large impact on application performance. Manually determining allocations that are both cost-efficient and meet performance requirements is challenging, even for experienced operators. In this paper we present AutoTune, an end-to-end tool that automatically minimizes resource utilization while maintaining good application performance.
We present the results of a search for pair production of scalar top quarks in an R-parity violating supersymmetry scenario in 106 pb$^{-1}$ of ppbar collisions at $\sqrt{s} = 1.8$ TeV collected by the Collider Detector at Fermilab. In this mode each scalar top quark decays into a tau lepton and a b quark. We search for events with two tau leptons, one decaying leptonically (e or mu) and one decaying hadronically, and two jets. No candidate events pass our final selection criteria. We set a 95% confidence level lower limit on the scalar top quark mass at 122 GeV/$c^2$ for Br(stop -> tau + b) = 1.
This paper is part of a series of articles on noncommutative geometry and conformal geometry. In this paper, we reformulate the local index formula in conformal geometry in such a way as to take into account the action of conformal diffeomorphisms. We also construct and compute a whole new family of geometric conformal invariants associated with conformal diffeomorphisms. This includes conformal invariants associated with equivariant characteristic classes. The approach of this paper involves using various tools from noncommutative geometry, such as twisted spectral triples and cyclic theory. An important step is to establish the conformal invariance of the Connes-Chern character of the conformal Dirac spectral triple of Connes-Moscovici. Ultimately, however, the main results of the paper are stated in a purely differential-geometric fashion.
This paper focuses on quantifying the outage performance of terahertz (THz) relaying systems. In this direction, novel closed-form expressions for the outage probability of a dual-hop relaying system, in which both the source-relay and relay-destination links suffer from fading and stochastic beam misalignment, are extracted. Our results reveal the importance of taking into account the impact of beam misalignment when characterizing the outage performance of the system as well as when selecting the transmission frequencies.
Radio astronomy has changed. For years it studied relatively rare sources, which emit mostly non-thermal radiation across the entire electromagnetic spectrum, i.e. radio quasars and radio galaxies. Now it is reaching such faint flux densities that it detects mainly star-forming galaxies and the more common radio-quiet active galactic nuclei. These sources make up the bulk of the extragalactic sky, which has been studied for decades in the infrared, optical, and X-ray bands. I follow the transformation of radio astronomy by reviewing the main components of the radio sky at the bright and faint ends, the issue of their proper classification, their number counts, luminosity functions, and evolution. The overall "big picture" astrophysical implications of these results, and their relevance for a number of hot topics in extragalactic astronomy, are also discussed. The future prospects of the faint radio sky are very bright, as we will soon be flooded with survey data. This review should be useful to all extragalactic astronomers, irrespective of their favourite electromagnetic band(s), and even stellar astronomers might find it somewhat gratifying.
In this work, we attempt to solve the Hit Song Science problem, which aims to predict which songs will become chart-topping hits. We constructed a dataset of approximately 1.8 million hit and non-hit songs and extracted their audio features using the Spotify Web API. We test four models on our dataset. Our best model, a random forest, predicts Billboard song success with 88% accuracy.
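A minimal sketch of the modelling step is given below, assuming a table of Spotify-style audio features with a binary hit label; the synthetic features, their count, and the train/test split are illustrative stand-ins, not the work's actual dataset or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_songs = 5000
X = rng.random((n_songs, 6))        # stand-in for audio features (danceability, energy, tempo, ...)
# synthetic "hit" label correlated with the first two features, for illustration only
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.standard_normal(n_songs) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", accuracy_score(y_te, model.predict(X_te)))
```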
We report direct measurement of population dynamics in the excited state manifold of a nitrogen-vacancy (NV) center in diamond. We quantify the phonon-induced mixing rate and demonstrate that it can be completely suppressed at low temperatures. Further, we measure the intersystem crossing (ISC) rate for different excited states and develop a theoretical model that unifies the phonon-induced mixing and ISC mechanisms. We find that our model is in excellent agreement with experiment and that it can be used to predict unknown elements of the NV center's electronic structure. We discuss the model's implications for enhancing the NV center's performance as a room-temperature sensor.
Results from a long-term observational project called the Araucaria Project are presented. Based on Wide Field optical monitoring of 8 nearby galaxies, covering a large range of metallicities, more than 500 Cepheids and a few hundred Blue Supergiant candidates were identified. From the analysis of Cepheid P-L relations of outstanding quality derived from our data we conclude that the slopes of these relations in the I band and in the Wesenheit index are not dependent on metallicity. Comparing the I-band magnitudes of Cepheids with a period of ten days, as computed from our P-L relations, to the I-band magnitudes of the tip of the RGB, which is widely believed to be independent of population effects, we cannot see any obvious dependence of the zero point of the I-band P-L relation on metallicity. A preliminary analysis of IR follow-up observations of sub-samples of the identified Cepheids in various galaxies of the project shows that the distances obtained from these data are systematically shorter by about 0.1 mag than those derived from the optical photometry. It is likely that this effect can be attributed to internal reddening in the program galaxies.
The contribution of nucleon isobar $N^*$ exchanges to backward elastic pd scattering is calculated on the basis of the deuteron 6q-model and found to be negligible in comparison with the neutron exchange. It is shown that the pole amplitude of neutron pickup from the $nN^*$ component of the deuteron is favoured in the reaction $pd\to dN^*$ for backward going $N^*(1440)$ and $N^*(1710)$ at kinetic energies of the incident proton of 1.5--2 GeV, whereas the triangular diagram with the subprocess $pp\to d\pi^+$, related to the usual $pn$ component of the deuteron, is considerably suppressed.
Neutron stars may harbour the true ground state of matter in the form of strange quark matter. If present, this type of matter is expected to be a color superconductor, a consequence of quark pairing with respect to the color/flavor degrees of freedom. The stellar magnetic field threading the quark core becomes a color-magnetic admixture and, in the event that superconductivity is of type II, leads to the formation of color-magnetic vortices. In this Letter we show that the volume-averaged color-magnetic vortex tension force should naturally lead to a significant degree of non-axisymmetry in systems like radio pulsars. We show that gravitational radiation from such color-magnetic `mountains' in young pulsars like the Crab and Vela could be observable by the future Einstein Telescope, thus becoming a probe of paired quark matter in neutron stars. The detectability threshold can be pushed up toward the sensitivity level of Advanced LIGO if we invoke an interior magnetic field about a factor ten stronger than the surface polar field.
The examination of parity symmetry in gravitational interactions has drawn increasing attention. Although Einstein's General Relativity is parity-conserving, numerous theories of parity-violating (PV) gravity in different frameworks have recently been proposed with different motivations. In this review, we briefly summarize the recent progress of these theories and focus on the observable effects of PV terms in gravitational waves (GWs), which are mainly reflected in the difference between the left-hand and right-hand polarization modes. We are primarily concerned with the implications of these theories for GWs generated by compact binary coalescences and for the primordial GWs generated in the early Universe. The deviations of the GW waveforms and/or the primordial power spectrum can always be quantified by the energy scale of parity violation of the theory. Applying current and future GW observations from laser interferometers and the cosmic microwave background radiation, the current and potential constraints on the PV energy scales are presented, which indicates that the parity symmetry of gravity can be tested at high energy scales in this new era of gravitational waves.
Quasars contribute to the 21-cm signal from the Epoch of Reionization (EoR) primarily through their ionizing UV and X-ray emission. However, their radio continuum and Lyman-band emission also regulates the 21-cm signal in their direct environment, potentially leaving the imprint of their duty cycle. We develop a model for the radio and UV luminosity functions of quasars during the EoR, and constrain it using recent observations. Our model is consistent with the z~7.5 quasar from Banados et al. 2017, and also predicts only a few quasars in the sky suitable for 21-cm forest observations (10 mJy). We exhibit a new effect on the 21-cm signal observed against the CMB: a radio-loud quasar can leave the imprint of its duty cycle on the 21-cm tomography. We apply this effect in a cosmological simulation and conclude that the effect of typical radio-loud quasars is most likely negligible in an SKA field of view. For a 1-10 mJy quasar the effect is stronger, though hardly observable at SKA resolution. Then we study the contribution of the Lyman-band (Ly-alpha to Ly-beta) emission of quasars to the Wouthuysen-Field coupling. The collective effect of quasars on the 21-cm power spectrum is larger than the thermal noise at low k, though featureless. However, a distinctive pattern around the brightest quasars in an SKA field of view may be observable in the tomography, encoding the duration of their duty cycle. This pattern has a high signal-to-noise ratio for the brightest quasar in a typical SKA shallow survey.
In federated learning (FL), a number of devices train their local models and upload the corresponding parameters or gradients to the base station (BS) to update the global model while protecting their data privacy. However, due to the limited computation and communication resources, the number of local trainings (a.k.a. local updates) and that of aggregations (a.k.a. global updates) need to be carefully chosen. In this paper, we investigate and analyze the optimal trade-off between the number of local trainings and that of global aggregations to speed up the convergence and enhance the prediction accuracy compared with existing works. Our goal is to minimize the global loss function under both delay and energy consumption constraints. In order to make the optimization problem tractable, we derive a new and tight upper bound on the loss function, which allows us to obtain closed-form expressions for the number of local trainings and that of global aggregations. Simulation results show that our proposed scheme achieves better performance in terms of prediction accuracy, and converges much faster than the baseline schemes.
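To make the local/global trade-off concrete, the sketch below runs a FedAvg-style loop on synthetic least-squares data and compares different splits of a fixed per-device step budget; the data, learning rate, and budget are assumed toy values, and the paper's optimization is over delay and energy constraints rather than this simple step budget.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_devices, n_local_data = 10, 8, 100
w_true = rng.standard_normal(d)
data = []
for _ in range(n_devices):
    X = rng.standard_normal((n_local_data, d))
    y = X @ w_true + 0.1 * rng.standard_normal(n_local_data)
    data.append((X, y))

def local_sgd(w, X, y, steps, lr=0.02):
    for _ in range(steps):
        i = rng.integers(len(y))
        w = w - lr * (X[i] @ w - y[i]) * X[i]      # SGD step on 0.5*(x^T w - y)^2
    return w

def run(local_steps, rounds):
    """FedAvg-style loop: each round every device runs `local_steps` SGD steps,
    then the server performs one global aggregation by averaging the models."""
    w = np.zeros(d)
    for _ in range(rounds):
        w = np.mean([local_sgd(w.copy(), X, y, local_steps) for X, y in data], axis=0)
    return np.mean([np.mean((X @ w - y) ** 2) for X, y in data])

# same total number of local steps per device, different local/global splits
for local_steps, rounds in [(1, 1000), (10, 100), (100, 10)]:
    print(f"local={local_steps:4d}, rounds={rounds:4d}, global loss={run(local_steps, rounds):.4f}")
```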
We study corotational wave maps from $(1+4)$-dimensional Minkowski space into the $4$-sphere. We prove the stability of an explicitly known self-similar wave map under perturbations that are small in the critical Sobolev space.
We consider linear reaction systems with slow and fast reactions, which can be interpreted as master equations or Kolmogorov forward equations for Markov processes on a finite state space. We investigate their limit behavior if the fast reaction rates tend to infinity, which leads to a coarse-grained model where the fast reactions create microscopically equilibrated clusters, while the exchange of mass between the clusters occurs on the slow time scale. Assuming detailed balance, the reaction system can be written as a gradient flow with respect to the relative entropy. Focusing on the physically relevant cosh-type gradient structure, we show how an effective limit gradient structure can be rigorously derived and that the coarse-grained equation again has a cosh-type gradient structure. We obtain the strongest version of convergence in the sense of the Energy-Dissipation Principle (EDP), namely EDP-convergence with tilting.
The physics of critical phenomena in a many-body system far from thermal equilibrium is an interesting and important issue to be addressed both experimentally and theoretically. Trapped cold atoms have been actively used as a clean and versatile simulator for classical and quantum-mechanical systems, deepening our understanding of the underlying many-body physics. Here we review the nonlinear and collective dynamics of periodically modulated magneto-optically trapped cold atoms. By temporally modulating the intensity of the trapping lasers with controlled phases, one can realize two kinds of nonlinear oscillators, the parametrically driven oscillator and the resonantly driven Duffing oscillator, both of which exhibit dynamical bistable states. Cold atoms behave not only as single-particle nonlinear oscillators, but also as coupled oscillators through the light-induced inter-atomic interaction, which leads to phase transitions far from equilibrium in a way similar to phase transitions in equilibrium. The parametrically driven cold atoms show the ideal mean-field symmetry-breaking transition, with the symmetry broken with respect to time translation by the modulation period. Such a phase transition results from the cooperation and competition between the inter-particle interaction and the fluctuations, which lead to the nonlinear switching of atoms between the vibrational states; the experimentally measured critical characteristics identify it as belonging to the ideal mean-field transition class. On the other hand, the resonantly driven cold atoms, which possess coexisting periodic attractors, exhibit a kinetic phase transition analogous to the discontinuous gas-liquid phase transition in equilibrium, and interestingly the global interaction between atoms causes a shift of the phase-transition boundary.
We present the fully integrated form of the two-loop four-gluon amplitude in $\mathcal{N} = 2$ supersymmetric quantum chromodynamics with gauge group SU$(N_c)$ and with $N_f$ massless supersymmetric quarks (hypermultiplets) in the fundamental representation. Our result maintains full dependence on $N_c$ and $N_f$, and relies on the existence of a compact integrand representation that exhibits the duality between color and kinematics. Specializing to the $\mathcal{N} = 2$ superconformal theory, where $N_f = 2N_c$, we obtain remarkably simple amplitudes that have an analytic structure close to that of $\mathcal{N} = 4$ super-Yang-Mills theory, except that now certain lower-weight terms appear. We comment on the corresponding results for other gauge groups.