We describe subgroups and overgroups of the generalised Thompson groups $V_n$ which arise via conjugation by rational homeomorphisms of Cantor space. We specifically consider conjugating $V_n$ by homeomorphisms induced by synchronizing transducers and their inverses. Our descriptions of the subgroups and overgroups use properties of the conjugating transducer to either restrict or augment the action of $V_n$ on Cantor space.
Conjugate subgroups and overgroups of $V_n$
We assess the possibility that the planets Gliese 581 c and d lie within the habitable zone. In analogy with our solar system, we use an empirical definition of the habitable zone. Our assumptions address planetary climates and atmospheric circulation under gravitationally locked synchronous rotation. Based on the different scenarios, we argue that both planets in Gliese 581 could develop conditions for a habitable zone. In an Earth-like environment planet d could be within a habitable zone if an atmosphere producing a greenhouse effect of 100 K could have developed. If the planets are gravitationally locked, planet c could develop atmospheric circulation that would allow it to reach temperatures consistent with the existence of surface liquid water, which in turn could support life.
Considerations for the habitable zone of super-Earth planets in Gliese 581
High-spatial-resolution vibrational spectroscopy is one of the principal techniques for nanoscale compositional analysis in biological materials. Here, we present a new method for the analysis of whole-cell biological specimens through nanoscale vibrational electron energy-loss spectroscopy (EELS) in the monochromated scanning transmission electron microscope. Using the combined spatial and spectral resolution of the technique, we examine the vascular system of a cucumber stem and identify clear physical and vibrational signatures from the different cellular regions with high spatial resolution. Furthermore, using first-principles calculations combined with optical and EELS spectroscopy on the individual components that make up the cucumber stem, we unravel the physical mechanisms of the vibrational signatures and directly assign compositional origins to the cell walls and bodies of different cellular regions. These results demonstrate that monochromated electron energy-loss spectroscopy is a promising technique for nanoscale spatial mapping of the chemical composition of biological materials.
A New Path to Nanoscale Cellular Analysis with Monochromated Electron Energy-Loss Spectroscopy
We review some aspects of spin physics where QCD instantons play an important role. In particular, we discuss their large contributions to semi-inclusive deep-inelastic scattering and to polarized proton-proton scattering. We also review their possible contribution to the $\mathcal{P}$-odd pion azimuthal charge correlations in peripheral $AA$ scattering at collider energies.
Spin Physics through QCD Instantons
We consider the quickest change detection problem where the parameters of both the pre- and post-change distributions are unknown, which prevents the use of classical simple hypothesis testing. Without additional assumptions, optimal solutions are not tractable as they rely on some minimax and robust variant of the objective. As a consequence, change points might be detected too late for practical applications (in economics, health care or maintenance, for instance). Available constant-complexity techniques typically solve a relaxed version of the problem, relying heavily on very specific probability distributions and/or some very precise additional knowledge. We consider a totally different approach that leverages the theoretical asymptotic properties of optimal solutions to derive a new scalable approximate algorithm with near-optimal performance that runs in $\mathcal{O}(1)$, adapted to even more complex Markovian settings.
Quickest change detection with unknown parameters: Constant complexity and near optimality
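As background for the abstract above, the sketch below shows the classical CUSUM detector for the simple-hypothesis case in which both the pre- and post-change densities are fully known; this is the baseline the paper departs from, not its constant-complexity algorithm for unknown parameters. The log-densities, the threshold h, and the Gaussian example data are illustrative assumptions.

```python
import numpy as np

def cusum_detector(xs, logf0, logf1, h):
    """Classical CUSUM for known pre-/post-change log-densities.

    Returns the index at which the statistic first exceeds threshold h,
    or None if no change is declared.  This is the simple-hypothesis
    baseline; handling unknown parameters is the paper's contribution
    and is not attempted here.
    """
    s = 0.0
    for n, x in enumerate(xs):
        # Accumulate the log-likelihood ratio, clipped at zero.
        s = max(0.0, s + logf1(x) - logf0(x))
        if s > h:
            return n
    return None

# Example: detect a mean shift from 0 to 1 in unit-variance Gaussian data.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 200)])
logf0 = lambda x: -0.5 * x**2
logf1 = lambda x: -0.5 * (x - 1.0)**2
print(cusum_detector(data, logf0, logf1, h=8.0))
```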
Through a program of narrow band imaging, we have observed the changing structure of the H alpha emission line around Nova Cyg 1992 (V1974 Cyg) at regular intervals from 1996 to 1999. Between 1994 and 1996, the nebular boundary advanced to the southwest at nearly the speed of light, implying that the nebula was created by an expanding wave of radiation originating in the explosion interacting with surrounding material. The expansion speed dropped to 0.35c during 1996-1999. We have taken spectra of the nebula in 1998 and 1999. Only Balmer lines are detected, no He I, [OIII], [OII], [NII], or [SII]. There is also no trace of the high excitation nova lines (He II, NeV, etc). These spectra show that the nebula is NOT a reflection nebula, a conventional HII region, or a shock involving motions of the gas. The integrated H alpha luminosity of the nebula between 1996 and 1999 is in the range ~1.3-2.2 x 10^35 erg s^-1. The density is poorly determined, but is probably very large (~10^4 cm^-3) in order to explain the brightest region of the nebula. The dynamical timescale is about a year and the recombination timescale is of the same order. Bright patches are observed to fade in these times. The energy required to ionize the nebula is the bolometric luminosity of the nova for 30 days, smaller than the time during which the temperature of the nova photosphere was in the right range to produce the ionizing photons. We have also undertaken sensitive surveys of H alpha nebulae around recent novae but find no evidence of other such nebulae, so this type of object must be rare.
The emission nebula associated with V1974 Cyg: a unique object?
We classify all subsets $S$ of the projective Hilbert space with the following property: for every point $\pm s_0\in S$, the spherical projection of $S\backslash\{\pm s_0\}$ to the hyperplane orthogonal to $\pm s_0$ is isometric to $S\backslash\{\pm s_0\}$. In probabilistic terms, this means that we characterize all zero-mean Gaussian processes $Z=(Z(t))_{t\in T}$ with the property that for every $s_0\in T$ the conditional distribution of $(Z(t))_{t\in T}$ given that $Z(s_0)=0$ coincides with the distribution of $(\varphi(t; s_0) Z(t))_{t\in T}$ for some function $\varphi(t;s_0)$. A basic example of such a process is the stationary zero-mean Gaussian process $(X(t))_{t\in\mathbb R}$ with covariance function $\mathbb E [X(s) X(t)] = 1/\cosh (t-s)$. We show that, in general, the process $Z$ can be decomposed into a union of mutually independent processes of two types: (i) processes of the form $(a(t) X(\psi(t)))_{t\in T}$, with $a: T\to \mathbb R$, $\psi: T\to \mathbb R$, and (ii) certain exceptional Gaussian processes defined on four-point index sets. The above problem is reduced to the classification of metric spaces in which, in every triangle, the largest side equals the sum of the remaining two sides.
An infinite-dimensional helix invariant under spherical projections
When bacteria are grown on a mixture of two growth-limiting substrates, they exhibit a rich spectrum of substrate consumption patterns including diauxic growth, simultaneous consumption, and bistable growth. In previous work, we showed that a minimal model accounting only for enzyme induction and dilution captures all the substrate consumption patterns. Here, we construct the bifurcation diagram of the minimal model. The bifurcation diagram explains several general properties of mixed-substrate growth. (1) In almost all cases of diauxic growth, the "preferred" substrate is the one that, by itself, supports a higher specific growth rate. In the literature, this property is often attributed to optimality of regulatory mechanisms. Here, we show that the minimal model, which contains only induction, displays the property under fairly general conditions. This suggests that the higher growth rate of the preferred substrate is an intrinsic property of the induction and dilution kinetics. (2) The model explains the phenotypes of various mutants containing lesions in the regions encoding for the operator, repressor, and peripheral enzymes. A particularly striking phenotype is the "reversal of the diauxie" in which the wild-type and mutant strains consume the very same two substrates in opposite order. This phenotype is difficult to explain in terms of molecular mechanisms, but it turns out to be a natural consequence of the model. We show furthermore that the model is robust. The key property of the model, namely, the competitive dynamics of the enzymes, is preserved even if the model is modified to account for various regulatory mechanisms. Finally, the model has important implications for size regulation in development, since it suggests that protein dilution is one mechanism for coupling patterning and growth.
Bacterial gene regulation in diauxic and nondiauxic growth
Motivated by studies of critical phenomena in the gravitational collapse of vacuum gravitational waves we compare, at the linear level, two common approaches to constructing gravitational-wave initial data. Specifically, we construct analytical, linear Brill wave initial data and compare these with Teukolsky waves in an attempt to understand the different numerical behavior observed in dynamical (nonlinear) evolutions of these two different sets of data. In general, the Brill waves indeed feature higher multipole moments than the quadrupolar Teukolsky waves, which might have provided an explanation for the differences observed in the dynamical evolution of the two types of waves. However, we also find that, for a common choice of the Brill-wave seed function, all higher-order moments vanish identically, rendering the (linear) Brill initial data surprisingly similar to the Teukolsky data for a similarly common choice of its seed function.
Comparison of linear Brill and Teukolsky waves
An accurate fitting formula is reported for the two-point correlation function of dark matter halos in hierarchical clustering models. It is valid in the linear clustering regime, and its accuracy is about 10% in the correlation amplitude for halos with mass M greater than 1/100 -- 1/1000 of the characteristic non-linear mass M_c. The result is based on a careful analysis of a large set of scale-free simulations with 17 million particles. The fitting formula has a weak explicit dependence on the index n of the initial power spectrum, but can equally well be applied to cold dark matter (CDM) cosmological models if the effective index n_{eff} of the CDM power spectrum at the scale of the halo mass replaces the index n. The formula agrees with the analytical formula of Mo & White (MW96) for massive halos with M>M_c, but the MW96 formula significantly underpredicts the correlation for less massive halos. The difference between the fitting and the analytical formulae amounts to a factor >~ 2 in the correlation amplitude for M=0.01 M_c. One of the most interesting applications of this fitting formula would be the clustering of galaxies, since the majority of halos hosting galaxies satisfy M<< M_c.
Accurate fitting formula for the two-point correlation function of the dark matter halos
The optimal stopping problem for the risk process with interest rates and with claims covered immediately is considered. An insurance company receives premiums and pays out claims which have occurred according to a renewal process and which have been recognized by it. The capital of the company is invested at interest rate $\alpha\in\mathbb{R}^{+}$, and the size of claims increases at rate $\beta\in\mathbb{R}^{+}$ according to an inflation process. The immediate payment of claims decreases the company's investment at rate $\alpha_1$. The aim is to find the stopping time which maximizes the capital of the company. We improve on the known models by taking into account a different scheme of claims payment and the possibility of rejection of a request by the insurance company. This leads to an essentially new risk process, and the solution of the optimal stopping problem is different.
Optimal stopping of a risk process when claims are covered immediately
We propose a research initiative to explore and evaluate end-user technology, infrastructure, business imperatives, and regulatory policy to support the privacy, dignity, and market power of individual persons in the context of the emerging digital economy. Our work shall take a "system-level" approach to the design of technology and policy, considering the outcomes associated with the implementation and deployment of systems consisting of operational infrastructure, policies, and protocols for humans and computers alike. We seek to define and evaluate a set of approaches to the design and implementation of systems whose features specifically support the rights and market power of individual persons and local organisations, for the explicit goal of supporting truly consensual trust relationships and empowering local communities and organisations.
Decentralised Trust for the Digital Economy
In this letter, we analytically investigate the sensitivity of a stability index to its dependent variables in general power systems. First, starting from a small-signal model, the stability index is defined as the solution to a semidefinite program (SDP) based on the related Lyapunov equation. When the system is stable, the stability index also characterizes the convergence rate of the system after disturbances. Then, by leveraging the duality of the SDP, we deduce an analytical formula for the sensitivity of the stability index to any entry of the system Jacobian matrix in terms of the SDP primal and dual variables. Unlike the traditional numerical perturbation method, the proposed sensitivity evaluation method is more accurate with a much lower computational burden. This letter uses a modified microgrid for comparative case studies. The results reveal significant improvements in the accuracy and computational efficiency of stability sensitivity evaluation.
An Analytical Formula for Stability Sensitivity Using SDP Dual
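To make the role of SDP primal and dual variables concrete, here is a generic Lyapunov-type LMI solved with CVXPY, from which the dual variable of the Lyapunov constraint can be read off. This is only an illustrative stand-in under assumed data (the Jacobian A and the trace objective are placeholders), not the letter's exact stability-index SDP or its sensitivity formula.

```python
import numpy as np
import cvxpy as cp

# Illustrative small-signal Jacobian (must be Hurwitz for feasibility).
A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])
n = A.shape[0]

# Lyapunov-type LMI: find P >= I with A^T P + P A <= -I.
P = cp.Variable((n, n), symmetric=True)
constraints = [P >> np.eye(n),
               A.T @ P + P @ A << -np.eye(n)]
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()

print("optimal value:", prob.value)
# Dual variable of the Lyapunov constraint; in the letter, the primal and
# dual variables of its (different) SDP yield analytical sensitivities.
print("dual of Lyapunov LMI:\n", constraints[1].dual_value)
```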
Using radial-velocity data from the Habitable-zone Planet Finder, we have measured the mass of the Neptune-sized planet K2-25b, as well as the obliquity of its M4.5-dwarf host star in the 600-800 Myr Hyades cluster. This is one of the youngest planetary systems for which both of these quantities have been measured, and one of the very few M dwarfs with a measured obliquity. Based on a joint analysis of the radial velocity data, time-series photometry from the K2 mission, and new transit light curves obtained with diffuser-assisted photometry, the planet's radius and mass are $3.44\pm 0.12 \mathrm{R_\oplus}$ and $24.5_{-5.2}^{+5.7} \mathrm{M_\oplus}$. These properties are compatible with a rocky core enshrouded by a thin hydrogen-helium atmosphere (5% by mass). We measure an orbital eccentricity of $e=0.43 \pm 0.05$. The sky-projected stellar obliquity is $\lambda=3 \pm 16^{\circ}$, compatible with spin-orbit alignment, in contrast to other "hot Neptunes" that have been studied around older stars.
The Habitable-zone Planet Finder Reveals A High Mass and a Low Obliquity for the Young Neptune K2-25b
High-level vibrational calculations have been used to investigate anharmonicity in a wide variety of materials using density-functional-theory (DFT) methods. We have developed a new and efficient approach for describing strongly anharmonic systems using a vibrational self-consistent-field (VSCF) method. By far the most computationally expensive part of the calculations is the mapping of an accurate Born-Oppenheimer (BO) energy surface within the region of interest. Here we present an improved method which reduces the computational cost of the mapping. In this approach we use data from a set of energy calculations for different vibrational distortions of the materials, together with the corresponding forces on the atoms. Results using both energies and forces are presented for the test cases of the hydrogen molecule, solid hydrogen under high pressure (including mapping of two-dimensional subspaces of the BO surface), and the bcc phases of the metals Li and Zr. The use of force data speeds up the anharmonic calculations by up to 40%.
Using forces to accelerate first-principles anharmonic vibrational calculations
Alzheimer's Disease (AD) is a neurodegenerative disorder that lacks effective treatment options. Anti-amyloid beta (ABeta) antibodies are the leading drug candidates to treat AD, but the results of clinical trials have been disappointing. Introducing rational mutations into anti-ABeta antibodies to increase their effectiveness is a way forward, but the path to take is unclear. In this study, we demonstrate the use of computational fragment-based docking and MMPBSA binding free energy calculations in the analysis of anti-ABeta antibodies for rational drug design efforts. Our fragment-based docking method successfully predicted the emergence of the common EFRH epitope, MD simulations coupled with MMPBSA binding free energy calculations were used to analyze scenarios described in prior studies, and we introduced rational mutations into PFA1 to improve its calculated binding affinity towards the pE3-ABeta3-8 form of ABeta. Two out of four proposed mutations stabilized binding. Our study demonstrates that a computational approach may lead to an improved drug candidate for AD in the future.
Computational Analysis for the Rational Design of Anti-Amyloid Beta (ABeta) Antibodies
We introduce an efficient scheme for the molecular dynamics of electronic systems by means of quantum Monte Carlo. The evaluation of the (Born-Oppenheimer) forces acting on the ionic positions is achieved by two main ingredients: i) the forces are computed with finite and small variance, which allows the simulation of a large number of atoms; ii) the statistical noise corresponding to the forces is used to drive the dynamics at finite temperature by means of an appropriate friction matrix. A first application to the high-density phase of Hydrogen is given, supporting the stability of the liquid phase at ~300 GPa and ~400 K.
Ab-initio molecular dynamics for high-pressure liquid Hydrogen
In this paper we exhibit the notion of (uniformly) good sections of arithmetic fundamental groups. We introduce and investigate the problem of cuspidalisation of sections of arithmetic fundamental groups, whose ultimate aim is to reduce the solution of the Grothendieck anabelian section conjecture to the solution of its birational version. We show that (uniformly) good sections of arithmetic fundamental groups of smooth, proper, and geometrically connected hyperbolic curves over slim (and regular) fields can be lifted to sections of cuspidally abelian absolute Galois groups. As an application we prove a (pro-p) p-adic version of the Grothendieck anabelian section conjecture for hyperbolic curves, under the assumption that the existence of sections of arithmetic fundamental groups, and cuspidally abelian Galois groups, implies the existence of tame points. We also prove that the existence of uniformly good sections of arithmetic fundamental groups for hyperbolic curves over number fields implies the existence of divisors of degree 1, under a finiteness condition on the Tate-Shafarevich group of the jacobian of the curve.
Good Sections of Arithmetic Fundamental Groups
Multi-robot teams can achieve more dexterous, complex and heavier-payload tasks than a single robot, yet effective collaboration is required. Multi-robot collaboration is extremely challenging due to the different kinematic and dynamic capabilities of the robots, the limited communication between them, and the uncertainty of the system parameters. In this paper, a Decentralized Ability-Aware Adaptive Control is proposed to address these challenges based on two key features. Firstly, the common manipulation task is represented by the proposed nominal task ellipsoid, which is used to maximize each robot's force capability online via optimizing its configuration. Secondly, a decentralized adaptive controller is designed to be Lyapunov stable in spite of heterogeneous actuation constraints of the robots and uncertain physical parameters of the object and environment. In the proposed framework, decentralized coordination and load distribution between the robots are achieved without communication, while only the control deficiency is broadcast if any of the robots reaches its force limits. In this case, the object reference trajectory is modified in a decentralized manner to guarantee stable interaction. Finally, we perform several numerical and physical simulations to analyse and verify the proposed method with heterogeneous multi-robot teams in collaborative manipulation tasks.
Decentralized Ability-Aware Adaptive Control for Multi-robot Collaborative Manipulation
We consider a wireless device-to-device (D2D) network where $n$ nodes are uniformly distributed at random over the network area. We let each node with storage capacity $M$ cache files from a library of size $m \geq M$. Each node in the network requests a file from the library independently at random, according to a popularity distribution, and is served by other nodes having the requested file in their local cache via (possibly) multihop transmissions. Under the classical "protocol model" of wireless networks, we characterize the optimal per-node capacity scaling law for a broad class of heavy-tailed popularity distributions including Zipf distributions with exponent less than one. In the parameter regimes of interest, we show that a decentralized random caching strategy with uniform probability over the library yields the optimal per-node capacity scaling of $\Theta(\sqrt{M/m})$, which is constant with $n$, thus yielding throughput scalability with the network size. Furthermore, the multihop capacity scaling can be significantly better than for the case of single-hop caching networks, for which the per-node capacity is $\Theta(M/m)$. The multihop capacity scaling law can be further improved for a Zipf distribution with exponent larger than some threshold $> 1$, by using a decentralized random caching uniformly across a subset of most popular files in the library. Namely, ignoring a subset of less popular files (i.e., effectively reducing the size of the library) can significantly improve the throughput scaling while guaranteeing that all nodes will be served with high probability as $n$ increases.
Wireless Multihop Device-to-Device Caching Networks
Let M be a compact simply connected hyperk\"ahler (or holomorphically symplectic) manifold, \dim H^2(M)=n. Assume that M is not a product of hyperkaehler manifolds. We prove that the Lie algebra so(n-3,3) acts by automorphisms on the cohomology ring H^*(M). Under this action, the space H^2(M) is isomorphic to the fundamental representation of so(n-3,3). Let A^r be the subring of H^*(M) generated by H^2(M). We construct an action of the Lie algebra so(n-2,4) on the space A, which preserves A^r. The space A^r is an irreducible representation of so(n-2,4). This makes it possible to compute the ring A^r explicitly.
Cohomology of compact hyperkaehler manifolds
Communication and privacy are two critical concerns in distributed learning. Many existing works treat these concerns separately. In this work, we argue that a natural connection exists between methods for communication reduction and privacy preservation in the context of distributed machine learning. In particular, we prove that Count Sketch, a simple method for data stream summarization, has inherent differential privacy properties. Using these derived privacy guarantees, we propose a novel sketch-based framework (DiffSketch) for distributed learning, where we compress the transmitted messages via sketches to simultaneously achieve communication efficiency and provable privacy benefits. Our evaluation demonstrates that DiffSketch can provide strong differential privacy guarantees (e.g., $\varepsilon$= 1) and reduce communication by 20-50x with only marginal decreases in accuracy. Compared to baselines that treat privacy and communication separately, DiffSketch improves absolute test accuracy by 5%-50% while offering the same privacy guarantees and communication compression.
Privacy for Free: Communication-Efficient Learning with Differential Privacy Using Sketches
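Since the abstract above builds on Count Sketch, the following is a minimal, self-contained Count Sketch in Python for reference; the depth/width, the hash construction, and the example keys are illustrative assumptions, and this is not the DiffSketch framework itself.

```python
import numpy as np

class CountSketch:
    """Minimal Count Sketch: d independent rows, each of width w.

    Each key is hashed to one bucket per row and multiplied by a random
    sign; point queries take the median of the d per-row estimates.
    """
    def __init__(self, depth=5, width=256, seed=0):
        rng = np.random.default_rng(seed)
        self.depth, self.width = depth, width
        self.table = np.zeros((depth, width))
        # Per-row salts for the hash (illustrative construction; Python's
        # built-in hash is salted per process, which is fine for a demo).
        self.salts = rng.integers(1, 2**31 - 1, size=depth)

    def _bucket_sign(self, row, key):
        h = hash((int(self.salts[row]), key))
        return h % self.width, 1.0 if (h >> 31) & 1 else -1.0

    def update(self, key, value=1.0):
        for r in range(self.depth):
            b, s = self._bucket_sign(r, key)
            self.table[r, b] += s * value

    def query(self, key):
        estimates = []
        for r in range(self.depth):
            b, s = self._bucket_sign(r, key)
            estimates.append(s * self.table[r, b])
        return float(np.median(estimates))

cs = CountSketch()
for k, v in [("a", 10), ("b", 3), ("a", 5)]:
    cs.update(k, v)
print(cs.query("a"))  # close to 15 with high probability
```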
The latest results from the H1 and ZEUS collaborations are presented on leptoquark production and rare Standard Model processes. The data were taken in the period 1994-2005, at a centre of mass energy of up to 319 GeV. Intriguing events containing isolated leptons and missing transverse momentum, as well as multi-lepton events, are observed by H1 in regions of phase space where the SM prediction is low. Interpretations of the observed excesses in terms of physics Beyond the Standard Model are also discussed.
Searches for New Physics in ep Scattering at HERA
In this paper, we consider the matrix completion problem when the observations are one-bit measurements of some underlying matrix M, and in particular the observed samples consist only of ones and no zeros. This problem is motivated by modern applications such as recommender systems and social networks where only "likes" or "friendships" are observed. The problem of learning from only positive and unlabeled examples, called PU (positive-unlabeled) learning, has been studied in the context of binary classification. We consider the PU matrix completion problem, where an underlying real-valued matrix M is first quantized to generate one-bit observations and then a subset of positive entries is revealed. Under the assumption that M has bounded nuclear norm, we provide recovery guarantees for two different observation models: 1) M parameterizes a distribution that generates a binary matrix, 2) M is thresholded to obtain a binary matrix. For the first case, we propose a "shifted matrix completion" method that recovers M using only a subset of indices corresponding to ones, while for the second case, we propose a "biased matrix completion" method that recovers the (thresholded) binary matrix. Both methods yield strong error bounds --- if M is n by n, the Frobenius error is bounded as O(1/((1-rho)n)), where 1-rho denotes the fraction of ones observed. This implies a sample complexity of O(n\log n) ones to achieve a small error, when M is dense and n is large. We extend our methods and guarantees to the inductive matrix completion problem, where rows and columns of M have associated features. We provide efficient and scalable optimization procedures for both the methods and demonstrate the effectiveness of the proposed methods for link prediction (on real-world networks consisting of over 2 million nodes and 90 million links) and semi-supervised clustering tasks.
PU Learning for Matrix Completion
This paper presents new mappings of 2D and 3D geometrical transformations onto the MorphoSys (M1) reconfigurable computing (RC) prototype [2]. This improves the system performance as a graphics accelerator [1-5]. Three algorithms are mapped, including two for calculating 2D transformations and one for 3D transformations. The results presented indicate improved performance. The speedup achieved is explained, as well as the advantages of the mapping of the application. The transformations were run on an 8x8 RC array, and numerical examples were simulated to validate our results using the MorphoSys mULATE program, which simulates MorphoSys operations. Comparisons with other systems are presented, namely with Intel processing systems and the Celoxica RC-1000 FPGA.
2D and 3D Computer Graphics Algorithms under MorphoSys
Monte Carlo (MC) sampling algorithms are an extremely widely used technique to estimate expectations of functions f(x), especially in high dimensions. Control variates are a very powerful technique to reduce the error of such estimates, but in their conventional form rely on having an accurate approximation of f, a priori. Stacked Monte Carlo (StackMC) is a recently introduced technique designed to overcome this limitation by fitting a control variate to the data samples themselves. Done naively, forming a control variate to the data would result in overfitting, typically worsening the MC algorithm's performance. StackMC uses in-sample / out-sample techniques to remove this overfitting. Crucially, it is a post-processing technique, requiring no additional samples, and can be applied to data generated by any MC estimator. Our preliminary experiments demonstrated that StackMC improved the estimates of expectations when it was used to post-process samples produced by a "simple sampling" MC estimator. Here we substantially extend this earlier work. We provide an in-depth analysis of the StackMC algorithm, which we use to construct an improved version of the original algorithm, with lower estimation error. We then perform experiments with StackMC on several additional kinds of MC estimators, demonstrating improved performance when the samples are generated via importance sampling, Latin-hypercube sampling and quasi-Monte Carlo sampling. We also show how to extend StackMC to combine multiple fitting functions, and how to apply it to discrete input spaces x.
Reducing the error of Monte Carlo Algorithms by Learning Control Variates
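For readers unfamiliar with control variates, which StackMC fits from the samples themselves, here is a plain Monte Carlo control-variate sketch with a fixed control function; the integrand, the control, and the sample size are illustrative assumptions, and this is not the StackMC in-sample/out-sample procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
x = rng.uniform(0.0, 1.0, N)

f = np.exp(x)            # integrand with unknown mean (true value e - 1)
g = x                    # control variate with known mean 1/2
mu_g = 0.5

# Near-optimal coefficient beta = Cov(f, g) / Var(g), estimated from the samples.
beta = np.cov(f, g)[0, 1] / np.var(g, ddof=1)

plain = f.mean()
controlled = (f - beta * (g - mu_g)).mean()

print("plain MC estimate      :", plain)
print("control-variate estimate:", controlled)
print("true value             :", np.e - 1)
```

The controlled estimator typically has a much smaller variance than the plain average whenever f and g are strongly correlated, which is the effect StackMC obtains by learning g from the data while guarding against overfitting.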
We explain the recent diphoton excesses around $750$ GeV reported by both ATLAS and CMS as a singlet scalar $\Phi$ which couples to SM gluons and neutral gauge bosons only through higher-dimensional operators. A natural explanation is that $\Phi$ is a pseudo-Nambu-Goldstone boson (pNGB) which receives parity violation through an anomaly if there exists a hidden strong dynamics. The singlet and other light pNGBs will decay into two SM gauge bosons, and some may even appear as meta-stable coloured states which can be probed in the future. By accurately measuring their relative decays and the total production rate in the future, we will learn about the underlying strong-dynamics parameters. The lightest baryon in this confining theory could serve as a viable dark matter candidate.
A hidden confining world on the 750 GeV diphoton excess
We investigate the temporal evolution of an axisymmetric magnetosphere around a rapidly rotating, stellar-mass black hole, applying a two-dimensional particle-in-cell simulation scheme. Adopting homogeneous pair production and assuming that the mass accretion rate is much less than the Eddington limit, we find that the black hole's rotational energy is preferentially extracted from the middle latitudes, and that this outward energy flux exhibits an enhancement that lasts approximately 160 dynamical time scales. It is demonstrated that the magnetohydrodynamic approximations cannot be justified in such a magnetically-dominated magnetosphere, because Ohm's law completely breaks down, and because the charge-separated electron-positron plasmas are highly non-neutral. An implication is given regarding the collimation of relativistic jets.
Two-dimensional Particle-in-Cell simulations of axisymmetric black hole magnetospheres
We show that naturally associated to a SIC (symmetric informationally complete positive operator valued measure or SIC-POVM) in dimension d there are a number of higher dimensional structures: specifically a projector and a complex Hadamard matrix in dimension d squared and a pair of ETFs (equiangular tight frames) in dimensions d(d-1)/2, d(d+1)/2. We also show that a WH (Weyl Heisenberg covariant) SIC in odd dimension d is naturally associated to a pair of symmetric tight fusion frames in dimension d. We deduce two relaxations of the WH SIC existence problem. We also find a reformulation of the problem in which the number of equations is fewer than the number of variables. Finally, we show that in at least four cases the structures associated to a SIC lie on continuous manifolds of such structures. In two of these cases the manifolds are non-linear. Restricted defect calculations are consistent with this being true for the structures associated to every known SIC with d between 3 and 16, suggesting it may be true for all d greater than 2.
Tight Frames, Hadamard Matrices and Zauner's Conjecture
In this paper, we study the logarithmic growth (log-growth) filtration, a mysterious invariant found by B. Dwork, for $(\varphi,\nabla)$-modules over the bounded Robba ring. The main result is a proof of a conjecture proposed by B. Chiarellotto and N. Tsuzuki on a comparison between the log-growth filtration and Frobenius slope filtration. One of the ingredients of the proof is a new criterion for pure of bounded quotient, which is a notion introduced by Chiarellotto and Tsuzuki to formulate their conjecture. We also give several applications to log-growth Newton polygons, including a conjecture of Dwork on the semicontinuity, and an analogue of a theorem due to V. Drinfeld and K. Kedlaya on Frobenius Newton polygons for indecomposable convergent $F$-isocrystals.
Logarithmic growth filtrations for $(\varphi,\nabla)$-modules over the bounded Robba ring
Scalable architectures characterized by quantum bits (qubits) with low error rates are essential to the development of a practical quantum computer. In the superconducting quantum computing implementation, understanding and minimizing materials losses is crucial to the improvement of qubit performance. A new material that has recently received particular attention is indium, a low-temperature superconductor that can be used to bond pairs of chips containing standard aluminum-based qubit circuitry. In this work, we characterize microwave loss in indium and aluminum/indium thin films on silicon substrates by measuring superconducting coplanar waveguide resonators and estimating the main loss parameters at powers down to the sub-photon regime and at temperatures between 10 and 450 mK. We compare films deposited by thermal evaporation, sputtering, and molecular beam epitaxy. We study the effects of heating in vacuum and ambient atmospheric pressure as well as the effects of pre-deposition wafer cleaning using hydrofluoric acid. The microwave measurements are supported by thin film metrology including secondary-ion mass spectrometry. For thermally evaporated and sputtered films, we find that two-level states (TLSs) are the dominant loss mechanism at low photon number and temperature. Thermally evaporated indium is determined to have a TLS loss tangent due to indium oxide of ~5x10^-5. The molecular beam epitaxial films show evidence of formation of a substantial indium-silicon eutectic layer, which leads to a drastic degradation in resonator performance.
Thin film metrology and microwave loss characterization of indium and aluminum/indium superconducting planar resonators
Let $d(n)$ denote the number of divisors of $n$. In this paper, we study the average value of $d(a(p))$, where $p$ is a prime and $a(p)$ is the $p$-th Fourier coefficient of a normalized Hecke eigenform of weight $k \ge 2$ for $\Gamma_0(N)$ having rational integer Fourier coefficients.
Divisors of Fourier coefficients of modular forms
The cross section for anti-deuteron photoproduction is measured at HERA at a mean centre-of-mass energy of W_{\gamma p} = 200 GeV in the range 0.2 < p_T/M < 0.7 and |y| < 0.4, where M, p_T and y are the mass, transverse momentum and rapidity in the laboratory frame of the anti-deuteron, respectively. The numbers of anti-deuterons per event are found to be similar in photoproduction to those in central proton-proton collisions at the CERN ISR but much lower than those in central Au-Au collisions at RHIC. The coalescence parameter B_2, which characterizes the likelihood of anti-deuteron production, is measured in photoproduction to be 0.010 \pm 0.002 \pm 0.001, which is much higher than in Au-Au collisions at a similar nucleon-nucleon centre-of-mass energy. No significant production of particles heavier than deuterons is observed and upper limits are set on the photoproduction cross sections for such particles.
Measurement of Anti-Deuteron Photoproduction and a Search for Heavy Stable Charged Particles at HERA
Although point caustics harbour a larger potential for measuring the brightness profile of stars during the course of a microlensing event than (line-shaped) fold caustics, the effect of lens binarity significantly limits the achievable accuracy. Therefore, corresponding close-impact events make a less favourable case for limb-darkening measurements than those events that involve fold-caustic passages, from which precision measurements can easily and routinely be obtained. Examples involving later Bulge giants indicate that a ~ 10 % misestimate on the limb-darkening coefficient can result with the assumption of a single-lens model that looks acceptable, unless the precision of the photometric measurements is pushed below the 1 %-level even for these favourable targets. In contrast, measurement uncertainties on the proper motion between lens and source are dominated by the assessment of the angular radius of the source star and remain practically unaffected by lens binarity. Rather than judging the goodness-of-fit by means of a chi^2 test only, run tests provide useful additional information that can lead to the rejection of models and the detection of lens binarity in close-impact microlensing events.
Lens binarity vs limb darkening in close-impact galactic microlensing events
Estimating causal effects has become an integral part of most applied fields. Solving these modern causal questions requires tackling violations of many classical causal assumptions. In this work we consider the violation of the classical no-interference assumption, meaning that the treatment of one individual might affect the outcomes of another. To make interference tractable, we consider a known network that describes how interference may travel. However, unlike previous work in this area, the radius (and intensity) of the interference experienced by a unit is unknown and can depend on different sub-networks of those treated and untreated that are connected to this unit. We study estimators for the average direct treatment effect on the treated in such a setting. The proposed estimator builds upon a Lepski-like procedure that searches over the possible relevant radii and treatment assignment patterns. In contrast to previous work, the proposed procedure aims to approximate the relevant network interference patterns. We establish oracle inequalities and corresponding adaptive rates for the estimation of the interference function. We leverage such estimates to propose and analyze two estimators for the average direct treatment effect on the treated. We address several challenges stemming from the data-driven creation of the patterns (i.e. feature engineering) and the network dependence. In addition to rates of convergence, under mild regularity conditions, we show that one of the proposed estimators is asymptotically normal and unbiased.
Neighborhood Adaptive Estimators for Causal Inference under Network Interference
Synthetic likelihood (SL) is a strategy for parameter inference when the likelihood function is analytically or computationally intractable. In SL, the likelihood function of the data is replaced by a multivariate Gaussian density over summary statistics of the data. SL requires simulation of many replicate datasets at every parameter value considered by a sampling algorithm, such as Markov chain Monte Carlo (MCMC), making the method computationally intensive. We propose two strategies to alleviate the computational burden. First, we introduce an algorithm producing a proposal distribution that is sequentially tuned and made conditional to data, so that it rapidly \textit{guides} the proposed parameters towards high posterior density regions. In our experiments, a small number of iterations of our algorithm is enough to rapidly locate high density regions, which we use to initialize one or several chains that make use of off-the-shelf adaptive MCMC methods. Our "guided" approach can also be potentially used with MCMC samplers for approximate Bayesian computation (ABC). Second, we exploit strategies borrowed from the correlated pseudo-marginal MCMC literature to improve the chains' mixing in an SL framework. Moreover, our methods enable inference for challenging case studies when the posterior is multimodal and when the chain is initialised in low posterior probability regions of the parameter space, where standard samplers failed. To illustrate the advantages stemming from our framework we consider five benchmark examples, including estimation of parameters for a cosmological model and a stochastic model with highly non-Gaussian summary statistics.
Sequentially guided MCMC proposals for synthetic likelihoods and correlated synthetic likelihoods
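As background for the abstract above, the following sketch evaluates a Gaussian synthetic log-likelihood at a single parameter value; the toy simulator, summary statistics, and replicate count are illustrative assumptions, and the guided-proposal and correlated-synthetic-likelihood machinery of the paper is not shown.

```python
import numpy as np

def summaries(x):
    # Illustrative summary statistics of a simulated dataset.
    return np.array([x.mean(), x.std(ddof=1)])

def simulate(theta, n=100, rng=None):
    # Toy simulator: Gaussian data with mean theta and unit variance.
    return rng.normal(theta, 1.0, n)

def synthetic_loglik(theta, s_obs, n_rep=200, rng=None):
    """Gaussian synthetic log-likelihood of observed summaries s_obs."""
    S = np.array([summaries(simulate(theta, rng=rng)) for _ in range(n_rep)])
    mu, cov = S.mean(axis=0), np.cov(S.T)
    diff = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet
                   + len(s_obs) * np.log(2 * np.pi))

rng = np.random.default_rng(2)
s_obs = summaries(rng.normal(0.3, 1.0, 100))   # stand-in for observed data
for theta in (0.0, 0.3, 1.0):
    print(theta, synthetic_loglik(theta, s_obs, rng=rng))
```

An MCMC sampler would call a function like synthetic_loglik at every proposed parameter, which is exactly the cost the paper's guided proposals and correlated strategies aim to reduce.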
Consider a systematic linear code where some (local) parity symbols depend on few prescribed symbols, while other (heavy) parity symbols may depend on all data symbols. Local parities allow any single erased symbol to be recovered quickly, while heavy parities provide tolerance to a large number of simultaneous erasures. A code as above is maximally recoverable if it corrects all erasure patterns which are information-theoretically recoverable given the code topology. In this paper we present explicit families of maximally recoverable codes with locality. We also initiate the study of the trade-off between maximal recoverability and alphabet size.
Explicit Maximally Recoverable Codes with Locality
In a bilayer of a ferromagnetic and a non-magnetic metal, spin pumping can be generated by a thermal gradient. The spin current generation depends on the spin mixing conductance of the interface and the magnetic properties of the ferromagnetic layer. Due to its low intrinsic damping, rare earth iron garnet is often used for the ferromagnetic layer in spin Seebeck experiments. However, it is actually a ferrimagnet with antiferromagnetically coupled magnetic lattices, and the contribution of the rare earth magnetic lattice of rare earth iron garnet to thermal spin pumping is not well understood. Here we focus on the effect of the magnetic properties of the lanthanide and show that the orbital angular momentum of rare earth iron garnet enhances the thermal spin current generation of lanthanide-substituted yttrium iron garnet.
Enhancement of thermal spin pumping by orbital angular momentum of rare earth iron garnet
The quantum modes of a new family of relativistic oscillators are studied by using supersymmetry and shape invariance in a version suitable for (1+1) dimensional relativistic systems. In this way one obtains the Rodrigues formulas of the normalized energy eigenfunctions of the discrete spectra and the corresponding rising and lowering operators. PACS: 04.62.+v, 03.65.Ge
The quantum modes of the (1+1)-dimensional oscillators in general relativity
In this work we have derived expressions for the mean free path (MFP) and emissivity of neutrinos by incorporating non-Fermi liquid (NFL) corrections up to next-to-leading order (NLO). We have shown how such corrections affect the cooling of a neutron star with a quark matter core.
Non-Fermi liquid correction to the neutrino mean free path and emissivity in neutron star beyond the leading order
A fully realistic unified theory is constructed, with SU(5) gauge symmetry and supersymmetry both broken by boundary conditions in a fifth dimension. Despite the local explicit breaking of SU(5) at a boundary of the dimension, the large size of the extra dimension allows precise predictions for gauge coupling unification, alpha_s(M_Z) = 0.118 \pm 0.003, and for Yukawa coupling unification, m_b(M_Z) = 3.3 \pm 0.2 GeV. A complete understanding of the MSSM Higgs sector is given; with explanations for why the Higgs triplets are heavy, why the Higgs doublets are protected from a large tree-level mass, and why the mu and B parameters are naturally generated to be of order the SUSY breaking scale. All sources of d=4,5 proton decay are forbidden, while a new origin for d=6 proton decay is found to be important. Several aspects of flavor follow from an essentially unique choice of matter location in the fifth dimension: only the third generation has an SU(5) mass relation, and the lighter two generations have small mixings with the heaviest generation. The entire superpartner spectrum is predicted in terms of only two free parameters. The squark and slepton masses are determined by their location in the fifth dimension, allowing a significant experimental test of the detailed structure of the extra dimension. Lepton flavor violation is found to be generically large in higher dimensional unified theories with high mediation scales of SUSY breaking. In our theory this forces a common location for all three neutrinos, predicting large neutrino mixing angles. Rates for mu -> e gamma, mu -> e e e, mu -> e conversion and tau -> mu gamma are larger in our theory than in conventional 4D supersymmetric GUTs. Proposed experiments probing mu -> e transitions will probe the entire interesting parameter space of our theory.
A Complete Theory of Grand Unification in Five Dimensions
We evaluate the spin-orbit and spin-spin interaction between two fermions in strongly coupled gauge theories in their Coulomb phase. We use the quasi-instantaneous character of Coulomb's law at strong coupling to resum a class of ladder diagrams. For ${\cal N}=4$ SYM we derive both weak and strong coupling limits of the spin-orbit and spin-spin interactions, and find that in the latter case these interactions are subleading corrections and do not seriously affect the deeply bound Coulomb states with large angular momentum, pointed out in our previous paper. The results are important for understanding the regime of intermediate coupling, which is the case for QCD somewhat above the chiral transition temperature.
Spin-Spin and Spin-Orbit Interactions in Strongly Coupled Gauge Theories
Metastasis, the spread of cancer cells from a primary tumor to secondary location(s) in the human organism, is the ultimate cause of death for the majority of cancer patients. That is why it is crucial to understand the evolution of metastases in order to successfully combat the disease. We consider a metastasized cancer cell population after medical treatment (e.g. chemotherapy). Arriving in a different environment, the cancer cells may change their lifespan and reproduction, and thus they may proliferate into different types. If the treatment is effective, in the context of branching processes this means that the reproduction of cancer cells is such that the mean offspring of each cell is less than one. However, it is possible for mutations to occur during the cell division cycle. These mutations can produce a new cancer cell type, which is resistant to the treatment. Cancer cells of this new type may lead to the rise of a non-extinction branching process. The above scenario leads us to the choice of a reducible multi-type age-dependent branching process as a relevant framework for studying the asymptotic behavior of such complex structures. Our previous theoretical results are related to the asymptotic behavior of the waiting time until the first occurrence of a mutant starting a non-extinction process and to the modified hazard function as a measure of immediate recurrence of the cancer disease. In the present paper these asymptotic results are used for developing numerical schemes and algorithms, implemented in Python via the NumPy package, for approximate calculation of the corresponding quantities. In conclusion, our conjecture is that this methodology can be advantageous in revealing the role of the lifespan distribution of the cancer cells in the context of cancer disease evolution and of other complex cell population systems in general.
Computational modelling of cancer evolution by multi-type branching processes
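Since the abstract above mentions numerical schemes implemented in Python via NumPy, here is a small illustrative simulation (not the authors' code) of the waiting time until the first resistant mutant in a subcritical age-dependent branching process of treated cells; the exponential lifespans, Poisson offspring numbers, and mutation probability are assumptions made only for this demo.

```python
import heapq
import numpy as np

def first_mutant_time(t_max=60.0, m0=0.8, p_mut=1e-3, seed=3):
    """Waiting time until the first resistant mutant appears.

    Treated cells form a subcritical age-dependent branching process:
    each cell lives an Exp(1) lifespan, then produces Poisson(m0) daughters,
    each of which is independently a resistant mutant with probability
    p_mut.  Returns the birth time of the first mutant, or None if the
    treated population dies out (or t_max is exceeded) before that.
    """
    rng = np.random.default_rng(seed)
    deaths = [rng.exponential(1.0)]            # ancestor's division time
    while deaths:
        t = heapq.heappop(deaths)
        if t > t_max:
            return None
        for _ in range(rng.poisson(m0)):
            if rng.random() < p_mut:
                return t                       # first resistant mutant is born
            heapq.heappush(deaths, t + rng.exponential(1.0))
    return None

# Crude Monte Carlo estimate of P(a mutant ever appears) under these assumptions.
hits = sum(first_mutant_time(seed=s) is not None for s in range(2000))
print("empirical mutation probability:", hits / 2000)
```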
Research over the years has shown that the formation of the Fe$_3$Si phase in FINEMET (Fe-Si-Nb-B-Cu) alloys leads to superior soft magnetic properties. In this work, we use a CALPHAD approach to derive Fe-Si phase diagrams to identify the composition-temperature domain where the Fe$_3$Si phase can be stabilized. Thereafter, we have developed a precipitation model capable of simulating the nucleation and growth of Fe$_3$Si nanocrystals via Langer-Schwartz theory. For optimum magnetic properties, prior work suggests that it is desirable to precipitate Fe$_3$Si nanocrystals with 10-15 nm diameter and with the crystalline volume fraction of about 70 \%. Based on our parameterized model, we simulated the nucleation and growth of Fe$_3$Si nanocrystals by isothermal annealing of Fe$_{72.89}$Si$_{16.21}$B$_{6.90}$Nb$_{3}$Cu$_{1}$ (composition in atomic \%). In numerical experiments, the alloys were annealed at a series of temperatures from 490 to 550 \degree C for two hours to study the effect of holding time on mean radius, volume fraction, size distribution, nucleation rate, number density, and driving force for the growth of Fe$_3$Si nanocrystals. With increasing annealing temperature, the mean radius of Fe$_3$Si nanocrystals increases, while the volume fraction decreases. We have also studied the effect of composition variations on the nucleation and growth of Fe$_3$Si nanocrystals. As Fe content decreases, it is possible to achieve the desired mean radius and volume fraction within one hour holding time. The CALPHAD approach presented here can provide efficient exploration of the nanocrystalline morphology for most FINEMET systems, for cases in which the optimization of one or more material properties or process variables are desired.
Metastable Phase Diagram and Precipitation Kinetics of Magnetic Nanocrystals in FINEMET Alloys
At zero temperature, the 3-state antiferromagnetic Potts model on a square lattice maps exactly onto a point of the 6-vertex model whose long-distance behavior is equivalent to that of a free scalar boson. We point out that at nonzero temperature there are two distinct types of excitation: vortices, which are relevant with renormalization-group eigenvalue 1/2; and non-vortex unsatisfied bonds, which are strictly marginal and serve only to renormalize the stiffness coefficient of the underlying free boson. Together these excitations lead to an unusual form for the corrections to scaling: for example, the correlation length diverges as \beta \equiv J/kT \to \infty according to \xi \sim A e^{2\beta} (1 + b\beta e^{-\beta} + ...), where b is a nonuniversal constant that may nevertheless be determined independently. A similar result holds for the staggered susceptibility. These results are shown to be consistent with the anomalous behavior found in the Monte Carlo simulations of Ferreira and Sokal.
Unusual corrections to scaling in the 3-state Potts antiferromagnet on a square lattice
The forensic science community has increasingly sought quantitative methods for conveying the weight of evidence. Experts from many forensic laboratories summarize their findings in terms of a likelihood ratio. Several proponents of this approach have argued that Bayesian reasoning proves it to be normative. We find this likelihood ratio paradigm to be unsupported by arguments of Bayesian decision theory, which applies only to personal decision making and not to the transfer of information from an expert to a separate decision maker. We further argue that decision theory does not exempt the presentation of a likelihood ratio from uncertainty characterization, which is required to assess the fitness for purpose of any transferred quantity. We propose the concept of a lattice of assumptions leading to an uncertainty pyramid as a framework for assessing the uncertainty in an evaluation of a likelihood ratio. We demonstrate the use of these concepts with illustrative examples regarding the refractive index of glass and automated comparison scores for fingerprints.
Likelihood Ratio as Weight of Forensic Evidence: A Closer Look
Load forecasting is a crucial topic in energy management systems (EMS) due to its vital role in optimizing energy scheduling and enabling more flexible and intelligent power grid systems. As a result, these systems allow power utility companies to respond promptly to demands in the electricity market. Deep learning (DL) models have been commonly employed in load forecasting problems, supported by adaptation mechanisms to cope with the changing pattern of consumption by customers, known as concept drift. Designing change detection methods to identify drifts requires defining a drift magnitude threshold. While the drift magnitude in load forecasting problems can vary significantly over time, the existing literature often assumes a fixed drift magnitude threshold, which should be dynamically adjusted rather than fixed during system evolution. To address this gap, in this paper we propose a dynamic drift-adaptive Long Short-Term Memory (DA-LSTM) framework that can improve the performance of load forecasting models without requiring a drift threshold setting. We integrate several strategies into the framework based on active and passive adaptation approaches. To evaluate DA-LSTM in real-life settings, we thoroughly analyze the proposed framework and deploy it in a real-world problem through a cloud-based environment. Efficiency is evaluated in terms of the prediction performance of each approach and computational cost. The experiments show performance improvements on multiple evaluation metrics achieved by our framework compared to baseline methods from the literature. Finally, we present a trade-off analysis between prediction performance and computational costs.
DA-LSTM: A Dynamic Drift-Adaptive Learning Framework for Interval Load Forecasting with LSTM Networks
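For contrast with the threshold-free adaptation argued for above, the following is a minimal fixed-threshold drift detector (a Page-Hinkley test over a stream of forecast errors) of the kind that requires the manual tuning the paper tries to avoid; the delta and threshold parameters and the synthetic error stream are illustrative assumptions, and this is not part of the DA-LSTM framework.

```python
import numpy as np

class PageHinkley:
    """Simple Page-Hinkley drift detector over a stream of errors."""
    def __init__(self, delta=0.005, threshold=5.0):
        self.delta, self.threshold = delta, threshold
        self.mean, self.cum, self.cum_min, self.n = 0.0, 0.0, 0.0, 0

    def update(self, err):
        self.n += 1
        self.mean += (err - self.mean) / self.n          # running mean
        self.cum += err - self.mean - self.delta         # cumulative deviation
        self.cum_min = min(self.cum_min, self.cum)
        return (self.cum - self.cum_min) > self.threshold  # drift flag

# Synthetic error stream whose mean jumps halfway through.
rng = np.random.default_rng(0)
errors = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(2.0, 1.0, 300)])
ph = PageHinkley()
alarms = [i for i, e in enumerate(errors) if ph.update(e)]
print("first alarm at index:", alarms[0] if alarms else None)
```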
We analyze the finite temperature phase diagram of ultrathin magnetic films by introducing a mean field theory, valid in the low anisotropy regime, i.e., close to the Spin Reorientation Transition. The theoretical results are compared with Monte Carlo simulations carried out on a microscopic Heisenberg model. Connections between the finite temperature behavior and the ground state properties of the system are established. Several properties of the stripe pattern, such as the presence of canted states, the stripe width variation phenomenon and the associated magnetization profiles, are also analyzed.
Finite temperature phase diagram of ultrathin magnetic films without external fields
We use a tunable laser ARPES to study the electronic properties of the prototypical multiband BCS superconductor MgB2. Our data reveal a strong renormalization of the dispersion (kink) at ~65 meV, which is caused by coupling of electrons to the E2g phonon mode. In contrast to cuprates, the 65 meV kink in MgB2 does not change significantly across Tc. More interestingly, we observe strong coupling to a second, lower energy collective mode at binding energy of 10 meV. This excitation vanishes above Tc and is likely a signature of the elusive Leggett mode.
Strong interaction between electrons and collective excitations in multiband superconductor MgB2
Photospheric C, N, and O abundances of 118 solar-analog stars were determined by applying a synthetic-fitting analysis to their spectra in the blue or near-UV region comprising lines of CH, NH, and OH molecules, with the aim of clarifying the behaviors of these abundances in comparison with [Fe/H]. It turned out that, in the range of -0.6<[Fe/H]<+0.3, [C/Fe] shows a marginally increasing tendency with decreasing [Fe/H] with a slight upturn around [Fe/H]~0, [N/Fe] tends to somewhat decrease towards lower [Fe/H], and [O/Fe] systematically increases (and thus [C/O] decreases) with a decrease in [Fe/H]. While these results are qualitatively consistent with previous determinations mostly based on atomic lines, the distribution centers of these [C/Fe], [N/Fe], and [O/Fe] at near-solar metallicity are slightly negative by several hundredths of a dex, which is interpreted as due to unusual solar abundances possibly related to the planetary formation of our solar system. However, clear anomalies are not observed in the [C,N,O/Fe] ratios of planet-host stars. Three out of four very Be-deficient stars were found to show anomalous [C/Fe] or [N/Fe], which may be due to mass transfer from an evolved companion, though its relation to the Be depletion mechanism is still unclear.
Spectroscopic determination of C, N, and O abundances of solar-analog stars based on the lines of hydride molecules
We construct global solutions on a full measure set with respect to the Gibbs measure for the one dimensional cubic fractional nonlinear Schr\"odinger equation (FNLS) with weak dispersion $(-\partial_x^2)^{\alpha/2}$, $\alpha<2$ by quite different methods, depending on the value of $\alpha$. We show that if $\alpha>\frac{6}{5}$, the sequence of smooth solutions for FNLS with truncated initial data converges almost surely, and the obtained limit has recurrence properties as the time goes to infinity. The analysis requires to go beyond the available deterministic theory of the equation. When $1<\alpha\leq \frac{6}{5}$, we are not able so far to get the recurrence properties but we succeeded to use a method of Bourgain-Bulut to prove the convergence of the solutions of the FNLS equation with regularized both data and nonlinearity. Finally, if $\frac{7}{8}<\alpha\leq 1$ we can construct global solutions in a much weaker sense by a classical compactness argument.
Gibbs measure dynamics for the fractional NLS
We study restricted chain-order polytopes associated to Young diagrams using combinatorial mutations. These polytopes are obtained by intersecting chain-order polytopes with certain hyperplanes. The family of chain-order polytopes associated to a poset interpolates between the order and chain polytopes of the poset. Each such polytope retains properties of the order and chain polytopes, for example their Ehrhart polynomial. For a fixed Young diagram, we show that all restricted chain-order polytopes are related by a sequence of combinatorial mutations. Since the property of giving rise to the period collapse phenomenon is invariant under combinatorial mutations, we provide a large class of rational polytopes that give rise to period collapse.
Restricted Chain-Order Polytopes via Combinatorial Mutations
This paper describes a program that solves elementary mathematical problems, mostly in metric space theory, and presents solutions that are hard to distinguish from solutions that might be written by human mathematicians. The program is part of a more general project, which we also discuss.
A fully automatic problem solver with human-style output
Do low corporate taxes always favor multinational production over economic integration? We propose a two-country model in which multinationals choose the locations of production plants and foreign distribution affiliates and shift profits between them through transfer prices. With high trade costs, plants are concentrated in the low-tax country; surprisingly, this pattern reverses with low trade costs. Indeed, economic integration has a non-monotonic impact: falling trade costs first decrease and then increase the plant share in the high-tax country, which we empirically confirm. Moreover, allowing for transfer pricing makes tax competition tougher and international coordination on transfer-pricing regulation can be beneficial.
Economic Integration and Agglomeration of Multinational Production with Transfer Pricing
We show local H\"older continuity of quasiminimizers of functionals with non-standard (Musielak--Orlicz) growth. Compared with previous results, we cover more general minimizing functionals and need fewer assumptions. We prove Harnack's inequality and a Morrey type estimate for quasiminimizers. Combining this with Ekeland's variational principle, we obtain local H\"older continuity for $\omega$-minimizers.
H\"older continuity of $\omega$-minimizers of functionals with generalized Orlicz growth
Traditional halftoning usually drops colors when dithering images with binary dots, which makes it difficult to recover the original color information. We propose a novel halftoning technique that converts a color image into a binary halftone with full restorability to its original version. Our base halftoning technique consists of two convolutional neural networks (CNNs) to produce the reversible halftone patterns and a noise incentive block (NIB) to mitigate the flatness degradation issue of CNNs. Furthermore, to resolve the conflict between blue-noise quality and restoration accuracy in our base method, we propose a predictor-embedded approach that offloads predictable information from the network, which in our case is the luminance information resembling the halftone pattern. Such an approach gives the network more flexibility to produce halftones with better blue-noise quality without compromising the restoration quality. Detailed studies of the multiple-stage training method and loss weightings have been conducted. We compare our predictor-embedded method with our base method in terms of halftone spectrum analysis, halftone accuracy, restoration accuracy, and data embedding. Our entropy evaluation shows that our halftone carries less encoding information than the base method. The experiments show that our predictor-embedded method gains more flexibility to improve the blue-noise quality of halftones and maintains comparable restoration quality with a higher tolerance for disturbances.
Taming Reversible Halftoning via Predictive Luminance
We consider the problem of automatic code generation given sample input-output pairs. We train a neural network to map from the current state and the outputs to the program's next statement. The neural network optimizes multiple tasks concurrently: the next operation out of a set of high-level commands, the operands of the next statement, and which variables can be dropped from memory. Using our method we are able to create programs that are more than twice as long as existing state-of-the-art solutions, while improving the success rate for comparable lengths and cutting the run-time by two orders of magnitude. Our code, including an implementation of various literature baselines, is publicly available at https://github.com/amitz25/PCCoder
Automatic Program Synthesis of Long Programs with a Learned Garbage Collector
We study the surface criticality of a three-dimensional classical antiferromagnetic Potts model whose bulk critical behavior belongs to the XY universality class because of an emergent O(2) symmetry. We find that surface antiferromagnetic next-nearest-neighbor interactions can drive the extraordinary-log phase to the ordinary phase; the transition between the two phases belongs to the universality class of the well-known special transition of the XY model. Upon further strengthening the surface next-nearest-neighbor interactions, the extraordinary-log phase reappears, but the critical behavior is now dominated by the sublattices of the model; the special point between the ordinary phase and the sublattice extraordinary-log phase belongs to a new universality class.
Sublattice extraordinary-log phase and new special point of the antiferromagnetic Potts model
Phase-space realisations of an infinite parameter family of quantum deformations of the boson algebra in which the $q$-- and the $qp$--deformed algebras arise as special cases are studied. Quantum and classical models for the corresponding deformed oscillators are provided. The deformation parameters are identified with coefficients of non-linear terms in the normal forms expansion of a family of classical Hamiltonian systems. These quantum deformations are trivial in the sense that they correspond to non-unitary transformations of the Weyl algebra. They are non-trivial in the sense that the deformed commutators consistently quantise a class of non-canonical classical Poisson structures.
Geometry of Deformed Boson Algebras
The LISA time-delay-interferometry responses to a gravitational-wave signal are rewritten in a form that accounts for the motion of the LISA constellation around the Sun; the responses are given in closed analytic forms valid for any frequency in the band accessible to LISA. We then present a complete procedure, based on the principle of maximum likelihood, to search for stellar-mass binary systems in the LISA data. We define the required optimal filters, the amplitude-maximized detection statistic (analogous to the F statistic used in pulsar searches with ground-based interferometers), and discuss the false-alarm and detection probabilities. We test the procedure in numerical simulations of gravitational-wave detection.
Optimal filtering of the LISA data
The proper definition of subsystems in gauge theory and gravity requires an extension of the local phase space by including edge mode fields. Their role is on the one hand to restore gauge invariance with respect to gauge transformations supported on the boundary, and on the other hand to parametrize the largest set of boundary symmetries which can arise if both the gauge parameters and the dynamical fields are unconstrained at the boundary. In this work we construct the extended phase space for three-dimensional gravity in first order connection and triad variables. There, the edge mode fields consist of a choice of coordinate frame on the boundary and a choice of Lorentz frame on the bundle, which together constitute the Lorentz-diffeomorphism edge modes. After constructing the extended symplectic structure and proving its gauge invariance, we study the boundary symmetries and the integrability of their generators. We find that the infinite-dimensional algebra of boundary symmetries with first order variables is the same as that with metric variables, and explain how this can be traced back to the expressions for the diffeomorphism Noether charge in both formulations. This concludes the study of extended phase spaces and edge modes in three-dimensional gravity, which was done previously by the author in the BF and Chern-Simons formulations.
Lorentz-diffeomorphism edge modes in 3d gravity
Beginning in 2006, G. Gentili and D.C. Struppa developed a theory of regular quaternionic functions with properties that recall classical results in complex analysis. For instance, in each Euclidean ball centered at 0 the set of regular functions coincides with that of quaternionic power series converging in the same ball. In 2009 the author proposed a classification of singularities of regular functions as removable, essential or as poles and studied poles by constructing the ring of quotients. In that article, not only the statements, but also the proving techniques were confined to the special case of balls centered at 0. In a subsequent paper, F. Colombo, G. Gentili, I. Sabadini and D.C. Struppa (2009) identified a larger class of domains, on which the theory of regular functions is natural and not limited to quaternionic power series. The present article studies singularities in this new context, beginning with the construction of the ring of quotients and of Laurent-type expansions at points other than the origin. These expansions, which differ significantly from their complex analogs, allow a classification of singularities that is consistent with the one given in 2009. Poles are studied, as well as essential singularities, for which a version of the Casorati-Weierstrass Theorem is proven.
Singularities of slice regular functions
Due to language models' propensity to generate toxic or hateful responses, several techniques have been developed to align model generations with users' preferences. Despite the effectiveness of such methods in improving the safety of model interactions, their impact on models' internal processes is still poorly understood. In this work, we apply popular detoxification approaches to several language models and quantify their impact on the resulting models' prompt dependence using feature attribution methods. We evaluate the effectiveness of counter-narrative fine-tuning and compare it with reinforcement learning-driven detoxification, observing differences in prompt reliance between the two methods despite their similar detoxification performance.
Let the Models Respond: Interpreting Language Model Detoxification Through the Lens of Prompt Dependence
We present our recent results on the scattering length of ^4He-^4He_2 collisions. These investigations are based on the hard-core version of the Faddeev differential equations. As compared to our previous calculations of the same quantity, a much more refined grid is employed, providing an improvement of about 10%. Our results are compared with other ab initio, and with model calculations.
Scattering length of the helium atom - helium dimer collision
The entropy production rate is a key quantity in irreversible thermodynamics. In this work, we concentrate on expressing the entropy production rate of chemical reaction systems in terms of the experimentally measurable reaction rate. Both triangular and linear networks have been studied. They attain either thermodynamic equilibrium or a non-equilibrium steady state, under suitable external constraints. We have shown that the entropy production rate is proportional to the square of the reaction velocity only around equilibrium and not around an arbitrary non-equilibrium steady state. This feature can act as a guide in revealing the nature of a steady state, very much like the minimum entropy production principle. A discussion on this point is also presented.
How is entropy production rate related to chemical reaction rate?
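For context, the near-equilibrium origin of the quadratic relation stated in the abstract above can be sketched in one line (a standard linear-irreversible-thermodynamics argument in one common convention; the paper's own notation may differ). With reaction velocity $v$, affinity $A$, temperature $T$ and a phenomenological coefficient $L$,

$$ \sigma \;=\; \frac{v\,A}{T}, \qquad v \;\simeq\; L\,\frac{A}{T}\ \ \text{(close to equilibrium)} \quad\Longrightarrow\quad \sigma \;\simeq\; \frac{v^{2}}{L}. $$

Away from equilibrium the affinity is no longer proportional to the velocity, so the quadratic dependence is lost, consistent with the statement above.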
For a Legendrian submanifold $M$ of a Sasaki manifold $N$, we study harmonicity and biharmonicity of the corresponding Lagrangian cone submanifold $C(M)$ of a Kaehler manifold $C(N)$. We show that, if $C(M)$ is biharmonic in $C(N)$, then it is harmonic; and $M$ is proper biharmonic in $N$ if and only if $C(M)$ has a non-zero eigen-section of the Jacobi operator with the eigenvalue $m=\dim M$.
Sasaki manifolds, Kaehler cone manifolds and biharmonic submanifolds
Graphene membranes act as highly sensitive transducers in nanoelectromechanical devices due to their ultimate thinness. Previously, the piezoresistive effect in graphene has been experimentally verified under uniaxial strain. Here we report experimental and theoretical data on the uni- and biaxial piezoresistive properties of suspended graphene membranes applied to piezoresistive pressure sensors. A detailed model based on a linearized Boltzmann transport equation accurately describes the charge carrier density and mobility in strained graphene, and hence the gauge factor. The gauge factor is found to be practically independent of the doping concentration and crystallographic orientation of the graphene films. These investigations provide deeper insight into the piezoresistive behavior of graphene membranes.
Piezoresistive Properties of Suspended Graphene Membranes under Uniaxial and Biaxial Strain in Nanoelectromechanical Pressure Sensors
We investigate the structural and magnetic properties of the Kitaev spin liquid candidate material Ag$_3$LiIr$_2$O$_6$ based on the $^7$Li nuclear magnetic resonance line shape, Knight shift and spin-lattice relaxation rate $1/T_1$. The first sample, A, shows signatures of magnetically ordered spins, and exhibits one sharp $^7$Li peak with FWHM increasing significantly below 14~K. $1/T_1^{stretch}$ of this sample displays a broad local maximum at 40~K, followed by a very sharp peak at $T_N = 9\pm1$~K due to critical slowing down of Ir spin fluctuations, a typical signature of magnetic long-range order. In order to shed light on the position-by-position variation of $1/T_1$ throughout the sample, we use an Inverse Laplace Transform $T_1$ analysis based on Tikhonov regularization to deduce the density distribution function $P(1/T_1)$. We demonstrate that $\sim 60\%$ of the Ir spins are statically ordered at the NMR measurement timescale but the rest of the sample volume remains paramagnetic even at 4.2~K, presumably because of structural disorder induced primarily by stacking faults. In order to further investigate the influence of structural disorder, we compare these NMR results with those of a second sample, B, which has been shown by transmission electron microscopy to have domains with unwanted Ag inclusion at Li and Ir sites within the Ir honeycomb planes. Sample B displays an additional NMR peak with a relative intensity of $\sim 17\%$. The small Knight shift and $1/T_1$ of these defect-induced $^7$Li sites and the enhancement of the bulk susceptibility at low temperatures suggest that these defects generate domains of only weakly magnetic Ir spins accompanied by free spins, leading to a lack of clear signatures of long-range order. This apparent lack of long-range order could easily be misinterpreted as evidence for the realization of a spin liquid ground state in a highly disordered Kitaev lattice.
NMR Investigation on Honeycomb Iridate Ag$_3$LiIr$_2$O$_6$
Inspired by the success of the General Language Understanding Evaluation benchmark, we introduce the Biomedical Language Understanding Evaluation (BLUE) benchmark to facilitate research in the development of pre-training language representations in the biomedicine domain. The benchmark consists of five tasks with ten datasets that cover both biomedical and clinical texts with different dataset sizes and difficulties. We also evaluate several baselines based on BERT and ELMo and find that the BERT model pre-trained on PubMed abstracts and MIMIC-III clinical notes achieves the best results. We make the datasets, pre-trained models, and codes publicly available at https://github.com/ncbi-nlp/BLUE_Benchmark.
Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets
We study spiky string solutions in AdS3 x S1 that are characterized by two spins S,J as well as winding m in S1 and spike number n. We construct explicitly two-cut solutions by using the SL(2) asymptotic Bethe Ansatz equations at leading order in strong coupling. Unlike the folded spinning string, these solutions have asymmetric distributions of Bethe roots. The solutions match the known spiky string classical results obtained directly from string theory for arbitrary semiclassical parameters, including J=0 and any value of S, namely short and long strings. At large spins and winding number the string touches the boundary, and we find a new scaling limit with the energy given as $E-S = \frac{n}{2\pi}\sqrt{1+\frac{4\pi^2}{n^2}\left(\frac{J^2}{\ln^2 S}+\frac{m^2}{\ln^2 S}\right)}\,\ln S$. This is a generalization of the known scaling for the folded spinning string.
Spiky strings in Bethe Ansatz at strong coupling
This research addresses the multiprocessor scheduling problem for hard real-time systems, focusing in particular on optimal and global schedulers when practical constraints are taken into account. First, we propose an improvement of the optimal algorithm BF. We formally prove that our adaptation is (i) optimal, i.e., it always generates a feasible schedule as long as such a schedule exists, and (ii) valid, i.e., it complies with all the requirements. We also show that it outperforms BF by providing a computational complexity of O(n), where n is the number of tasks to be scheduled. Next, we propose a schedulability analysis which indicates a priori whether the real-time application can be scheduled by our improvement of BF without missing any deadline. This analysis is, to the best of our knowledge, the first such test for multiprocessors that takes into account all the main overheads generated by the Operating System.
On the Design of an Optimal Multiprocessor Real-Time Scheduling Algorithm under Practical Considerations (Extended Version)
This study presents a scientometric analysis of 8486 bibliometric publications retrieved from the Web of Science database for the period 2008 to 2017. Data were collected and analyzed using the Bibexcel software. The study focuses on various aspects of quantitative research such as the year-wise growth of papers, Collaborative Index (CI), Degree of Collaboration (DC), Co-authorship Index (CAI), Collaborative Co-efficient (CC), Modified Collaborative Co-efficient (MCC), Lotka's exponent value, and the Kolmogorov-Smirnov test (K-S test).
Lotka's Law and Pattern of Author Productivity in the Field of Brain Concussion Research: A Scientometric Analysis
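For the reader's convenience, several of the collaboration measures listed above are commonly computed as follows (standard forms from the scientometrics literature; the study may use slight variants). Here $f_j$ is the number of papers with $j$ authors, $N$ the total number of papers, $k$ the maximum number of authors on a paper, and $N_s$, $N_m$ the numbers of single- and multi-authored papers:

$$ CI=\frac{\sum_{j=1}^{k} j\,f_j}{N}, \qquad DC=\frac{N_m}{N_s+N_m}, \qquad CC=1-\frac{\sum_{j=1}^{k} f_j/j}{N}. $$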
Quantization is a technique for creating efficient Deep Neural Networks (DNNs), which involves performing computations and storing tensors at lower bit-widths than f32 floating point precision. Quantization reduces model size and inference latency, and therefore allows DNNs to be deployed on platforms with constrained computational resources and in real-time systems. However, quantization can lead to numerical instability caused by roundoff error, which results in inaccurate computations and therefore a decrease in quantized model accuracy. Similarly to prior works, which have shown that both biases and activations are more sensitive to quantization and are best kept in full precision or quantized with higher bit-widths, we show that some weights are more sensitive than others, which should be reflected in their quantization bit-widths. To that end we propose MixQuant, a search algorithm that finds the optimal custom quantization bit-width for each layer weight based on roundoff error and can be combined with any quantization method as a form of pre-processing optimization. We show that combining MixQuant with BRECQ, a state-of-the-art quantization method, yields better quantized model accuracy than BRECQ alone. Additionally, we combine MixQuant with vanilla asymmetric quantization to show that MixQuant has the potential to optimize the performance of any quantization technique.
MixQuant: Mixed Precision Quantization with a Bit-width Optimization Search
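The per-layer bit-width selection described in the abstract above can be illustrated with a minimal sketch (this is not the authors' MixQuant implementation; the candidate bit-widths, error tolerance, and layer names are illustrative assumptions): for each layer's weight tensor, simulate uniform rounding at several bit-widths and keep the smallest width whose roundoff error stays below a tolerance.

```python
import numpy as np

def quantize_dequantize(w, bits):
    """Simulate symmetric uniform quantization of a weight tensor at a given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    max_abs = np.max(np.abs(w))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

def choose_bitwidths(weights, candidate_bits=(4, 6, 8), tol=1e-5):
    """Pick, per layer, the smallest candidate bit-width whose mean-squared
    roundoff error stays below tol; fall back to the widest candidate."""
    assignment = {}
    for name, w in weights.items():
        chosen = max(candidate_bits)
        for bits in sorted(candidate_bits):
            err = np.mean((w - quantize_dequantize(w, bits)) ** 2)
            if err < tol:
                chosen = bits
                break
        assignment[name] = chosen
    return assignment

# Toy usage with two hypothetical layers of different weight spreads.
rng = np.random.default_rng(0)
weights = {"layer1": rng.normal(0.0, 0.02, (64, 64)),
           "layer2": rng.normal(0.0, 0.5, (64, 64))}
print(choose_bitwidths(weights))
```

In the actual method the criterion would be combined with an accuracy-aware search and a downstream quantizer (e.g., BRECQ), but the sketch captures the core idea of letting measured roundoff error drive a per-layer bit-width choice.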
The novel coronavirus SARS-CoV-2, which emerged in late 2019, has since spread around the world and infected hundreds of millions of people with coronavirus disease 2019 (COVID-19). While this viral species was unknown prior to January 2020, its similarity to other coronaviruses that infect humans has allowed for rapid insight into the mechanisms that it uses to infect human hosts, as well as the ways in which the human immune system can respond. Here, we contextualize SARS-CoV-2 among other coronaviruses and identify what is known and what can be inferred about its behavior once inside a human host. Because the genomic content of coronaviruses, which specifies the virus's structure, is highly conserved, early genomic analysis provided a significant head start in predicting viral pathogenesis and in understanding potential differences among variants. The pathogenesis of the virus offers insights into symptomatology, transmission, and individual susceptibility. Additionally, prior research into interactions between the human immune system and coronaviruses has identified how these viruses can evade the immune system's protective mechanisms. We also explore systems-level research into the regulatory and proteomic effects of SARS-CoV-2 infection and the immune response. Understanding the structure and behavior of the virus serves to contextualize the many facets of the COVID-19 pandemic and can influence efforts to control the virus and treat the disease.
Pathogenesis, Symptomatology, and Transmission of SARS-CoV-2 through Analysis of Viral Genomics and Structure
We simulate four quantum error correcting codes under error models inspired by realistic noise sources in near-term ion trap quantum computers: $T_2$ dephasing, gate overrotation, and crosstalk. We use this data to find preferred codes for given error parameters along with logical error biases and a pseudothreshold which compares the physical and logical gate failure rates for a CNOT gate. Using these results we conclude that Bacon-Shor-13 is the most promising near term candidate as long as the impact of crosstalk can be mitigated through other means.
Logical Performance of 9 Qubit Compass Codes in Ion Traps with Crosstalk Errors
Ultracold atoms in optical lattices are a versatile tool to investigate fundamental properties of quantum many body systems. In particular, the high degree of control of experimental parameters has allowed the study of many interesting phenomena such as quantum phase transitions and quantum spin dynamics. Here we demonstrate how such control can be extended down to the most fundamental level of a single spin at a specific site of an optical lattice. Using a tightly focussed laser beam together with a microwave field, we were able to flip the spin of individual atoms in a Mott insulator with sub-diffraction-limited resolution, well below the lattice spacing. The Mott insulator provided us with a large two-dimensional array of perfectly arranged atoms, in which we created arbitrary spin patterns by sequentially addressing selected lattice sites after freezing out the atom distribution. We directly monitored the tunnelling quantum dynamics of single atoms in the lattice prepared along a single line and observed that our addressing scheme leaves the atoms in the motional ground state. Our results open the path to a wide range of novel applications from quantum dynamics of spin impurities, entropy transport, implementation of novel cooling schemes, and engineering of quantum many-body phases to quantum information processing.
Single-Spin Addressing in an Atomic Mott Insulator
Let $V$ be a vector space of dimension $n$. A family $\{H_1, \ldots, H_p\}$ of linear hyperplanes of $V$ defines an arrangement ${\cal A}$ in $V$. For $i \in \{1, \ldots, p\}$, let $l_i$ be a linear form on $V$ with kernel $H_i$. We denote by $A_n({\bf C})$ the Weyl algebra of algebraic differential operators on $V$. Following J. Bernstein, the ideal consisting of the polynomials $b \in {\bf C}[s_1, \ldots, s_p]$ such that $$ b(s_1, \ldots, s_p) \, l_1^{s_1} \cdots l_p^{s_p} \in A_n({\bf C})[s_1, \ldots, s_p] \, l_1^{s_1 + 1} \cdots l_p^{s_p + 1} $$ is not reduced to zero. This ideal does not depend on the choice of the linear forms $l_i$. The goal of this article is to determine this ideal when ${\cal A}$ is a free arrangement of linear hyperplanes in the sense of K. Saito.
L'id\'eal de Bernstein d'un arrangement libre d'hyperplans lin\'eaires
Event-based collections are often started with a web search, but the search results you find on Day 1 may not be the same as those you find on Day 7. In this paper, we consider collections that originate from extracting URIs (Uniform Resource Identifiers) from Search Engine Result Pages (SERPs). Specifically, we seek to provide insight about the retrievability of URIs of news stories found on Google, and to answer two main questions: first, can one "refind" the same URI of a news story (for the same query) on Google after a given time? Second, what is the probability of finding a story on Google over a given period of time? To answer these questions, we issued seven queries to Google every day for over seven months (2017-05-25 to 2018-01-12) and collected links from the first five SERPs to generate seven collections, one per query. The queries represent public-interest stories: "healthcare bill," "manchester bombing," "london terrorism," "trump russia," "travel ban," "hurricane harvey," and "hurricane irma." We tracked each URI in all collections over time to estimate the discoverability of URIs from the first five SERPs. Our results showed that the daily average rate at which stories were replaced on the default Google SERP ranged from 0.21-0.54, and the weekly rate from 0.39-0.79, suggesting the fast replacement of older stories by newer stories. The probability of finding the same URI of a news story one day after its initial appearance on the SERP ranged from 0.34-0.44. After a week, the probability of finding the same news stories diminishes rapidly to 0.01-0.11. Our findings suggest that, due to the difficulty of retrieving the URIs of news stories from Google, collection building that originates from search engines should begin as soon as possible in order to capture the first stages of events, and should persist in order to capture the evolution of the events...
Scraping SERPs for Archival Seeds: It Matters When You Start
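The day-over-day bookkeeping behind the replacement rates and refind probabilities reported above can be sketched as follows (an illustrative reconstruction, not the authors' code; the snapshot structure and example URIs are assumptions): each query yields one set of URIs per day, and the replacement rate is the fraction of a day's URIs that no longer appear in a later snapshot.

```python
def daily_replacement_rates(snapshots):
    """snapshots: list of sets of URIs, one per day, for a single query.
    Returns, per day, the fraction of URIs that disappeared by the next day."""
    rates = []
    for prev, curr in zip(snapshots, snapshots[1:]):
        if prev:
            rates.append(len(prev - curr) / len(prev))
    return rates

def refind_probability(snapshots, lag):
    """Fraction of URIs seen on some day that are still present `lag` days later."""
    found = total = 0
    for i in range(len(snapshots) - lag):
        day, later = snapshots[i], snapshots[i + lag]
        total += len(day)
        found += len(day & later)
    return found / total if total else 0.0

# Toy usage with two hypothetical daily snapshots for one query.
day1 = {"https://example.org/a", "https://example.org/b"}
day2 = {"https://example.org/b", "https://example.org/c"}
print(daily_replacement_rates([day1, day2]))    # [0.5]
print(refind_probability([day1, day2], lag=1))  # 0.5
```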
We analyze the potential of the CERN Large Hadron Collider (LHC) to study the structure of quartic vector-boson interactions through the pair production of electroweak gauge bosons via weak boson fusion q q -> q q W W. In order to study these couplings we have performed a partonic level calculation of all processes p p -> j j e+/- mu+/- nu nu and pp -> j j e+/- mu-/+ nu nu at the LHC using the exact matrix elements at O(\alpha_{em}^6) and O(\alpha_{em}^4 \alpha_s^2) as well as a full simulation of the t tbar plus 0 to 2 jets backgrounds. A complete calculation of the scattering amplitudes is necessary not only for a correct description of the process but also to preserve all correlations between the final state particles which can be used to enhance the signal. Our analyses indicate that the LHC can improve by more than one order of magnitude the bounds arising at present from indirect measurements.
p p -> j j e+/- mu+/- nu nu and j j e+/- mu-/+ nu nu at O(\alpha_{em}^6) and O(\alpha_{em}^4 \alpha_s^2) for the Study of the Quartic Electroweak Gauge Boson Vertex at LHC
The extension of the Standard Model obtained by assuming a $U(1)_{\rm B-L}$ gauge symmetry is very well motivated, since it naturally explains the presence of the heavy right-handed neutrinos required to account for the small active neutrino masses via the seesaw mechanism and thermal leptogenesis. Traditionally, one introduces three right-handed neutrinos to cancel the $[U(1)_{\rm B-L}]^3$ anomaly. However, it suffices to introduce two heavy right-handed neutrinos for these purposes, and therefore one right-handed neutrino can be replaced by new chiral fermions that cancel the $U(1)_{\rm B-L}$ gauge anomaly. One of these chiral fermions can then naturally play the role of a dark matter candidate. In this paper, we demonstrate how this framework produces a dark matter candidate that can address the so-called "core-cusp problem". As one of the small-scale problems encountered by the $\Lambda$CDM paradigm, it may provide an important clue to the nature of dark matter. One resolution among many is to hypothesize that sub-keV fermion dark matter halos in dwarf spheroidal galaxies are in a (quasi) degenerate configuration. We show how the degenerate sub-keV fermion dark matter candidate can have a non-thermal origin in our model and thus be consistent with Lyman-$\alpha$ forest observations. Thereby, the small neutrino masses, the baryon asymmetry, and the sub-keV dark matter all become consequences of the broken B-L gauge symmetry.
Degenerate Fermion Dark Matter from a Broken $U(1)_{\rm B-L}$ Gauge Symmetry
Heating of carriers in intrinsic graphene under a dc electric field is considered, taking into account the intraband energy relaxation due to acoustic phonon scattering and the interband generation-recombination transitions due to thermal radiation. The distribution of nonequilibrium carriers is obtained for the cases when intercarrier scattering is inessential and when carrier-carrier Coulomb scattering effectively establishes a quasiequilibrium distribution with the temperature and density of carriers determined by the balance equations. Because of an interplay between weak energy relaxation and generation-recombination processes, a very low threshold of nonlinear response takes place. The nonlinear current-voltage characteristics are calculated for the case of momentum relaxation caused by elastic scattering. The obtained current-voltage characteristics show a low threshold for nonlinear behavior and the appearance of a second ohmic region for strong fields.
Hot carriers in an intrinsic graphene
In this work we first prove, by formal arguments, that the diffusion limit of nonlinear kinetic equations, where both the transport term and the turning operator are density-dependent, leads to volume-exclusion chemotactic equations. We generalise an asymptotic preserving scheme for such nonlinear kinetic equations based on a micro-macro decomposition. By properly discretizing the nonlinear term implicitly-explicitly in an upwind manner, the scheme produces accurate approximations also in the case of strong chemosensitivity. We show, via detailed calculations, that the scheme presents the following properties: asymptotic preserving, positivity preserving and energy dissipation, which are essential for practical applications. We extend this scheme to two dimensional kinetic models and we validate its efficiency by means of 1D and 2D numerical experiments of pattern formation in biological systems.
Asymptotic preserving schemes for nonlinear kinetic equations leading to volume-exclusion chemotaxis in the diffusive limit
Explicit expressions describing the structure function g_1 at arbitrary x and Q^2 are obtained. In the first place, they combine the well-known DGLAP expressions for g_1 with the total resummation of leading logarithms of x, which makes it possible to cover the kinematic region of arbitrary x and large Q^2. In order to cover the small-Q^2 region, the shift Q^2 -> Q^2 + mu^2 in the large-Q^2 expressions for g_1 is suggested and values of mu are estimated. The expressions obtained do not require singular factors x^{-a} in the fits for the initial parton densities.
Description of the spin structure function g_1 at arbitrary $x$ and arbitrary Q^2
Stroke is a major cause of mortality and disability worldwide, and one in four people is at risk of suffering a stroke in their lifetime. Pre-hospital stroke assessment plays a vital role in identifying stroke patients accurately in order to accelerate further examination and treatment in hospitals. Accordingly, the National Institutes of Health Stroke Scale (NIHSS), the Cincinnati Pre-hospital Stroke Scale (CPSS) and the Face Arm Speech Time (F.A.S.T.) test are globally known tests for stroke assessment. However, the validity of these tests is questionable in the absence of neurologists, and access to healthcare may be limited. Therefore, in this study, we propose a motion-aware and multi-attention fusion network (MAMAF-Net) that can detect stroke from multimodal examination videos. In contrast to other studies on stroke detection from video analysis, our study is the first to propose an end-to-end solution from multiple video recordings of each subject, with a dataset encapsulating stroke, transient ischemic attack (TIA), and healthy controls. The proposed MAMAF-Net consists of motion-aware modules to sense the mobility of patients, attention modules to fuse the multi-input video data, and 3D convolutional layers to perform diagnosis from the attention-based extracted features. Experimental results on the collected Stroke-data dataset show that the proposed MAMAF-Net achieves successful detection of stroke with 93.62% sensitivity and a 95.33% AUC score.
MAMAF-Net: Motion-Aware and Multi-Attention Fusion Network for Stroke Diagnosis
Accretion powers relativistic jets in GRBs, similarly to other jet sources. The black holes at the heart of long GRBs are formed as the end product of stellar evolution. At birth, some of these black holes must be very rapidly spinning to be able to power the GRBs. In some cases, the black holes may be born without the formation of a disk/jet engine, and the star then collapses without an electromagnetic transient. In this proceeding, we discuss the conditions for launching variable jets from a magnetized disk in an arrested state. We also discuss the properties of collapsing massive stars as progenitors of GRBs, and the conditions which must be satisfied for the collapsar to produce a bright gamma-ray transient. We find that the black hole rotation is further affected by the self-gravity of the collapsing matter. Finally, we comment on the properties of the accretion disk under the extreme conditions of nuclear densities and temperatures, where it can contribute to the kilonova accompanying short GRBs.
Many faces of accretion in gamma ray bursts
Out-of-time-ordered correlators (OTOCs) are of crucial importance for studying a wide variety of fundamental phenomena in quantum physics, ranging from information scrambling to quantum chaos and many-body localization. However, apart from a few special cases, they are notoriously difficult to compute even numerically due to the exponential complexity of generic quantum many-body systems. In this paper, we introduce a machine learning approach to OTOCs based on the restricted-Boltzmann-machine architecture, which features wide applicability and could work for arbitrary-dimensional systems with massive entanglement. We show, through a concrete example involving a two-dimensional transverse field Ising model, that our method is capable of computing early-time OTOCs with respect to random pure quantum states or infinite-temperature thermal ensembles. Our results showcase the great potential for machine learning techniques in computing OTOCs, which open up numerous directions for future studies related to similar physical quantities.
Artificial Neural Network Based Computation for Out-of-Time-Ordered Correlators
Understanding the processes which create and destroy $^{22}$Na is important for diagnosing classical nova outbursts. Conventional $^{22}$Na(p,$\gamma$) studies are complicated by the need to employ radioactive targets. In contrast, we have formed the particle-unbound states of interest through the heavy-ion fusion reaction, $^{12}$C($^{12}$C,n)$^{23}$Mg and used the Gammasphere array to investigate their radiative decay branches. Detailed spectroscopy was possible and the $^{22}$Na(p,$\gamma$) reaction rate has been re-evaluated. New hydrodynamical calculations incorporating the upper and lower limits on the new rate suggest a reduction in the yield of $^{22}$Na with respect to previous estimates, implying a reduction in the maximum detectability distance for $^{22}$Na $\gamma$ rays from novae.
Reevaluation of the $^{22}$Na(p,$\gamma$) reaction rate: Implications for the detection of $^{22}$Na gamma rays from novae
By simultaneously assuming the unitarity of Hawking evaporation, the universality of the Bekenstein entropy bound, and the validity of the cosmic censorship conjecture, we find that the black hole evaporation rate could evolve from the usual inverse square law in the black hole mass to a constant evaporation rate near the end of the Hawking evaporation, before quantum gravity could come into play, implying a slightly longer lifetime for lighter black holes.
The doomsday of black hole evaporation
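For reference, the "usual inverse square law" referred to above is the standard textbook Hawking mass-loss rate, quoted here only to make the comparison explicit:

$$ \frac{dM}{dt} \;\propto\; -\frac{1}{M^{2}} \quad\Longrightarrow\quad t_{\rm evap} \propto M^{3}, $$

so a rate that flattens to a constant near the end of evaporation, as argued above, necessarily yields a somewhat longer lifetime for light black holes than the $M^{3}$ estimate.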
We have measured Faraday Rotation Measures (RMs) at Arecibo Observatory for 36 pulsars, 17 of them new. We combine these and earlier measurements to study the galactic magnetic field and its possible temporal variations. Many RM values have changed significantly on several-year timescales, but these variations probably do not reflect interstellar magnetic field changes. By studying the distribution of pulsar RMs near the plane in conjunction with the new NE2001 electron density model, we note the following structures in the first galactic longitude quadrant: (1) The local field reversal can be traced as a null in RM in a 0.5-kpc wide strip interior to the Solar Circle, extending \~7 kpc around the Galaxy. (2) Steadily increasing RMs in a 1-kpc wide strip interior to the local field reversal, and also in the wedge bounded by 42<l<52 deg, indicate that the large-scale field is approximately steady from the local reversal in to the Sagittarius arm. (3) The RMs in the 1-kpc wide strip interior to the Sagittarius arm indicate another field reversal in this strip. (4) The RMs in a final 1-kpc wide interior strip, straddling the Scutum arm, also support a second field reversal interior to the Sun,between the Sagittarius and Scutum arms. (5) Exterior to the nearby reversal, RMs from 60<l<78 deg show evidence for two reversals, on the near and far side of the Perseus arm. (6) In general, the maxima in the large-scale fields tend to lie along the spiral arms, while the field minima tend to be found between them. We have also determined polarized profiles of 48 pulsars at 430 MHz. We present morphological pulse profile classifications of the pulsars, based on our new measurements and previously published data.
Arecibo 430 MHz Pulsar Polarimetry: Faraday Rotation Measures and Morphological Classifications
We study social cost losses in Facility Location games, where $n$ selfish agents install facilities over a network and connect to them, so as to forward their local demand (expressed by a non-negative weight per agent). Agents using the same facility share fairly its installation cost, but every agent pays individually a (weighted) connection cost to the chosen location. We study the Price of Stability (PoS) of pure Nash equilibria and the Price of Anarchy of strong equilibria (SPoA), that generalize pure equilibria by being resilient to coalitional deviations. A special case of recently studied network design games, Facility Location merits separate study as a classic model with numerous applications and individual characteristics: our analysis for unweighted agents on metric networks reveals constant upper and lower bounds for the PoS, while an $O(\ln n)$ upper bound implied by previous work is tight for non-metric networks. Strong equilibria do not always exist, even for the unweighted metric case. We show that $e$-approximate strong equilibria exist ($e=2.718...$). The SPoA is generally upper bounded by $O(\ln W)$ ($W$ is the sum of agents' weights), which becomes tight $\Theta(\ln n)$ for unweighted agents. For the unweighted metric case we prove a constant upper bound. We point out several challenging open questions that arise.
On Pure and (approximate) Strong Equilibria of Facility Location Games
For a left action $S\overset{\lambda}{\curvearrowright}X$ of a cancellative right amenable monoid $S$ on a discrete Abelian group $X$, we construct its Ore localization $G\overset{\lambda^*}{\curvearrowright}X^*$, where $G$ is the group of left fractions of $S$; analogously, for a right action $K\overset{\rho}\curvearrowleft S$ on a compact space $K$, we construct its Ore colocalization $K^*\overset{\rho^*}{\curvearrowleft} G$. Both constructions preserve entropy, i.e., for the algebraic entropy $h_{\mathrm{alg}}$ and for the topological entropy $h_{\mathrm{top}}$ one has $h_{\mathrm{alg}}(\lambda)=h_{\mathrm{alg}}(\lambda^*)$ and $h_{\mathrm{top}}(\rho)=h_{\mathrm{top}}(\rho^*)$, respectively. Exploiting these constructions and the theory of quasi-tilings, we extend the Addition Theorem for $h_{\mathrm{top}}$, known for right actions of countable amenable groups on compact metrizable groups, to right actions $K\overset{\rho}{\curvearrowleft} S$ of cancellative right amenable monoids $S$ (with no restrictions on the cardinality) on arbitrary compact groups $K$. When the compact group $K$ is Abelian, we prove that $h_{\mathrm{top}}(\rho)$ coincides with $h_{\mathrm{alg}}(\hat{\rho})$, where $S\overset{\hat{\rho}}\curvearrowright X$ is the dual left action on the discrete Pontryagin dual $X=\hat{K}$, that is, a so-called Bridge Theorem. From the Addition Theorem for $h_{\mathrm{top}}$ and the Bridge Theorem, we obtain an Addition Theorem for $h_{\mathrm{alg}}$ for left actions $S\overset{\lambda}\curvearrowright X$ on discrete Abelian groups, so far known only under the hypotheses that either $X$ is torsion or $S$ is locally monotileable. The proofs substantially use the unified approach towards entropy based on the entropy of actions of cancellative right amenable monoids on appropriately defined normed monoids.
Ore localization of amenable monoid actions and applications towards entropy $-$ addition formulas and the bridge theorem
Hyperparameter tuning is a common technique for improving the performance of neural networks. Most techniques for hyperparameter search involve an iterated process where the model is retrained at every iteration. However, the expected accuracy improvement from each additional search iteration is still unknown. Calculating the expected improvement can help create stopping rules for hyperparameter tuning and allow for a wiser allocation of a project's computational budget. In this paper, we establish an empirical estimate for the expected accuracy improvement from an additional iteration of hyperparameter search. Our results hold for any hyperparameter tuning method which is based on random search \cite{bergstra2012random} and samples hyperparameters from a fixed distribution. We bound our estimate with an error of $O\left(\sqrt{\frac{\log k}{k}}\right)$ w.h.p., where $k$ is the current number of iterations. To the best of our knowledge this is the first bound on the expected gain from an additional iteration of hyperparameter search. Finally, we demonstrate that the optimal estimate for the expected accuracy will still have an error of $\frac{1}{k}$.
Random Search Hyper-Parameter Tuning: Expected Improvement Estimation and the Corresponding Lower Bound
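A minimal numerical sketch of the quantity discussed above (a bootstrap illustration under the i.i.d. random-search assumption, not the paper's estimator; the score values are synthetic): resample the $k$ observed validation scores to compare the expected best of $k+1$ trials with the expected best of $k$ trials. As a simple sanity check, note that for i.i.d. scores the $(k+1)$-th trial beats the current best with probability $1/(k+1)$.

```python
import numpy as np

def expected_gain_next_trial(scores, n_boot=5000, seed=0):
    """Bootstrap sketch: estimate E[best of k+1 trials] - E[best of k trials]
    by resampling from the k validation scores observed so far."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    k = len(scores)
    best_k = rng.choice(scores, size=(n_boot, k), replace=True).max(axis=1).mean()
    best_k1 = rng.choice(scores, size=(n_boot, k + 1), replace=True).max(axis=1).mean()
    return float(best_k1 - best_k)

# Toy usage: validation accuracies from k = 20 random-search trials.
rng = np.random.default_rng(1)
accuracies = 0.70 + 0.20 * rng.random(20)
print(expected_gain_next_trial(accuracies))
```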
In our preceding paper, Liverpool Telescope data of M31 novae in eruption were used to facilitate a search for their progenitor systems within archival Hubble Space Telescope (HST) data, with the aim of detecting systems with red giant secondaries (RG-novae) or luminous accretion disks. From an input catalog of 38 spectroscopically confirmed novae with archival quiescent observations, likely progenitors were recovered for eleven systems. Here we present the results of the subsequent statistical analysis of the original survey, including possible biases associated with the survey and the M31 nova population in general. As part of this analysis we examine the distribution of optical decline times (t(2)) of M31 novae, how the likely bulge and disk nova distributions compare, and how the M31 t(2) distribution compares to that of the Milky Way. Using a detailed Monte Carlo simulation, we determine that 30 (+13/-10) percent of all M31 nova eruptions can be attributed to RG-nova systems, and at the 99 percent confidence level, >10 percent of all M31 novae are RG-novae. This is the first estimate of a RG-nova rate of an entire galaxy. Our results also imply that RG-novae in M31 are more likely to be associated with the M31 disk population than the bulge, indeed the results are consistent with all RG-novae residing in the disk. If this result is confirmed in other galaxies, it suggests any Type Ia supernovae that originate from RG-nova systems are more likely to be associated with younger populations, and may be rare in old stellar populations, such as early-type galaxies.
On the Progenitors of Local Group Novae. II. The Red Giant Nova Rate of M31
Sorted L-One Penalized Estimation (SLOPE) has recently shown appealing theoretical properties as well as empirical behavior for false discovery rate (FDR) control in high-dimensional feature selection, by adaptively imposing a non-increasing sequence of tuning parameters on the sorted $\ell_1$ penalties. This paper goes beyond the previous focus on FDR control by considering stepdown-based SLOPE to control the probability of $k$ or more false rejections ($k$-FWER) and the false discovery proportion (FDP). Two new SLOPEs, called $k$-SLOPE and F-SLOPE, are proposed to realize $k$-FWER and FDP control respectively, where the stepdown procedure is injected into the SLOPE scheme. For the proposed stepdown SLOPEs, we establish theoretical guarantees on controlling $k$-FWER and FDP under the orthogonal design setting, and also provide an intuitive guideline for the choice of the regularization parameter sequence in a much more general setting. Empirical evaluations on simulated data validate the effectiveness of our approaches for controlled feature selection and support our theoretical findings.
Stepdown SLOPE for Controlled Feature Selection
We show that p-harmonic functions in the plane satisfy a nonlinear asymptotic mean value property for p>1. This extends previous results of Manfredi and Lindqvist for a certain range of p.
On the asymptotic mean value property for planar p-harmonic functions
This paper reports optical orientation experiments performed in narrow GaAs/AlGaAs quantum wells doped with Mn. We experimentally demonstrate control over the spin polarization by means of optical orientation via the impurity-to-band excitation and observe a sign inversion of the luminescence polarization depending on the pump power. The g factor of a hole localized on the Mn acceptor in the quantum well was also found to be considerably modified from its bulk value due to the quantum confinement effect. This finding shows the importance of the local environment for the magnetic properties of dopants in semiconductor nanostructures.
Optical orientation of spins in GaAs:Mn/AlGaAs quantum wells via impurity-to-band excitation
The Gaussian Mixture Probability Hypothesis Density (GM-PHD) filter is an almost exact closed-form approximation to the Bayes-optimal multi-target tracking algorithm. Due to its optimality guarantees and ease of implementation, it has been studied extensively in the literature. However, the challenges involved in implementing the GM-PHD filter efficiently in a distributed (multi-sensor) setting have received little attention. The existing solutions for distributed PHD filtering either have a high computational and communication cost, making them infeasible for resource-constrained applications, or are unable to guarantee the asymptotic convergence of the distributed PHD algorithm to an optimal solution. In this paper, we develop a distributed GM-PHD filtering recursion that uses a probabilistic communication rule to limit the communication bandwidth of the algorithm, while ensuring asymptotic optimality of the algorithm. We derive the convergence properties of this recursion, which uses weighted average consensus of Gaussian mixtures (GMs) to lower (and asymptotically minimize) the Cauchy-Schwarz divergence between the sensors' local estimates. In addition, the proposed method is able to avoid the issue of false positives, which has previously been noted to impact the filtering performance of distributed multi-target tracking. Through numerical simulations, it is demonstrated that our proposed method is an effective solution for distributed multi-target tracking in resource-constrained sensor networks.
Distributed Gaussian Mixture PHD Filtering under Communication Constraints
A recent investigation of the single spin asymmetry (SSA) in low-virtuality electroproduction/photoproduction of $J/\psi$ in the color evaporation model is presented. It is shown that the asymmetry is sizable and can be used as a probe of the still unknown gluon Sivers function.
Sivers Asymmetry in $e+p^\uparrow \rightarrow e+J/\psi+X$
The shock L1157-B1, driven by the low-mass protostar L1157-mm, is a unique environment in which to investigate the chemical enrichment due to molecules released from dust grains. IRAM-30m and Plateau de Bure Interferometer observations allow a census of Si-bearing molecules in L1157-B1. We detect SiO and its isotopologues and, for the first time in a shock, SiS. The strong gradient of the [SiO/SiS] abundance ratio across the shock (from >=180 to ~25) points to different chemical origins for the two species. SiO peaks where the jet impacts the cavity walls ([SiO/H2] ~ 1e-6), indicating that SiO is directly released from grains or rapidly formed from released Si in the strong shock occurring at this location. In contrast, SiS is only detected at the head of the cavity opened by previous ejection events ([SiS/H2] ~ 2e-8). This suggests that SiS is not directly released from the grain cores but is instead formed through slow gas-phase processes using part of the released silicon. This finding shows that Si-bearing molecules can be useful for distinguishing regions where grain or gas-phase chemistry dominates.
Silicon-bearing molecules in the shock L1157-B1: first detection of SiS around a Sun-like protostar
Using the notion of compatibility between Poisson brackets and cluster structures in the coordinate rings of simple Lie groups, Gekhtman, Shapiro and Vainshtein conjectured a correspondence between the two. Poisson-Lie groups are classified by the Belavin-Drinfeld classification of solutions to the classical Yang-Baxter equation. For any non-trivial Belavin-Drinfeld data of minimal size for $SL_{n}$, we give an algorithm for constructing an initial seed $\Sigma$ in $\mathcal{O}(SL_{n})$. The cluster structure $\mathcal{C}=\mathcal{C}(\Sigma)$ is then proved to be compatible with the Poisson bracket associated with that Belavin-Drinfeld data, and the seed $\Sigma$ is locally regular. This is the first of two papers; the second one proves the rest of the conjecture: the upper cluster algebra $\bar{\mathcal{A}}_{\mathbb{C}}(\mathcal{C})$ is naturally isomorphic to $\mathcal{O}(SL_{n})$, and the correspondence between Belavin-Drinfeld classes and cluster structures is one to one.
Exotic Cluster Structures on $SL_n$ with Belavin-Drinfeld Data of Minimal Size, I. The Structure