text | summary |
---|---|
The linearly polarized Gowdy $T^3$ model is paradigmatic for studying technical and conceptual issues in the quest for a quantum theory of gravity since, after a suitable and almost complete gauge fixing, it becomes an exactly soluble midisuperspace model. Recently, a new quantization of the model, possessing desired features such as a unitary implementation of the gauge group and of the time evolution, has been put forward and proven to be essentially unique. An appropriate setting for making contact with other approaches to canonical quantum gravity is provided by the Schr\"odinger representation, where states are functionals on the configuration space of the theory. Here we construct this functional description, analyze the time evolution in this context and show that it is also unitary when restricted to physical states, i.e. states which are solutions to the remaining constraint of the theory. | Quantum Gowdy $T^3$ Model: Schrodinger Representation with Unitary Dynamics |
One way to improve the relationship between humans and anthropomorphic agents is to have humans empathize with the agents. In this study, we focused on a task performed between agents and humans. We experimentally investigated hypotheses stating that task difficulty and task content facilitate human empathy. The experiment was a two-way analysis of variance (ANOVA) with four conditions: task difficulty (high, low) and task content (competitive, cooperative). The results showed no main effect for the task content factor and a significant main effect for the task difficulty factor. In addition, pre-task empathy toward the agent decreased after the task. The ANOVA showed that one category of empathy toward the agent increased when the task difficulty was higher than when it was lower. This indicated that this category of empathy was more likely to be affected by the task. The task itself can thus be an important factor when manipulating each category of empathy. | Agents facilitate one category of human empathy through task difficulty |
We analyze a tractable model of a limit order book on short time scales, where the dynamics are driven by stochastic fluctuations between supply and demand. We establish the existence of a limiting distribution for the highest bid, and for the lowest ask, where the limiting distributions are confined between two thresholds. We make extensive use of fluid limits in order to establish recurrence properties of the model. We use the model to analyze various high-frequency trading strategies, and comment on the Nash equilibria that emerge between high-frequency traders when a market in continuous time is replaced by frequent batch auctions. | A Markov model of a limit order book: thresholds, recurrence, and trading strategies |
We apply spectral theoretic methods to obtain a Littlewood-Paley decomposition of abstract inhomogeneous Besov spaces in terms of "smooth" and "bandlimited" functions. Well-known decompositions in several contexts arise as special examples and are unified under the spectral theoretic approach. | A unified approach for Littlewood-Paley Decomposition of Abstract Besov Spaces |
The purpose of this note is to clarify the conditions under which the first-order generalized-harmonic representation of the vacuum Einstein evolution system is linearly degenerate. | Linear Degeneracy of the First-Order Generalized-Harmonic Einstein System |
We present GeoSP, a parallel method that creates a parcellation of the cortical mesh based on a geodesic distance, in order to consider gyri and sulci topology. The method represents the mesh with a graph and performs a K-means clustering in parallel. It has two modes of use: by default, it performs the geodesic cortical parcellation based on the boundaries of the anatomical parcels provided by the Desikan-Killiany atlas; the other mode performs the complete parcellation of the cortex. Results for both modes and with different values for the total number of sub-parcels show homogeneous sub-parcels. Furthermore, the execution time is 82 s for the whole cortex mode and 18 s for the Desikan-Killiany atlas subdivision, for a parcellation into 350 sub-parcels. The proposed method will be available to the community to perform the evaluation of data-driven cortical parcellations. As an example, we compared GeoSP parcellation with the Desikan-Killiany and Destrieux atlases in 50 subjects, obtaining more homogeneous parcels for GeoSP and minor differences in structural connectivity reproducibility across subjects. | GeoSP: A parallel method for a cortical surface parcellation based on geodesic distance |
The Rosetta spacecraft detected transient and sporadic diamagnetic regions around comet 67P/Churyumov-Gerasimenko. In this paper we present a statistical analysis of bulk and suprathermal electron dynamics, as well as a case study of suprathermal electron pitch angle distributions (PADs) near a diamagnetic region. Bulk electron densities are correlated with the local neutral density, and we find a distinct enhancement in electron densities measured over the southern latitudes of the comet. The flux of suprathermal electrons with energies between tens of eV and a couple of hundred eV decreases each time the spacecraft enters a diamagnetic region. We propose a mechanism in which this reduction can be explained by solar wind electrons that are tied to the magnetic field and, after having been transported adiabatically in a decaying magnetic field environment, have limited access to the diamagnetic regions. Our analysis shows that suprathermal electron PADs evolve from an almost isotropic distribution outside the diamagnetic cavity to a field-aligned distribution near the boundary. Electron transport becomes chaotic and non-adiabatic when the electron gyroradius becomes comparable to the radius of curvature of the magnetic field lines, which determines the upper energy limit of the flux variation. This study is based on Rosetta observations at around 200 km cometocentric distance when the comet was at 1.24 AU from the Sun and during the southern summer cometary season. | Electron Dynamics near Diamagnetic Regions of Comet 67P/Churyumov-Gerasimenko |
We introduce and benchmark a systematically improvable route for excited-state calculations, state-specific configuration interaction ($\Delta$CI), which is a particular realization of multiconfigurational self-consistent field and multireference configuration interaction. Starting with a reference built from optimized configuration state functions, separate CI calculations are performed for each targeted state (hence state-specific orbitals and determinants). Accounting for single and double excitations produces the $\Delta$CISD model, which can be improved with second-order Epstein-Nesbet perturbation theory ($\Delta$CISD+EN2) or a posteriori Davidson corrections ($\Delta$CISD+Q). These models were gauged against a vast and diverse set of 294 reference excitation energies. We have found that $\Delta$CI is significantly more accurate than standard ground-state-based CI, whereas close performances were found between $\Delta$CISD and EOM-CC2, and between $\Delta$CISD+EN2 and EOM-CCSD. For larger systems, $\Delta$CISD+Q delivers more accurate results than EOM-CC2 and EOM-CCSD. The $\Delta$CI route can handle challenging multireference problems, singly- and doubly-excited states, from closed- and open-shell species, with overall comparable accuracy, and thus represents a promising alternative to more established methodologies. In its current form, however, it is only reliable for relatively low-lying excited states. | State-Specific Configuration Interaction for Excited States |
Machine learning (ML) techniques have been proposed to automatically select the best solver from a portfolio of solvers, based on predicted performance. These techniques have been applied to various problems, such as Boolean Satisfiability, Traveling Salesperson, Graph Coloring, and others. These methods, known as meta-solvers, take an instance of a problem and a portfolio of solvers as input. They then predict the best-performing solver and execute it to deliver a solution. Typically, the quality of the solution improves with a longer computational time. This has led to the development of anytime selectors, which consider both the instance and a user-prescribed computational time limit. Anytime meta-solvers predict the best-performing solver within the specified time limit. Constructing an anytime meta-solver is considerably more challenging than building a meta-solver without the "anytime" feature. In this study, we focus on the task of designing anytime meta-solvers for the NP-hard optimization problem of Pseudo-Boolean Optimization (PBO), which generalizes the Satisfiability and Maximum Satisfiability problems. The effectiveness of our approach is demonstrated via an extensive empirical study in which our anytime meta-solver improves dramatically on the performance of the Mixed Integer Programming solver Gurobi, which is the best-performing single solver in the portfolio. For example, out of all instances and time limits for which Gurobi failed to find feasible solutions, our meta-solver identified feasible solutions for 47% of them. | Automatic Algorithm Selection for Pseudo-Boolean Optimization with Given Computational Time Limits |
Experimental studies of fission induced in relativistic nuclear collisions show a systematic enhancement of the excitation energy of the primary fragments by a factor of ~ 2, before their decay by fission and other secondary fragments. Although it is widely accepted that doubling the energies of the single-particle states may yield better agreement with fission data, this does not prove fully successful, since it is not able to explain yields for light and intermediate mass fragments. State-of-the-art calculations are successful in describing the overall shape of the mass distribution of fragments, but fail within a factor of 2-10 for a large number of individual yields. Here, we present a novel approach that provides an account of the additional excitation of primary fragments due to final state interaction with the target. Our method is applied to the 238U + 208Pb reaction at 1 GeV/nucleon (and is applicable to other energies), an archetypal case of fission studies with relativistic heavy ions, where we find that the large probability of energy absorption through final state excitation of giant resonances in the fragments can substantially modify the isotopic distribution of final fragments, in better agreement with data. Finally, we demonstrate that large angular momentum transfers to the projectile and to the primary fragments via the same mechanism imply the need for more elaborate theoretical methods than the presently existing ones. | Fission of relativistic nuclei with fragment excitation and reorientation |
We review exact results in N=2 supersymmetric gauge theories defined on S^4 and its deformation. We first summarize the construction of rigid SUSY theories on curved backgrounds based on off-shell supergravity, then explain how to apply the localization principle to supersymmetric path integrals. Closed formulae for the partition function as well as expectation values of non-local BPS observables are presented. | N=2 SUSY gauge theories on S^4 |
In this article we study a multi-asset version of the Merton investment and consumption problem with proportional transaction costs. In general it is difficult to make analytical progress towards a solution in such problems, but we specialise to a case where transaction costs are zero except for sales and purchases of a single asset which we call the illiquid asset. Assuming agents have CRRA utilities and asset prices follow exponential Brownian motions we show that the underlying HJB equation can be transformed into a boundary value problem for a first order differential equation. The optimal strategy is to trade the illiquid asset only when the fraction of the total portfolio value invested in this asset falls outside a fixed interval. Important properties of the multi-asset problem (including when the problem is well-posed, ill-posed, or well-posed only for large transaction costs) can be inferred from the behaviours of a quadratic function of a single variable and another algebraic function. | A multi-asset investment and consumption problem with transaction costs |
The search for an understanding of an energy source great enough to explain the gamma-ray burst (GRB) phenomenon has attracted much attention from the astrophysical community since its discovery. In this paper we extend the work of K. Asano and T. Fukuyama, and J. D. Salmonson and J. R. Wilson, and analyze the off-axis contributions to the energy-momentum deposition rate (MDR) from the neutrino anti-neutrino collisions above a rotating black hole/thin accretion disk system. Our calculations are performed by imaging the accretion disk at a specified observer using the full geodesic equations, and calculating the cumulative MDR from the scattering of all pairs of neutrinos and anti-neutrinos arriving at the observer. Our results shed light on the beaming efficiency of GRB models of this kind. Although we confirm Asano and Fukuyama's conjecture as to the constancy of the beaming for small angles away from the axis, we nevertheless find that the dominant contribution to the MDR comes from near the surface of the disk with a tilt of approximately \pi/4 in the direction of the disk's rotation. We find that the MDR at large radii is directed outward in a conic section centered around the symmetry axis and is larger, by a factor of 10 to 20, than the on-axis values. By including this off-axis disk source, we find a linear dependence of the MDR on the black hole angular momentum (a). In addition, we find that scattering is directed back onto the black hole in regions just above the horizon of the black hole. This gravitational ``in scatter'' may provide an observable high energy signature of the central engine, or at least another channel for accretion. | Off-Axis Neutrino Scattering in GRB Central Engines |
We study the dynamical ordering of rods. In this process, rod alignment via pairwise interactions competes with diffusive wiggling. Under strong diffusion, the system is disordered, but at weak diffusion, the system is ordered. We present an exact steady-state solution for the nonlinear and nonlocal kinetic theory of this process. We find the Fourier transform as a function of the order parameter, and show that Fourier modes decay exponentially with the wave number. We also obtain the order parameter in terms of the diffusion constant. This solution is obtained using iterated partitions of the integers. | Alignment of Rods and Partition of Integers |
In this work, we show a convergence result for the discrete formulation of the generalised KPZ equation $\partial_t u = (\Delta u) + g(u)(\nabla u)^2 + k(\nabla u) + h(u) + f(u)\xi_t(x)$, where $\xi$ is a real-valued random field, $\Delta$ is the discrete Laplacian, and $\nabla$ is a discrete gradient, without fixing the spatial dimension. Our convergence result is established within the discrete regularity structures introduced by Hairer and Erhard [arXiv:1705.02836]. Using new ideas, we extend the convergence result found in [arXiv:2103.13479], which deals with a discrete form of the Parabolic Anderson model driven by a (rescaled) symmetric simple exclusion process. This is the first time that a discrete generalised KPZ equation is treated, and it is a major step toward a general convergence result that will cover a large family of discrete models. | Convergence of space-discretised gKPZ via Regularity Structures |
We construct a flat (and fake-flat) 2-connection in the configuration space of $n$ indistinguishable particles in the complex plane, which categorifies the $sl(2,C)$-Knizhnik-Zamolodchikov connection obtained from the adjoint representation of $sl(2,C)$. This is done by considering the adjoint categorical representation of the string Lie 2-algebra and the notion of an infinitesimal 2-Yang-Baxter operator in a differential crossed module. Specifically, we find an infinitesimal 2-Yang-Baxter operator in the string Lie 2-algebra, proving that any (strict) categorical representation of the string Lie 2-algebra, in a chain complex of vector spaces, yields a flat (and fake-flat) 2-connection in the configuration space, categorifying the $sl(2,C)$-Knizhnik-Zamolodchikov connection. We give a very detailed explanation of all concepts involved, in particular discussing the relevant theory of 2-connections and their two-dimensional holonomy, in the specific case of 2-groups derived from chain complexes of vector spaces. | Categorifying the $sl(2,C)$ Knizhnik-Zamolodchikov Connection via an Infinitesimal 2-Yang-Baxter Operator in the String Lie-2-Algebra |
Hyperspectral image (HSI) has some advantages over natural image for various applications due to the extra spectral information. During the acquisition, it is often contaminated by severe noises including Gaussian noise, impulse noise, deadlines, and stripes. The image quality degradation would badly affect some applications. In this paper, we present an HSI restoration method named smooth and robust low rank tensor recovery. Specifically, we propose a structural tensor decomposition in accordance with the linear spectral mixture model of HSI. It decomposes a tensor into sums of outer matrix vector products, where the vectors are orthogonal due to the independence of endmember spectrums. Based on it, the global low rank tensor structure can be well exploited for HSI denoising. In addition, the 3D anisotropic total variation is used for the spatial-spectral piecewise smoothness of HSI. Meanwhile, the sparse noise, including impulse noise, deadlines and stripes, is detected by the l1 norm regularization. The Frobenius norm is used for the heavy Gaussian noise in some real world scenarios. The alternating direction method of multipliers is adopted to solve the proposed optimization model, which simultaneously exploits the global low rank property and the spatial-spectral smoothness of the HSI. Numerical experiments on both simulated and real data illustrate the superiority of the proposed method in comparison with the existing ones. | Hyperspectral Image Denoising with Partially Orthogonal Matrix Vector Tensor Factorization |
The percolation threshold is an important measure to determine the inherent rigidity of large networks. Existing predictors of the percolation threshold for large networks are computationally intensive to run; hence, it is necessary to develop predictors of the percolation threshold that do not rely on numerical simulations. We demonstrate the efficacy of five machine learning-based regression techniques for the accurate prediction of the percolation threshold. The dataset generated to train the machine learning models contains a total of 777 real and synthetic networks. It consists of 5 statistical and structural properties of networks as features and the numerically computed percolation threshold as the output attribute. We establish that the machine learning models outperform three existing empirical estimators of the bond percolation threshold, and extend this experiment to predict site and explosive percolation. Further, we compared the performance of our models in predicting the percolation threshold using RMSE values. The gradient boosting regressor, multilayer perceptron and random forests regression models achieve the least RMSE values among the considered models. | Machine Learning as an Accurate Predictor for Percolation Threshold of Diverse Networks |
Frequency combs (FCs) -- spectra containing equidistant coherent peaks -- have enabled researchers and engineers to measure the frequencies of complex signals with high precision, thereby revolutionising the areas of sensing, metrology and communications, and also benefiting fundamental science. Although thus far mostly optical FCs have found widespread applications, FCs can in general be generated using waves other than light. Here, we review and summarise recent achievements in the emergent field of acoustic frequency combs (AFCs), including phononic FCs and relevant acousto-optical, Brillouin light scattering and Faraday wave-based techniques that have enabled the development of phonon lasers, quantum computers and advanced vibration sensors. In particular, our discussion is centred around potential applications of AFCs in precision measurements in various physical, chemical and biological systems in conditions where using light, and hence optical FCs, faces technical and fundamental limitations, as is the case, for example, in underwater distance measurements and biomedical imaging applications. This review article will also be of interest to readers seeking a discussion of specific theoretical aspects of different classes of AFCs. To that end, we support the mainstream discussion by the results of our original analysis and numerical simulations that can be used to design the spectra of AFCs generated using oscillations of gas bubbles in liquids, vibrations of liquid drops and plasmonic enhancement of Brillouin light scattering in metal nanostructures. We also discuss the application of non-toxic room-temperature liquid-metal alloys in the field of AFC generation. | Acoustic, phononic, Brillouin light scattering and Faraday wave based frequency combs: physical foundations and applications |
The experiments conducted in previous studies demonstrated the successful performance of the backtracking search optimisation algorithm (BSA) and its insensitivity to several types of optimisation problems. This success of BSA motivated researchers to work on expanding it, e.g., developing its improved versions or employing it for different applications and problem domains. However, there is a lack of literature review on BSA; therefore, reviewing the aforementioned modifications and applications systematically will aid further development of the algorithm. This paper provides a systematic review and meta-analysis that emphasises the related studies and recent developments on BSA. Hence, the objectives of this work are two-fold: (i) First, two frameworks for depicting the main extensions and the uses of BSA are proposed. The first framework is a general framework to depict the main extensions of BSA, whereas the second is an operational framework to present the expansion procedures of BSA to guide the researchers who are working on improving it. (ii) Second, the experiments conducted in this study fairly compare the analytical performance of BSA with four other competitive algorithms: differential evolution (DE), particle swarm optimisation (PSO), artificial bee colony (ABC), and firefly (FF) on 16 different hardness scores of the benchmark functions with different initial control parameters such as problem dimensions and search space. The experimental results indicate that BSA is statistically superior to the aforementioned algorithms in solving different cohorts of numerical optimisation problems such as problems with different levels of hardness score, problem dimensions, and search spaces. | Operational Framework for Recent Advances in Backtracking Search Optimisation Algorithm: A Systematic Review and Performance Evaluation |
While the community keeps promoting end-to-end models over conventional hybrid models, which usually are long short-term memory (LSTM) models trained with a cross entropy criterion followed by a sequence discriminative training criterion, we argue that such conventional hybrid models can still be significantly improved. In this paper, we detail our recent efforts to improve conventional hybrid LSTM acoustic models for high-accuracy and low-latency automatic speech recognition. To achieve high accuracy, we use a contextual layer trajectory LSTM (cltLSTM), which decouples the temporal modeling and target classification tasks, and incorporates future context frames to get more information for accurate acoustic modeling. We further improve the training strategy with sequence-level teacher-student learning. To obtain low latency, we design a two-head cltLSTM, in which one head has zero latency and the other head has a small latency, compared to an LSTM. When trained with Microsoft's 65 thousand hours of anonymized training data and evaluated with test sets with 1.8 million words, the proposed two-head cltLSTM model with the proposed training strategy yields a 28.2\% relative WER reduction over the conventional LSTM acoustic model, with a similar perceived latency. | High-Accuracy and Low-Latency Speech Recognition with Two-Head Contextual Layer Trajectory LSTM Model |
We have carried out multi-configuration Breit-Pauli AUTOSTRUCTURE calculations for the dielectronic recombination (DR) of Fe^{8+} - Fe^{12+} ions. We obtain total DR rate coefficients for the initial ground-level which are an order of magnitude larger than those corresponding to radiative recombination (RR), at temperatures where Fe 3p^q (q=2-6) ions are abundant in photoionized plasmas. The resultant total (DR+RR) rate coefficients are then an order of magnitude larger than those currently in use by photoionized plasma modeling codes such as CLOUDY, ION and XSTAR. These rate coefficients, together with our previous results for q=0 and 1, are critical for determining the ionization balance of the M-shell Fe ions which give rise to the prominent unresolved-transition-array X-ray absorption feature found in the spectrum of many active galactic nuclei. This feature is poorly described by CLOUDY and ION, necessitating an ad hoc modification to the low-temperature DR rate coefficients. Such modifications are no longer necessary and a rigorous approach to such modeling can now take place using these data. | Dielectronic recombination of Fe 3p^q ions: a key ingredient for describing X-ray absorption in active galactic nuclei |
The lightest baryon octet is studied within a covariant and confining Nambu--Jona-Lasinio model. By solving the relativistic Faddeev equations including scalar and axialvector diquarks, we determine the masses and axial charges for \Delta S = 0 transitions. For the latter the degree of violation of SU(3) symmetry arising because of the strange spectator quark(s) is found to be up to 10%. | SU(3)-flavour breaking in octet baryon masses and axial couplings |
Learning to control robots without requiring engineered models has been a long-term goal, promising diverse and novel applications. Yet, reinforcement learning has only achieved limited impact on real-time robot control due to its high demand for real-world interactions. In this work, by leveraging a learnt probabilistic model of drone dynamics, we learn a thrust-attitude controller for a quadrotor through model-based reinforcement learning. No prior knowledge of the flight dynamics is assumed; instead, a sequential latent variable model, used generatively and as an online filter, is learnt from raw sensory input. The controller and value function are optimised entirely by propagating stochastic analytic gradients through generated latent trajectories. We show that "learning to fly" can be achieved with less than 30 minutes of experience with a single drone, and can be deployed solely using onboard computational resources and sensors, on a self-built drone. | Learning to Fly via Deep Model-Based Reinforcement Learning |
Discovering achievements with a hierarchical structure on procedurally generated environments poses a significant challenge. This requires agents to possess a broad range of abilities, including generalization and long-term reasoning. Many prior methods are built upon model-based or hierarchical approaches, with the belief that an explicit module for long-term planning would be beneficial for learning hierarchical achievements. However, these methods require an excessive amount of environment interactions or large model sizes, limiting their practicality. In this work, we identify that proximal policy optimization (PPO), a simple and versatile model-free algorithm, outperforms the prior methods with recent implementation practices. Moreover, we find that the PPO agent can predict the next achievement to be unlocked to some extent, though with low confidence. Based on this observation, we propose a novel contrastive learning method, called achievement distillation, that strengthens the agent's capability to predict the next achievement. Our method exhibits a strong capacity for discovering hierarchical achievements and shows state-of-the-art performance on the challenging Crafter environment using fewer model parameters in a sample-efficient regime. | Discovering Hierarchical Achievements in Reinforcement Learning via Contrastive Learning |
A magnetic system with a phase transition at temperature $T_c$ may exhibit double resonance peaks under a periodic external magnetic field because the time scale matches the external frequency at two different temperatures, one above $T_c$ and the other below $T_c$. We study the double resonance phenomena for the mean-field $q$-state clock model based on the heat-bath-type master equation. We find double peaks as observed in the kinetic Ising case ($q=2$) for all $q\ge 4$, but for the three-state clock model ($q=3$), the existence of double peaks is possible only above a certain external frequency since the model undergoes a discontinuous phase transition. | Double stochastic resonance in the mean-field $q$-state clock models |
The goal of this study is to improve the accuracy of millimeter wave received power prediction by utilizing camera images and radio frequency (RF) signals, while gathering image inputs in a communication-efficient and privacy-preserving manner. To this end, we propose a distributed multimodal machine learning (ML) framework, coined multimodal split learning (MultSL), in which a large neural network (NN) is split into two wirelessly connected segments. The upper segment combines images and received powers for future received power prediction, whereas the lower segment extracts features from camera images and compresses its output to reduce communication costs and privacy leakage. Experimental evaluation corroborates that MultSL achieves higher accuracy than the baselines utilizing either images or RF signals. Remarkably, without compromising accuracy, compressing the lower segment output by 16x yields 16x lower communication latency and 2.8% less privacy leakage compared to the case without compression. | Communication-Efficient Multimodal Split Learning for mmWave Received Power Prediction |
We investigate the dynamics of two interacting electrons confined to a pair of coupled quantum dots driven by an external AC field. By numerically integrating the two-electron Schroedinger equation in time, we find that for certain values of the strength and frequency of the AC field we can cause the electrons to be localised within the same dot, in spite of the Coulomb repulsion between them. Reducing the system to an effective two-site model of Hubbard type and applying Floquet theory leads to a detailed understanding of this effect. This demonstrates the possibility of using appropriate AC fields to manipulate entangled states in mesoscopic devices on extremely short timescales, which is an essential component of practical schemes for quantum information processing. | Coherent transport in a two-electron quantum dot molecule |
Sexual contacts are the main spreading route of HIV. This puts sex workers at higher risk of infection even in populations where HIV prevalence is moderate or low. Alongside condom use, Pre-Exposure Prophylaxis (PrEP) is an effective tool for sex workers to reduce their risk of HIV acquisition. However, PrEP provides no direct protection against sexually transmitted infections (STIs) other than HIV, unlike condoms. We use an empirical network of sexual contacts among female sex workers (FSWs) and clients to simulate the spread of HIV and gonorrhea. We then investigate the effect of PrEP adoption and adherence on both HIV and gonorrhea prevalence. We also study the effect of a potential increase in condomless acts due to lowered risk perception with respect to the no-PrEP scenario (risk compensation). We find that when HIV is the only disease circulating, PrEP is effective in reducing HIV prevalence, even with high risk compensation. Instead, the complex interplay between the two diseases shows that different levels of risk compensation require different intervention strategies. Finally, we find that providing PrEP only to the most active FSWs is less effective than uniform PrEP adoption. Our work shows that the effects emerging from the complex interactions between these diseases and the available prophylactic measures need to be accounted for, to devise effective intervention strategies. | Evaluating the impact of PrEP on HIV and gonorrhea on a networked population of female sex workers |
In many cases, the near-horizon geometry encodes sufficient information to compute conserved charges of a gravitational solution, including thermodynamic quantities. These charges are Noether charges associated to asymptotic isometries that preserve appropriate boundary conditions at the future horizon. For isolated, compact horizons these charges turn out to be integrable, conserved and finite, and they have been studied in many examples of interest, notably in 3+1 dimensions. In higher dimensions, where the variety of horizon structures is more diverse, it is still possible to apply the same method, although explicit examples have so far been limited to simple topologies. In this paper, we demonstrate that such computations can also be applied to higher-dimensional solutions with event horizons whose spacelike cross sections exhibit non-trivial topology. We provide several explicit examples, with particular focus on the 5-dimensional black ring. | Probing the near-horizon geometry of black rings |
In this work we formulate and test a new procedure, the Multiscale Perturbation Method for Two-Phase Flows (MPM-2P), for the fast, accurate and naturally parallelizable numerical solution of two-phase, incompressible, immiscible displacement in porous media approximated by an operator splitting method. The proposed procedure is based on domain decomposition and combines the Multiscale Perturbation Method (MPM) with the Multiscale Robin Coupled Method (MRCM). When an update of the velocity field is called by the operator splitting algorithm, the MPM-2P may provide, depending on the magnitude of a dimensionless algorithmic parameter, an accurate and computationally inexpensive approximation for the velocity field by reusing previously computed multiscale basis functions. Thus, a full update of all multiscale basis functions required by the MRCM for the construction of a new velocity field is avoided. There are two main steps in the formulation of the MPM-2P. Initially, for each subdomain one local boundary value problem with trivial Robin boundary conditions is solved (instead of the full set of multiscale basis functions that would be required by the MRCM). Then, the solution of an inexpensive interface problem provides the velocity field on the skeleton of the decomposition of the domain. The resulting approximation for the velocity field is obtained by downscaling. We consider challenging two-phase flow problems, with high-contrast permeability fields and water-oil finger growth in homogeneous media. Our numerical experiments show that the use of the MPM-2P gives an exceptional speed-up of two-phase flow simulations - almost a 90% reduction in computational cost. Hundreds of MRCM solutions can be replaced by inexpensive MPM-2P solutions, and water breakthrough can be simulated with very few updates of the MRCM set of multiscale basis functions. | The Multiscale Perturbation Method for Two-Phase Reservoir Flow Problems |
When applying machine learning to problems in NLP, there are many choices to make about how to represent input texts. These choices can have a big effect on performance, but they are often uninteresting to researchers or practitioners who simply need a module that performs well. We propose an approach to optimizing over this space of choices, formulating the problem as global optimization. We apply a sequential model-based optimization technique and show that our method makes standard linear models competitive with more sophisticated, expensive state-of-the-art methods based on latent variable models or neural networks on various topic classification and sentiment analysis problems. Our approach is a first step towards black-box NLP systems that work with raw text and do not require manual tuning. | Bayesian Optimization of Text Representations |
We study the high density region of QCD within an effective model obtained in the frame of the hopping parameter expansion and choosing Polyakov-type loops as the main dynamical variables representing the fermionic matter. To get a first idea of the phase structure, the model is analyzed in strong coupling expansion and using a mean field approximation. In numerical simulations, the model still shows the so-called sign problem, a difficulty peculiar to non-zero chemical potential, but it permits the development of algorithms which ensure a good overlap of the Monte Carlo ensemble with the true one. We review the main features of the model and present calculations concerning the dependence of various observables on the chemical potential and on the temperature, in particular of the charge density and the diquark susceptibility, which may be used to characterize the various phases expected at high baryonic density. We obtain in this way information about the phase structure of the model and the corresponding phase transitions and crossover regions, which can be considered as hints for the behaviour of non-zero density QCD. | A Model for QCD at High Density and Large Quark Mass |
We calculate the gravitational-wave (GW) signatures of detailed 3D core-collapse supernova simulations spanning a range of massive stars. Most of the simulations are carried out to times late enough to capture more than 95% of the total GW emission. We find that the f/g-mode and f-mode of proto-neutron star oscillations carry away most of the GW power. The f-mode frequency inexorably rises as the proto-neutron star (PNS) core shrinks. We demonstrate that the GW emission is excited mostly by accretion plumes onto the PNS that energize modal oscillations and also high-frequency (``haze'') emission correlated with the phase of violent accretion. The duration of the major phase of emission varies with exploding progenitor and there is a strong correlation between the total GW energy radiated and the compactness of the progenitor. Moreover, the total GW emissions vary by as much as three orders of magnitude from star to star. For black-hole formation, the GW signal tapers off slowly and does not manifest the haze seen for the exploding models. For such failed models, we also witness the emergence of a spiral shock motion that modulates the GW emission at a frequency near $\sim$100 Hertz that slowly increases as the stalled shock sinks. We find significant angular anisotropy of both the high- and low-frequency (memory) GW emissions, though the latter have very little power. | The Gravitational-Wave Signature of Core-Collapse Supernovae |
Let $(\otimes_{j=1}^nV_j)[0]$ be the zero weight subspace of a tensor product of finite-dimensional irreducible $\frak{sl}_2$-modules. The dynamical elliptic Bethe algebra is a commutative algebra of differential operators acting on $(\otimes_{j=1}^nV_j)[0]$-valued functions on the Cartan subalgebra of $\frak{sl}_2$. The algebra is generated by values of the coefficient $S_2(x)$ of a certain differential operator $D=(d/dx)^2+S_2(x)$, due to V. Rubtsov, A. Silantyev, and D. Talalaev in 2009. We express $S_2(x)$ in terms of the KZB operators introduced by G. Felder and C. Wieczerkowski in 1994. We study the eigenfunctions of the dynamical elliptic Bethe algebra by the Bethe ansatz method. Under certain assumptions we show that such Bethe eigenfunctions are in one-to-one correspondence with ordered pairs of theta-polynomials of certain degree. The correspondence between Bethe eigenfunctions and two-dimensional spaces, generated by the two theta-polynomials, is an analog of the non-dynamical non-elliptic correspondence between the eigenvectors of the $\frak{gl}_2$ Gaudin model and the two-dimensional subspaces of the vector space $\Bbb C[x]$, due to E. Mukhin, V. Tarasov, and A. Varchenko. We obtain a counting result for, equivalently, certain solutions of the Bethe ansatz equation, certain fibers of the elliptic Wronski map, or ratios of theta polynomials whose derivative is of a certain form. We give an asymptotic expansion for Bethe eigenfunctions in a certain limit, and deduce from that that the Weyl involution acting on Bethe eigenfunctions coincides with the action of an analytic involution given by the transposition of theta-polynomials in the associated ordered pair. | Dynamical elliptic Bethe algebra, KZB eigenfunctions, and theta-polynomials |
We investigate the conditions for a bounce to occur in Friedmann-Robertson-Walker cosmologies for the class of fourth order gravity theories. The general bounce criterion is determined and constraints on the parameters of three specific models are given in order to obtain bounce solutions. It is found that, unlike the case of General Relativity, a bounce appears to be possible in open and flat cosmologies. | Bounce Conditions in f(R) Cosmologies |
We prove the first Fixed-depth Size-hierarchy Theorem for uniform AC$^0[\oplus]$ circuits; in particular, for fixed $d$, the class $\mathcal{C}_{d,k}$ of uniform AC$^0[\oplus]$ formulas of depth $d$ and size $n^k$ form an infinite hierarchy. For this, we find the first class of explicit functions giving (up to polynomial factor) matching upper and lower bounds for AC$^0[\oplus]$ formulas, derived from the $\delta$-Coin Problem, the computational problem of distinguishing between coins that are heads with probability $(1+\delta)/2$ or $(1-\delta)/2,$ where $\delta$ is a parameter going to $0$. We study this problem's complexity and make progress on both upper bounds and lower bounds. Upper bounds. We find explicit monotone AC$^0$ formulas solving the $\delta$-coin problem, having depth $d$, size $\exp(O(d(1/\delta)^{1/(d-1)}))$, and sample complexity poly$(1/\delta)$, for constant $d\ge2$. This matches previous upper bounds of O'Donnell and Wimmer (ICALP 2007) and Amano (ICALP 2009) in terms of size and improves the sample complexity. Lower bounds. The upper bounds are nearly tight even for the stronger model of AC$^0[\oplus]$ formulas (which allow NOT and Parity gates): any AC$^0[\oplus]$ formula solving the $\delta$-coin problem must have size $\exp(\Omega(d(1/\delta)^{1/(d-1)})).$ This strengthens a result of Cohen, Ganor and Raz (APPROX-RANDOM 2014), who prove a similar result for AC$^0$, and a result of Shaltiel and Viola (SICOMP 2010), who give a superpolynomially weaker (still exponential) lower bound. The upper bound is a derandomization involving a use of Janson's inequality (as far as we know, the first such use of the inequality) and classical combinatorial designs. For the lower bound, we prove an optimal (up to constant factor) degree lower bound for multivariate polynomials over $\mathbb{F}_2$ solving the $\delta$-coin problem, which may be of independent interest. | A Fixed-Depth Size-Hierarchy Theorem for AC$^0[\oplus]$ via the Coin Problem |
We present a sequence-to-action parsing approach for the natural language to SQL task that incrementally fills the slots of a SQL query with feasible actions from a pre-defined inventory. To account for the fact that typically there are multiple correct SQL queries with the same or very similar semantics, we draw inspiration from syntactic parsing techniques and propose to train our sequence-to-action models with non-deterministic oracles. We evaluate our models on the WikiSQL dataset and achieve an execution accuracy of 83.7% on the test set, a 2.1% absolute improvement over the models trained with traditional static oracles assuming a single correct target SQL query. When further combined with the execution-guided decoding strategy, our model sets a new state-of-the-art performance at an execution accuracy of 87.1%. | IncSQL: Training Incremental Text-to-SQL Parsers with Non-Deterministic Oracles |
The Heusler compound ScInAu$_2$ was previously reported to have a superconducting ground state with a critical temperature of 3.0 K. Recent high throughput calculations have also predicted that the material harbors a topologically non-trivial band structure similar to that reported for beta-PdBi$_2$. In an effort to explore the interplay between the superconducting and topological properties, electrical resistance, magnetization, and x-ray diffraction measurements were performed on polycrystalline ScInAu$_2$. The data reveal that high-quality polycrystalline samples lack the superconducting transition present in samples that have not been annealed. These results indicate that the earlier reported superconductivity is non-intrinsic. Several compounds in the Au-In-Sc ternary phase space (ScAu$_2$, ScIn$_3$, and ScInAu$_2$) were explored in an attempt to identify the secondary phase responsible for the non-intrinsic superconductivity. The results suggest that elemental In is responsible for the reported superconductivity in ScInAu$_2$. | Absence of superconductivity in topological metal ScInAu$_2$ |
This paper studies the well-posedness of a class of nonlocal parabolic partial differential equations (PDEs), or equivalently equilibrium Hamilton-Jacobi-Bellman equations, which has a strong tie with the characterization of the equilibrium strategies and the associated value functions for time-inconsistent stochastic control problems. Specifically, we consider nonlocality in both time and space, which allows for modelling of the stochastic control problems with initial-time-and-state dependent objective functionals. We leverage the method of continuity to show the global well-posedness within our proposed Banach space with our established Schauder prior estimate for the linearized nonlocal PDE. Then, we adopt a linearization method and Banach's fixed point arguments to show the local well-posedness of the nonlocal fully nonlinear case, while the global well-posedness is attainable provided that a very sharp a-priori estimate is available. On top of the well-posedness results, we also provide a probabilistic representation of the solutions to the nonlocal fully nonlinear PDEs and an estimate on the difference between the value functions of sophisticated and na\"{i}ve controllers. Finally, we give a financial example of time inconsistency that is proven to be globally solvable. | On the Well-posedness of Hamilton-Jacobi-Bellman Equations of the Equilibrium Type |
Deep Learning based methods have emerged as the indisputable leaders for virtually all image restoration tasks. Especially in the domain of microscopy images, various content-aware image restoration (CARE) approaches are now used to improve the interpretability of acquired data. Naturally, there are limitations to what can be restored in corrupted images, and like for all inverse problems, many potential solutions exist, and one of them must be chosen. Here, we propose DivNoising, a denoising approach based on fully convolutional variational autoencoders (VAEs), overcoming the problem of having to choose a single solution by predicting a whole distribution of denoised images. First, we introduce a principled way of formulating the unsupervised denoising problem within the VAE framework by explicitly incorporating imaging noise models into the decoder. Our approach is fully unsupervised, only requiring noisy images and a suitable description of the imaging noise distribution. We show that such a noise model can either be measured, bootstrapped from noisy data, or co-learned during training. If desired, consensus predictions can be inferred from a set of DivNoising predictions, leading to competitive results with other unsupervised methods and, on occasion, even with the supervised state-of-the-art. DivNoising samples from the posterior enable a plethora of useful applications. We (i) show denoising results for 13 datasets, (ii) discuss how optical character recognition (OCR) applications can benefit from diverse predictions, and (iii) demonstrate how instance cell segmentation improves when using diverse DivNoising predictions. | Fully Unsupervised Diversity Denoising with Convolutional Variational Autoencoders |
We introduce a geometrically transparent strict saddle property for nonsmooth functions. This property guarantees that simple proximal algorithms on weakly convex problems converge only to local minimizers, when randomly initialized. We argue that the strict saddle property may be a realistic assumption in applications, since it provably holds for generic semi-algebraic optimization problems. | Proximal methods avoid active strict saddles of weakly convex functions |
This paper explores orbits in extended mass distributions and develops an analytic approximation scheme based on epicycloids (spirograph patterns). We focus on the Hernquist potential which provides a good model for many astrophysical systems, including elliptical galaxies, dark matter halos, and young embedded star clusters. For a given potential, one can readily calculate orbital solutions as a function of energy and angular momentum using numerical methods. In contrast, this paper presents a number of analytic results for the Hernquist potential and proves a series of general constraints showing that orbits have similar properties for any extended mass distribution (including, e.g., the NFW profile). We discuss circular orbits, radial orbits, zero energy orbits, different definitions of eccentricity, analogs of Kepler's law, the definition of orbital elements, and the relation of these orbits to spirograph patterns (epicycloids). Over much of parameter space the orbits can be adequately described (with accuracy better than 10%) using the parametric equations of epicycloids, thereby providing an analytic description of the orbits. As an application of this formal development, we find a solution for the orbit of the Large Magellanic Cloud in the potential of our Galaxy. | Orbits in Extended Mass Distributions: General Results and the Spirographic Approximation |
We investigate the distribution of errors on a computationally useful entangled state generated via the repeated emission from an emitter undergoing strongly non-Markovian evolution. For emitter-environment coupling of pure-dephasing form, we show that the probability that a particular pattern of errors occurs has a bound of Markovian form, and thus accuracy threshold theorems based on Markovian models should be just as effective. This is the case, for example, for a charged quantum dot emitter in a moderate to strong magnetic field. Beyond the pure-dephasing assumption, though complicated error structures can arise, they can still be qualitatively bounded by a Markovian error model. | Error distributions on large entangled states with non-Markovian dynamics |
We present an X-ray imaging and spectroscopic study of a partially occulted C7.7 flare on 2003 April 24 observed by RHESSI that accompanied a prominence eruption observed by TRACE. (1) The activation and rise of the prominence occur during the preheating phase of the flare. The initial X-ray emission appears as a single coronal source at one leg of the prominence and it then splits into a double source. Such a source splitting happens three times, each coinciding with an increased X-ray flux and plasma temperature, suggestive of fast reconnection in a localized current sheet and an enhanced energy release rate. In the late stage of this phase, the prominence displays a helical structure. These observations are consistent with the tether-cutting and/or kink instability model for triggering solar eruptions. (2) The eruption of the prominence takes place during the flare impulsive phase. From then on, there appear signatures predicted by the classical CSHKP model of two-ribbon flares occurring in a vertical current sheet trailing an eruption. These signatures include an EUV cusp and a current-sheet-like feature (or ridge) above it. There is also X-ray emission along the EUV ridge both below and above the cusp, which in both regions appears closer to the cusp at higher energies in the thermal regime. This trend is reversed in the nonthermal regime. (3) Spectral analysis indicates thermal X-rays from all sources throughout the flare, while during the impulsive phase there is additional nonthermal emission which primarily comes from the coronal source below the cusp. This source also has a lower temperature, a higher emission measure, and a much harder nonthermal spectrum than the upper sources. | Episodic X-ray Emission Accompanying the Activation of an Eruptive Prominence: Evidence of Episodic Magnetic Reconnection |
In order to understand the origin and evolution of comets, one must decipher the processes that formed and processed cometary ice and dust. Cometary materials have diverse physical and chemical properties and are mixed in various ways. Laboratory experiments are capable of producing simple to complex analogues of comet-like materials, measuring their properties, and simulating the processes by which their compositions and structures may evolve. The results of laboratory experiments are essential for the interpretations of comet observations and complement theoretical models. They are also necessary for planning future missions to comets. This chapter presents an overview of past and ongoing laboratory experiments exploring how comets were formed and transformed, from the nucleus interior and surface, to the coma. Throughout these sections, the pending questions are highlighted, and the perspectives and prospects for future experiments are discussed. | Laboratory Experiments to Understand Comets |
We report precise Doppler measurements of the stars HD 216437, HD 196050 and HD 160691 obtained with the Anglo-Australian Telescope using the UCLES spectrometer together with an iodine cell as part of the Anglo-Australian Planet Search. Our measurements reveal periodic Keplerian velocity variations that we interpret as evidence for planets in orbit around these solar-type stars. HD 216437 has a period of 1294$\pm$250 d, a semi-amplitude of 38$\pm$4 m s$^{-1}$ and an eccentricity of 0.33$\pm$0.09. The minimum (M sin $i$) mass of the companion is 2.1$\pm$0.3 M$_{\rm JUP}$ and the semi-major axis is 2.4$\pm$0.5 au. HD 196050 has a period of 1288$\pm$230 d, a semi-amplitude of 54$\pm$8 m s$^{-1}$ and an eccentricity of 0.28$\pm$0.15. The minimum mass of the companion is 3.0$\pm$0.5 M$_{\rm JUP}$ and the semi-major axis is 2.3$\pm$0.5 au. We also report further observations of the metal rich planet bearing star HD 160691. Our new solution confirms the previously reported planet and shows a trend indicating a second, longer-period companion. These discoveries add to the growing number of mildly eccentric, long-period extra-solar planets around Sun-like stars. As seems to be typical of stars with planets, both stars are metal-rich. | Extra-solar planets around HD196050, HD216437 and HD160691 |
We use data from a two-year intensive RXTE monitoring campaign of the broad-line radio galaxy 3C 390.3 to investigate its stationarity. In order to exploit the potential information contained in a time series more efficiently, we use a multi-technique approach. Specifically, the temporal properties are first studied with a technique borrowed from non-linear dynamics. Then we utilize traditional linear techniques both in the Fourier domain, by estimating the power spectral density, and in the time domain with a structure function analysis. Finally we investigate directly the probability density function associated with the physical process underlying the signal. All the methods demonstrate the presence of non-stationarity. The structure function analysis, and (at a somewhat lower significance level) the power spectral density, suggest that 3C 390.3 is not even second-order stationary. This result indicates, for the first time, that the variability properties of active galactic nuclei light curves may also vary with time, in the same way as they do in Galactic black holes, strengthening the analogy between the X-ray variability properties of the two types of objects. | Non-stationary variability in AGN: the case of 3C 390.3 |
Bayesian Optimization (BO) is a method for globally optimizing black-box functions. While BO has been successfully applied to many scenarios, developing effective BO algorithms that scale to functions with high-dimensional domains is still a challenge. Optimizing such functions by vanilla BO is extremely time-consuming. Alternative strategies for high-dimensional BO that are based on the idea of embedding the high-dimensional space into a lower-dimensional one are sensitive to the choice of the embedding dimension, which needs to be pre-specified. We develop a new computationally efficient high-dimensional BO method that exploits variable selection. Our method is able to automatically learn axis-aligned sub-spaces, i.e. spaces containing selected variables, without requiring any pre-specified hyperparameters. We theoretically analyze the computational complexity of our algorithm and derive the regret bound. We empirically show the efficacy of our method on several synthetic and real problems. | Computationally Efficient High-Dimensional Bayesian Optimization via Variable Selection
Globally ice-covered oceans have been found on multiple moons in the solar system and may also have been a feature of Earth's past. However, relatively little is understood about the dynamics of these ice-covered oceans, which affect not only the physical environment but also any potential life and its detectability. A number of studies have simulated the circulation of icy-world oceans, but have come to seemingly widely different conclusions. To better understand and narrow down these diverging results, we discuss energetic constraints for the circulation on ice-covered oceans, focusing in particular on Snowball Earth, Europa, and Enceladus. Energy input that can drive ocean circulation on ice-covered bodies can be associated with heat and salt fluxes at the boundaries as well as ocean tides and librations. We show that heating from the solid core balanced by heat loss through the ice sheet can drive an ocean circulation, but the resulting flows would be relatively weak and strongly affected by rotation. Salt fluxes associated with freezing and melting at the ice sheet boundary are unlikely to energetically drive a circulation, although they can shape the large-scale circulation when combined with turbulent mixing. Ocean tides and librations may provide an energy source for such turbulence, but the magnitude of this energy source remains highly uncertain for the icy moons, which poses a major obstacle to predicting the ocean dynamics of icy worlds and remains an important topic for future research. | Energetic constraints on ocean circulations of icy ocean worlds
We study possibilities to explain the whole dark matter abundance by primordial black holes (PBHs) or to explain the merger rate of binary black holes estimated from the gravitational wave detections by LIGO/Virgo. We assume that the PBHs originated in a radiation- or matter-dominated era from large primordial curvature perturbations generated by inflation. We take a simple model-independent approach considering inflation with large running spectral indices, which are parametrized by $n_\text{s}, \alpha_\text{s}$, and $\beta_\text{s}$ consistent with the observational bounds. The merger rate is fitted by PBHs with masses of $\mathcal{O}(10)$ $M_{\odot}$ produced in the radiation-dominated era. Then the running of running should be $\beta_\text{s} \sim 0.025$, which can be tested by future observations. On the other hand, the whole abundance of dark matter is consistent with PBHs with masses of asteroids ($\mathcal{O}(10^{-17})~M_{\odot}$) produced in an early matter-dominated era if a set of running parameters is properly realized. | Primordial Black Hole Dark Matter and LIGO/Virgo Merger Rate from Inflation with Running Spectral Indices: Formation in the Matter- and/or Radiation-Dominated Universe
We formulate a semiclassical theory for systems with spin-orbit interactions. Using spin coherent states, we start from the path integral in an extended phase space, formulate the classical dynamics of the coupled orbital and spin degrees of freedom, and calculate the ingredients of Gutzwiller's trace formula for the density of states. For a two-dimensional quantum dot with a spin-orbit interaction of Rashba type, we obtain satisfactory agreement with fully quantum-mechanical calculations. The mode-conversion problem, which arose in an earlier semiclassical approach, has hereby been overcome. | Semiclassical theory of spin-orbit interactions using spin coherent states |
We present a detailed analysis, based on the Forward Flux Sampling (FFS) simulation method, of the switching dynamics and stability of two models of genetic toggle switches, consisting of two mutually-repressing genes encoding transcription factors (TFs); in one model (the exclusive switch), they mutually exclude each other's binding, while in the other model (general switch) the two transcription factors can bind simultaneously to the shared operator region. We assess the role of two pairs of reactions that influence the stability of these switches: TF-TF homodimerisation and TF-DNA association/dissociation. We factorise the flipping rate $k$ into the product of the probability $\rho(q^*)$ of finding the system at the dividing surface (separatrix) between the two stable states, and a kinetic prefactor $R$. In the case of the exclusive switch, the rate of TF-operator binding affects both $\rho(q^*)$ and $R$, while the rate of TF dimerisation affects only $R$. In the case of the general switch, both TF-operator binding and TF dimerisation affect $k$, $R$ and $\rho(q^*)$. To elucidate this, we analyse the transition state ensemble (TSE). For the exclusive switch, varying the rate of TF-operator binding can drastically change the pathway of switching, while changing the rate of dimerisation changes the switching rate without altering the mechanism. The switching pathways of the general switch are highly robust to changes in the rate constants of both TF-operator and TF-TF binding, even though these rate constants do affect the flipping rate; this feature is unique for non-equilibrium systems. | Reaction coordinates for the flipping of genetic switches
We prove using an integral criterion the existence and completeness of the wave operators $W_{\pm}(\Delta_h^{(k)}, \Delta_g^{(k)}, I_{g,h}^{(k)})$ corresponding to the Hodge Laplacians $\Delta_\nu^{(k)}$ acting on differential $k$-forms, for $\nu\in\{g,h\}$, induced by two quasi-isometric Riemannian metrics $g$ and $h$ on a complete open smooth manifold $M$. In particular, this result provides a criterion for the absolutely continuous spectra $\sigma_{\mathrm{ac}}(\Delta_g^{(k)}) = \sigma_{\mathrm{ac}}(\Delta_h^{(k)})$ of $\Delta_\nu^{(k)}$ to coincide. The proof is based on gradient estimates obtained by probabilistic Bismut-type formulae for the heat semigroup defined by spectral calculus. By these localised formulae, the integral criterion requires local curvature bounds and some upper local control on the heat kernel acting on functions provided the Weitzenb\"ock curvature endomorphism is in the Kato class, but no control on the injectivity radii. A consequence is a stability result of the absolutely continuous spectrum under a Ricci flow. As an application we concentrate on the important case of conformal perturbations. | Scattering theory for the Hodge Laplacian |
We show that, for every transitive group $G$ of degree $n\ge 2$, the largest abelian quotient of $G$ has cardinality at most $4^{n/\sqrt{\log_2 n}}$. This gives a positive answer to a 1989 outstanding question of L\'aszl\'o Kov\'acs and Cheryl Praeger. | A subexponential bound on the cardinality of abelian quotients in finite transitive groups |
We used resonant laser spectroscopy of multiple confocal InGaAs quantum dots to spatially locate charge fluctuators in the surrounding semiconductor matrix. By mapping out the resonance condition between a narrow-band laser and the neutral exciton transitions of individual dots in a field effect device, we identified spectral discontinuities as arising from charging and discharging events that take place within the volume adjacent to the quantum dots. Our analysis suggests that residual carbon dopants are a major source of charge-fluctuating traps in quantum dot heterostructures. | Locating environmental charge impurities with confluent laser spectroscopy of multiple quantum dots |
We take advantage of a state-of-the-art semi-analytic model of galaxy formation, and the model presented in \citet{contini21a}, to investigate the mass distribution of Brightest Cluster Galaxies (BCGs) and Intra-Cluster Light (ICL) by addressing two points: (1) the region of transition between a BCG-dominated distribution and an ICL-dominated one, and (2) the relation between the total BCG+ICL mass and the ICL mass alone. We find the transition radius to be independent of both BCG+ICL and halo masses, with an average of 60$\pm$40 kpc, in good agreement with previous observational measurements; however, given the large scatter, it can be considered a physical separation between the two components only on cluster scales. From the analysis of the $M_{ICL}-M_{BCG+ICL}$ relation, we build a method able to extract the ICL mass directly from knowledge of the BCG+ICL mass. Given the large scatter for low-mass systems, such a method under/over-predicts the true value of the ICL in a significant way, up to a factor of three in the worst cases. On the other hand, for $\log M_{BCG+ICL}>12$ or $\log M_{Halo}>14$, the difference between the true value and the one extracted from the $M_{ICL}-M_{BCG+ICL}$ relation ranges between $\pm$30\%. We therefore suggest this relation as a reliable test for observational works aiming to isolate the ICL from the BCG, for systems hosted by haloes on cluster scales. | The Transition Region between Brightest Cluster Galaxies and Intra-Cluster Light in Galaxy Groups and Clusters
A cellular automaton that is a generalization of the box-ball system with either many kinds of balls or finite carrier capacity is proposed and studied through two discrete integrable systems: nonautonomous discrete KP lattice and nonautonomous discrete two-dimensional Toda lattice. Applying reduction technique and ultradiscretization procedure to these discrete systems, we derive two types of time evolution equations of the proposed cellular automaton, and particular solutions to the ultradiscrete equations. | Another generalization of the box-ball system with many kinds of balls |
Backdoor attacks are a kind of emergent security threat in deep learning. After being injected with a backdoor, a deep neural model will behave normally on standard inputs but give adversary-specified predictions once the input contains specific backdoor triggers. In this paper, we find two simple tricks that can make existing textual backdoor attacks much more harmful. The first trick is to add an extra training task to distinguish poisoned and clean data during the training of the victim model, and the second one is to use all the clean training data rather than remove the original clean data corresponding to the poisoned data. These two tricks are universally applicable to different attack models. We conduct experiments in three tough situations including clean data fine-tuning, low-poisoning-rate, and label-consistent attacks. Experimental results show that the two tricks can significantly improve attack performance. This paper exhibits the great potential harmfulness of backdoor attacks. All the code and data can be obtained at \url{https://github.com/thunlp/StyleAttack}. | Textual Backdoor Attacks Can Be More Harmful via Two Simple Tricks |
A cavity-modified master equation is derived for a coherently driven, V-type three-level atom coupled to a single-mode cavity in the bad cavity limit. We show that population inversion in both the bare and dressed-state bases may be achieved, originating from the enhancement of the atom-cavity interaction when the cavity is resonant with an atomic dressed-state transition. The atomic populations in the dressed state representation are analysed in terms of the cavity-modified transition rates. The atomic fluorescence spectrum and probe absorption spectrum are also investigated, and it is found that the spectral profiles may be controlled by adjusting the cavity frequency. Peak suppression and line narrowing occur under appropriate conditions. | Cavity induced modifications to the resonance fluorescence and probe absorption of a laser-dressed V atom
Noisy training labels can hurt model performance. Most approaches that aim to address label noise assume label noise is independent from the input features. In practice, however, label noise is often feature- or \textit{instance-dependent}, and therefore biased (i.e., some instances are more likely to be mislabeled than others). E.g., in clinical care, female patients are more likely to be under-diagnosed for cardiovascular disease compared to male patients. Approaches that ignore this dependence can produce models with poor discriminative performance, and in many healthcare settings, can exacerbate issues around health disparities. In light of these limitations, we propose a two-stage approach to learn in the presence of instance-dependent label noise. Our approach utilizes \textit{anchor points}, a small subset of data for which we know the observed and ground truth labels. On several tasks, our approach leads to consistent improvements over the state-of-the-art in discriminative performance (AUROC) while mitigating bias (area under the equalized odds curve, AUEOC). For example, when predicting acute respiratory failure onset on the MIMIC-III dataset, our approach achieves a harmonic mean (AUROC and AUEOC) of 0.84 (SD [standard deviation] 0.01) while that of the next best baseline is 0.81 (SD 0.01). Overall, our approach improves accuracy while mitigating potential bias compared to existing approaches in the presence of instance-dependent label noise. | Leveraging an Alignment Set in Tackling Instance-Dependent Label Noise
Depletion-induced interactions between colloids in colloid-polymer mixtures depend in range and strength on size, shape, and concentration of depletants. Crowding by colloids in turn affects shapes of polymer coils, such as biopolymers in biological cells. By simulating hard-sphere colloids and random-walk polymers, modeled as fluctuating ellipsoids, we compute depletion-induced potentials and polymer shape distributions. Comparing results with exact density-functional theory calculations, molecular simulations, and experiments, we show that polymer shape fluctuations play an important role in depletion and crowding phenomena. | Influence of Polymer Shape on Depletion Potentials and Crowding in Colloid-Polymer Mixtures |
Considering a ($1+1$)-dimensional fluid in the presence of a gravitational trace anomaly, as an effective description of a higher-dimensional fluid, the hydrodynamics is discussed through a first order thermodynamic description. Contrary to the existing approaches, the fluid velocity is identified through the auxiliary field required to describe the Polyakov action for the effective description of the relevant energy-momentum tensor. The thermodynamic and fluid quantities, on a static black hole spacetime, are calculated both near the horizon as well as at asymptotic infinity. The Unruh vacuum appears to be the suitable one for the present analysis. Interestingly, we observe that such a fluid description is equally capable of calculating the Hawking flux, thereby establishing a probable close connection with the well known anomaly cancellation method for the Hawking effect. | Hydrodynamics of $(1+1)$ dimensional fluid in the presence of gravitational anomaly from first order thermodynamics and its connection with Hawking effect
The authors report on the use of carbon nanofiber nanoemitters to ionize deuterium atoms for the generation of neutrons in a deuterium-deuterium reaction in a preloaded target. Acceleration voltages in the range of 50-80 kV are used. Field emission of electrons is investigated to characterize the emitters. The experimental setup and sample preparation are described and first data of neutron production are presented. Ongoing experiments to increase neutron production yields by optimizing the field emitter geometry and surface conditions are discussed. | Development of a Compact Neutron Source based on Field Ionization Processes |
I discuss the one-jet inclusive cross section, $d\sigma/dE_T$, emphasizing the concept of infrared safety and the cone definition of jets. Then I estimate the size of power corrections to the jet cross section, which become important at smaller values of $E_T$. | Jet Observables in Theory and Reality
The holographic relation between local boundary conformal quantum field theories (BCFT) and their non-local boundary restrictions is reviewed, and non-vacuum BCFT's, whose existence was conjectured previously, are constructed. | On local boundary CFT and non-local CFT on the boundary |
The Database Forensics (DBF) domain is a branch of digital forensics, concerned with the identification, collection, reconstruction, analysis, and documentation of database crimes. Different researchers have introduced several identification models to handle database crimes. The majority of proposed models are not specific and are redundant, which makes them problematic given the multidimensional nature and high diversity of database systems. Accordingly, using the metamodeling approach, the current study is aimed at proposing a unified identification model applicable to the database forensic field. The model integrates and harmonizes all existing identification processes into a single abstract model, called the Common Identification Process Model (CIPM). The model comprises six phases: 1) notifying an incident, 2) responding to the incident, 3) identification of the incident source, 4) verification of the incident, 5) isolation of the database server, and 6) provision of an investigation environment. CIPM was found capable of helping practitioners and newcomers to the forensics domain to control database crimes. | CIPM: Common Identification Process Model for Database Forensics Field
It is shown that every sufficiently large even integer is a sum of two primes and exactly 13 powers of 2. Under the Generalized Riemann Hypothesis one can replace 13 by 7. Unlike previous work on this problem, the proof avoids numerical calculations with explicit zero-free regions of Dirichlet L-functions. The argument uses a new technique to bound the measure of the set on which the exponential sum formed from powers of 2 is large. | Integers Represented as a Sum of Primes and Powers of Two
Recently, the philosophy of visual saliency and attention has started to gain popularity in the robotics community. This paper therefore aims to mimic this mechanism in a SLAM framework by using a saliency prediction model. Compared with traditional SLAM, which treats all feature points as equally important in the optimization process, we argue that salient feature points should play a more important role. We therefore propose a saliency model to predict the saliency map, which can capture both scene semantic and geometric information. We then propose Salient Bundle Adjustment, which uses the value of the saliency map as the weight of the feature points in the traditional Bundle Adjustment approach. Exhaustive experiments conducted against state-of-the-art algorithms on the KITTI and EuRoC datasets show that our proposed algorithm outperforms existing algorithms in both indoor and outdoor environments. Finally, we will make our saliency dataset and relevant source code open-source to enable future research. | Salient Bundle Adjustment for Visual SLAM
We prove a conjecture of Northshield by determining the maximal order of his analogue of Stern's sequence for $\mathbb{Z}[\sqrt{2}]$. In particular, if $b$ is Northshield's analogue, we prove that $$\limsup_{n\to\infty}\frac{2b(n)}{(2n)^{\log_3 (\sqrt{2}+1)}}=1.$$ | Proof of Northshield's conjecture concerning an analogue of Stern's sequence for $\mathbb{Z}[\sqrt{2}]$ |
The parametric error on the QCD coupling can be a dominant source of uncertainty in several important observables. One way to extract the coupling is to compare high-order perturbative computations with lattice-evaluated moments of heavy-quark two-point functions. The truncation of the perturbative series is a sizable systematic uncertainty that needs to be under control. In this contribution we give an update on our study arXiv:2203.07936 [hep-lat] on this issue. We measure pseudo-scalar two-point functions in volumes of $L=2$ fm with twisted-mass Wilson fermions in the quenched approximation. We use full twist, the non-perturbative clover term and lattice spacings down to $a=0.010$ fm to tame the large discretization effects. Our results show that both the continuum extrapolations and the extrapolation of the $\Lambda$-parameter to the asymptotic perturbative region are very challenging. | A Quenched Exploration of Heavy Quark Moments and their Perturbative Expansion
Multidimensional magneto-hydrodynamical (MHD) simulations coupled with stochastic differential equations (SDEs) adapted to test-particle acceleration and transport in complex astrophysical flows are presented. The numerical scheme allows the investigation of shock acceleration, adiabatic and radiative losses, as well as diffusive spatial transport in various diffusion regimes. The applicability of SDEs to astrophysics is first discussed with regard to the different regimes and the MHD code spatial resolution. The procedure is then applied to 2.5D MHD-SDE simulations of kiloparsec-scale extragalactic jets. The ability of SDEs to reproduce analytical solutions of the diffusion-convection equation for electrons is tested through the incorporation of an increasing number of effects: shock acceleration, spatially dependent diffusion coefficients and synchrotron losses. The SDEs prove to be efficient in various shock configurations occurring in the inner jet during the development of the Kelvin-Helmholtz instability. Particle acceleration in snapshots of strong single and multiple shock acceleration, including realistic spatial transport, is treated. In the chaotic magnetic diffusion regime, turbulence levels $\eta_T=<\delta B^2>/(B^2+<\delta B^2>)$ around $0.2-0.3$ are found to be the most efficient in enabling particles to reach the highest energies. The spectrum, extending from 100 MeV to a few TeV (or even 100 TeV for fast flows), does not exhibit a power-law shape due to transverse-momentum-dependent escapes. Out of this range, the confinement is not as efficient and the spectrum cuts off above a few hundred GeV, questioning the interpretation of Chandra observations of X-ray knots as synchrotron radiation. The extension to fully time-dependent simulations of X-ray extragalactic jets is discussed. | Relativistic particle transport in extragalactic jets: I. Coupling MHD and kinetic theory
Unsupervised near-duplicate detection has many practical applications ranging from social media analysis and web-scale retrieval, to digital image forensics. It entails running a threshold-limited query on a set of descriptors extracted from the images, with the goal of identifying all possible near-duplicates, while limiting the false positives due to visually similar images. Since the rate of false alarms grows with the dataset size, a very high specificity is thus required, up to $1 - 10^{-9}$ for realistic use cases; this important requirement, however, is often overlooked in the literature. In recent years, descriptors based on deep convolutional neural networks have matched or surpassed traditional feature extraction methods in content-based image retrieval tasks. To the best of our knowledge, ours is the first attempt to establish the performance range of deep learning-based descriptors for unsupervised near-duplicate detection on a range of datasets, encompassing a broad spectrum of near-duplicate definitions. We leverage both established and new benchmarks, such as the Mir-Flickr Near-Duplicate (MFND) dataset, in which a known ground truth is provided for all possible pairs over a general, large scale image collection. To compare the specificity of different descriptors, we reduce the problem of unsupervised detection to that of binary classification of near-duplicate vs. not-near-duplicate images. The latter can be conveniently characterized using the Receiver Operating Characteristic (ROC) curve. Our findings in general favor the choice of fine-tuning deep convolutional networks, as opposed to using off-the-shelf features, but differences at high specificity settings depend on the dataset and are often small. The best performance was observed on the MFND benchmark, achieving 96\% sensitivity at a false positive rate of $1.43 \times 10^{-6}$. | Benchmarking unsupervised near-duplicate image detection
We consider the problem of computationally-efficient prediction with high dimensional and highly correlated predictors when accurate variable selection is effectively impossible. Direct application of penalization or Bayesian methods implemented with Markov chain Monte Carlo can be computationally daunting and unstable. A common solution is first stage dimension reduction through screening or projecting the design matrix to a lower dimensional hyper-plane. Screening is highly sensitive to threshold choice, while projections often have poor performance in very high-dimensions. We propose TArgeted Random Projection (TARP) to combine positive aspects of both strategies. TARP uses screening to order the inclusion probabilities of the features in the projection matrix used for dimension reduction, leading to data-informed sparsity. We provide theoretical support for a Bayesian predictive algorithm based on TARP, including statistical and computational complexity guarantees. Examples for simulated and real data applications illustrate gains relative to a variety of competitors. | Targeted Random Projection for Prediction from High-Dimensional Features |
GENIUS is a proposal for a large volume detector to search for rare events. An array of 40-400 'naked' HPGe detectors will be operated in a tank filled with ultra-pure liquid nitrogen. After a description of performed technical studies of detector operation in liquid nitrogen and of Monte Carlo simulations of expected background components, the potential of GENIUS for detecting WIMP dark matter, the neutrinoless double beta decay in 76-Ge and low-energy solar neutrinos is discussed. | Hot and Cold Dark Matter Search with GENIUS |
Compressible turbulence, especially the magnetized version of it, traditionally has a bad reputation with researchers. However, recent progress in theoretical understanding of incompressible MHD as well as that in computational capabilities enabled researchers to obtain scaling relations for compressible MHD turbulence. We discuss scalings of Alfven, fast, and slow modes in both magnetically dominated (low $\beta$) and gas pressure dominated (high $\beta$) plasmas. We also show that the new regime of MHD turbulence below viscous cutoff reported earlier for incompressible flows persists for compressible turbulence. Our recent results show that this leads to density fluctuations. New understanding of MHD turbulence is likely to influence many key astrophysical problems. | Compressible MHD Turbulence: Mode Coupling, Anisotropies and Scalings |
The availability of open-source projects facilitates developers to contribute and collaborate on a wide range of projects. As a result, the developer community contributing to such open-source projects is also increasing. Many of the projects involve frequent updates and extensive reuse. Well-updated documentation helps in a better understanding of the software project and also facilitates efficient contribution and reuse. Though software documentation plays an important role in the development and maintenance of software, it also suffers from various issues including insufficiency, inconsistency, and poor maintainability. Exploring the perception of developers towards documentation could help in understanding the reasons behind prevalent issues in software documentation. It could further aid in deciding on the training that could be given to the developer community towards building more sustainable projects for society. Analyzing the sentiments of contributors to a project could provide insights into developer perceptions. Hence, as a first step in this direction, we analyze sentiments of commit messages specific to the documentation of a software project. To this end, we considered the commit history of 998 GitHub projects from the GHTorrent dataset and identified 10,996 commits that correspond to the documentation of repositories. Further, we apply sentiment analysis techniques to obtain insights on the type of sentiment being expressed in commit messages of the selected commits. We observe that around 45% of the identified commit messages express the trust emotion. | Understanding Emotions of Developer Community Towards Software Documentation
Using 8.4 years of photometry from the AllWISE/NEOWISE multi-epoch catalogs, we compare the mid-infrared variability properties of a sample of 2197 dwarf galaxies (M_stellar < 2 x 10^9 h^-2 M_sun) to a sample of 6591 more massive galaxies (M_stellar >= 10^10 h^-2 M_sun) matched in mid-infrared apparent magnitude. We find only 2 dwarf galaxies with mid-infrared variability, a factor of ~10 less frequent than the more massive galaxies (p = 6 x 10^-6), consistent with previous findings of optical variability in low-mass and dwarf galaxies using data with a similar baseline and cadence. Within the more massive control galaxy population, we see no evidence for a stellar mass dependence of mid-infrared variability, suggesting that this apparent reduction in the frequency of variable objects occurs below a stellar mass of ~10^10 h^-2 M_sun. Compared to the more massive galaxies, AGNs selected in dwarf galaxies using either their mid-infrared color or optical emission line classification are systematically missed by variability selection. Our results suggest, in agreement with previous optical studies at similar cadence, that variability selection of AGNs in dwarf galaxies is ineffective unless higher-cadence data is used. | A Low Incidence of Mid-Infrared Variability in Dwarf Galaxies |
We develop a general approach to Stein's method for approximating a random process in the path space $D([0,T]\to R^d)$ by a real continuous Gaussian process. We then use the approach in the context of processes that have a representation as integrals with respect to an underlying point process, deriving a general quantitative Gaussian approximation. The error bound is expressed in terms of couplings of the original process to processes generated from the reduced Palm measures associated with the point process. As applications, we study certain $\text{GI}/\text{GI}/\infty$ queues in the "heavy traffic" regime. | Stein's method, Gaussian processes and Palm measures, with applications to queueing
We consider a fully quadratic vibronic model Hamiltonian for studying photoinduced electronic transitions through conical intersections. Using a second order perturbative approximation for diabatic couplings we derive an analytical expression for the time evolution of electronic populations at a given temperature. This formalism extends upon a previously developed perturbative technique for a linear vibronic coupling Hamiltonian. The advantage of the quadratic model Hamiltonian is that it allows one to use separate quadratic representations for potential energy surfaces of different electronic states and a more flexible representation of interstate couplings. We explore features introduced by the quadratic Hamiltonian in a series of 2D models, and then apply our formalism to the 2,6-bis(methylene) adamantyl cation, and its dimethyl derivative. The Hamiltonian parameters for the molecular systems have been obtained from electronic structure calculations followed by a diabatization procedure. The evolution of electronic populations in the molecular systems using the perturbative formalism shows a good agreement with that from variational quantum dynamics. | A perturbative formalism for electronic transitions through conical intersections in a fully quadratic vibronic model |
We prove the existence of wild automorphisms on an affine quadric threefold. The method we use is an adaptation of the one used by Shestakov and Umirbaev to prove the existence of wild automorphisms on the affine three dimensional space. | The tame and the wild automorphisms of an affine quadric threefold |
The large rapidity gap events from HERA are analyzed within a model containing a pomeron and an $f$-reggeon contribution. The choice for the pomeron contribution is based on the Donnachie-Landshoff model. The dependence of the "effective intercept" of the pomeron on the momentum fraction $\beta$ and on its Bjorken variable $\xi$ is calculated. | Non-factorizable Contributions to the Large Rapidity Gap at HERA |
We discuss how to find the well-covered dimension of a graph that is the Cartesian product of paths, cycles, complete graphs, and other simple graphs. Also, a bound for the well-covered dimension of $K_n\times G$ is found, provided that $G$ has a largest greedy independent decomposition of length $c<n$. Formulae to find the well-covered dimension of graphs obtained by vertex blowups on a known graph, and to the lexicographic product of two known graphs are also given. | The Well-Covered Dimension of Products of Graphs |
The difference between the inverse power function and the negative exponential function is significant. The former suggests a complex distribution, while the latter indicates a simple distribution. However, the association of the power-law distribution with the exponential distribution has seldom been researched. Using mathematical derivation and numerical experiments, I reveal that a power-law distribution can be created through averaging an exponential distribution. For the distributions defined in a 1-dimensional space, the scaling exponent is 1; for those defined in a 2-dimensional space, the scaling exponent is 2. The findings of this study are as follows. First, the exponential distributions suggest a hidden scaling, but the scaling exponents suggest a Euclidean dimension. Second, special power-law distributions can be derived from exponential distributions, but they differ from the typical power-law distribution. Third, it is the real power-law distribution that can be related to the fractal dimension. This study discloses the inherent relationship between simplicity and complexity. In practice, the result presented in this paper may be employed to distinguish real power laws from spurious power laws (e.g., the fake Zipf distribution). | Power-law distributions based on exponential distributions: Latent scaling, spurious Zipf's law, and fractal rabbits
The effect of pressure on the unique electronic state of the antiferromagnetic (AF) compound EuCu2Ge2 has been measured over a wide temperature range from 10 mK to 300 K by electrical resistivity measurements up to 10 GPa. The N\'eel temperature of $T_{\rm N}$ = 15 K at ambient pressure increases monotonically with increasing pressure and reaches a maximum of $T_{\rm N}$ = 27 K at 6.2 GPa, but suddenly drops to zero at $P_{\rm c}$ = 6.5 GPa, suggesting a quantum critical point (QCP) of the valence transition of Eu from a nearly divalent state to one with trivalent weight. The $\rho_{\rm mag,0}$ and $A$ values obtained from the low-temperature electrical resistivity based on the Fermi-liquid relation $\rho_{\rm mag} = \rho_{\rm mag,0} + AT^2$ exhibit huge and sharp peaks around $P_{\rm c}$. The exponent $n$ obtained from the power-law dependence $\rho_{\rm mag} = \rho_{\rm mag,0} + BT^n$ is clearly less than 1.5 at $P = P_{\rm c} = 6.5$ GPa, which is expected at the AF-QCP. These results indicate that $P_{\rm c}$ coincides with $P_{\rm v}$, corresponding to the quantum criticality at the valence transition pressure $P_{\rm v}$. The electronic specific heat coefficient, estimated from the generalized Kadowaki-Woods relation, is about 510 mJ/mol K$^2$ around $P_{\rm c}$, suggesting the formation of a heavy-fermion state. | Quantum Criticality of Valence Transition for the Unique Electronic State of Antiferromagnetic Compound EuCu2Ge2
We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and present specifically a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator, O'Mega. Furthermore, this approach allows to formulate the parallel computation of a single phase space point in a simple and obvious way. We analyze hereby the scaling behaviour with multiple threads as well as the benefits and drawbacks that are introduced with this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and has in general runtimes in the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte sizes, new processes or complex higher order corrections that are currently out of reach could be evaluated with a VM given enough computing power. | Simple, Parallel, High-Performance Virtual Machines for Extreme Computations |
In this paper, we extend work of the first author on a crystal structure on rigged configurations of simply-laced type to all non-exceptional affine types using the technology of virtual rigged configurations and crystals. Under the bijection between rigged configurations and tensor products of Kirillov-Reshetikhin crystals specialized to a single tensor factor, we obtain a new tableaux model for Kirillov-Reshetikhin crystals. This is related to the model in terms of Kashiwara-Nakashima tableaux via a filling map, generalizing the recently discovered filling map in type $D_n^{(1)}$. | Crystal structure on rigged configurations and the filling map |
The performance potential for simulating quantum electron transport on graphical processing units (GPUs) is studied. Using graphene ribbons of realistic sizes as an example it is shown that GPUs provide significant speed-ups in comparison to central processing units as the transverse dimension of the ribbon grows. The recursive Green's function algorithm is employed and implementation details on GPUs are discussed. Calculated conductances were found to accumulate significant numerical error due to single-precision floating-point arithmetic at energies close to the charge neutrality point of the graphene. | Computation of electron quantum transport in graphene nanoribbons using GPU |
Cool-core clusters are characterized by strong surface brightness peaks in the X-ray emission from the Intra Cluster Medium (ICM). This phenomenon is associated with complex physics in the ICM and has been a subject of intense debate and investigation in recent years. In order to quantify the evolution in the cool-core cluster population, we robustly measure the cool-core strength in a local, representative cluster sample, and in the largest sample of high-redshift clusters available to date. We use high-resolution Chandra data of three representative cluster samples spanning different redshift ranges: (i) the local sample from the 400 SD survey with median z = 0.08, (ii) the high redshift sample from the 400 SD Survey with median z=0.59, and (iii) 15 clusters drawn from the RDCS and the WARPS, with median z = 0.83. Our analysis is based on the measurement of the surface brightness concentration, c_SB, which allows us to characterize the cool-core strength in low signal-to-noise data. We also obtain gas density profiles to derive cluster central cooling times and entropy. In addition to the X-ray analysis, we search for radio counterparts associated with the cluster cores. We find a statistically significant difference in the c_SB distributions of the two high-z samples, pointing towards a lack of concentrated clusters in the 400 SD high-z sample. Taking this into account, we confirm a negative evolution in the fraction of cool-core clusters with redshift, in particular for very strong cool-cores. This result is validated by the central entropy and central cooling time, which show strong anti-correlations with c_SB. However, the amount of evolution is significantly smaller than previously claimed, leaving room for a large population of well formed cool-cores at z~1. | The evolution of cool-core clusters |
Evidential grids have been recently used for mobile object perception. The novelty of this article is to propose a perception scheme using prior map knowledge. A geographic map is considered an additional source of information fused with a grid representing sensor data. Yager's rule is adapted to exploit the Dempster-Shafer conflict information at large. In order to distinguish stationary and mobile objects, a counter is introduced and used as a factor for mass function specialisation. Contextual discounting is used, since we assume that different pieces of information become obsolete at different rates. Tests on real-world data are also presented. | Map-aided Fusion Using Evidential Grids for Mobile Perception in Urban Environment |
The problem of determining if a military unit has correctly understood an order and is properly executing on it is one that has bedeviled military planners throughout history. The advent of advanced language models such as OpenAI's GPT series offers new possibilities for addressing this problem. This paper presents a mechanism to harness the narrative output of large language models and produce diagrams or "maps" of the relationships that are latent in the weights of such models as GPT-3. The resulting "Neural Narrative Maps" (NNMs) are intended to provide insight into the organization of information, opinion, and belief in the model, which in turn provides a means to understand intent and response in the context of physical distance. This paper discusses the problem of mapping information spaces in general, and then presents a concrete implementation of this concept in the context of OpenAI's GPT-3 language model for determining if a subordinate is following a commander's intent in a high-risk situation. The subordinate's locations within the NNM allow a novel capability to evaluate the intent of the subordinate with respect to the commander. We show that it is possible not only to determine if they are nearby in narrative space, but also how they are oriented, and what "trajectory" they are on. Our results show that our method is able to produce high-quality maps, and demonstrate new ways of evaluating intent more generally. | Ethics, Rules of Engagement, and AI: Neural Narrative Mapping Using Large Transformer Language Models
Exploiting the information provided by the molecular noise of a biological process has proven to be valuable in extracting knowledge about the underlying kinetic parameters and sources of variability from single cell measurements. However, quantifying this additional information a priori, to decide whether a single cell experiment might be beneficial, is currently only possible in very simple systems where either the chemical master equation is computationally tractable or a Gaussian approximation is appropriate. Here we show how the information provided by distribution measurements can be approximated from the first four moments of the underlying process. The derived formulas are generally valid for any stochastic kinetic model including models that comprise both intrinsic and extrinsic noise. This allows us to propose an optimal experimental design framework for heterogeneous cell populations which we employ to compare the utility of dual reporter and perturbation experiments for separating extrinsic and intrinsic noise in a simple model of gene expression. Subsequently, we compare the information content of different experiments which have been performed in an engineered light-switch gene expression system in yeast and show that well-chosen gene induction patterns may allow one to identify features of the system which remain hidden in unplanned experiments. | Designing Experiments to Understand the Variability in Biochemical Reaction Networks
Majorana features of neutrinos and SO(3) gauge symmetry of three families enable us to construct a gauge model of neutrino for understanding naturally the observed smallness of neutrino masses and the nearly tri-bimaximal neutrino mixing when combining together with the mechanism of approximate global U(1) family symmetry. The vacuum structure of SO(3) symmetry breaking is found to play an important role. The mixing angle $\theta_{13}$ and CP-violating phases governed by the vacuum of spontaneous symmetry breaking are in general non-zero and testable experimentally at the allowed sensitivity. The model predicts the existence of vector-like SO(3) triplet charged leptons and vector-like SO(3) triplet Majorana neutrinos as well as SO(3) tri-triplet Higgs bosons, some of them can be light and explored at the colliders LHC and ILC. | Gauge Theory Model of the Neutrino and New Physics Beyond the Standard Model |
An enhancement in the number of galaxies as a function of redshift is visible in the SDSS Photometric Catalogue DR 12 at z=0.383. This over-density of galaxies is named the Great Wall. This variable number of galaxies as a function of redshift can be explained in the framework of the luminosity function for galaxies. The differential of the luminosity distance with respect to the redshift is evaluated in the framework of LCDM cosmology. | The Great Wall of SDSS galaxies
We present a model of roundoff error analysis that combines simplicity with predictive power. Though not considering all sources of roundoff within an algorithm, the model is related to a recursive roundoff error analysis and therefore capable of correctly predicting stability or instability of an algorithm. By means of nontrivial examples, such as the componentwise backward stability analysis of Gaussian elimination with a single iterative refinement step, we demonstrate that the model even yields quantitative backward error bounds that show all the known problem-dependent terms (with the exception of dimension-dependent constants, which are the weak spot of any a priori analysis). The model can serve as a convenient tool for teaching or as a heuristic device to discover stability results before entering a further, detailed analysis. | A Model for Understanding Numerical Stability |
Comparative studies of cancer-related genes allow us to gain novel information about the evolution and function of these genes, but also to understand cancer as a driving force in biological systems and species life histories. So far, comparative studies of cancer genes have focused on mammals. Here, we provide the first comparative study of cancer-related gene copy number variation in fish. As fish are evolutionarily older and genetically more diverse than mammals, their tumour suppression mechanisms should not only include most of the mammalian mechanisms, but also reveal novel (but potentially phylogenetically older) previously undetected mechanisms. We have matched the sequenced genomes of 65 fish species from the Ensembl database with the cancer gene information from the COSMIC database. By calculating the number of gene copies across species using the Ensembl CAFE data (providing species trees for gene copy number counts), we were able to develop a novel, less resource-demanding method for ortholog identification. Our analysis demonstrates a masked relationship between cancer-related gene copy number variation (CNV) and maximum lifespan in fish species, suggesting that higher tumour suppressor gene CNV lengthens and oncogene CNV shortens lifespan, when both traits are added to the model. Based on the correlation between tumour suppressor and oncogene CNV, we were able to show which species have more tumour suppressors in relation to oncogenes. It could therefore be suggested that these species have stronger genetic defences against oncogenic processes. Fish studies could yet be a largely unexplored treasure trove for understanding the evolution and ecology of cancer, by providing novel insights into the study of cancer and tumour suppression, in addition to the study of fish evolution, life-history trade-offs, and ecology. | Comparative study of the evolution of human cancer gene duplications across fish
We measure the power spectrum of the galaxy distribution in the ESO Slice Project (ESP) galaxy redshift survey. We develop a technique to describe the survey window function analytically, and then deconvolve it from the measured power spectrum using a variant of the Lucy method. We test the whole deconvolution procedure on ESP mock catalogues drawn from large N-body simulations, and find that it is reliable for recovering the correct amplitude and shape of $P(k)$ at $k> 0.065 h$ Mpc$^{-1}$. In general, the technique is applicable to any survey composed of a collection of circular fields with arbitrary pattern on the sky, as is typical of surveys based on fibre spectrographs. The estimated power spectrum has a well-defined power-law shape $k^n$ with $n\simeq -2.2$ for $k\ge 0.2 h$ Mpc$^{-1}$, and a smooth bend to a flatter shape ($n\simeq -1.6$) for smaller $k$'s. The smallest wavenumber where a meaningful reconstruction can be performed ($k\sim 0.06 h$ Mpc$^{-1}$) does not allow us to explore the range of scales where other power spectra seem to show a flattening and hints of a turnover. We also find, by direct comparison of the Fourier transforms, that the estimate of the two-point correlation function $\xi(s)$ is much less sensitive than the power spectrum to the effect of a problematic window function such as that of the ESP. Comparison to other surveys shows an excellent agreement with estimates from blue-selected surveys. In particular, the ESP power spectrum is virtually indistinguishable from that of the Durham-UKST survey over the common range of $k$'s, an indirect confirmation of the quality of the deconvolution technique applied. | Power Spectrum Analysis of the ESP Galaxy Redshift Survey
We check the capability of the DUNE neutrino experiment to detect new sources of leptonic CP violation besides the single phase expected in the Standard Model. We illustrate our strategy based on the measurement of CP asymmetries in the cases where New Physics shows up as Non-Standard neutrino Interactions or as sterile neutrino states, and show that the most promising asymmetry, once the experimental errors are taken into account in both scenarios, is the one related to the $\nu_\mu \to \nu_e$ transition. | New sources of leptonic CP violation at the DUNE neutrino experiment
Quantum computing is a new form of computing that is based on the principles of quantum mechanics. It has the potential to revolutionize many fields, including the humanities and social sciences. The idea behind quantum humanities is to explore the potential of quantum computing to answer new questions in these fields, as well as to consider the potential societal impacts of this technology. This paper proposes a research program for quantum humanities, which includes the application of quantum algorithms to humanities and social science research, the reflection on the methods and techniques of quantum computing, and the evaluation of its potential societal implications. This research program aims to define the field of quantum humanities and to establish it as a meaningful part of the humanities and social sciences. | Introducing a Research Program for Quantum Humanities: Theoretical Implications |
Fetal alcohol spectrum disorder (FASD) is a syndrome whose only difference compared to other children's conditions is the mother's alcohol consumption during pregnancy. An earlier diagnosis of FASD improves the quality of life of children and adolescents. For this reason, this study focuses on evaluating the use of artificial neural networks (ANNs) to classify children with FASD and explores how accurate they are. ANNs have been used to diagnose cancer, diabetes, and other diseases in the medical area, and they are a tool that presents good results. The data used are from a battery of tests on children aged 5-18 years (including psychometric, saccade eye movement, and diffusion tensor imaging (DTI) tests). We study different configurations of ANNs with dense layers. The first one correctly predicts 75\% of the outcomes for psychometric data. The other models include a feature layer, which we use to predict FASD using every test individually. The models accurately predict over 70\% of the cases, and the psychometric and memory tests reach over 88\% accuracy. The results suggest that the ANN approach is a competitive and efficient methodology for detecting FASD. However, care should be taken when using it as a diagnostic technique. | Detecting Fetal Alcohol Spectrum Disorder in children using Artificial Neural Network
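Each row above is a single example pairing a paper abstract (`text`) with its title (`summary`). As a minimal sketch of how such a corpus can be loaded and inspected programmatically, the snippet below uses the Hugging Face `datasets` library; the repository identifier is a placeholder, not this dataset's actual name, and the column names are assumed to match the schema shown above.

```python
from datasets import load_dataset

# Placeholder repository ID -- substitute the actual identifier of this dataset.
ds = load_dataset("example-user/abstract-title-pairs", split="train")

# Each row pairs a full abstract ("text") with its title ("summary").
print(ds.column_names)  # expected: ['text', 'summary']
print(len(ds))          # total number of abstract-title pairs

row = ds[0]
print(row["summary"])     # the title serving as the target summary
print(row["text"][:200])  # first 200 characters of the abstract
```

A corpus of this shape is a natural fit for training or evaluating abstractive summarization models, with the abstract as the input sequence and the title as the target.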