Implicit Neural Representations (INRs) have recently been shown to be a powerful tool for high-quality video compression. However, existing works are limited in that they do not explicitly exploit the temporal redundancy in videos, leading to long encoding times. Additionally, these methods have fixed architectures which do not scale to longer videos or higher resolutions. To address these issues, we propose NIRVANA, which treats videos as groups of frames and fits separate networks to each group, performing patch-wise prediction. This design shares computation within each group, in both the spatial and temporal dimensions, resulting in reduced encoding time for the video. The video representation is modeled autoregressively, with the network fit on the current group initialized using weights from the previous group's model. To further enhance efficiency, we quantize the network parameters during training, requiring no post-hoc pruning or quantization. When compared with previous works on the benchmark UVG dataset, NIRVANA improves encoding quality from 37.36 to 37.70 (in terms of PSNR) and encoding speed by 12X, while maintaining the same compression rate. In contrast to prior video INR works, which struggle with larger resolutions and longer videos, we show that our algorithm is highly flexible and scales naturally due to its patch-wise and autoregressive design. Moreover, our method achieves variable-bitrate compression by adapting to videos with varying inter-frame motion. NIRVANA achieves 6X decoding speed and scales well with more GPUs, making it practical for various deployment scenarios.
NIRVANA: Neural Implicit Representations of Videos with Adaptive Networks and Autoregressive Patch-wise Modeling
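The autoregressive, group-wise fitting described in the abstract above can be sketched in miniature. This is a toy illustration under heavy assumptions: the per-group "network" is a single scalar parameter fit by gradient descent, and all names (`fit_group`, `encode_video`, `group_size`) are hypothetical, not from the paper.

```python
# Toy sketch of autoregressive group-wise video fitting (illustrative only).
import random

def fit_group(frames, init_param, steps=200, lr=0.1):
    """Fit a one-parameter 'network' to a group of frames by gradient descent on MSE."""
    w = init_param
    n_pixels = sum(len(f) for f in frames)
    for _ in range(steps):
        # gradient of mean squared error of w against every pixel in the group
        grad = sum(2 * (w - px) for f in frames for px in f) / n_pixels
        w -= lr * grad
    return w

def encode_video(video, group_size=3):
    """Autoregressive encoding: each group's fit is warm-started
    from the previous group's parameters, sharing computation across groups."""
    params, w = [], 0.0
    for i in range(0, len(video), group_size):
        group = video[i:i + group_size]
        w = fit_group(group, init_param=w)  # warm start from previous group
        params.append(w)
    return params

random.seed(0)
# toy "video": 6 frames of 4 pixels each, with slowly drifting brightness
video = [[0.5 + 0.05 * t + random.gauss(0, 0.01) for _ in range(4)] for t in range(6)]
print(encode_video(video))  # one fitted parameter per group of 3 frames
```

The warm start is the key idea: because consecutive groups are similar, initializing from the previous group's weights shortens each fit.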
Current state-of-the-art acoustic models can easily comprise more than 100 million parameters. This growing complexity demands larger training datasets to maintain decent generalization of the final decision function. An ideal dataset is not necessarily large in size, but large with respect to the number of unique speakers, the hardware used, and the variety of recording conditions. This enables a machine learning model to explore as much of the domain-specific input space as possible during parameter estimation. This work introduces Common Phone, a gender-balanced, multilingual corpus recorded from more than 11,000 contributors via Mozilla's Common Voice project. It comprises around 116 hours of speech enriched with automatically generated phonetic segmentation. A Wav2Vec 2.0 acoustic model was trained on Common Phone to perform phonetic symbol recognition and validate the quality of the generated phonetic annotation. The architecture achieved a PER of 18.1% on the entire test set, computed over all 101 unique phonetic symbols, showing slight differences between the individual languages. We conclude that Common Phone provides sufficient variability and reliable phonetic annotation to help bridge the gap between research and application of acoustic models.
Common Phone: A Multilingual Dataset for Robust Acoustic Modelling
We present a new algorithm for probabilistic planning with no observability. Our algorithm, called Probabilistic-FF, extends the heuristic forward-search machinery of Conformant-FF to problems with probabilistic uncertainty about both the initial state and action effects. Specifically, Probabilistic-FF combines Conformant-FF's techniques with powerful machinery for weighted model counting in (weighted) CNFs, which serves to elegantly define both the search space and the heuristic function. Our evaluation of Probabilistic-FF shows good scalability in a range of probabilistic domains, constituting an improvement of several orders of magnitude over previous results in this area. We use a problematic case to point out the main open issue to be addressed by further research.
Probabilistic Planning via Heuristic Forward Search and Weighted Model Counting
The deformation retract is, by definition, a homotopy between a retraction and the identity map. We show that applying this topological concept to Ricci-flat wormholes/black holes implies that such objects can be deformed and reduced to lower dimensions. Homotopy theory can provide a rigorous proof of the existence of black hole/wormhole deformations and explain their topological origin. The current work discusses such possible deformations and dimensional reductions from a global topological point of view; it also represents a new application of homotopy theory and the deformation retract in astrophysics and quantum gravity.
On the existence of deformations and dimensional reduction in wormholes and black holes
Ground-based gravitational wave (GW) observatories have discovered a population of merging stellar binary black holes (BBHs), which are promising targets for multiband observations by low-, middle-, and high-frequency GW detectors. In this paper, we investigate the multiband GW detections of BBHs and demonstrate the advantages of such observations in improving the localization and parameter estimates of the sources. We generate mock samples of BBHs by considering different formation models as well as the merger rate density constrained by the current observations (GWTC-3). We specifically consider the astrodynamical middle-frequency interferometer GW observatory (AMIGO) in the middle-frequency band and estimate that it may detect $21$-$91$ BBHs with signal-to-noise ratio $\varrho\geq8$ in a $4$-yr observation period. The multiband observations by the low-frequency detectors [Laser Interferometer Space Antenna (LISA) and Taiji] and AMIGO may detect $5$-$33$ BBHs with $\varrho_{\rm LT}\geq5$ and $\varrho_{\rm AMI}\geq5$, which can evolve to the high-frequency band within $4$ yr and can be detected by the Cosmic Explorer (CE) and Einstein Telescope (ET). The joint observations of LISA-Taiji-AMIGO-ET-CE may localize the majority of the detectable BBHs in sky areas of $7\times10^{-7}$ to $2\times10^{-3}$ deg$^2$, which is improved by a factor of $\sim120$, $\sim2.4\times10^{5}$, $\sim1.8\times10^{4}$, or $\sim1.2\times10^{4}$, compared with those obtained by only adopting CE-ET, AMIGO, LISA-Taiji, or LISA-Taiji-AMIGO. These joint observations can also lead to an improvement of the measurement precision of the chirp mass (symmetric mass ratio) by a factor of $\sim5.5\times10^{4}$ ($33$), $\sim16$ ($8$), $\sim120$ ($90$), or $\sim5$ ($5$), compared with those obtained by CE-ET, AMIGO, LISA-Taiji, or LISA-Taiji-AMIGO.
Multiband gravitational wave observations of stellar binary black holes at the low to middle and high frequencies
We obtain retarded Green's functions for massless scalar fields in the background of near-extreme, near-horizon rotating charged black holes of five-dimensional minimal gauged supergravity. The radial part of the (separable) massless Klein-Gordon equation in such general black hole backgrounds is Heun's equation, due to the singularity structure associated with the three black hole horizons. On the other hand, we find the scaling limit for the near-extreme, near-horizon background where the radial equation reduces to a Hypergeometric equation whose $SL(2,{\bf R})^2$ symmetry signifies the underlying two-dimensional conformal invariance, with the two sectors governed by the respective Frolov-Thorne temperatures.
Conformal Invariance and Near-extreme Rotating AdS Black Holes
In non-viral gene delivery, the variance of transgenic expression stems from the low number of plasmids successfully transferred. Here, we experimentally determine Lipofectamine- and PEI-mediated exogenous gene expression distributions from single-cell time-lapse analysis. Broad Poisson-like distributions of steady-state expression are observed for both transfection agents, when used with synchronized cell lines. At the same time, co-transfection analysis with YFP- and CFP-coding plasmids shows that multiple plasmids are simultaneously expressed, suggesting that plasmids are delivered in correlated units (complexes). We present a mathematical model of transfection, where a stochastic, two-step process is assumed, with the first being the low-probability entry step of complexes into the nucleus, followed by the subsequent release and activation of a small number of plasmids from a delivered complex. This conceptually simple model consistently predicts the observed fraction of transfected cells, the co-transfection ratio, and the expression level distribution. It yields the number of efficient plasmids per complex and elucidates the origin of the associated noise, consequently providing a platform for evaluating and improving non-viral vectors.
Predictive Modeling of Non-Viral Gene Transfer
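The two-step stochastic model described in the abstract above lends itself to a short Monte Carlo sketch: complexes enter the nucleus with low probability (step 1), and each delivered complex releases a small number of active plasmids (step 2). All parameter values and function names here are illustrative assumptions, not the paper's fitted quantities.

```python
# Hedged Monte Carlo sketch of a two-step transfection model (illustrative).
import math
import random

def poisson(lam):
    """Knuth's Poisson sampler (stdlib only; adequate for small lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p < threshold:
            return k
        k += 1

def transfect(n_cells, mean_complexes=0.5, mean_extra_plasmids=2.0):
    """Step 1: a Poisson number of complexes reaches the nucleus.
    Step 2: each complex releases 1 + Poisson(mean_extra_plasmids) active plasmids."""
    counts = []
    for _ in range(n_cells):
        n_complexes = poisson(mean_complexes)           # nuclear entry
        n_plasmids = sum(1 + poisson(mean_extra_plasmids)
                         for _ in range(n_complexes))   # release and activation
        counts.append(n_plasmids)
    return counts

random.seed(42)
counts = transfect(20000)
frac_transfected = sum(c > 0 for c in counts) / len(counts)
print(frac_transfected)  # ~ 1 - exp(-0.5) ≈ 0.39
```

In this toy version the fraction of transfected cells depends only on the entry step, while the breadth of the expression distribution comes from the per-complex plasmid count, mirroring the separation of roles the model proposes.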
Hydration of hydrophobic solutes in water is the cause of different phenomena, including the hydrophobic heat-capacity anomaly, which are not yet fully understood. Because of its topicality, there has recently been growing interest in the mechanism of hydrophobic aggregation, and in the physics on which it is based. In this study we use a simple yet powerful mixture model for water, an adapted two-state Muller-Lee-Graziano model, to describe the energy levels of water molecules as a function of their proximity to non-polar solute molecules. The model is shown to provide an appropriate description of many-body interactions between the hydrophobic solute particles. The solubility and aggregation of hydrophobic substances is studied by evaluating detailed Monte Carlo simulations in the vicinity of the first-order aggregation phase transition. A closed-loop coexistence curve is found, which is consistent with a mean-field calculation carried out for the same system. In addition, the destabilizing effect of a chaotropic substance in the solution is studied by suitable modification of the MLG model. These findings suggest that a simple model for the hydrophobic interaction may contain the primary physical processes involved in hydrophobic aggregation and in the chaotropic effect.
Hydrophobic Interaction Model for Upper and Lower Critical Solution Temperatures
Moir\'e systems provide a highly tunable platform for engineering band structures and exotic correlated phases. Here, we theoretically study a model for a single layer of graphene subject to a smooth moir\'e electrostatic potential, induced by an insulating substrate layer. For sufficiently large moir\'e unit cells, we find that ultra-flat bands coexist with a triangular network of chiral one-dimensional (1D) channels. These channels mediate an effective interaction between localized modes with spin, orbital, and valley degrees of freedom emerging from the flat bands. The form of the interaction reflects the chirality and 1D nature of the network. We study this interacting model within an $SU(4)$ mean-field theory, semi-classical Monte Carlo simulations, and an $SU(4)$ spin-wave theory, focusing on commensurate order stabilized by local two-site and chiral three-site interactions. By tuning a gate voltage, one can trigger a non-coplanar phase characterized by a peculiar coexistence of three different types of order: ferromagnetic spin order in one valley, non-coplanar chiral spin order in the other valley, and 120$^\circ$ order in the remaining spin- and valley-mixed degrees of freedom. Quantum and classical fluctuations have qualitatively different effects on the observed phases and can, for example, create a finite spin-chirality purely via fluctuation effects.
Network of chiral one-dimensional channels and localized states emerging in a moir\'e system
The Compact Linear Collider (CLIC) targets a nanometre beam size at the collision point. Realising this beam size requires the generation and transport of ultra-low emittance beams. Dynamic imperfections can deflect the colliding beams, leading to a collision with a relative offset. They can also degrade the emittance of each beam. Both of these effects can significantly impact the luminosity of CLIC. In this paper, we examine a newly considered dynamic imperfection: stray magnetic fields. Measurements of stray magnetic fields in the Large Hadron Collider tunnel are presented and used to develop a statistical model that can realistically generate stray magnetic fields in simulations. The model is used in integrated simulations of CLIC at 380 GeV, including mitigation systems for stray magnetic fields, to evaluate their impact on luminosity.
Measurements and modelling of stray magnetic fields and the simulation of their impact on the Compact Linear Collider at 380 GeV
Galileo Galilei believed that stars were distant suns whose sizes, measured via his telescope, were a direct indication of distance -- fainter stars (appearing smaller in the telescope) were farther away than brighter ones. Galileo argued in his Dialogue that telescopic observation of a chance alignment of a faint (distant) and bright (closer) star would reveal annual parallax, if such double stars could be found. This would provide support both for Galileo's ideas concerning the nature of stars and for the motion of the Earth. However, Galileo actually made observations of such double stars, well before publication of the Dialogue. We show that the results of these observations, and the likely results of observations of any double star that was a viable subject for Galileo's telescope, would undermine Galileo's ideas, not support them. We argue that such observations would lead either to the more correct conclusion that stars were sun-like bodies of varying sizes which could be physically grouped, or to the less correct conclusion that stars are not sun-like bodies, and even to the idea that the Earth did not move. Lastly, we contrast these conclusions to those reached through applying Galileo's ideas to observations of visible stars as a whole.
Regarding the Potential Impact of Double Star Observations on Conceptions of the Universe of Stars in the Early 17th Century
In Part 1 of this paper we construct a spectral sequence converging to the relative Lie algebra cohomology associated to the action of any subgroup $G$ of the symplectic group on the polynomial Fock model of the Weil representation, see Section 7. These relative Lie algebra cohomology groups are of interest because they map to the cohomology of suitable arithmetic quotients of the symmetric space $G/K$ of $G$. We apply this spectral sequence to the case $G = \mathrm{SO}_0(n,1)$ in Sections 8, 9, and 10 to compute the relative Lie algebra cohomology groups $H^{\bullet} \big(\mathfrak{so}(n,1), \mathrm{SO}(n); \mathcal{P}(V^k) \big)$. Here $V = \mathbb{R}^{n,1}$ is Minkowski space and $\mathcal{P}(V^k)$ is the subspace of $L^2(V^k)$ consisting of all products of polynomials with the Gaussian. In Part 2 of this paper we compute the cohomology groups $H^{\bullet}\big(\mathfrak{so}(n,1), \mathrm{SO}(n); L^2(V^k) \big)$ using spectral theory and representation theory. In Part 3 of this paper we compute the maps between the polynomial Fock and $L^2$ cohomology groups induced by the inclusions $\mathcal{P}(V^k) \subset L^2(V^k)$.
The Relative Lie Algebra Cohomology of the Weil Representation of SO(n,1)
We study the economic interactions among sellers and buyers in online markets. In such markets, buyers have limited information about the product quality, but can observe the sellers' reputations, which depend on their past transaction histories and ratings from past buyers. Sellers compete in the same market through pricing, while considering the impact of their heterogeneous reputations. We consider sellers with limited as well as unlimited capacities, which correspond to different practical market scenarios. In the unlimited seller capacity scenario, buyers prefer the seller with the highest reputation-price ratio. If the gap between the highest and second highest seller reputation levels is large enough, then the highest reputation seller dominates the market as a monopoly. If sellers' reputation levels are relatively close to each other, then those sellers with relatively high reputations will survive at the equilibrium, while the remaining relatively low reputation sellers will get zero market share. In the limited seller capacity scenario, we further consider two different cases. If each seller can only serve one buyer, then it is possible for sellers to set their monopoly prices at the equilibrium while all sellers gain positive market shares; if each seller can serve multiple buyers, then it is possible for sellers to set maximum prices at the equilibrium. Simulation results show that the dynamics of reputations and prices in the longer-term interactions converge to stable states, and that the initial buyer ratings of the sellers play a critical role in determining sellers' reputations and prices at the stable state.
Reputation and Pricing Dynamics in Online Markets
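The unlimited-capacity buyer rule described in the abstract above can be illustrated with a single-shot toy (not the paper's game-theoretic model, where sellers also adjust prices): identical buyers all pick the seller with the highest reputation-price ratio, so a large enough reputation gap yields a monopoly even at a higher price. Function names and numbers are hypothetical.

```python
# Toy winner-take-all illustration of the reputation-price-ratio rule.
def market_shares(reputations, prices):
    """All (identical) buyers pick the seller maximizing reputation/price,
    so under unlimited seller capacity the winner takes the whole market."""
    ratios = [r / p for r, p in zip(reputations, prices)]
    winner = max(range(len(ratios)), key=ratios.__getitem__)
    return [1.0 if i == winner else 0.0 for i in range(len(ratios))]

# Large reputation gap: seller 0 monopolizes despite charging a higher price
# (ratio 10/3 ≈ 3.33 beats 2.0 and 1.5).
print(market_shares([10.0, 2.0, 1.5], [3.0, 1.0, 1.0]))  # → [1.0, 0.0, 0.0]
```

The paper's richer equilibrium behavior (several high-reputation sellers coexisting when reputations are close) emerges only once sellers compete on price, which this one-line rule deliberately omits.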
Real-time object detection and tracking have been shown to be the basis of intelligent production in Industry 4.0 applications. It is a challenging task because of the variety of distorted data in complex industrial settings. The correlation filter (CF) has been used to trade off low-cost computation against high performance. However, the traditional CF training strategy cannot achieve satisfactory performance on varied industrial data, because simple sampling (bagging) during the training process will not find exact solutions in a data space with large diversity. In this paper, we propose the Dijkstra-distance based correlation filter (DBCF), which establishes a new learning framework that embeds distribution-related constraints into multi-channel correlation filters (MCCF). DBCF is able to handle the large variations in industrial data by improving those constraints based on the shortest path among all solutions. To evaluate DBCF, we build a new dataset as a benchmark for Industry 4.0 applications. Extensive experiments demonstrate that DBCF achieves high performance and exceeds state-of-the-art methods. The dataset and source code can be found at https://github.com/bczhangbczhang
Object detection and tracking benchmark in industry based on improved correlation filter
King 1, a rarely studied open cluster, was observed using the 1.3 m telescope equipped with a 2k x 4k CCD at the Vainu Bappu Observatory, India. We analyse the photometric data obtained from the CCD observations in both B and V bands. Out of 132 stars detected in the King 1 field, we identify 4 stellar variables, 2 of which are reported in this paper as newly detected binary systems. The parallax values from Gaia DR2 suggest that the open cluster King 1 lies in the background of these two binary systems, which fall along the same line of sight but have different parallax values. Periodogram analysis was carried out using the Phase Dispersion Minimization (PDM) and Lomb-Scargle (LS) methods for all the detected variables. PHOEBE (PHysics Of Eclipsing BinariEs) is used extensively to model various stellar parameters of both binary systems. Based on the modeling results obtained in this work, one of the binary systems is reported for the first time as an eclipsing detached (ED) binary and the other as an eclipsing contact (EC) binary of W-type W UMa.
Binary star detection in the Open Cluster King 1 field
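The Phase Dispersion Minimization method mentioned in the abstract above can be sketched in pure Python: fold the light curve at each trial period, bin the folded phases, and pick the period that minimizes the ratio of pooled within-bin variance to overall variance. The synthetic sinusoidal light curve and function names below are illustrative, not the paper's data.

```python
# Minimal PDM sketch on a synthetic light curve (illustrative only).
import math

def pdm_theta(times, mags, period, n_bins=10):
    """Theta statistic: pooled within-bin variance of the phase-folded
    light curve divided by the overall variance. Minima mark candidate periods."""
    mean = sum(mags) / len(mags)
    total_var = sum((m - mean) ** 2 for m in mags) / (len(mags) - 1)
    bins = [[] for _ in range(n_bins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[min(int(phase * n_bins), n_bins - 1)].append(m)
    num = den = 0.0
    for b in bins:
        if len(b) > 1:
            bm = sum(b) / len(b)
            num += sum((m - bm) ** 2 for m in b)
            den += len(b) - 1
    return (num / den) / total_var

# synthetic light curve: 500 points, true period 0.75 d, amplitude 0.3 mag
times = [0.013 * i for i in range(500)]
mags = [12.0 + 0.3 * math.sin(2 * math.pi * t / 0.75) for t in times]
trial_periods = [0.5 + 0.001 * i for i in range(500)]  # 0.5-1.0 d grid
best = min(trial_periods, key=lambda p: pdm_theta(times, mags, p))
print(best)  # close to the true period, 0.75 d
```

Unlike Lomb-Scargle, PDM makes no sinusoidal assumption, which is why the two methods are often used together on eclipsing-binary light curves.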
We discuss the extension of the oscillator-basis $J$-matrix formalism to the case of true $A$-body scattering. The formalism is applied to the loosely bound $^{11}$Li and $^6$He nuclei within the three-body cluster models ${\rm {^9Li}}+n+n$ and $\alpha+n+n$. The $J$-matrix formalism is used not only for the calculation of the three-body continuum spectrum wave functions but also for the calculation of the $S$-matrix poles associated with the $^{11}$Li and $^6$He ground states, to improve the description of the binding energies and ground state properties. The effect of the phase-equivalent transformation of the $n{-}\alpha$ interaction on the properties of the $^6$He nucleus is examined.
Loosely bound three-body nuclear systems in the J-matrix approach
We introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization. We derive a new set of conditions for the existence of DFSs within this generalized framework. By relaxing the initialization requirement we show that a DFS can tolerate arbitrarily large preparation errors. This has potentially significant implications for experiments involving DFSs, in particular for the experimental implementation, over DFSs, of the large class of quantum algorithms which can function with arbitrary input states.
Theory of Initialization-Free Decoherence-Free Subspaces and Subsystems
In this work, we present the electronic and magnetic properties of CaMnO3 (CMO) as obtained from ab initio calculations. We identify the preferred magnetic order by means of density functional theory plus Hubbard U calculations and extract the effective exchange parameters (Jij's) using the magnetic force theorem. We find that the effects of geometrical relaxation at the surface, as well as the change of crystal field, are very strong and can influence the lowest-energy magnetic configuration. In particular, our analysis reveals that the exchange interaction between the Mn atoms belonging to the surface and subsurface layers is very sensitive to the structural changes. An earlier study [A. Filippetti and W.E. Pickett, Phys. Rev. Lett. 83, 4184 (1999)] suggested that this coupling is ferromagnetic and gives rise to a spin-flip process on the surface of CMO. In our work we confirm their finding for an unrelaxed geometry, but once the structural relaxations are taken into account, this exchange coupling changes its sign. Thus, we suggest that the surface of CMO should have the same G-type antiferromagnetic order as the bulk. Finally, we show that the suggested spin flip can be induced in the system by introducing an excess of electrons.
Exchange interactions of CaMnO3 in the bulk and at the surface
3D face alignment from monocular images is a crucial step in the recognition of faces with disguise. 3D face reconstruction, facilitated by alignment, can restore the face structure, which is helpful in detecting disguise interference. This paper proposes a dual attention mechanism and an efficient end-to-end 3D face alignment framework. We build a stable network model using depthwise separable convolutions, densely connected convolutional blocks, and a lightweight channel attention mechanism. To enhance the network's ability to extract spatial features of the face region, we adopt a spatial group-wise feature enhancement module to improve the representation ability of the network. Different loss functions are applied jointly to constrain the 3D parameters of a 3D Morphable Model (3DMM) and its 3D vertices. We use a variety of data augmentation methods and generate large virtual-pose face datasets to address the data imbalance problem. Experiments on the challenging AFLW and AFLW2000-3D datasets show that our algorithm significantly improves the accuracy of 3D face alignment, and experiments on the Disguised Faces in the Wild (DFW) dataset show that DAMDNet exhibits excellent performance in the 3D alignment and reconstruction of challenging disguised faces. The model parameters and the complexity of the proposed method are also significantly reduced. The code is publicly available at https://github.com/LeiJiangJNU/DAMDNet
Dual Attention MobDenseNet(DAMDNet) for Robust 3D Face Alignment
Here we report a strategy, using a prototypical model system for photocatalysis (viz. N-doped (TiO$_2$)$_n$ clusters), to accurately determine low-energy metastable structures that can play a major role owing to enhanced catalytic reactivity. Computational design of a specific metastable photocatalyst with enhanced activity has never been easy due to the plethora of isomers on the potential energy surface. This requires fixing various parameters, viz. (i) favorable formation energy, (ii) low fundamental gap, (iii) low excitation energy, and (iv) high vertical electron affinity (VEA) and low vertical ionization potential (VIP). By integrating several first-principles based methodologies, we validate that consideration of the global minimum structure alone can severely underestimate the activity. As a first step, we have used a suite of genetic algorithms [viz. searching for clusters with conventional minimum total energy ((GA)$_\textrm{E}$); searching for clusters with a specific property, i.e. high VEA ((GA)$_\textrm{P}^{\textrm{EA}}$) or low VIP ((GA)$_\textrm{P}^{\textrm{IP}}$)] to model the N-doped (TiO$_2$)$_n$ clusters. Following this, we have identified their free energy using ab initio thermodynamics to confirm that the metastable structures are not too far from the global minima. By analyzing a large dataset, we find that N-substitution ((N)$_\textrm{O}$) prefers to reside at a highly coordinated oxygen site to maximize its coordination, whereas N-interstitial ((NO)$_\textrm{O}$) and split-interstitial ((N$_2)_\textrm{O}$) defects favor the dangling oxygen site. Interestingly, we notice that each type of defect (viz. substitution, interstitials) reduces the fundamental gap and excitation energy substantially. However, (NO)$_\textrm{O}$ and (N$_2)_\textrm{O}$ doped clusters are potential candidates for overall water splitting, whereas N$_\textrm{O}$ is congenial only for the oxygen evolution reaction.
Metastability Triggered Reactivity in Clusters at Realistic Conditions: A Case Study of N-doped (TiO$_2$)$_n$ for Photocatalysis
A new technique is presented for determining the black-hole masses of high-redshift quasars from optical spectroscopy. The new method utilizes the full-width at half maximum (FWHM) of the low-ionization MgII emission line and the correlation between broad-line region (BLR) radius and continuum luminosity at 3000 Angstroms. Using archival UV spectra, it is found that the correlation between BLR radius and 3000 Angstrom luminosity is tighter than the established correlation with 5100 Angstrom luminosity. Furthermore, the correlation between BLR radius and 3000 Angstrom luminosity is consistent with a relation of the form $R_{BLR} \propto \lambda L_{\lambda}^{0.5}$, as expected for a constant ionization parameter. Using a sample of objects with broad-line radii determined from reverberation mapping, it is shown that the FWHM of MgII and H beta are consistent with following an exact one-to-one relation, as expected if both H beta and MgII are emitted at the same radius from the central ionizing source. The resulting virial black-hole mass estimator, based on rest-frame UV observables, is shown to reproduce black-hole mass measurements based on reverberation mapping to within a factor of 2.5 (1 sigma). Finally, the new UV black-hole mass estimator is shown to produce identical results to the established optical (H beta) estimator when applied to 128 intermediate-redshift (0.3<z<0.9) quasars drawn from the Large Bright Quasar Survey and the radio-selected Molonglo quasar sample. We therefore conclude that the new UV virial black-hole mass estimator can be reliably used to estimate the black-hole masses of quasars from z~0.25 through to the peak epoch of quasar activity at z~2.5 via optical spectroscopy alone.
Measuring the black hole masses of high redshift quasars
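A virial estimator of the kind described in the abstract above combines the BLR radius-luminosity relation ($R_{BLR} \propto L^{0.5}$) with the MgII line width ($M_{BH} \propto R_{BLR}\,{\rm FWHM}^2/G$). The sketch below shows only the functional form; the zero-point (6.86) and reference scales are illustrative placeholders, not the paper's calibrated coefficients.

```python
# Hedged sketch of a UV virial black-hole mass estimator (placeholder zero-point).
import math

def log_mbh_virial(l3000_erg_s, fwhm_mgii_km_s, zero_point=6.86):
    """log10(M_BH / M_sun) from the 3000 A continuum luminosity and MgII FWHM,
    assuming R_BLR ~ L^0.5 (constant ionization parameter) and M ~ R * FWHM^2."""
    return (zero_point
            + 0.5 * math.log10(l3000_erg_s / 1e44)   # R_BLR ~ L^0.5 term
            + 2.0 * math.log10(fwhm_mgii_km_s / 1000.0))  # virial FWHM^2 term

# e.g. a luminous quasar: L_3000 = 1e46 erg/s, FWHM(MgII) = 4000 km/s
print(round(log_mbh_virial(1e46, 4000.0), 2))
```

The logarithmic form makes the scalings explicit: a factor of 100 in luminosity adds 1 dex to the mass, while doubling the FWHM adds about 0.6 dex.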
This article is the last in a series on the isomorphism between the Lubin-Tate and Drinfeld towers. We prove the existence of an isomorphism between the compactly supported etale cohomology of the Lubin-Tate and Drinfeld towers and, more generally, between their equivariant cohomology complexes. We also prove the existence of a geometric local Jacquet-Langlands correspondence between certain equivariant rigid etale sheaves on the Gross-Hopkins period space $\mathbb{P}^{n-1}$ and the Drinfeld period space $\Omega$.
Comparaison de la cohomologie des tours de Lubin-Tate et de Drinfeld et correspondance de Jacquet-Langlands geometrique
Homography estimation is an important task in computer vision applications, such as image stitching, video stabilization, and camera calibration. Traditional homography estimation methods heavily depend on the quantity and distribution of feature correspondences, leading to poor robustness in low-texture scenes. Learning-based solutions, on the contrary, try to learn robust deep features but demonstrate unsatisfactory performance in scenes with low overlap rates. In this paper, we address these two problems simultaneously by designing a contextual correlation layer (CCL). The CCL can efficiently capture the long-range correlation within feature maps and can be flexibly used in a learning framework. In addition, considering that a single homography cannot represent the complex spatial transformation in depth-varying images with parallax, we propose to predict multi-grid homography from global to local. Moreover, we equip our network with a depth perception capability by introducing a novel depth-aware shape-preserved loss. Extensive experiments demonstrate the superiority of our method over state-of-the-art solutions on a synthetic benchmark dataset and a real-world dataset. The codes and models will be available at https://github.com/nie-lang/Multi-Grid-Deep-Homography.
Depth-Aware Multi-Grid Deep Homography Estimation with Contextual Correlation
Emotion Recognition in Conversations (ERC) is an important and active research area. Recent work has shown the benefits of using multiple modalities (e.g., text, audio, and video) for the ERC task. In a conversation, participants tend to maintain a particular emotional state unless some stimulus evokes a change. There is a continuous ebb and flow of emotions in a conversation. Inspired by this observation, we propose a multimodal ERC model and augment it with an emotion-shift component that improves performance. The proposed emotion-shift component is modular and can be added to any existing multimodal ERC model (with a few modifications). We experiment with different variants of the model, and the results show that the inclusion of the emotion-shift signal helps the model outperform existing models for ERC on the MOSEI and IEMOCAP datasets.
Shapes of Emotions: Multimodal Emotion Recognition in Conversations via Emotion Shifts
Consider a balance law where the flux depends explicitly on the space variable. At jump discontinuities, modeling considerations may impose the defect in the conservation of some quantities, thus leading to non conservative products. Below, we deduce the evolution in the smooth case from the jump conditions at discontinuities. Moreover, the resulting framework enjoys well posedness and solutions are uniquely characterized. These results apply, for instance, to the flow of water in a canal with varying width and depth, as well as to the inviscid Euler equations in pipes with varying geometry.
Well Posedness and Characterization of Solutions to Non Conservative Products in Non Homogeneous Fluid Dynamics Equations
To a vector space $V$ equipped with a non-quasiclassical involutary solution of the quantum Yang-Baxter equation and a partition $\lambda$, we associate a vector space $V_\lambda$ and compute its dimension. The functor $V\mapsto V_\lambda$ is an analogue of the well-known Schur functor. The category generated by the objects $V_\lambda$ is called the Schur-Weyl category. We suggest a way to construct some related twisted varieties resembling orbits of semisimple elements in $sl(n)^*$. We consider in detail a particular case of such "twisted orbits", namely the twisted non-quasiclassical hyperboloid, and we define the twisted Casimir operator on it. In this case, we obtain a formula resembling the Weyl formula, describing the asymptotic behavior of the function $N(\lambda)=\#\{\lambda_i\leq\lambda\}$, where $\lambda_i$ are the eigenvalues of this operator.
Schur-Weyl Categories and Non-quasiclassical Weyl Type Formula
Aims: Our aim is to obtain ages that are as accurate as possible for the seven nearby open clusters alpha Per, Coma Ber, IC 2602, NGC 2232, NGC 2451A, NGC 2516, and NGC 6475, each of which contains at least one magnetic Ap or Bp star. Simultaneously, we test the current calibrations of Te and luminosity for the Ap/Bp star members, and clearly identify blue stragglers in the clusters studied. Methods: We explore the possibility that isochrone fitting in the theoretical Hertzsprung-Russell diagram (i.e. log(L/Lsun) vs. log Te), rather than in the conventional colour-magnitude diagram, can provide more precise and accurate cluster ages, with well-defined uncertainties. Results: Well-defined ages are found for all the clusters studied. For the nearby clusters studied, the derived ages are not very sensitive to the small uncertainties in distance, reddening, membership, metallicity, or choice of isochrones. Our age determinations are all within the range of previously determined values, but the associated uncertainties are considerably smaller than the spread in recent age determinations from the literature. Furthermore, examination of proper motions and HR diagrams confirms that the Ap stars identified in these clusters are members, and that the presently accepted temperature scale and bolometric corrections for Ap stars are approximately correct. We show that in these theoretical HR diagrams blue stragglers are particularly easy to identify. Conclusions: Constructing the theoretical HR diagram of a nearby open cluster makes possible an accurate age determination, with well-defined uncertainty. This diagnostic also provides a useful tool for studying unusual stars such as Ap stars and blue stragglers.
Accurate age determinations of several nearby open clusters containing magnetic Ap stars
Object counting aims to estimate the number of objects in images. The leading counting approaches focus on the single-category counting task and achieve impressive performance. Note that there are multiple categories of objects in real scenes. Multi-class object counting expands the scope of application of the object counting task. The multi-target detection task can achieve multi-class object counting in some scenarios. However, it requires the dataset to be annotated with bounding boxes. Compared with the point annotations in mainstream object counting tasks, coordinate box-level annotations are more difficult to obtain. In this paper, we propose a simple yet efficient counting network based on point-level annotations. Specifically, we first change the traditional output channel from one to the number of categories to achieve multi-class counting. Since all categories of objects use the same feature extractor in our proposed framework, their features mutually interfere in the shared feature space. We further design a multi-mask structure to suppress harmful interaction among objects. Extensive experiments on challenging benchmarks illustrate that the proposed method achieves state-of-the-art counting performance.
Dilated-Scale-Aware Attention ConvNet For Multi-Class Object Counting
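The channel-per-category idea can be illustrated with a minimal toy sketch (our own construction, not the paper's network or pipeline): point annotations become one density target per class, and per-class counts are recovered by integrating each output channel.

```python
import numpy as np

# Toy setup: C output channels, one per category (illustrative sizes).
H, W, C = 32, 32, 3
# Point-level annotations: class index -> list of (row, col) object centers.
points = {0: [(4, 5), (10, 20)], 1: [(15, 15)], 2: []}

# One density target per class; in practice a Gaussian blur would be
# applied to each point, but the channel still integrates to the count.
target = np.zeros((C, H, W))
for c, pts in points.items():
    for r, col in pts:
        target[c, r, col] = 1.0

counts = target.sum(axis=(1, 2))  # per-class counts by integration
print(counts)                     # [2. 1. 0.]
```

A multi-class counting network trained against such targets predicts one density map per category, so counting all categories needs only a single forward pass.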
Over the past decade, capacitive deionization (CDI) has seen a surge of attention in the field of water desalination and can now be considered an important technology class, along with reverse osmosis and electrodialysis. While many of the recently developed technologies no longer use a mechanism that follows the strict definition of the term "capacitive", these methods nevertheless share many common elements that encourage treating them with similar metrics and analyses. Specifically, they all involve electrically driven removal of ions from a feed stream, storage in an electrode (i.e., ion electrosorption) and release, in charge/discharge cycles. Grouping all these methods in the technology class of CDI makes it possible to treat evolving new technologies in standardized terms and compare them to other technologies in the same class.
Capacitive Deionization -- defining a class of desalination technologies
We use the oxDNA coarse-grained model to provide a detailed characterization of the fundamental structural properties of DNA origamis, focussing on archetypal 2D and 3D origamis. The model reproduces well the characteristic pattern of helix bending in a 2D origami, showing that it stems from the intrinsic tendency of anti-parallel four-way junctions to splay apart, a tendency that is enhanced both by less screened electrostatic interactions and by increased thermal motion. We also compare to the structure of a 3D origami whose structure has been determined by cryo-electron microscopy. The oxDNA average structure has a root-mean-square deviation from the experimental structure of 8.4 Angstrom, which is of the order of the experimental resolution. These results illustrate that the oxDNA model is capable of providing detailed and accurate insights into the structure of DNA origamis, and has the potential to be used to routinely pre-screen putative origami designs.
Coarse-grained modelling of the structural properties of DNA origami
I extend the usual linear-theory formula for large-scale clustering in redshift-space to include gravitational redshift. The extra contribution to the standard galaxy power spectrum is suppressed by k_c^{-2}, where k_c=c k/a H (k is the wavevector, a the expansion factor, and H=\dot{a}/a), and is thus effectively limited to the few largest-scale modes and very difficult to detect; however, a correlation, \propto k_c^{-1}, is generated between the real and imaginary parts of the Fourier space density fields of two different types of galaxy, which would otherwise be zero, i.e., the cross-power spectrum has an imaginary part: P_{ab}(k,\mu)/P(k)=(b_a+f\mu^2)(b_b+f\mu^2) -i(3\Omega_m/2)(\mu/k_c)(b_a-b_b)+\mathcal{O}(k_c^{-2}), where P(k) is the real-space mass-density power spectrum, b_i are the galaxy biases, \mu is the cosine of the angle between the wavevector and line of sight, and f=dlnD/dlna (D is the linear growth factor). The total signal-to-noise of measurements of this effect is not dominated by the largest scales -- it converges at k~0.05 h/Mpc. This gravitational redshift result is pedagogically interesting, but naive in that it is gauge dependent and there are other effects of similar form and size, related to the transformation between observable and proper coordinates. I include these effects, which add other contributions to the coefficient of \mu/k_c, and add a \mu^3/k_c term, but don't qualitatively change the picture. The leading source of noise in the measurement is galaxy shot-noise, not sample variance, so developments that allow higher S/N surveys can make this measurement powerful, although it would otherwise be only marginally detectable in a JDEM-scale survey.
Gravitational redshift and other redshift-space distortions of the imaginary part of the power spectrum
We present two new nonlinearity tolerant modulation formats at spectral efficiencies lower than 4bits/4D-symbol, obtained using a simplified bit-to-symbol mapping approach to set-partition PDM-QPSK in 8 dimensions.
Nonlinearity-tolerant 8D modulation formats by set-partitioning PDM-QPSK
What is the role of the constants of nature in physical theory? I hypothesize that the observable universe, u0, constitutes a Universal Turing Machine (UTM) constrained by algorithmically random logical tape parameters defining its material properties (a physical UTM). The finite non-zero empirical values of Planck's constant, h, and other constants of nature exemplify those logical parameters. Their algorithmic randomness is necessary and sufficient for the consistent operation of a physical UTM. At any given time, ti, these constants correspond to the first n random halt digits, Omega-n, of Chaitin's Halting Probability, Omega. Planck's equation E=hv and Boltzmann's relation S=kLogW are shown to apply to the operation of a physical UTM. The genomic evolution of u0 in constants of nature space (CON space) from an undecidable state in u0's Planck era to its current ordered condition occurs through the algorithmically random, symmetry-breaking addition of new constants to the laws by which u0 operates -- a process called logical tunneling. The temperature of u0 for t<=tp is shown to be T=0K. The energy dissipated when a physical UTM clears its memory after each computation is proposed as a candidate for cold dark matter (CDM) and is calculated to comprise 87.5% of the cosmological matter content, Omega-M, of u0. This result concurs with current astronomical estimates that 87.1% of the matter content of u0 consists of CDM. The energy incorporated in u0 through the process of logical tunneling from undecidable states of the complete Universe, E, to decidable states of u0 is suggested as a candidate for the unexplained "dark energy", Omega-X, hypothesized to drive the accelerating cosmological expansion of space and believed to constitute 66% of the critical mass, Omega-0, of the observable universe.
Some Properties of the Random Universe
Consider a convex cone in three-dimensional Minkowski space which either contains the lightcone or is contained in it. This work considers mean curvature flow of a proper spacelike strictly mean convex disc in the cone which is graphical with respect to its rays. Its boundary is required to have constant intersection angle with the boundary of the cone. We prove that the corresponding parabolic boundary value problem for the graph admits a solution for all time which rescales to a self-similarly expanding solution.
A capillary problem for spacelike mean curvature flow in a cone of Minkowski space
Various functions of a network of excitable units can be enhanced if the network is in the `critical regime', where excitations are, on average, neither damped nor amplified. An important question is how can such networks self-organize to operate in the critical regime. Previously it was shown that regulation via resource transport on a secondary network can robustly maintain the primary network dynamics in a balanced state where activity doesn't grow or decay. Here we show that this inter-network regulation process robustly produces a power-law distribution of activity avalanches, as observed in experiments, over ranges of model parameters spanning orders of magnitude. We also show that the resource transport over the secondary network protects the system against the destabilizing effect of local variations in parameters and heterogeneity in network structure. For homogeneous networks, we derive a reduced 3-dimensional map which reproduces the behavior of the full system.
Dynamic regulation of resource transport induces criticality in interdependent networks of excitable units
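The power-law avalanche statistics at criticality can be illustrated with a toy branching-process simulation (our own sketch; it ignores the paper's two-network resource-transport mechanism and only shows what "neither damped nor amplified" excitation implies for avalanche sizes):

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma=1.0, cap=10_000):
    """Total activity of one avalanche: each active unit excites
    Poisson(sigma) units in the next step. sigma = 1 is the critical
    regime; the cap just guards against unbounded critical avalanches."""
    active, total = 1, 1
    while active and total < cap:
        active = rng.poisson(sigma * active)
        total += active
    return total

sizes = np.array([avalanche_size() for _ in range(5000)])
# At criticality the size distribution is heavy-tailed (roughly s^{-3/2});
# a subcritical run (sigma < 1) would instead decay exponentially.
print(np.mean(sizes > 100))
```

In the paper, the secondary resource network tunes the effective branching ratio toward this critical value automatically rather than having it set by hand.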
We present a theoretical analysis of a Majorana-based qubit consisting of two topological superconducting islands connected via a Josephson junction. The qubit is operated by electrostatic gates which control the coupling of two of the four Majorana zero modes. At the end of the operation, readout is performed in the charge basis. Even though the operations are not topologically protected, the proposed experiment can potentially shed light on the coherence of the parity degree of freedom in Majorana devices and serve as a first step towards topological Majorana qubits. We discuss in detail the charge-stability diagram and its use for characterizing the parameters of the devices, including the overlap of the Majorana edge states. We describe the multi-level spectral properties of the system and present a detailed study of its controlled coherent oscillations, as well as decoherence resulting from coupling to a non-Markovian environment. In particular, we study a gate-controlled protocol where conversion between Coulomb-blockade and transmon regimes generates coherent oscillations of the qubit state due to the overlap of Majorana modes. We show that, in addition to fluctuations of the Majorana coupling, considerable measurement errors may be accumulated during the conversion intervals when electrostatic fluctuations in the superconducting islands are present. These results are also relevant for several proposed implementations of topological qubits which rely on readout based on charge detection.
Four-Majorana qubit with charge readout: dynamics and decoherence
Among the recent achievements of sodium-ion battery (SIB) electrode materials, hybridization of two-dimensional (2D) materials is one of the most interesting approaches. In this work, we propose to use the 2D hybrid composites of SnS2 with graphene or graphene oxide (GO) layers as SIB anodes, based on first-principles calculations of their atomic structures, sodium intercalation energetics and electronic properties. The calculations reveal that a graphene or GO film can effectively support not only the stable formation of a hetero-interface with the SnS2 layer but also the easy intercalation of sodium atoms with low migration energy and acceptably low volume change. The electronic charge density differences and the local density of states indicate that electrons are transferred from the graphene or GO layer to the SnS2 layer, facilitating the formation of the hetero-interface and improving the electronic conductance of the semiconducting SnS2 layer. These 2D hybrid composites of SnS2/G or GO are concluded to be more promising candidates for SIB anodes compared with the individual monolayers.
Two-dimensional hybrid composites of SnS2 with graphene and graphene oxide for improving sodium storage: A first-principles study
We study in this paper the validity of the mean ergodic theorem along \emph{left} F\o lner sequences in a countable amenable group $G$. Although the \emph{weak} ergodic theorem always holds along \emph{any} left F\o lner sequence in $G$, we provide examples where the \emph{mean} ergodic theorem fails in quite dramatic ways. On the other hand, if $G$ does not admit any ICC quotients, e.g. if $G$ is virtually nilpotent, then we prove that the mean ergodic theorem does indeed hold along \emph{any} left F\o lner sequence. In the case when a unitary representation of a countable amenable group is induced from a unitary representation of a "sufficiently thin" subgroup, we prove that the mean ergodic theorem holds along any left F\o lner sequence for this representation. Furthermore, we show that every countable (infinite) amenable group $L$ embeds into a countable group $G$ which admits a unitary representation with the property that for any left F\o lner sequence $(F_n)$ in $L$, there exists a sequence $(s_n)$ in $G$ such that the mean (but \emph{not} the weak) ergodic theorem fails for this representation along the sequence $(F_n s_n)$. Finally, we provide examples of countable (not necessarily amenable) groups $G$ with proper, infinite-index subgroups $H$, so that the \emph{pointwise} ergodic theorem holds for averages along \emph{any} strictly increasing and nested sequence of finite subsets of the coset $G/H$.
Ergodic Theorems for coset spaces
We prove an effective version of a result due to Einsiedler, Mozes, Shah and Shapira on the asymptotic distribution of primitive rational points on expanding closed horospheres in the space of lattices. Key ingredients of our proof include recent bounds on matrix Kloosterman sums due to Erd\'elyi and T\'oth, results by Clozel, Oh and Ullmo on the effective equidistribution of Hecke points, and Rogers' integration formula in the geometry of numbers. As an application of the main theorem, we also obtain a result on the limit distribution of the number of small solutions of a random system of linear congruences to a large modulus. Furthermore, as a by-product of our proofs, we obtain a sharp bound on the number of nonsquare matrices over a finite field $\mathbb{F}_p$ with small entries and of a given size and rank.
Effective equidistribution of primitive rational points on expanding horospheres
Recently, topological superconducting states have attracted a lot of interest. In this work, we consider a topological superconductor with $Z_2$ topological mirror order [1] and s$\pm$-wave superconducting pairing symmetry, within a two-orbital model originally designed for iron-based superconductivity [2]. We predict the existence of gapless edge states. We also study the local electronic structure around an adsorbed interstitial magnetic impurity in the system, and find the existence of low-energy in-gap bound states even with a weak spin polarization on the impurity. We also discuss the relevance of our results to the recent STM experiment on the Fe(Te,Se) compound with an adsorbed Fe impurity [3], for which our density functional calculations show the Fe impurity is spin polarized.
Edge states and local electronic structure around an adsorbed impurity in a topological superconductor
We obtain various versions of classical Lieb--Thirring bounds for one- and multi-dimensional complex Jacobi matrices. Our method is based on Fan-Mirski Lemma and seems to be fairly general.
Lieb--Thirring bounds for complex Jacobi matrices
Network inference is a rapidly advancing field, with new methods being proposed on a regular basis. Understanding the advantages and limitations of different network inference methods is key to their effective application in different circumstances. The common structural properties shared by diverse networks naturally pose a challenge when it comes to devising accurate inference methods, but surprisingly, there is a paucity of comparison and evaluation methods. Historically, every new methodology has only been tested against \textit{gold standard} (true values) purpose-designed synthetic and real-world (validated) biological networks. In this paper we aim to assess the impact of taking into consideration aspects of topological and information content in the evaluation of the final accuracy of an inference procedure. Specifically, we will compare the best inference methods, in both graph-theoretic and information-theoretic terms, for preserving topological properties and the original information content of synthetic and biological networks. New methods for performance comparison are introduced by borrowing ideas from gene set enrichment analysis and by applying concepts from algorithmic complexity. Experimental results show that no individual algorithm outperforms all others in all cases, and that the challenging and non-trivial nature of network inference is evident in the struggle of some of the algorithms to turn in a performance that is superior to random guesswork. Therefore special care should be taken to suit the method to the purpose at hand. Finally, we show that evaluations from data generated using different underlying topologies have different signatures that can be used to better choose a network reconstruction method.
Evaluating Network Inference Methods in Terms of Their Ability to Preserve the Topology and Complexity of Genetic Networks
The temperature dependence of the magnetization and specific heat of the ferromagnetic compound DyFe2Zn20 has been measured in detail in various magnetic fields. We have observed anomalous magnetic behavior, i.e., a strong anisotropy at 2 K, disappearance of this anisotropy between approximately 30 K and T_c, and anomalous behavior of the specific heat in magnetic fields near 20 K. These anomalous phenomena have been analyzed based on the strong exchange interaction between the Fe itinerant electrons and the Dy localized electrons, as well as the crystalline electric field, the Zeeman energy, and the usual exchange interaction between two Dy atoms. The higher T_c of DyFe2Zn20 compared with that of DyRu2Zn20 is caused by this exchange interaction between the Fe and Dy atoms.
Enhancement of Curie temperature due to the coupling between Fe itinerant electrons and Dy localized electrons in DyFe2Zn20
The performance of automatic speech recognition (ASR) systems has advanced substantially in recent years, particularly for languages for which a large amount of transcribed speech is available. Unfortunately, for low-resource languages, such as minority languages, regional languages or dialects, ASR performance generally remains much lower. In this study, we investigate whether data augmentation techniques could help improve low-resource ASR performance, focusing on four typologically diverse minority languages or language variants (West Germanic: Gronings, West-Frisian; Malayo-Polynesian: Besemah, Nasal). For all four languages, we examine the use of self-training, where an ASR system trained with the available human-transcribed data is used to generate transcriptions, which are then combined with the original data to train a new ASR system. For Gronings, for which there was a pre-existing text-to-speech (TTS) system available, we also examined the use of TTS to generate ASR training data from text-only sources. We find that using a self-training approach consistently yields improved performance (a relative WER reduction up to 20.5% compared to using an ASR system trained on 24 minutes of manually transcribed speech). The performance gain from TTS augmentation for Gronings was even stronger (up to 25.5% relative reduction in WER compared to a system based on 24 minutes of manually transcribed speech). In sum, our results show the benefit of using self-training or (if possible) TTS-generated data as an efficient solution to overcome the limitations of data availability for resource-scarce languages in order to improve ASR performance.
Making More of Little Data: Improving Low-Resource Automatic Speech Recognition Using Data Augmentation
Although carbon nanotubes consist of honeycomb carbon, they have never been fabricated from graphene directly. Here, it is shown by quantum molecular-dynamics simulations and classical continuum-elasticity modeling, that graphene nanoribbons can, indeed, be transformed into carbon nanotubes by means of twisting. The chiralities of the tubes thus fabricated can be not only predicted but also externally controlled. This twisting route is an opportunity for nanofabrication, and is easily generalizable to ribbons made of other planar nanomaterials.
Twisting Graphene Nanoribbons into Carbon Nanotubes
Buoyant bubbles of relativistic plasma in cluster cores plausibly play a key role in conveying the energy from a supermassive black hole to the intracluster medium (ICM) - the process known as radio-mode AGN feedback. Energy conservation guarantees that a bubble loses most of its energy to the ICM after crossing several pressure scale heights. However, actual processes responsible for transferring the energy to the ICM are still being debated. One attractive possibility is the excitation of internal waves, which are trapped in the cluster's core and eventually dissipate. Here we show that a sufficient condition for efficient excitation of these waves in stratified cluster atmospheres is flattening of the bubbles in the radial direction. In our numerical simulations, we model the bubbles phenomenologically as rigid bodies buoyantly rising in the stratified cluster atmosphere. We find that the terminal velocities of the flattened bubbles are small enough so that the Froude number ${\rm Fr}\lesssim 1$. The effects of stratification make the dominant contribution to the total drag force balancing the buoyancy force. In particular, clear signs of internal waves are seen in the simulations. These waves propagate horizontally and downwards from the rising bubble, spreading their energy over large volumes of the ICM. If our findings are scaled to the conditions of the Perseus cluster, the expected terminal velocity is $\sim100-200{\,\rm km\,s^{-1}}$ near the cluster cores, which is in broad agreement with direct measurements by the Hitomi satellite.
Generation of Internal Waves by Buoyant Bubbles in Galaxy Clusters and Heating of Intracluster Medium
We consider connectivity properties of the Branching Interlacements model in $\mathbb{Z}^d,~d\ge5$, recently introduced by Angel, R\'ath and Zhu in 2016. Using stochastic dimension techniques we show that every two vertices visited by the branching interlacements are connected via at most $\lceil d/4\rceil$ conditioned critical branching random walks from the underlying Poisson process, and that this upper bound is sharp. In particular every such two branching random walks intersect if and only if $5\le d\le 8$. The stochastic dimension of branching random walk result is of independent interest. We additionally obtain heat kernel bounds for branching random walks conditioned on survival.
Connectivity properties of Branching Interlacements
We follow up our photometric study of the post-outburst evolution of the FU Ori object V960 Mon with a complementary spectroscopic study at high dispersion that uses time series spectra from Keck/HIRES. Consistent with the photometric results reported in Carvalho et al. 2023, we find that the spectral evolution of V960 Mon corresponds to a decrease in the temperature of the inner disk, driven by a combination of decreasing accretion rate and increasing inner disk radius. We also find that although the majority of the absorption lines are well-matched by our accretion disk model spectrum, there are several strong absorption line families and a few emission lines that are not captured by the model. By subtracting the accretion disk model from the data at each epoch, we isolate the wind/outflow components of the system. The residuals show both broad and highly blueshifted profiles, as well as narrow and only slightly blueshifted profiles, with some lines displaying both types of features.
Disk Cooling and Wind Lines As Seen In the Spectral Line Evolution of V960 Mon
We give a full classification, in terms of periodic skew diagrams, of irreducible semisimple modules in category O for the degenerate double affine Hecke algebra of type A which can be realized as submodules of Verma modules.
Irreducible modules for the degenerate double affine Hecke algebra of type $A$ as submodules of Verma modules
We study multi-unit auctions in which bidders have limited knowledge of opponent strategies and values. We characterize optimal prior-free bids; these bids minimize the maximal loss in expected utility resulting from uncertainty surrounding opponent behavior. Optimal bids are readily computable despite bidders having multi-dimensional private information, and in certain cases admit closed-form solutions. In the pay-as-bid auction the minimax-loss bid is unique; in the uniform-price auction the minimax-loss bid is unique if the bidder is allowed to determine the quantities for which they bid, as in many practical applications. We compare minimax-loss bids and auction outcomes across auction formats, and derive testable predictions.
Bidding in Multi-Unit Auctions under Limited Information
We prove that if $\mathcal{A}$ is a complex, unital semisimple Banach algebra and $\mathcal{B}$ is a complex, unital Banach algebra having a separating family of finite-dimensional irreducible representations, then any unital linear operator from $\mathcal{A}$ onto $\mathcal{B}$ which preserves the spectral radius is a Jordan morphism.
Spectral isometries onto algebras having a separating family of finite-dimensional irreducible representations
With the increasing adoption of electric vehicles (EVs) in recent years, the impact of EV charging activities on the power grid becomes more and more significant. In this article, an optimal scheduling algorithm which combines smart EV charging and V2G grid service is developed to integrate EVs into the power grid as distributed energy resources, with improved system cost performance. Specifically, an optimization problem is formulated and solved at each EV charging station according to the control signal from an aggregated control center and user charging behavior prediction by mean estimation and linear regression. The control center collects the distributed optimization results and updates the control signal periodically. The iteration continues until it converges to the optimal schedule. Experimental results show that this algorithm helps fill the valleys and shave the peaks in electric load profiles within a microgrid, while the energy demand of individual drivers can be satisfied.
Distributed Optimal Vehicle Grid Integration Strategy with User Behavior Prediction
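The iterate-until-convergence structure described in the abstract can be sketched as follows (a simplified stand-in for the paper's algorithm; the control signal, damped update, and all numbers are our own assumptions):

```python
import numpy as np

T = 24                                                   # hourly time slots
base = 10 + 5 * np.sin(np.linspace(0, 2 * np.pi, T))     # non-EV base load (kW)
needs = [6.0, 8.0, 5.0]                                  # energy per EV (kWh)
cap = 3.0                                                # per-EV rate cap (kW)

schedules = [np.zeros(T) for _ in needs]
for _ in range(50):                                      # control-center iterations
    for i, E in enumerate(needs):
        # Control signal: aggregate load excluding EV i's own schedule.
        signal = base + sum(schedules) - schedules[i]
        # Local response: charge in the cheapest slots, respecting the cap.
        x = np.zeros(T)
        for t in np.argsort(signal):
            x[t] = min(cap, E - x.sum())
            if x.sum() >= E:
                break
        # Damped update toward the local optimum to aid convergence.
        schedules[i] = 0.5 * schedules[i] + 0.5 * x

total = base + sum(schedules)
print(total.max() - total.min())   # load spread after scheduling (kW)
```

Charging piles into the base-load valleys, so the final load spread is smaller than it would be if the same energy were drawn during peak hours; in the full scheme the control signal would also encode V2G discharge opportunities.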
One of the main challenges in autonomous racing is to design algorithms for motion planning at high speed, and across complex racing courses. End-to-end trajectory synthesis has been previously proposed where the trajectory for the ego vehicle is computed based on camera images from the racecar. This is done in a supervised learning setting using behavioral cloning techniques. In this paper, we address the limitations of behavioral cloning methods for trajectory synthesis by introducing Differential Bayesian Filtering (DBF), which uses probabilistic B\'ezier curves as a basis for inferring optimal autonomous racing trajectories based on Bayesian inference. We introduce a trajectory sampling mechanism and combine it with a filtering process which is able to push the car to its physical driving limits. The performance of DBF is evaluated on the DeepRacing Formula One simulation environment and compared with several other trajectory synthesis approaches as well as human driving performance. DBF achieves the fastest lap time, and the fastest speed, by pushing the racecar closer to its limits of control while always remaining inside track bounds.
This is the Way: Differential Bayesian Filtering for Agile Trajectory Synthesis
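The probabilistic-B\'ezier sampling idea can be sketched with a minimal example (our own illustration; the track model, noise scale, and selection rule are placeholders, not the DBF implementation):

```python
import numpy as np
from math import comb

def bezier(ctrl, ts):
    """Evaluate a Bezier curve with control points ctrl (n+1, 2) at times ts."""
    n = len(ctrl) - 1
    basis = np.array([[comb(n, k) * t**k * (1 - t)**(n - k) for k in range(n + 1)]
                      for t in ts])
    return basis @ ctrl

rng = np.random.default_rng(1)
mean_ctrl = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, -0.5], [3.0, 0.0]])
ts = np.linspace(0.0, 1.0, 50)

# Sample Gaussian perturbations of the control points; keep the shortest
# trajectory that stays inside the (illustrative) track half-width of 1.0.
best, best_len = None, np.inf
for _ in range(200):
    ctrl = mean_ctrl + rng.normal(scale=0.2, size=mean_ctrl.shape)
    pts = bezier(ctrl, ts)
    if np.all(np.abs(pts[:, 1]) < 1.0):
        length = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
        if length < best_len:
            best, best_len = pts, length
```

The hard argmin over feasible samples is only for illustration; as the abstract describes, DBF instead weights candidate trajectories through Bayesian inference over the probabilistic curve parameters.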
Context: The Crab pulsar underwent its largest timing glitch on 2017 Nov 8. The event was discovered at radio wavelengths and was followed at soft X-ray energies by observatories such as XPNAV and NICER. Aims: This work aims to compare the glitch behavior at the two wavelengths mentioned above. Preliminary work in this regard has been done by the X-ray satellite XPNAV. NICER, with its far superior sensitivity, is expected to reveal much more detailed behavior. Methods: NICER has accumulated more than 301 kiloseconds of data on the Crab pulsar, equivalent to more than 3.3 billion soft X-ray photons. These data were first processed using the standard NICER analysis pipeline. The arrival times of the X-ray photons were then referred to the solar system barycenter. Specific analyses were then performed to study the behavior outlined in the following sections, taking dead time into account. Results: The variation of the rotation frequency of the Crab pulsar and its time derivative during the glitch is essentially identical at radio and X-ray energies. The following properties of the Crab pulsar remain essentially constant before and after the glitch: the total X-ray flux; the flux, widths, and peaks of the two components of its integrated profile; and the soft X-ray spectrum. There is no evidence for giant pulses at X-ray energies. However, the timing noise of the Crab pulsar shows quasi-sinusoidal variation before the glitch, with increasing amplitude, which is absent after the glitch. Conclusions: Even the strongest glitch in the Crab pulsar appears not to affect all but one of the properties mentioned above, at either frequency. The fact that the timing noise appears to change due to the glitch is an important clue to unravel, as this is still an unexplained phenomenon.
NICER observations of the Crab pulsar glitch of 2017 November
We study the time-domain acoustic scattering problem by a cluster of small holes (i.e. sound-soft obstacles). Based on the retarded boundary integral equation method, we derive the asymptotic expansion of the scattered field as the size of the holes goes to zero. Under certain geometrical constraints on the size and the minimum distance of the holes, we show that the scattered field is approximated by a linear combination of point-sources where the weights are given by the capacitance of each hole and the causal signals (of these point-sources) can be computed by solving a linear algebraic system that is retarded in time. A rigorous justification of the asymptotic expansion and the unique solvability of the linear algebraic system are shown under natural conditions on the cluster of holes. As an application of the asymptotic expansion, we derive, in the limit case when the holes are densely distributed and occupy a bounded domain, the equivalent effective acoustic medium (an equivalent mass density characterized by the capacitance of the holes) that generates, approximately, the same scattered field as the cluster of holes. Conversely, given a locally variable, smooth and positive mass density, satisfying a certain subharmonicity condition, we can design a perforated material with holes, having appropriate capacitances, that generates approximately the same acoustic field as the acoustic medium modelled by the given mass density (and constant speed of propagation). Finally, we numerically verify the asymptotic expansions by comparing the asymptotic approximations with the numerical solutions of the scattered fields via the finite element method.
Analysis of the acoustic waves reflected by a cluster of small holes in the time-domain and the equivalent mass density
It was conjectured in \cite{Namikawa_ExtendedTorelli} that the Torelli map $M_g\to A_g$ associating to a curve its jacobian extends to a regular map from the Deligne-Mumford moduli space of stable curves $\bar{M}_g$ to the (normalization of the) Igusa blowup $\bar{A}_g^{\rm cent}$. A counterexample in genus $g=9$ was found in \cite{AlexeevBrunyate}. Here, we prove that the extended map is regular for all $g\le8$, thus completely solving the problem in every genus.
Extended Torelli map to the Igusa blowup in genus 6, 7, and 8
Conversational Artificial Intelligence (CAI) systems and Intelligent Personal Assistants (IPA), such as Alexa, Cortana, Google Home and Siri, are becoming ubiquitous in our lives, including those of children, the implications of which are receiving increased attention, specifically with respect to the effects of these systems on children's cognitive, social and linguistic development. Recent advances address the implications of CAI with respect to privacy, safety, security, and access. However, there is a need to connect and embed the ethical and technical aspects in the design. Using a case-study of a research and development project focused on the use of CAI in storytelling for children, this paper reflects on the social context within a specific case of technology development, as substantiated and supported by argumentation from within the literature. It describes the decision-making process behind the recommendations made in this case for their adoption in the creative industries. Further research that engages with developers and stakeholders in the ethics of storytelling through CAI is highlighted as a matter of urgency.
Interactive Storytelling for Children: A Case-study of Design and Development Considerations for Ethical Conversational AI
Exact recovery of $K$-sparse signals $x \in \mathbb{R}^{n}$ from linear measurements $y=Ax$, where $A\in \mathbb{R}^{m\times n}$ is a sensing matrix, arises in many applications. The orthogonal matching pursuit (OMP) algorithm is widely used for reconstructing $x$. A fundamental question in the performance analysis of OMP is the characterization of the probability of exact recovery of $x$ for a random matrix $A$, and of the minimal $m$ needed to guarantee a target recovery performance. In many practical applications, in addition to sparsity, $x$ also has some additional properties. This paper shows that these properties can be used to refine the answer to the above question. We first show that prior information on the nonzero entries of $x$ can be used to provide an upper bound on $\|x\|_1^2/\|x\|_2^2$. Then, we use this upper bound to develop a lower bound on the probability of exact recovery of $x$ using OMP in $K$ iterations. Furthermore, we develop a lower bound on the number of measurements $m$ to guarantee that the exact recovery probability using $K$ iterations of OMP is no smaller than a given target probability. Finally, we show that when $K=O(\sqrt{\ln n})$, as both $n$ and $K$ go to infinity, for any $0<\zeta\leq 1/\sqrt{\pi}$, $m=2K\ln (n/\zeta)$ measurements are sufficient to ensure that the probability of exactly recovering any $K$-sparse $x$ is no lower than $1-\zeta$ with $K$ iterations of OMP. For $K$-sparse $\alpha$-strongly decaying signals and for $K$-sparse $x$ whose nonzero entries independently and identically follow the Gaussian distribution, the number of measurements sufficient for exact recovery with probability no lower than $1-\zeta$ reduces further to $m=(\sqrt{K}+4\sqrt{\frac{\alpha+1}{\alpha-1}\ln(n/\zeta)})^2$ and asymptotically $m\approx 1.9K\ln (n/\zeta)$, respectively.
Signal-Dependent Performance Analysis of Orthogonal Matching Pursuit for Exact Sparse Recovery
If F is a type-definable family of commensurable subsets, subgroups or sub-vector spaces in a metric structure, then there is an invariant subset, subgroup or sub-vector space commensurable with F. This in particular applies to type-definable or hyper-definable objects in a classical first-order structure.
A metric version of Schlichting's Theorem
A double-tip scanning tunneling microscope with nanometer scale tip separation has the ability to access the single electron Green's function in real and momentum space based on second order tunneling processes. Experimental realization of such measurements has been limited to quasi-one-dimensional systems due to the extremely small signal size. Here we propose an alternative approach to obtain such information by exploiting the current-current correlations from the individual tips, and present a theoretical formalism to describe it. To assess the feasibility of our approach we make a numerical estimate for a $\sim$ 25 nm Pb nanoisland and show that the wavefunction in fact extends from tip-to-tip and the signal depends less strongly on increased tip separation in the diffusive regime than the one in alternative approaches relying on tip-to-tip conductance.
Modeling Green's functions measurements with two-tip scanning tunneling microscopy
The electroseismic model describes the coupling phenomenon of electromagnetic waves and seismic waves in fluid-immersed porous rock. Electric parameters have better contrast than elastic parameters, while seismic waves provide better resolution because of the short wavelength. The combination of these two different waves is prominent in oil exploration. Under some assumptions on the physical parameters, we derive a H\"older stability estimate for the inverse problem of recovering the electric parameters and the coupling coefficient from the knowledge of the fields in a small open domain near the boundary. The proof is based on a Carleman estimate for the electroseismic model.
An inverse problem for an electroseismic model describing the coupling phenomenon of electromagnetic and seismic waves
In this work we develop a fast saliency detection method that can be applied to any differentiable image classifier. We train a masking model to manipulate the scores of the classifier by masking salient parts of the input image. Our model generalises well to unseen images and requires a single forward pass to perform saliency detection, making it suitable for use in real-time systems. We test our approach on the CIFAR-10 and ImageNet datasets and show that the produced saliency maps are easily interpretable, sharp, and free of artifacts. We suggest a new metric for saliency and test our method on the ImageNet object localisation task. We achieve results outperforming other weakly supervised methods.
Real Time Image Saliency for Black Box Classifiers
Mobile phones pervade our daily lives and play ever expanding roles in many contexts. Their ubiquitousness makes them pivotal in empowering disabled people. However, if no inclusive approaches are provided, they become a strong vehicle of exclusion. Even though current solutions try to compensate for the lack of sight, not all information reaches the blind user. Good spatial ability is still required to make sense of the device and its interface, as well as the need to memorize positions on screen or keys and associated actions in a keypad. Those problems are compounded by many individual attributes, such as age, age of blindness onset or tactile sensitivity, which are often forgotten by designers. Worse, the entire blind population is recurrently thought of as homogeneous (often stereotypically so). Thus all users face the same solutions, ignoring their specific capabilities and needs. We usually ignore this diversity as we have the ability to adapt and become experts in interfaces that were probably maladjusted to begin with. This adaptation is not always within reach. Interaction with mobile devices is highly visually demanding, which widens this gap amongst blind people. It is paramount to understand the impact of individual differences and their relationship with demands to enable the deployment of more inclusive solutions. We explore individual differences among blind people and assess how they are related with mobile interface demands, both at low-level (e.g. performing an on-screen gesture) and high-level (text-entry) tasks. Results confirmed that different ability levels have a significant impact on the performance attained by a blind person. In particular, otherwise ignored attributes like tactile acuity, pressure sensitivity, spatial ability or verbal IQ have been shown to be matched with specific mobile demands and parametrizations.
User-Sensitive Mobile Interfaces: Accounting for Individual Differences amongst the Blind
In this work, we establish an analogue of the Erd\H{o}s-Stone theorem for weighted digraphs using the Regularity Lemma for digraphs. We give a stability result for oriented graphs and digraphs with a forbidden blow-up of the transitive triangle, and show that almost all oriented graphs and almost all digraphs with a forbidden blow-up of the transitive triangle are almost bipartite.
Typical structure of oriented graphs and digraphs with forbidden blow-up transitive triangle
We study the problem of correct solvability in the space $L_p(\mathbb R),$ $p\in[1,\infty)$ of the equation $$ -(r(x) y'(x))'+q(x)y(x)=f(x),\quad x\in \mathbb R $$ under the conditions $$r>0,\quad q\ge 0,\quad \frac{1}{r}\in L_1(\mathbb R),\quad q\in L_1(\mathbb R).$$
Principal fundamental system of solutions, The Hartman-Wintner problem and correct solvability of the general Sturm-Liouville equation
We present the results of our monitoring program to study the long-term variability of the Halpha line in high-mass X-ray binaries. We have carried out the most complete optical spectroscopic study of the global properties of high-mass X-ray binaries so far, with the analysis of more than 1100 spectra of 20 sources. Our aim is to characterise the optical variability timescales and study the interaction between the neutron star and the accreting material. Our results can be summarised as follows: i) we find that Be/X-ray binaries with narrow orbits are more variable than systems with long orbital periods, ii) we show that a Keplerian distribution of the gas particles provides a good description of the disks in Be/X-ray binaries, as it does in classical Be stars, iii) a decrease in the Halpha equivalent width is generally observed after major X-ray outbursts, iv) we confirm that the Halpha equivalent width correlates with disk radius, v) while systems with supergiant companions display multi-structured profiles, most of the Be/X-ray binaries show at some epoch double-peak asymmetric profiles, indicating that density inhomogeneities are a common property of the disks of Be/X-ray binaries, vi) the profile variability (V/R ratio) timescales are shorter and the Halpha equivalent widths are smaller in Be/X-ray binaries than in isolated Be stars, and vii) we provide new evidence that the disk in Be/X-ray binaries is on average denser than in classical Be stars.
Long-term optical variability of high-mass X-ray binaries. II. Spectroscopy
We investigate the late-time asymptotic behavior of solutions to nonlinear hyperbolic systems of conservation laws containing stiff relaxation terms. First, we introduce a Chapman-Enskog-type asymptotic expansion and derive an effective system of equations describing the late-time/stiff relaxation singular limit. The structure of this new system is discussed and the role of a mathematical entropy is emphasized. Second, we propose a new finite volume discretization which, in late-time asymptotics, allows us to recover a discrete version of the same effective asymptotic system. This is achieved provided we suitably discretize the relaxation term in a way that depends on a matrix-valued free-parameter, chosen so that the desired asymptotic behavior is obtained. Our results are illustrated with several models of interest in continuum physics, and numerical experiments demonstrate the relevance of the proposed theory and numerical strategy.
Late-time/stiff relaxation asymptotic-preserving approximations of hyperbolic equations
In this work, we focus on the situation where a significant amount of matter could be located close to the event horizon of the central black hole and how it affects the gravitational lensing signal. We consider a simple toy model where the matter is concentrated in the rather small region between the inner photon sphere, associated with the mass of the central black hole, and the outer photon sphere, associated with the total mass outside. If no photon sphere is present inside the matter distribution, then the effective potential displays an interesting trend, with maxima at the inner and outer photon spheres and the peak at the inner photon sphere higher than that at the outer photon sphere. In such a case we get three distinct sets of infinitely many relativistic images and Einstein rings, which occur due to the light rays that approach the black hole from a distant source and get reflected just outside the outer photon sphere, due to light rays that enter the outer photon sphere slightly above the outer peak and get reflected off the potential barrier inside the matter distribution, and due to the light rays that get reflected just outside the inner photon sphere. This kind of pattern of images is quite unprecedented. We show that since relativistic images are highly demagnified, only three images are prominently visible from an observational point of view in the presence of matter, as opposed to only one prominent image in the case of a single isolated black hole, and we also compute the time delay between them. This provides a smoking gun signature of the presence of a matter lump around the black hole. We further argue that if the mass of the black hole inferred from the observation of the size of its shadow is less than the mass inferred from the motion of objects around it, this signals the presence of matter in the vicinity of the black hole.
Gravitational lensing signature of matter distribution around Schwarzschild black hole
It is now clear that abundance variations from star to star among the light elements, particularly C, N, O, Na and Al, are ubiquitous within galactic globular clusters; they appear whenever data of high quality are obtained for a sufficiently large sample of stars within such a cluster. The correlations and anti-correlations among these elements and the range of variation of each element appear to be independent of stellar evolutionary state, with the exception that enhanced depletion of C and of O is sometimes seen just at the RGB tip. While the latter behavior is almost certainly due to internal production and mixing, the internal mixing hypothesis can now be ruled out for producing the bulk of the variations seen. We focus on the implications of our new data for any explanation invoking primordial variations in the proto-cluster or accretion of polluted material from a neighboring AGB star.
Chemical Abundance Inhomogeneities in Globular Cluster Stars
Doppler reflectometry is considered in a slab plasma model within the framework of analytical theory. The locality of the diagnostic is analyzed in both regimes: linear and nonlinear in the turbulence amplitude. Toroidal antenna focusing of the probing beam onto the cut-off is proposed and discussed as a method to increase the spatial resolution of the diagnostic. It is shown that, even in the case of a nonlinear regime of multiple scattering, the diagnostic can be used for an estimation (with certain accuracy) of the plasma poloidal rotation profile.
Analytical theory of Doppler reflectometry in slab plasma model
The effect of Coulomb interaction screening on non-relativistic free-free absorption is investigated by integrating the numerical continuum wave functions. The screened potential is taken to be of the Debye-H\"uckel (Yukawa) form with a screening length $D$. It is found that the values of the free-free Gaunt factors for different Debye screening lengths $D$, for a given initial electron energy $\epsilon_i$ and absorbed photon energy $\omega$, generally lie between those of the pure Coulomb field and the field-free case. However, for initial electron energies below 0.1 Ry and fixed photon energy, the Gaunt factors show dramatic enhancements (broad and narrow resonances) in the vicinities of the critical screening lengths $D_{nl}$, at which the energies of the $nl$ bound states in the potential merge into the continuum. These enhancements of the Gaunt factors can be significantly higher than their values in the unscreened (Coulomb) case over a broad range of $\epsilon_i$. The observed broad and narrow resonances in the Gaunt factors are related to the temporary formation of weakly bound (virtual) and resonant (quasi-bound) states of the low-energy initial electron in the Debye-H\"uckel potential when the screening length is in the vicinity of $D_{nl}$.
Resonances in nonrelativistic free-free Gaunt factors with screened Coulomb interaction
In this paper we present a new technique for efficiently implementing Large Eddy Simulation with the Discontinuous Galerkin method on unstructured meshes. In particular, we focus upon the approach to overcome the computational complexity that the additional degrees of freedom in Discontinuous Galerkin methods entail. The turbulence algorithms have been implemented within Fluidity, an open-source computational fluid dynamics solver. The model is tested with the well-known backward-facing step problem, and is shown to concur with published results.
Efficient Large Eddy Simulation for the Discontinuous Galerkin Method
A unified treatment of high energy collisions in QCD is presented. Using a probabilistic approach, we incorporate both perturbative (hard) and non-perturbative (soft) components in a consistent fashion, leading to a ``Heterotic Pomeron". As a Regge trajectory, it is nonlinear, approaching 1 in the limit $t\rightarrow -\infty$.
Heterotic Pomeron: A Unified Treatment of High Energy Hadronic Collisions in QCD
We present an idealized, spherical model of the evolution of a magnetized molecular cloud due to ambipolar diffusion. This model allows us to follow the quasi-static evolution of the cloud's core prior to collapse and the subsequent evolution of the remaining envelope. By neglecting the thermal pressure gradients in comparison with magnetic stresses and by assuming that the ion velocity is small compared with the neutral velocity, we are able to find exact analytic solutions to the MHD equations. We show that, in the case of a centrally condensed cloud, a core of finite mass collapses into the origin leaving behind a quasi-static envelope, whereas initially homogeneous clouds never develop any structure in the absence of thermal stresses, and collapse as a whole. Prior to the collapse of the core, the cloud's evolution is characterized by two phases: a long, quasi-static phase where the relevant timescale is the ambipolar diffusion time (treated in this paper), and a short, dynamical phase where the characteristic timescale is the free-fall time. The collapse of the core is an "outside-in" collapse. The quasi-static evolution terminates when the cloud becomes magnetically supercritical; thereafter its evolution is dynamical, and a singularity develops at the origin: a protostar. After the initial formation of the protostar, the outer envelope continues to evolve quasi-statically, while the region of dynamical infall grows with time: an "inside-out" collapse. We use our solution to estimate the magnetic flux trapped in the collapsing core and the mass accretion rate onto the newly formed protostar. Our results agree, within factors of order unity, with the numerical results of Fiedler & Mouschovias (1992) for the physical quantities in the midplane of the cloud.
Star Formation in Cold, Spherical, Magnetized Molecular Clouds
The breaking of supersymmetry is usually assumed to occur in a hidden sector. Two natural candidates for transmitting supersymmetry breaking from the hidden to the observable sector are gravity and the gauge interactions. Only the second one allows for supersymmetry breaking at low energies. I show how the two candidates deal with the flavor problem and the $\mu$-problem; I also briefly comment on the doublet-triplet and dark matter problems.
Advantages and Disadvantages of Supersymmetry Breaking at Low Energies
A new stochastic primal-dual algorithm for solving a composite optimization problem is proposed. It is assumed that all the functions/operators that enter the optimization problem are given as statistical expectations. These expectations are unknown but revealed across time through i.i.d. realizations. The proposed algorithm is proven to converge to a saddle point of the Lagrangian function. In the framework of monotone operator theory, the convergence proof relies on recent results on the stochastic Forward-Backward algorithm involving random monotone operators. An example of convex optimization under stochastic linear constraints is considered.
A Fully Stochastic Primal-Dual Algorithm
Complex network theory provides a powerful framework to statistically investigate the topology of local and non-local statistical interrelationships, i.e. teleconnections, in the climate system. Climate networks constructed from the same global climatological data set using the linear Pearson correlation coefficient or the nonlinear mutual information as a measure of dynamical similarity between regions, are compared systematically on local, mesoscopic and global topological scales. A high degree of similarity is observed on the local and mesoscopic topological scales for surface air temperature fields taken from AOGCM and reanalysis data sets. We find larger differences on the global scale, particularly in the betweenness centrality field. The global scale view on climate networks obtained using mutual information offers promising new perspectives for detecting network structures based on nonlinear physical processes in the climate system.
Complex networks in climate dynamics - Comparing linear and nonlinear network construction methods
Unlike fixed-gain robust control, which trades off performance with modeling uncertainty, direct adaptive control uses partial modeling information for online tuning. The present paper combines retrospective cost adaptive control (RCAC), a direct adaptive control technique for sampled-data systems, with online system identification based on recursive least squares (RLS) with variable-rate forgetting (VRF). The combination of RCAC and RLS-VRF constitutes data-driven RCAC (DDRCAC), where the online system identification is used to construct the target model, which defines the retrospective performance variable. This paper investigates the ability of RLS-VRF to provide the modeling information needed for the target model, especially nonminimum-phase (NMP) zeros. DDRCAC is applied to single-input, single-output (SISO) and multiple-input, multiple-output (MIMO) numerical examples with unknown NMP zeros, as well as several flight control problems, namely, unknown transition from minimum-phase to NMP lateral dynamics, flexible modes, flutter, and nonlinear planar missile dynamics.
Data-Driven Retrospective Cost Adaptive Control for Flight Control Application
Using the transfer matrix technique, we estimate the entropy of a gas of rods of size $k$ (called $k$-mers) that completely cover a square lattice. Our calculations were made considering three different constructions, using periodic and helical boundary conditions. One of those constructions, which we call the Profile Method, was based on the calculations performed by Dhar and Rajesh [Phys. Rev. E 103, 042130 (2021)] to obtain a lower limit to the entropy of very large chains placed on the square lattice. This method, as far as we know, was never used before to define the transfer matrix, but it turned out to be very useful, since it produces matrices with smaller dimensions than those obtained using other approaches. Our results were obtained for chain sizes ranging from $k=2$ to $k=10$ and are compared with results already available in the literature. In the case of dimers ($k=2$), our results are compatible with the exact result; for trimers ($k=3$), recently investigated by Ghosh et al. [Phys. Rev. E 75, 011115 (2007)], our results are also compatible, as they are with the simulational estimates obtained by Pasinetti et al. [Phys. Rev. E 104, 054136 (2021)] over the whole range of rod sizes. Our results are consistent with the asymptotic expression for the behavior of the entropy as a function of the size $k$ proposed by Dhar and Rajesh [Phys. Rev. E 103, 042130 (2021)] for very large rods ($k \gg 1$).
Entropy of rigid k-mers on a square lattice
The optical morphology of galaxies is strongly related to galactic environment, with the fraction of early-type galaxies increasing with local galaxy density. In this work we present the first analysis of the galaxy morphology-density relation in a cosmological hydrodynamical simulation. We use a convolutional neural network, trained on observed galaxies, to perform visual morphological classification of galaxies with stellar masses $M_\ast > 10^{10} \, \mathrm{M}_\odot$ in the EAGLE simulation into elliptical, lenticular and late-type (spiral/irregular) classes. We find that EAGLE reproduces both the galaxy morphology-density and morphology-mass relations. Using the simulations, we find three key processes that result in the observed morphology-density relation: (i) transformation of disc-dominated galaxies from late-type (spiral) to lenticular galaxies through gas stripping in high-density environments, (ii) formation of lenticular galaxies by merger-induced black hole feedback in low-density environments, and (iii) an increasing fraction of high-mass galaxies, which are more often elliptical galaxies, at higher galactic densities.
The galaxy morphology-density relation in the EAGLE simulation
Among the possible alternatives to the standard cosmological model ($\Lambda$CDM), coupled Dark Energy models postulate that Dark Energy (DE), seen as a dynamical scalar field, may interact with Dark Matter (DM), giving rise to a "fifth force" felt by DM particles only. In this paper, we study the impact of these cosmologies on the statistical properties of galaxy populations by combining high-resolution numerical simulations with semi-analytic models (SAM) of galaxy formation and evolution. New features have been implemented in the reference SAM in order to run it self-consistently and calibrate it on these cosmological simulations. They include an appropriate modification of the mass-temperature relation and of the baryon fraction in DM haloes, due to the different virial scalings and to the gravitational bias, respectively. Our results show that the predictions of our coupled-DE SAM do not differ significantly from theoretical predictions obtained with standard SAMs applied to a reference $\Lambda$CDM simulation, implying that the statistical properties of galaxies provide only a weak probe of these alternative cosmological models. On the other hand, we show that both galaxy bias and the galaxy pairwise velocity distribution are sensitive to coupled DE models: this implies that these probes might be successfully applied to disentangle among quintessence, $f(R)$-gravity and coupled DE models.
Semi-analytic galaxy formation in coupled dark energy cosmologies
The LHC forward experiment (LHCf) is specifically designed for measurements of the very forward ($\eta>8.4$) production cross sections of neutral pions and neutrons at the Large Hadron Collider (LHC) at CERN. LHCf started data taking in December 2009, when the LHC started to provide stable collisions of protons at $\sqrt{s}$=900\,GeV. Since March 2010, the LHC has increased the collision energy up to $\sqrt{s}$=7\,TeV. By the time of the symposium, LHCf had collected 113k events of high-energy showers (corresponding to $\sim$7M inelastic collisions) at $\sqrt{s}$=900\,GeV and $\sim$100M showers ($\sim$14 nb$^{-1}$ of integrated luminosity) at $\sqrt{s}$=7\,TeV. Analysis results with the first limited sample of data demonstrate that LHCf will provide crucial data to improve the interaction models used to understand very high-energy cosmic-ray air showers.
LHCf Measurements of Very Forward Particles at LHC
We consider the sequential sampling of species, where observed samples are classified into the species they belong to. We are particularly interested in studying some quantities describing the sampling process when there is a new species discovery. We assume that the observations and species are organized as a two-parameter Poisson-Dirichlet Process, which is commonly used as a Bayesian prior in the context of entropy estimation, and we use the computation of the mean posterior entropy given a sample developed in [4]. Our main result shows the existence of a monotone functional, constructed from the difference between the maximal entropy and the mean entropy throughout the sampling process. We show that this functional remains constant only when a new species discovery occurs.
One step entropy variation in sequential sampling of species for the Poisson-Dirichlet Process
The dynamic behavior of complex neuronal ensembles is a topic comprising a stream of current research worldwide. In this article we study the behavior manifested by the epileptic brain in the case of spontaneous non-convulsive paroxysmal activity. For this purpose we analyzed archived long-term recordings of paroxysmal activity in animals genetically susceptible to absence epilepsy, namely WAG/Rij rats. We report for the first time that the brain activity alternating between normal states and epileptic paroxysms is an instance of the on-off intermittency phenomenon, which has been observed and studied earlier in different nonlinear systems.
On-Off Intermittency in Time Series of Spontaneous Paroxysmal Activity in Rats with Genetic Absence Epilepsy
The spectral approach to infinite disordered crystals is applied to an Anderson-type Hamiltonian to demonstrate the existence of extended states for nonzero disorder in 2D lattices of different geometries. The numerical simulations shown prove that extended states exist for disordered honeycomb, triangular, and square crystals. This observation stands in contrast to the predictions of scaling theory and aligns with experiments in photonic lattices and electron systems. The method used is the only theoretical approach aimed at showing delocalization. A comparison of the results for the three geometries indicates that the triangular and honeycomb lattices experience a transition in the transport behavior for the same amount of disorder, which is to be expected from planar duality. This provides justification for the use of artificially prepared triangular lattices as analogues for honeycomb materials, such as graphene. The analysis also shows that the transition in the honeycomb case happens more abruptly compared to the other two geometries, which can be attributed to the number of nearest neighbors. We outline the advantages of the spectral approach as a viable alternative to scaling theory and discuss its applicability to transport problems in both quantum and classical 2D systems.
Delocalization in infinite disordered 2D lattices of different geometry
Using ab-initio methods, we show that uniform deformation either leaves graphene (semi)metallic or opens up a small gap only beyond the mechanical breaking point of graphene, contrary to claims in the literature based on tight-binding (TB) calculations. It is possible, however, to open up a global gap by a sine-like one-dimensional inhomogeneous deformation applied along any direction but the armchair one, with the largest gap for corrugation along the zigzag direction (~0.5 eV) without any electrostatic gating. The gap opening has a threshold character, with a very sharp rise when the ratio of the amplitude $A$ to the period $\lambda$ of the sine-wave deformation exceeds $(A/\lambda)_c \approx 0.1$ and the inversion symmetry is preserved, while it is thresholdless when the symmetry is broken, in contrast with TB-derived pseudo-magnetic-field models.
Gap opening in graphene by simple periodic inhomogeneous strain
In this Letter, we explore nonrelativistic string solutions in various subsectors of the $ SU(1,2|3) $ SMT strings that correspond to different spin groups and satisfy the respective BPS bounds. In particular, we carry out an explicit analysis on rotating string solutions in the light of recently proposed SMT limits. We explore newly constructed SMT limits of type IIB (super) strings on $ AdS_5 \times S^5 $ and estimate the corresponding leading order stringy corrections near the respective BPS bounds.
Decoding the Spin-Matrix limit of strings on $AdS_5 \times S^5$
We propose a modification of the predictions of the Cohen--Lenstra heuristic for class groups of number fields in the case where roots of unity are present in the base field. As evidence for this modified formula we provide a large set of computational data which show close agreement.
On the distribution of class groups of number fields
Improving the efficiency of dispatching orders to vehicles is a research hotspot in online ride-hailing systems. Most existing solutions for order dispatching are centralized, requiring all possible matches between available orders and vehicles to be considered. For large-scale ride-sharing platforms, thousands of vehicles and orders must be matched every second, which incurs a very high computational cost. In this paper, we propose a decentralized-execution order-dispatching method based on multi-agent reinforcement learning to address the large-scale order-dispatching problem. Unlike previous cooperative multi-agent reinforcement learning algorithms, in our method all agents work independently, guided by an evaluation of the joint policy, so no communication or explicit cooperation between agents is needed. Furthermore, we use KL-divergence optimization at each time step to speed up the learning process and to balance the vehicles (supply) and orders (demand). Experiments on both an explanatory environment and a real-world simulator show that the proposed method outperforms the baselines in terms of accumulated driver income (ADI) and order response rate (ORR) in various traffic environments. Moreover, with the support of the online platform of Didi Chuxing, we designed a hybrid system to deploy our model.
Multi-Agent Reinforcement Learning for Order-dispatching via Order-Vehicle Distribution Matching
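The supply-demand balancing idea above can be made concrete with a toy computation. The sketch below is our own minimal example, not the paper's implementation: it measures the KL divergence between the demand distribution induced by open orders and the supply distribution of idle vehicles over a few hypothetical city zones, a mismatch a dispatcher could use as a regularization signal.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions over city zones."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def normalize(counts):
    total = sum(counts)
    return [c / total for c in counts]

# Hypothetical counts of open orders (demand) and idle vehicles (supply)
# in four city zones at one dispatching step.
orders = [40, 10, 30, 20]
vehicles = [25, 25, 25, 25]

demand = normalize(orders)
supply = normalize(vehicles)

# A dispatching policy can be penalized in proportion to this mismatch,
# pushing the induced supply distribution toward the demand distribution.
mismatch = kl_divergence(demand, supply)
```

The divergence is zero exactly when supply matches demand zone by zone, so driving it down per time step balances the two sides of the market.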
We consider the dynamics of spatially-distributed, diffusing populations of organisms with antagonistic interactions. These interactions are found on many length scales, ranging from kilometer-scale animal range dynamics with selection against hybrids to micron-scale interactions between poison-secreting microbial populations. We find that the dynamical line tension at the interface between antagonistic organisms suppresses survival probabilities of small clonal clusters: the line tension introduces a critical cluster size that an organism with a selective advantage must achieve before deterministically spreading through the population. We calculate the survival probability as a function of selective advantage $\delta$ and antagonistic interaction strength $\sigma$. Unlike a simple Darwinian selective advantage, the survival probability depends strongly on the spatial diffusion constant $D_s$ of the strains when $\sigma>0$, with suppressed survival when both species are more motile. Finally, we study the survival probability of a single mutant cell at the frontier of a growing spherical cluster of cells, such as the surface of an avascular spherical tumor. Both the inflation and curvature of the frontier significantly enhance the survival probability by changing the critical size of the nucleating cell cluster.
Nucleation of antagonistic organisms and cellular competitions on curved, inflating substrates
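The critical cluster size induced by line tension can be illustrated with a schematic classical-nucleation-theory estimate (our notation, not necessarily the paper's precise expressions). For a circular cluster of radius $R$ in 2D, the line tension contributes a perimeter cost while the selective advantage contributes a bulk gain:

```latex
F(R) = 2\pi R\,\sigma_{\mathrm{line}} - \pi R^{2}\,\delta_{\mathrm{bulk}},
\qquad
\left.\frac{dF}{dR}\right|_{R=R^{*}} = 0
\quad\Longrightarrow\quad
R^{*} = \frac{\sigma_{\mathrm{line}}}{\delta_{\mathrm{bulk}}}.
```

Clusters smaller than $R^{*}$ tend to shrink stochastically, while larger ones spread deterministically, consistent with the suppression of survival probabilities of small clonal clusters described above.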
Let $n$ be any positive integer and $\mathscr{I\!\!P\!F}(\mathbb{N}^n)$ be the semigroup of all order isomorphisms between principal filters of the $n$-th power of the set of positive integers $\mathbb{N}$ with the product order. We study algebraic properties of the semigroup $\mathscr{I\!\!P\!F}(\mathbb{N}^n)$. In particular, we show that $\mathscr{I\!\!P\!F}(\mathbb{N}^n)$ is a bisimple, $E$-unitary, $F$-inverse semigroup, describe Green's relations on $\mathscr{I\!\!P\!F}(\mathbb{N}^n)$ and its maximal subgroups. We show that the semigroup $\mathscr{I\!\!P\!F}(\mathbb{N}^n)$ is isomorphic to the semidirect product of the direct $n$-th power of the bicyclic monoid ${\mathscr{C}}^n(p,q)$ by the group of permutations $\mathscr{S}_n$. We also prove that every non-identity congruence $\mathfrak{C}$ on the semigroup $\mathscr{I\!\!P\!F}(\mathbb{N}^n)$ is a group congruence, and we describe the least group congruence on $\mathscr{I\!\!P\!F}(\mathbb{N}^n)$. We show that every Hausdorff shift-continuous topology on $\mathscr{I\!\!P\!F}(\mathbb{N}^n)$ is discrete and discuss embedding of the semigroup $\mathscr{I\!\!P\!F}(\mathbb{N}^n)$ into compact-like topological semigroups.
The monoid of order isomorphisms of principal filters of a power of the positive integers
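The bicyclic monoid ${\mathscr{C}}(p,q)$ appearing in the semidirect-product description admits a simple concrete model that can be checked mechanically. In the sketch below (the pair encoding $(a,b)$ for $q^a p^b$ is our own convention, not the paper's), multiplication follows from the defining relation $pq = 1$:

```python
def bicyclic_mul(x, y):
    """Multiply q^a p^b * q^c p^d in the bicyclic monoid C(p,q).
    The middle factor simplifies via p^b q^c = q^(c-b) if b <= c,
    else p^(b-c), using the relation pq = 1."""
    a, b = x
    c, d = y
    return (a + max(c - b, 0), d + max(b - c, 0))

p, q, e = (0, 1), (1, 0), (0, 0)
assert bicyclic_mul(p, q) == e       # the defining relation pq = 1
assert bicyclic_mul(q, p) == (1, 1)  # but qp != 1, so C(p,q) is not a group
```

The asymmetry between $pq$ and $qp$ is exactly what makes the bicyclic monoid (and hence $\mathscr{I\!\!P\!F}(\mathbb{N}^n)$) an inverse semigroup rather than a group.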
Let $K_{n}^{c}$ denote a complete graph on $n$ vertices whose edges are colored in an arbitrary way. Let $\Delta^{\mathrm{mon}} (K_{n}^{c})$ denote the maximum number of edges of the same color incident with a vertex of $K_{n}^{c}$. A properly colored cycle (path) in $K_{n}^{c}$ is a cycle (path) in which adjacent edges have distinct colors. B. Bollob\'{a}s and P. Erd\H{o}s (1976) proposed the following conjecture: if $\Delta^{\mathrm{mon}} (K_{n}^{c})<\lfloor \frac{n}{2} \rfloor$, then $K_{n}^{c}$ contains a properly colored Hamiltonian cycle. Li, Wang and Zhou proved that if $\Delta^{\mathrm{mon}} (K_{n}^{c})< \lfloor \frac{n}{2} \rfloor$, then $K_{n}^{c}$ contains a properly colored cycle of length at least $\lceil \frac{n+2}{3}\rceil+1$. In this paper, we improve the bound to $\lceil \frac{n}{2}\rceil + 2$.
Long properly colored cycles in edge colored complete graphs
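The "properly colored" condition is easy to state operationally: walking around the cycle, no two consecutive edges may share a color. A minimal checker (our own toy example on an edge coloring of $K_4$ of our own construction) makes this precise:

```python
def is_properly_colored_cycle(cycle, color):
    """Check that every pair of consecutive edges of the cycle has distinct
    colors; `color` maps an unordered vertex pair (frozenset) to a color."""
    n = len(cycle)
    edges = [frozenset((cycle[i], cycle[(i + 1) % n])) for i in range(n)]
    return all(color[edges[i]] != color[edges[(i + 1) % n]] for i in range(n))

# An edge coloring of K_4 (toy example of our own construction).
coloring = {
    frozenset((0, 1)): "r", frozenset((1, 2)): "r",
    frozenset((2, 3)): "g", frozenset((3, 0)): "g",
    frozenset((0, 2)): "b", frozenset((1, 3)): "b",
}
assert is_properly_colored_cycle([0, 1, 3, 2], coloring)      # colors r,b,g,b
assert not is_properly_colored_cycle([0, 1, 2, 3], coloring)  # starts r,r
```

Note the check wraps around: the last edge of the cycle must also differ in color from the first.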
For a complex projective space the inertia group, the homotopy inertia group and the concordance inertia group are isomorphic. In complex dimension 4n+1, these groups are related to computations in stable cohomotopy. Using stable homotopy theory, we make explicit computations to show that the inertia group is non-trivial in many cases. In complex dimension 9, we deduce some results on geometric structures on homotopy complex projective spaces and complex hyperbolic manifolds.
Inertia groups of high dimensional complex projective spaces
The subtyping relation in Java exhibits self-similarity. The self-similarity in Java subtyping is interesting and intricate due to the existence of wildcard types and, accordingly, the existence of three subtyping rules for generic types: covariant subtyping, contravariant subtyping and invariant subtyping. Supporting bounded type variables also adds to the complexity of the subtyping relation in Java and in other generic nominally-typed OO languages such as C# and Scala. In this paper we explore defining an operad to model the construction of the subtyping relation in Java and in similar generic nominally-typed OO programming languages. Operads, from category theory, are frequently used to model self-similar phenomena. The Java subtyping operad, we hope, will shed more light on understanding the type systems of generic nominally-typed OO languages.
Towards a Java Subtyping Operad
We have performed radio-frequency dissociation spectroscopy of weakly bound ^6Li_2 Feshbach molecules using low-density samples of about 30 molecules in an optical dipole trap. Combined with a high magnetic field stability this allows us to resolve the discrete trap levels in the RF dissociation spectra. This novel technique allows the binding energy of Feshbach molecules to be determined with unprecedented precision. We use these measurements as an input for a fit to the ^6Li scattering potential using coupled-channel calculations. From this new potential, we determine the pole positions of the broad ^6Li Feshbach resonances with an accuracy better than 7 \times 10^{-4} of the resonance widths. This eliminates the dominant uncertainty for current precision measurements of the equation of state of strongly interacting Fermi gases. For example, our results imply a corrected value for the Bertsch parameter \xi measured by Ku et al. [Science 335, 563 (2012)], which is \xi = 0.370(5)(8).
Precise characterization of ^6Li Feshbach resonances using trap-sideband resolved RF spectroscopy of weakly bound molecules
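Near an isolated Feshbach resonance, the coupled-channel fit is commonly summarized by the standard parametrization of the s-wave scattering length (a textbook formula, quoted here for orientation rather than as the paper's result):

```latex
a(B) = a_{\mathrm{bg}}\left(1 - \frac{\Delta}{B - B_{0}}\right),
```

where $B_0$ is the pole position, $\Delta$ the resonance width, and $a_{\mathrm{bg}}$ the background scattering length; the quoted accuracy of $7\times10^{-4}$ of the resonance widths refers to the determination of $B_0$.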
For any finitely generated module $M$ with non-zero rank over a commutative one dimensional Noetherian local domain, the numerical invariant $h(M)$ was introduced and studied in the author's previous work "Partial Trace Ideals and Berger's Conjecture". We establish a bound on it which helps capture information about the torsion submodule of $M$ when $M$ has rank one and it also generalizes the discussion in the mentioned previous article. We further study bounds and properties of $h(M)$ in the case when $M$ is the canonical module $\omega_R$. This in turn helps in answering a question of S. Greco and then provide some classifications. Most of the results in this article are based on the results presented in the author's doctoral dissertation "Partial Trace Ideals, The Conductor and Berger's Conjecture".
Partial Trace Ideals, Torsion and Canonical Module
Colocalization aims at characterizing spatial associations between two fluorescently tagged biomolecules by quantifying the co-occurrence and correlation between the two channels acquired in fluorescence microscopy. Colocalization is presented either as the degree of overlap between the two channels or as overlays of the red and green images, with areas of yellow indicating colocalization of the molecules. This problem remains an open issue in diffraction-limited microscopy and raises new challenges with the emergence of super-resolution imaging, a microscopy technique recognized by the 2014 Nobel Prize in Chemistry. We propose GcoPS, for Geo-coPositioning System, an original method that exploits the random-set structure of the tagged molecules to provide an explicit testing procedure. Our simulation study shows that GcoPS unequivocally outperforms the best competing methods in adverse situations (noise, irregularly shaped fluorescent patterns, different optical resolutions). GcoPS is also much faster, a decisive advantage for facing the huge amount of data in super-resolution imaging. We demonstrate the performance of GcoPS on two real biological datasets, obtained by a conventional diffraction-limited microscopy technique and by a super-resolution technique, respectively.
Testing independence between two random sets for the analysis of colocalization in bio-imaging
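The logic of testing co-occurrence against an independence null can be sketched on toy data. The example below is our own crude surrogate, not the GcoPS procedure: it scores overlap between two binary 1D "channels" and builds a null distribution from all circular shifts of one channel, which preserves that channel's spatial structure while destroying its alignment with the other.

```python
def overlap(a, b):
    """Co-occurrence: number of pixels where both binary channels are on."""
    return sum(x & y for x, y in zip(a, b))

def shift_test(a, b):
    """Toy independence test: the null distribution comes from all circular
    shifts of channel b; the p-value is the fraction of shifts whose overlap
    reaches the observed one."""
    n = len(b)
    observed = overlap(a, b)
    null = [overlap(a, b[k:] + b[:k]) for k in range(1, n)]
    p = sum(v >= observed for v in null) / len(null)
    return observed, p

# Two strongly colocalized 1D "channels" (toy binary data).
red   = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
green = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
obs, p_value = shift_test(red, green)
```

A small p-value indicates that the observed overlap is unlikely under independent placement, i.e. evidence of colocalization.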
A spin system is considered with a Hamiltonian typical of molecular magnets, having dipole-dipole interactions and a single-site magnetic anisotropy. In addition, spin interactions through the common radiation field are included. A fully quantum-mechanical derivation of the collective radiation rate is presented. An effective narrowing of the dipole-dipole attenuation due to high spin polarization is taken into account. The influence of the radiation rate on spin dynamics is carefully analysed. It is shown that this influence is completely negligible. No noticeable collective effects, such as superradiance, caused by electromagnetic spin radiation can appear in molecular magnets. Spin superradiance can arise in molecular magnets only when these are coupled to a resonant electric circuit, as suggested earlier by one of the authors in Laser Phys. {\bf 12}, 1089 (2002).
Absence of spin superradiance in resonatorless magnets
This tutorial is an andragogical guide for students and practitioners seeking to understand the fundamentals and practice of linear programming. The exercises demonstrate how to solve classical optimization problems with an emphasis on spatial analysis in supply chain management and transport logistics. All exercises display the Python programs and optimization libraries used to solve them. The first chapter introduces key concepts in linear programming and contributes a new cognitive framework to help students and practitioners set up each optimization problem. The cognitive framework organizes the decision variables, constraints, the objective function, and variable bounds in a format for direct application to optimization software. The second chapter introduces two types of mobility optimization problems (shortest path in a network and minimum cost tour) in the context of delivery and service planning logistics. The third chapter introduces four types of spatial optimization problems (neighborhood coverage, flow capturing, zone heterogeneity, service coverage) and contributes a workflow to visualize the optimized solutions in maps. The workflow creates decision variables from maps by using the free geographic information systems (GIS) programs QGIS and GeoDA. The fourth chapter introduces three types of spatial logistical problems (spatial distribution, flow maximization, warehouse location optimization) and demonstrates how to scale the cognitive framework in software to reach solutions. The final chapter summarizes lessons learned and provides insights about how students and practitioners can modify the Python programs and GIS workflows to solve their own optimization problem and visualize the results.
Tutorial and Practice in Linear Programming: Optimization Problems in Supply Chain and Transport Logistics
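The cognitive framework of the first chapter (decision variables, constraints, objective function, bounds) can be sketched on a toy transport LP. The example below is our own and, to stay self-contained, solves the LP by brute-force vertex enumeration rather than by the optimization libraries used in the tutorial:

```python
from itertools import combinations

# Cognitive-framework layout for a toy transport LP (our own example):
#   decision variables: x1, x2 = truckloads shipped on routes 1 and 2
#   objective:          minimize cost 4*x1 + 3*x2
#   constraints:        x1 + x2 >= 10  (demand must be met)
#                       x1 <= 8        (route-1 capacity)
#                       x2 <= 6        (route-2 capacity)
#   bounds:             x1 >= 0, x2 >= 0
# All rows are stored as a1*x1 + a2*x2 <= b (>= rows negated).
constraints = [(-1.0, -1.0, -10.0), (1.0, 0.0, 8.0), (0.0, 1.0, 6.0),
               (-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]
cost = (4.0, 3.0)

def feasible(x1, x2, tol=1e-9):
    return all(a1 * x1 + a2 * x2 <= b + tol for a1, a2, b in constraints)

# An optimum of a bounded feasible LP lies at a vertex, i.e. at the
# intersection of two constraint lines, so enumerate all such points.
vertices = []
for (a1, a2, b), (c1, c2, d) in combinations(constraints, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) > 1e-9:
        x1 = (b * c2 - a2 * d) / det   # Cramer's rule for the 2x2 system
        x2 = (a1 * d - b * c1) / det
        if feasible(x1, x2):
            vertices.append((x1, x2))

best = min(vertices, key=lambda v: cost[0] * v[0] + cost[1] * v[1])
# Cheapest plan: fill the cheaper route 2 to capacity, ship the rest on route 1.
```

In practice a solver library replaces the vertex enumeration, but the framework's layout of variables, constraints, objective, and bounds maps directly onto a solver's inputs.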
Microarray gene expression data are often characterized by a large number of genes and a small number of samples. However, only a few of these genes are relevant to cancer, resulting in significant gene selection challenges. Hence, we propose a two-stage gene selection approach combining extreme gradient boosting (XGBoost) and a multi-objective optimization genetic algorithm (XGBoost-MOGA) for cancer classification in microarray datasets. In the first stage, the genes are ranked using ensemble-based feature selection with XGBoost. This stage can effectively remove irrelevant genes and yield a group comprising the most relevant genes related to the class. In the second stage, XGBoost-MOGA searches for an optimal gene subset based on this group of most relevant genes using a multi-objective optimization genetic algorithm. We performed comprehensive experiments to compare XGBoost-MOGA with other state-of-the-art feature selection methods using two well-known learning classifiers on 13 publicly available microarray expression datasets. The experimental results show that XGBoost-MOGA yields significantly better results than previous state-of-the-art algorithms in terms of various evaluation criteria, such as accuracy, F-score, precision, and recall.
Hybrid gene selection approach using XGBoost and multi-objective genetic algorithm for cancer classification
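The second-stage search can be sketched schematically. The toy genetic algorithm below is entirely our own construction: a synthetic fitness stands in for the trained classifier and folds the two objectives (class relevance, subset size) into one score, rather than using XGBoost or a true Pareto-based MOGA. It evolves binary gene masks over a small pre-ranked pool:

```python
import random

random.seed(0)

# Schematic second stage: evolve binary gene masks over a pre-ranked pool.
N_GENES = 12
INFORMATIVE = {0, 1, 2, 5}           # hypothetical stage-1 top genes

def fitness(mask):
    """Synthetic surrogate: reward covering informative genes,
    penalize subset size (scalarized two-objective trade-off)."""
    hits = sum(mask[g] for g in INFORMATIVE)
    return hits - 0.1 * sum(mask)

def mutate(mask, rate=0.1):
    return [1 - b if random.random() < rate else b for b in mask]

def crossover(a, b):
    cut = random.randrange(1, N_GENES)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(N_GENES)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                 # elitism keeps the best masks
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]

best = max(pop, key=fitness)
```

A real MOGA would maintain a Pareto front of (accuracy, subset size) trade-offs and evaluate each mask with a cross-validated classifier; the elitist select-crossover-mutate loop above is the structural skeleton shared by both.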