Ontologies can be expensive and time-consuming to develop and maintain. This is especially true for more expressive and/or larger ontologies. Some ontologies are, however, relatively repetitive, reusing design patterns; building these with both generic and bespoke patterns should reduce duplication and increase regularity, which in turn should reduce the cost of development. Here we report on the usage of patterns applied to two biomedical ontologies: firstly, a novel ontology for karyotypes which has been built from the ground up using a pattern-based approach; and, secondly, our initial refactoring of the SIO ontology to make explicit use of patterns at development time. To enable this, we use the Tawny-OWL library, which enables fully programmatic development of ontologies. We show how this approach can generate large numbers of classes from much simpler data structures, which is highly beneficial within biomedical ontology engineering.
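To make the pattern idea concrete, here is a minimal Python sketch (ours, not the paper's Tawny-OWL code, which is written in Clojure) of how a value-partition-style pattern expands a plain data structure into many class axioms; all names are illustrative.

```python
# A generic "value partition" pattern: one parent class plus one
# disjoint subclass per value, generated from a plain list.
def value_partition(parent, values):
    """Return OWL-style class axioms for a value partition pattern."""
    axioms = [f"Class: {parent}"]
    for v in values:
        axioms.append(f"Class: {parent}{v} SubClassOf: {parent}")
    axioms.append("DisjointClasses: " + ", ".join(parent + v for v in values))
    return axioms

# Hypothetical karyotype-style data: 22 autosomes plus X and Y yield
# 24 chromosome classes from a one-line data structure.
chromosomes = [str(n) for n in range(1, 23)] + ["X", "Y"]
for axiom in value_partition("HumanChromosome", chromosomes):
    print(axiom)
```

The point of the pattern-based approach is exactly this leverage: the data structure stays small and regular while the generated class hierarchy can be arbitrarily large.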
Improved mobility not only contributes to more intensive human activities but also facilitates the spread of communicable disease, thus constituting a major threat to billions of urban commuters. In this study, we present a multi-city investigation of communicable diseases percolating among metro travelers. We use smart card data from three megacities in China to construct individual-level contact networks, based on which the spread of disease is modeled and studied. We observe that, though differing in urban forms, network layouts, and mobility patterns, the metro systems of the three cities share similar contact network structures. This motivates us to develop a universal generation model that captures the distributions of the number of contacts as well as the contact duration among individual travelers. This model explains how the structural properties of the metro contact network are associated with the risk level of communicable diseases. Our results highlight the vulnerability of urban mass transit systems during disease outbreaks and suggest important planning and operation strategies for mitigating the risk of communicable diseases.
Modern high-load applications store data using multiple database instances. Such an architecture requires data consistency, and it is important to ensure even distribution of data among nodes. Load balancing is used to achieve these goals. Hashing is the backbone of virtually all load balancing systems. Since the introduction of classic Consistent Hashing, many algorithms have been devised for this purpose. One of the purposes of the load balancer is to ensure storage cluster scalability. It is crucial for the performance of the whole system to transfer as few data records as possible during node addition or removal. The load balancer hashing algorithm has the greatest impact on this process. In this paper we experimentally evaluate several hashing algorithms used for load balancing, conducting both simulated and real system experiments. To evaluate algorithm performance, we have developed a benchmark suite based on Unidata MDM -- a scalable toolkit for various Master Data Management (MDM) applications. For assessment, we have employed three criteria -- uniformity of the produced distribution, the number of moved records, and computation speed. Following the results of our experiments, we have created a table in which each algorithm is given an assessment according to the abovementioned criteria.
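For readers unfamiliar with the baseline, here is a minimal Python sketch of classic Consistent Hashing as usually described; the node names, virtual-node count, and MD5 hash are illustrative choices, not details of the benchmark suite.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Hash ring with virtual nodes; adding or removing a node only remaps
    roughly 1/n of the keys, which is the property benchmarked above."""

    def __init__(self, nodes, vnodes=100):
        self._ring = []  # sorted list of (hash, node) points on the ring
        for node in nodes:
            for i in range(vnodes):
                bisect.insort(self._ring, (self._hash(f"{node}#{i}"), node))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def lookup(self, key):
        # walk clockwise from the key's position to the next node point
        idx = bisect.bisect(self._ring, (self._hash(key), "")) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.lookup("record-42"))
```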
This study examines the relationship between globalization and income inequality, utilizing panel data spanning from 1992 to 2020. Globalization is measured by the World Bank global-link indicators such as FDI, remittances, trade openness, and migration, while income inequality is measured by the Gini coefficient and the median income of 50% of the population. The fixed-effect panel data analysis provides empirical evidence indicating that globalization tends to reduce income inequality, though its impact varies between developed and developing countries. The analysis reveals a strong negative correlation between net foreign direct investment (FDI) inflows and inequality in developing countries, while no such relationship was found for developed countries. The relationship holds even if we consider an alternative measure of inequality. However, when dividing countries into developed and developing groups, no statistically significant relationship was observed. Policymakers can use these findings to support efforts to increase FDI, trade, tourism, and migration to promote growth and reduce income inequality.
Maximum likelihood estimation of a log-concave probability density is formulated as a convex optimization problem and shown to have an equivalent dual formulation as a constrained maximum Shannon entropy problem. Closely related maximum Rényi entropy estimators that impose weaker concavity restrictions on the fitted density are also considered, notably a minimum Hellinger discrepancy estimator that constrains the reciprocal of the square-root of the density to be concave. A limiting form of these estimators constrains solutions to the class of quasi-concave densities.
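For orientation, the primal problem can be written explicitly (a standard formulation in this literature; notation ours). Writing $f = e^{\varphi}$ with $\varphi$ concave, the penalized log-likelihood

$$\hat\varphi_n \in \operatorname*{arg\,max}_{\varphi \ \mathrm{concave}} \left\{ \frac{1}{n}\sum_{i=1}^n \varphi(X_i) - \int e^{\varphi(x)}\,dx \right\}$$

is a concave maximization problem, since $\varphi \mapsto \int e^{\varphi}$ is convex, and the integral penalty forces $e^{\hat\varphi_n}$ to integrate to one, so the density constraint need not be imposed separately.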
The effects of ground-state correlations on the dipole and quadrupole excitations are studied for $^{40}$Ca and $^{48}$Ca using the extended random phase approximation (ERPA) derived from the time-dependent density-matrix theory. Large effects of the ground-state correlations are found in the fragmentation of the giant quadrupole resonance in $^{40}$Ca and in the low-lying dipole strength in $^{48}$Ca. It is argued that the former is due to a mixing of different configurations in the ground state, while the latter stems from the partial occupation of the neutron single-particle states. The dipole and quadrupole strength distributions below 10 MeV calculated in ERPA are in qualitative agreement with experiment.
Many challenging image processing tasks can be described by an ill-posed linear inverse problem: deblurring, deconvolution, inpainting, compressed sensing, and superresolution all lie in this framework. Traditional inverse problem solvers minimize a cost function consisting of a data-fit term, which measures how well an image matches the observations, and a regularizer, which reflects prior knowledge and promotes images with desirable properties like smoothness. Recent advances in machine learning and image processing have illustrated that it is often possible to learn a regularizer from training data that can outperform more traditional regularizers. We present an end-to-end, data-driven method of solving inverse problems inspired by the Neumann series, which we call a Neumann network. Rather than unroll an iterative optimization algorithm, we truncate a Neumann series which directly solves the linear inverse problem with a data-driven nonlinear regularizer. The Neumann network architecture outperforms traditional inverse problem solution methods, model-free deep learning approaches, and state-of-the-art unrolled iterative methods on standard datasets. Finally, when the images belong to a union of subspaces and under appropriate assumptions on the forward model, we prove there exists a Neumann network configuration that well-approximates the optimal oracle estimator for the inverse problem and demonstrate empirically that the trained Neumann network has the form predicted by theory.
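A minimal numpy sketch of the series being truncated (our illustration, not the authors' code): when $\|I - \eta A^T A\| < 1$, the partial sums $\sum_{j=0}^B (I-\eta A^TA)^j \eta A^T y$ converge to the least-squares solution, and the Neumann network modifies each recursion step with a learned nonlinear regularizer, represented here by the zero placeholder `reg`.

```python
import numpy as np

def neumann_estimate(A, y, eta=0.1, B=20, reg=lambda x: 0.0 * x):
    """Truncated Neumann series for the linear inverse problem y = A x.

    Computes x_hat = sum_{j=0}^{B} term_j with term_0 = eta * A^T y and
    term_{j+1} = (I - eta * A^T A) term_j - reg(term_j).
    """
    term = eta * A.T @ y
    x_hat = term.copy()
    for _ in range(B):
        term = term - eta * (A.T @ (A @ term)) - reg(term)
        x_hat = x_hat + term
    return x_hat

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)) / np.sqrt(50)
y = A @ rng.standard_normal(20)
x_hat = neumann_estimate(A, y, eta=0.5, B=500)
# agrees with the least-squares solution when the series converges
print(np.linalg.norm(x_hat - np.linalg.lstsq(A, y, rcond=None)[0]))
```

In the paper the regularizer is a trained network and the series is kept short (small B); the zero placeholder above only demonstrates the linear skeleton.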
The Rashba effect leads to a chiral precession of the spins of moving electrons, while the Dzyaloshinskii-Moriya interaction (DMI) generates a preference towards a chiral profile of local spins. We predict that the exchange interaction between these two spin systems results in a 'chiral' magnetoresistance depending on the chirality of the local spin texture. We observe this magnetoresistance by measuring the domain wall (DW) resistance in a uniquely designed Pt/Co/Pt zigzag wire, and by changing the chirality of the DW through the application of an in-plane magnetic field. A chirality-dependent DW resistance is found, and a quantitative analysis shows good agreement with a theory based on the Rashba model. Moreover, the DW resistance measurement allows us to determine the strengths of the Rashba effect and the DMI independently and simultaneously, and the result implies a possible correlation between the Rashba effect, the DMI, and the symmetric Heisenberg exchange.
We find the free energy in the thermodynamic limit of a one-dimensional XY model associated to a system of N qubits. The coupling among the sigma_i^z is a long-range two-body random interaction. The randomness in the couplings is the typical interaction of the Hopfield model with p patterns (p<<N), the patterns being p sequences of independent identically distributed (i.i.d.) random variables assuming values \pm 1 with probability 1/2. We also show that in the case p < alpha N the free energy is asymptotically independent of the choice of the patterns, i.e. it is self-averaging. The Hamiltonian is the one used by Neigovzen et al. (2009) in their experiment.
The ABC-stacked N-layer-graphene family of two-dimensional electron systems is described at low energies by two remarkably flat bands with Bloch states that have strongly momentum-dependent phase differences between carbon pi-orbital amplitudes on different layers, and large associated momentum space Berry phases. These properties are most easily understood using a simplified model with only nearest-neighbor inter-layer hopping which leads to gapless semiconductor electronic structure, with p^N dispersion in both conduction and valence bands. We report on a study of the electronic band structures of trilayers which uses ab initio density functional theory and k·p theory to fit the parameters of a pi-band tight-binding model. We find that when remote interlayer hopping is retained, the triple Dirac point of the simplified model is split into three single Dirac points located along the three KM directions. External potential differences between top and bottom layers are strongly screened by charge transfer within the trilayer, but still open an energy gap at overall neutrality.
The ability of one system to immediately affect another through local measurements is known as quantum steering, which can be detected by various steering criteria. Recently, Mondal et al. [Phys. Rev. A 98, 052330 (2018)] derived complementarity relations for coherence steering criteria, and revealed that the quantum steering of a system can be observed through the average coherence of its subsystem. Here, we experimentally verify the complementarity relations between quantum steering criteria by employing two-photon Bell-like states and three Pauli operators. The results demonstrate that if the prepared quantum states violate the two-setting coherence steering criteria and turn out to be steerable, then they cannot violate the complementary-settings criteria. The three-measurement-settings inequality, which establishes a complementarity relation between these two coherence steering criteria, always holds in our experiment. Besides, we experimentally certify that the strength of a coherence steering criterion depends on the choice of coherence measure. In comparison with the two-setting coherence steering criteria based on the l1 norm of coherence and the relative entropy of coherence, our experimental results show that the steering criterion based on the skew information of coherence is stronger in detecting the steerability of quantum states. Thus, our experimental demonstrations can deepen the understanding of the relation between quantum steering and quantum coherence.
In the $d$-Scattered Set problem we are asked to select at least $k$ vertices of a given graph, so that the distance between any pair is at least $d$. We study the problem's (in-)approximability and offer improvements and extensions of known results for Independent Set, of which the problem is a generalization. Specifically, we show:
- A lower bound of $\Delta^{\lfloor d/2\rfloor-\epsilon}$ on the approximation ratio of any polynomial-time algorithm for graphs of maximum degree $\Delta$ and an improved upper bound of $O(\Delta^{\lfloor d/2\rfloor})$ on the approximation ratio of any greedy scheme for this problem (a minimal greedy sketch follows the list).
- A polynomial-time $2\sqrt{n}$-approximation for bipartite graphs and even values of $d$, that matches the known lower bound by considering the only remaining case.
- A lower bound on the complexity of any $\rho$-approximation algorithm of (roughly) $2^{\frac{n^{1-\epsilon}}{\rho d}}$ for even $d$ and $2^{\frac{n^{1-\epsilon}}{\rho(d+\rho)}}$ for odd $d$ (under the randomized ETH), complemented by $\rho$-approximation algorithms with running times that (almost) match these bounds.
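The greedy sketch referenced above, written in Python against networkx (our illustration; the pick rule of lowest degree first is one plausible choice, not necessarily the exact scheme analyzed in the paper):

```python
import networkx as nx

def greedy_scattered_set(G, d):
    """Pick a vertex, then discard everything within distance d-1 of it,
    so all chosen vertices end up pairwise at distance >= d."""
    remaining = set(G.nodes)
    chosen = []
    while remaining:
        v = min(remaining, key=G.degree)  # greedy pick
        chosen.append(v)
        ball = nx.single_source_shortest_path_length(G, v, cutoff=d - 1)
        remaining -= set(ball)  # the ball includes v itself
    return chosen

G = nx.cycle_graph(12)
print(greedy_scattered_set(G, d=3))  # pairwise distances are all >= 3
```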
We explore the two-dimensional motion of relativistic electrons trapped in magnetic fields with spatial power-law variation. The impacts include the lifting of the degeneracy that emerges in the case of a constant magnetic field, a special alignment of the Landau levels of spin-up and spin-down electrons depending on whether the magnetic field increases or decreases from the centre, the splitting of the Landau levels of electrons with zero angular momentum from those with positive angular momentum, and a change in the equation of state of matter. Landau quantization (LQ) in variable magnetic fields has interdisciplinary applications in a variety of disciplines ranging from condensed matter to quantum information. As examples, we discuss the increase in the quantum speed of the electron in the presence of a spatially increasing magnetic field, and the attainment of super-Chandrasekhar masses of white dwarfs by taking into account LQ and the Lorentz force simultaneously.
The quasielastic charged current (CCQE) $\nu_e n \rightarrow e^- p$ scattering is the dominant mechanism to detect the appearance of a $\nu_e$ in an almost pure $\nu_\mu$ flux at the 1 GeV scale. Current experiments aim at a precision below 1%, and among the less well-known background contributions that must nevertheless be constrained to interpret the event excess are the radiative corrections. A consistent model recently developed for the simultaneous description of elastic and radiative $\pi N$ scattering, pion photoproduction, and single pion production processes, both for charged and neutral current neutrino-nucleon scattering, is extended to the evaluation of the radiative $\nu_l N\rightarrow \nu_l N \gamma$ cross section. Our results are similar to a previous (but inconsistent) theoretical evaluation in the low-energy region, and show an increment in the upper region where the $\Delta$ resonance becomes relevant.
Seasonality is one of the oldest and most elucidation-resistant issues in suicide epidemiological research. Although winter depression (also known as Seasonal Affective Disorder, SAD) has been known and treated for many years, worldwide cross-sectional data from 28 countries show a lower frequency of suicide attempts around the equinoxes and a higher frequency in spring (in both the Northern and Southern Hemispheres). This peak is not compatible with the SAD explanation. However, in recent years epidemiological research has yielded new results, which provide new perspectives on the matter. In fact, the discovery of a new pathology called Post-Series Depression (PSD) could provide an explanation of the pattern of suicide attempts. The aim of this study is to analyse weekly data in order to compare them with TV series broadcasting. Since the medical observations in our sample are distributed over many years, the Grey's Anatomy series was chosen so as to compare them as closely as possible with the television programming. This medical drama has been in the top 10 most viewed TV series for 12 years and is broadcast all over the world, so it can be considered a universal and homogeneous phenomenon. A full season of the series is split into two separate units with a hiatus around the end of the calendar year, and it runs from September through May. Data analysis was carried out in order to test the correlation between PSD and the increase in suicide attempts. Surprisingly, the analysis shows that the increase in the rate of suicide attempts does not coincide with the breaks in Grey's Anatomy scheduling, but with the series broadcasting. This therefore suggests that it is the series itself that increases the viewer's depression.
In this paper, we introduce the notions of logarithmic Poisson structure and logarithmic principal Poisson structure. We prove that the latter induces a representation by logarithmic derivations of the module of logarithmic Kähler differentials; therefore, it induces a differential complex from which we derive the notion of logarithmic Poisson cohomology. We prove that Poisson cohomology and logarithmic Poisson cohomology are equal when the Poisson structure is log-symplectic. We give an example of a non-log-symplectic but logarithmic Poisson structure for which these cohomologies are equal, and we also give an example for which these cohomologies are different. We discuss and modify K. Saito's definition of logarithmic forms. The notes end with an application to a prequantization of the logarithmic Poisson algebra $(\mathbb{C}[x,y], \{x,y\}=x)$.
Learning vector representations (aka. embeddings) of users and items lies at the core of modern recommender systems. Ranging from early matrix factorization to recently emerged deep learning based methods, existing efforts typically obtain a user's (or an item's) embedding by mapping from pre-existing features that describe the user (or the item), such as ID and attributes. We argue that an inherent drawback of such methods is that the collaborative signal, which is latent in user-item interactions, is not encoded in the embedding process. As such, the resulting embeddings may not be sufficient to capture the collaborative filtering effect. In this work, we propose to integrate the user-item interactions -- more specifically the bipartite graph structure -- into the embedding process. We develop a new recommendation framework, Neural Graph Collaborative Filtering (NGCF), which exploits the user-item graph structure by propagating embeddings on it. This leads to expressive modeling of high-order connectivity in the user-item graph, effectively injecting the collaborative signal into the embedding process in an explicit manner. We conduct extensive experiments on three public benchmarks, demonstrating significant improvements over several state-of-the-art models such as HOP-Rec and Collaborative Memory Network. Further analysis verifies the importance of embedding propagation for learning better user and item representations, justifying the rationality and effectiveness of NGCF. Code is available at https://github.com/xiangwang1223/neural_graph_collaborative_filtering.
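A compact numpy sketch of one embedding-propagation layer, using the matrix form stated in the NGCF paper, $E' = \mathrm{LeakyReLU}\big((\mathcal{L}+I)EW_1 + (\mathcal{L}E \odot E)W_2\big)$ with $\mathcal{L}$ the symmetrically normalized user-item adjacency; the graph, sizes, and initialization below are illustrative stand-ins.

```python
import numpy as np

def ngcf_layer(E, L, W1, W2, alpha=0.2):
    """One NGCF propagation step over the normalized adjacency L."""
    msg = (L + np.eye(L.shape[0])) @ E @ W1   # graph-smoothed signal + self-loop
    affinity = ((L @ E) * E) @ W2             # element-wise interaction term
    out = msg + affinity
    return np.where(out > 0, out, alpha * out)  # LeakyReLU

rng = np.random.default_rng(1)
n_nodes, dim = 6, 4                            # users and items stacked
A = np.triu(rng.integers(0, 2, (n_nodes, n_nodes)), 1)
A = A + A.T                                    # toy symmetric adjacency
deg = np.maximum(A.sum(1), 1.0)
L = A / np.sqrt(np.outer(deg, deg))            # D^{-1/2} A D^{-1/2}
E = rng.standard_normal((n_nodes, dim))        # layer-0 embeddings
W1, W2 = (rng.standard_normal((dim, dim)) for _ in range(2))
print(ngcf_layer(E, L, W1, W2).shape)          # (6, 4)
```

Stacking several such layers is what injects high-order connectivity into the final embeddings.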
Let (\rho_\lambda)_{\lambda\in \Lambda} be a holomorphic family of representations of a finitely generated group G into PSL(2,C), parameterized by a complex manifold \Lambda . We define a notion of bifurcation current in this context, that is, a positive closed current on \Lambda describing the bifurcations of this family of representations in a quantitative sense. It is the analogue of the bifurcation current introduced by DeMarco for holomorphic families of rational mappings on the Riemann sphere. Our definition relies on the theory of random products of matrices, so it depends on the choice of a probability measure \mu on G. We show that under natural assumptions on \mu, the support of the bifurcation current coincides with the bifurcation locus of the family. We also prove that the bifurcation current describes the asymptotic distribution of several codimension 1 phenomena in parameter space, like accidental parabolics or new relations, or accidental collisions between fixed points.
We generalize Brudno's theorem for $1$-dimensional shift dynamical systems to $\mathbb{Z}^d$ (or $\mathbb{Z}_+^d$) subshifts. That is to say, in a $\mathbb{Z}^d$ (or $\mathbb{Z}^d_+$) subshift, the Kolmogorov-Sinai entropy equals the Kolmogorov complexity density almost everywhere for an ergodic shift-invariant measure.
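In symbols (our paraphrase of the statement): for an ergodic shift-invariant measure $\mu$ with Kolmogorov-Sinai entropy $h_\mu$, for $\mu$-almost every configuration $x$,

$$\lim_{n\to\infty} \frac{K\big(x|_{[0,n)^d}\big)}{n^d} = h_\mu,$$

where $K\big(x|_{[0,n)^d}\big)$ denotes the Kolmogorov complexity of the pattern that $x$ induces on the box $[0,n)^d$.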
In this paper we prove the existence of solutions for a second-order sweeping process with a Lipschitz single-valued perturbation by transforming it into a first-order problem.
In this paper, we present our first attempts at building a multilingual Neural Machine Translation framework under a unified approach. We are then able to employ attention-based NMT for many-to-many multilingual translation tasks. Our approach does not require any special treatment of the network architecture and allows us to learn a minimal number of free parameters in a standard training procedure. The approach has shown its effectiveness in an under-resourced translation scenario, with considerable improvements of up to 2.6 BLEU points. In addition, it has achieved interesting and promising results when applied to translation tasks where there is no direct parallel corpus between the source and target languages.
We study the light quark-mass dependence of charmed baryon masses as measured by various QCD lattice collaborations. A global fit to such data based on the chiral SU(3) Lagrangian is reported. All low-energy constants that are relevant at next-to-next-to-next-to-leading order (N$^3$LO) are determined from the lattice data sets, where constraints from sum rules that follow from large-$N_c$ QCD at subleading order are imposed. The expected hierarchy of the low-energy constants in the $1/N_c$ expansion is confirmed by our global fits to the lattice data. With our results, the low-energy interaction of the Goldstone bosons with the charmed baryon ground states is well constrained, and the path towards realistic coupled-channel computations in this sector of QCD is prepared.
In this survey, we provide a comprehensive review of more than 200 papers, technical reports, and GitHub repositories published over the last 10 years on the recent developments of deep learning techniques for iris recognition, covering broad topics on algorithm designs, open-source tools, open challenges, and emerging research. First, we conduct a comprehensive analysis of deep learning techniques developed for two main sub-tasks in iris biometrics: segmentation and recognition. Second, we focus on deep learning techniques for the robustness of iris recognition systems against presentation attacks and via human-machine pairing. Third, we delve deep into deep learning techniques for forensic application, especially in post-mortem iris recognition. Fourth, we review open-source resources and tools in deep learning techniques for iris recognition. Finally, we highlight the technical challenges, emerging research trends, and outlook for the future of deep learning in iris recognition.
The topology transition problem of transmission networks is becoming increasingly crucial with topological flexibility more widely leveraged to promote high renewable penetration. This paper proposes a novel methodology to address this problem. Aiming at achieving a bumpless topology transition regarding both static and dynamic performance, this methodology utilizes various eligible control resources in transmission networks to cooperate with the optimization of the line-switching sequence. Mathematically, a composite formulation is developed to efficiently yield bumpless transition schemes with AC feasibility and stability both ensured. With linearization of all non-convexities involved and tractable bumpiness metrics, a convex mixed-integer program first optimizes the line-switching sequence and part of the control resources. Then, two nonlinear programs recover AC feasibility and optimize the remaining control resources by minimizing the $\mathcal{H}_2$-norm of the associated linearized systems, respectively. The final transition scheme is selected by accurate evaluation, including stability verification using time-domain simulations. Finally, numerical studies demonstrate the effectiveness and superiority of the proposed methodology in achieving bumpless topology transitions.
In Landau gauge QCD the Kugo-Ojima confinement criterion and its relation to the infrared behaviour of the gluon and ghost propagators are reviewed. It is demonstrated that the realization of this confinement criterion (which is closely related to the Gribov-Zwanziger horizon condition) results from quite general properties of the ghost Dyson-Schwinger equation. The numerical solutions for the gluon and ghost propagators obtained from a truncated set of Dyson--Schwinger equations provide an explicit example for the anticipated infrared behaviour. The results are in good agreement, also quantitatively, with corresponding lattice data obtained recently. The resulting running coupling approaches a fixed point in the infrared, $\alpha(0) = 8.915/N_c$. Solutions for the coupled system of Dyson--Schwinger equations for the quark, gluon and ghost propagators are presented. Dynamical generation of quark masses and thus spontaneous breaking of chiral symmetry takes place. In the quenched approximation the quark propagator functions agree well with those of corresponding lattice calculations. For a small number of light flavours the quark, gluon and ghost propagators deviate only slightly from the ones in quenched approximation. While the positivity violation of the gluon spectral function is manifest in the gluon propagator, there are no clear indications of analogous positivity violations for quarks so far.
We use a recent implementation of the large $D$ expansion in order to construct the higher-dimensional Kerr-Newman black hole and also new charged rotating black bar solutions of the Einstein-Maxwell theory, all with rotation along a single plane. We describe the space of solutions, obtain their quasinormal modes, and study the appearance of instabilities as the horizons spread along the plane of rotation. Generically, the presence of charge makes the solutions less stable. Instabilities can appear even when the angular momentum of the black hole is small, as long as the charge is sufficiently large. We expect that, although our study is performed in the limit $D\to\infty$, the results provide a good approximation for charged rotating black holes at finite $D\geq 6$.
G\"ottsche gave a formula for the dimension of the cohomology of Hilbert schemes of points on a smooth projective surface $S$. When $S$ admits an action by a finite group $G$, we describe the action of $G$ on the Hodge structure. In the case that $S$ is a K3 surface, each element of $G$ gives a trace on $\sum_{n=0}^{\infty}\sum_{i=0}^{\infty}(-1)^{i}H^{i}(S^{[n]},\mathbb{C})q^{n}$. When $G$ acts faithfully and symplectically on $S$, the resulting generating function is of the form $q/f(q)$, where $f(q)$ is a cusp form. We relate the Hodge structure of Hilbert schemes of points to the Hodge structure of the compactified Jacobian of the tautological family of curves over an integral linear system on a K3 surface as $G$-representations. Finally, we give a sufficient condition for a $G$-orbit of curves with nodal singularities not to contribute to the representation.
The mechanism of Cooper pair formation in iron-based superconductors remains a controversial topic. The main question is whether spin or orbital fluctuations are responsible for the pairing mechanism. To solve this problem, a crucial clue can be obtained by examining the remarkable enhancement of magnetic neutron scattering signals appearing in the superconducting phase. The enhancement is called the spin resonance in the spin fluctuation model, in which its energy is restricted below twice the superconducting gap value ($2\Delta_s$), whereas larger energies are possible in other models such as an orbital fluctuation model. Here we report the doping dependence of low-energy magnetic excitation spectra in Ba1-xKxFe2As2 for 0.5<x<0.84 studied by inelastic neutron scattering. We find that the behavior of the spin resonance changes dramatically from the optimally doped to the overdoped region. Strong resonance peaks are observed clearly below $2\Delta_s$ in the optimally doped region, while they are absent in the overdoped region. Instead, there is a transfer of spectral weight from energies below $2\Delta_s$ to higher energies, peaking at values of $3\Delta_s$ for x = 0.84. These results suggest a reduced impact of magnetism on Cooper pair formation in the overdoped region.
If edge devices are to be deployed in critical applications where their decisions could have serious financial, political, or public-health consequences, they will need a way to signal when they are not sure how to react to their environment. For instance, a lost delivery drone could make its way back to a distribution center or contact the client if it is confused about how exactly to make its delivery, rather than taking the action which is "most likely" correct. This issue is compounded for health care or military applications. However, the brain-realistic temporal credit assignment problem that neuromorphic computing algorithms have to solve is difficult. The double role weights play in backpropagation-based learning, dictating how the network reacts to both input and feedback, needs to be decoupled. e-prop 1 is a promising learning algorithm that tackles this with Broadcast Alignment (a technique where network weights are replaced with random weights during feedback) and accumulated local information. We investigate under what conditions the Bayesian loss term can be expressed in a similar fashion, proposing an algorithm that can likewise be computed with only local information and which is thus no more difficult to implement on hardware. This algorithm is exhibited on a store-recall problem, which suggests that it can learn good uncertainty estimates for decisions made over time.
The channeling of the ion recoiling after a collision with a WIMP changes the ionization signal in direct detection experiments, producing a larger scintillation or ionization signal than otherwise expected. We give estimates of the fraction of channeled recoiling ions in CsI crystals using analytic models developed since the 1960s and 70s to describe channeling and blocking effects.
It is well known that the cohomology groups of a closed manifold $M$ can be reconstructed using the gradient dynamics of a Morse-Smale function $f\colon M\to \mathbb{R}$. A direct result of this construction is the Morse inequalities, which provide lower bounds for the number of critical points of $f$ in terms of the Betti numbers of $M$. These inequalities can also be deduced through a purely analytic method by studying the asymptotic behaviour of the deformed Laplacian operator. This method was introduced by E. Witten and has inspired a number of great achievements in geometry and topology in the past few decades. In this paper, adopting the Witten approach, we provide an analytic proof of the so-called equivariant Morse inequalities when the underlying manifold is acted on by the Lie group $G=S^1$ and the Morse function $f$ is invariant with respect to this action.
It is a central prediction of renormalisation group theory that the critical behaviours of many statistical mechanics models on Euclidean lattices depend only on the dimension and not on the specific choice of lattice. We investigate the extent to which this universality continues to hold beyond the Euclidean setting, taking as case studies Bernoulli bond percolation and lattice trees. We present strong numerical evidence that the critical exponents governing these models on transitive graphs of polynomial volume growth depend only on the volume-growth dimension of the graph and not on any other large-scale features of the geometry. For example, our results strongly suggest that percolation, which has upper-critical dimension six, has the same critical exponents on the four-dimensional hypercubic lattice $\mathbb{Z}^4$ and the Heisenberg group despite the distinct large-scale geometries of these two lattices preventing the relevant percolation models from sharing a common scaling limit. On the other hand, we also show that no such universality should be expected to hold on fractals, even if one allows the exponents to depend on a large number of standard fractal dimensions. Indeed, we give natural examples of two fractals which share Hausdorff, spectral, topological, and topological Hausdorff dimensions but exhibit distinct numerical values of the percolation Fisher exponent $\tau$. This gives strong evidence against a conjecture of Balankin et al. [Phys. Lett. A 2018].
The medical field is creating large amounts of data that physicians are unable to decipher and use efficiently. Moreover, rule-based expert systems are inefficient in solving complicated medical tasks or in creating insights from big data. Deep learning has emerged as a more accurate and effective technology for a wide range of medical problems such as diagnosis, prediction, and intervention. Deep learning is a representation learning method that consists of layers that transform the data non-linearly, thus revealing hierarchical relationships and structures. In this review we survey deep learning application papers that use structured data, signals, and imaging modalities from cardiology. We discuss the advantages and limitations of applying deep learning in cardiology that also apply to medicine in general, while proposing certain directions as the most viable for clinical use.
Supermassive black holes (BHs) residing in the brightest cluster galaxies are over-massive relative to the stellar bulge mass or central stellar velocity dispersion of their host galaxies. As BHs residing at the bottom of the galaxy cluster's potential well may undergo physical processes that are driven by the large-scale characteristics of the galaxy clusters, it is possible that the growth of these BHs is (indirectly) governed by the properties of their host clusters. In this work, we explore the connection between the mass of BHs residing in the brightest group/cluster galaxies (BGGs/BCGs) and the virial temperature, and hence total gravitating mass, of galaxy groups/clusters. To this end, we investigate a sample of 17 BGGs/BCGs with dynamical BH mass measurements and utilize XMM-Newton X-ray observations to measure the virial temperatures and infer the $M_{\rm 500}$ mass of the galaxy groups/clusters. We find that the $M_{\rm BH} - kT$ relation is significantly tighter and exhibits smaller scatter than the $M_{\rm BH} - M_{\rm bulge}$ relations. The best-fitting power-law relations are $ \log_{10} (M_{\rm BH}/10^{9} \ \rm{M_{\odot}}) = 0.20 + 1.74 \log_{10} (kT/1 \ \rm{keV}) $ and $ \log_{10} (M_{\rm BH}/10^{9} \ \rm{M_{\odot}}) = -0.80 + 1.72 \log_{10} (M_{\rm bulge}/10^{11} \ M_{\odot})$. Thus, the BH mass of BGGs/BCGs may be set by physical processes that are governed by the properties of the host galaxy group/cluster. These results are confronted with the Horizon-AGN simulation, which reproduces the observed relations well, although the simulated relations exhibit notably smaller scatter.
We present a novel numerical solver for systems of coupled nonlinear elliptic differential equations. The solver partitions the computational domain into a set of rectangular pseudo-spectral collocation subdomains and is especially well suited for working with stiff solutions such as almost shell-like solitonic boson stars. The method can be used in any number of dimensions, although it is most practical for one- to three-dimensional problems. We apply the method to rotating and spherically symmetric solitonic boson stars and demonstrate that it displays exponential convergence. In the spherically symmetric case we explore families of almost shell-like solitonic boson stars and obtain results that conform with the well-known analytic approximation.
The problem of forced convection along an isothermal moving plate is a classical problem of fluid mechanics that was first solved by Sakiadis (1961). The first work concerning mixed convection along a moving plate appears to be that of Moutsoglou and Chen (1980). Thereafter, many solutions have been obtained for different aspects of this class of boundary-layer problems. In these previous works the fluid properties have been assumed constant. Ali (2006), in a recent paper, treated for the first time the mixed convection problem with variable viscosity. He used the local similarity method to solve this problem, but there are doubts about the validity of his results. For that reason we re-solved the above problem by direct numerical solution of the boundary-layer equations without any transformation.
Besides their use as a cold matrix for spectroscopic studies, superfluid helium droplets have served as a cold environment for the synthesis of molecules and clusters. Since vibrational frequencies of molecules in helium droplets exhibit almost no shift compared to the free-molecule values, one could assume that the solvated particles move without friction and undergo a reaction as soon as their paths cross. There have been a few unexplained observations that seemed to indicate cases of two species on one droplet not forming bonds but remaining isolated. In this work, we performed a systematic study of helium droplets doped with one rubidium and one strontium atom, showing that besides a reaction to RbSr, there is a probability of finding separated Rb and Sr atoms on one droplet that only react after electronic excitation. Our results further indicate that ground-state Sr atoms can reside at the surface as well as inside the droplet.
This paper solves exit problems for spectrally negative Markov additive processes (MAPs) and their reflections. A so-called scale matrix, which is a generalization of the scale function of a spectrally negative Lévy process, plays a central role in the study of exit problems. Existence of the scale matrix was shown in Thm. 3 of Kyprianou and Palmowski (2008). We provide a probabilistic construction of the scale matrix and identify its transform. In addition, we generalize to the MAP setting the relation between the scale function and the excursion (height) measure. The main technique is based on the occupation density formula, and even in the context of fluctuations of spectrally negative Lévy processes this idea seems to be new. Our representation of the scale matrix $W(x)=e^{-\Lambda x}L(x)$ in terms of nice probabilistic objects opens up possibilities for further investigation of its properties.
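For orientation, here is the scalar identity that the scale matrix generalizes, together with its matrix analogue (a sketch; conventions for $W$ vary across the literature): for a spectrally negative Lévy process started at $x \in [0,a]$,

$$\mathbb{P}_x\big(\tau_a^+ < \tau_0^-\big) = \frac{W(x)}{W(a)},$$

while for a spectrally negative MAP with modulating chain $J$, the two-sided exit probabilities indexed by the initial and terminal phases $i, j$ take the matrix form

$$\Big[\mathbb{P}_{x,i}\big(\tau_a^+ < \tau_0^-,\ J_{\tau_a^+} = j\big)\Big]_{i,j} = W(x)\,W(a)^{-1},$$

where $\tau_a^+$ and $\tau_0^-$ are the first passage times above $a$ and below $0$, respectively.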
Photoreflectance is used for the characterisation of semiconductor samples, usually by sweeping the monochromatized probe beam within the energy range between the highest value set by the pump beam and the lowest absorption threshold of the sample. There is, however, no fundamental upper limit for the probe beam other than the limited spectral content of the source and the responsivity of the detector. As long as the modulation mechanism behind photoreflectance affects the complete electronic structure of the material under study, sweeping the probe beam upstream towards energies higher than that of the pump source is equally effective for probing high-energy critical points. This fact, up to now largely overlooked, is shown experimentally in this work. The $E_1$ and $E_0+\Delta_0$ critical points of bulk GaAs are unambiguously resolved using pump light of lower energy. Upstream modulation may broaden further applications of the technique.
We introduce a weak notion of $2\times 2$-minors of gradients of a suitable subclass of $BV$ functions. In the case of maps in $BV(\mathbb{R}^2;\mathbb{R}^2)$ such a notion extends the standard definition of Jacobian determinant to non-Sobolev maps. We use this distributional Jacobian to prove a compactness and $\Gamma$-convergence result for a new model describing the emergence of topological singularities in two dimensions, in the spirit of Ginzburg-Landau and core-radius approaches. Within our framework, the order parameter is an $SBV$ map $u$ taking values in $\mathbb{S}^1$, and the energy is given by the sum of the squared $L^2$ norm of $\nabla u$ and the length of (the closure of) the jump set of $u$ multiplied by $\frac{1}{\varepsilon}$. Here, $\varepsilon$ is a length-scale parameter. We show that, in the $|\log\varepsilon|$ regime, the Jacobian distributions converge, as $\varepsilon\to 0^+$, to a finite sum $\mu$ of Dirac deltas with weights that are integer multiples of $\pi$, and that the corresponding effective energy is given by the total variation of $\mu$.
Crystal structure prediction is one of the major unsolved problems in materials science. Traditionally, this problem is formulated as a global optimization problem, for which global search algorithms are combined with first-principles free energy calculations to predict the ground-state crystal structure given only a material composition or a chemical system. These ab initio algorithms usually cannot exploit the large amount of implicit physicochemical rules or geometric constraints (deep knowledge) of atom configurations embodied in the large number of known crystal structures. Inspired by the deep learning enabled breakthrough in protein structure prediction, herein we propose AlphaCrystal, a crystal structure prediction algorithm that combines a deep residual neural network, which learns deep knowledge to guide prediction of the atomic contact map of a target crystal material, with reconstruction of its 3D crystal structure using genetic algorithms. Based on experiments with a selected set of benchmark crystal materials, we show that our AlphaCrystal algorithm can predict structures close to the ground-truth structures. It can also speed up the crystal structure prediction process by predicting and exploiting the contact map, so it has the potential to handle relatively large systems. We believe that our deep learning based ab initio crystal structure prediction method, which learns from existing material structures, can be used to scale up current crystal structure prediction practice. To our knowledge, AlphaCrystal is the first neural network based algorithm for crystal structure contact map prediction and the first method for directly reconstructing crystal structures from materials composition, whose results can be further optimized by DFT calculations.
We present the first broad-band X-ray study of the nuclei of 14 hard X-ray selected giant radio galaxies, based both on the literature and on the analysis of archival X-ray data from NuSTAR, XMM-Newton, Swift and INTEGRAL. The X-ray properties of the sources are consistent with an accretion-related X-ray emission, likely originating from an X-ray corona coupled to a radiatively efficient accretion flow. We find a correlation between the X-ray luminosity and the radio core luminosity, consistent with that expected for AGNs powered by efficient accretion. In most sources, the luminosity of the radio lobes and the estimated jet power are relatively low compared with the nuclear X-ray emission. This indicates that either the nucleus is more powerful than in the past, consistent with a restarting of the central engine, or that the giant lobes are dimmer due to expansion losses.
Previous work introduced a lower-dimensional numerical model for the geometrically nonlinear simulation and optimization of compliant pressure-actuated cellular structures. This model takes into account hinge eccentricities as well as rotational and axial cell side springs. The aim of this article is twofold. First, previous work is extended by introducing an associated continuum model. This model is an exact geometric representation of a cellular structure and the basis for the spring stiffnesses and eccentricities of the numerical model. Second, the state variables of the continuum and numerical models are linked via discontinuous stress constraints on the one hand and spring stiffnesses and hinge eccentricities on the other hand. An efficient optimization algorithm that fully couples both sets of variables is presented. The performance of the proposed approach is demonstrated with the help of examples.
In this data paper we present the results of an extensive 21cm-line synthesis imaging survey of 43 spiral galaxies in the nearby Ursa Major cluster using the Westerbork Synthesis Radio Telescope. Detailed kinematic information in the form of position-velocity diagrams and rotation curves is presented in an atlas together with HI channel maps, 21cm continuum maps, global HI profiles, radial HI surface density profiles, integrated HI column density maps, and HI velocity fields. The relation between the corrected global HI linewidth and the rotational velocities Vmax and Vflat as derived from the rotation curves is investigated. Inclination angles obtained from the optical axis ratios are compared to those derived from the inclined HI disks and the HI velocity fields. The galaxies were not selected on the basis of their HI content but solely on the basis of their cluster membership and inclination which should be suitable for a kinematic analysis. The observed galaxies provide a well-defined, volume limited and equidistant sample, useful to investigate in detail the statistical properties of the Tully-Fisher relation and the dark matter halos around them.
Food supply chains play a vital role in human health and food prices. Food supply chain inefficiencies, in terms of unfair competition and lack of regulations, directly affect the quality of human life and increase food safety risks. This work merges Hyperledger Fabric, an enterprise-ready blockchain platform, with existing conventional infrastructure to trace a food package from farm to fork, using an identity unique to each food package while keeping the system uncomplicated. It keeps records of business transactions that are secured and accessible to stakeholders according to an agreed set of policies and rules, without involving any centralized authority. This paper focuses on exploring and building an uncomplicated, low-cost solution to quickly link the existing food industry at different geographical locations in a chain to track and trace food in the market.
Quantum and private communications are affected by a fundamental limitation which severely restricts the optimal rates that are achievable by two distant parties. To overcome this problem, one needs to introduce quantum repeaters and, more generally, quantum communication networks. Within a quantum network, other problems and features may appear when we move from the basic unicast setting of single-sender/single-receiver to more complex multi-end scenarios, where multiple senders and multiple receivers simultaneously use the network to communicate. Assuming various configurations, including multiple-unicast, multicast, and multiple-multicast communication, we bound the optimal rates for transmitting quantum information, distributing entanglement, or generating secret keys in quantum networks connected by arbitrary quantum channels. These bounds cannot be surpassed by the most general adaptive protocols of quantum network communication.
In this paper we are concerned with the 2D incompressible Navier-Stokes equations driven by space-time white noise. We establish existence of infinitely many global-in-time probabilistically strong and analytically weak solutions $u$ for every divergence free initial condition $u_0\in L^p\cup C^{-1+\delta},\ p\in(1,2),\delta>0$. More precisely, there exist infinitely many solutions such that $u-z\in C([0,\infty);L^p)\cap L^2_{\rm{loc}}([0,\infty);H^\zeta)\cap L^1_{\rm{loc}}([0,\infty);W^{\frac13,1})$ for some $\zeta\in(0,1)$, where $z$ is the solution to the linear equation. This result in particular implies non-uniqueness in law. Our result is sharp in the sense that the solution satisfying $u-z\in C([0,\infty);L^2)\cap L^2_{\rm{loc}}([0,\infty);H^\zeta)$ for some $\zeta\in(0,1)$ is unique.
Results of recent observations of the Galactic bulge demand that we discard a simple picture of its formation, suggesting the presence of two stellar populations represented by two peaks in the stellar metallicity distribution function (MDF) of the bulge. To assess this issue, we construct Galactic chemical evolution models that have been updated in two respects. First, the delay time distribution (DTD) of type Ia supernovae (SNe Ia) recently revealed by extensive SN Ia surveys is incorporated into the models. Second, a nucleosynthesis clock, the s-process in asymptotic giant branch (AGB) stars, is carefully considered in this study. This novel model shows, for the first time, that the Galactic features traced by the key elements Mg, Fe, and Ba for the bulge, as well as for the thin and thick disks, are compatible with a short-delay SN Ia population. We present a successful modeling of a two-component bulge, including the MDF and the evolutions of [Mg/Fe] and [Ba/Mg], and reveal its origin as follows. A metal-poor component (<[Fe/H]>~-0.5) is formed with a relatively short timescale of ~1 Gyr. These properties are identical to the characteristics of the thick disk in the solar vicinity. Subsequently, from its remaining gas mixed with a gas flow from the disk outside the bulge, a metal-rich component (<[Fe/H]>~+0.3) is formed with a longer timescale (~4 Gyr), together with a top-heavy initial mass function that might be identified with the thin disk component within the bulge.
The effect of disorder on a class of transition metal oxides described by a single orbital Hubbard model at half filling is investigated. The phases are characterized by the nature of the electronic and spin excitations. The frequency- and temperature-dependent conductivity and spin susceptibility as functions of disorder are calculated. The interplay of disorder and electron-electron interaction produces unusual behavior in this system. For example, the dc conductivity, which is vanishingly small at low disorder in the Mott phase and at high disorder in the localized phase, is surprisingly enhanced at intermediate disorder in a "metallic" phase. Moreover, the spin susceptibility in this "metallic" phase does not show the expected Pauli behavior but a Curie $1/T$ law due to the presence of local moments.
It is notoriously difficult to securely configure HTTPS, and poor server configurations have contributed to several attacks including the FREAK, Logjam, and POODLE attacks. In this work, we empirically evaluate the TLS security posture of popular websites and endeavor to understand the configuration decisions that operators make. We correlate several sources of influence on sites' security postures, including software defaults, cloud providers, and online recommendations. We find a fragmented web ecosystem: while most websites have secure configurations, this is largely due to major cloud providers that offer secure defaults. Individually configured servers are more often insecure than not. This may be in part because common resources available to individual operators -- server software defaults and online configuration guides -- are frequently insecure. Our findings highlight the importance of considering SaaS services separately from individually-configured sites in measurement studies, and the need for server software to ship with secure defaults.
This paper presents a high-performance general-purpose no-reference (NR) image quality assessment (IQA) method based on image entropy. The image features are extracted from two domains. In the spatial domain, the mutual information between the color channels and the two-dimensional entropy are calculated. In the frequency domain, the two-dimensional entropy and the mutual information of the filtered sub-band images are computed as the feature set of the input color image. Then, with all the extracted features, a support vector classifier (SVC) is used for distortion classification and support vector regression (SVR) for quality prediction, to obtain the final quality assessment score. The proposed method, which we call entropy-based no-reference image quality assessment (ENIQA), can assess the quality of different categories of distorted images and has a low complexity. ENIQA was assessed on the LIVE and TID2013 databases and showed a superior performance. The experimental results confirmed that ENIQA achieves a high consistency between objective and subjective assessment on color images, which indicates the good overall performance and generalization ability of the method. The source code is available on GitHub at https://github.com/jacob6/ENIQA.
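A minimal Python sketch of two of the spatial-domain features described above -- the mutual information between color channels and a two-dimensional entropy pairing each pixel with its local mean; the bin count and 3x3 neighborhood are our illustrative choices, not the exact ENIQA settings.

```python
import numpy as np

def mutual_information(ch1, ch2, bins=64):
    """MI between two channels, estimated from their joint histogram."""
    joint, _, _ = np.histogram2d(ch1.ravel(), ch2.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

def entropy_2d(gray, bins=64):
    """Joint entropy of (pixel value, mean of its 3x3 neighborhood)."""
    pad = np.pad(gray, 1, mode="edge")
    h, w = gray.shape
    local = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    joint, _, _ = np.histogram2d(gray.ravel(), local.ravel(), bins=bins)
    p = joint / joint.sum()
    nz = p > 0
    return float(-(p[nz] * np.log2(p[nz])).sum())

img = np.random.rand(64, 64, 3)  # stand-in for a real color image
print(mutual_information(img[..., 0], img[..., 1]))
print(entropy_2d(img.mean(axis=-1)))
```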
Energy densities of relativistic electrons and protons in extended galactic and intracluster regions are commonly determined from spectral radio and (rarely) $\gamma$-ray measurements. The time-independent particle spectral density distributions are commonly assumed to have a power-law (PL) form over the relevant energy range. A theoretical relation between energy densities of electrons and protons is usually adopted, and energy equipartition is invoked to determine the mean magnetic field strength in the emitting region. We show that for typical conditions, in both star-forming and starburst galaxies, these estimates need to be scaled down substantially due to significant energy losses that (effectively) flatten the electron spectral density distribution, resulting in a much lower energy density than deduced when the distribution is assumed to have a PL form. The steady-state electron distribution in the nuclear regions of starburst galaxies is calculated by accounting for Coulomb, bremsstrahlung, Compton, and synchrotron losses; the corresponding emission spectra of the latter two processes are calculated and compared to the respective PL spectra. We also determine the proton steady-state distribution by taking into account Coulomb and pion production losses, and briefly discuss implications of our steady-state particle spectra for estimates of proton energy densities and magnetic fields.
A graph $G$ is $(a,b)$-choosable if for any color list of size $a$ associated with each vertex, one can choose a subset of $b$ colors such that adjacent vertices are colored with disjoint color sets. This paper shows an equivalence between the $(a,b)$-choosability of a graph and the $(a,b)$-choosability of one of its subgraphs called the extended core. As an application, this result allows us to prove the $(5,2)$-choosability and $(7,3)$-colorability of triangle-free induced subgraphs of the triangular lattice.
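As a concrete illustration of the definition (ours, not from the paper), the brute-force checker below verifies whether one given list assignment admits a valid choice of $b$ colors per vertex with adjacent sets disjoint; genuine $(a,b)$-choosability additionally quantifies over all list assignments of size $a$, which is far more expensive.

```python
from itertools import combinations, product

def colorable_from_lists(edges, lists, b):
    """Can each vertex pick b colors from its list so that adjacent
    vertices receive disjoint color sets?"""
    verts = sorted(lists)
    choices = [list(combinations(sorted(lists[v]), b)) for v in verts]
    for pick in product(*choices):
        sets = dict(zip(verts, map(set, pick)))
        if all(not (sets[u] & sets[v]) for u, v in edges):
            return True
    return False

# The triangle with identical 3-lists admits a (3,1)-style choice.
K3 = [(0, 1), (1, 2), (0, 2)]
print(colorable_from_lists(K3, {v: {1, 2, 3} for v in range(3)}, b=1))  # True
```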
The electronic and structural properties of Li$B$O$_3$ ($B$=V, Nb, Ta, Os) are investigated via first-principles methods. We show that Li$B$O$_3$ belong to the recently proposed hyperferroelectrics, i.e., they all have unstable longitudinal optic phonon modes. In particular, the ferroelectric-like instability in the metal LiOsO$_3$, whose optical dielectric constant goes to infinity, is a limiting case of hyperferroelectrics. Via an effective Hamiltonian, we further show that, in contrast to normal proper ferroelectricity, in which the ferroelectric instability usually comes from long-range Coulomb interactions, the hyperferroelectric instability is due to a structural instability driven by short-range interactions. This could happen in systems with large ion size mismatches, which therefore provides useful guidance in searching for novel hyperferroelectrics.
The EditLens is an interactive lens technique that supports the editing of graphs. The user can insert, update, or delete nodes and edges while maintaining an already existing layout of the graph. For the nodes and edges that are affected by an edit operation, the EditLens suggests suitable locations and routes, which the user can accept or adjust. For this purpose, the EditLens requires an efficient routing algorithm that can compute results at interactive framerates. Existing algorithms cannot fully satisfy the needs of the EditLens. This paper describes a novel algorithm that can compute orthogonal edge routes for incremental edit operations of graphs. Tests indicate that, in general, the algorithm is better than alternative solutions.
The aim of this paper is to develop and test metrics to quantitatively identify technological discontinuities in a knowledge network. We developed five metrics based on innovation theories and tested them on a simulated knowledge network with a hypothetically designed discontinuity. The designed discontinuity is modeled as a node which combines two different knowledge streams and whose knowledge is dominantly persistent in the knowledge network. The performance of the proposed metrics was evaluated by how well they distinguish the designed discontinuity from other nodes in the knowledge network. The simulation results show that the persistence times the number of converging main paths provides the best performance in identifying the designed discontinuity: the designed discontinuity was identified as one of the top 3 patents with 96-99% probability by Metric 5, which is, depending on the size of the domain, 12-34% better than the performance of the second-best metric. Beyond the simulation analysis, we tested the metrics using a patent set representative of the magnetic information storage domain. Three representative patents associated with a well-known breakthrough technology in the domain, the giant magneto-resistance (GMR) spin valve sensor, were selected based on qualitative studies, and the metrics were tested by how well they identify the selected patents as top-ranked patents. The empirical results fully support the simulation results, and therefore the persistence times the number of converging main paths is recommended for identifying technological discontinuities for any technology.
We apply the theory of finite-type invariants of homology 3-spheres to investigate the structure of the Torelli group. We construct natural cocycles in the Torelli group and show that the lower central series quotients of the Torelli group map onto a vector space of trivalent graphs. We also obtain analogous results for two other natural subgroups of the mapping class group.
Considering the physics potential of an e-e- collider in the TeV energy range, we indicate a few interesting examples for exotic processes and discuss the standard model backgrounds. Focussing on pair production of weak gauge bosons, we report some illustrative predictions.
When the scattering length is proportional to the distance from the center of the system, two particles are shown to be trapped about the center. Furthermore, their spectrum exhibits discrete scale invariance, whose scale factor is controlled by the slope of the scattering length. While this resembles the Efimov effect, our system has a number of advantages when realized with ultracold atoms. We also elucidate how the emergent discrete scaling symmetry is violated for more than two bosons, which may shed new light on Efimov physics. Our system thus serves as a tunable model system to investigate universal physics involving scale invariance, quantum anomaly, and renormalization group limit cycle, which are important in a broad range of quantum physics.
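For orientation, discrete scale invariance of a two-body spectrum has the generic geometric form below (a schematic statement of the standard form; per the abstract, the scale factor $\lambda$ here is controlled by the slope of the scattering length rather than being the universal Efimov constant):

$$E_{n+1} = \lambda^{-2} E_n, \qquad E_n = E_0\,\lambda^{-2n}, \qquad n = 0, 1, 2, \ldots$$

so that the spectrum maps onto itself under the discrete rescaling $r \to \lambda r$, $E \to \lambda^{-2} E$, which is the hallmark of a renormalization group limit cycle.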
In this paper, we consider the iterative method of subspace corrections with random ordering. We prove identities for the expected convergence rate, which provide sharp estimates for the error reduction per iteration. We also study the fault-tolerant features of the randomized successive subspace correction method by simply rejecting all corrections when an error occurs, and show that the resulting iterative method converges with probability one. Moreover, we provide sharp estimates on the expected convergence rate for the fault-tolerant, randomized, subspace correction method.
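A minimal sketch of the idea, assuming the simplest choice of one-dimensional (coordinate) subspaces so that the method reduces to randomized Gauss-Seidel; the fault model (rejecting a correction with some probability) mirrors the rejection strategy described above, while all names and parameters are illustrative:

```python
import numpy as np

def randomized_ssc(A, f, n_iters=5000, fault_prob=0.0, rng=None):
    """Toy randomized successive subspace correction for SPD A.

    Each subspace is a single coordinate (randomized Gauss-Seidel);
    a 'fault' simply rejects the correction for that step, mimicking
    the fault-tolerant variant described in the abstract.
    """
    rng = np.random.default_rng(rng)
    n = len(f)
    u = np.zeros(n)
    for _ in range(n_iters):
        i = rng.integers(n)                  # pick a random subspace
        r_i = f[i] - A[i] @ u                # local residual
        correction = r_i / A[i, i]           # exact local (1-D) solve
        if rng.random() >= fault_prob:       # reject the step on a fault
            u[i] += correction
    return u

# Quick check on a small SPD system.
g = np.random.default_rng(0)
M = g.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)
f = g.standard_normal(20)
u = randomized_ssc(A, f, n_iters=20000, fault_prob=0.1, rng=1)
print(np.linalg.norm(A @ u - f))  # small residual despite rejected steps
```

Even with 10% of corrections rejected, the iteration still converges, consistent with the probability-one convergence result stated above.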
Transition metal doping is known to increase the photosensitivity to visible light for photocatalytically active ZnO. We report on the electronic structure of nano-crystalline Fe:ZnO, which has recently been shown to be an efficient photocatalyst. The photo-activity of ZnO reduces Fe from 3+ to 2+ in the surface region of the nano-crystalline material. Electronic states corresponding to low-spin Fe 2+ are observed and attributed to crystal field modification at the surface. These states can be important for the photocatalytic sensitivity to visible light due to their deep location in the ZnO bandgap. X-ray absorption and x-ray photoemission spectroscopy suggest that Fe is only homogeneously distributed for concentrations up to 3%. Increased concentrations do not result in a higher concentration of Fe ions in the surface region. This is a crucial factor limiting the photocatalytic functionality of ZnO, where the most efficient doping concentration has been shown to be 2-4% for Fe. Using resonant photoemission spectroscopy, we determine the location of the Fe 3d states with sensitivity to the charge states of the Fe ion, even for multi-valent and multi-coordinated Fe.
We derive a novel formulation for the interaction potential between deformable fibers due to short-range fields arising from intermolecular forces. The formulation improves the existing section-section interaction potential law for in-plane beams by considering an offset between interacting cross sections. The new law is asymptotically consistent, which is particularly beneficial for computationally demanding scenarios involving short-range interactions like van der Waals and steric forces. The formulation is implemented within a framework of rotation-free Bernoulli-Euler beams utilizing the isogeometric paradigm. The improved accuracy of the novel law is confirmed through thorough numerical studies. We apply the developed formulation to investigate the complex behavior observed during peeling and pull-off of elastic fibers interacting via the Lennard-Jones potential.
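To make the setting concrete, the sketch below (our own toy illustration, not the paper's closed-form law) numerically sums point-pair Lennard-Jones contributions over material points sampled on two circular cross sections; the asymptotically consistent section-section law derived in the paper replaces exactly this kind of costly double quadrature:

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Standard 12-6 Lennard-Jones point-pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6)

def section_section_potential(x1, x2, radius=0.1, n=16, **lj_kwargs):
    """Toy section-section interaction for in-plane (2D) beams:
    sum point-pair LJ contributions over n material points sampled
    on two circular cross sections centered at x1 and x2."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    ring = radius * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    pts1 = np.asarray(x1) + ring
    pts2 = np.asarray(x2) + ring
    # all pairwise distances between points of the two sections
    d = np.linalg.norm(pts1[:, None, :] - pts2[None, :, :], axis=-1)
    return lennard_jones(d, **lj_kwargs).sum() / n**2

print(section_section_potential([0.0, 0.0], [0.5, 0.0]))
```

The offset between interacting cross sections mentioned in the abstract enters here through the center positions x1 and x2; the brute-force sum makes plain why a closed-form, asymptotically consistent law matters for short-range interactions.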
Almost all of the most successful quantum algorithms discovered to date exploit the ability of the Fourier transform to recover subgroup structure of functions, especially periodicity. The fact that Fourier transforms can also be used to capture shift structure has received far less attention in the context of quantum computation. In this paper, we present three examples of ``unknown shift'' problems that can be solved efficiently on a quantum computer using the quantum Fourier transform. We also define the hidden coset problem, which generalizes the hidden shift problem and the hidden subgroup problem. This framework provides a unified way of viewing the ability of the Fourier transform to capture subgroup and shift structure.
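The classical analogue of the shift-capturing property is worth spelling out: on $Z_N$, the Fourier transform turns cyclic shifts into phases, so an unknown shift can be read off from the peak of an FFT-based cross-correlation. The sketch below illustrates only this classical version (the quantum algorithms in the paper are different in mechanism and cost):

```python
import numpy as np

rng = np.random.default_rng(42)
N, s = 64, 17
f = rng.standard_normal(N)
g = np.roll(f, s)             # g(x) = f(x - s): the hidden shift

# Fourier transforms turn cyclic shifts into phases, so the circular
# cross-correlation of g with f peaks exactly at the hidden shift.
corr = np.fft.ifft(np.fft.fft(g) * np.conj(np.fft.fft(f))).real
print(int(np.argmax(corr)))   # 17
```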
We consider various configurations of T-branes which are non-abelian bound states of branes and were recently introduced by Cecotti, Cordova, Heckman and Vafa. They are a refinement of the concept of monodromic branes featured in phenomenological F-theory models. We are particularly interested in the T-branes corresponding to Z3 and Z4 monodromies, which are used to break E7 or E8 gauge groups to SU(5) GUT. Our results imply that the up-type and down-type Yukawa couplings for the breaking of E7 are zero, whereas up-type and down-type Yukawa couplings, together with right handed neutrino Yukawas are non-zero for the case of the breaking of E8. The dimension four proton decay mediating term is avoided in models with either E7 or E8 breaking.
We explore the convergence of the light-front coupled-cluster (LFCC) method in the context of two-dimensional quenched scalar Yukawa theory. This theory is simple enough for higher-order LFCC calculations to be relatively straightforward. The quenching is to maintain stability; the spectrum of the full theory with pair creation and annihilation is unbounded from below. The basic interaction in the quenched theory is only emission and absorption of a neutral scalar by the complex scalar. The LFCC method builds the eigenstate with one complex scalar and a cloud of neutrals from a valence state that is just the complex scalar and the action of an exponentiated operator that creates neutrals. The lowest order LFCC operator creates one; we add the next order, a term that creates two. At this order there is a direct contribution to the wave function for two neutrals and one complex scalar and additional contributions to all higher Fock wave functions from the exponentiation. Results for the lowest order and this new second-order approximation are compared with those obtained with standard Fock-state expansions. The LFCC approach is found to allow representation of the eigenstate with far fewer functions than the number of wave functions required in a converged Fock-state expansion.
We present entropy and temperature relations for multi-horizon black holes, including even the "virtual" horizon. These relations involve products, quotients, and sums of the entropies and temperatures of the multiple horizons. We obtain additional thermodynamic relations for both static and rotating black holes in three- and four-dimensional (A)dS spacetime. In particular, a new dimensionless, charge-independent relation of the form $T_+S_+=T_-S_-$ is presented. This relation does not depend on the mass, electric charge, angular momentum, or cosmological constant, as it is always a constant. These relations lead to interesting thermodynamic bounds on entropy and temperature, including the Penrose inequality, the first geometrical inequality for black holes. Moreover, based on these new relations, one can obtain the first law of thermodynamics and the Smarr relation for all horizons of the black hole.
Recent work has identified cosmic ray events as an error source limiting the lifetime of quantum data. These errors are correlated and affect a large number of qubits, leading to the loss of data across a quantum chip. Previous works attempting to address the problem in hardware or by building distributed systems still have limitations. We approach the problem from a different perspective, developing a new hybrid hardware-software-based strategy based on the 2-D surface code, assuming the parallel development of a hardware strategy that limits the phonon propagation radius. We propose to flee the area: move the logical qubits far enough away from the strike's epicenter to maintain our logical information. Specifically, we: (1) establish the minimum hardware requirements needed for our approach; (2) propose a mapping for moving logical qubits; and (3) evaluate the possible choice of the code distance. Our analysis considers two possible cosmic ray events: those far from both ``holes'' in the surface code and those near or overlapping a hole. We show that the probability that the logical qubit will be destroyed can be reduced from 100% to the range 4% to 15% depending on the time required to move the logical qubit.
We present constraints on extensions of the minimal cosmological models dominated by dark matter and dark energy, $\Lambda$CDM and $w$CDM, by using a combined analysis of galaxy clustering and weak gravitational lensing from the first-year data of the Dark Energy Survey (DES Y1) in combination with external data. We consider four extensions of the minimal dark energy-dominated scenarios: 1) nonzero curvature $\Omega_k$, 2) number of relativistic species $N_{\rm eff}$ different from the standard value of 3.046, 3) time-varying equation-of-state of dark energy described by the parameters $w_0$ and $w_a$ (alternatively quoted by the values at the pivot redshift, $w_p$, and $w_a$), and 4) modified gravity described by the parameters $\mu_0$ and $\Sigma_0$ that modify the metric potentials. We also consider external information from Planck CMB measurements; BAO measurements from SDSS, 6dF, and BOSS; RSD measurements from BOSS; and SNIa information from the Pantheon compilation. Constraints on curvature and the number of relativistic species are dominated by the external data; when these are combined with DES Y1, we find $\Omega_k=0.0020^{+0.0037}_{-0.0032}$ at the 68% confidence level, and $N_{\rm eff}<3.28\, (3.55)$ at 68% (95%) confidence. For the time-varying equation-of-state, we find the pivot value $(w_p, w_a)=(-0.91^{+0.19}_{-0.23}, -0.57^{+0.93}_{-1.11})$ at pivot redshift $z_p=0.27$ from DES alone, and $(w_p, w_a)=(-1.01^{+0.04}_{-0.04}, -0.28^{+0.37}_{-0.48})$ at $z_p=0.20$ from DES Y1 combined with external data; in either case we find no evidence for the temporal variation of the equation of state. For modified gravity, we find the present-day value of the relevant parameters to be $\Sigma_0= 0.43^{+0.28}_{-0.29}$ from DES Y1 alone, and $(\Sigma_0, \mu_0)=(0.06^{+0.08}_{-0.07}, -0.11^{+0.42}_{-0.46})$ from DES Y1 combined with external data, consistent with predictions from GR.
In this article, we explore fundamental concepts of $d$-complex manifolds, introduce several differential operators, and examine the relationships between them. A $d$-K\"ahler manifold is a $d$-complex manifold equipped with a metric that satisfies a specific condition. We prove the Hodge decomposition theorem on compact $d$-K\"ahler manifolds, which establishes a crucial relationship between certain de Rham cohomology groups and Dolbeault cohomology groups on a compact $d$-K\"ahler manifold.
We conducted an exploration of 12CO molecular outflows in the Orion A giant molecular cloud to investigate outflow feedback, using 12CO (J = 1-0) and 13CO (J = 1-0) data obtained with the Nobeyama 45-m telescope. In the region excluding the center of OMC 1, we identified 44 12CO outflows (including 17 newly detected) based on an unbiased and systematic procedure that automatically determines the velocity range of the outflows and separates the cloud and outflow components. The optical depth of the 12CO emission in the detected outflows is estimated to be approximately 5. The total momentum and energy of the outflows, corrected for optical depth, are estimated to be 1.6 x 10^2 M_sun km s^-1 and 1.5 x 10^46 erg, respectively. The momentum and energy ejection rates of the outflows are estimated to be 36% and 235% of the momentum and energy dissipation rates of the cloud turbulence, respectively. Furthermore, the ejection rates of the outflows are comparable to those of the expanding molecular shells estimated by Feddersen et al. (2018, ApJ, 862, 121). Cloud turbulence cannot be sustained by the outflows and shells unless the energy conversion efficiency is as high as 20%.
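For orientation, a common way to correct optically-thin estimates for an optical depth tau is the escape-probability factor tau/(1 - e^(-tau)); assuming this standard correction (the paper's exact procedure may differ), tau ~ 5 implies roughly a factor-of-five scaling of the thin estimates:

```python
import numpy as np

tau = 5.0
corr = tau / (1.0 - np.exp(-tau))   # escape-probability correction factor
print(corr)                          # ~5.03 for tau = 5
```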
We consider the Yamada model for an excitable or self-pulsating laser with saturable absorber, and study the effects of delayed optical self-feedback in the excitable case. More specifically, we are concerned with the generation of stable periodic pulse trains via repeated self-excitation after passage through the delayed feedback loop, as well as their bifurcations. We show that onset and termination of such pulse trains correspond to the simultaneous bifurcation of countably many fold periodic orbits with infinite period in this delay differential equation. We employ numerical continuation and the concept of reappearance of periodic solutions to show that these bifurcations coincide with codimension-two points along families of connecting orbits and fold periodic orbits in a related advanced differential equation. These points include heteroclinic connections between steady states, as well as homoclinic bifurcations with non-hyperbolic equilibria. Tracking these codimension-two points in parameter space reveals the critical parameter values for the existence of periodic pulse trains. We use the recently developed theory of temporal dissipative solitons to infer necessary conditions for the stability of such pulse trains.
The nature of the pseudogap phase is a central problem in the quest to understand high-Tc cuprate superconductors. A fundamental question is what symmetries are broken when that phase sets in below a temperature T*. There is evidence from both polarized neutron diffraction and polar Kerr effect measurements that time-reversal symmetry is broken, but at temperatures that differ significantly. Broken rotational symmetry was detected by both resistivity and inelastic neutron scattering at low doping and by scanning tunnelling spectroscopy at low temperature, but with no clear connection to T*. Here we report the observation of a large in-plane anisotropy of the Nernst effect in YBa2Cu3Oy that sets in precisely at T*, throughout the doping phase diagram. We show that the CuO chains of the orthorhombic lattice are not responsible for this anisotropy, which is therefore an intrinsic property of the CuO2 planes. We conclude that the pseudogap phase is an electronic state which strongly breaks four-fold rotational symmetry. This narrows the range of possible states considerably, pointing to stripe or nematic orders.
In this experiment, three different search algorithms are implemented for the purpose of extracting a task tree from a large knowledge graph, known as the Functional Object-Oriented Network (FOON). Using a universal FOON, which contains knowledge extracted by annotating online cooking videos, and a desired goal, a task tree can be retrieved. The process of searching the universal FOON for task tree retrieval is tested using iterative deepening search and greedy best-first search with two different heuristic functions. The performance of these three algorithms is analyzed and compared. The results of the experiment show that iterative deepening performs strongly overall. However, different heuristics in an informed search proved to be beneficial for certain situations.
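Since the abstract names the algorithms but not their implementations, the following self-contained Python sketch shows the two search strategies in generic form on a toy successor graph; the node names and heuristic are hypothetical placeholders, and real FOON task-tree retrieval operates on functional units rather than plain strings:

```python
import heapq
from itertools import count

def iterative_deepening(start, goal, successors, max_depth=20):
    """Depth-limited DFS repeated with a growing depth bound."""
    def dls(node, path, depth):
        if node == goal:
            return path
        if depth == 0:
            return None
        for child in successors(node):
            if child not in path:  # avoid cycles along the current path
                found = dls(child, path + [child], depth - 1)
                if found:
                    return found
        return None

    for depth in range(max_depth + 1):
        result = dls(start, [start], depth)
        if result:
            return result
    return None

def greedy_best_first(start, goal, successors, heuristic):
    """Always expand the frontier node that looks closest to the goal."""
    tie = count()  # tie-breaker so heapq never compares nodes directly
    frontier = [(heuristic(start), next(tie), start, [start])]
    visited = set()
    while frontier:
        _, _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for child in successors(node):
            heapq.heappush(
                frontier, (heuristic(child), next(tie), child, path + [child])
            )
    return None

# Toy "task graph": names are hypothetical stand-ins for FOON functional units.
graph = {"goal: pasta": ["boil water", "chop garlic"],
         "boil water": ["mix"], "chop garlic": ["mix"], "mix": []}
succ = lambda n: graph.get(n, [])
print(iterative_deepening("goal: pasta", "mix", succ))
print(greedy_best_first("goal: pasta", "mix", succ,
                        heuristic=lambda n: 0 if n == "mix" else 1))
```

Swapping in a different `heuristic` reproduces the experiment's comparison of two informed-search variants against the uninformed iterative deepening baseline.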
We present new abundances and radial velocities for stars in the field of the open cluster Tombaugh 2, which has been suggested to be associated with the Galactic Anticenter Stellar Structure (GASS, also known as the Monoceros stream). Using VLT/FLAMES with the UVES and GIRAFFE spectrographs, we find a radial velocity (RV) of <V_{r}> = 121 \pm 0.4 km/s from eighteen Tombaugh 2 cluster stars. Our abundance analysis of RV-selected members finds that Tombaugh 2 is more metal-rich than previous studies have found; moreover, unlike the previous work, our larger sample also reveals that stars with the velocity of the cluster show a relatively large spread in chemical properties (e.g., Delta[Fe/H] > 0.2). This is the first time a possible abundance spread has been observed in an open cluster, though this is one of several possible explanations for our observations. While there is an apparent trend of [alpha/Fe] with [Fe/H], the distribution of abundances of these "RV cluster members" may also hint at a division into two primary groups with different mean chemical characteristics -- namely (<[Fe/H]>, <[Ti/Fe]>) ~ (-0.06, +0.02) and (-0.28, +0.36). Based on position and kinematics, Tombaugh 2 is a likely member of the GASS/Monoceros stream, which would make it the second star cluster within the originally proposed GASS/Monoceros family. However, we explore other possible explanations for the observed abundance spread and the two possible sub-populations; the most likely explanation is that the metal-poor ([Fe/H] = -0.28), more centrally concentrated population represents the true Tombaugh 2 cluster stars, while the metal-rich ([Fe/H] = -0.06) population is an overlapping, kinematically associated, but "cold" (sigma_V < 2 km/s) stellar stream at R_{gc} >= 15 kpc.
The hydrodynamic response of the inviscid small shearing box model of a midplane section of a rotationally supported astrophysical disk is examined. An energy functional ${\cal E}$ is formulated for the general nonlinear problem. It is found that the fate of disturbances is related to the conservation of this quantity, which, in turn, depends on the boundary conditions utilized: ${\cal E}$ is conserved for channel boundary conditions, while it is not conserved in general for shearing box conditions. Linearized disturbances subject to channel boundary conditions have normal modes described by Bessel functions and are qualitatively governed by a quantity $\Sigma$, which is a measure of the ratio between the azimuthal and vertical wavelengths. Inertial oscillations ensue if $\Sigma >1$; otherwise disturbances must in general be treated as an initial value problem. We reflect upon these results and offer a speculation.
Two-dimensional metal-halide perovskites are highly versatile for light-driven applications due to their exceptional variety in material composition, which can be exploited for tunability of mechanical and optoelectronic properties. The band edge emission is defined by structure and composition of both organic and inorganic layers, and electron-phonon coupling plays a crucial role in the recombination dynamics. However, the nature of the electron-phonon coupling and which kind of phonons are involved is still under debate. Here we investigate the emission, reflectance and phonon response from single two-dimensional lead-iodide microcrystals with angle-resolved polarized spectroscopy. We find an intricate dependence of the emission polarization with the vibrational directionality in the materials, which reveals that several bands of the low-frequency phonons with non-orthogonal directionality contribute to the band edge emission. Such complex electron-phonon coupling requires adequate models to predict the thermal broadening of the emission and provides opportunities to design its polarization properties.
This paper studies the vertices, in the sense defined by J. A. Green, of Specht modules for symmetric groups. The main theorem gives, for each indecomposable non-projective Specht module, a large subgroup contained in one of its vertices. A corollary of this theorem is a new way to determine the defect groups of symmetric groups. We also use it to find the Green correspondents of a particular family of simple Specht modules; as a corollary, we get a new proof of the Brauer correspondence for blocks of the symmetric group. The proof of the main theorem uses the Brauer homomorphism on modules, as developed by M. Brou{\'e}, together with combinatorial arguments using Young tableaux.
From resonant Raman scattering on isolated nanotubes we obtained the optical transition energies, the radial breathing mode frequency, and the Raman intensity of both metallic and semiconducting tubes. We unambiguously assigned the chiral index (n_1,n_2) of approximately 50 nanotubes based solely on a third-neighbor tight-binding Kataura plot and find omega_RBM = 214.4 cm^-1 nm / d + 18.7 cm^-1. In contrast to luminescence experiments, we observe all chiralities, including zig-zag tubes. The Raman intensities show a systematic chiral-angle dependence, confirming recent ab-initio calculations.
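The fitted relation is directly usable; a small sketch evaluating it and inverting it to estimate tube diameters from measured RBM frequencies (function names are ours, the constants are those quoted above):

```python
def omega_rbm(d_nm):
    """Radial breathing mode frequency (cm^-1) from the fitted relation,
    with tube diameter d in nm."""
    return 214.4 / d_nm + 18.7

def diameter_from_rbm(omega_cm):
    """Invert the relation to estimate tube diameter in nm."""
    return 214.4 / (omega_cm - 18.7)

print(omega_rbm(1.0))            # 233.1 cm^-1 for a 1 nm tube
print(diameter_from_rbm(233.1))  # 1.0 nm
```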
We prove that the theory of the models constructible using finitely many cofinality quantifiers - $C_{\lambda_{1},...,\lambda_{n}}^{*}$ and $C_{<\lambda_{1},...,<\lambda_{n}}^{*}$ for $\lambda_{1},...,\lambda_{n}$ regular cardinals - is set-forcing absolute under the assumption of class many Woodin cardinals, and is independent of the regular cardinals used. Towards this goal we prove some properties of the generic embedding induced from the stationary tower restricted to $<\mu$-closed sets.
We introduce and study a simple model capturing the main features of unbalanced optimal transport. It is based on equipping the conical extension of the group of all diffeomorphisms with a natural metric, which allows a Riemannian submersion to the space of volume forms of arbitrary total mass. We describe its finite-dimensional version and present a concise comparison study of the geometry, Hamiltonian features, and geodesics for this and other extensions. One of the corollaries of this approach is that along any geodesic the total mass evolves with constant acceleration, as an object's height in a constant buoyancy field.
Neural networks have been successfully used in the processing of Lidar data, especially in the scenario of autonomous driving. However, existing methods rely heavily on pre-processing of the pulse signals derived from Lidar sensors, and therefore incur high computational overhead and considerable latency. In this paper, we propose an approach utilizing a Spiking Neural Network (SNN) to address the object recognition problem directly with raw temporal pulses. To help with evaluation and benchmarking, a comprehensive temporal pulse dataset was created to simulate Lidar reflections in different road scenarios. Tested with regard to recognition accuracy and time efficiency under different noise conditions, our proposed method shows remarkable performance, with inference accuracy up to 99.83% (with 10% noise) and an average recognition delay as low as 265 ns. This highlights the potential of SNNs in autonomous driving and related applications. In particular, to the best of our knowledge, this is the first attempt to use an SNN to directly perform object recognition on raw Lidar temporal pulses.
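To give a flavor of processing raw temporal pulses with spiking neurons, here is a toy leaky integrate-and-fire unit driven directly by pulse arrival times; all constants are illustrative placeholders, and the paper's actual network architecture is not specified here:

```python
import numpy as np

def lif_response(pulse_times, t_end=1e-6, dt=1e-9, tau=50e-9,
                 weight=0.5, threshold=1.0):
    """Toy leaky integrate-and-fire neuron driven by raw pulse
    arrival times (in seconds); returns output spike times."""
    n = int(round(t_end / dt))
    drive = np.zeros(n)
    for t in pulse_times:
        drive[int(round(t / dt))] += weight   # each pulse injects charge
    v, spikes = 0.0, []
    for i in range(n):
        v = v * np.exp(-dt / tau) + drive[i]  # leak, then integrate
        if v >= threshold:                    # fire and reset
            spikes.append(i * dt)
            v = 0.0
    return spikes

# Four closely spaced echoes push the membrane over threshold.
print(lif_response([100e-9, 120e-9, 140e-9, 160e-9]))
```

Because the neuron integrates pulses as they arrive, a decision can be emitted within nanoseconds of the relevant echoes, which is the intuition behind the very low recognition delay reported above.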
The notion of the weighted $(b,c)$-inverse of an element of a ring was introduced very recently [Comm. Algebra, 48 (4) (2020): 1423-1438]. In this paper, we further elaborate on this theory by establishing a few characterizations of this inverse and its relationships with other $(v, w)$-weighted $(b,c)$-inverses. We introduce some necessary and sufficient conditions for the existence of the hybrid $(v, w)$-weighted $(b,c)$-inverse and the annihilator $(v, w)$-weighted $(b,c)$-inverse of elements in rings. In addition, we explore a few sufficient conditions for the reverse-order law of the annihilator $(v, w)$-weighted $(b,c)$-inverses.
Mobile Ad hoc Network (MANET) is a distributed, infrastructure-less, and decentralized network. A routing protocol in MANET is used to find routes between mobile nodes to facilitate communication within the network. Numerous routing protocols have been proposed for MANET. Those routing protocols are designed to adaptively accommodate dynamic, unpredictable changes in the network topology. The mobile nodes in MANET are often powered by limited batteries, and network lifetime relies heavily on the energy consumption of nodes. In consequence, the loss of a mobile node can lead to network partitioning. In this paper we analyse, evaluate, and measure the energy efficiency of three prominent MANET routing protocols, namely DSR, AODV, and OLSR, in addition to modified variants. These routing protocols follow the reactive and proactive routing schemes. A discussion and comparison highlighting their particular merits and drawbacks is also presented. The evaluation study and simulations are performed using NS-2 and its accompanying tools for analysis and investigation of the results.
Human face-to-face conversation is an ideal model for human-computer dialogue. One of the major features of face-to-face communication is its multiplicity of communication channels that act on multiple modalities. To realize a natural multimodal dialogue, it is necessary to study how humans perceive information and determine the information to which humans are sensitive. A face is an independent communication channel that conveys emotional and conversational signals, encoded as facial expressions. We have developed an experimental system that integrates speech dialogue and facial animation, to investigate the effect of introducing communicative facial expressions as a new modality in human-computer conversation. Our experiments have shown that facial expressions are helpful, especially upon first contact with the system. We have also discovered that featuring facial expressions at an early stage improves subsequent interaction.
Layered thallium copper chalcogenides can form single, double, or triple layers of Cu-Ch separated by Tl sheets. Here we report on the preparation and properties of the Tl-based materials TlCu2Se2, TlCu4S3, TlCu4Se3, and TlCu6S4, and compare them to reports on layered ACu_{2n}Ch_{n+1} materials with A = Ba, K, Rb, and Cs, and Ch = S, Se. The absence of long-range magnetism in these materials is quite surprising, considering the possibilities of inter- and intra-layer exchange interactions through Cu 3d states; we establish this by magnetic susceptibility measurements and confirm it by neutron diffraction. First-principles density-functional theory calculations for both the single-layer TlCu2Se2 (isostructural to the 122 iron-based superconductors) and the double-layer TlCu4Se3 suggest a lack of the Fermi-level spectral weight needed to drive a magnetic or superconducting instability. The electronic structure calculations show a much greater likelihood of magnetism for multiple structural layers with Fe.
We introduce graph gamma process (GGP) linear dynamical systems to model real-valued multivariate time series. For temporal pattern discovery, the latent representation under the model is used to decompose the time series into a parsimonious set of multivariate sub-sequences. In each sub-sequence, different data dimensions often share similar temporal patterns but may exhibit distinct magnitudes, and hence allowing the superposition of all sub-sequences to exhibit diverse behaviors at different data dimensions. We further generalize the proposed model by replacing the Gaussian observation layer with the negative binomial distribution to model multivariate count time series. Generated from the proposed GGP is an infinite dimensional directed sparse random graph, which is constructed by taking the logical OR operation of countably infinite binary adjacency matrices that share the same set of countably infinite nodes. Each of these adjacency matrices is associated with a weight to indicate its activation strength, and places a finite number of edges between a finite subset of nodes belonging to the same node community. We use the generated random graph, whose number of nonzero-degree nodes is finite, to define both the sparsity pattern and dimension of the latent state transition matrix of a (generalized) linear dynamical system. The activation strength of each node community relative to the overall activation strength is used to extract a multivariate sub-sequence, revealing the data pattern captured by the corresponding community. On both synthetic and real-world time series, the proposed nonparametric Bayesian dynamic models, which are initialized at random, consistently exhibit good predictive performance in comparison to a variety of baseline models, revealing interpretable latent state transition patterns and decomposing the time series into distinctly behaved sub-sequences.
In this paper, we study the ``hyperon puzzle'', a problem that, despite the large number of studies, is still open. The solution of this issue requires one or more mechanisms that could eventually provide the additional repulsion needed to make the EoS stiffer and, therefore, the value of $M_{\rm{max}, T}$ compatible with current observational limits. We propose that including dark matter (DM) admixed with ordinary matter in neutron stars (NSs) changes the hydrostatic equilibrium and may explain the observed discrepancies, regardless of hyperon multi-body interactions, which seem to be unavoidable. We study how non-self-annihilating, self-interacting DM admixed with ordinary matter in NSs changes their inner structure, and discuss the mass-radius relations of such NSs. We consider DM particle masses of 1, 10, and 100 GeV, while taking into account a wide range of DM interaction strengths $y$. By analyzing the multidimensional parameter space, including quantities such as (a) the DM interaction strength, (b) the DM particle mass as well as the amount of DM in the NS interior, and (c) the DM fraction ${\rm f}_{DM}$, we place constraints on the parameter space ${\rm f}_{DM} - p^{\prime}_{\rm DM}/p^{\prime}_{\rm OM}$. Our bounds are sensitive to the recently observed NS total masses.
Guided wave dispersion is commonly assessed by Fourier analysis of the field along a line, resulting in frequency-wavenumber dispersion curves. In anisotropic plates, a point source can generate multiple dispersion branches pertaining to the same modal surface, which arise due to the angle between the power flux and the wave vector. We show that this phenomenon is particularly pronounced near zero-group-velocity points, entailing up to six contributions along a given direction. Stationary phase points accurately describe the measurements conducted on a monocrystalline silicon plate.
Light-weight camera localization in existing maps is essential for vision-based navigation. Currently, visual and visual-inertial odometry (VO\&VIO) techniques are well developed for state estimation, but suffer from inevitable accumulated drift and pose jumps upon loop closure. To overcome these problems, we propose an efficient monocular camera localization method for prior LiDAR maps using direct 2D-3D line correspondences. To handle the appearance differences and modality gaps between LiDAR point clouds and images, geometric 3D lines are extracted offline from LiDAR maps, while robust 2D lines are extracted online from video sequences. With the pose prediction from VIO, we can efficiently obtain coarse 2D-3D line correspondences. Then the camera poses and 2D-3D correspondences are iteratively optimized by minimizing the projection error of the correspondences and rejecting outliers. Experimental results on the EurocMav dataset and our collected dataset demonstrate that the proposed method can efficiently estimate camera poses without accumulated drift or pose jumps in structured environments.
We review a recently proposed theory of random packings. We describe the volume fluctuations in jammed matter through a volume function, amenable to analytical and numerical calculations. We combine an extended statistical mechanics approach 'a la Edwards' (where the role traditionally played by the energy and temperature in thermal systems is played by the volume and compactivity) with a constraint on mechanical stability imposed by the isostatic condition. We show how such approaches can yield results that can be compared to experiments and allow for an exploitation of the statistical mechanics framework. The key result is a relation between the local Voronoi volume of the constituent grains and the number of neighbors in contact, which permits a simple combination of the two approaches into a theory of random packings. We predict the density of random loose packing (RLP) and random close packing (RCP) in close agreement with experiments and develop a phase diagram of jammed matter that provides a unifying view of the disordered hard-sphere packing problem and sheds further light on a diverse spectrum of data, including the RLP state. The theoretical results are well reproduced by numerical simulations, which confirm the essential role played by friction in determining both the RLP and RCP limits. Finally, we present an extended discussion of the existence of geometrical and mechanical coordination numbers and of how to measure both quantities in experiments and computer simulations.
Bayesian phylogenetic inference is currently done via Markov chain Monte Carlo (MCMC) with simple proposal mechanisms. This hinders exploration efficiency and often requires long runs to deliver accurate posterior estimates. In this paper, we present an alternative approach: a variational framework for Bayesian phylogenetic analysis. We propose combining subsplit Bayesian networks, an expressive graphical model for tree topology distributions, with a structured amortization of the branch lengths over tree topologies to obtain a suitable variational family of distributions. We train the variational approximation via stochastic gradient ascent and adopt separate gradient estimators for continuous and discrete variational parameters to deal with the composite latent space of phylogenetic models. We show that our variational approach is competitive with MCMC while requiring far fewer (though more costly) iterations, due to the more efficient exploration mechanism enabled by variational inference. Experiments on a benchmark of challenging real-data Bayesian phylogenetic inference problems demonstrate the effectiveness and efficiency of our methods.
Multi-hop reading comprehension (RC) questions are challenging because they require reading and reasoning over multiple paragraphs. We argue that it can be difficult to construct large multi-hop RC datasets. For example, even highly compositional questions can be answered with a single hop if they target specific entity types, or the facts needed to answer them are redundant. Our analysis is centered on HotpotQA, where we show that single-hop reasoning can solve much more of the dataset than previously thought. We introduce a single-hop BERT-based RC model that achieves 67 F1---comparable to state-of-the-art multi-hop models. We also design an evaluation setting where humans are not shown all of the necessary paragraphs for the intended multi-hop reasoning but can still answer over 80% of questions. Together with detailed error analysis, these results suggest there should be an increasing focus on the role of evidence in multi-hop reasoning and possibly even a shift towards information retrieval style evaluations with large and diverse evidence collections.
Precoded polar product codes are proposed, where selected component codes enable successive cancellation list decoding to generate bit-wise soft messages efficiently for iterative decoding while targeting optimized distance spectrum as opposed to eBCH or polar component codes. Rate compatibility is a byproduct of $1$-bit granularity in the component code design.
We develop a new approach to the production of spectator nucleons in heavy-ion collisions. The energy transfer to the spectator system is calculated using a Monte Carlo based on an updated version of our generator of configurations in colliding nuclei, which includes a realistic account of short-range correlations in nuclei. The transferred-energy distributions are calculated within the framework of Glauber multiple scattering theory, taking into account all individual inelastic and elastic collisions and using an independent realistic calculation of the potential energy contribution of each nucleon-nucleon pair to the total potential. We show that the dominant mechanism of the energy transfer is the tearing apart of nucleon pairs, with the major contribution coming from short-range correlations. We calculate the momentum distribution of the emitted nucleons, which is strongly affected by short-range correlations, including its dependence on the azimuthal angle. In particular, we predict a strong angular asymmetry along the direction of the impact parameter b, providing a unique opportunity to determine the direction of b. We also predict a strong dependence of the shape of the nucleon momentum distribution on the centrality of the nucleus-nucleus collision.
In this paper, we present an approach for combining non-rigid structure-from-motion (NRSfM) with deep generative models, and propose an efficient framework for discovering trajectories in the latent space of 2D GANs corresponding to changes in 3D geometry. Our approach uses recent advances in NRSfM and enables editing of the camera and non-rigid shape information associated with the latent codes without needing to retrain the generator. This formulation provides an implicit dense 3D reconstruction as it enables the image synthesis of novel shapes from arbitrary view angles and non-rigid structure. The method is built upon a sparse backbone, where a neural regressor is first trained to regress parameters describing the cameras and sparse non-rigid structure directly from the latent codes. The latent trajectories associated with changes in the camera and structure parameters are then identified by estimating the local inverse of the regressor in the neighborhood of a given latent code. The experiments show that our approach provides a versatile, systematic way to model, analyze, and edit the geometry and non-rigid structures of faces.
This work explores the dynamic properties of test particles surrounding a distorted, deformed compact object. The astrophysical motivation for choosing such a background is that it could constitute a more realistic model of the vicinity of compact objects, with additional parameters serving as extra physical degrees of freedom. This can facilitate associating observational data with astrophysical systems. The main goal of this work is to study the dynamic regime of motion and quasi-periodic oscillations in this background, depending on the different parameters of the system. We also examine the resonant phenomena of the radial and vertical oscillations at the observed quasi-periodic oscillation frequency ratio of 3:2.
Threshold behavior of the cross sections of ultraperipheral nuclear interactions is studied. Production of $e^+e^-$ and $\mu ^+\mu ^-$ pairs as well as $\pi ^0$ and parapositronium is treated. The values of corresponding energy thresholds are presented and the total cross sections of these processes at the newly constructed NICA and FAIR facilities are estimated.
We study how an autonomous agent learns to perform a task from demonstrations in a different domain, such as a different environment or different agent. Such cross-domain imitation learning is required to, for example, train an artificial agent from demonstrations of a human expert. We propose a scalable framework that enables cross-domain imitation learning without access to additional demonstrations or further domain knowledge. We jointly train the learner agent's policy and learn a mapping between the learner and expert domains with adversarial training. We effect this by using a mutual information criterion to find an embedding of the expert's state space that contains task-relevant information and is invariant to domain specifics. This step significantly simplifies estimating the mapping between the learner and expert domains and hence facilitates end-to-end learning. We demonstrate successful transfer of policies between considerably different domains, without extra supervision such as additional demonstrations, and in situations where other methods fail.
The Boltzmann constant was measured by comparing the Johnson noise of a resistor at the triple point of water with a quantum-based voltage reference signal generated with a superconducting Josephson-junction waveform synthesizer. The measured value of k = 1.380651(18) \times 10^-23 J/K is consistent with the current CODATA value within the combined uncertainties. This is our first measurement of k with this electronic technique, and the first noise thermometry measurement to achieve a relative combined uncertainty of 13 parts in 10^6. We describe the most recent improvements to our Johnson noise thermometer that enabled the statistical uncertainty contribution to be reduced to seven parts in 10^6, as well as the further reduction of spurious systematic errors and EMI effects. The uncertainty budget for this measurement is discussed in detail.
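For context, the measurement principle rests on the Johnson-Nyquist relation in its low-frequency limit (a schematic statement of the standard formula, not the paper's full uncertainty analysis): the mean-square noise voltage across a resistance $R$ at temperature $T$ in a bandwidth $\Delta f$ is

$$\langle V^2 \rangle = 4 k T R\, \Delta f \quad \Longrightarrow \quad k = \frac{\langle V^2 \rangle}{4\, T\, R\, \Delta f},$$

with the quantum-based Josephson reference fixing the voltage scale and $T$ fixed by the triple point of water.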
As a result of bad eating habits, human health is increasingly at risk. People are constantly on the lookout for tasty foods, with junk foods being the most common source. As a consequence, our eating patterns are shifting, and we are gravitating toward junk food more than ever, which is bad for our health and increases our risk of acquiring health problems. Machine learning principles are applied in every aspect of our lives, one of which is object recognition via image processing. However, because foods vary in nature, this task is challenging, and traditional methods such as ANN, SVM, KNN, and PLS result in low accuracy rates. These issues are overcome by deep neural networks. In this work, we created a fresh dataset of 10,000 data points across 20 junk food categories to try to recognize junk foods. All of the data in the dataset was gathered using the Google search engine and is, to our knowledge, unique. The goal was achieved using convolutional neural network (CNN) technology, which is well known for image processing. We achieved a 98.05% accuracy rate throughout the research, which is satisfactory. In addition, we conducted a test based on a real-life scenario, with excellent results. Our goal is to advance this research so that it may be applied in future studies; our ultimate aim is to create a system that encourages people to avoid eating junk food and to be health-conscious. Keywords: machine learning, junk food, object detection, YOLOv3, custom food dataset.
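Since the abstract names CNN technology without specifying an architecture, the following minimal PyTorch sketch shows a generic 20-class image classifier of the kind described; the layer sizes and input resolution are illustrative assumptions only, not the paper's model:

```python
import torch
import torch.nn as nn

class JunkFoodCNN(nn.Module):
    """Minimal CNN sketch for 20-class food-image classification."""
    def __init__(self, num_classes=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 224x224 input halved three times -> 28x28 feature maps
        self.classifier = nn.Linear(128 * 28 * 28, num_classes)

    def forward(self, x):                  # x: (batch, 3, 224, 224)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = JunkFoodCNN()
logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)                        # torch.Size([1, 20])
```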