To the best of our knowledge, the usual computer algebra packages cannot compute denumerants for almost medium (about a hundred digits) or almost medium--large (about a thousand digits) input data at a reasonable time cost on an ordinary computer. Implemented algorithms can manage numerical $n$--semigroups only for small input data. Here we are interested in denumerants of numerical $3$--semigroups which have almost medium input data. A new algorithm for computing denumerants is given for this task. It can manage almost medium input data in the worst case and medium--large or even large input data in some cases.
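For reference, a denumerant simply counts the representations of an integer as a non-negative combination of the generators. The following brute-force Python sketch (an illustration of the problem, not the algorithm proposed above, which targets inputs of a hundred digits or more) makes the definition concrete.

def denumerant(a, b, c, n):
    # Count solutions (x, y, z) in non-negative integers of a*x + b*y + c*z = n.
    # Brute-force reference only; practical solely for small inputs, unlike the
    # algorithm described in the abstract.
    count = 0
    for x in range(n // a + 1):
        rem_x = n - a * x
        for y in range(rem_x // b + 1):
            if (rem_x - b * y) % c == 0:
                count += 1
    return count

# Example: the numerical 3-semigroup generated by (5, 7, 11).
print(denumerant(5, 7, 11, 100))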
We investigate the critical behavior that d-dimensional systems with short-range forces and an n-component order parameter exhibit at Lifshitz points whose wave-vector instability occurs in an m-dimensional isotropic subspace of ${\mathbb R}^d$. Utilizing dimensional regularization and minimal subtraction of poles in $d=4+{m\over 2}-\epsilon$ dimensions, we carry out a two-loop renormalization-group (RG) analysis of the field-theory models representing the corresponding universality classes. This gives the beta function $\beta_u(u)$ to third order, and the required renormalization factors as well as the associated RG exponent functions to second order, in u. The coefficients of these series are reduced to m-dependent expressions involving single integrals, which for general (not necessarily integer) values of $m\in (0,8)$ can be computed numerically, and for special values of m analytically. The $\epsilon$ expansions of the critical exponents $\eta_{l2}$, $\eta_{l4}$, $\nu_{l2}$, $\nu_{l4}$, the wave-vector exponent $\beta_q$, and the correction-to-scaling exponent are obtained to order $\epsilon^2$. These are used to estimate their values for d=3. The obtained series expansions are shown to encompass both isotropic limits m=0 and m=d.
We propose an algorithm to denoise speakers from a single microphone in the presence of non-stationary and dynamic noise. Our approach is inspired by the recent success of neural network models separating speakers from other speakers and singers from instrumental accompaniment. Unlike prior art, we leverage embedding spaces produced with source-contrastive estimation, a technique derived from negative sampling techniques in natural language processing, while simultaneously obtaining a continuous inference mask. Our embedding space directly optimizes for the discrimination of speaker and noise by jointly modeling their characteristics. This space is generalizable in that it is not speaker or noise specific and is capable of denoising speech even if the model has not seen the speaker in the training set. Parameters are trained with dual objectives: one that promotes a selective bandpass filter that eliminates noise at time-frequency positions that exceed signal power, and another that proportionally splits time-frequency content between signal and noise. We compare to state of the art algorithms as well as traditional sparse non-negative matrix factorization solutions. The resulting algorithm avoids severe computational burden by providing a more intuitive and easily optimized approach, while achieving competitive accuracy.
Raman scattering cross sections depend on photon polarization. In the cuprates, nodal and antinodal directions are weighted more strongly in $B_{2g}$ and $B_{1g}$ symmetry, respectively. On the other hand, in angle-resolved photoemission spectroscopy (ARPES), electronic properties are measured along well-defined directions in momentum space rather than their weighted averages. In contrast, the optical conductivity involves a momentum average over the entire Brillouin zone. Newly measured Raman response data on high-quality Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ single crystals up to high energies have been inverted using a modified maximum entropy inversion technique to extract from the $B_{1g}$ and $B_{2g}$ Raman data the corresponding electron-boson spectral densities (glue), which are compared to the results obtained with known ARPES and optical inversions. We find that the $B_{2g}$ spectrum agrees qualitatively with nodal direction ARPES while the $B_{1g}$ looks more like the optical spectrum. A large peak around $30 - 40\,$meV in $B_{1g}$, much less prominent in $B_{2g}$, is taken as support for the importance of $(\pi,\pi)$ scattering at this frequency.
We study several optimal stopping problems that arise from trading a mean-reverting price spread over a finite horizon. Modeling the spread by the Ornstein-Uhlenbeck process, we analyze three different trading strategies: (i) the long-short strategy; (ii) the short-long strategy; and (iii) the chooser strategy, i.e. the trader can enter into the spread by taking either a long or short position. In each of these cases, we solve an optimal double stopping problem to determine the optimal timing for starting and subsequently closing the position. We utilize the local time-space calculus of Peskir (2005a) and derive the nonlinear integral equations of Volterra-type that uniquely characterize the boundaries associated with the optimal timing decisions in all three problems. These integral equations are used to numerically compute the optimal boundaries.
Recently the media broadcast the news, together with illustrative videos, of a so-called Japanese method to perform multiplication by hand without using the multiplication tables. "Goodbye multiplication tables" was the headline of several websites, including important ones, where news items are, however, too often `re-posted' uncritically. The easy numerical examples could lead naive internet users to believe that, in the near future, multiplication could really be done without knowledge of the multiplication tables. This is what a girl expresses, with great enthusiasm, to her father. The dialogues described here, although not real, are plausible and were inspired by this episode, Maddalena being the author's daughter. Obviously, the revolutionary value of the new method is easily taken apart, while its educational utility is highlighted in order to show (or recall) the reasoning on which the method learned in elementary school is based, even though it is mostly applied mechanically.
We study the phase time for various quantum mechanical networks having potential barriers in their arms to find the generic presence of the Hartman effect. In such systems it is possible to control the `super arrival' time in one of the arms by changing parameters on another arm, spatially separated from it. This is yet another quantum nonlocal effect. Negative time delays (time advancement) and an `ultra Hartman effect' with negative saturation times have been observed in some parameter regimes.
The phase transition between charge- and spin-density-wave (CDW, SDW) phases is studied in the one-dimensional extended Hubbard model at half-filling. We discuss whether the transition can be described by the Gaussian and the spin-gap transitions under charge-spin separation, or by a direct CDW-SDW transition. We determine these phase boundaries by level crossings of excitation spectra which are identified according to discrete symmetries of wave functions. We conclude that the Gaussian and the spin-gap transitions take place separately in the weak- to intermediate-coupling region. This means that a third phase exists between the CDW and the SDW states. Our results are also consistent with those of the strong-coupling perturbative expansion and of the direct evaluation of order parameters.
We show that there is an unexpected relation between free divisors and stability and coincidence thresholds for projective hypersurfaces.
In Galaxy And Mass Assembly Data Release 4 (GAMA DR4), we make available our full spectroscopic redshift sample. This includes 248682 galaxy spectra, and, in combination with earlier surveys, results in 330542 redshifts across five sky regions covering ~250deg^2. The redshift density is the highest available over such a sustained area, has exceptionally high completeness (95 per cent to r_KIDS=19.65mag), and is well suited for the study of galaxy mergers, galaxy groups, and the low redshift (z<0.25) galaxy population. DR4 includes 32 value-added tables or Data Management Units (DMUs) that provide a number of measured and derived data products including GALEX, ESO KiDS, ESO VIKING, WISE and Herschel Space Observatory imaging. Within this release, we provide visual morphologies for 15330 galaxies to z<0.08, photometric redshift estimates for all 18 million objects to r_KIDS~25mag, and stellar velocity dispersions for 111830 galaxies. We conclude by deriving the total galaxy stellar mass function (GSMF) and its sub-division by morphological class (elliptical, compact-bulge and disc, diffuse-bulge and disc, and disc only). This extends our previous measurement of the total GSMF down to 10^6.75 M_sol h^-2_70 and we find a total stellar mass density of rho_*=(2.97+/-0.04)x10^8 M_sol h_70 Mpc^-3 or Omega_*=(2.17+/-0.03)x10^-3 h^-1_70. We conclude that at z<0.1, the Universe has converted 4.9+/-0.1 per cent of the baryonic mass implied by Big Bang Nucleosynthesis into stars that are gravitationally bound within the galaxy population.
We establish the local in time well-posedness of strong solutions to the vacuum free boundary problem of the compressible Navier-Stokes-Poisson system for spherically symmetric and isentropic motion. Our result captures the physical vacuum boundary behavior of the Lane-Emden star configurations for all adiabatic exponents $\gamma>{6/5}$.
We argue that extra dimensions with a properly chosen compactification scheme could be a natural source for emergent gauge symmetries. Indeed, certain proposed vector field potential terms or polynomial vector field constraints introduced in five-dimensional Abelian and non-Abelian gauge theory are shown to lead smoothly to spontaneous violation of an underlying 5D spacetime symmetry and to generate pseudo-Goldstone vector modes as conventional 4D gauge boson candidates. As a special signature, there appear, apart from conventional gauge couplings, some properly suppressed direct multi-photon (multi-boson, in general) interactions in emergent QED and Yang-Mills theories whose observation could shed light on their high-dimensional nature. Moreover, in emergent Yang-Mills theories an internal symmetry G also turns out to be spontaneously broken to its diagonal subgroups once 5D Lorentz violation happens. This breaking originates from the extra vector field components playing the role of an adjoint scalar field multiplet in the 4D spacetime. So, one naturally has the Higgs effect without a specially introduced scalar field multiplet. Remarkably, when applied to Grand Unified Theories this results in the fact that the emergent GUTs generically appear broken down to the Standard Model just at the 5D Lorentz violation scale M. PACS numbers: 11.15.-q, 11.30.Cp, 11.30.Pb, 11.10.Kk
We argue that (first-order) coherence is a relative, and not an absolute, property. It is shown how feedforward or feedback can be employed to make two (or more) lasers relatively coherent. We also show that after the relative coherence is established, the two lasers will stay relatively coherent for some time even if the feedforward or feedback loop has been turned off, enabling, e.g., demonstration of unconditional quantum teleportation using lasers.
In this paper we propose a thermodynamically consistent model for superfluid-normal phase transition in liquid helium, accounting for variations of temperature and density. The phase transition is described by means of an order parameter, according to the Ginzburg-Landau theory, emphasizing the analogies between superfluidity and superconductivity. The normal component of the velocity is assumed to be compressible and the usual phase diagram of liquid helium is recovered. Moreover, the continuity equation leads to a dependence between density and temperature in agreement with the experimental data.
In this work, we propose an adversarial attack-based data augmentation method to improve the deep-learning-based segmentation algorithm for the delineation of Organs-At-Risk (OAR) in abdominal Computed Tomography (CT) to facilitate radiation therapy. We introduce Adversarial Feature Attack for Medical Image (AFA-MI) augmentation, which forces the segmentation network to learn out-of-distribution statistics and improves generalization and robustness to noise. AFA-MI augmentation consists of three steps: 1) generate adversarial noises by the Fast Gradient Sign Method (FGSM) on the intermediate features of the segmentation network's encoder; 2) inject the generated adversarial noises into the network, intentionally compromising performance; 3) optimize the network with both clean and adversarial features. Experiments are conducted on segmenting the heart, left and right kidney, liver, left and right lung, spinal cord, and stomach. We first evaluate the AFA-MI augmentation using nnUnet and TT-Vnet on the test data from a public abdominal dataset and an institutional dataset. In addition, we validate how AFA-MI affects the networks' robustness to noisy data by evaluating the networks with Gaussian noise of varying magnitudes added to the institutional dataset. Network performance is quantitatively evaluated using the Dice Similarity Coefficient (DSC) for volume-based accuracy. Also, the Hausdorff Distance (HD) is applied for surface-based accuracy. On the public dataset, nnUnet with AFA-MI achieves DSC = 0.85 and HD = 6.16 millimeters (mm); and TT-Vnet achieves DSC = 0.86 and HD = 5.62 mm. AFA-MI augmentation further improves all contour accuracies by up to 0.217 in DSC when tested on images with Gaussian noise. AFA-MI augmentation is therefore demonstrated to improve segmentation performance and robustness in CT multi-organ segmentation.
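As a rough illustration of step 1 above, the following PyTorch sketch applies an FGSM-style perturbation to intermediate encoder features; the epsilon value, the encoder/decoder split, and the loss function are illustrative placeholders rather than the authors' exact configuration.

import torch

def fgsm_feature_noise(features, decoder, target, loss_fn, epsilon=0.01):
    # Step 1 (sketch): perturb intermediate features in the direction that
    # increases the segmentation loss the most, using the sign of the gradient.
    feats = features.detach().requires_grad_(True)
    loss = loss_fn(decoder(feats), target)   # loss on clean features
    loss.backward()
    noise = epsilon * feats.grad.sign()      # FGSM perturbation
    return (feats + noise).detach()          # adversarial features for steps 2-3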
Automated scientific fact checking is difficult due to the complexity of scientific language and a lack of significant amounts of training data, as annotation requires domain expertise. To address this challenge, we propose scientific claim generation, the task of generating one or more atomic and verifiable claims from scientific sentences, and demonstrate its usefulness in zero-shot fact checking for biomedical claims. We propose CLAIMGEN-BART, a new supervised method for generating claims supported by the literature, as well as KBIN, a novel method for generating claim negations. Additionally, we adapt an existing unsupervised entity-centric method of claim generation to biomedical claims, which we call CLAIMGEN-ENTITY. Experiments on zero-shot fact checking demonstrate that both CLAIMGEN-ENTITY and CLAIMGEN-BART, coupled with KBIN, achieve up to 90% of the performance of fully supervised models trained on manually annotated claims and evidence. A rigorous evaluation study demonstrates significant improvement in generated claim and negation quality over existing baselines.
We introduce a message-passing-neural-network-based wave function Ansatz to simulate extended, strongly interacting fermions in continuous space. Symmetry constraints, such as continuous translation symmetries, can be readily embedded in the model. We demonstrate its accuracy by simulating the ground state of the homogeneous electron gas in three spatial dimensions at different densities and system sizes. With orders of magnitude fewer parameters than state-of-the-art neural-network wave functions, we demonstrate better or comparable ground-state energies. Reducing the parameter complexity allows scaling to $N=128$ electrons, previously inaccessible to neural-network wave functions in continuous space, enabling future work on finite-size extrapolations to the thermodynamic limit. We also show the Ansatz's capability of quantitatively representing different phases of matter.
Narrow carbon nanotubes (CNTs) desalinate water, mimicking water channels of biological membranes, yet the physics behind selectivity, especially the effect of the membrane embedding CNTs on water and ion transfer, is still unclear. Here, we report an $ab$ $initio$ analysis of the energies involved in transfer of water and K$^+$ and Cl$^-$ ions from solution to empty and water-filled 0.68 nm CNTs, for different dielectric constants $\epsilon$ of the surrounding matrix. The transfer energies computed for $1 \leq \epsilon < \infty$ permit a transparent breakdown of the transfer energy to three main contributions: binding to CNT, intra-CNT hydration, and dielectric polarization of the matrix. The latter scales inversely with $\epsilon$ and is of the order $10^2$/$\epsilon$ kJ/mol for both ions, which may change ion transfer from favorable to unfavorable, depending on ion, $\epsilon$, and CNT diameter. This may have broad implications for designing and tuning selectivity of nanochannel-based devices.
We employ the recent results on the generalization of the $c$-theorem to 2+1-d to derive non-perturbative results for strongly interacting quantum field theories, including QED-3 and the critical theory corresponding to certain quantum phase transitions in condensed matter systems. In particular, by demanding that the universal constant part of the entanglement entropy decreases along the renormalization group flow ("F-theorem"), we find bounds on the number of flavors of fermions required for the stability of QED-3 against chiral symmetry breaking and confinement. In this context, the exact results known for the entanglement of superconformal field theories turn out to be quite useful. Furthermore, the universal number corresponding to the ratio of the entanglement entropy of a free Dirac fermion to that of free scalar plays an interesting role in the bounds derived. Using similar ideas, we also derive strong constraints on the nature of quantum critical points in condensed matter systems with "topological order".
Raman spectroscopy is used to study the distribution of molecules in ternary mixed crystals of p-dibromobenzene, p-dichlorobenzene, and p-bromochlorobenzene. It is shown that the mutual concentration of the components depends on the growth conditions. Both a uniform variation of the concentration of all components along a specimen and a wave-like variation of the concentration of two of the substances are possible.
Transistor-based memories are rapidly approaching their maximum density per unit area. Resistive crossbar arrays enable denser memory due to the small size of switching devices. However, due to the resistive nature of these memories, they suffer from current sneak paths complicating the readout procedure. In this paper, we propose a row readout technique with circuitry that can be used to read selector-less resistive crossbar based memories. High throughput reading and writing techniques are needed to overcome the memory-wall bottleneck problem and to enable the near-memory computing paradigm. The proposed technique can read an entire row of dense crossbar arrays in one cycle, unlike previously published techniques. The requirements for the readout circuitry are discussed and satisfied in the proposed circuit. Additionally, an approximate expression for the power consumed while reading the array is derived. A figure of merit is defined and used to compare the proposed approach with existing reading techniques. Finally, a quantitative analysis of the effect of biasing mismatch on the array size is discussed.
We determine the integral cohomology ring of the homogeneous space E_8/T^1E_7 by the Borel presentation and a method due to Toda. Then using the Gysin exact sequence associated with the circle bundle S^1 -> E_8/E_7 -> E_8/T^1E_7, we also determine the integral cohomology of E_8/E_7.
Polymer stretching in random smooth flows is investigated within the framework of the FENE dumbbell model. The advecting flow is Gaussian and short-correlated in time. The stationary probability density function of polymer extension is derived exactly. The characteristic time needed for the system to attain the stationary regime is computed as a function of the Weissenberg number and the maximum length of polymers. The transient relaxation to the stationary regime is predicted to be exceptionally slow in the proximity of the coil-stretch transition.
We classify extremal traces on the seven direct limit algebras of noncrossing partitions arising from the classification of free partition quantum groups of Banica-Speicher (arXiv:0808.2628) and Weber (arXiv:1201.4723). For the infinite-dimensional Temperley-Lieb algebra (corresponding to the quantum group $O^+_N$) and the Motzkin algebra ($B^+_N$), the classification of extremal traces implies a classification result for well-known types of central random lattice paths. For the $2$-Fuss-Catalan algebra ($H_N^+$) we solve the classification problem by computing the \emph{minimal or exit boundary} (also known as the \emph{absolute}) for central random walks on the Fibonacci tree, thereby solving a probabilistic problem of independent interest, and to our knowledge the first such result for a nonhomogeneous tree. In the course of this article, we also discuss the branching graphs for all seven examples of free partition quantum groups, compute those that were not already known, and provide new formulas for the dimensions of their irreducible representations.
The F5 IV-V star Procyon A (alpha CMi) was observed in January 2001 by means of the high resolution spectrograph SARG operating with the TNG 3.5m Italian telescope (Telescopio Nazionale Galileo) at the Canary Islands, exploiting the iodine cell technique. The time-series of about 950 spectra carried out during 6 observation nights and a preliminary data analysis were presented in Claudi et al. 2005. These measurements showed a significant excess of power between 0.5 and 1.5 mHz, with ~ 1 m/s peak amplitude. Here we present a more detailed analysis of the time-series, based on both radial velocity and line equivalent width analyses. From the power spectrum we found a typical p-mode frequency comb-like structure, identified with a good margin of certainty 11 frequencies in the interval 0.5-1.4 mHz of modes with l=0,1,2 and 7< n < 22, and determined the large and small frequency separations, Dnu = 55.90 \pm 0.08 muHz and dnu_02 = 7.1 \pm 1.3 muHz, respectively. The mean amplitude per mode (l=0,1) at peak power turns out to be 0.45 \pm 0.07 m/s, twice as large as the solar one, and the mode lifetime is 2 \pm 0.4 d, which indicates a non-coherent, stochastic source of mode excitation. Line equivalent width measurements do not show a significant excess of power in the examined spectral region but allowed us to infer an upper limit on the granulation noise.
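For readers unfamiliar with the seismic observables quoted above, the large and small frequency separations follow the standard asteroseismic definitions (stated here for orientation, not taken from the paper): $\Delta\nu = \nu_{n+1,\ell} - \nu_{n,\ell}$ and $\delta\nu_{02} = \nu_{n,0} - \nu_{n-1,2}$, which for Procyon A take the values $\Delta\nu \simeq 55.90\,\mu$Hz and $\delta\nu_{02} \simeq 7.1\,\mu$Hz reported above.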
De-Rating or Vulnerability Factors are a major feature of failure analysis efforts mandated by today's Functional Safety requirements. Determining the Functional De-Rating of sequential logic cells typically requires computationally intensive fault-injection simulation campaigns. In this paper a new approach is proposed which uses Machine Learning to estimate the Functional De-Rating of individual flip-flops and thus optimise and enhance fault injection efforts. To this end, first, a set of per-instance features is described and extracted through an analysis approach combining static elements (cell properties, circuit structure, synthesis attributes) and dynamic elements (signal activity). Second, reference data is obtained through first-principles fault simulation approaches. Finally, one part of the reference dataset is used to train the Machine Learning algorithm and the remainder is used to validate and benchmark the accuracy of the trained tool. The intended goal is to obtain a trained model able to provide accurate per-instance Functional De-Rating data for the full list of circuit instances, an objective that is difficult to reach using classical methods. The presented methodology is accompanied by a practical example to determine the performance of various Machine Learning models for different training set sizes.
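A minimal sketch of the train/validate flow described above, using scikit-learn; the file name, feature columns, and choice of regressor are hypothetical placeholders, since the paper benchmarks several Machine Learning models.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Per-flip-flop features (static cell/netlist properties plus signal activity)
# and reference Functional De-Rating values from fault-injection simulation.
data = pd.read_csv("flipflop_features.csv")      # hypothetical file
X = data.drop(columns=["functional_derating"])
y = data["functional_derating"]

# One part of the reference dataset trains the model, the rest benchmarks it.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("MAE on held-out flip-flops:", mean_absolute_error(y_test, model.predict(X_test)))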
Small black holes in string theory are characterized by a classically singular horizon with vanishing Bekenstein-Hawking entropy. It has been argued that higher-curvature corrections resolve the horizon and that the associated Wald entropy is in agreement with the microscopic degeneracy. In this note we study the heterotic two-charge small black hole and question this result, which we claim is caused by a misidentification of the fundamental constituents of the system studied when higher-curvature interactions are present. On the one hand, we show that quadratic curvature corrections do not solve the singular horizon of small black holes. On the other, we argue that the resolution of the heterotic small black hole reported in the literature involves the introduction of solitonic 5-branes, whose asymptotic charge vanishes due to a screening effect induced by the higher-curvature interactions, and a Kaluza-Klein monopole, whose charge remains unscreened.
The BeppoSAX High Energy Large Area Survey (HELLAS) has surveyed several tens of square degrees of the sky in the 5--10 keV band down to a flux of about 5x10^-14 erg cm^-2 s^-1. The extrapolation of the HELLAS logN--logS towards fainter fluxes with a Euclidean slope is consistent with the first XMM measurements, in the same energy band, which are a factor 20 more sensitive. The source counts in the hardest band so far surveyed by X-ray satellites are used to constrain XRB models. It is shown that in order to reproduce the 5--10 keV counts over the range of fluxes covered by BeppoSAX and XMM a large fraction of highly absorbed (logN_H = 23--24 cm^-2), luminous (L_X > 10^44 erg s^-1) AGN is needed. A sizeable number of more heavily obscured, Compton thick, objects cannot be ruled out but is not required by the present data. The model predicts an absorption distribution consistent with that found from the hardness ratios analysis of the HELLAS sources identified so far. Interestingly enough, there is evidence of a decoupling between X-ray absorption and optical reddening indicators, especially at high redshifts/luminosities where several broad line quasars show hardness ratios typical of absorbed power law models with logN_H = 22--24 cm^-2.
The renormalization group (RG) constitutes a fundamental framework in modern theoretical physics. It allows the study of many systems showing states with large-scale correlations and their classification in a relatively small set of universality classes. RG is the most powerful tool for investigating organizational scales within dynamic systems. However, the application of RG techniques to complex networks has presented significant challenges, primarily due to the intricate interplay of correlations on multiple scales. Existing approaches have relied on hypotheses involving hidden geometries, based on embedding complex networks into hidden metric spaces. Here, we present a practical overview of the recently introduced Laplacian Renormalization Group (LRG) for heterogeneous networks. First, we present a brief overview that justifies the use of the Laplacian as a natural extension of well-known field theories to analyze spatial disorder. We then draw an analogy to traditional real-space renormalization group procedures, explaining how the LRG generalizes the concept of "Kadanoff supernodes" as block nodes that span multiple scales. These supernodes help mitigate the effects of cross-scale correlations due to small-world properties. Additionally, we rigorously define the LRG procedure in momentum space in the spirit of Wilson RG. Finally, we show different analyses for the evolution of network properties along the LRG flow following structural changes when the network is properly reduced.
We calculate the form factors for the baryon number violation processes of a heavy-flavor baryon decaying into a pseudoscalar meson and a lepton. In the framework of the Standard Model effective field theory, the leptoquark operators at the bottom quark scale, whose matrix elements define the form factors, are derived by integrating out the high energy physics. Under the QCD factorization approach, the form factors of the baryon number violation processes at leading power can be factorized into the convolution of the long-distance hadron wave functions as well as the short-distance hard and jet functions representing the hard scale and hard-collinear scale effects, separately. Based on measurements of the baryon number violation processes by LHCb, we further impose constraints on the new physics constants of leptoquark operators.
We consider a class of stochastic processes modeling binary interactions in an N-particle system. Examples of such systems can be found in the modeling of biological swarms. They lead to the definition of a class of master equations that we call pair interaction driven master equations. We prove a propagation of chaos result for this class of master equations which generalizes Mark Kac's well-known result for the Kac model in kinetic theory. We use this result to study kinetic limits for two biological swarm models. We show that propagation of chaos may be lost at large times and we exhibit an example where the invariant density is not chaotic.
The unprecedented center-of-mass energy available at the LHC offers unique opportunities for studying the properties of the strongly-interacting QCD matter created in PbPb collisions at extreme temperatures and very low parton momentum fractions. Electroweak boson production is an important benchmark process at hadron colliders. Precise measurements of Z production in heavy-ion collisions can help to constrain nuclear PDFs as well as serve as a standard candle of the initial state in PbPb collisions at the LHC energies. The inclusive and differential measurements of the Z boson yield in the muon decay channel will be presented, establishing that no modification is observed with respect to next-to-leading order pQCD calculations, scaled by the number of incoherent nucleon-nucleon collisions. The status of the Z measurement in the electron decay channel, as well as the first observation of W \rightarrow \mu\nu in heavy-ion collisions, will be given. The heavy-ion results will be presented in the context of those obtained in pp collisions with the CMS detector.
In this note we will show a Calder\'on--Zygmund decomposition associated with a function $f\in L^1(\mathbb{T}^{\omega})$. The idea relies on an adaptation of a more general result by J. L. Rubio de Francia in the setting of locally compact groups. Some related results about differentiation of integrals on the infinite-dimensional torus are also discussed.
Bootstrapping is a core mechanism in Reinforcement Learning (RL). Most algorithms, based on temporal differences, replace the true value of a transiting state by their current estimate of this value. Yet, another estimate could be leveraged to bootstrap RL: the current policy. Our core contribution stands in a very simple idea: adding the scaled log-policy to the immediate reward. We show that slightly modifying Deep Q-Network (DQN) in that way provides an agent that is competitive with distributional methods on Atari games, without making use of distributional RL, n-step returns or prioritized replay. To demonstrate the versatility of this idea, we also use it together with an Implicit Quantile Network (IQN). The resulting agent outperforms Rainbow on Atari, establishing a new State of the Art with very few modifications to the original algorithm. To add to this empirical study, we provide strong theoretical insights on what happens under the hood -- implicit Kullback-Leibler regularization and an increase of the action-gap.
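The core idea, adding the scaled log-policy to the immediate reward before bootstrapping, can be sketched as a small change to the DQN target; in the sketch below the policy is taken as a softmax of the online Q-values, and the temperature tau and scale alpha are illustrative values rather than the published configuration.

import torch
import torch.nn.functional as F

def modified_dqn_target(q_net, target_net, s, a, r, s_next, done,
                        gamma=0.99, tau=0.03, alpha=0.9):
    # DQN target with the scaled log-policy of the taken action added to the reward.
    with torch.no_grad():
        log_pi = F.log_softmax(q_net(s) / tau, dim=-1)            # softmax policy estimate
        bonus = alpha * tau * log_pi.gather(1, a.unsqueeze(1)).squeeze(1)
        next_q = target_net(s_next).max(dim=1).values             # usual DQN bootstrap
        return r + bonus + gamma * (1.0 - done) * next_q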
Potential Based Reward Shaping combined with a potential function based on appropriately defined abstract knowledge has been shown to significantly improve learning speed in Reinforcement Learning. MultiGrid Reinforcement Learning (MRL) has further shown that such abstract knowledge in the form of a potential function can be learned almost solely from agent interaction with the environment. However, we show that MRL faces the problem of not extending well to work with Deep Learning. In this paper we extend and improve MRL to take advantage of modern Deep Learning algorithms such as Deep Q-Networks (DQN). We show that DQN augmented with our approach performs significantly better on continuous control tasks than its Vanilla counterpart and DQN augmented with MRL.
This dissertation studies the quantum anomalous effects on the description of high energy electrodynamics. We argue that at temperatures comparable to the electroweak scale, characteristic of the early Universe and objects like neutron stars, the description of electromagnetic fields in conductive plasmas needs to be extended to include the effects of the chiral anomaly. It is demonstrated that chiral effects can have a significant influence on the evolution of magnetic fields, tending to produce exponential amplification, creation of magnetic helicity from initially non-helical fields, and can lead to an inverse energy transfer. We further discuss the modified magnetohydrodynamic equations around the electroweak transition. The obtained solutions demonstrate that the asymmetry between right-handed and left-handed charged fermions of negligible mass typically grows with time when approaching the electroweak crossover from higher temperatures, until it undergoes a fast decrease at the transition, and then eventually gets damped at lower temperatures in the broken phase. At the same time, the dissipation of magnetic fields gets slower due to the chiral effects. We furthermore report some first analytical attempts in the study of chiral magnetohydrodynamic turbulence. Using the analysis of simplified regimes and qualitative arguments, it is shown that anomalous effects can strongly support turbulent inverse cascade and lead to a faster growth of the correlation length, when compared to the evolution predicted by the non-chiral magnetohydrodynamics. Finally, the discussion of relaxation towards minimal energy states in the chiral magnetohydrodynamic turbulence is also presented.
For a smooth (locally trivial) principal bundle in Ehresmann's sense, the relation between the commuting vertical and horizontal actions of the structural Lie group and the structural Lie groupoid (isomorphisms between vertical fibers) is regarded as a special case of a symmetrical concept of conjugation between "principal" Lie groupoid actions, allowing possibly non-locally trivial bundles. A diagrammatic description of this concept via a symmetric "butterfly diagram" allows its "internalization" in a wide class of categories (used by "working mathematicians") whenever they are endowed with two distinguished classes of monomorphisms and epimorphisms mimicking the properties of embeddings and surjective submersions. As an application, a general theorem of "universal activation" encompasses in a unified way such various situations as Palais' theory of globalization for partial action laws, the realization of non-abelian cocycles (including Haefliger cocycles for foliations) or the description of the "homogeneous space" attached to an embedding of Lie groups (still valid for Lie groupoids).
The last two decades have seen tremendous growth in data collections because of the adoption of recent technologies, including the internet of things (IoT), E-Health, industrial IoT 4.0, autonomous vehicles, etc. The challenge of data transmission and storage can be handled by utilizing state-of-the-art data compression methods. Recent data compression methods based on deep learning perform better than conventional methods. However, these methods require a lot of data and resources for training. Furthermore, it is difficult to deploy these deep learning-based solutions on IoT devices due to their resource-constrained nature. In this paper, we propose lightweight data compression methods based on data statistics and deviation. The proposed method performs better than the deep learning method in terms of compression ratio (CR). We simulate and compare the proposed data compression methods for various time series signals, e.g., accelerometer, gas sensor, gyroscope, electrical power consumption, etc. In particular, it is observed that the proposed method achieves 250.8\%, 94.3\%, and 205\% higher CR than the deep learning method for the GYS, Gactive, and ACM datasets, respectively. The code and data are available at https://github.com/vidhi0206/data-compression .
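Since the abstract does not spell out the encoder, the following Python sketch is only a hypothetical illustration of a deviation-based scheme of the lightweight kind described (a sample is stored only when it deviates from the last stored value by more than a threshold), together with the compression-ratio metric used above.

import numpy as np

def deviation_encode(signal, threshold):
    # Hypothetical deviation-based encoder: keep (index, value) pairs only when
    # the sample deviates from the last kept value by more than `threshold`.
    kept = [(0, float(signal[0]))]
    for i, x in enumerate(signal[1:], start=1):
        if abs(x - kept[-1][1]) > threshold:
            kept.append((i, float(x)))
    return kept

def compression_ratio(n_original, n_kept):
    # CR = original size / compressed size (each kept sample stores index + value).
    return n_original / (2 * n_kept)

signal = np.sin(np.linspace(0, 20, 2000)) + 0.01 * np.random.randn(2000)
kept = deviation_encode(signal, threshold=0.05)
print("CR =", round(compression_ratio(signal.size, len(kept)), 2))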
Boundary conditions may change the phase diagram of non-equilibrium statistical systems like the one-dimensional asymmetric simple exclusion process with and without particle number conservation. Using the quantum Hamiltonian approach, the model is mapped onto an XXZ quantum chain and solved using the Bethe ansatz. This system is related to a two-dimensional vertex model in thermal equilibrium. The phase transition caused by a point-like boundary defect in the dynamics of the one-dimensional exclusion model is in the same universality class as a continuous (bulk) phase transition of the two-dimensional vertex model caused by a line defect at its boundary. (hep-th/yymmnnn)
Cubic sevenfolds are examples of Fano manifolds of Calabi-Yau type. We study them in relation with the Cartan cubic, the $E_6$-invariant cubic in $\mathbb{P}^{26}$. We show that a generic cubic sevenfold $X$ can be described as a linear section of the Cartan cubic, in finitely many ways. To each such "Cartan representation" we associate a rank nine vector bundle on $X$ with very special cohomological properties. In particular, it allows one to define auto-equivalences of the non-commutative Calabi-Yau threefold associated to $X$ by Kuznetsov. Finally we show that the generic eight dimensional section of the Cartan cubic is rational.
We analyse the spatial and temporal coherence properties of a two-dimensional and finite sized polariton condensate with parameters tailored to the recent experiments which have shown spontaneous and thermal equilibrium polariton condensation in a CdTe microcavity [J. Kasprzak, M. Richard, S. Kundermann, A. Baas, P. Jeambrun, J.M.J. Keeling, F.M. Marchetti, M.H. Szymanska, R. Andre, J.L. Staehli, et al., Nature 443 (7110) (2006) 409]. We obtain a theoretical estimate of the thermal length, the lengthscale over which full coherence effectively exists (and beyond which power-law decay of correlations in a two-dimensional condensate occurs), of the order of 5 micrometers. In addition, the exponential decay of temporal coherence predicted for a finite size system is consistent with that found in the experiment. From our analysis of the luminescence spectra of the polariton condensate, taking into account pumping and decay, we obtain a dispersionless region at small momenta of the order of 4 degrees. In addition, we determine the polariton linewidth as a function of the pump power. Finally, we discuss how, by increasing the exciton-photon detuning, it is in principle possible to move the threshold for condensation from a region of the phase diagram where polaritons can be described as a weakly interacting Bose gas to a region where instead the composite nature of polaritons becomes important.
The fermion flavor $N_f$ dependence of non-perturbative solutions in the strong coupling phase of the gauge theory is reexamined based on the interrelation between the inversion method and the Schwinger-Dyson equation approach. Especially we point out that the apparent discrepancy on the value of the critical coupling in QED will be resolved by taking into account the higher order corrections which inevitably lead to the flavor-dependence. In the quenched QED, we conclude that the gauge-independent critical point $\alpha_c=2\pi/3$ obtained by the inversion method to the lowest order will be reduced to the result $\alpha_c=\pi/3$ of the Schwinger-Dyson equation in the infinite order limit, but its convergence is quite slow. This is shown by adding the chiral-invariant four-fermion interaction.
The assumption that the vacuum is the minimum energy state, invariant under unitary transformations, is fundamental to quantum field theory. However, the assertion that the conservation of charge implies that the equal time commutator of the charge density and its time derivative vanish for two spatially separated points is inconsistent with the requirement that the vacuum be the lowest energy state. Yet, for quantum field theory to be gauge invariant, this commutator must vanish. This essay explores how this conundrum is resolved in quantum electrodynamics.
Deterministically integrating single solid-state quantum emitters with photonic nanostructures serves as a key enabling resource in the context of photonic quantum technology. Due to the random spatial location of many widely-used solid-state quantum emitters, a number of positioning approaches for locating the quantum emitters before nanofabrication have been explored in the last decade. Here, we review the working principles of several nanoscale positioning methods and the most recent progress in this field, covering techniques including atomic force microscopy, scanning electron microscopy, confocal microscopy with \textit{in situ} lithography, and wide-field fluorescence imaging. A selection of representative device demonstrations with high performance is presented, including high-quality single-photon sources, bright entangled-photon pairs, strongly-coupled cavity QED systems, and other emerging applications. The challenges in applying positioning techniques to different material systems and opportunities for using these approaches for realizing large-scale quantum photonic devices are discussed.
An edge coloring of a graph $G$ is a Gallai coloring if it contains no rainbow triangle. We show that the number of Gallai $r$-colorings of $K_n$ is $\left(\binom{r}{2}+o(1)\right)2^{\binom{n}{2}}$. This result indicates that almost all Gallai $r$-colorings of $K_n$ use only 2 colors. We also study the extremal behavior of Gallai $r$-colorings among all $n$-vertex graphs. We prove that the complete graph $K_n$ admits the largest number of Gallai $3$-colorings among all $n$-vertex graphs when $n$ is sufficiently large, while for $r\geq 4$, it is the complete bipartite graph $K_{\lfloor n/2 \rfloor, \lceil n/2 \rceil}$. Our main approach is based on the hypergraph container method, developed independently by Balogh, Morris, and Samotij as well as by Saxton and Thomason, together with some stability results for containers.
In future B5G/6G broadband communication systems, non-linear signal distortion caused by the impairment of transmit power amplifier (PA) can severely degrade the communication performance, especially when uplink users share the wireless medium using non-orthogonal multiple access (NOMA) schemes. This is because the successive interference cancellation (SIC) decoding technique, used in NOMA, is incapable of eliminating the interference caused by PA distortion. Consequently, each user's decoding process suffers from the cumulative distortion noise of all uplink users. In this paper, we establish a new and tractable PA distortion signal model based on real-world measurements, where the distortion noise power is a polynomial function of PA transmit power diverging from the oversimplified linear function commonly employed in existing studies. Applying the proposed signal model, we characterize the capacity rate region of multi-user uplink NOMA by optimizing the user transmit power. Our findings reveal a significant contraction in the capacity region of NOMA, attributable to polynomial distortion noise power. For practical engineering applications, we formulate a general weighted sum rate maximization (WSRMax) problem under individual user rate constraints. We further propose an efficient power control algorithm to attain the optimal performance. Numerical results show that the optimal power control policy under the proposed non-linear PA model achieves on average 13\% higher throughput compared to the policies assuming an ideal linear PA model. Overall, our findings demonstrate the importance of accurate PA distortion modeling to the performance of NOMA and provide an efficient optimal power control method accordingly.
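To make the contrast with an ideal linear PA concrete, an illustrative form of the measurement-based model described above (the exact polynomial fit is given in the paper; the exponent $k$ and coefficient $\beta$ below are placeholders) is a distortion noise power $\sigma^2_{d}(p) = \beta\, p^{k}$ with $k>1$, so that under SIC decoding user $i$ sees
\[
\mathrm{SINR}_i = \frac{p_i g_i}{N_0 + \sum_{j \in \mathcal{U}_i} p_j g_j + \sum_{j} \sigma^2_{d}(p_j)\, g_j},
\]
where $\mathcal{U}_i$ denotes the users decoded after user $i$. The distortion terms of all users remain in the denominator because SIC cannot cancel them, which is what shrinks the capacity region relative to the ideal linear model ($\sigma^2_{d} \equiv 0$).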
The renormalizable coloron model is built around a minimally extended color gauge group, which is spontaneously broken to QCD. The formalism introduces massive color-octet vector bosons (colorons), as well as several new scalars and fermions associated with the symmetry breaking sector. In this paper, we examine vacuum stability and triviality conditions within the context of the renormalizable coloron model up to a cutoff energy scale of 100~TeV, by computing the beta-functions of all relevant couplings and determining their running behavior as a function of the renormalization scale. We constrain the parameter space of the theory for four separate scenarios based on differing fermionic content, and demonstrate that the vectorial scenarios are less constrained by vacuum stability and triviality bounds than the chiral scenarios. Our results are summarized in exclusion plots for the separate scenarios, with previous bounds on the model overlaid for comparison. We find that a 100 TeV hadron collider could explore the entire allowed parameter space of the chiral models very effectively.
Processing-in-memory (PIM) architecture is an inherent match for data analytics applications, but we observe major challenges to address when accelerating them using PIM. In this paper, we propose Darwin, a practical LRDIMM-based multi-level PIM architecture for data analytics, which fully exploits the internal bandwidth of DRAM using the bank-, bank group-, chip-, and rank-level parallelisms. Considering the properties of data analytics operators and DRAM's area constraints, Darwin maximizes the internal data bandwidth by placing the PIM processing units, buffers, and control circuits across the hierarchy of DRAM. More specifically, it introduces a bank processing unit for each bank, in which a single instruction multiple data (SIMD) unit handles regular data analytics operators, and a bank group processing unit for each bank group to handle workload imbalance in the condition-oriented data analytics operators. Furthermore, Darwin supports a novel PIM instruction architecture that concatenates instructions for multiple thread executions on bank group processing entities, addressing the command bottleneck by enabling separate control of up to 512 different in-memory processing units simultaneously. We build a cycle-accurate simulation framework to evaluate Darwin with various DRAM configurations, optimization schemes and workloads. Darwin achieves up to 14.7x speedup over the non-optimized version. Finally, the proposed Darwin architecture achieves 4.0x-43.9x higher throughput and reduces energy consumption by 85.7% compared to the baseline CPU system (Intel Xeon Gold 6226 + 4 channels of DDR4-2933). Compared to the state-of-the-art PIM, Darwin achieves up to 7.5x and 7.1x speedup on the basic query operators and TPC-H queries, respectively. Darwin is based on the latest GDDR6 and requires only 5.6% area overhead, suggesting a promising PIM solution for the future main memory system.
A structure prediction method is presented based on the Minima Hopping method. Optimized moves on the configurational enthalpy surface are performed to escape local minima using variable cell shape molecular dynamics by aligning the initial atomic and cell velocities to low curvature directions of the current minimum. The method is applied to both silicon crystals and binary Lennard-Jones mixtures and the results are compared to previous investigations. It is shown that a high success rate is achieved and a reliable prediction of unknown ground state structures is possible.
This is a study of the one-dimensional elementary cellular automaton rule 54 in the new formalism of "flexible time". We derive algebraic expressions for groups of several cells and their evolution in time. With them we can describe the behaviour of simple periodic patterns like the ether and gliders in an efficient way. We use that to look into their behaviour in detail and find general formulas that characterise them.
We provide a new model for texture synthesis based on a multiscale, multilayer feature extractor. Within the model, textures are represented by a set of statistics computed from ReLU wavelet coefficients at different layers, scales and orientations. A new image is synthesized by matching the target statistics via an iterative projection algorithm. We explain the necessity of the different types of pre-defined wavelet filters used in our model and the advantages of multilayer structures for image synthesis. We demonstrate the power of our model by generating samples of high quality textures and providing insights into deep representations for texture images.
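A minimal PyTorch sketch of the statistic-matching loop described above; the feature extractor, the statistic (channel-wise means of ReLU coefficients here), and the optimizer settings are stand-ins for the model's actual multiscale wavelet statistics and projection algorithm.

import torch

def synthesize(extractor, target_image, steps=500, lr=0.01):
    # `extractor` is assumed to return a list of feature maps at several layers/scales.
    with torch.no_grad():
        target_stats = [f.relu().mean(dim=(-2, -1)) for f in extractor(target_image)]
    x = torch.rand_like(target_image, requires_grad=True)     # start from noise
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        stats = [f.relu().mean(dim=(-2, -1)) for f in extractor(x)]
        loss = sum(((s - t) ** 2).sum() for s, t in zip(stats, target_stats))
        loss.backward()                                        # move x toward the target statistics
        opt.step()
    return x.detach()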
We compare the charged-current quasielastic neutrino and antineutrino observables obtained in two different nuclear models, the phenomenological SuperScaling Approximation and the Relativistic Mean Field approach, with the recent data published by the MINERvA Collaboration. Both models provide a good description of the data without the need of an ad hoc increase in the mass parameter in the axial-vector dipole form factor. Comparisons are also made with the MiniBooNE results where different conclusions are reached.
This paper presents an in-depth analysis on the energy efficiency of Luby Transform (LT) codes with Frequency Shift Keying (FSK) modulation in a Wireless Sensor Network (WSN) over Rayleigh fading channels with pathloss. We describe a proactive system model according to a flexible duty-cycling mechanism utilized in practical sensor apparatus. The present analysis is based on realistic parameters including the effect of channel bandwidth used in the IEEE 802.15.4 standard, active mode duration and computation energy. A comprehensive analysis, supported by some simulation studies on the probability mass function of the LT code rate and coding gain, shows that among uncoded FSK and various classical channel coding schemes, the optimized LT coded FSK is the most energy-efficient scheme for distance d greater than the pre-determined threshold level d_T , where the optimization is performed over coding and modulation parameters. In addition, although the optimized uncoded FSK outperforms coded schemes for d < d_T , the energy gap between LT coded and uncoded FSK is negligible for d < d_T compared to the other coded schemes. These results come from the flexibility of the LT code to adjust its rate to suit instantaneous channel conditions, and suggest that LT codes are beneficial in practical low-power WSNs with dynamic position sensor nodes.
In most materials, transport can be described by the motion of distinct species of quasiparticles, such as electrons and phonons. Strong interactions between quasiparticles, however, can lead to collective behaviour, including the possibility of viscous hydrodynamic flow. In the case of electrons and phonons, an electron-phonon fluid is expected to exhibit strong phonon-drag transport signatures and an anomalously low thermal conductivity. The Dirac semi-metal PtSn4 has a very low resistivity at low temperatures and shows a pronounced phonon drag peak in the low temperature thermopower; it is therefore an excellent candidate for hosting a hydrodynamic electron-phonon fluid. Here we report measurements of the temperature and magnetic field dependence of the longitudinal and Hall electrical resistivities, the thermopower and the thermal conductivity of PtSn4. We confirm a phonon drag peak in the thermopower near 14 K and observe a concurrent breakdown of the Lorenz ratio below the Sommerfeld value. Both of these facts are expected for an electron-phonon fluid with a quasi-conserved total momentum. A hierarchy between momentum-conserving and momentum-relaxing scattering timescales is corroborated through measurements of the magnetic field dependence of the electrical and Hall resistivity and of the thermal conductivity. These results show that PtSn4 exhibits key features of hydrodynamic transport.
Recognizing irregular texts has been a challenging topic in text recognition. To encourage research on this topic, we provide a novel comic onomatopoeia dataset (COO), which consists of onomatopoeia texts in Japanese comics. COO has many arbitrary texts, such as extremely curved, partially shrunk texts, or arbitrarily placed texts. Furthermore, some texts are separated into several parts. Each part is a truncated text and is not meaningful by itself. These parts should be linked to represent the intended meaning. Thus, we propose a novel task that predicts the link between truncated texts. We conduct three tasks to detect the onomatopoeia region and capture its intended meaning: text detection, text recognition, and link prediction. Through extensive experiments, we analyze the characteristics of the COO. Our data and code are available at \url{https://github.com/ku21fan/COO-Comic-Onomatopoeia}.
Context. A large prominence was observed on September 24, 2013, for three hours (12:12 UT - 15:12 UT) with the newly launched (June 2013) Interface Region Imaging Spectrograph (IRIS), THEMIS (Tenerife), the Hinode Solar Optical Telescope (SOT), the Solar Dynamic Observatory Atmospheric Imaging Assembly (SDO/AIA), and the Multichannel Subtractive Double Pass spectrograph (MSDP) in the Meudon Solar Tower. Aims. The aim of this work is to study the dynamics of the prominence fine structures in multiple wavelengths to understand their formation. Methods. The spectrographs IRIS and MSDP provided line profiles with a high cadence in Mg II and in Halpha lines. Results. The magnetic field is found to be globally horizontal with a relatively weak field strength (8-15 Gauss). The Ca II movie reveals turbulent-like motion that is not organized in specific parts of the prominence. On the other hand, the Mg II line profiles show multiple peaks well separated in wavelength. Each peak corresponds to a Gaussian profile, and not to a reversed profile as was expected by the present non-LTE radiative transfer modeling. Conclusions. Turbulent fields on top of the macroscopic horizontal component of the magnetic field supporting the prominence give rise to the complex dynamics of the plasma. The plasma with the high velocities (70 km/s to 100 km/s if we take into account the transverse velocities) may correspond to condensation of plasma along more or less horizontal threads of the arch-shaped structure visible in 304 A. The steady flows (5 km/s) would correspond to a more quiescent plasma (cool and prominence-corona transition region) of the prominence packed into dips in horizontal magnetic field lines. The very weak secondary peaks in the Mg II profiles may reflect the turbulent nature of parts of the prominence.
Every multigraded free resolution of a monomial ideal I contains the Scarf multidegrees of I. We say I has a Scarf resolution if the Scarf multidegrees are sufficient to describe a minimal free resolution of I. The main question of this paper is: for which graphs G does the edge ideal I(G) have a Scarf resolution? We show that I(G) has a Scarf resolution if and only if G is a gap-free forest. We also classify connected graphs for which all powers of I(G) have Scarf resolutions. Along the way, we give a concrete description of the Scarf complex of any forest. For a general graph, we give a recursive construction for its Scarf complex based on Scarf complexes of induced subgraphs.
In this paper we analyse financial implications of exchangeability and similar properties of finite dimensional random vectors. We show how these properties are reflected in prices of some basket options in view of the well-known put-call symmetry property and the duality principle in option pricing. A particular attention is devoted to the case of asset prices driven by Levy processes. Based on this, concrete semi-static hedging techniques for multi-asset barrier options, such as certain weighted barrier spread options, weighted barrier swap options or weighted barrier quanto-swap options are suggested.
The purpose of this paper is to establish limit laws for volume preserving almost Anosov flows on $3$-manifolds having a neutral periodic orbit of cubic saddle type. In the process, we derive estimates for the Dulac maps for cubic neutral saddles in planar vector fields.
We present evidence that the globular cluster NGC 6397 contains two distinct classes of centrally-concentrated UV-bright stars. Color-magnitude diagrams constructed from U, B, V, and I data obtained with the HST/WFPC2 reveal seven UV-bright stars fainter than the main-sequence turnoff, three of which had previously been identified as cataclysmic variables (CVs). Lightcurves of these stars show the characteristic ``flicker'' of CVs, as well as longer-term variability. A fourth star is identified as a CV candidate on the basis of its variability and UV excess. Three additional UV-bright stars show no photometric variability and have broad-band colors characteristic of B stars. These non-flickering UV stars are too faint to be extended horizontal branch stars. We suggest that they could be low-mass helium white dwarfs, formed when the evolution of a red giant is interrupted, due either to Roche-lobe overflow onto a binary companion, or to envelope ejection following a common-envelope phase in a tidal-capture binary. Alternatively, they could be very-low-mass core-He-burning stars. Both the CVs and the new class of faint UV stars are strongly concentrated toward the cluster center, to the extent that mass segregation from 2-body relaxation alone may be unable to explain their distribution.
This work presents the numerical simulation of the melting process of a particle injected in a plasma jet. The plasma process is nowadays applied to produce thin coatings on metal mechanical components with the aim of improving the surface resistance to different phenomena such as corrosion, temperature or wear. In this work we studied the heat transfer including phase-change of a bi-layer particle composed of a metallic iron core coated with ceramic alumina, inside a plasma jet. The model accounted for the environmental conditions along the particle path. The numerical simulation of this problem was performed via a temperature-based phase-change finite element formulation. The results obtained with this methodology satisfactorily described the melting process of the particle. Particularly, the results of the present work illustrate the phase change evolution in a bi-layer particle during its motion in the plasma jet. Moreover, the numerical trends agreed with those previously reported in the literature and computed with a finite volume enthalpy based formulation.
Time series of photospheric magnetic parameters of solar active regions (ARs) are used to address whether scaling properties of fluctuations embedded in such time series help to distinguish between flare-quiet and flaring ARs. We examine a total of 118 flare-quiet and 118 flaring AR patches (called HARPs), which were observed from 2010 to 2016 by the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO). Specifically, the scaling exponent of fluctuations is derived by applying the Detrended Fluctuation Analysis (DFA) method to a dataset of 8-day time series of 18 photospheric magnetic parameters at 12-min cadence for all HARPs under investigation. We first find a statistically significant difference in the distribution of the scaling exponent between the flare-quiet and flaring HARPs, in particular for some space-averaged, signed parameters associated with magnetic field line twist, electric current density, and current helicity. The flaring HARPs tend to show higher values of the scaling exponent compared to those of the flare-quiet ones, even though there is considerable overlap between their distributions. In addition, for both the flare-quiet and flaring HARPs the DFA analysis indicates that (1) the time series of most of the magnetic parameters under consideration are non-stationary, and (2) the time series of the total unsigned magnetic flux and the mean photospheric magnetic free energy density in general present a non-stationary, persistent property, while the total unsigned flux near magnetic polarity inversion lines and parameters related to current density show a non-stationary, anti-persistent trend in their time series.
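For readers unfamiliar with DFA, the following minimal Python sketch illustrates the method on a generic one-dimensional series; the window sizes, the first-order detrending, and the white-noise test signal are illustrative choices and are not taken from the study.

```python
# Minimal Detrended Fluctuation Analysis (DFA) sketch on a generic time series.
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    """Return the DFA scaling exponent of time series x."""
    y = np.cumsum(x - np.mean(x))          # integrated (profile) series
    fluctuations = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            coef = np.polyfit(t, seg, 1)   # local linear trend
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        fluctuations.append(np.mean(rms))
    # slope of log F(s) versus log s is the scaling exponent
    slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return slope

rng = np.random.default_rng(0)
print(dfa_exponent(rng.standard_normal(4096)))  # ~0.5 for white noise
```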
A new Monte-Carlo radiative-transfer code, Sunrise, is used to study the effects of dust in N-body/hydrodynamic simulations of interacting galaxies. Dust has a profound effect on the appearance of the simulated galaxies. At peak luminosities, about 90% of the bolometric luminosity is absorbed, and the dust obscuration scales with luminosity in such a way that the brightness at UV/visual wavelengths remains roughly constant. A general relationship between the fraction of energy absorbed and the ratio of bolometric luminosity to baryonic mass is found. Comparing to observations, the simulations are found to follow a relation similar to the observed IRX-Beta relation found by Meurer et al (1999) when similar luminosity objects are considered. The highest-luminosity simulated galaxies depart from this relation and occupy the region where local (U)LIRGs are found. This agreement is contingent on the presence of Milky-Way-like dust, while SMC-like dust results in far too red a UV continuum slope to match observations. The simulations are used to study the performance of star-formation indicators in the presence of dust. The far-infrared luminosity is found to be reliable. In contrast, the H-alpha and far-UV luminosity suffer severely from dust attenuation, and dust corrections can only partially remedy the situation.
An alternative approach for the panel second stage of data envelopment analysis (DEA) is presented in this paper. Instead of efficiency scores, we propose to model rankings in the second stage using a dynamic ranking model in the score-driven framework. We argue that this approach is suitable to complement traditional panel regression as a robustness check. To demonstrate the proposed approach, we determine research efficiency of higher education systems at country level by examining scientific publications and analyze its relation to good governance. The proposed approach confirms positive relation to the Voice and Accountability indicator, as found by the standard panel linear regression, while suggesting caution regarding the Government Effectiveness indicator.
We study cohomological induction for a pair $(\frak g,\frak k)$, where $\frak g$ is an infinite dimensional locally reductive Lie algebra and $\frak k \subset \frak g$ is of the form $\frak k_0 + C_{\frak g}(\frak k_0)$, where $\frak k_0 \subset \frak g$ is a finite dimensional subalgebra which is reductive in $\frak g$ and $C_{\frak g}(\frak k_0)$ is the centralizer of $\frak k_0$ in $\frak g$. We prove a general non-vanishing and $\frak k$-finiteness theorem for the output. This yields in particular simple $(\frak g,\frak k)$-modules of finite type over $\frak k$ which are analogs of the fundamental series of generalized Harish-Chandra modules constructed in \cite{PZ1} and \cite{PZ2}. We study explicit versions of the construction when $\frak g$ is a root-reductive or diagonal locally simple Lie algebra.
Transfer learning is widely used to train deep neural networks (DNNs) and to build powerful representations. Even after the pre-trained model is adapted for the target task, the representation performance of the feature extractor is retained to some extent. As the performance of the pre-trained model can be considered the private property of the owner, it is natural to seek an exclusive right to the generalization performance of the pre-trained weights. To address this issue, we suggest a new paradigm of transfer learning called disposable transfer learning (DTL), which disposes of only the source task without degrading the performance of the target task. To achieve knowledge disposal, we propose a novel loss named Gradient Collision loss (GC loss). GC loss selectively unlearns the source knowledge by leading the gradient vectors of mini-batches in different directions. Whether the model successfully unlearns the source task is measured by piggyback learning accuracy (PL accuracy). PL accuracy estimates the vulnerability of knowledge leakage by retraining the scrubbed model on a subset of source data or new downstream data. We demonstrate that GC loss is an effective approach to the DTL problem by showing that the model trained with GC loss retains the performance on the target task with a significantly reduced PL accuracy.
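The abstract does not spell out the exact form of GC loss; the sketch below shows one plausible reading, in which the cosine similarity between the gradients of two source-task mini-batches is minimized so that their gradient vectors point in different directions. The model, data, and loss choices are hypothetical and not taken from the paper.

```python
# Illustrative "gradient collision" style objective: push the gradients of two
# source-task mini-batches toward opposing directions (one possible reading).
import torch
import torch.nn.functional as F

def gradient_collision_loss(model, batch_a, batch_b):
    xa, ya = batch_a
    xb, yb = batch_b
    params = [p for p in model.parameters() if p.requires_grad]
    ga = torch.autograd.grad(F.cross_entropy(model(xa), ya), params, create_graph=True)
    gb = torch.autograd.grad(F.cross_entropy(model(xb), yb), params, create_graph=True)
    ga = torch.cat([g.reshape(-1) for g in ga])
    gb = torch.cat([g.reshape(-1) for g in gb])
    # minimising this term maximises the angle between the two gradient vectors
    return F.cosine_similarity(ga, gb, dim=0)

model = torch.nn.Linear(8, 3)
xa, ya = torch.randn(16, 8), torch.randint(0, 3, (16,))
xb, yb = torch.randn(16, 8), torch.randint(0, 3, (16,))
print(gradient_collision_loss(model, (xa, ya), (xb, yb)).item())
```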
Dark matter particles will be captured in neutron stars if they undergo scattering interactions with nucleons or leptons. These collisions transfer the dark matter kinetic energy to the star, resulting in appreciable heating that is potentially observable by forthcoming infrared telescopes. While previous work considered scattering only on nucleons, neutron stars contain small abundances of other particle species, including electrons and muons. We perform a detailed analysis of the neutron star kinetic heating constraints on leptophilic dark matter. We also estimate the size of loop induced couplings to quarks, arising from the exchange of photons and Z bosons. Despite having relatively small lepton abundances, we find that an observation of an old, cold, neutron star would provide very strong limits on dark matter interactions with leptons, with the greatest reach arising from scattering off muons. The projected sensitivity is orders of magnitude more powerful than current dark matter-electron scattering bounds from terrestrial direct detection experiments.
Investigating the early-stage evolution of an erupting flux rope from the Sun is important to understand the mechanisms by which it loses its stability and its space weather impacts. Our aim is to develop an efficient scheme for tracking the early dynamics of erupting solar flux ropes and to use the algorithm to analyse their early-stage properties. The algorithm is tested on a data-driven simulation of an eruption that took place in active region AR12473. We investigate the modelled flux rope's footpoint movement and magnetic flux evolution and compare with observational data from the Solar Dynamics Observatory's Atmospheric Imaging Assembly in the 211 $\unicode{x212B}$ and 1600 $\unicode{x212B}$ channels. To carry out our analysis, we use the time-dependent data-driven magnetofrictional model (TMFM). We also perform another modelling run, where we stop the driving of the TMFM midway through the flux rope's rise through the simulation domain and evolve it instead with a zero-beta magnetohydrodynamic (MHD) approach. The developed algorithm successfully extracts a flux rope and follows its ascent through the simulation domain. We find that the movement of the modelled flux rope footpoints shows similar trends in both the TMFM and the relaxation MHD run: they recede from their respective central locations as the eruption progresses, and the positive polarity footpoint region exhibits a more dynamic behaviour. The ultraviolet brightenings and extreme ultraviolet dimmings agree well with the models in terms of their dynamics. According to our modelling results, the toroidal magnetic flux in the flux rope first rises and then decreases. In our observational analysis, we capture the descending phase of the toroidal flux. In conclusion, the extraction algorithm enables us to effectively study the flux rope's early dynamics and derive some of its key properties, such as footpoint movement and toroidal magnetic flux.
We report the optical, UV, and soft X-ray observations of the $2017-2022$ eruptions of the recurrent nova M31N 2008-12a. We infer a steady decrease in the accretion rate over the years based on the inter-eruption recurrence period. We find a ``cusp'' feature in the $r'$ and $i'$ band light curves close to the peak, which could be associated with jets. Spectral modelling indicates a mass ejection of 10$^{-7}$ to 10$^{-8}$ M$_{\odot}$ during each eruption, and an enhanced helium abundance of He/He$_{\odot}$ $\approx$ 3. The super-soft source (SSS) phase shows significant variability, which is anti-correlated with the UV emission, indicating a common origin. The variability could be due to the reformation of the accretion disk. A comparison of the accretion rate with different models on the $\rm M_{WD}$$-\dot{M}$ plane yields the mass of a CO WD, powering the ``H-shell flashes'' every $\sim$ 1 year, to be $>1.36$ M$_{\odot}$ and growing with time, making M31N 2008-12a a strong candidate for the single-degenerate scenario of Type Ia supernova progenitors.
A notion of L^2-homology for compact quantum groups is introduced, generalizing the classical notion for countable, discrete groups. If the compact quantum group in question has tracial Haar state, it is possible to define its L^2-Betti numbers and Novikov-Shubin invariants/capacities. It is proved that these L^2-Betti numbers vanish for the Gelfand dual of a compact Lie group and that the zeroth Novikov-Shubin invariant equals the dimension of the underlying Lie group. Finally, we relate our approach to the approach of A. Connes and D. Shlyakhtenko by proving that the L^2-Betti numbers of a compact quantum group, with tracial Haar state, are equal to the Connes-Shlyakhtenko L^2-Betti numbers of its Hopf *-algebra of matrix coefficients.
The aim of this paper is to bring together the notions of quantum game and game isomorphism. The work is intended as an attempt to introduce a new criterion for quantum game schemes. The generally accepted requirement forces a quantum scheme to generate the classical game in a particular case. Now, given a quantum game scheme and two isomorphic classical games, we additionally require the resulting quantum games to be isomorphic as well. We are concerned with the Eisert-Wilkens-Lewenstein quantum game scheme and the strong isomorphism between games in strategic form.
In lattice QCD, colour confinement manifests in flux tubes. We compute in detail the quark-antiquark flux tube in pure gauge SU(3) in dimension $D=3+1$ for quark-antiquark distances R ranging from 0.4 fm to 1.4 fm. To increase the signal-to-noise ratio, we apply the improved multihit and extended smearing techniques. We detail the gauge invariant squared components of the colour electric and colour magnetic fields both in the mediator plane between the static quark and static antiquark and in the planes of the sources. We fit the field densities with appropriate ansätze and we observe the screening of the colour fields in all studied planes together with the quantum widening of the flux tube in the mediator plane. All squared components of the colour fields are non-vanishing and are consistent with a penetration length lambda ~ 0.22 to 0.24 fm and an effective screening mass mu ~ 0.9 to 0.8 GeV. The quantum widening of the flux tube is well fitted with a logarithmic law in R.
Diffusive shock acceleration in the environs of a remnant's expanding shell is a popular candidate for the origin of SNR gamma-rays. In this paper, results from our study of non-linear effects in shock acceleration theory and their impact on the gamma-ray spectra of SNRs are presented. These effects describe the dynamical influence of the accelerated cosmic rays on the shocked plasma at the same time as addressing how the non-uniformities in the fluid flow force the distribution of the cosmic rays to deviate from pure power-laws. Such deviations are crucial to gamma-ray spectral determination. Our self-consistent Monte Carlo approach to shock acceleration is used to predict ion and electron distributions that spawn neutral pion decay, bremsstrahlung and inverse Compton emission components for SNRs. We demonstrate how the spatial and temporal limitations imposed by the expanding SNR shell quench acceleration above critical energies in the 500 GeV - 10 TeV range, thereby spawning gamma-ray spectral cutoffs that are quite consistent with Whipple's TeV upper limits to the EGRET unidentified sources that have SNR associations. We also discuss the role of electron injection in shocks and its impact on the significance of electromagnetic components to GeV--TeV spectral formation.
This paper describes a new magnetic trap for ultra-cold neutrons (UCNs) made from a 1.2 m long Halbach-octupole array of permanent magnets with an inner bore radius of 47 mm combined with an assembly of superconducting end coils and bias field solenoid. The use of the trap in a vertical, magneto-gravitational and a horizontal setup are compared in terms of the effective volume and ability to control key systematic effects that need to be addressed in high precision neutron lifetime measurements.
We consider elastic antineutrino-electron scattering taking into account possible effects of neutrino masses and mixing and of neutrino magnetic moments and electric dipole moments. Having in mind antineutrinos produced in a nuclear reactor we compute, in particular, the weak-electromagnetic interference terms which are linear in the magnetic (electric dipole) moments and also in the neutrino masses. We show that these terms are, however, suppressed compared to the pure weak and electromagnetic cross section. We also comment upon the possibility of using the electromagnetic cross section to investigate neutrino oscillations.
A simple model is presented for the parton distributions in hadrons. The parton momenta in the hadron rest frame are derived from a spherically symmetric, Gaussian, distribution having a width motivated by the Heisenberg uncertainty relation applied to the hadron size. Valence quarks and gluons originate from the `bare' hadron, while sea partons arise mainly from pions in hadronic fluctuations. Starting from a low Q^2 scale, the distributions are evolved with next-to-leading order DGLAP and give the proton structure function F2(x,Q^2) in good agreement with deep inelastic scattering data.
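As a rough numerical illustration of the stated mechanism, the sketch below samples parton momenta in the hadron rest frame from an isotropic Gaussian whose width is set by the uncertainty relation applied to the proton size; the particular width sigma = hbar*c / R is an assumed form chosen for illustration, not the model's fitted parameter.

```python
# Sample isotropic Gaussian parton momenta with a width motivated by the
# Heisenberg uncertainty relation applied to the proton radius (illustrative).
import numpy as np

hbar_c = 0.1973          # GeV*fm
R_proton = 0.87          # fm, approximate proton charge radius
sigma = hbar_c / R_proton  # GeV, assumed width of each momentum component

rng = np.random.default_rng(1)
p = rng.normal(0.0, sigma, size=(100_000, 3))   # px, py, pz in GeV
p_mag = np.linalg.norm(p, axis=1)
print(f"sigma = {sigma:.3f} GeV, <|p|> = {p_mag.mean():.3f} GeV")
```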
Using the tools of Differential Geometry, we define a new "fast" chaoticity indicator, able to detect dynamical instability of trajectories much more effectively (i.e. more quickly) than the usual tools, like Lyapunov Characteristic Numbers (LCNs) or the Poincaré Surface of Section. Moreover, at variance with other "fast" indicators proposed in the literature, it gives information about the asymptotic behaviour of trajectories, though being local in phase-space. Furthermore, it detects the chaotic or regular nature of geodesics without any reference to a given perturbation, and it also allows one to discriminate between different regimes (and possibly sources) of chaos in distinct regions of phase-space.
We have introduced two crossover operators, MMX-BLXexploit and MMX-BLXexplore, for simultaneously solving multiple feature/subset selection problems where the features may have numeric attributes and the subset sizes are not predefined. These operators differ in the level of exploration and exploitation they perform; one is designed to produce convergence-controlled mutation and the other exhibits a quasi-constant mutation rate. We illustrate the characteristics of these operators by evolving pattern detectors to distinguish alcoholics from controls using their visually evoked response potentials (VERPs). This task encapsulates two groups of subset selection problems: choosing a subset of EEG leads along with the lead-weights (features with attributes), and defining the temporal pattern that characterizes the alcoholic VERPs. We observed better generalization performance from MMX-BLXexplore. Perhaps MMX-BLXexploit was handicapped by not having a restart mechanism. These operators are novel and appear to hold promise for solving simultaneous feature selection problems.
This report describes the 2014 study by the Science Definition Team (SDT) of the Wide-Field Infrared Survey Telescope (WFIRST) mission. It is a space observatory that will address the most compelling scientific problems in dark energy, exoplanets and general astrophysics using a 2.4-m telescope with a wide-field infrared instrument and an optical coronagraph. The Astro2010 Decadal Survey recommended a Wide Field Infrared Survey Telescope as its top priority for a new large space mission. As conceived by the decadal survey, WFIRST would carry out a dark energy science program, a microlensing program to determine the demographics of exoplanets, and a general observing program utilizing its ultra wide field. In October 2012, NASA chartered a Science Definition Team (SDT) to produce, in collaboration with the WFIRST Study Office at GSFC and the Program Office at JPL, a Design Reference Mission (DRM) for an implementation of WFIRST using one of the 2.4-m, Hubble-quality telescope assemblies recently made available to NASA. This DRM builds on the work of the earlier WFIRST SDT, reported by Green et al. (2012), and the previous WFIRST-2.4 DRM, reported by Spergel et al. (2013). The 2.4-m primary mirror enables a mission with greater sensitivity and higher angular resolution than the 1.3-m and 1.1-m designs considered previously, increasing both the science return of the primary surveys and the capabilities of WFIRST as a Guest Observer facility. The addition of an on-axis coronagraphic instrument to the baseline design enables imaging and spectroscopic studies of planets around nearby stars.
Few-shot class-incremental learning (FSCIL) faces the challenges of memorizing old class distributions and estimating new class distributions given few training samples. In this study, we propose a learnable distribution calibration (LDC) approach, with the aim of systematically solving these two challenges within a unified framework. LDC is built upon a parameterized calibration unit (PCU), which initializes biased distributions for all classes based on classifier vectors (memory-free) and a single covariance matrix. The covariance matrix is shared by all classes, so that the memory costs are fixed. During base training, PCU is endowed with the ability to calibrate biased distributions by recurrently updating sampled features under the supervision of real distributions. During incremental learning, PCU recovers distributions for old classes to avoid `forgetting', and estimates distributions and augments samples for new classes to alleviate `over-fitting' caused by the biased distributions of few-shot samples. LDC is theoretically plausible, as it can be formulated as a variational inference procedure. It improves FSCIL's flexibility as the training procedure requires no class similarity prior. Experiments on CUB200, CIFAR100, and mini-ImageNet datasets show that LDC outperforms the state of the art by 4.64%, 1.98%, and 3.97%, respectively. LDC's effectiveness is also validated on few-shot learning scenarios.
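A schematic sketch of the calibration idea as described above: each class distribution is taken to be a Gaussian whose mean is the corresponding classifier vector and whose covariance is a single matrix shared by all classes, from which synthetic features can be sampled for few-shot classes. The dimensions, the identity-scaled covariance, and the sample count are illustrative assumptions, not the paper's settings.

```python
# Schematic distribution calibration: one Gaussian per class with a shared
# covariance matrix, used to augment features for a few-shot class.
import numpy as np

rng = np.random.default_rng(0)
feat_dim, n_classes, n_aug = 64, 10, 20

classifier_vectors = rng.normal(size=(n_classes, feat_dim))   # one mean per class
shared_cov = np.eye(feat_dim) * 0.1                           # single shared covariance

def augment_class(c):
    """Sample synthetic features for class c from its calibrated distribution."""
    return rng.multivariate_normal(classifier_vectors[c], shared_cov, size=n_aug)

few_shot_features = augment_class(3)
print(few_shot_features.shape)  # (20, 64)
```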
Generative adversarial networks (GANs) are effective in generating realistic images, but their training is often unstable. Existing efforts model the training dynamics of GANs in the parameter space, but such analyses cannot directly motivate practically effective stabilizing methods. To this end, we present a conceptually novel perspective from control theory to directly model the dynamics of GANs in the function space and provide simple yet effective methods to stabilize GAN training. We first analyze the training dynamics of a prototypical Dirac GAN and adopt the widely used closed-loop control (CLC) to improve its stability. We then extend CLC to stabilize the training dynamics of normal GANs, where CLC is implemented as a squared $L_2$ regularizer on the output of the discriminator. Empirical results show that our method can effectively stabilize the training and obtain state-of-the-art performance on data generation tasks.
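A minimal sketch of the regularizer as stated above: the usual discriminator loss plus a squared L2 penalty on the discriminator output. Applying the penalty to both real and fake outputs, the non-saturating GAN loss, and the weight lam are illustrative choices and not necessarily those of the paper.

```python
# Discriminator loss with a squared L2 penalty on the discriminator output,
# in the spirit of the closed-loop-control regularizer described above.
import torch
import torch.nn.functional as F

def discriminator_loss_with_clc(D, real, fake, lam=0.1):
    d_real, d_fake = D(real), D(fake)
    gan_loss = F.softplus(-d_real).mean() + F.softplus(d_fake).mean()
    clc_reg = lam * (d_real.pow(2).mean() + d_fake.pow(2).mean())
    return gan_loss + clc_reg

D = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
real, fake = torch.randn(64, 2), torch.randn(64, 2)
print(discriminator_loss_with_clc(D, real, fake).item())
```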
We investigate anomalous spin and orbital Hall phenomena in antiferromagnetic (AF) materials via orbital pumping experiments. Conducting spin and orbital pumping experiments on YIG/Pt/Ir20Mn80 heterostructures, we unexpectedly observe strong spin and orbital anomalous signals in an out-of-plane configuration. We report a sevenfold increase in the signal of the anomalous inverse orbital Hall effect (AIOHE) compared to conventional effects. Our study suggests expanding the Orbital Hall angle ({\theta}_OH) to a rank 3 tensor, akin to the Spin Hall angle ({\theta}_SH), to explain AIOHE. This work pioneers converting spin-orbital currents into charge current, advancing the spin-orbitronics domain in AF materials.
Lecture notes of a tutorial on Combinatorial Hodge Theory in Simplicial Signal Processing, held at the International Conference on Digital Audio Effects (DAFx-23) in Copenhagen, Denmark.
We present the positive-partial-transpose squared conjecture introduced by M. Christandl at the workshop "Operator Structures in Quantum Information Theory" (Banff International Research Station, Alberta, 2012). We investigate the conjecture in higher dimensions and offer two novel approaches (decomposition and composition of quantum channels) and, correspondingly, several schemes for finding counterexamples to this conjecture. One of the schemes, involving the composition of PPT quantum channels in unsolved dimensions, yields a potential counterexample.
A key problem in underwater acoustic (UWA) channel estimation is the non-uniform sparse representation of the channel, which may increase algorithm complexity and the required computation time. A mathematical framework utilizing an l21 constraint in the two-dimensional frequency domain is employed to enhance channel estimation. The framework depends on both main and auxiliary channel information. Simulation results demonstrate that the proposed estimation method alleviates some of the problems encountered with other norms such as l1. Furthermore, it can achieve better performance in terms of mean square error (MSE) and execution time.
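For concreteness, the sketch below shows two basic ingredients of an l21 (group-sparsity) formulation: the l2,1 norm of a coefficient matrix and its row-wise soft-thresholding proximal operator, as they might appear inside an iterative solver. The matrix shape and threshold are illustrative; this is not the paper's algorithm.

```python
# l2,1 norm and its row-wise soft-thresholding proximal operator.
import numpy as np

def l21_norm(H):
    """Sum of the l2 norms of the rows of H."""
    return np.sum(np.linalg.norm(H, axis=1))

def prox_l21(H, tau):
    """Row-wise soft thresholding: shrink each row's l2 norm by tau."""
    norms = np.linalg.norm(H, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return H * scale

H = np.random.default_rng(0).normal(size=(8, 4))   # e.g. delay x Doppler taps
print(l21_norm(H), l21_norm(prox_l21(H, 0.5)))
```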
We propose a novel strategy for extracting features in supervised learning that can be used to construct a classifier which is more robust to small perturbations in the input space. Our method builds upon the idea of the information bottleneck by introducing an additional penalty term that encourages the Fisher information of the extracted features to be small, when parametrized by the inputs. By tuning the regularization parameter, we can explicitly trade off the opposing desiderata of robustness and accuracy when constructing a classifier. We derive the optimal solution to the robust information bottleneck when the inputs and outputs are jointly Gaussian, proving that the optimally robust features are also jointly Gaussian in that setting. Furthermore, we propose a method for optimizing a variational bound on the robust information bottleneck objective in general settings using stochastic gradient descent, which may be implemented efficiently in neural networks. Our experimental results for synthetic and real data sets show that the proposed feature extraction method indeed produces classifiers with increased robustness to perturbations.
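The sketch below is a hedged illustration of such an objective for a Gaussian encoder: a cross-entropy term, a KL term standing in for the bottleneck, and a penalty on the sensitivity of the encoder mean to the inputs as a crude proxy for the Fisher-information term. The architecture, the proxy, and the weights beta and gamma are assumptions for illustration, not the paper's construction.

```python
# Variational-IB style loss with an input-sensitivity penalty on the encoder
# mean, used here as a rough stand-in for a Fisher-information regularizer.
import torch
import torch.nn.functional as F

enc = torch.nn.Linear(20, 8)       # encoder mean mu(x); unit variance assumed
clf = torch.nn.Linear(8, 2)        # classifier head on z

def robust_ib_loss(x, y, beta=1e-3, gamma=1e-2):
    x = x.requires_grad_(True)
    mu = enc(x)
    z = mu + torch.randn_like(mu)                  # reparameterised sample, sigma = 1
    ce = F.cross_entropy(clf(z), y)
    kl = 0.5 * mu.pow(2).sum(dim=1).mean()         # KL(N(mu, I) || N(0, I))
    # cheap proxy for Fisher information of z given x: sensitivity of mu to x
    grads = torch.autograd.grad(mu.sum(), x, create_graph=True)[0]
    fisher_proxy = grads.pow(2).sum(dim=1).mean()
    return ce + beta * kl + gamma * fisher_proxy

x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))
print(robust_ib_loss(x, y).item())
```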
It has long been recognized that the finite speed of light can affect the observed time of an event. For example, as a source moves radially toward or away from an observer, the path length and therefore the light travel time to the observer decreases or increases, causing the event to appear earlier or later than otherwise expected, respectively. This light travel time effect (LTTE) has been applied to transits and eclipses for a variety of purposes, including studies of eclipse timing variations (ETVs) and transit timing variations (TTVs) that reveal the presence of additional bodies in the system. Here we highlight another non-relativistic effect on eclipse or transit times arising from the finite speed of light---caused by an asymmetry in the transverse velocity of the two eclipsing objects, relative to the observer. This asymmetry can be due to a non-unity mass ratio or to the presence of external barycentric motion. Although usually constant, this barycentric and asymmetric transverse velocities (BATV) effect can vary between sequential eclipses if either the path length between the two objects or the barycentric transverse velocity varies in time. We discuss this BATV effect and estimate its magnitude for both time-dependent and time-independent cases. For the time-dependent cases, we consider binaries that experience a change in orbital inclination, eccentric systems with and without apsidal motion, and hierarchical triple systems. We also consider the time-independent case which, by affecting the primary and secondary eclipses differently, can influence the inferred system parameters, such as the orbital eccentricity.
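To make the magnitude of the classical light travel time effect concrete, the following back-of-the-envelope sketch converts a change in radial path length into a timing shift via delta_t = delta_d / c; the 1 au displacement is purely illustrative.

```python
# Light travel time effect: a radial path-length change d maps to a delay d / c.
c = 2.998e8            # speed of light, m/s
au = 1.496e11          # astronomical unit, m

delta_path = 1.0 * au  # illustrative radial displacement of the eclipsing pair, m
delta_t = delta_path / c
print(f"LTTE amplitude for a 1 au shift: {delta_t:.1f} s (~{delta_t / 60:.1f} min)")
```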
Study of high-redshift radio galaxies (HzRGs) can shed light on the evolution of active galactic nuclei (AGNs) in massive elliptical galaxies. The vast majority of observed high-redshift AGNs are quasars, and there are very few radio galaxies at redshifts $z>3$. We present the radio properties of 173 sources optically identified with radio galaxies at $z\geqslant1$ with flux densities $S_{1.4}\geqslant20$ mJy. Literature data were collected for the compilation of broadband radio spectra and the estimation of radio variability, radio luminosity, and radio loudness. Almost 60% of the galaxies have steep or ultra-steep radio spectra; 22% have flat, inverted, upturn, and complex spectral shapes, and 18% have peaked spectra (PS). The majority of the PS sources in the sample (20/31) are megahertz-peaked spectrum source candidates, i.e. possibly very young and compact radio galaxies. The median values of the variability indices at 11 and 5 GHz are $V_{S_{11}}=0.14$ and $V_{S_{5}}=0.13$, which generally indicates a weak or moderate character of the long-term variability of the studied galaxies. The typical radio luminosity and radio loudness are $L_{5}=10^{43}$ - $10^{44}$ erg s$^{-1}$ and $\log R=3$ - $4$, respectively. We have found less prominent features of bright compact radio cores for our sample compared to high-redshift quasars at $z\geq3$. The variety of the obtained radio properties shows the different conditions for the formation of radio emission sources in galaxies.
We present new evolutionary models of primordial very massive stars, with initial masses ranging from $100\,\mathrm{{M}_{\odot}}$ to $1000\,\mathrm{{M}_{\odot}}$, that extend from the main sequence until the onset of dynamical instability caused by the creation of electron-positron pairs during core C, Ne, or O burning, depending on the star's mass and metallicity. Mass loss accounts for radiation-driven winds as well as pulsation-driven mass-loss on the main sequence and during the red supergiant phase. After examining the evolutionary properties, we focus on the final outcome of the models and associated compact remnants. Stars that avoid the pair-instability supernova channel, should produce black holes with masses ranging from $\approx 40\, \mathrm{{M}_{\odot}}$ to $\approx 1000\,\mathrm{{M}_{\odot}}$. In particular, stars with initial masses of about $100\,\mathrm{{M}_{\odot}}$ could leave black holes of $\simeq 85-90\, \mathrm{{M}_{\odot}}$, values consistent with the estimated primary black hole mass of the GW190521 merger event. Overall, these results may contribute to explain future data from next-generation gravitational-wave detectors, such as the Einstein Telescope and Cosmic Explorer, which will have access to as-yet unexplored BH mass range of $\approx 10^2-10^4\,\mathrm{{M}_{\odot}}$ in the early universe.
We develop the thermodynamics of field theories characterized by non-local propagators. We analyze the partition function and main thermodynamic properties arising from perturbative thermal loops. We focus on the p-adic models associated with the tachyon phenomenology in string theories. We reproduce well known features of these theories, but also obtain many new results. In particular, we explain how to maintain consistency of such non-local theories by avoiding the appearance of ghosts at finite temperature. As a consequence of this fact, the vacuum energy in p-adic theories becomes positive. It is also hierarchically suppressed, and we explore the parameter space where it is consistent with the observed value of the cosmological constant.
Some nonlinear codes, such as Kerdock and Preparata codes, can be represented as binary images under the Gray map of linear codes over rings. This paper introduces MAP decoding of Kerdock and Preparata codes by working with their quaternary representation (linear codes over Z4) with a complexity of O(N^2 log_2 N), where N is the code length in Z4. A sub-optimal bitwise APP decoder with good error-correcting performance and a complexity of O(N log_2 N) that is constructed using the decoder lifting technique is also introduced. This APP decoder extends the original lifting decoder by working with likelihoods instead of hard decisions and is not limited to the Kerdock and Preparata code families. Simulations show that our novel decoders significantly outperform several popular decoders in terms of error rate.
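The quaternary representation referred to above rests on the standard Gray map from Z4 to pairs of bits; the tiny sketch below applies it to an arbitrary Z4 word and checks that the Lee weight over Z4 equals the Hamming weight of the binary image. The example codeword is arbitrary.

```python
# Gray map from Z4 symbols to bit pairs: 0->00, 1->01, 2->11, 3->10.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_image(codeword_z4):
    """Binary image under the Gray map of a Z4 codeword."""
    return [bit for symbol in codeword_z4 for bit in GRAY[symbol]]

def lee_weight(codeword_z4):
    return sum(min(s, 4 - s) for s in codeword_z4)

cw = [0, 1, 2, 3, 2]
# Lee weight over Z4 equals the Hamming weight of the binary image.
print(gray_image(cw), lee_weight(cw), sum(gray_image(cw)))
```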
Molecular Dynamics simulations of glycerol confined in $\gamma$-Al$_2$O$_3$ slit nanopores are used to explain controversial and inconsistent observations reported in the literature regarding the dynamics of viscous fluids in confined geometries. Analysing the effects of the degree of confinement and pore saturation in this system, we found that the presence of the solid/liquid interface and of the liquid/gas interface in partially saturated pores are the main contributors to the disruption of the hydrogen bond network of glycerol. Despite the reduction of hydrogen bonds between glycerol molecules caused by the presence of the solid, glycerol molecules near the solid surface can establish hydrogen bonds with the hydroxyl groups of $\gamma$-Al$_2$O$_3$ that significantly slow down the dynamics of the confined fluid compared to the bulk liquid. On the other hand, the disruption of the hydrogen bond network caused by the liquid/gas interface in unsaturated pores significantly reduces the number of hydrogen bonds between glycerol molecules and results in faster dynamics than in the bulk liquid. Therefore, we suggest that the discrepancies reported in the literature are a consequence of measurements carried out under different pore saturation conditions.
Evidence for intra-unit-cell (IUC) magnetic order in the pseudogap region of high-$T_c$ cuprates below a temperature $T^\ast$ is found in several studies, but NMR and $\mu$SR experiments do not observe the expected static local magnetic fields. It has been noted, however, that such fields could be averaged by fluctuations. Our measurements of muon spin relaxation rates in single crystals of YBa$_2$Cu$_3$O$_y$ reveal magnetic fluctuations of the expected order of magnitude that exhibit critical slowing down at $T^\ast$. These results are strong evidence for fluctuating IUC magnetic order in the pseudogap phase.
We extend the Erd\H{o}s-R\'enyi law of large numbers to the averaging setup in both the discrete and continuous time cases. We consider both stochastic processes and dynamical systems as fast motions whenever they are fast mixing and satisfy large deviations estimates. In the continuous time case we consider flows with large deviations estimates which allow a suspension representation, and it turns out that fast mixing of the corresponding base transformation suffices for our results.
In this paper we analyze, from the game theory point of view, Byzantine Fault Tolerant blockchains when processes exhibit rational or Byzantine behavior. Our work is the first to model Byzantine-consensus-based blockchains as a committee coordination game. Our first contribution is to offer a game-theoretical methodology to analyse equilibrium interactions between Byzantine and rational committee members in Byzantine Fault Tolerant blockchains. Byzantine processes seek to inflict maximum damage to the system, while rational processes best-respond to maximise their expected net gains. Our second contribution is to derive conditions under which consensus properties are satisfied or not in equilibrium. When the majority threshold is lower than the proportion of Byzantine processes, invalid blocks are accepted in equilibrium. When the majority threshold is large, equilibrium can involve coordination failures, in which no block is ever accepted. However, when the cost of accepting invalid blocks is large, there exists an equilibrium in which blocks are accepted if and only if they are valid.
This paper is concerned with polynomial optimization problems. We show how to exploit term (or monomial) sparsity of the input polynomials to obtain a new converging hierarchy of semidefinite programming relaxations. The novelty (and distinguishing feature) of such relaxations is to involve block-diagonal matrices obtained in an iterative procedure performing completion of the connected components of certain adjacency graphs. The graphs are related to the terms arising in the original data and not to the links between variables. Our theoretical framework is then applied to compute lower bounds for polynomial optimization problems either randomly generated or coming from the networked systems literature.
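As a schematic illustration of how such blocks can arise (greatly simplified with respect to the actual iterative completion procedure), the sketch below connects two monomials of a degree-one basis whenever their product lies in the support of a toy polynomial and groups the basis into connected components, one block per component. The polynomial, the basis, and the graph rule are illustrative assumptions.

```python
# Group a monomial basis into blocks via connected components of a term-based
# adjacency graph (toy illustration of the term-sparsity idea).
from itertools import combinations

# p(x, y, z) = x^2 + x*y + z^2, support stored as exponent tuples
support = {(2, 0, 0), (1, 1, 0), (0, 0, 2)}
basis = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]   # 1, x, y, z

def product(a, b):
    return tuple(i + j for i, j in zip(a, b))

# union-find over basis indices
parent = list(range(len(basis)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for i, j in combinations(range(len(basis)), 2):
    if product(basis[i], basis[j]) in support:
        parent[find(i)] = find(j)      # connect monomials sharing a term

blocks = {}
for i in range(len(basis)):
    blocks.setdefault(find(i), []).append(basis[i])
print(list(blocks.values()))           # e.g. [[1], [x, y], [z]] as exponent tuples
```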
The goal of this study is to test two different computing platforms with respect to their suitability for running deep networks as part of a humanoid robot software system. One of the platforms is the CPU-centered Intel NUC7i7BNH and the other is a NVIDIA Jetson TX2 system that puts more emphasis on GPU processing. The experiments addressed a number of benchmarking tasks including pedestrian detection using deep neural networks. Some of the results were unexpected but demonstrate that platforms exhibit both advantages and disadvantages when taking computational performance and electrical power requirements of such a system into account.
Machine learning (ML), being now widely accessible to the research community at large, has fostered a proliferation of new and striking applications of these emergent mathematical techniques across a wide range of disciplines. In this paper, we will focus on a particular case study: the field of paleoanthropology, which seeks to understand the evolution of the human species based on biological and cultural evidence. As we will show, the easy availability of ML algorithms and lack of expertise on their proper use among the anthropological research community has led to foundational misapplications that have appeared throughout the literature. The resulting unreliable results not only undermine efforts to legitimately incorporate ML into anthropological research, but produce potentially faulty understandings about our human evolutionary and behavioral past. The aim of this paper is to provide a brief introduction to some of the ways in which ML has been applied within paleoanthropology; we also include a survey of some basic ML algorithms for those who are not fully conversant with the field, which remains under active development. We discuss a series of missteps, errors, and violations of correct protocols of ML methods that appear disconcertingly often within the accumulating body of anthropological literature. These mistakes include use of outdated algorithms and practices; inappropriate train/test splits, sample composition, and textual explanations; as well as an absence of transparency due to the lack of data/code sharing, and the subsequent limitations imposed on independent replication. We assert that expanding samples, sharing data and code, re-evaluating approaches to peer review, and, most importantly, developing interdisciplinary teams that include experts in ML are all necessary for progress in future research incorporating ML within anthropology.
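As one concrete example of the train/test-split misstep mentioned above, the sketch below uses a group-aware split so that all samples from the same individual or site stay on one side of the split, which avoids the information leakage produced by a naive random split. The feature matrix, labels, and group structure are hypothetical.

```python
# Group-aware train/test split: no individual appears in both train and test.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))                  # measurements
y = rng.integers(0, 2, size=60)               # e.g. a hypothetical taxon label
groups = np.repeat(np.arange(12), 5)          # 12 individuals, 5 samples each

splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups))
assert set(groups[train_idx]).isdisjoint(groups[test_idx])   # no leakage across groups
print(len(train_idx), len(test_idx))
```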
For the truncated moment problem associated to a complex sequence $\gamma ^{(2n)}=\{\gamma _{ij}\}_{i,j\in Z_{+},i+j \leq 2n}$ to have a representing measure $\mu $, it is necessary for the moment matrix $M(n)$ to be positive semidefinite, and for the algebraic variety $\mathcal{V}_{\gamma}$ to satisfy $\operatorname{rank}\;M(n) \leq \;$ card$\;\mathcal{V}_{\gamma}$ as well as a consistency condition: the Riesz functional vanishes on every polynomial of degree at most $2n$ that vanishes on $\mathcal{V}_{\gamma}$. In previous work with L. Fialkow and M. M\"{o}ller, the first-named author proved that for the extremal case (rank$\;M(n)=$ card$\;\mathcal{V}_{\gamma}$), positivity and consistency are sufficient for the existence of a representing measure. In this paper we solve the truncated moment problem for cubic column relations in $M(3)$ of the form $Z^{3}=itZ+u\bar{Z}$ ($u,t \in \mathbb{R}$); we do this by checking consistency. For $(u,t)$ in the open cone determined by $0 < \left|u\right| < t < 2 \left|u\right|$, we first prove that the algebraic variety has exactly $7$ points and $\operatorname{rank}\;M(3)=7$; we then apply the above mentioned result to obtain a concrete, computable, necessary and sufficient condition for the existence of a representing measure.
We derive effective recursion formulae of top intersections in the tautological ring $R^*(M_g)$ of the moduli space of curves of genus $g\geq 2$. As an application, we prove a convolution-type tautological relation in $R^{g-2}(M_g)$.