Multi-frequency matched filters (MMFs) are routinely used to detect galaxy clusters from CMB data through the thermal Sunyaev-Zeldovich (tSZ) effect, leading to cluster catalogues that can be used for cosmological inference. In order to be applied, MMFs require knowledge of the cross-frequency power spectra of the noise in the maps. This is typically estimated from the data and taken to be equal to the power spectra of the data, assuming the contribution from the tSZ signal of the detections to be negligible. Using both analytical arguments and \textit{Planck}-like mock observations, we show that doing so causes the MMF noise to be overestimated, inducing a loss of signal-to-noise. Furthermore, the MMF cluster observable (the amplitude $\hat{y}_0$ or the signal-to-noise $q$) does not behave as expected, which can potentially bias cosmological inference. In particular, the observable becomes biased with respect to its theoretical prediction and displays a variance that also differs from its predicted value. We propose an iterative MMF (iMMF) approach designed to mitigate these effects. In this approach, after a first standard MMF step, the noise power spectra are reestimated by masking the detections from the data, delivering an updated iterative cluster catalogue. Applying our iMMF to our \textit{Planck}-like mock observations, we find that the aforementioned effects are completely suppressed. This leads to a signal-to-noise gain relative to the standard MMF, with more significant detections and a higher number of them, and to a cluster observable with the expected theoretical properties, thus eliminating any potential biases in the cosmological constraints.
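A minimal flat-sky sketch of the iterative noise re-estimation loop described above, assuming a single filter scale, a pixel-scale tSZ template, and crude masking by zeroing detected pixels; the function names and the signal-to-noise normalization are illustrative, not the authors' pipeline:

```python
import numpy as np

def cross_spectra(maps):
    """Empirical cross-frequency power spectra N_ij(k); maps: (nf, ny, nx)."""
    f = np.fft.fft2(maps)
    return np.einsum('iyx,jyx->ijyx', f, f.conj()).real / maps[0].size

def snr_map(maps, sed, noise):
    """MMF signal-to-noise map for a pixel-scale template with frequency SED `sed`."""
    f = np.fft.fft2(maps)
    ninv = np.linalg.pinv(noise.transpose(2, 3, 0, 1))   # (ny, nx, nf, nf)
    w = np.einsum('yxij,j->iyx', ninv, sed)              # N^{-1} a, per Fourier mode
    norm = np.einsum('i,iyx->yx', sed, w).sum()          # sum_k a^T N^{-1} a
    y0 = np.fft.ifft2(np.einsum('iyx,iyx->yx', w, f)).real * maps[0].size / norm
    return y0 * np.sqrt(norm / maps[0].size)             # crude sigma normalization

def immf(maps, sed, thresh=5.0, n_iter=2):
    noise = cross_spectra(maps)            # standard MMF: noise spectra = data spectra
    snr = snr_map(maps, sed, noise)
    for _ in range(n_iter):
        masked = maps * (snr < thresh)     # mask the detections
        noise = cross_spectra(masked)      # re-estimate the noise spectra
        snr = snr_map(maps, sed, noise)    # updated (iterative) detection pass
    return snr
```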
A self-contained proof of the KAM theorem in the Thirring model is discussed.
The possibility for detuned spins to display synchronous oscillations in local observables is analyzed in the presence of collective dissipation and incoherent pumping. We show that there exist two distinct mechanisms that can give rise to synchronization, namely non-degenerate subradiance and coalescence. The former, known as transient synchronization, is here generalized to the presence of pumping. It is due to long-lasting coherences leading to a progressive frequency selection. In the same set-up, albeit under different conditions, coalescence and exceptional points are found which can lead to regimes where a single oscillation frequency is present in the relevant quantities. Still, we show that synchronization can be established only after steady phase-locking occurs. Distinctive spectral features of synchronization by these two different mechanisms are reported for two-time correlations.
Graphical representations of classical Friedmann's models are often misleading when one considers the age of the universe. Most textbooks disregard conceptual differences in the representations, as far as ages are concerned. We discuss the details of the scale-factor versus time function for Friedmann's solutions in the time range that includes the ages of model universes.
Topology establishes a unifying framework for a diverse range of scientific areas including particle physics, cosmology, and condensed matter physics. One of the most fascinating manifestations of topology in the context of condensed matter is the topological Hall effect, and its relative, the skyrmion Hall effect. Skyrmions are stable vortex-like spin configurations in certain chiral magnets which, when subject to external electric currents, can drift in the direction transverse to the current. These quasi-particles are characterised by a conserved topological charge which, in the skyrmion Hall effect, plays the role of the electric charge in the ordinary Hall effect. Recently, it has been shown that liquid crystals endowed with chiral properties serve as an ideal testbed for the fundamental investigation of topological solitons, including their two- and three-dimensional realisations. Here, we show experimentally and numerically that three-dimensional solitons known as "torons" exhibit a Hall-like effect when driven by shear flows: the torons are deflected in the direction perpendicular to the shear plane. The experimental results are rationalised in terms of the dynamic Ericksen-Leslie equations, which predict the emergence of a transverse component of the net mass flow, the magnitude of which scales as the third power of the shear rate. The perturbation analysis highlights an interplay of the viscous and chiral elastic torques as the mechanism for the emergence of net transverse currents. Numerical simulations demonstrate, however, that torons are not merely dragged by the flow but move with their own transverse speed, much larger than the average flow velocity in the transverse direction. Our findings may enable responsive microfluidic applications relying on soft topological solitons.
This paper identifies the homotopy theories of topological stacks and orbispaces with unstable global homotopy theory. At the same time, we provide a new perspective by interpreting it as the homotopy theory of `spaces with an action of the universal compact Lie group'. The upshot is a novel way to construct and study genuine cohomology theories on stacks, orbifolds, and orbispaces, defined from stable global homotopy types represented by orthogonal spectra. The universal compact Lie group (which is neither compact nor a Lie group) is a well known object, namely the topological monoid $\mathcal L$ of linear isometric self-embeddings of $\mathbb R^\infty$. The underlying space of $\mathcal L$ is contractible, and the homotopy theory of $\mathcal L$-spaces with respect to underlying weak equivalences is just another model for the homotopy theory of spaces. However, the monoid $\mathcal L$ contains copies of all compact Lie groups in a specific way, and we define global equivalences of $\mathcal L$-spaces by testing on corresponding fixed points. We establish a global model structure on the category of $\mathcal L$-spaces and prove it to be Quillen equivalent to the global model category of orthogonal spaces, and to the category of orbispaces, i.e., presheaves of spaces on the global orbit category.
We propose a monotonic logic of internalised non-monotonic or instant interactive proofs (LiiP) and reconstruct an existing monotonic logic of internalised monotonic or persistent interactive proofs (LiP) as a minimal conservative extension of LiiP. Instant interactive proofs effect a fragile epistemic impact in their intended communities of peer reviewers that consists in the impermanent induction of the knowledge of their proof goal by means of the knowledge of the proof with the interpreting reviewer: If my peer reviewer knew my proof then she would at least then (in that instant) know that its proof goal is true. Their impact is fragile and their induction of knowledge impermanent in the sense of being the case possibly only at the instant of learning the proof. This accounts for the important possibility of internalising proofs of statements whose truth value can vary, which, as opposed to invariant statements, cannot have persistent proofs. So instant interactive proofs effect a temporary transfer of certain propositional knowledge (knowable ephemeral facts) via the transmission of certain individual knowledge (knowable non-monotonic proofs) in distributed systems of multiple interacting agents.
A short review is given of the simplified differential equations approach to Master Integrals, which was recently proposed by one of the authors. We show its applicability by calculating some non-trivial two-loop planar Master Integrals, namely those contributing to amplitudes of massive diboson VV' production at the LHC with massless internal lines.
Feature transformation aims to reconstruct an effective representation space by mathematically refining the existing features. It serves as a pivotal approach to combat the curse of dimensionality, enhance model generalization, mitigate data sparsity, and extend the applicability of classical models. Existing research predominantly focuses on domain knowledge-based feature engineering or learning latent representations. However, these methods, while insightful, lack full automation and fail to yield a traceable and optimal representation space. An indispensable question arises: Can we concurrently address these limitations when reconstructing a feature space for a machine-learning task? Our initial work took a pioneering step towards this challenge by introducing a novel self-optimizing framework. This framework leverages the power of three cascading reinforced agents to automatically select candidate features and operations for generating improved feature transformation combinations. Despite the impressive strides made, there was room for enhancing its effectiveness and generalization capability. In this extended journal version, we advance our initial work from two distinct yet interconnected perspectives: 1) We propose a refinement of the original framework, which integrates a graph-based state representation method to capture the feature interactions more effectively and develop different Q-learning strategies to alleviate Q-value overestimation further. 2) We utilize a new optimization technique (actor-critic) to train the entire self-optimizing framework in order to accelerate the model convergence and improve the feature transformation performance. Finally, to validate the improved effectiveness and generalization capability of our framework, we perform extensive experiments and conduct comprehensive analyses.
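One standard way to alleviate Q-value overestimation, sketched below, is double Q-learning, which decouples action selection from action evaluation; whether this matches the exact strategies developed in the paper is an assumption made here for illustration:

```python
import numpy as np

def double_q_update(Q1, Q2, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Update table Q1 in place; Q2 evaluates the action that Q1 selects."""
    a_star = np.argmax(Q1[s_next])             # action selection by Q1
    target = r + gamma * Q2[s_next, a_star]    # action evaluation by Q2
    Q1[s, a] += alpha * (target - Q1[s, a])    # avoids the max-operator bias
```

In practice the roles of Q1 and Q2 are swapped at random on each update.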
We provide an integral formula for the Maslov index of a pair $(E,F)$ over a surface $\Sigma$, where $E\rightarrow\Sigma$ is a complex vector bundle and $F\subset E_{|\partial\Sigma}$ is a totally real subbundle. As in Chern-Weil theory, this formula is written in terms of the curvature of $E$ plus a boundary contribution. When $(E,F)$ is obtained via an immersion of $(\Sigma,\partial\Sigma)$ into a pair $(M,L)$ where $M$ is K\"ahler and $L$ is totally real, the formula allows us to control the Maslov index in terms of the geometry of $(M,L)$. We exhibit natural conditions on $(M,L)$ which lead to bounds and monotonicity results.
Here, we report the unusual behaviour of (BiFeO3)1-x-(PbTiO3)x (BF-xPT) films prepared using a multilayer deposition approach by the chemical solution deposition method. Thin-film samples of various compositions were prepared by depositing several bilayers of BF and PT precursors with varying BF or PT layer thicknesses. X-ray diffraction showed that the final samples of all compositions exhibit mixing of the two compounds, resulting in a single-phase mixture, as also confirmed by transmission electron microscopy. In contrast to bulk equilibrium compositions, our samples show a monoclinic (MA-type) structure, suggesting the disappearance of the morphotropic phase boundary (MPB) at about x = 0.30 that is observed in the bulk. This is accompanied by the lack of any enhancement of the remnant polarization at the MPB, as shown by the ferroelectric measurements. Magnetic measurements show that the magnetization of the samples increases with increasing BF content. The significant magnetization of the samples indicates melting of the spin spirals in BF-xPT, arising from the random distribution of iron atoms across the film. The absence of Fe2+ ions in the films was corroborated by X-ray photoelectron spectroscopy measurements. The results illustrate that the thin-film processing methodology used significantly changes the structural evolution, in contrast to predictions from the equilibrium phase diagram, and dramatically modifies the functional characteristics of the BF-xPT system.
In previous work, we presented a novel information-theoretic privacy criterion for query forgery in the domain of information retrieval. Our criterion measured privacy risk as a divergence between the user's and the population's query distribution, and contemplated the entropy of the user's distribution as a particular case. In this work, we make a twofold contribution. First, we thoroughly interpret and justify the privacy metric proposed in our previous work, elaborating on the intimate connection between the celebrated method of entropy maximization and the use of entropies and divergences as measures of privacy. Secondly, we attempt to bridge the gap between the privacy and the information-theoretic communities by substantially adapting some technicalities of our original work to reach a wider audience, not intimately familiar with information theory and the method of types.
We introduce a new technique to calculate perturbative corrections to neutron-deuteron ($nd$) scattering that does not require calculation of the full off-shell scattering amplitude. Its relation to the more familiar partial-resummation technique is explained. Also included is a calculation of the SD-mixing term that occurs at next-to-next-to-leading order (NNLO) in pionless effective field theory, $\mathrm{EFT}_{\not{\pi}}$. Using the new technique with the SD-mixing term, a complete strictly perturbative phase-shift analysis of $nd$ scattering is performed up to NNLO, including eigenphases and mixing angles. This is compared to potential-model calculations, and good agreement is found for the eigenphases and some of the mixing angles at low energies.
We analyze the long-time evolution of open quantum many-body systems using a variational approach. For the dissipative Ising model, where mean-field theory predicts a wide region of bistable behavior, we find genuine bistability only at a singular point, confirming the previously suggested picture of a first-order transition. The situation is dramatically different when considering a majority-voter model including three-body interactions, where we find bistable behavior in an extended region, owing to the breaking of detailed balance in the effective description of the system. In this model, genuine bistability persists even when quantum fluctuations are added.
In this comment on "The Markov blanket trick: On the scope of the free energy principle and active inference" by Raja and colleagues (2021) in Physics of Life Reviews, I argue that the argument presented by the authors is valid; however, I claim that the argument contains a flawed premise, which undermines their conclusions. In addition, I argue that work on the FEP that has appeared since the target paper was published underwrites a cogent response to the issues that are raised by Raja and colleagues.
Tutte initiated the study of nowhere-zero flows and proved the following fundamental theorem: For every graph $G$ there is a polynomial $f$ so that for every abelian group $\Gamma$ of order $n$, the number of nowhere-zero $\Gamma$-flows in $G$ is $f(n)$. For signed graphs (which have bidirected orientations), the situation is more subtle. For a finite group $\Gamma$, let $\epsilon_2(\Gamma)$ be the largest integer $d$ so that $\Gamma$ has a subgroup isomorphic to $\mathbb{Z}_2^d$. We prove that for every signed graph $G$ and $d \ge 0$ there is a polynomial $f_d$ so that $f_d(n)$ is the number of nowhere-zero $\Gamma$-flows in $G$ for every abelian group $\Gamma$ with $\epsilon_2(\Gamma) = d$ and $|\Gamma| = 2^d n$. Beck and Zaslavsky had previously established the special case of this result when $d=0$ (i.e., when $\Gamma$ has odd order).
We study how the presence of world-sheet currents affects the evolution of cosmic string networks, and their impact on predictions for the cosmic microwave background (CMB) anisotropies generated by these networks. We provide a general description of string networks with currents and explicitly investigate in detail two physically motivated examples: wiggly and superconducting cosmic string networks. By using a modified version of the CMBact code, we show quantitatively how the relevant network parameters in both of these cases influence the predicted CMB signal. Our analysis suggests that previous studies have overestimated the amplitude of the anisotropies for wiggly strings. For superconducting strings the amplitude of the anisotropies depends on parameters which presently are not well known - but which can be measured in future high-resolution numerical simulations.
The periodic standing wave approach to binary inspiral assumes rigid rotation of gravitational fields and hence helically symmetric solutions. To exploit the symmetry, numerical computations must solve for ``helical scalars,'' fields that are functions only of corotating coordinates, the labels on the helical Killing trajectories. Here we present the formalism for describing linearized general relativity in terms of helical scalars, and we present solutions to the mixed partial differential equations of the linearized gravity problem (and of a toy nonlinear problem) using the adapted coordinates and numerical techniques previously developed for scalar periodic standing wave computations. We argue that the formalism developed may suffice for periodic standing wave computations in the post-Minkowskian approximation and in full general relativity.
We present the galaxy luminosity function (LF) of the Abell 119 cluster down to $M_r\sim-14$ mag based on deep images in the $u$-, $g$-, and $r$-bands taken using the MOSAIC II CCD mounted on the Blanco 4m telescope at CTIO. The cluster membership was accurately determined based on the radial velocity information as well as on the color-magnitude relation for bright galaxies and the scaling relation for faint galaxies. The overall LF exhibits a bimodal behavior with a distinct dip at $r\sim18.5$ mag ($M_r\sim-17.8$ mag), which is more appropriately described by a two-component function. The shape of the LF depends strongly on the cluster-centric distance and on the local galaxy density. The LF of galaxies in the outer, low-density region exhibits a steeper slope and a more prominent dip compared with that of their counterparts in the inner, high-density region. We found evidence for substructure in the projected galaxy distribution, in which several overdense regions in the Abell 119 cluster appear to be closely associated with the surrounding, possibly filamentary structure. The combined LF of the overdense regions exhibits a two-component form with a distinct dip, while the LF of the central region is well described by a single Schechter function. We suggest that, in the context of the hierarchical cluster formation scenario, the observed overdense regions are the relics of galaxy groups, retaining their two-component LFs with a dip, which acquired their shapes through galaxy merging processes in group environments before they fell into the cluster.
We introduce Inference-Time Intervention (ITI), a technique designed to enhance the "truthfulness" of large language models (LLMs). ITI operates by shifting model activations during inference, following a set of directions across a limited number of attention heads. This intervention significantly improves the performance of LLaMA models on the TruthfulQA benchmark. On an instruction-finetuned LLaMA called Alpaca, ITI improves its truthfulness from 32.5% to 65.1%. We identify a tradeoff between truthfulness and helpfulness and demonstrate how to balance it by tuning the intervention strength. ITI is minimally invasive and computationally inexpensive. Moreover, the technique is data efficient: while approaches like RLHF require extensive annotations, ITI locates truthful directions using only a few hundred examples. Our findings suggest that LLMs may have an internal representation of the likelihood of something being true, even as they produce falsehoods on the surface.
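A minimal sketch of the intervention step, assuming the truthful directions and the set of intervened heads have already been identified by probing (the hook-based wiring and the constant intervention strength are simplifications, not the exact published procedure):

```python
import torch

def intervene(head_outputs, directions, alpha=15.0):
    """head_outputs: (batch, n_heads, head_dim) activations at one layer;
    directions: dict mapping head index -> unit vector (head_dim,)."""
    out = head_outputs.clone()
    for h, d in directions.items():
        out[:, h, :] += alpha * d      # shift along the truthful direction
    return out

# Usage sketch: attach as a forward hook on each chosen attention layer
# (module paths are hypothetical and depend on the model implementation):
# layer.register_forward_hook(lambda m, inp, out: intervene(out, dirs[layer]))
```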
We present the effect of adapting to human preferences on trust in a human-robot teaming task. The team performs a task in which the robot acts as an action recommender to the human. It is assumed that the behavior of the human and the robot is based on some reward function they try to optimize. We use a new human trust-behavior model that enables the robot to learn and adapt to the human's preferences in real-time during their interaction using Bayesian Inverse Reinforcement Learning. We present three strategies for the robot to interact with a human: a non-learner strategy, in which the robot assumes that the human's reward function is the same as the robot's, a non-adaptive learner strategy that learns the human's reward function for performance estimation, but still optimizes its own reward function, and an adaptive-learner strategy that learns the human's reward function for performance estimation and also optimizes this learned reward function. Results show that adapting to the human's reward function results in the highest trust in the robot.
The angular momentum of galaxies is routinely ascribed to a process of tidal torques acting during the early stages of gravitational collapse, and is predicted from the initial mass distribution using second-order perturbation theory and the Zel'dovich approximation. We have tested this theory for a flat hierarchical cosmogony using a large N-body simulation with sufficient dynamic range to include tidal fields, allow resolution of individual galaxies, and thereby expand on previous studies. We find relatively good correlation between the predictions of linear theory and actual galaxy evolution. While structure formation from early times is a complex history of hierarchical merging, salient features are well described by the simple spherical-collapse model. Most notably, we test several methods for determining the turnaround epoch, and find that turnaround is successfully described by the spherical collapse model. The angular momentum of collapsing structures grows linearly until turnaround, as predicted, and continues quasi-linearly until shell crossing. The predicted angular momentum for well-resolved galaxies at turnaround overestimates the true turnaround and final values by a factor of ~3 with a scatter of ~70 percent, and only marginally yields the correct direction of the angular momentum vector. We recover the prediction that final angular momentum scales as mass to the 5/3 power. We find that mass and angular momentum also vary proportionally with peak height.
We consider three-dimensional gravity based on torsion. Specifically, we consider an extension of the so-called Teleparallel Equivalent of General Relativity in the presence of a scalar field with a self-interacting potential, where the scalar field is non-minimally coupled with the torsion scalar. Then, we find asymptotically AdS hairy black hole solutions, which are characterized by a scalar field with a power-law behavior, being regular outside the event horizon and null at spatial infinity and by a self-interacting potential, which tends to an effective cosmological constant at spatial infinity.
Reed showed that, if two graphs are $P_4$-isomorphic, then either both are perfect or neither is. In this note we derive an analogous result for perfect digraphs.
We investigate a spin-boson model with two boson baths that are coupled to two perpendicular components of the spin by employing the density matrix renormalization group method with an optimized boson basis. It is revealed that in the deep sub-Ohmic regime there exists a novel second-order phase transition between two types of doubly degenerate states, which is reduced to one of the usual type for nonzero tunneling. In addition, it is found that expectation values of the spin components display jumps at the phase boundary in the absence of bias and tunneling.
We investigate the nonequilibrium dynamics of spherical active Brownian particles in three spatial dimensions that interact via a pair potential. The investigation is based on a predictive local field theory that is derived by rigorous coarse-graining starting from the overdamped Langevin dynamics of the particles. This field theory is highly accurate and applicable even for the highest activities. It includes configurational order parameters and derivatives up to infinite orders. We also present three reduced models of finite order that result from the general field theory by suitable approximations and are easier to apply. Furthermore, we use the general field theory and the simplest of the reduced models to derive analytic expressions for the density-dependent mean swimming speed and for the spinodal corresponding to the onset of motility-induced phase separation of the particles, respectively. Both of these results show good agreement with recent findings described in the literature. The analytic result for the spinodal also yields a prediction for the associated critical point, whose position has not been determined before.
Multi-frequency radio polarimetric observations of the diffuse Galactic synchrotron background enable us to study the structure of the diffuse ionized gas via rotation measure (RM) maps. However, depolarization will introduce artifacts in the resulting RM, most notably in the form of narrow, elongated ``depolarization canals''. We use numerical models of a non-emitting Faraday-rotating medium to study the RM distribution needed to create depolarization canals through depolarization due to a finite beam width, and to estimate the influence of this depolarization mechanism on the determination of RM. We argue that the depolarization canals can indeed be caused by beam depolarization, which in turn is a natural consequence of observing a turbulent medium with limited resolution. Furthermore, we estimate that beam depolarization can induce an additional error of about 20% in RM determinations, and considerably less in regions that are not affected by depolarization canals.
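A toy numerical illustration of the beam depolarization mechanism, assuming a non-emitting Faraday screen with Gaussian-random RM in front of a uniformly polarized background and simple top-hat beam averaging (all parameter values are illustrative):

```python
import numpy as np

lam2 = 0.21 ** 2                       # observing wavelength squared [m^2]
rng = np.random.default_rng(0)
rm = rng.normal(0.0, 8.0, (256, 256))  # turbulent RM screen [rad m^-2]
p = np.exp(2j * rm * lam2)             # complex polarization, |p| = 1 per pixel

beam = 8                               # beam width in pixels
pb = p.reshape(256 // beam, beam, 256 // beam, beam).mean(axis=(1, 3))
print("mean polarization degree after beam averaging:", np.abs(pb).mean())
# Pixels with |pb| ~ 0 trace canal-like depolarized regions, where any
# apparent RM derived from the beam-averaged angle is unreliable.
```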
This paper explores the Graph Convolutional Network (GCN) architecture proposed in (Kipf & Welling, 2016). The key points of their work are summarized and their results are reproduced. Graph regularization and alternative graph convolution approaches are explored. I find that explicit graph regularization was correctly rejected by (Kipf & Welling, 2016). I attempt to improve the performance of the GCN by approximating a k-step transition matrix in place of the normalized graph Laplacian, but I fail to find positive results. Nonetheless, the performance of several configurations of this GCN variation is shown for the Cora, Citeseer, and Pubmed datasets.
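A sketch of the explored variation (my reconstruction from the description above, not the report's code): the normalized propagation matrix of the GCN is replaced by an approximate k-step random-walk transition matrix:

```python
import numpy as np

def k_step_transition(A, k):
    """Row-stochastic k-step transition matrix from adjacency A (with self-loops)."""
    A = A + np.eye(A.shape[0])                 # add self-loops, as in the GCN
    T = A / A.sum(axis=1, keepdims=True)       # T = D^{-1} A
    return np.linalg.matrix_power(T, k)

def gcn_layer(P, X, W):
    """One propagation layer relu(P X W), with P either the normalized
    matrix of Kipf & Welling or the k-step transition matrix above."""
    return np.maximum(P @ X @ W, 0.0)
```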
Chaotic eigenstates of quantum systems are known to localize on either side of a classical partial transport barrier if the flux connecting the two sides is not resolved quantum mechanically due to Heisenberg's uncertainty. Surprisingly, in open systems with escape, chaotic resonance states can localize even if the flux is quantum mechanically resolved. We explain this using the concept of conditionally invariant measures from classical dynamical systems by introducing a new quantum mechanically relevant class of such fractal measures. We numerically find quantum-to-classical correspondence for localization transitions depending on the openness of the system and on the decay rate of resonance states.
Kinematical and luminosity relations for black-hole jet sources are reviewed. If the TeV flares observed from PKS 2155-304 in 2006 July are assumed to originate from a black hole with mass $\approx 10^8 M_8 M_\odot$, then the $\sim 5$ minute variability timescale is consistent with the light-travel time across the Schwarzschild radius of the black hole if $M_8\sim 1$. The absolute jet power in a synchrotron/SSC model exceeds, however, the Eddington luminosity for a black hole with $M_8\sim 1$ unless the jet is highly efficient. The maximum Blandford-Znajek power is $\sim 10^{46}M_8$ ergs s$^{-1}$ if the magnetic-field energy density threading the horizon is equated with the luminous energy density in the vicinity of the black hole. An external Compton component can relax power requirements, so a black hole with mass $\sim 10^8 M_\odot$ could explain the observed flaring behavior. For the Swift and HESS data taken in 2006 July, relativistic outflows with bulk Lorentz factor $\Gamma \gtrsim 30$ satisfy $\gamma$-$\gamma$ attenuation limits. If this system harbors a binary black hole, then the accretion disk from a more massive, $\sim 10^9 M_\odot$ black-hole primary would make an additional external radiation component. Dual thermal accretion disk signatures would confirm this scenario.
We employ numerical simulations and finite-size scaling techniques to investigate the properties of the dynamic phase transition that is encountered in the Blume-Capel model subjected to a periodically oscillating magnetic field. We mainly focus on the study of the two-dimensional system for various values of the crystal-field coupling in the second-order transition regime. Our results indicate that the present non-equilibrium phase transition belongs to the universality class of the equilibrium Ising model and allow us to construct a dynamic phase diagram, in analogy to the equilibrium case, at least for the range of parameters considered. Finally, we present some complementary results for the three-dimensional model, where again the obtained estimates for the critical exponents fall into the universality class of the corresponding three-dimensional equilibrium Ising ferromagnet.
In the first part of this paper we review a mathematical model for the onset and progression of Alzheimer's disease (AD) that was developed in subsequent steps over several years. The model is meant to describe the evolution of AD in vivo. In [Y. Achdou et al., 2013] we treated the problem at a microscopic scale, where the typical length scale is a multiple of the size of the soma of a single neuron. Subsequently, in [M. Bertsch et al., 2016] we concentrated on the macroscopic scale, where brain neurons are regarded as a continuous medium, structured by their degree of malfunctioning. In the second part of the paper we consider the relation between the microscopic and the macroscopic models. In particular, we show under which assumptions the kinetic transport equation, which in the macroscopic model governs the evolution of the probability measure for the degree of malfunctioning of neurons, can be derived from a particle-based setting. In the microscopic model we consider essentially mechanism i), modelling it by a system of Smoluchowski equations for the amyloid concentration (describing the agglomeration phenomenon), with the addition of a diffusion term as well as of a source term on the neuronal membrane. At the macroscopic level, instead, we model processes i) and ii) by a system of Smoluchowski equations for the amyloid concentration, coupled to a kinetic-type transport equation for the distribution function of the degree of malfunctioning of the neurons. The second equation contains an integral term describing the random onset of the disease as a jump process localized in particularly sensitive areas of the brain. Even though we deliberately neglected many aspects of the complexity of the brain and the disease, numerical simulations are in both cases (microscopic and macroscopic) in good qualitative agreement with clinical data.
Existing Simultaneous Localization and Mapping (SLAM) approaches are limited in their scalability due to the growing map size in long-term robot operation. Moreover, processing such maps for localization and planning tasks increases the computational resources required onboard. To address the problem of memory consumption in long-term operation, we develop a novel real-time SLAM algorithm, MeSLAM, that is based on a neural field implicit map representation. It combines the proposed global mapping strategy, including neural network distribution and region tracking, with an external odometry system. As a result, the algorithm is able to efficiently train multiple networks representing different map regions and track poses accurately in large-scale environments. Experimental results show that the accuracy of the proposed approach is comparable to that of state-of-the-art methods (on average, 6.6 cm on TUM RGB-D sequences) and outperforms the baseline, iMAP$^*$. Moreover, the proposed SLAM approach provides the most compact-sized maps without detail distortion (1.9 MB to store 57 m$^3$) among the state-of-the-art SLAM approaches.
In this paper, we use the latest Higgs measurements from ATLAS and CMS to constrain the parameter space of the model of Schmaltz, Stolarski and Thaler, a Little Higgs model with two Higgs doublets, which we will refer to as the BLH model. We account for all production and decay modes explored at ATLAS and CMS in two scenarios: a general case, which assumes the $h_0$ state is light ($m_{h_0} \approx 125$ GeV) and the masses of the other neutral scalars ($H_0$ and $A_0$) are allowed to vary, and a case with a near-degeneracy between the masses of the $h_0$ and $A_0$ and, for some choices of parameters, the $H_0$ states. The near-degeneracy scenario can result in an enhanced diphoton rate, as measured by ATLAS, but is largely ruled out by a combination of the $h_0 \rightarrow \tau^+\tau^-$ and the heavy $H_0 \rightarrow W^+W^-$ measurements. In the general case, we find large regions of parameter space that are in better agreement with either the ATLAS or CMS results than is the SM. However, a significantly enhanced diphoton rate is only possible through large contributions to the $h_0 \gamma \gamma$ effective coupling from charged Higgs bosons in a region of parameter space that borders on violation of perturbativity in the scalar sector.
We connect two key concepts in quantum information: compatibility and divisibility of quantum channels. Two channels are compatible if they can be both obtained via marginalization from a third channel. A channel divides another channel if it reproduces its action by sequential composition with a third channel. (In)compatibility is of central importance for studying the difference between classical and quantum dynamics. The relevance of divisibility stands in its close relationship with the onset of Markovianity. We emphasize the simulability character of compatibility and divisibility, and, despite their structural difference, we find a set of channels -- self-degradable channels -- for which the two notions coincide. We also show that, for degradable channels, compatibility implies divisibility, and that, for anti-degradable channels, divisibility implies compatibility. These results motivate further research on these classes of channels and shed new light on the meaning of these two largely studied notions.
We investigate the implications of a model with an SU(2)-singlet up-type quark heavy enough not to be produced at the LHC; specifically, we consider the contribution of the new quark to the branching ratios of the $K \to \pi \nu \bar{\nu}$, $B \to \pi \nu \bar{\nu}$ and $B \to K \nu \bar{\nu}$ decays. We show that the deviation from the Standard Model can be up to 10% in the case of a 5 TeV quark. Precise measurements of these branching ratios at future experiments will allow us to observe the contributions of the new quark or to impose stronger constraints on its mass.
Recent remarkable advances in experimental techniques have provided a background for inferring neuronal couplings from point-process data that include a great number of neurons. Here, we propose a systematic procedure for pre- and post-processing generic point-process data in an objective manner, to handle the data in the framework of a binary simple statistical model, the Ising or generalized McCulloch--Pitts model. The procedure involves two steps: (1) determining the time-bin size for transforming the point-process data into discrete-time binary data, and (2) screening relevant couplings from the estimated couplings. For the first step, we decide the optimal time-bin size by introducing the null hypothesis that all neurons fire independently, then choosing the time-bin size so that the null hypothesis is rejected with the strictest criterion. The likelihood associated with the null hypothesis is evaluated analytically and used for the rejection process. For the second, post-processing step, after a certain estimate of the couplings is obtained from the pre-processed dataset, the estimate is compared with many other estimates derived from datasets obtained by randomizing the original dataset in the time direction. We accept the original estimate as relevant only if its absolute value is sufficiently larger than those of the randomized datasets. These manipulations suppress false-positive couplings induced by statistical noise. We apply this inference procedure to spiking data from synthetic and in vitro neuronal networks. The results show that the proposed procedure identifies the presence/absence of synaptic couplings fairly well, including their signs, for both the synthetic and experimental data. In particular, the results support that we can infer the physical connections of the underlying systems in favorable situations, even when using this simple statistical model.
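An illustrative implementation of the two steps (binarization and surrogate-based screening); the estimator is passed in as a black box, and the thresholds are placeholders rather than the paper's exact criteria:

```python
import numpy as np

def binarize(spike_times, t_max, dt):
    """spike_times: list of spike-time arrays, one per neuron.
    Returns an (n_bins, n_neurons) binary array."""
    n_bins = int(np.ceil(t_max / dt))
    s = np.zeros((n_bins, len(spike_times)), dtype=int)
    for i, t in enumerate(spike_times):
        s[np.minimum((np.asarray(t) / dt).astype(int), n_bins - 1), i] = 1
    return s

def screen(J_hat, estimator, s, n_surrogate=100, alpha=0.01, seed=0):
    """Keep only couplings exceeding the null distribution obtained by
    shuffling each neuron's binary time series independently in time."""
    rng = np.random.default_rng(seed)
    null = np.stack([estimator(rng.permuted(s, axis=0))
                     for _ in range(n_surrogate)])
    thresh = np.quantile(np.abs(null), 1.0 - alpha, axis=0)
    return np.where(np.abs(J_hat) > thresh, J_hat, 0.0)
```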
Spilling tea or coffee leads to a tell-tale circular stain after the drying of the droplet. This phenomenon was termed after the latter example as the "coffee ring effect". The evaporation of suspension droplets is a complex physical process, and prediction and control over particle deposit patterns obtained from sessile droplet evaporation are essential for many industrial processes such as ink-jet printing or crop-care applications. In this article, we present a systematic investigation of the effect of surface wettability on the evaporation dynamics of a particle-laden droplet, including the effect on the contact line stick-slip, the hydrodynamic flow of the suspended particles and the resulting particle deposit after evaporation. We tune the wettability of glass slides using silanisation and quantify the internal flow during the evaporation by tracking fluorescent tracer particles. We find that the internal flow shifts from a predominantly outward flow towards the contact line for low contact angles to an inward flow for large contact angles. Additionally, the corresponding deposit gradually changes from the typical coffee-ring to a central stain upon increasing the hydrophobicity of the substrate. Last, we corroborate these experimental findings with dynamic density functional theory, modelling the droplet evaporation process and stick-slip behaviour of the contact line. Our investigation suggests that the wettability of the substrate can substantially alter hydrodynamic flow within drying droplets and therefore the resulting particle deposit.
We use the process of quantum hamiltonian reduction of SU(2)_k, at rational level k, to study explicitly the correlators of the h_{1,s} fields in the c_{p,q} models. We find from direct calculation of the correlators that we have the possibility of extra, chiral and non-chiral, multiplet structure in the h_{1,s} operators beyond the `minimal' sector. At the level of the vacuum null vector h_{1,2p-1}=(p-1)(q-1) we find that there can be two extra non-chiral fermionic fields. The extra indicial structure present here permeates throughout the entire theory. In particular we find we have a chiral triplet of fields at h_{1,4p-1}=(2p-1)(2q-1). We conjecture that this triplet algebra may produce a rational extended c_{p,q} model. We also find a doublet of fields at h_{1,3p-1}=(\frac{3p}{2}-1)(\frac{3q}{2}-1). These are chiral fermionic operators if p and q are not both odd and otherwise parafermionic.
We suggest a new model for the structure of a magnetic field embedded in a high-$\beta$ turbulent plasma, based on the popular notion that the magnetic field will tend to separate into individual flux tubes. We point out that interactions between the flux tubes will be dominated by coherent effects stemming from the turbulent wakes created as the fluid streams by the flux tubes. Balancing the attraction caused by shielding effects with turbulent diffusion, we find that the flux tubes have typical radii comparable to the local Mach number squared times the large-scale eddy length, are arranged in a one-dimensional fractal pattern, have a radius of curvature comparable to the largest-scale eddies in the turbulence, and have an internal magnetic pressure comparable to the ambient pressure. When the average magnetic energy density is much less than the turbulent energy density, the radius, internal magnetic field, and curvature scale of the flux tubes will be smaller than these estimates. Realistic resistivity does not alter the macroscopic properties of the fluid or the large-scale magnetic field. In either case we show that the Sweet-Parker reconnection rate is much faster than an eddy turnover time. Realistic stellar plasmas are expected to be either in the ideal limit (e.g. the solar photosphere) or in the resistive limit (most of the solar convection zone). All current numerical simulations of three-dimensional MHD turbulence are in the viscous regime and are inapplicable to stars or accretion disks.
In this paper we are interested in extending Bailey's identity to other classical hypergeometric functions. Bailey's identity states that under a suitable choice of parameters, Appell's $F_4$ decomposes into a product of two ${}_2F_1$'s. We will show how Bailey-type factorizations can be found for Horn's hypergeometric functions $H_1, H_4$ and $H_5$.
The UN-Habitat estimates that over one billion people live in slums around the world. However, state-of-the-art techniques to detect the location of slum areas employ high-resolution satellite imagery, which is costly to obtain and process. As a result, researchers have started to look at utilising free and open-access medium resolution satellite imagery. Yet, there is no clear consensus on which data preparation and machine learning approaches are the most appropriate to use with such imagery data. In this paper, we evaluate two techniques (multi-spectral data and grey-level co-occurrence matrix feature extraction) on an open-access dataset consisting of labelled Sentinel-2 images with a spatial resolution of 10 meters. Both techniques were paired with a canonical correlation forests classifier. The results show that the grey-level co-occurrence matrix performed better than multi-spectral data for all four cities. It had an average accuracy for the slum class of 97% and a mean intersection over union of 94%, while multi-spectral data had 75% and 64% for the respective metrics. These results indicate that open-access satellite imagery with a resolution of at least 10 meters may be suitable for keeping track of development goals such as the detection of slums in cities.
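A minimal sketch of the grey-level co-occurrence matrix (GLCM) feature extraction for a single image band using scikit-image; the quantization level, offsets, and chosen texture properties are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(band, levels=32):
    """band: 2D array scaled to [0, 1]; returns a flat texture-feature vector."""
    img = (band * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```

Feature vectors computed this way over image windows would then be fed to the canonical correlation forests classifier.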
Coset diagrams have been used to study quotients, orbits, subgroups and the structure of finitely generated groups. In this paper we use coset diagrams and modular arithmetic to determine the $G$-orbits of $\mathbb{Q}^*(\sqrt{p^k})$, $\mathbb{Q}^*(\sqrt{2p^k})$, $\mathbb{Q}^*(\sqrt{2^2p^k})$, and in general $\mathbb{Q}^*(\sqrt{2^lp^k})$, for each $l\geq3$ and $k=2h+1\geq3$, for each odd prime $p$.
We present a rapid design methodology that combines automated hyper-parameter tuning with semi-supervised training to build highly accurate and robust models for voice command classification. The proposed approach allows quick evaluation of network architectures to fit the performance and power constraints of the available hardware, while ensuring good hyper-parameter choices for each network in real-world scenarios. Leveraging a vast amount of unlabeled data with a student/teacher based semi-supervised method, classification accuracy is improved from 84% to 94% on the validation set. For model optimization, we explore the hyper-parameter space through population based training and obtain an optimized model in the same time frame as it takes to train a single model.
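A framework-agnostic sketch of the student/teacher semi-supervised step: the teacher pseudo-labels the unlabeled pool and only confident examples are added to the student's training set (the confidence threshold and function names are illustrative assumptions):

```python
import numpy as np

def pseudo_label(teacher_predict_proba, X_unlabeled, confidence=0.9):
    """teacher_predict_proba: callable returning (n, n_classes) probabilities."""
    probs = teacher_predict_proba(X_unlabeled)
    keep = probs.max(axis=1) >= confidence      # keep confident predictions only
    return X_unlabeled[keep], probs[keep].argmax(axis=1)

# Sketch of the loop: train the teacher on labelled data, then
# X_extra, y_extra = pseudo_label(teacher.predict_proba, X_unlabeled)
# and train the student on the union of labelled and pseudo-labelled sets.
```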
We perform Monte Carlo simulations of cosmic ray-induced hard X-ray radiation from the Earth's atmosphere. We find that the shape of the spectrum emergent from the atmosphere in the energy range 25-300 keV is mainly determined by Compton scatterings and photoabsorption, and is almost insensitive to the incident cosmic-ray spectrum. We provide a fitting formula for the hard X-ray surface brightness of the atmosphere as would be measured by a satellite-born instrument, as a function of energy, solar modulation level, geomagnetic cutoff rigidity and zenith angle. A recent measurement by the INTEGRAL observatory of the atmospheric hard X-ray flux during the occultation of the cosmic X-ray background by the Earth agrees with our prediction within 10%. This suggests that Earth observations could be used for in-orbit calibration of future hard X-ray telescopes. We also demonstrate that the hard X-ray spectra generated by cosmic rays in the crusts of the Moon, Mars and Mercury should be significantly different from that emitted by the Earth's atmosphere.
We demonstrate simultaneous detection of current-driven antidamping-like and field-like spin-orbit torques in heavy metal/ferromagnetic metal bilayers by measuring all three magnetization components m_x, m_y, and m_z using the vector magneto-optic Kerr effect. We have also implemented a self-calibration method to accurately determine the effective fields of the spin-orbit torques. With this technique, we investigate the magnitude and direction of spin-orbit torques in a series of platinum/permalloy samples. The values found are in excellent agreement with results obtained via quadratic magneto-optic Kerr effect, planar Hall effect, and spin-transfer ferromagnetic resonance measurements.
We study the dynamics of a two-degrees-of-freedom (two-DOF) nonlinear oscillator representing a quarter-car model excited by a road roughness profile. Modelling the road profile by means of a harmonic function, we derive the Melnikov criterion for a system transition to chaos or escape. The analytically obtained estimates are confirmed by numerical simulations. To analyze the transient vibrations, we use recurrences.
The most common way to listen to recorded music nowadays is via streaming platforms which provide access to tens of millions of tracks. To assist users in effectively browsing these large catalogs, the integration of Music Recommender Systems (MRSs) has become essential. Current real-world MRSs are often quite complex and optimized for recommendation accuracy. They combine several building blocks based on collaborative filtering and content-based recommendation. This complexity can hinder the ability to explain recommendations to end users, which is particularly important for recommendations perceived as unexpected or inappropriate. While pure recommendation performance often correlates with user satisfaction, explainability has a positive impact on other factors such as trust and forgiveness, which are ultimately essential to maintain user loyalty. In this article, we discuss how explainability can be addressed in the context of MRSs. We provide perspectives on how explainability could improve music recommendation algorithms and enhance user experience. First, we review common dimensions and goals of recommenders' explainability and in general of eXplainable Artificial Intelligence (XAI), and elaborate on the extent to which these apply -- or need to be adapted -- to the specific characteristics of music consumption and recommendation. Then, we show how explainability components can be integrated within a MRS and in what form explanations can be provided. Since the evaluation of explanation quality is decoupled from pure accuracy-based evaluation criteria, we also discuss requirements and strategies for evaluating explanations of music recommendations. Finally, we describe the current challenges for introducing explainability within a large-scale industrial music recommender system and provide research perspectives.
The temperature dependent resistivity of two Pr1-xCaxMnO3 (x=0.5 and 0.4) thin films grown on LaAlO3 has been studied as a function of hydrostatic pressure (up to 2.5 GPa) and magnetic field (up to 9T). Both samples show a monotonic decrease in the resistivity with an increase in pressure, corresponding to a change of -35% at 2.5 GPa. No pressure induced metal-to-insulator transition was observed in the temperature-dependent resistivity. The non-trivial interaction between high pressure and magnetic field reveals that the effect of pressure cannot be simply rescaled to that of a specific field, as has been reported for the corresponding bulk material. We propose an interpretation of the data based on phase separation, where two different insulating phases coexist: the charge ordered phase, which is sensitive to both magnetic field and pressure, and a second insulating phase that can be tuned by magnetic field. Such a result demonstrates that phase separation can be manipulated in thin films by independent application of magnetic field and/or external pressure.
We report a proof-of-principle experimental demonstration of a turbulence-resistant quantum Lidar system. As a key technology for sensing and ranging, Lidar has drawn considerable attention for study from a quantum perspective, in search of proven advantages complementary to the capabilities of conventional Lidar technologies. Environmental factors such as strong atmospheric turbulence can have detrimental effects on the performance of these systems. We demonstrate the possibility of turbulence-resistant operation of a quantum Lidar system via two-photon interference of entangled photon pairs. Additionally, the reported quantum Lidar demonstrates the expected noise resistance. This study suggests a potential high-precision timing-positioning technology operable under turbulence and noise.
We present new radio continuum observations of NGC253 from the Murchison Widefield Array at frequencies between 76 and 227 MHz. We model the broadband radio spectral energy distribution for the total flux density of NGC253 between 76 MHz and 11 GHz. The spectrum is best described as the sum of a central starburst and extended emission. The central component, corresponding to the inner 500 pc of the starburst region of the galaxy, is best modelled as an internally free-free absorbed synchrotron plasma, with a turnover frequency around 230 MHz. The extended emission component of the NGC253 spectrum is best described as synchrotron emission flattening at low radio frequencies. We find that 34% of the extended emission (outside the central starburst region) at 1 GHz becomes partially absorbed at low radio frequencies. Most of this flattening occurs in the western region of the SE halo, and may be indicative of synchrotron self-absorption of shock re-accelerated electrons or an intrinsic low-energy cut-off of the electron distribution. Furthermore, we detect the large-scale synchrotron radio halo of NGC253 in our radio images. At 154-231 MHz the halo displays the well-known X-shaped/horn-like structure, and extends out to ~8 kpc in the z-direction (from the major axis).
In his recent article [arXiv:1604.04950], Adler questions the usefulness of the bound found in our experimental search for genuine effects of hyper-complex quantum mechanics [arXiv:1602.01624]. Our experiment was performed using a black-box (instrumentalist) approach to generalized probabilistic theories; therefore, it does not assume a priori any particular underlying mechanism. From that point of view our experimental results do indeed place meaningful bounds on possible effects of "post-quantum theories", including quaternionic quantum mechanics. In his article, Adler compares our experiment to non-relativistic and M\"oller formal scattering theory within quaternionic quantum mechanics. With a particular set of assumptions, he finds that quaternionic effects would likely not manifest themselves in general. Although these assumptions are justified in the non-relativistic case, a proper calculation for relativistic particles is still missing. Here, we provide a concrete relativistic example of Klein-Gordon scattering wherein the quaternionic effects persist. We note that when the Klein-Gordon equation is formulated using a Hamiltonian formalism it displays a so-called "indefinite metric", a characteristic feature of relativistic quantum wave equations. In Adler's example this is directly forbidden by his assumptions, and therefore our present example is not in contradiction to his work. In complex quantum mechanics this problem of an indefinite metric is solved in second quantization. Unfortunately, there is no known algorithm for canonical field quantization in quaternionic quantum mechanics.
We present a method of constructing generic single-centered and multi-centered extremal black hole solutions in a large class of 4D N=2 supergravities coupled to vector-multiplets with cubic prepotentials. The method is applicable to models for which the 3D moduli spaces obtained via c*-map are symmetric coset spaces. The attractor solutions are generated by certain nilpotent elements in the coset algebra. We present explicit computations in 4D N=2 supergravity coupled to one vector-multiplet, whose 3D moduli space is the symmetric coset space G_{2(2)}/SL(2,R)^2. The non-supersymmetric multi-centered black holes in this model are found to lack the intricate moduli space of bound configurations that are typical of the supersymmetric case.
The Krylov--Safonov theory for fully nonlinear nonlocal operators on hyperbolic spaces of dimension three is established. Since the operators on hyperbolic spaces exhibit qualitatively different behavior than those on manifolds with nonnegative curvature, new scale functions are introduced which take the effect of negative curvature into account. The regularity theory in this work provides unified regularity results for fractional-order and second-order operators in the sense that the regularity estimates stay uniform as the fractional-order approaches 2. In the unified regularity theory, the asymptotic behavior of the normalizing constant for the fractional Laplacian plays a fundamental role. The dimension restriction has been imposed to compute the explicit value of this constant by using the Fourier analysis on hyperbolic spaces.
Objects in the Edgeworth-Kuiper belt and the main asteroid belt should emit microwaves that may give rise to extra anisotropy signals in the multipoles measured by cosmic microwave background (CMB) experiments. Constraints are derived from the absence of a positive detection of such anisotropies for ell < 50, limiting the total mass of Edgeworth-Kuiper belt objects to less than 0.2 Earth masses. This limit is consistent with the mass extrapolated from the observable population with sizes a > 15 km, assuming that the small-object population follows the power law in size dN/da ~ a^{-q} with the canonical index expected for collisional equilibrium, q ~ 3.5, for which 23% of the mass is ascribed to objects smaller than are observationally accessible, down to grains. A similar argument applied to the main asteroid belt indicates that the grain population should not increase faster than q ~ 3.6 towards smaller radii, if it follows the power law continued from observed asteroids with larger radii. It is underlined that both cases are at or only slightly above the limit that can be physically significant, implying the importance of tightening the CMB anisotropy limit further, which may be attained with observations at higher radio frequencies.
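As a back-of-the-envelope check of the quoted mass fraction (a sketch assuming the power law extends from grain sizes $a_g \ll a_{\min}$ up to some maximum radius $a_{\max}$): the mass between radii $a_1$ and $a_2$ scales as
\[
M(a_1,a_2) \propto \int_{a_1}^{a_2} a^{3}\,\frac{dN}{da}\,da
          \propto \int_{a_1}^{a_2} a^{3-q}\,da
          \propto a_2^{4-q}-a_1^{4-q},
\]
so for $q=3.5$ the cumulative mass grows as $a^{1/2}$ and the fraction of mass below the observational limit $a_{\min}$ is roughly $f \simeq (a_{\min}/a_{\max})^{1/2}$; the quoted 23% below $a_{\min} \sim 15$ km would then correspond to $a_{\max}$ of a few hundred kilometres.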
We present ObjBlur, a novel curriculum learning approach to improve layout-to-image generation models, where the task is to produce realistic images from layouts composed of boxes and labels. Our method is based on progressive object-level blurring, which effectively stabilizes training and enhances the quality of generated images. This curriculum learning strategy systematically applies varying degrees of blurring to individual objects or the background during training, starting from strong blurring to progressively cleaner images. Our findings reveal that this approach yields significant performance improvements, stabilized training, smoother convergence, and reduced variance between multiple runs. Moreover, our technique demonstrates its versatility by being compatible with generative adversarial networks and diffusion models, underlining its applicability across various generative modeling paradigms. With ObjBlur, we reach new state-of-the-art results on the complex COCO and Visual Genome datasets.
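An illustrative sketch of the object-level blurring schedule, assuming a linear decay of the blur strength over training and axis-aligned boxes in pixel coordinates (the schedule and parameter values are assumptions, not the paper's exact configuration):

```python
import torch
import torchvision.transforms.functional as TF

def objblur(image, boxes, step, total_steps, sigma_max=9.0):
    """image: (C, H, W) tensor; boxes: iterable of (x0, y0, x1, y1).
    Blurs each boxed object, starting strong and decaying to no blur."""
    sigma = sigma_max * max(0.0, 1.0 - step / total_steps)
    if sigma <= 0.0:
        return image
    k = int(2 * round(3 * sigma) + 1)          # odd Gaussian kernel size
    out = image.clone()
    for x0, y0, x1, y1 in boxes:
        crop = out[:, y0:y1, x0:x1]
        out[:, y0:y1, x0:x1] = TF.gaussian_blur(crop, [k, k], [sigma, sigma])
    return out
```

The same schedule can instead be applied to the background rather than the objects, which is one of the variations the curriculum admits.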
The stochastic quantization of the fermion field is performed starting from Dirac equations. The statistical properties of stochastic terms in Langevin equations are described by explicit formulae of a Markov process. The interaction of the field is introduced as correlation of the stochastic terms. In the long time limit free fermions disappear and proper combinations of field components propagate as a scalar boson field. The existence and uniqueness of the long time limit is proved in the first order approximation of stochastic Liouville equation.
We show that an in-plane Zeeman field applied to non-centrosymmetric Ising superconductors converts singlet $s$-wave Cooper pairs to equal-spin triplet $if$ pairs, leading to an enhancement of the critical transition line beyond that expected from Ising spin-orbit coupling. The singlet-to-triplet conversion is related to a phase transformation due to spin rotation by the Zeeman field and has a geometric origin. The discussion is especially relevant to, but not limited to, monolayer transition metal dichalcogenides.
It has been suggested by Sorkin that a three-slit Young experiment could reveal the validity of a fundamental ingredient in the foundations of one of the cornerstones of modern physics, namely quantum mechanics. In terms of a certain parameter $\kappa_S$, it was argued that a non-zero value could imply a breakdown of the fundamental Born rule as well as of the superposition principle. Here we argue that a physical realization of such arguments could lead to an erroneous conclusion and contradict the basic rules of quantum mechanics. In fact, we argue that a proper interpretation of the procedures involved in a physical determination of $\kappa_S$ does not necessarily lead to $\kappa_S=0$. In order to show this, we consider a monochromatic source of photons prepared in an {\it arbitrary} quantum state and a simple version of the well-established photon detection theory of Glauber which, by construction, obeys all the rules of quantum mechanics. It is, however, also argued that after a proper identification of the relevant quantum-mechanical probability amplitudes one can reach $\kappa_S=0$. As long as one only considers a single photon detector, it is verified that, in this context, there is no fundamental difference between quantum-mechanical interference and interference as expressed in terms of classical electromagnetic waves.
Galaxy clusters contain a diffuse stellar component outside the cluster's galaxies, which is observed as faint intracluster light (ICL). Using Gemini/GMOS-N deep imaging and multi-object spectroscopy of a massive fossil cluster at a redshift of $z=0.47$, RX J105453.3+552102 (J1054), we improve the observational constraints on the formation mechanism of the ICL. We extract the ICL surface brightness and colour profiles out to 155 kpc from the brightest cluster galaxy (BCG) with a detection limit of 28.7 mag/arcsec$^2$ (1$\sigma$, 4.8" x 4.8"; i-band). The colour of the diffuse light is similar to that of the BCG and central bright galaxies out to $\sim$ 70 kpc, becoming slightly bluer toward the outside. We find that the ICL distribution shows better agreement with the spatial distribution of member galaxies than with the BCG-dominated cluster luminosity distribution. We report an ICL fraction for J1054 of $15.07 \pm 4.57 \%$ in the range of $60 \sim 155$ kpc from the BCG, which appears to be higher than the ICL fraction-redshift trend found in previous studies. Our findings suggest that the intracluster stars cannot be explained by one dominant production mechanism. A significant fraction of the ICL of J1054 may have been generated more recently from the outskirts of infalling/satellite galaxies, rather than by the BCG at the early stage of the cluster.
We use leading-order anisotropic hydrodynamics to study an azimuthally-symmetric boost-invariant quark-gluon plasma. We impose a realistic lattice-based equation of state and perform self-consistent anisotropic freeze-out to hadronic degrees of freedom. We then compare our results for the full spatiotemporal evolution of the quark-gluon plasma and its subsequent freeze-out to results obtained using 1+1d Israel-Stewart second-order viscous hydrodynamics. We find that for small shear viscosities, 4 pi eta/s ~ 1, the two methods agree well for nucleus-nucleus collisions; however, for large shear viscosity to entropy density ratios or for proton-nucleus collisions, we find important corrections to the Israel-Stewart results for the final particle spectra and the total number of charged particles. Finally, we demonstrate that the total number of charged particles produced is a monotonically increasing function of 4 pi eta/s in Israel-Stewart viscous hydrodynamics, whereas in anisotropic hydrodynamics it has a maximum at 4 pi eta/s ~ 10. For all 4 pi eta/s > 0, we find that for Pb-Pb collisions Israel-Stewart viscous hydrodynamics predicts more dissipative particle production than anisotropic hydrodynamics.
Categorizing music files according to their genre is a challenging task in the area of music information retrieval (MIR). In this study, we compare the performance of two classes of models. The first is a deep learning approach wherein a CNN model is trained end-to-end to predict the genre label of an audio signal, solely using its spectrogram. The second approach utilizes hand-crafted features, both from the time domain and the frequency domain. We train four traditional machine learning classifiers with these features and compare their performance. The features that contribute the most towards this multi-class classification task are identified. The experiments are conducted on the AudioSet dataset, and we report an AUC value of 0.894 for an ensemble classifier which combines the two proposed approaches.
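For a sense of what such hand-crafted features look like in code, here is a small sketch using the librosa library. The specific feature set (zero-crossing rate, spectral centroid, MFCCs) and the mean/std summarization are illustrative choices, not necessarily the paper's.

```python
import numpy as np
import librosa

def handcrafted_features(path):
    """Extract a small, illustrative set of time- and frequency-domain
    features; the paper's exact feature list is not reproduced here."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    zcr = librosa.feature.zero_crossing_rate(y)               # time domain
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # frequency domain
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # cepstral
    # Summarize each frame-level feature by its mean and std over frames.
    feats = [zcr, centroid, mfcc]
    return np.concatenate([np.r_[f.mean(axis=1), f.std(axis=1)] for f in feats])
```

The resulting fixed-length vector can then be fed to any standard classifier (SVM, random forest, etc.).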
Pure de Sitter, anti de Sitter, and orthogonal gauge theories in four-dimensional Euclidean spacetime are studied. It is shown that, if the theory is asymptotically free and a dynamical mass is generated, then an effective geometry may be induced and a gravity theory emerges. The asymptotic freedom and the running of the mass might account for an In\"on\"u-Wigner contraction which induces a breaking of the gauge group to the Lorentz group, while the mass itself is responsible for the coset sector of the gauge field being identified with the effective vierbein. Furthermore, the resulting local isometries are Lorentzian for the anti de Sitter group and Euclidean for the de Sitter and orthogonal groups.
With this paper we provide a mathematical review of the initial-value problem of the one-particle Dirac equation on space-like Cauchy hypersurfaces for compactly supported external potentials. We first discuss the physically relevant spaces of solutions and initial values in position and mass shell representation; second, we review the action of the Poincar\'e group as well as gauge transformations on those spaces; third, we introduce generalized Fourier transforms between those spaces and prove convenient Paley-Wiener- and Sobolev-type estimates. These generalized Fourier transforms immediately allow the construction of a unitary evolution operator for the free Dirac equation between the Hilbert spaces of square-integrable wave functions of two respective Cauchy surfaces. With a Picard-Lindel\"of argument this evolution map is generalized to the Dirac evolution including the external potential. For the latter we introduce a convenient interaction picture on Cauchy surfaces. These tools immediately provide another proof of the well-known existence and uniqueness of classical solutions and their causal structure.
We present three new methods for determining the age of groups of pre-main-sequence stars. The first, creating empirical isochrones, allows us to establish a robust age ordering, but not to derive actual ages. The second, using the width of the gap in colour-magnitude space between the pre-main-sequence and the main sequence (the radiative-convective gap), has promise as a distance- and extinction-independent measure of age, but is as yet uncalibrated. Finally, we discuss tau-squared fitting of the main sequence as the stars approach the terminus of the main sequence. This method suggests that there is a factor-of-two difference between these "nuclear" ages and the more conventional pre-main-sequence contraction ages.
Perturbative calculations in field theory at finite temperature involve sums over the Matsubara frequencies. Besides the usual difficulties that appear in perturbative computations, these sums give rise to some new obstacles that are carefully analyzed here. I present a fast and reliable recipe for working out sums over the Matsubara frequencies. As this algorithm involves very cumbersome algebraic expressions, it has been implemented for computers using the symbolic manipulation program Mathematica. The algorithm is also shown to be self-consistent when applied to computations beyond one loop.
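As a point of reference for what such sums evaluate to, the snippet below numerically checks a textbook fermionic Matsubara sum against its known closed form; this is a generic illustration, not the Mathematica algorithm described in the abstract.

```python
import numpy as np

def fermionic_matsubara_sum(E, T, N=200_000):
    """Brute-force T * sum_n 1/(omega_n^2 + E^2) over fermionic
    frequencies omega_n = (2n+1) * pi * T, truncated at |n| <= N."""
    n = np.arange(-N, N + 1)
    omega = (2 * n + 1) * np.pi * T
    return T * np.sum(1.0 / (omega**2 + E**2))

E, T = 1.0, 0.3
closed_form = np.tanh(E / (2 * T)) / (2 * E)  # standard textbook result
print(fermionic_matsubara_sum(E, T), closed_form)  # the two should agree
```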
We present ALMA Band-3/7 observations towards "the Heart" of a massive hub-filament system (HFS), SDC335, to investigate its fragmentation and accretion. At a resolution of $\sim0.03$ pc, 3 mm continuum emission resolves two massive dense cores, MM1 and MM2, with $383(^{+234}_{-120})$ $M_\odot$ (10-24% of the mass of "the Heart") and $74(^{+47}_{-24})$ $M_\odot$, respectively. With a resolution down to 0.01 pc, 0.87 mm continuum emission shows that MM1 further fragments into six condensations, and multi-transition lines of H$_2$CS provide temperature estimates. The relation between separation and mass of condensations at a scale of 0.01 pc favors turbulent Jeans fragmentation, where the turbulence seems to be scale-free rather than scale-dependent. We use the H$^{13}$CO$^+$ (1-0) emission line to resolve the complex gas motion inside "the Heart" in position-position-velocity space. We identify four major gas streams connected to large-scale filaments, inheriting the anti-clockwise spiral pattern. Along these streams, gas feeds the central massive core MM1. Assuming an inclination angle of $45(\pm15)^{\circ}$ and a H$^{13}$CO$^+$ abundance of $5(\pm3)\times10^{-11}$, the total mass infall rate is estimated to be $2.40(\pm0.78)\times10^{-3}$ $M_\odot$ yr$^{-1}$, numerically consistent with the accretion rates derived from the clump-scale spherical infall model and the core-scale outflows. The consistency suggests a continuous, near steady-state, and efficient accretion from global collapse, thereby ensuring core feeding. Our comprehensive study of SDC335 showcases the detailed gas kinematics in a prototypical massive infalling clump and calls for further systematic and statistical analyses in a large sample.
Recent theoretical developments for observing the Epoch of Reionization (EOR) have concentrated on the power spectrum signature of redshifted 21 cm emission. These studies have demonstrated the great potential of statistical EOR observations; however, the sensitivity calculations for proposed low frequency radio arrays have been highly approximate. The formalism developed for interferometric measurements of the cosmic microwave background can be extended to three dimensions to naturally incorporate the line-of-sight information inherent in the EOR signal. In this paper we demonstrate how to accurately calculate the EOR power spectrum sensitivity of an array, and develop scaling relationships which can be used to guide the design of EOR observatories. The implications of antenna distribution, antenna size, and correlator requirements for the EOR sensitivity are detailed.
A by-no-means-complete collection of references for those interested in intonational meaning, with other miscellaneous references on intonation included. Additional references are welcome, and should be sent to <EMAIL_ADDRESS>
Anomalous motional heating is a major obstacle to scalable quantum information processing with trapped ions. While the source of this heating is not yet understood, several previous studies suggest that surface contaminants may be largely responsible. We demonstrate an improvement by a factor of four in the room-temperature heating rate of a niobium surface electrode trap by in situ plasma cleaning of the trap surface. This surface treatment was performed with a simple homebuilt coil assembly and a commercially available matching network, and is considerably gentler than other treatments, such as ion milling or laser cleaning, that have previously been shown to improve ion heating rates. We do not see an improvement in the heating rate when the trap is operated at cryogenic temperatures, pointing to a role of thermally-activated surface contaminants in motional heating whose activity may freeze out at low temperatures.
Split learning (SL) is a promising approach for training artificial intelligence (AI) models, in which devices collaborate with a server to train an AI model in a distributed manner, based on the same fixed split point. However, due to device heterogeneity and variation in channel conditions, this approach is suboptimal in terms of training delay and energy consumption. In this paper, we design an adaptive split learning (ASL) scheme which can dynamically select split points for devices and allocate computing resources for the server in wireless edge networks. We formulate an optimization problem that minimizes the average training latency subject to a long-term energy consumption constraint. The difficulties in solving this problem are the lack of future information and its mixed integer programming (MIP) nature. To solve it, we propose an online algorithm leveraging Lyapunov theory, named OPEN, which decomposes the problem into a new MIP problem that involves only the current information. A two-layer optimization method is then proposed to solve this MIP problem. Extensive simulation results demonstrate that the ASL scheme can reduce the average training delay and energy consumption by 53.7% and 22.1%, respectively, as compared to existing SL schemes.
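To make the Lyapunov decomposition concrete, here is a generic drift-plus-penalty sketch of the kind such online algorithms build on: a virtual queue tracks accumulated energy-budget violations, and each slot a delay/energy trade-off weighted by that queue is minimized. The queue update rule is standard Lyapunov optimization; the per-slot candidate set and the trade-off parameter V are assumptions for illustration, not OPEN's actual formulation.

```python
def lyapunov_step(Q, V, candidates, E_avg):
    """One slot of a drift-plus-penalty scheme.

    candidates: iterable of (delay, energy) pairs, one per feasible
    (split point, compute allocation) decision this slot.
    E_avg: long-term energy budget per slot.
    Q: virtual queue backlog; V: delay/energy trade-off weight.
    """
    # Pick the decision minimizing delay plus queue-weighted energy.
    delay, energy = min(candidates, key=lambda c: V * c[0] + Q * c[1])
    # Virtual queue grows when the slot overshoots the energy budget.
    Q = max(Q + energy - E_avg, 0.0)
    return (delay, energy), Q

# Illustrative usage with made-up candidate decisions for one slot:
decision, Q = lyapunov_step(Q=0.0, V=10.0,
                            candidates=[(1.2, 0.8), (0.9, 1.5), (1.6, 0.4)],
                            E_avg=1.0)
```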
The main task of Multimodal Emotion Recognition in Conversations (MERC) is to identify the emotions expressed in modalities such as text, audio, image, and video, which is a significant development direction for realizing machine intelligence. However, much of the data in MERC naturally exhibits an imbalanced distribution of emotion categories, and researchers have largely ignored the negative impact of imbalanced data on emotion recognition. To tackle this problem, we systematically analyze it from three aspects: data augmentation, loss sensitivity, and sampling strategy, and propose the Class Boundary Enhanced Representation Learning (CBERL) model. Concretely, we first design a multimodal generative adversarial network to address the imbalanced distribution of emotion categories in the raw data. Secondly, a deep joint variational autoencoder is proposed to fuse complementary semantic information across modalities and obtain discriminative feature representations. Finally, we implement a multi-task graph neural network with mask reconstruction and classification optimization to solve the problem of overfitting and underfitting in class boundary learning, and achieve cross-modal emotion recognition. We have conducted extensive experiments on the IEMOCAP and MELD benchmark datasets, and the results show that CBERL achieves consistent improvements in emotion recognition performance. In particular, on the minority-class fear and disgust emotion labels, our model improves accuracy and F1 value by 10% to 20%.
The entropy of a hierarchical network topology in an ensemble of sparse random networks with "hidden variables" associated with its nodes is the log-likelihood that a given network topology is present in the chosen ensemble. We obtain a general formula for this entropy, which has a clear and simple interpretation in some limiting cases. The results provide new tools with which to solve the general problem of "fitting" a given network with an appropriate ensemble of random networks.
In this paper we present the "Small Bodies: Near and Far" Infrared Database, an easy-to-use tool intended to facilitate the modeling of thermal emission of small Solar System bodies. Our database collects thermal emission measurements of small Solar Systems targets that are otherwise available in scattered sources and gives a complete description of the data, with all information necessary to perform direct scientific analyses and without the need to access additional, external resources. This public database contains representative data of asteroid observations of large surveys (e.g. AKARI, IRAS and WISE) as well as a collection of small body observations of infrared space telescopes (e.g. the Herschel Space Observatory) and provides a web interface to access this data (https://ird.konkoly.hu). We also provide an example for the direct application of the database and show how it can be used to estimate the thermal inertia of specific populations, e.g. asteroids within a given size range. We show how different scalings of thermal inertia with heliocentric distance (i.e. temperature) may affect our interpretation of the data and discuss why the widely-used radiative conductivity exponent ($\alpha$=-3/4) might not be adequate in general, as hinted by previous studies.
In this paper, we introduce the bi-periodic Lucas matrix sequence and present some fundamental properties of this generalized matrix sequence. Moreover, we investigate the important relationships between the bi-periodic Fibonacci and Lucas matrix sequences. We show that some properties of bi-periodic Lucas numbers can also be obtained by considering properties of this new matrix sequence. Finally, we show that the Lucas, $k$-Lucas, and Pell-Lucas matrix sequences are special cases of this generalized matrix sequence.
We review the case for the photon having a tiny mass compatible with the experimental limits. We go over some possible experimental tests for such a photon mass, including the violation of Lorentz symmetry. We point out that such violations may already have been witnessed in tests involving high energy gamma rays from outer space as well as ultra-high-energy cosmic rays.
We demonstrate the use of the Unified Transform Method or Method of Fokas for boundary value problems for systems of constant-coefficient linear partial differential equations. We discuss how the apparent branch singularities typically appearing in the global relation are removable, allowing the method to proceed, in essence, as for scalar problems. We illustrate the use of the method with boundary value problems for the Klein-Gordon equation and the linearized Fitzhugh-Nagumo system. The case of wave equations is treated separately in an appendix.
The Trojan asteroids remain quite poorly understood, yet their physical properties provide a unique perspective on the chemical and dynamical processes that shaped the Solar System. The current study was undertaken to investigate the surface compositions of these objects. We present 66 new near-infrared (NIR; 0.7 to 2.5 microns) spectra of 58 Trojans, including members of both the leading and trailing swarms. We also include in the analysis previously published NIR spectra of 13 Trojans (3 of which overlap with the new sample). This data set permits not only a direct search for compositional signatures, but also a search for patterns that may reveal clues to the origin of the Trojans. We do not report any confirmed absorption features in the new spectra. Analysis of the spectral slopes, however, reveals an interesting bimodality among the NIR data. The two spectral groups identified appear to be equally abundant in the leading and trailing swarms. The spectral groups are not a result of family membership; they occur in the background, non-family population. The average albedos of the two groups are the same within uncertainties ($0.051\pm0.016$ and $0.055\pm0.016$). No correlations between spectral slope and any other physical or orbital parameter are detected, with the exception of a possible weak correlation with inclination among the less-red spectral group. Synthesizing these results with previously published properties, we conclude that the two spectral groups represent objects with different intrinsic compositions. We further suggest that while the less-red group originated near Jupiter or in the main asteroid belt, the redder spectral group originated farther out in the Solar System. If correct, the Trojan swarms offer the most readily accessible large reservoir of Kuiper Belt material as well as a unique reservoir for the study of material from the middle part of the solar nebula.
The success of neural architecture search (NAS) has historically been limited by excessive compute requirements. While modern weight-sharing NAS methods such as DARTS are able to finish the search in single-digit GPU days, extracting the final best architecture from the shared weights is notoriously unreliable. Training-Speed-Estimate (TSE), a recently developed generalization estimator with a Bayesian marginal likelihood interpretation, has previously been used in place of the validation loss for gradient-based optimization in DARTS. This prevents the DARTS skip connection collapse, which significantly improves performance on NASBench-201 and the original DARTS search space. We extend those results by applying various DARTS diagnostics and show several unusual behaviors arising from not using a validation set. Furthermore, our experiments yield concrete examples of the depth gap and topology selection in DARTS having a strongly negative impact on the search performance despite generally receiving limited attention in the literature compared to the operations selection.
We prove a generalization of the author's work to show that any subset of the primes which is `well-distributed' in arithmetic progressions contains many primes which are close together. Moreover, our bounds hold with some uniformity in the parameters. As applications, we show there are infinitely many intervals of length $(\log{x})^{\epsilon}$ containing $\gg_\epsilon \log\log{x}$ primes, and show lower bounds of the correct order of magnitude for the number of strings of $m$ congruent primes with $p_{n+m}-p_n\le \epsilon\log{x}$.
This paper provides an overview of NVIDIA NeMo's neural machine translation systems for the constrained data track of the WMT21 News and Biomedical Shared Translation Tasks. Our news task submissions for English-German (En-De) and English-Russian (En-Ru) are built on top of a baseline transformer-based sequence-to-sequence model. Specifically, we use a combination of 1) checkpoint averaging 2) model scaling 3) data augmentation with backtranslation and knowledge distillation from right-to-left factorized models 4) finetuning on test sets from previous years 5) model ensembling 6) shallow fusion decoding with transformer language models and 7) noisy channel re-ranking. Additionally, our biomedical task submission for English-Russian uses a biomedically biased vocabulary and is trained from scratch on news task data, medically relevant text curated from the news task dataset, and biomedical data provided by the shared task. Our news system achieves a sacreBLEU score of 39.5 on the WMT'20 En-De test set outperforming the best submission from last year's task of 38.8. Our biomedical task Ru-En and En-Ru systems reach BLEU scores of 43.8 and 40.3 respectively on the WMT'20 Biomedical Task Test set, outperforming the previous year's best submissions.
We have obtained Hubble Space Telescope/STIS low-resolution ultraviolet spectra of the X-ray pulsar 4U 1626-67 (=KZ TrA); 4U 1626-67 is unusual even among X-ray pulsars due to its ultra-short binary period (P=41.4 min) and remarkably low mass function (<1.3e-6 Msun). The far-UV spectrum was exposed for a total of 32 ks and has sufficient signal-to-noise to reveal numerous broad emission and prominent narrower absorption lines. Most of the absorption lines are consistent in strength with a purely interstellar origin. However, there is evidence that both CI and CIV require additional absorbing gas local to the system. In emission, the usual prominent lines of NV and HeII are absent, whilst both OIV and OV are relatively strong. We further identify a rarely seen feature at ~1660A as the OIII] multiplet. Our ultraviolet spectra therefore provide independent support for the recent suggestion that the mass donor is the chemically fractionated core of either a C-O-Ne or O-Ne-Mg white dwarf; this was put forward to explain the results of Chandra high-resolution X-ray spectroscopy. The velocity profiles of the ultraviolet lines are in all cases broad and/or flat-topped, or perhaps even double-peaked for the highest ionization cases of O; in either case the ultraviolet line profiles are in broad agreement with the Doppler pairs found in the X-ray spectra. Both the X-ray and far-UV lines are plausibly formed in (or in a corona just above) a Keplerian accretion disc; the combination of ultraviolet and X-ray spectral data may provide a rich data set for follow-on detailed models of the disc dynamics and ionization structure in this highly unusual low-mass X-ray pulsar system.
Transformer large language models (LLMs) have sparked admiration for their exceptional performance on tasks that demand intricate multi-step reasoning. Yet, these models simultaneously show failures on surprisingly trivial problems. This raises the question: are these errors incidental, or do they signal more substantial limitations? In an attempt to demystify transformer LLMs, we investigate the limits of these models across three representative compositional tasks -- multi-digit multiplication, logic grid puzzles, and a classic dynamic programming problem. These tasks require breaking problems down into sub-steps and synthesizing these steps into a precise answer. We formulate compositional tasks as computation graphs to systematically quantify the level of complexity, and break down reasoning steps into intermediate sub-procedures. Our empirical findings suggest that transformer LLMs solve compositional tasks by reducing multi-step compositional reasoning into linearized subgraph matching, without necessarily developing systematic problem-solving skills. To round off our empirical study, we provide theoretical arguments on abstract multi-step reasoning problems that highlight how the performance of autoregressive generation can rapidly decay with increased task complexity.
In this paper we present a particle-number-conserving (PNC) functional formalism to describe the dynamics of a cold bosonic gas. Treating the total number of particles as a constraint, whereby the phase invariance of the theory becomes local in time, we study this U(1) gauge theory using DeWitt's "gauge invariant effective action" techniques. Our functional formulation and earlier PNC proposals are shown to yield equivalent results to next-to-leading order in an expansion in the inverse powers of the total number of particles. In this more general framework we also show that earlier PNC proposals can be seen as different gauge (and gauge fixing condition) choices within the same physical theory.
Honeycomb structure has a natural extension to three dimensions. Simple examples are the hyperhoneycomb and stripy-honeycomb lattices, which are realized in $\beta $-Li$_{2}$IrO$_{3}$ and $\gamma $-Li$_{2}$IrO$_{3}$, respectively. We propose a wide class of three-dimensional (3D) honeycomb lattices which are loop-nodal semimetals. Their edge states have intriguing properties similar to those of the two-dimensional honeycomb lattice, in spite of the dimensional difference. Partial flat bands emerge at the zigzag or beard edge of the 3D honeycomb lattice, whose boundary is given by the Fermi loop in the bulk spectrum. Analytic solutions are explicitly constructed for them. On the other hand, perfect flat bands emerge at the zigzag-beard edge or when the anisotropy is large. All these 3D honeycomb lattices become strong topological insulators with the inclusion of the spin-orbit interaction. Furthermore, point-nodal semimetals may be realized in the presence of both antiferromagnetic order and the spin-orbit interaction.
Computer vision applications such as visual relationship detection and human object interaction can be formulated as a composite (structured) set detection problem in which both the parts (subject, object, and predicate) and the sum (the triplet as a whole) are to be detected in a hierarchical fashion. In this paper, we present a new approach, denoted Part-and-Sum detection Transformer (PST), to perform end-to-end visual composite set detection. Different from existing Transformers, in which queries are at a single level, we simultaneously model the joint part and sum hypotheses/interactions with composite queries and attention modules. We explicitly incorporate sum queries to enable better modeling of the part-and-sum relations that are absent in standard Transformers. Our approach also uses novel tensor-based part queries and vector-based sum queries, and models their joint interaction. We report experiments on two vision tasks, visual relationship detection and human object interaction, and demonstrate that PST achieves state-of-the-art results among single-stage models, while nearly matching the results of custom-designed two-stage models.
This paper proposes a new method to drastically speed up deep reinforcement learning (deep RL) training for problems that have the property of state-action permissibility (SAP). Two types of permissibility are defined under SAP. The first type says that after an action $a_t$ is performed in a state $s_t$ and the agent has reached the new state $s_{t+1}$, the agent can decide whether $a_t$ was permissible or not permissible in $s_t$. The second type says that even without performing $a_t$ in $s_t$, the agent can already decide whether $a_t$ is permissible or not in $s_t$. An action is not permissible in a state if the action can never lead to an optimal solution and thus should not be tried (over and over again). We incorporate the proposed SAP property and encode action-permissibility knowledge into two state-of-the-art deep RL algorithms to guide their state-action exploration, together with a virtual stopping strategy. Results show that the SAP-based guidance can markedly speed up RL training.
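A minimal sketch of how type-2 permissibility could gate exploration, assuming a hypothetical is_permissible(state, action) predicate supplied by domain knowledge; the paper's actual integration into specific deep RL algorithms and its virtual stopping strategy are not reproduced here.

```python
import random

def sap_epsilon_greedy(state, q_values, actions, is_permissible, eps=0.1):
    """Epsilon-greedy action selection restricted to permissible actions.

    q_values: dict mapping action -> estimated Q-value in `state`.
    is_permissible(state, action): hypothetical predicate encoding type-2
    permissibility (decidable before executing the action); post-hoc
    (type-1) checks are not sketched here.
    """
    allowed = [a for a in actions if is_permissible(state, a)]
    if not allowed:            # fall back if the filter removes everything
        allowed = list(actions)
    if random.random() < eps:
        return random.choice(allowed)  # explore, but only permissibly
    return max(allowed, key=lambda a: q_values[a])
```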
Many forecasts consist not of point predictions but concern the evolution of quantities. For example, a central bank might predict the interest rates during the next quarter, an epidemiologist might predict trajectories of infection rates, a clinician might predict the behaviour of medical markers over the next day, etc. The situation is further complicated since these forecasts sometimes only concern the approximate "shape of the future evolution" or "order of events". Formally, such forecasts can be seen as probability measures on spaces of equivalence classes of paths modulo time-parametrization. We leverage the statistical framework of proper scoring rules with classical mathematical results to derive a principled approach to decision making with such forecasts. In particular, we introduce notions of gradients, entropy, and divergence that are tailor-made to respect the underlying non-Euclidean structure.
The stationary points of the total scalar curvature functional on the space of unit volume metrics on a given closed manifold are known to be precisely the Einstein metrics. One may consider the modified problem of finding stationary points for the volume functional on the space of metrics whose scalar curvature is equal to a given constant. In this paper, we localize a condition satisfied by such stationary points to smooth bounded domains. The condition involves a generalization of the static equations, and we interpret solutions (and their boundary values) of this equation variationally. On domains carrying a metric that does not satisfy the condition, we establish a local deformation theorem that allows one to achieve simultaneously small prescribed changes of the scalar curvature and of the volume by a compactly supported variation of the metric. We apply this result to obtain a localized gluing theorem for constant scalar curvature metrics in which the total volume is preserved. Finally, we note that starting from a counterexample of Min-Oo's conjecture such as that of Brendle-Marques-Neves, counterexamples of arbitrarily large volume and different topological types can be constructed.
A system of three point vortices in an unbounded plane has a special family of self-similarly contracting or expanding solutions: during the motion, the vortex triangle remains similar to the original one, while its area decreases (grows) at a constant rate. A contracting configuration brings the three vortices to a single point in a finite time; this phenomenon, known as vortex collapse, is of principal importance for many-vortex systems. The dynamics of close-to-collapse vortex configurations depends on the way the collapse conditions are violated. Using an effective potential representation, a detailed quantitative analysis of all the different types of near-collapse dynamics is performed for the case when two of the vortices are identical. We discuss the time and length scales emerging in the problem, and their behavior as the initial vortex triangle approaches an exact collapse configuration. Different types of critical behavior, such as logarithmic or power-law divergences, are exhibited, which emphasizes the importance of the way the collapse is approached. Period asymptotics for all singular cases are presented as functions of the initial vortex configurations. Special features of passive particle mixing by near-collapse flows are illustrated numerically.
Neural parameter allocation search (NPAS) automates parameter sharing by obtaining weights for a network given an arbitrary, fixed parameter budget. Prior work has two major drawbacks we aim to address. First, there is a disconnect in the sharing pattern between the search and training steps, where weights are warped for layers of different sizes during the search to measure similarity, but not during training, resulting in reduced performance. To address this, we generate layer weights by learning to compose sets of SuperWeights, which represent a group of trainable parameters. These SuperWeights are created to be large enough so they can be used to represent any layer in the network, but small enough that they are computationally efficient. The second drawback we address is the method of measuring similarity between shared parameters. Whereas prior work compared the weights themselves, we argue this does not take into account the amount of conflict between the shared weights. Instead, we use gradient information to identify layers with shared weights that wish to diverge from each other. We demonstrate that our SuperWeight Networks consistently boost performance over the state-of-the-art on the ImageNet and CIFAR datasets in the NPAS setting. We further show that our approach can generate parameters for many network architectures using the same set of weights. This enables us to support tasks like efficient ensembling and anytime prediction, outperforming fully-parameterized ensembles with 17% fewer parameters.
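The core idea of composing layer weights from a small bank of shared templates can be sketched as follows; the template count, sizes, initialization, and the slicing rule are all illustrative assumptions rather than the paper's exact construction.

```python
import math
import torch

class SuperWeightBank(torch.nn.Module):
    """A bank of large trainable templates; each layer's weight tensor is
    cut from one template and reshaped. Illustrative sketch only."""
    def __init__(self, n_templates=4, template_size=1_000_000):
        super().__init__()
        self.templates = torch.nn.Parameter(
            0.01 * torch.randn(n_templates, template_size))

    def layer_weight(self, template_idx, offset, shape):
        # Slice out exactly as many parameters as the layer needs.
        n = math.prod(shape)
        flat = self.templates[template_idx, offset:offset + n]
        return flat.view(*shape)

bank = SuperWeightBank()
w_conv = bank.layer_weight(0, 0, (64, 3, 3, 3))  # e.g. a conv kernel
w_fc = bank.layer_weight(1, 0, (10, 512))        # e.g. a classifier head
```

Because every layer's weight is a view into the shared bank, gradients from all layers flow back into the same templates, which is what makes gradient-based conflict detection between sharing layers possible.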
We show how temperature-induced disorder can be combined in a direct way with first-principles scattering theory to study diffusive transport in real materials. Excellent (good) agreement with experiment is found for the resistivity of Cu, Pd, Pt (and Fe) when lattice (and spin) disorder are calculated from first principles. For Fe, the agreement with experiment is limited by how well the magnetization (of itinerant ferromagnets) can be calculated as a function of temperature. By introducing a simple Debye-like model of spin disorder parameterized to reproduce the experimental magnetization, the temperature dependence of the average resistivity, the anisotropic magnetoresistance and the spin polarization of a Ni$_{80}$Fe$_{20}$ alloy are calculated and found to be in good agreement with existing data. Extension of the method to complex, inhomogeneous materials as well as to the calculation of other finite-temperature physical properties within the adiabatic approximation is straightforward.
The electromagnetic transition of two-level atomic systems in a waveguide is calculated. Compared with the result in free space, the spontaneous emission rate decreases because the phase space is smaller, while resonances appear in some cases. Moreover, the influence of the non-uniform electromagnetic field in a waveguide on absorption and stimulated emission is considered. Applying these results to lasers, we propose a method to enhance the laser power.
We discuss the iron and nickel properties in the nuclear X-ray reflecting region of the Circinus Galaxy, studied with XMM-Newton. The main results are: a) from the depth of the Fe Kalpha edge, a value of A_Fe=1.7 in number with respect to the cosmic value (as in Anders & Grevesse 1989) is measured, if the (not directly visible) illuminating spectrum is assumed to be that measured by BeppoSAX. If the slope of the primary power law is left free to vary, a steeper spectrum and a lower iron abundance (about 1.2) are found. b) From the Ni to Fe Kalpha line flux ratio, a nickel-to-iron abundance ratio of 0.055-0.075 is found. c) The Fe Kbeta/Kalpha flux ratio is slightly lower than expected, possibly due to a mild ionization of iron (which, however, cannot be much more ionized than Fe X). d) The presence of the Fe Kalpha Compton Shoulder, already discovered by Chandra, is confirmed, its relative flux implying Compton-thick matter. This further supports the identification of the reflecting region with the absorber.
The performance of stochastic gradient descent (SGD), the simplest first-order optimizer for training deep neural networks, depends not only on the learning rate but also on the batch size. Both affect the number of iterations and the stochastic first-order oracle (SFO) complexity needed for training. In particular, previous numerical results indicated that, for SGD using a constant learning rate, the number of iterations needed for training decreases as the batch size increases, while the SFO complexity needed for training is minimized at a critical batch size and increases once the batch size exceeds that size. Here, we study the relationship between the batch size and the iteration and SFO complexities needed for nonconvex optimization in deep learning with SGD using constant or decaying learning rates, and show that SGD using the critical batch size minimizes the SFO complexity. We also provide numerical comparisons of SGD with existing first-order optimizers and show the usefulness of SGD using a critical batch size. Moreover, we show that measured critical batch sizes are close to the sizes estimated from our theoretical results.
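Since SFO complexity counts gradient evaluations, i.e. iterations times batch size, an empirical critical batch size can be read off from measured runs as below; the step counts in the example are invented for illustration.

```python
def critical_batch_size(measured_steps):
    """Given {batch_size: iterations needed to reach a target loss},
    return the batch size minimizing SFO complexity = iterations * batch.
    `measured_steps` would come from actual training runs; the values
    used below are purely illustrative."""
    sfo = {b: steps * b for b, steps in measured_steps.items()}
    return min(sfo, key=sfo.get), sfo

b_star, sfo = critical_batch_size(
    {32: 20000, 64: 9000, 128: 4600, 256: 2600, 512: 1900})
print(b_star)  # the interior minimizer of the iterations-times-batch product
```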
Due to their inherent compliance, soft robots are more versatile than rigid-linked robots when interacting with their environment, for example in object manipulation or biomimetic motion, and are considered a key element in introducing robots to everyday environments. Although various soft robotic actuators exist, past research has focused primarily on designing and analyzing single components. Limited effort has been made to combine components into an overall capable, integrated soft robot. Ideally, the behavior of such a robot can be accurately modeled, and its motion within an environment estimated using its proprioception, without requiring external sensors. This work presents a design and modeling process for a Soft continuum Proprioceptive Arm (SoPrA) actuated by pneumatics. The integrated design is suitable for an analytical model due to its internal capacitive flex sensor for proprioceptive measurements and its fiber-reinforced fluidic elastomer actuators. The proposed analytical dynamical model accounts for the inertial effects of the actuator's mass and the material properties, and predicts the soft robot's behavior in real time. Our estimation method integrates the analytical model with proprioceptive sensors to calculate external forces, all without relying on an external motion capture system. SoPrA is validated in a series of experiments demonstrating the model's and the sensor's estimation accuracy. SoPrA will enable soft arm manipulation, including force sensing, while operating in obstructed environments that disallow exteroceptive measurements.
In this paper we present a combinatorial proof of a relation between the generating functions of unicellular and bicellular maps. This relation is a consequence of the Schwinger-Dyson equation of matrix theory. Alternatively it can be proved using representation theory of the symmetric group. Here we give a bijective proof by rewiring unicellular maps of topological genus $(g+1)$ into bicellular maps of genus $g$ and pairs of unicellular maps of lower topological genera. Our result has immediate consequences for the folding of RNA interaction structures, since the time complexity of folding the transformed structure is $O((n+m)^5)$, where $n,m$ are the lengths of the respective backbones, while the folding of the original structure has $O(n^6)$ time complexity, where $n$ is the length of the longer sequence.
There is a growing interest in social robots being considered in the therapy of children with autism due to their effectiveness in improving outcomes. However, children on the spectrum exhibit challenging behaviors that need to be considered when designing robots for them. A child could involuntarily throw a small social robot during a meltdown, and it could hit another person's head and cause harm (e.g. concussion). In this paper, the application of soft materials is investigated for its potential in attenuating the head's linear acceleration upon impact. The thickness and storage modulus of three different soft materials were considered as the control factors, while the noise factor was the impact velocity. The design of experiments was based on the Taguchi method. A total of 27 experiments were conducted on a developed dummy-head setup that reports the linear acceleration of the head. ANOVA tests were performed to analyze the data. The findings showed that the control factors are not statistically significant in attenuating the response. The optimal values of the control factors were identified using the signal-to-noise (S/N) ratio optimization technique. Confirmation runs at the optimal parameters (i.e. thickness of 3 mm and 5 mm) showed a better response compared to other conditions. Designers of social robots should consider applying soft materials to their designs, as this helps reduce the potential harm to the head.
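For reference, the smaller-the-better S/N ratio used in this kind of Taguchi analysis is S/N = -10 log10(mean(y^2)); the sketch below computes it for a set of hypothetical peak-acceleration readings, not the paper's measurements.

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio, in dB:
    S/N = -10 * log10(mean(y^2)). Here y holds peak head accelerations
    for repeated runs of one factor combination (illustrative values)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

print(sn_smaller_is_better([52.1, 49.8, 55.3]))  # higher S/N = better attenuation
```

The factor levels maximizing the S/N ratio across the orthogonal array are then taken as the optimal parameters, which is the optimization step the abstract refers to.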
We investigate the dynamics of clumps that coexist with/in advection-dominated accretion flows (ADAFs) by considering thermal conductivity. Thermal conduction can be one of the effective factors in the energy transport of ADAFs; hence it may indirectly affect the dynamics of clumps by means of a contact force between them and their host medium. We first study the ensemble of clumps by treating them as collision-less particles, and second we find the orbital motion of these clouds as individuals. For both parts, clumps are subject to the gravity of the central object and a drag force. The strong coupling between the clumps and the ADAF leads to equality between the averaged treatment of the clumps and the dynamics of their background. By employing the collision-less Boltzmann equation, we calculate the velocity dispersion of the clumps, which turns out to be approximately one order of magnitude higher than that of the ADAF. In fact, including the drag force in such a system means that the angular momentum of the clumps can be transported outwards by the ADAF, and hence the clouds are eventually captured at the tidal radius. The results show that the presence of thermal conduction increases the root-mean-square radial velocity, and this in turn speeds up the capture of the clouds through the tidal force. Finally, we focus on a typical individual cloud; spiral orbits appear thanks to the toroidal component of the friction force alone. The parametric study again shows that thermal conduction helps to decrease the lifetime of clumps.
Three-dimensional (3D) magnetic nulls are abundant in the solar atmosphere, as has been firmly established through contemporary observations. They are established to be important magnetic structures in, for example, jets and circular ribbon flares. While simulations and extrapolations support this, the mechanisms behind 3D null generation remain an open question. Recent magnetohydrodynamics (MHD) simulations propose that magnetic reconnection is responsible for both generating and annihilating 3D nulls, a novel concept. However, these simulations began with initial magnetic fields already supporting pre-existing nulls, raising the question of whether magnetic reconnection can create nulls in fields initially devoid of them. Previously, this question was briefly explored in a simulation with an initial chaotic magnetic field. However, that study failed to precisely identify the locations, topological degrees, and natures (spiral or radial) of the nulls, and it approximated magnetic reconnection without fully tracking field lines in time. In this paper these findings are revisited in light of recent advancements and tools used to locate and trace nulls, along with the tracing of field lines, through which the concept of generation/annihilation of 3D nulls from chaotic fields is established in a precise manner.