We present a calculus that models a form of process interaction based on copyless message passing, in the style of Singularity OS. The calculus is equipped with a type system ensuring that well-typed processes are free from memory faults, memory leaks, and communication errors. The type system is essentially linear, but we show that linearity alone is inadequate, because it leaves room for scenarios where well-typed processes leak significant amounts of memory. We address these problems by basing the type system on an original variant of session types.
Binary populations in young star clusters show multiplicity fractions both lower than and up to twice as high as those observed in the Galactic field. We follow the evolution of a population of binary stars in dense and loose star clusters, starting with an invariant initial binary population and a formal multiplicity fraction of unity, and demonstrate that these models can explain the observed binary properties in Taurus, Rho-Ophiuchus, Chamaeleon, Orion, IC 348, Upper Scorpius A, Praesepe, and the Pleiades. The model needs only different birth densities for these regions. The evolved theoretical orbital-parameter distributions are highly probable parent distributions for the observed ones. We constrain the birth conditions (stellar mass, M_ecl, and half-mass radius, r_h) for the derived progenitors of the star clusters and the overall present-day binary fractions allowed by the present model. The results compare very well with properties of molecular cloud clumps on the verge of star formation. Combining these with previously and independently obtained constraints on the birth densities of globular clusters, we identify a weak stellar mass -- half-mass radius correlation for cluster-forming cloud clumps, r_h / pc ~ (M_ecl / M_sun)^(0.13+-0.04). The ability of the model to reproduce the binary properties in all the investigated young objects, covering present-day densities from 1-10 stars pc^-3 (Taurus) to 2x10^4 stars pc^-3 (Orion), suggests that environment-dependent dynamical evolution plays an important role in shaping the present-day properties of binary populations in star clusters, and that the initial binary properties may not vary dramatically between different environments.
Compressible isothermal turbulence is analyzed under the assumption of homogeneity and in the asymptotic limit of a high Reynolds number. An exact relation is derived for some two-point correlation functions which reveals a fundamental difference with the incompressible case. The main difference resides in the presence of a new type of term which acts on the inertial range as a source or a sink for the mean energy transfer rate. When isotropy is assumed, compressible turbulence may be described by the relation $- \frac{2}{3} \epsilon_{\rm eff}\, r = {\cal F}_r(r)$, where ${\cal F}_r$ is the radial component of the two-point correlation functions and $\epsilon_{\rm eff}$ is an effective mean total energy injection rate. By dimensional arguments we predict that a spectrum in $k^{-5/3}$ may still be preserved at small scales if the density-weighted fluid velocity, $\rho^{1/3} \mathbf{u}$, is used.
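The $k^{-5/3}$ prediction follows from a Kolmogorov-style dimensional argument; the sketch below is our paraphrase, under the assumption that the source/sink term is subdominant in the inertial range:

```latex
% Our paraphrase of the dimensional argument. Assuming the source/sink term
% is subdominant, the exact relation reduces to -(2/3) eps_eff r = F_r(r).
% Increments of the density-weighted velocity w = rho^{1/3} u then scale as
\delta w(r) \sim \left( \epsilon_{\rm eff}\, r \right)^{1/3},
% and the usual Kolmogorov estimate E(k) ~ (\delta w)^2 / k gives
E(k) \sim \epsilon_{\rm eff}^{2/3}\, k^{-5/3}.
```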
Most weakly supervised semantic segmentation (WSSS) methods follow a pipeline that first generates pseudo-masks and then trains the segmentation model with those pseudo-masks in a fully supervised manner. However, we identify several issues with the pseudo-masks, including generating high-quality pseudo-masks from class activation maps (CAMs) and training with noisy pseudo-mask supervision. To address these issues, we propose the following designs to push the performance to a new state of the art: (i) Coefficient of Variation Smoothing to smooth the CAMs adaptively; (ii) Proportional Pseudo-mask Generation to project the expanded CAMs to pseudo-masks based on a new metric indicating the importance of each class at each location, instead of scores trained from binary classifiers; (iii) a Pretended Under-Fitting strategy to suppress the influence of noise in the pseudo-masks; (iv) Cyclic Pseudo-masks to boost the pseudo-masks during training of fully supervised semantic segmentation (FSSS). Experiments based on our methods achieve new state-of-the-art results on two challenging weakly supervised semantic segmentation datasets, pushing the mIoU to 70.0% and 40.2% on PASCAL VOC 2012 and MS COCO 2014, respectively. Code, including the segmentation framework, is released at https://github.com/Eli-YiLi/PMM
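As an illustration of design (i), here is a minimal numpy sketch of one plausible reading of Coefficient of Variation Smoothing; the abstract does not give the exact rule, so the CoV-dependent exponent below is purely our assumption:

```python
import numpy as np

def cov_smooth_cams(cams, eps=1e-8):
    """Hedged sketch: adaptively smooth CAMs using the coefficient of
    variation (CoV = std/mean) of each class map. The paper's exact rule is
    not specified in the abstract; here we raise each normalized CAM to a
    CoV-dependent power so that maps with high CoV (peaky activations) are
    flattened more strongly.

    cams: array of shape (num_classes, H, W), non-negative activations.
    """
    smoothed = np.empty_like(cams, dtype=float)
    for c, cam in enumerate(cams):
        cov = cam.std() / (cam.mean() + eps)       # dispersion of this map
        norm = cam / (cam.max() + eps)             # scale to [0, 1]
        smoothed[c] = norm ** (1.0 / (1.0 + cov))  # higher CoV => flatter
    return smoothed

# Example: two random 4x4 "CAMs"
print(cov_smooth_cams(np.random.rand(2, 4, 4)).shape)  # (2, 4, 4)
```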
Information seeking process is an important topic in information seeking behavior research. Both qualitative and empirical methods have been adopted in analyzing information seeking processes, with major focus on uncovering the latent search tactics behind user behaviors. Most of the existing works require defining search tactics in advance and coding data manually. Among the few works that can recognize search tactics automatically, they missed making sense of those tactics. In this paper, we proposed using an automatic technique, i.e. the Hidden Markov Model (HMM), to explicitly model the search tactics. HMM results show that the identified search tactics of individual information seeking behaviors are consistent with Marchioninis Information seeking process model. With the advantages of showing the connections between search tactics and search actions and the transitions among search tactics, we argue that HMM is a useful tool to investigate information seeking process, or at least it provides a feasible way to analyze large scale dataset.
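For concreteness, decoding the most likely tactic sequence from observed search actions under a fitted HMM is a standard Viterbi computation; the sketch below uses illustrative tactic/action sets and parameters, not the paper's coding scheme:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden tactic sequence for an observed action sequence.
    pi: (K,) initial tactic probabilities; A: (K,K) tactic transitions;
    B: (K,M) emission probabilities of search actions given tactics.
    All names here are illustrative, not from the paper."""
    K, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])       # log-probabilities
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)         # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                  # backtrace
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: 3 latent tactics, 4 observable actions (query, click, ...)
pi = np.array([0.6, 0.3, 0.1])
A  = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6]])
B  = np.array([[0.5, 0.3, 0.1, 0.1], [0.1, 0.1, 0.4, 0.4], [0.25] * 4])
print(viterbi([0, 2, 3, 1], pi, A, B))
```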
High-performance, high-volume-manufacturing Si3N4 photonics requires extremely low waveguide losses augmented with heterogeneously integrated lasers for applications beyond the traditional markets of high-capacity interconnects. State-of-the-art quality factors (Q) over 200 million at 1550 nm have been shown previously; however, maintaining high Q throughout laser fabrication has not been demonstrated. Here, Si3N4 resonator intrinsic Qs over 100 million are demonstrated on a fully integrated heterogeneous laser platform. The intrinsic Q is measured throughout the laser processing steps, showing degradation down to 50 million from dry etching, metal evaporation, and ion implantation steps, and controllable recovery to over 100 million after annealing at 250-350 C.
We report a measurement of the radium ion's $7p$ ${}^{2}P_{3/2}$ state lifetime, $\tau=4.78(3)$ ns. The measured lifetime is in good agreement with theoretical calculations, and will enable a determination of the differential scalar polarizability of the narrow linewidth $7s$ ${}^{2}S_{1/2}\rightarrow$ $6d$ ${}^{2}D_{5/2}$ optical clock transition.
The one-body tunnel picture of single-molecule magnets (SMMs) is not always sufficient to explain the measured tunnel transitions. An improvement to the picture is proposed by also including two-body tunnel transitions, such as spin-spin cross-relaxation (SSCR), which are mediated by dipolar and weak superexchange interactions between molecules. A Mn4 SMM is used as a model system. At certain external fields, SSCRs lead to additional quantum resonances which show up in hysteresis loop measurements as well-defined steps.
We introduce the notion of generalised Gorenstein spin structure on a curve and we give an explicit description of the associated section ring for curves of genus two with ample canonical bundle, obtaining five different formats.
In binaries composed of either early-type stars or white dwarfs, the dominant tidal process involves the excitation of internal gravity waves (IGWs), which propagate towards the stellar surface, and their dissipation via nonlinear wave breaking. We perform 2D hydrodynamical simulations of this wave breaking process in a stratified, isothermal atmosphere. We find that, after an initial transient phase, the dissipation of the IGWs naturally generates a sharp critical layer, separating the lower stationary region (with no mean flow) and the upper "synchronized" region (with the mean flow velocity equal to the horizontal wave phase speed). While the critical layer is steepened by absorption of these waves, it is simultaneously broadened by Kelvin-Helmholtz instabilities such that, in steady state, the critical layer width is determined by the Richardson criterion. We study the absorption and reflection of incident waves off the critical layer and provide analytical formulae describing its long-term evolution. The result of this study is important for characterizing the evolution of tidally heated white dwarfs and other binary stars.
We describe a UHV setup for grazing incidence fast atom diffraction (GIFAD) experiments. The overall geometry is simply a source of keV atoms facing an imaging detector. It is therefore very similar to the geometry of RHEED experiments (reflection high-energy electron diffraction), used to monitor growth at surfaces. Several custom instrumental developments that make GIFAD operation efficient and straightforward are described. The difficulties associated with accurately measuring the small scattering angle and the related calibration are carefully analyzed.
The vapor-liquid critical behavior of intrinsically asymmetric fluids is studied in finite systems of linear dimensions, $L$, focusing on periodic boundary conditions, as appropriate for simulations. The recently propounded ``complete'' thermodynamic $(L\to\infty)$ scaling theory incorporating pressure mixing in the scaling fields as well as corrections to scaling [arXiv:cond-mat/0212145], is extended to finite $L$, initially in a grand canonical representation. The theory allows for a Yang-Yang anomaly in which, when $L\to\infty$, the second temperature derivative, $(d^{2}\mu_{\sigma}/dT^{2})$, of the chemical potential along the phase boundary, $\mu_{\sigma}(T)$, diverges when $T\to T_{c}^{-}$. The finite-size behavior of various special {\em critical loci} in the temperature-density or $(T,\rho)$ plane, in particular, the $k$-inflection susceptibility loci and the $Q$-maximal loci -- derived from $Q_{L}(T,\langle\rho\rangle_{L}) \equiv \langle m^{2}\rangle^{2}_{L}/\langle m^{4}\rangle_{L}$ where $m \equiv \rho - \langle\rho\rangle_{L}$ -- is carefully elucidated and shown to be of value in estimating $T_{c}$ and $\rho_{c}$. Concrete illustrations are presented for the hard-core square-well fluid and for the restricted primitive model electrolyte including an estimate of the correlation exponent $\nu$ that confirms Ising-type character. The treatment is extended to the canonical representation where further complications appear.
Modularity for multilayer networks, also called multislice modularity, is parametric to a resolution factor and an inter-layer coupling factor. The former is useful to express layer-specific relevance and the latter quantifies the strength of node linkage across the layers of a network. However, such parameters can be set arbitrarily, thus discarding any structural information at the graph or community level. Other issues relate to the inability to properly model order relations over the layers, which is required for dynamic networks. In this paper we propose a new definition of modularity for multilayer networks that aims to overcome major issues of existing multislice modularity. We revise the role and semantics of the layer-specific resolution and inter-layer coupling terms, and define parameter-free unsupervised approaches for their computation, using information from the within-layer and inter-layer structures of the communities. Moreover, our formulation of multilayer modularity is general enough to account for an available ordering of the layers and related constraints on layer coupling. Experimental evaluation was conducted using three state-of-the-art methods for multilayer community detection and nine real-world multilayer networks. The results show the significance of our modularity, disclosing the effects of different combinations of the resolution and inter-layer coupling functions. This work can pave the way for the development of new optimization methods for discovering community structures in multilayer networks.
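For reference, the classic multislice modularity that this work revises (Mucha et al.) can be computed directly. The sketch below is a naive implementation for ordered layers with uniform coupling; the interface, the per-layer resolution gamma, and the coupling omega are made explicit, but all names are ours, not the paper's:

```python
import numpy as np

def multislice_modularity(A, communities, gamma, omega):
    """Hedged sketch of classic multislice modularity for ordered layers
    coupled uniformly with strength omega (the baseline this paper revises).
    A: list of (N, N) symmetric adjacency matrices, one per layer.
    communities: (L, N) array, communities[s, i] = community of node i in layer s.
    gamma: per-layer resolution (scalar or length-L array).
    omega: coupling between copies of a node in adjacent layers."""
    L, N = len(A), A[0].shape[0]
    gamma = np.broadcast_to(np.asarray(gamma, dtype=float), (L,))
    two_mu = sum(As.sum() for As in A) + 2.0 * omega * N * (L - 1)
    Q = 0.0
    for s, As in enumerate(A):                      # intra-layer terms
        k, two_m = As.sum(axis=1), As.sum()
        for i in range(N):
            for j in range(N):
                if communities[s, i] == communities[s, j]:
                    Q += As[i, j] - gamma[s] * k[i] * k[j] / two_m
    for s in range(L - 1):                          # inter-layer coupling
        same = communities[s] == communities[s + 1]
        Q += 2.0 * omega * same.sum()
    return Q / two_mu

# Toy: two identical layers of a 4-node graph, one community split
A0 = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], dtype=float)
comms = np.array([[0, 0, 0, 1], [0, 0, 0, 1]])
print(multislice_modularity([A0, A0], comms, gamma=1.0, omega=0.5))
```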
Motivated by the results of Wu-Yau-Zheng \cite{WuYauZheng}, we show that under a certain curvature assumption the harmonic representative of any boundary class of the K\"ahler cone is nonnegative.
We briefly show how the use of topological spaces and $\sigma$-algebras in physics can be rederived and understood as the fundamental requirement of experimental verifiability. We will see that a set of experimentally distinguishable objects will necessarily be endowed with a topology that is Kolmogorov (i.e. $T_0$) and second countable, which both puts constraints on well-formed scientific theories and allows us to give concrete physical meaning to the mathematical constructs. These insights can be taken as a first step in a general mathematical theory for experimental science.
Degradation in the performance of air conditioners and refrigerators is caused by frost formation and adhesion on their surfaces. In the present study, by means of classical molecular dynamics simulation, we investigate how, and how much, nanotextured surface characteristics, such as surface wettability and geometry, influence the interfacial thermal resistance (ITR) between the solid wall and the water/ice. The ITR of the interfacial region was comparable in the water and ice states. As the nanostructure gaps became narrower, the ITR of the interfacial region decreased. The local ITR had a weak negative correlation with the local H2O molecule density regardless of the phase of the H2O molecules: the local ITR decreased as the local density increased. More thermal energy was transferred through the material interface via intermolecular interactions when more H2O molecules were located in the proximity area, i.e., closer to the Pt solid wall than the first adsorption-layer peak. When the H2O molecules were in crystal form on the solid wall, the number of proximity molecules decreased and the local ITR significantly increased.
This work extends the analysis of the theoretical results presented in the paper "Is Q-Learning Provably Efficient?" by Jin et al. We include a survey of related research to contextualize the need for strengthening the theoretical guarantees in perhaps the most important threads of model-free reinforcement learning. We also expound upon the reasoning used in the proofs to highlight the critical steps leading to the main result, which shows that Q-learning with UCB exploration achieves a sample efficiency matching the optimal regret achievable by any model-based approach.
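As a companion to that discussion, here is a minimal tabular sketch of the algorithm analyzed (episodic Q-learning with a UCB-Hoeffding bonus). The environment interface and the constant c are our illustrative assumptions, not Jin et al.'s code:

```python
import numpy as np

def q_learning_ucb(env, S, A, H, K, c=1.0, p=0.05):
    """Hedged sketch of Q-learning with UCB-Hoeffding exploration.
    env.reset() -> s and env.step(h, s, a) -> (r, s_next) are an assumed
    interface for an episodic MDP with S states, A actions, horizon H,
    run for K episodes."""
    Q = np.full((H, S, A), float(H))            # optimistic initialization
    N = np.zeros((H, S, A), dtype=int)
    iota = np.log(S * A * H * K / p)            # log factor in the bonus
    for _ in range(K):
        s = env.reset()
        for h in range(H):
            a = int(Q[h, s].argmax())           # greedy w.r.t. optimistic Q
            r, s_next = env.step(h, s, a)
            N[h, s, a] += 1
            t = N[h, s, a]
            alpha = (H + 1) / (H + t)           # the paper's learning rate
            bonus = c * np.sqrt(H**3 * iota / t)
            v_next = min(H, Q[h + 1, s_next].max()) if h + 1 < H else 0.0
            Q[h, s, a] += alpha * (r + v_next + bonus - Q[h, s, a])
            s = s_next
    return Q
```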
Perceiving 3D structures from RGB images based on CAD model primitives can enable an effective, efficient 3D object-based representation of scenes. However, current approaches rely on supervision from expensive annotations of CAD models associated with real images, and encounter challenges due to the inherent ambiguities in the task -- both the depth-scale ambiguity of monocular perception and the inexact matches of CAD database models to real observations. We thus propose DiffCAD, the first weakly-supervised probabilistic approach to CAD retrieval and alignment from an RGB image. We formulate this as a conditional generative task, leveraging diffusion to learn implicit probabilistic models capturing the shape, pose, and scale of CAD objects in an image. This enables multi-hypothesis generation of different plausible CAD reconstructions, requiring only a few hypotheses to characterize ambiguities in depth/scale and inexact shape matches. Our approach is trained only on synthetic data, leveraging monocular depth and mask estimates to enable robust zero-shot adaptation to various real target domains. Despite being trained solely on synthetic data, our multi-hypothesis approach can even surpass the supervised state of the art on the Scan2CAD dataset by 5.9% with 8 hypotheses.
Generative artificial intelligence (GenAI), exemplified by ChatGPT, Midjourney, and other state-of-the-art large language models and diffusion models, holds significant potential for transforming education and enhancing human productivity. While the prevalence of GenAI in education has motivated numerous research initiatives, integrating these technologies within the learning analytics (LA) cycle and their implications for practical interventions remain underexplored. This paper delves into the prospective opportunities and challenges GenAI poses for advancing LA. We present a concise overview of the current GenAI landscape and contextualise its potential roles within Clow's generic framework of the LA cycle. We posit that GenAI can play pivotal roles in analysing unstructured data, generating synthetic learner data, enriching multimodal learner interactions, advancing interactive and explanatory analytics, and facilitating personalisation and adaptive interventions. As the lines blur between learners and GenAI tools, a renewed understanding of learners is needed. Future research can delve deep into frameworks and methodologies that advocate for human-AI collaboration. The LA community can play a pivotal role in capturing data about human and AI contributions and exploring how they can collaborate most effectively. As LA advances, it is essential to consider the pedagogical implications and broader socioeconomic impact of GenAI for ensuring an inclusive future.
Variational Autoencoders (VAEs) are powerful for data representation inference, but they cannot learn relations between features in their vanilla form or common variations. The ability to capture relations within data can provide the much-needed inductive bias necessary for building more robust machine learning algorithms with more interpretable results. In this paper, inspired by recent advances in relational learning using Graph Neural Networks, we propose the Self-Attention Graph Variational AutoEncoder (SAG-VAE) network, which can simultaneously learn feature relations and data representations in an end-to-end manner. SAG-VAE is trained by jointly inferring the posterior distribution of two types of latent variables, which denote the data representation and a shared graph structure, respectively. Furthermore, we introduce a novel self-attention graph network that improves the generative capabilities of SAG-VAE by parameterizing the generative distribution, allowing SAG-VAE to generate new data via graph convolution while remaining trainable via backpropagation. A learnable relational graph representation enhances SAG-VAE's robustness to perturbation and noise, while also providing deeper intuition into model performance. Experiments on graphs show that SAG-VAE is capable of approximately retrieving edges and links between nodes based entirely on feature observations. Finally, results on image data illustrate that SAG-VAE is fairly robust against perturbations in image reconstruction and sampling.
The Unification Model for active galactic nuclei posits that Seyfert 2s are intrinsically like Seyfert 1s, but that their broad-line regions (BLRs) are hidden from our view. A Seyfert 2 nucleus that truly lacked a BLR, instead of simply having it hidden, would be a so-called "true" Seyfert 2. No object has as yet been conclusively proven to be one. We present a detailed analysis of four of the best "true" Seyfert 2 candidates discovered to date: IC 3639, NGC 3982, NGC 5283, and NGC 5427. None of the four has a broad H-alpha emission line, either in direct or polarized light. All four have rich, high-excitation spectra, blue continua, and Hubble Space Telescope (HST) images showing them to be unresolved sources with no host-galaxy obscuration. To check for possible obscuration on scales smaller than that resolvable by HST, we obtained X-ray observations using the Chandra X-ray Observatory. All four objects show evidence of obscuration and therefore could have hidden BLRs. The picture that emerges is of moderate to high, but not necessarily Compton-thick, obscuration of the nucleus, with extra-nuclear soft emission extended on the hundreds-of-parsecs scale that may originate in the narrow-line region. Since the extended soft emission compensates, in part, for the nuclear soft emission lost to absorption, both absorption and luminosity are likely to be severely underestimated unless the X-ray spectrum is of sufficient quality to distinguish the two components. This is of special concern where the source is too faint to produce a large number of counts, or where the source is too far away to resolve the extended soft X-ray emitting region.
Plasticity in body-centred cubic (BCC) metals, including dislocation interactions at grain boundaries, is much less understood than in face-centred cubic (FCC) metals. At low temperatures, additional resistance to dislocation motion due to the Peierls barrier becomes important, which increases the complexity of plasticity. Iron-silicon steel is an interesting model BCC material, since the evolution of the dislocation structure in specifically-oriented grains and at particular grain boundaries has far-reaching effects not only on the deformation behaviour but also on the magnetic properties, which are important in its final application as electrical steel. In this study, two different orientations of micropillars (1, 2, 4 microns in diameter) and macropillars (2500 microns) and their corresponding bi-crystals are analysed after compression experiments with respect to the effect of size on strength and dislocation structures. Using different experimental methods, such as slip trace analysis, plane tilt analysis and cross-sectional EBSD, we show that direct slip transmission occurs, and that different slip systems are active in the bi-crystals compared to their single-crystal counterparts. However, in spite of direct transmission and a very high transmission factor, dislocation pile-up at the grain boundary is also observed at early stages of deformation. Moreover, an effect of size, scaling with the pillar size in single crystals and the grain size in bi-crystals, is found, which is consistent with investigations elsewhere in FCC metals.
Principles, Techniques and Practice of Spreadsheet Style
A graph is called chordal if it forbids induced cycles of length 4 or more. In this paper, we attempt to identify the non-nilpotent groups whose power graph is a chordal graph (this question was raised by Cameron in [4]). In this direction, we characterise the direct product of finite groups having chordal power graphs. We classify all finite simple groups of Lie type whose power graph is chordal. Further, we prove that the power graph of a sporadic simple group is always non-chordal. In addition, we show that almost all groups of order up to 47 have chordal power graphs.
We study the quantization of some cosmological models within the theory of N=1 supergravity with a positive cosmological constant. We find, by imposing the supersymmetry and Lorentz constraints, that there are no physical states in the models we have considered. For the k=1 Friedmann-Robertson-Walker model, where the fermionic degrees of freedom of the gravitino field are very restricted, we have found two bosonic quantum physical states, namely the wormhole and the Hartle-Hawking state. From the point of view of perturbation theory, it seems that the gravitational and gravitino modes that are allowed to be excited in a supersymmetric Bianchi-IX model contribute in such a way to forbid any physical solutions of the quantum constraints. This suggests that in a complete perturbation expansion we would have to conclude that the full theory of N=1 supergravity with a non-zero cosmological constant should have no physical states.
To mitigate the performance gap between the CPU and main memory, multi-level cache architectures are widely used in modern processors. Therefore, modeling the behavior of the downstream caches becomes a critical part of processor performance evaluation in the early stage of Design Space Exploration (DSE). In this paper, we propose a fast and accurate L2 cache reuse distance histogram model, which can be used to predict the behavior of multi-level cache architectures where the L1 cache uses the LRU replacement policy and the L2 cache uses LRU/Random replacement policies. As inputs we use the profiled L1 reuse distance histogram and two newly proposed metrics, namely the RST table and the Hit-RDH, which describe the software traces in more detail. For a given L1 cache configuration, the profiling results can be reused for different L2 cache configurations. The output of our model is the L2 cache reuse distance histogram, from which the L2 cache miss rates can be evaluated. We compare the L2 cache miss rates with results from gem5 cycle-accurate simulations of 15 benchmarks from SPEC CPU 2006 and 9 benchmarks from SPEC CPU 2017. The average absolute error is less than 5%, while the evaluation of each L2 configuration is sped up almost 30X for four L2 cache candidates.
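To make the central data structure concrete, here is a naive sketch of how a reuse (LRU stack) distance histogram is profiled from an address trace; the paper's RST table and Hit-RDH refine this kind of profile, and their exact construction is not reproduced here:

```python
from collections import defaultdict

def reuse_distance_histogram(trace):
    """Reuse distance of an access = number of distinct addresses touched
    since the previous access to the same address ('inf' on first use).
    Naive O(N) per access via an explicit LRU stack; real profilers use
    tree-based structures for speed."""
    stack = []                      # most recently used address at the end
    hist = defaultdict(int)
    for addr in trace:
        if addr in stack:
            depth = len(stack) - 1 - stack.index(addr)
            hist[depth] += 1
            stack.remove(addr)
        else:
            hist["inf"] += 1        # compulsory miss
        stack.append(addr)
    return dict(hist)

# Trace 'abcabba' -> {'inf': 3, 2: 2, 0: 1, 1: 1}
print(reuse_distance_histogram("abcabba"))
```

For an LRU cache of associativity W, accesses with distance < W hit, which is why the histogram suffices to evaluate miss rates for many candidate configurations from one profiling pass.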
Microlensing of stars, e.g. in the Galactic bulge and Andromeda galaxy (M31), is among the most robust and powerful methods to constrain primordial black holes (PBHs), which are a viable candidate for dark matter. If PBHs are in the mass range $M_{\rm PBH} \lesssim 10^{-10}M_\odot$, their Schwarzschild radius ($r_{\rm Sch}$) becomes comparable with or shorter than the optical wavelength ($\lambda$) used in a microlensing search, and in this regime the wave optics effect on microlensing needs to be taken into account. For a lensing PBH with mass satisfying $r_{\rm Sch}\sim \lambda$, it causes a characteristic oscillatory feature in the microlensing light curve, which would give smoking-gun evidence of a PBH if detected, because no astrophysical object can have such a tiny Schwarzschild radius. Even in a statistical study, e.g. constraining the abundance of PBHs from a systematic search of microlensing events for a sample of many source stars, the wave effect needs to be taken into account. We examine the impact of the wave effect on the PBH constraints obtained from the $r$-band (6210\AA) monitoring observation of M31 stars in Niikura et al. (2019), and find that a finite source size effect is dominant over the wave effect for PBHs in the mass range $M_{\rm PBH}\simeq[10^{-11},10^{-10}]M_\odot$. We also discuss that, if a denser-cadence (10~sec), $g$-band monitoring observation for a sample of white dwarfs over a year timescale is available, it would allow one to explore the wave optics effect on the microlensing light curve, if it occurs, or improve the PBH constraints in $M_{\rm PBH}\lesssim 10^{-11}M_\odot$ even from a null detection.
Causal decision making (CDM) based on machine learning has become a routine part of business. Businesses algorithmically target offers, incentives, and recommendations to affect consumer behavior. Recently, we have seen an acceleration of research related to CDM and causal effect estimation (CEE) using machine-learned models. This article highlights an important perspective: CDM is not the same as CEE, and counterintuitively, accurate CEE is not necessary for accurate CDM. Our experience is that this is not well understood by practitioners or most researchers. Technically, the estimand of interest is different, and this has important implications both for modeling and for the use of statistical models for CDM. We draw on prior research to highlight three implications. (1) We should consider carefully the objective function of the causal machine learning, and if possible, optimize for accurate treatment assignment rather than for accurate effect-size estimation. (2) Confounding does not have the same effect on CDM as it does on CEE. The upshot is that for supporting CDM it may be just as good or even better to learn with confounded data as with unconfounded data. Finally, (3) causal statistical modeling may not be necessary to support CDM because a proxy target for statistical modeling might do as well or better. This third observation helps to explain at least one broad common CDM practice that seems wrong at first blush: the widespread use of non-causal models for targeting interventions. The last two implications are particularly important in practice, as acquiring (unconfounded) data on all counterfactuals can be costly and often impracticable. These observations open substantial research ground. We hope to facilitate research in this area by pointing to related articles from multiple contributing fields, including two dozen articles published in the last three to four years.
For the sake of protecting data privacy and due to the rapid development of mobile devices, e.g., powerful central processing units (CPUs) and nascent neural processing units (NPUs), collaborative machine learning on mobile devices, e.g., federated learning, has been envisioned as a new AI approach with broad application prospects. However, the learning process of existing federated learning platforms relies on direct communication between the model owner, e.g., central cloud or edge server, and the mobile devices for transferring the model update. Such direct communication may be energy inefficient or even unavailable in mobile environments. In this paper, we consider adopting a relay network to construct a cooperative communication platform for supporting model update transfer and trading. In the system, the mobile devices generate model updates based on their training data. The model updates are then forwarded to the model owner through the cooperative relay network. The model owner enjoys the learning service provided by the mobile devices. In return, the mobile devices charge the model owner certain prices. Due to the coupled interference of wireless transmission among mobile devices that use the same relay node, the rational mobile devices have to choose their relay nodes as well as decide on their transmission powers. Thus, we formulate a Stackelberg game model to investigate the interaction among the mobile devices and that between the mobile devices and the model owner. The Stackelberg equilibrium is investigated by capitalizing on the exterior point method. Moreover, we provide a series of insightful analytical and numerical results on the equilibrium of the Stackelberg game.
The topological superfluid 3He-B provides many examples of the interplay of symmetry and topology. Here we consider the effect of a magnetic field on the topological properties of 3He-B. A magnetic field violates time reversal symmetry. As a result, the topological invariant supported by this symmetry ceases to exist, and thus the gapless fermions on the surface of 3He-B are no longer protected by topology: they become fully gapped. Nevertheless, if the symmetry-violating perturbation is small, the surface fermions remain relativistic, with mass proportional to the perturbation -- the magnetic field. The 3He-B symmetry gives rise to the Ising variable I=+/- 1, which emerges in a magnetic field and which characterizes the states of the surface of 3He-B. This variable also determines the sign of the mass term of the surface fermions and the topological invariant describing their effective Hamiltonian. The line on the surface which separates surface domains with different I contains 1+1 gapless fermions, which are protected by the combined action of symmetry and topology.
Topological Anderson transitions, which are direct phase transitions between topologically distinct Anderson localised phases, allow for criticality in 1D disordered systems. We analyse the statistical properties of an ensemble of critical wavefunctions at such transitions. We find that the local moments are strongly inhomogeneous, with significant amplification towards the edges of the system. In particular, we obtain an analytic expression for the spatial profile of the local moments which is valid at all topological Anderson transitions in 1D, as we verify by direct comparison with numerical simulations of various lattice models.
Given their distributed nature, detecting and defending against backdoor attacks in federated learning (FL) systems is challenging. In this paper, we observe that the cosine similarity of the last layer's weights between the global model and each local update can be used effectively as an indicator of malicious model updates. Therefore, we propose CosDefense, a cosine-similarity-based attacker detection algorithm. Specifically, under CosDefense, the server calculates the cosine similarity score of the last layer's weights between the global model and each client update, labels malicious clients whose score is much higher than the average, and filters them out of the model aggregation in each round. Compared to existing defense schemes, CosDefense does not require any extra information besides the received model updates to operate and is compatible with client sampling. Experimental results on three real-world datasets demonstrate that CosDefense provides robust performance under state-of-the-art FL poisoning attacks.
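A minimal sketch of the detection rule described above, assuming flattened last-layer weights; the abstract's "much higher than the average" is paraphrased here as a tunable multiple of the mean, which is our assumption rather than the paper's exact threshold:

```python
import numpy as np

def cos_defense(global_last, client_lasts, tau=1.2):
    """Hedged CosDefense-style filter: score each client by the cosine
    similarity between the global model's last-layer weights and the
    client's, then drop clients whose score is far above the average.
    Returns (keep mask, scores); tau is an illustrative threshold factor."""
    g = global_last.ravel()
    scores = np.array([
        np.dot(g, c.ravel()) / (np.linalg.norm(g) * np.linalg.norm(c) + 1e-12)
        for c in client_lasts
    ])
    keep = scores <= tau * scores.mean()
    return keep, scores

# Toy: 4 honest clients with noisy updates, 1 malicious client whose update
# mimics the global direction exactly (cosine score 1.0, far above average).
rng = np.random.default_rng(0)
g = rng.normal(size=10)
clients = [g + rng.normal(size=10) for _ in range(4)] + [5.0 * g]
keep, scores = cos_defense(g, clients)
print(keep, np.round(scores, 3))
```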
We report on Atacama Large Millimeter/submillimeter Array (ALMA) detections of molecular absorption lines in Bands 3, 6 and 7 toward four radio-loud quasars, which were observed as bandpass and complex-gain calibrators. The absorption systems, three of which are newly detected, are found to be of Galactic origin. Moreover, HCO absorption lines toward two objects are detected, which almost doubles the number of HCO absorption samples in the Galactic diffuse medium. In addition, high HCO to H13CO+ column density ratios are found, suggesting that the interstellar media (ISM) observed toward the two calibrators are in photodissociation regions, which observationally illustrates the chemistry of the diffuse ISM driven by ultraviolet (UV) radiation. These results demonstrate that calibrators in the ALMA Archive are potential sources in the quest for new absorption systems and for detailed investigation of the nature of the ISM.
We present a practical method for calculating the gravitational self-force, as well as the electromagnetic and scalar self forces, for a particle in a generic orbit around a Kerr black hole. In particular, we provide the values of all the regularization parameters needed for implementing the (previously introduced) {\it mode-sum regularization} method. We also address the gauge-regularization problem, as well as a few other issues involved in the calculation of gravitational radiation-reaction in Kerr spacetime.
In this work, bulk Czochralski-grown single crystals of 10 mol. % Al2O3-alloyed β-Ga2O3 - monoclinic 10% AGO or β-(Al0.1Ga0.9)2O3 - are obtained, which show a +0.20 eV increase in the bandgap compared with unintentionally doped β-Ga2O3. Further, growths of 33% AGO - β-(Al0.33Ga0.67)2O3 - and 50% AGO - β-(Al0.5Ga0.5)2O3 or β-AlGaO3 - produce polycrystalline single-phase monoclinic material (β-AGO). All three compositions are investigated by x-ray diffraction, Raman spectroscopy, optical absorption, and 27Al nuclear magnetic resonance (NMR). By investigating single-phase β-AGO over a large range of Al2O3 concentrations (10 - 50 mol. %), broad trends in the lattice parameters, vibrational modes, optical bandgap, and crystallographic site preference are determined. All lattice parameters show a linear trend with Al incorporation. According to NMR, aluminum incorporates on both crystallographic sites of β-Ga2O3, with a slight preference for the octahedral (GaII) site, which becomes more disordered with increasing Al. Single crystals of 10% AGO were also characterized by x-ray rocking curve, transmission electron microscopy, purity (glow discharge mass spectroscopy and x-ray fluorescence), optical transmission (200 nm - 20 um wavelengths), and resistivity. These measurements suggest that electrical compensation by impurity acceptor doping is not the likely explanation for the high resistivity, but rather the shift of a hydrogen level from a shallow donor to a deep acceptor due to Al alloying.
The $g$-good-neighbor conditional diagnosability is a new measure for the fault diagnosis of systems. Xu et al. [Theor. Comput. Sci. 659 (2017) 53--63] determined the $g$-good-neighbor conditional diagnosability of $(n, k)$-star networks $S_{n, k}$ (i.e., $t_g(S_{n, k})$) with $1\leq k\leq n-1$ for $1\leq g\leq n-k$ under the PMC model and the MM$^*$ model. In this paper, we determine $t_g(S_{n, k})$ for all the remaining cases with $1\leq k\leq n-1$ for $1\leq g\leq n-1$ under the two models, from which we recover the $g$-good-neighbor conditional diagnosability of the star graph obtained by Li et al. [to appear in Theor. Comput. Sci.] for $1\leq g\leq n-2$.
We study the primary DNA structure of four of the most completely sequenced human chromosomes (including chromosome 19, which is the densest in coding), using non-extensive statistics. We show that the exponents governing the decay of the coding size distributions vary between $5.2 \le r \le 5.7$ for the short scales and $1.45 \le q \le 1.50$ for the large scales. On the contrary, the exponents governing the decay of the non-coding size distributions in these four chromosomes take the values $2.4 \le r \le 3.2$ for the short scales and $1.50 \le q \le 1.72$ for the large scales. This quantitative difference, in particular in the tail exponent $q$, indicates that the non-coding (coding) size distributions have long (short) range correlations. This non-trivial difference in the DNA statistics is attributed to the non-conservative (conservative) evolution dynamics acting on the non-coding (coding) DNA sequences.
The main objective of this study is to investigate the bouncing scenario of the universe. The most widely recognized cosmological framework is the standard cosmological model, often referred to as the Big Bang model, mainly because of its inherent properties and its consistent alignment with recent observational studies. However, the standard cosmological model faces challenges concerning the physical conditions at the initial epochs, including the initial singularity problem, the flatness problem, and the horizon problem. Some of these challenges can potentially be addressed by incorporating an inflationary scenario into the cosmological framework of the universe; however, the inflationary mechanism cannot resolve the occurrence of the initial singularity. Bouncing cosmology offers a probable solution to this initial-singularity issue, and it is capable of addressing other issues that may arise during the early stages. Hence, bounce cosmology is discussed here within a modified gravity theory.
Angle-integrated cross-section measurements of the $^{56}$Ni(d,n) and (d,p) stripping reactions have been performed to determine the single-particle strengths of low-lying excited states in the mirror nuclei pair $^{57}$Cu-$^{57}$Ni situated adjacent to the doubly magic nucleus $^{56}$Ni. The reactions were studied in inverse kinematics utilizing a beam of radioactive $^{56}$Ni ions in conjunction with the GRETINA $\gamma$-array. Spectroscopic factors are compared with new shell-model calculations using a full $pf$ model space with the GXPF1A Hamiltonian for the isospin-conserving strong interaction plus Coulomb and charge-dependent Hamiltonians. These results were used to set new constraints on the $^{56}$Ni(p,$\gamma$)$^{57}$Cu reaction rate for explosive burning conditions in x-ray bursts, where $^{56}$Ni represents a key waiting point in the astrophysical rp-process.
We investigate the X-ray spectrum of the Seyfert galaxy NGC 4151 using the simultaneous Suzaku/NuSTAR observation and flux-resolved INTEGRAL spectra supplemented by Suzaku and XMM observations. Our best spectral solution indicates that the narrow Fe Kalpha line is produced in Compton-thin matter at the distance of several hundred gravitational radii. In such a model, we find a weak but significant relativistic reflection from a disk truncated at about ten gravitational radii when the source is in bright X-ray states. We do not find evidence either for or against the presence of relativistic reflection in the dim X-ray state. We also rule out models with X-ray emission dominated by a source located very close to the black hole horizon, which was proposed in previous works implementing the lamp-post geometry for this source. We point out that accurate computation of the thermal Comptonization spectrum and its distortion by strong gravity is crucial in applications of the lamp-post geometry to the NuSTAR data.
We show that the eigenschemes of $4 \times 4 \times 4$ symmetric tensors are parametrized by a linear subvariety of the Grassmannian $\operatorname{Gr}(3,\mathbb{P}^{14})$. We also study the decomposition of the eigenscheme into the subscheme associated to the zero eigenvalue and its residue. In particular, we categorize the possible degrees and dimensions.
In this paper, necessary and sufficient conditions are deduced for the starlikeness of Bessel functions of the first kind and of their second- and third-order derivatives, by using a result of Shah and Trimble about transcendental entire functions with univalent derivatives, together with some Mittag-Leffler expansions for the derivatives of Bessel functions of the first kind and some results on the zeros of these functions.
This paper presents original and close-to-optimal stability conditions linking the time step and the space step, stronger than the CFL criterion: $\delta t\leq C\delta x^\alpha$ with $\alpha=\frac{2r}{2r-1}$, $r$ an integer, for some numerical schemes we produce when solving convection-dominated problems. We test this condition numerically and prove that it applies to nonlinear equations under smoothness assumptions.
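To make the condition concrete, the small script below (ours, with an arbitrary placeholder constant C) evaluates the exponent and the resulting step bound for a few values of r. Note that $\alpha$ decreases from 2 toward 1 as $r$ grows, so the constraint is always strictly stronger than the CFL-type bound $\alpha = 1$:

```python
# Hedged illustration of the step-size constraint delta_t <= C * dx**alpha,
# alpha = 2r / (2r - 1). C = 1.0 is an arbitrary placeholder constant.
def max_time_step(dx, r, C=1.0):
    alpha = 2 * r / (2 * r - 1)
    return C * dx ** alpha

for r in (1, 2, 3):
    alpha = 2 * r / (2 * r - 1)
    # r=1 -> alpha=2 (most restrictive); alpha -> 1 (CFL-like) as r grows
    print(f"r={r}: alpha={alpha:.3f}, dt_max(dx=1e-2)={max_time_step(1e-2, r):.2e}")
```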
We study the computational power of machines that specify their own acceptance types, and show that they accept exactly the languages that $\leq_{m}^{\#}$-reduce to NP sets. A natural variant accepts exactly the languages that $\leq_{m}^{\#}$-reduce to P sets. We show that these two classes coincide if and only if $\mathrm{P}^{\#\mathrm{P}[1]} = \mathrm{P}^{\#\mathrm{P}[1]:\mathrm{NP}[O(1)]}$, where the latter class denotes the sets acceptable via at most one question to $\#\mathrm{P}$ followed by at most a constant number of questions to $\mathrm{NP}$.
Newly-introduced deep learning architectures, namely BERT, XLNet, RoBERTa and ALBERT, have proven robust on several NLP tasks. However, the datasets these architectures are trained on are fixed in size and generalizability. To alleviate this issue, we apply one of the most inexpensive solutions to update these datasets. We call this approach BET, by which we analyze backtranslation data augmentation on transformer-based architectures. Using the Google Translate API with ten intermediary languages from ten different language families, we externally evaluate the results in the context of automatic paraphrase identification in a transformer-based framework. Our findings suggest that BET improves paraphrase identification performance on the Microsoft Research Paraphrase Corpus (MRPC) by more than 3% in both accuracy and F1 score. We also analyze the augmentation in the low-data regime with downsampled versions of MRPC, the Twitter Paraphrase Corpus (TPC) and Quora Question Pairs. In many low-data cases, we observe a switch from a failing model on the test set to reasonable performance. The results demonstrate that BET is a highly promising data augmentation technique: to push the current state of the art on existing datasets and to bootstrap the use of deep learning architectures in the low-data regime of a hundred samples.
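A minimal sketch of the augmentation loop; `translate` is a hypothetical placeholder for whatever MT service is available (the paper uses the Google Translate API), so the stub below must be swapped for a real call before the augmentation does anything useful:

```python
def translate(text: str, src: str, dest: str) -> str:
    # Identity placeholder -- replace with a real MT call (the paper uses
    # the Google Translate API). Kept trivial so the sketch runs as-is.
    return text

def backtranslate(sentence: str, pivots=("de", "fr", "ja")) -> list[str]:
    """BET-style augmentation: round-trip the sentence through each
    intermediary (pivot) language and collect the paraphrase candidates.
    The pivot set here is illustrative, not the paper's ten languages."""
    out = []
    for lang in pivots:
        pivoted = translate(sentence, src="en", dest=lang)
        out.append(translate(pivoted, src=lang, dest="en"))
    return out

# Each paraphrase pair (s1, s2, label) can then be augmented with
# (backtranslated_s1, s2, label), etc., before fine-tuning BERT/RoBERTa/...
print(backtranslate("The quick brown fox jumps over the lazy dog."))
```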
Using {\em ab initio} density functional theory calculations, we characterize changes in the electronic structure of MoS$_{2}$ monolayers introduced by missing or additional adsorbed sulfur atoms. We furthermore identify the chemical and electronic function of substances that have been reported to reduce the adverse effect of sulfur vacancies in quenching photoluminescence and reducing electronic conductance. We find that thiol-group containing molecules adsorbed at vacancy sites may re-insert missing sulfur atoms. In the presence of additional adsorbed sulfur atoms, thiols may form disulfides on the MoS$_{2}$ surface to mitigate the adverse effect of defects.
We use the virtual neighborhood technique to establish GW-invariants, quantum cohomology, equivariant GW-invariants, equivariant quantum cohomology and Floer cohomology for general symplectic manifolds. We also establish GW-invariants for families of symplectic manifolds. As a consequence, we prove the Arnold conjecture for nondegenerate Hamiltonian symplectomorphisms.
This paper examines the relationship between changes in telecommunications provider concentration on international long distance routes and changes in prices on those routes. Overall, decreased concentration is associated with significantly lower prices to consumers of long distance services. However, the relationship between concentration and price varies according to the type of long distance plan considered. For the international flagship plans frequently selected by more price-conscious consumers of international long distance, increased competition on a route is associated with lower prices. In contrast, for the basic international plans that are the default selection for consumers, increased competition on a route is actually associated with higher prices. Thus, somewhat surprisingly, price dispersion appears to increase as competition increases.
In this paper, we use Soergel calculus to define a monoidal functor, called the evaluation functor, from extended affine type A Soergel bimodules to the homotopy category of bounded complexes in finite type A Soergel bimodules. This functor categorifies the well-known evaluation homomorphism from the extended affine type A Hecke algebra to the finite type A Hecke algebra. Through it, one can pull back the triangulated birepresentation induced by any finitary birepresentation of finite type A Soergel bimodules to obtain a triangulated birepresentation of extended affine type A Soergel bimodules. We show that if the initial finitary birepresentation in finite type A is a cell birepresentation, the evaluation birepresentation in extended affine type A has a finitary cover, which we illustrate by working out the case of cell birepresentations with subregular apex in detail.
"Eddy saturation" is the regime in which the total time-mean volume transport of an oceanic current is relatively insensitive to the wind stress forcing and is often invoked as a dynamical description of Southern Ocean circulation. We revisit the problem of eddy saturation using a primitive-equations model in an idealized channel setup with bathymetry. We apply only mechanical wind stress forcing; there is no diapycnal mixing or surface buoyancy forcing. Our main aim is to assess the relative importance of two mechanisms for producing eddy saturated states: (i) the commonly invoked baroclinic mechanism that involves the competition of sloping isopycnals and restratification by production of baroclinic eddies, and (ii) the barotropic mechanism, that involves production of eddies through lateral shear instabilities or through the interaction of the barotropic current with bathymetric features. Our results suggest that the barotropic flow-component plays a crucial role in determining the total volume transport.
The framework of the Perturbed Static Path Approximation (PSPA) is used to calculate the partition function of a finite Fermi system from a Hamiltonian with a separable two-body interaction. Therein, the collective degree of freedom is introduced in a self-consistent fashion through a Hubbard-Stratonovich transformation. In this way, all transport coefficients which dominate the decay of a meta-stable system are defined and calculated microscopically. Otherwise, the same formalism is applied as in the Caldeira-Leggett model to deduce the decay rate from the free energy above the so-called crossover temperature $T_0$.
Nearest neighbor is a popular class of classification methods with many desirable properties. For a large data set which cannot be loaded into the memory of a single machine due to computation, communication, privacy, or ownership limitations, we consider the divide and conquer scheme: the entire data set is divided into small subsamples, on which nearest neighbor predictions are made, and then a final decision is reached by aggregating the predictions on subsamples by majority voting. We name this method the big Nearest Neighbor (bigNN) classifier, and provide its rates of convergence under minimal assumptions, in terms of both the excess risk and the classification instability, which are proven to be the same rates as the oracle nearest neighbor classifier and cannot be improved. To significantly reduce the prediction time that is required for achieving the optimal rate, we also consider the pre-training acceleration technique applied to the bigNN method, with a proven convergence rate. We find that in the distributed setting, the optimal choice of the number of neighbors $k$ should scale with both the total sample size and the number of partitions, and there is a theoretical upper limit for the latter. Numerical studies have verified the theoretical findings.
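A minimal sketch of the bigNN scheme under its simplest reading: independent k-NN predictions on each subsample, aggregated by majority vote. The shard construction and names are illustrative, not from the paper:

```python
import numpy as np
from collections import Counter

def bignn_predict(x, subsamples, k):
    """bigNN sketch: k-nearest-neighbor vote within each subsample, then a
    majority vote over the per-subsample predictions.
    subsamples: list of (X_i, y_i) pairs, the partitioned data set."""
    votes = []
    for X, y in subsamples:
        d = np.linalg.norm(X - x, axis=1)          # distances to the query
        nn = np.argsort(d)[:k]                     # k nearest in this shard
        votes.append(Counter(y[nn]).most_common(1)[0][0])
    return Counter(votes).most_common(1)[0][0]     # majority over shards

# Toy: two shards of 2D points, class 0 near the origin, class 1 near (3, 3)
rng = np.random.default_rng(1)
shards = []
for _ in range(2):
    X0, X1 = rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))
    shards.append((np.vstack([X0, X1]), np.array([0] * 20 + [1] * 20)))
print(bignn_predict(np.array([3.0, 3.0]), shards, k=5))   # likely 1
```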
Reconfigurable intelligent surfaces (RISs) bring various benefits to current and upcoming wireless networks, including enhanced spectrum and energy efficiency, soft handover, transmission reliability, and even localization accuracy. These remarkable improvements result from the reconfigurability, programmability, and adaptation capabilities of RISs for fine-tuning radio propagation environments, which can be realized in a cost- and energy-efficient manner. In this paper, we focus on upgrading the existing fifth-generation (5G) cellular network by introducing an RIS with a full-dimensional uniform planar array structure to unleash advanced three-dimensional connectivity. The deployed RIS is exploited to serve unmanned aerial vehicles (UAVs) flying in the sky with ultra-high data rates, a challenging task for conventional base stations (BSs), which are designed mainly to serve ground users. By taking into account the line-of-sight probability for the RIS-UAV and BS-UAV links, we formulate the average achievable rate, analyze the effect of environmental parameters, and make insightful performance comparisons. Simulation results show that the deployment of RISs can bring impressive gains and significantly outperform conventional RIS-free 5G networks.
Recently an explicit resolution of the Calabi-Yau cone over the inhomogeneous five-dimensional Einstein-Sasaki space Y^{2,1} was obtained. It was constructed by specialising the parameters in the BPS limit of recently-discovered Kerr-NUT-AdS metrics in higher dimensions. We study the occurrence of such non-singular resolutions of Calabi-Yau cones in a more general context. Although no further six-dimensional examples arise as resolutions of cones over the L^{pqr} Einstein-Sasaki spaces, we find general classes of non-singular cohomogeneity-2 resolutions of higher-dimensional Einstein-Sasaki spaces. The topologies of the resolved spaces are of the form of an R^2 bundle over a base manifold that is itself an S^2 bundle over an Einstein-Kahler manifold.
We compute point schemes of some regular algebras using (Wolfram) Mathematica. These algebras are Ore extensions of regular graded skew Clifford algebras of global dimension 3.
We consider a group of $m$ trusted and authenticated nodes that aim to create a shared secret key $K$ over a wireless channel in the presence of an eavesdropper Eve. We assume that there exists a state dependent wireless broadcast channel from one of the honest nodes to the rest of them including Eve. All of the trusted nodes can also discuss over a cost-free, noiseless and unlimited rate public channel which is also overheard by Eve. For this setup, we develop an information-theoretically secure secret key agreement protocol. We show the optimality of this protocol for "linear deterministic" wireless broadcast channels. This model generalizes the packet erasure model studied in literature for wireless broadcast channels. For "state-dependent Gaussian" wireless broadcast channels, we propose an achievability scheme based on a multi-layer wiretap code. Finding the best achievable secret key generation rate leads to solving a non-convex power allocation problem. We show that using a dynamic programming algorithm, one can obtain the best power allocation for this problem. Moreover, we prove the optimality of the proposed achievability scheme for the regime of high-SNR and large-dynamic range over the channel states in the (generalized) degrees of freedom sense.
Deep neural networks for machine comprehension typically utilize only word or character embeddings without explicitly taking advantage of structured linguistic information such as constituency trees and dependency trees. In this paper, we propose structural embedding of syntactic trees (SEST), an algorithmic framework to utilize structured information and encode it into vector representations that can boost the performance of algorithms for machine comprehension. We evaluate our approach using a state-of-the-art neural attention model on the SQuAD dataset. Experimental results demonstrate that our model can accurately identify the syntactic boundaries of sentences and extract answers that are syntactically coherent, outperforming the baseline methods.
We study the Edwards-Anderson model on a simple cubic lattice with a finite constant external field. We employ an indicator composed of a ratio of susceptibilities at finite wavenumbers, which was recently proposed to avoid the difficulties of a zero-momentum quantity, for capturing the spin glass phase transition. Unfortunately, this new indicator is fairly noisy, so a large pool of samples at low temperature and small external field is needed to generate results with sufficiently small statistical error for analysis. We thus implement the Monte Carlo method using graphics processing units to drastically speed up the simulation. We confirm previous findings that conventional indicators for the spin glass transition, including the Binder ratio and the correlation length, do not show any indication of a transition at rather low temperatures. However, the ratio of spin glass susceptibilities does show crossing behavior, albeit a systematic analysis is beyond the reach of the present data. This calls for a more thorough study of the three-dimensional Edwards-Anderson model in an external field.
Topology is bringing new tools for the study of fluid waves. The existence of unidirectional Yanai and Kelvin equatorial waves has been related to a topological invariant, the Chern number, that describes the winding of $f$-plane shallow water eigenmodes around band crossing points in parameter space. In this previous study, the topological invariant was a property of the interface between two hemispheres. Here we ask whether a topological index can be assigned to each hemisphere. We show that this can be done if the shallow water model in $f$-plane geometry is regularized by an additional odd-viscous term. We then compute the spectrum of a shallow water model with a sharp equator separating two flat hemispheres, and recover the Kelvin and Yanai waves as two exponentially trapped waves along the equator, with all the other modes delocalized into the bulk. This model provides an exactly solvable example of bulk-interface correspondence in a flow with a sharp interface, and offers a topological interpretation for some of the transition modes described by [Iga, Journal of Fluid Mechanics 1995]. It also paves the way towards a topological interpretation of coastal Kelvin waves along a boundary, and more generally, to an understanding of bulk-boundary correspondence in continuous media.
We describe some connections between three different fields: combinatorics (umbral calculus), functional analysis (linear functionals and operators) and harmonic analysis (convolutions on group-like structures). Systematic usage of cancellative semigroups, their convolution algebras, and tokens between them provides a common language for the description of objects from these three fields. Keywords: cancellative semigroups, umbral calculus, harmonic analysis, token, convolution algebra, integral transform
The bandwidth of an $n$-vertex graph $G$ is the smallest integer $b$ such that there exists a bijective function $f : V(G) \rightarrow \{1,...,n\}$, called a layout of $G$, such that for every edge $uv \in E(G)$, $|f(u) - f(v)| \leq b$. In the {\sc Bandwidth} problem we are given as input a graph $G$ and an integer $b$, and asked whether the bandwidth of $G$ is at most $b$. We present two results concerning the parameterized complexity of the {\sc Bandwidth} problem on trees. First we show that an algorithm for {\sc Bandwidth} with running time $f(b)n^{o(b)}$ would violate the Exponential Time Hypothesis, even if the input graphs are restricted to be trees of pathwidth at most two. Our lower bound shows that the classical $2^{O(b)}n^{b+1}$-time algorithm of Saxe [SIAM Journal on Algebraic and Discrete Methods, 1980] is essentially optimal. Our second result is a polynomial time algorithm that, given a tree $T$ and an integer $b$, either correctly concludes that the bandwidth of $T$ is more than $b$ or finds a layout of $T$ of bandwidth at most $b^{O(b)}$. This is the first parameterized approximation algorithm for the bandwidth of trees.
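For illustration, the two definitions can be checked by brute force (our own toy code, unrelated to the paper's algorithms, and feasible only for very small graphs):

```python
# Toy check of the definitions: the stretch of a given layout, and the exact
# bandwidth by enumerating all n! layouts (exponential; tiny examples only).
from itertools import permutations

def layout_bandwidth(edges, layout):
    """Largest stretch |f(u) - f(v)| over all edges uv under layout f."""
    return max(abs(layout[u] - layout[v]) for u, v in edges)

def bandwidth(vertices, edges):
    best = len(vertices)
    for perm in permutations(range(1, len(vertices) + 1)):
        f = dict(zip(vertices, perm))
        best = min(best, layout_bandwidth(edges, f))
    return best

# A path on 4 vertices has bandwidth 1; the star K_{1,3} has bandwidth 2.
print(bandwidth([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)]))  # -> 1
print(bandwidth([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3)]))  # -> 2
```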
The XXX Gaudin model with generic integrable boundaries specified by the most general non-diagonal K-matrices is studied by the off-diagonal Bethe ansatz method. The eigenvalues of the associated Gaudin operators and the corresponding Bethe ansatz equations are obtained.
Lithium niobate is an electro-optic material with many applications in microwave signal processing, communication, quantum sensing, and quantum computing. In this letter, we present findings on evaluating the complex electromagnetic permittivity of lithium niobate at millikelvin temperatures. Measurements are carried out using a resonant-type method with a superconducting radio-frequency (SRF) cavity operating at 7 GHz and designed to characterize anisotropic dielectrics. The relative permittivity tensor and loss tangent are measured at 50 mK with unprecedented accuracy.
Recently claimed \cite{dj1,dj2,FC} anomalous nuclear effects in di-jet production are analyzed in view of multiple interactions of projectile/ejectile partons in nuclear matter. We derive model-independent relations between the A-dependence of the cross section and nuclear broadening of transverse momentum. Comparison with the data shows that the initial/final state interaction of partons participating in the hard process is hard as well. This is a solid argument in favor of the smallness of the color neutralization radius of a hadronizing, highly virtual quark.
Absolutely Maximally Entangled (AME) states are those multipartite quantum states that carry absolute maximum entanglement in all possible partitions. AME states are known to play a relevant role in multipartite teleportation and in quantum secret sharing, and they provide the basis for novel tensor networks related to holography. We present alternative constructions of AME states and show their link with combinatorial designs. We also analyze a key property of AME states, namely their relation to tensors that can be understood as unitary transformations in every one of their bipartitions. We call this property multi-unitarity.
Gambles are random variables that model possible changes in monetary wealth. Classic decision theory transforms money into utility through a utility function and defines the value of a gamble as the expectation value of utility changes. Utility functions aim to capture individual psychological characteristics, but their generality limits predictive power. Expectation value maximizers are defined as rational in economics, but expectation values are only meaningful in the presence of ensembles or in systems with ergodic properties, whereas decision-makers have no access to ensembles, and the variables representing wealth in the usual growth models do not have the relevant ergodic properties. Simultaneously addressing the shortcomings of utility and those of expectations, we propose to evaluate gambles by averaging wealth growth over time. No utility function is needed, but a dynamic must be specified to compute time averages. Linear and logarithmic "utility functions" appear as transformations that generate ergodic observables for purely additive and purely multiplicative dynamics, respectively. We highlight inconsistencies throughout the development of decision theory; correcting them clarifies that our perspective is legitimate and invalidates a commonly cited argument for bounded utility functions.
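The multiplicative case can be illustrated with a toy gamble of our own choosing (not taken from the paper): a 50% chance to gain 50% of current wealth and a 50% chance to lose 40%. Its expected wealth factor per round exceeds one, yet the time-average growth rate is negative:

```python
# Minimal sketch (our own example): a multiplicative gamble that is attractive
# by expectation value but ruinous in time-average growth.
import math
import random

random.seed(0)
up, down = 1.5, 0.6           # win: +50% of wealth, lose: -40% of wealth

# Ensemble perspective: expected wealth factor per round exceeds 1.
print((up + down) / 2)        # 1.05 -> expectation maximizers accept

# Time perspective: for multiplicative dynamics the ergodic growth rate is
# the expected *logarithmic* change, which is negative here.
print(0.5 * math.log(up) + 0.5 * math.log(down))   # ~ -0.053 per round

# A single long trajectory confirms the time-average verdict.
wealth = 1.0
for _ in range(10_000):
    wealth *= up if random.random() < 0.5 else down
print(wealth)                 # typically astronomically small
```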
Characterization of the frequency response of coherent radiometric receivers is a key element in estimating the flux of astrophysical emissions, since the measured signal depends on the convolution of the source spectral emission with the instrument band shape. Laboratory Radio Frequency (RF) measurements of the instrument bandpass often require complex test setups and are subject to a number of systematic effects driven by thermal issues and impedance matching, particularly if cryogenic operation is involved. In this paper we present an approach to modeling radiometer bandpasses by integrating simulations and RF measurements of individual components. This method is based on QUCS (Quite Universal Circuit Simulator), an open-source circuit simulator, which gives the flexibility of choosing among the available devices, implementing new analytical software models or using measured S-parameters. An independent estimate of the instrument bandpass is thus achieved using standard individual component measurements and validated analytical simulations. In order to automate the process of preparing input data, running simulations and exporting results we developed the Python package python-qucs and released it under the GNU Public License. We discuss, as working cases, bandpass response modeling of the COFE and Planck Low Frequency Instrument (LFI) radiometers and compare the results obtained with QUCS and with a commercial circuit simulator. The main purpose of bandpass modeling in COFE is to optimize component matching, while for LFI it provides the best estimate of the frequency response, since end-to-end measurements were strongly affected by systematic effects.
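The composition idea, estimating a chain's response from models of its individual components, can be sketched generically with ABCD-matrix cascading (this is textbook RF practice, not the python-qucs API; the component values below are placeholders):

```python
# Generic sketch: cascade two-port ABCD matrices of individual components and
# read off the transmission S21; a series LC branch gives a peak near 1 GHz.
import numpy as np

Z0 = 50.0                         # reference impedance (ohms)

def series(Z):                    # ABCD matrix of a series impedance
    return np.array([[1.0, Z], [0.0, 1.0]], dtype=complex)

def s21(abcd):                    # transmission from a cascaded ABCD matrix
    A, B, C, D = abcd.ravel()
    return 2.0 / (A + B / Z0 + C * Z0 + D)

L, Cap = 7.96e-9, 3.18e-12        # placeholder values, resonant near 1 GHz
for f in (0.5e9, 1.0e9, 2.0e9):
    w = 2 * np.pi * f
    chain = series(1j * w * L) @ series(1 / (1j * w * Cap))  # cascade = product
    print(f / 1e9, "GHz:", round(abs(s21(chain)), 3))
```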
Let $G$ be a Hamiltonian graph with $n$ vertices. A nonempty vertex set $X\subseteq V(G)$ is called a Hamiltonian cycle enforcing set (in short, an $H$-force set) of $G$ if every $X$-cycle of $G$ (i.e., a cycle of $G$ containing all vertices of $X$) is a Hamiltonian cycle. The smallest cardinality of an $H$-force set of $G$ is denoted by $h(G)$ and called the $H$-force number of $G$. Ore's theorem states that the graph $G$ is Hamiltonian if $d(u)+d(v)\geq n$ for every pair of nonadjacent vertices $u,v$ of $G$. In this paper, we study the $H$-force sets of the graphs satisfying the condition of Ore's theorem, show that the $H$-force number of these graphs is $n$, $n-2$, or $\frac{n}{2}$, and give a classification of these graphs according to their $H$-force number.
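Ore's condition itself is straightforward to verify computationally (our own snippet, not from the paper):

```python
# Check d(u) + d(v) >= n for every pair of nonadjacent vertices u, v.
from itertools import combinations

def satisfies_ore(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return all(len(adj[u]) + len(adj[v]) >= n
               for u, v in combinations(range(n), 2)
               if v not in adj[u])

# K_4 minus one edge satisfies Ore's condition for n = 4 (and is Hamiltonian).
print(satisfies_ore(4, [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]))  # True
```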
Let $A$ be a subvariety of affine space $\mathbb{A}^n$ whose irreducible components are $d$-dimensional linear or affine subspaces of $\mathbb{A}^n$. Denote by $D(A)\subset\mathbb{N}^n$ the set of exponents of standard monomials of $A$. We show that the combinatorial object $D(A)$ reflects the geometry of $A$ in a very direct way. More precisely, we define a $d$-plane in $\mathbb{N}^n$ as being a set $\gamma+\oplus_{j\in J}\mathbb{N}e_{j}$, where $\#J=d$ and $\gamma_{j}=0$ for all $j\in J$. We say that the $d$-plane thus defined is parallel to $\oplus_{j\in J}\mathbb{N}e_{j}$. We show that the number of $d$-planes in $D(A)$ equals the number of components of $A$. This generalises a classical result, the finiteness algorithm, which holds in the case $d=0$. In addition, we determine the number of $d$-planes in $D(A)$ parallel to $\oplus_{j\in J}\mathbb{N}e_{j}$, for each $J$. Furthermore, we describe $D(A)$ in terms of the standard sets of the intersections $A\cap\{X_{1}=\lambda\}$, where $\lambda$ runs through $\mathbb{A}^1$.
Galactic Archaeology, i.e. the use of chemo-dynamical information for stellar samples covering large portions of the Milky Way to infer the dominant processes involved in its formation and evolution, is now a powerful method thanks to the large recently completed and ongoing spectroscopic surveys. It is now important to ask the right questions when analyzing and interpreting the information contained in these rich datasets. To this aim, we have developed a chemodynamical model for the Milky Way that provides quantitative predictions to be compared with the chemo-kinematical properties extracted from the stellar spectra. Three key parameters are needed to make the comparison between data and model predictions useful for advancing the field, namely: precise proper motions, distances and ages. The uncertainties involved in the estimation of ages and distances for field stars are currently the main obstacles for the Galactic Archaeology method. Two important developments might change this situation in the near future: asteroseismology and the recently launched Gaia mission. When combined with the large datasets from surveys like RAVE, SEGUE, LAMOST, Gaia-ESO, APOGEE, HERMES and the future 4MOST, we will have the basic ingredients for the reconstruction of the Milky Way's history in hand. In the light of these observational advances, the development of detailed chemo-dynamical models tailored to the Milky Way is urgently needed in the field. Here we show the steps we have taken, both in terms of data analysis and modelling. The examples shown here illustrate how powerful the Galactic Archaeology method can become once ages and distances are known with better precision than is currently feasible.
We discuss the possibility of studying oscillations of atmospheric neutrinos in the ATLAS experiment at CERN. Due to the large total detector mass, a significant number of events is expected, and during the shutdown phases of the LHC, reconstruction of these events will be possible with very good energy and angular resolutions, and with charge identification. We argue that 500 live days of neutrino running could be achieved, and that a total of ~160 contained \nu_\mu events and ~360 upward-going muons could be collected during this time. Despite the low statistics, the excellent detector resolution will allow for an unambiguous confirmation of atmospheric neutrino oscillations and for measurements of the leading oscillation parameters. Though our detailed simulations show that the sensitivity of ATLAS is worse than that of dedicated neutrino experiments, we demonstrate that more sophisticated detectors, e.g. at the ILC, could be highly competitive with upcoming superbeam experiments, and might even give indications for the mass hierarchy and for the value of theta-13.
We present a study of the prevalence, strength, and kinematics of ultraviolet FeII and MgII emission lines in 212 star-forming galaxies at z = 1 selected from the DEEP2 survey. We find FeII* emission in composite spectra assembled on the basis of different galaxy properties, indicating that FeII* emission is prevalent at z = 1. In these composites, FeII* emission is observed at roughly the systemic velocity. At z = 1, we find that the strength of FeII* emission is most strongly modulated by dust attenuation, and is additionally correlated with redshift, star-formation rate, and [OII] equivalent width, such that systems at higher redshifts with lower dust levels, lower star-formation rates, and larger [OII] equivalent widths show stronger FeII* emission. We detect MgII emission in at least 15% of the individual spectra and we find that objects showing stronger MgII emission have higher specific star-formation rates, smaller [OII] linewidths, larger [OII] equivalent widths, lower dust attenuations, and lower stellar masses than the sample as a whole. MgII emission strength exhibits the strongest correlation with specific star-formation rate, although we find evidence that dust attenuation and stellar mass also play roles in the regulation of MgII emission. Future integral field unit observations of the spatial extent of FeII* and MgII emission in galaxies with high specific star-formation rates, low dust attenuations, and low stellar masses will be important for probing the morphology of circumgalactic gas.
The ability to store and manipulate information is a hallmark of computational systems. Whereas computers are carefully engineered to represent and perform mathematical operations on structured data, neurobiological systems perform analogous functions despite flexible organization and unstructured sensory input. Recent efforts have made progress in modeling the representation and recall of information in neural systems. However, precisely how neural systems learn to modify these representations remains far from understood. Here we demonstrate that a recurrent neural network (RNN) can learn to modify its representation of complex information using only examples, and we explain the associated learning mechanism with new theory. Specifically, we drive an RNN with examples of translated, linearly transformed, or pre-bifurcated time series from a chaotic Lorenz system, alongside an additional control signal that changes value for each example. By training the network to replicate the Lorenz inputs, it learns to autonomously evolve about a Lorenz-shaped manifold. Additionally, it learns to continuously interpolate and extrapolate the translation, transformation, and bifurcation of this representation far beyond the training data by changing the control signal. Finally, we provide a mechanism for how these computations are learned, and demonstrate that a single network can simultaneously learn multiple computations. Together, our results provide a simple but powerful mechanism by which an RNN can learn to manipulate internal representations of complex information, allowing for the principled study and precise design of RNNs.
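A hedged sketch of the training-data construction described above: Lorenz trajectories, here integrated with a simple Euler step, translated by an offset indexed by a constant control signal. The offsets, integrator, and step size are our placeholder choices; the network architecture and training procedure are the paper's and are omitted:

```python
# Sketch: build [Lorenz series, control signal] training examples where the
# control value indexes a translation of the attractor along one axis.
import numpy as np

def lorenz(T=2000, dt=0.01, sigma=10.0, rho=28.0, beta=8/3):
    x = np.empty((T, 3)); x[0] = (1.0, 1.0, 1.0)
    for t in range(T - 1):                       # forward-Euler integration
        X, Y, Z = x[t]
        x[t + 1] = x[t] + dt * np.array([sigma * (Y - X),
                                         X * (rho - Z) - Y,
                                         X * Y - beta * Z])
    return x

examples = []
for c in (-1.0, 0.0, 1.0):                       # control signal per example
    series = lorenz() + np.array([5.0 * c, 0.0, 0.0])   # translate along x
    ctrl = np.full((len(series), 1), c)
    examples.append(np.hstack([series, ctrl]))   # drive the RNN with [series, c]
print(examples[0].shape)                         # (2000, 4)
```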
Datacenters are increasingly becoming heterogeneous, and are starting to include specialized hardware for networking, video processing, and especially deep learning. To leverage the heterogeneous compute capability of modern datacenters, we develop an approach for compiler-level partitioning of deep neural networks (DNNs) onto multiple interconnected hardware devices. We present a general framework for heterogeneous DNN compilation, offering automatic partitioning and device mapping. Our scheduler integrates both an exact solver, through a mixed integer linear programming (MILP) formulation, and a modularity-based heuristic for scalability. Furthermore, we propose a theoretical lower bound formula for the optimal solution, which enables the assessment of the heuristic solutions' quality. We evaluate our scheduler in optimizing both conventional DNNs and randomly-wired neural networks, subject to latency and throughput constraints, on a heterogeneous system comprised of a CPU and two distinct GPUs. Compared to na\"ively running DNNs on the fastest GPU, the proposed framework achieves more than 3$\times$ lower latency and up to 2.9$\times$ higher throughput by automatically leveraging both data and model parallelism to deploy DNNs on our sample heterogeneous server node. Moreover, our modularity-based "splitting" heuristic improves the solution runtime by up to 395$\times$ without noticeably sacrificing solution quality compared to an exact MILP solution, and outperforms all other heuristics by 30-60% in solution quality. Finally, our case study shows how we can extend our framework to schedule large language models across multiple heterogeneous servers by exploiting symmetry in the hardware setup. Our code can be easily plugged into existing frameworks, and is available at https://github.com/abdelfattah-lab/diviml.
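A toy version of the underlying assignment problem (far simpler than the paper's MILP, with made-up costs): exhaustively map a chain of layers to devices, trading per-layer compute latency against a transfer cost at every device boundary:

```python
# Toy sketch of device mapping for a 3-layer chain; all numbers are hypothetical.
from itertools import product

compute = {                      # per-layer latency (ms) on each device
    "cpu":  [5.0, 9.0, 4.0],
    "gpu0": [2.0, 3.0, 1.5],
    "gpu1": [2.5, 2.8, 1.6],
}
transfer = 1.2                   # cost (ms) whenever consecutive layers move
devices = list(compute)

def latency(assign):
    t = sum(compute[d][i] for i, d in enumerate(assign))
    t += transfer * sum(a != b for a, b in zip(assign, assign[1:]))
    return t

best = min(product(devices, repeat=3), key=latency)
print(best, latency(best))       # here the fastest single device wins
```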
Statistical mechanics can predict thermal equilibrium states for most classical systems, but for an isolated quantum system there is no general understanding on how equilibrium states dynamically emerge from the microscopic Hamiltonian. For instance, quantum systems that are near-integrable usually fail to thermalize in an experimentally realistic time scale and, instead, relax to quasi-stationary prethermal states that can be described by statistical mechanics when approximately conserved quantities are appropriately included in a generalized Gibbs ensemble (GGE). Here we experimentally study the relaxation dynamics of a chain of up to 22 spins evolving under a long-range transverse field Ising Hamiltonian following a sudden quench. For sufficiently long-ranged interactions the system relaxes to a new type of prethermal state that retains a strong memory of the initial conditions. In this case, the prethermal state cannot be described by a GGE, but rather arises from an emergent double-well potential felt by the spin excitations. This result shows that prethermalization occurs in a significantly broader context than previously thought, and reveals new challenges for a generic understanding of the thermalization of quantum systems, particularly in the presence of long-range interactions.
We propose a general method to realize and calculate the transmission in a Weyl semimetal (WSM) heterostructure by employing a periodic three-dimensional topoelectrical (TE) circuit network. By drawing the analogy between inductor-capacitor circuit lattices and quantum mechanical tight-binding (TB) models, we show that the energy flux in a TE network is analogous to the probability flux in a TB Hamiltonian. TE systems offer a key advantage in that they can be easily tuned to achieve different topological WSM phases simply by varying the capacitances and inductances. The above analogy opens the way to the study of tunneling across heterojunctions separating different types of WSMs in TE circuits, a situation which is virtually impossible to realize in physical WSM materials. We show that the energy flux transmission in a WSM heterostructure depends strongly on the relative orientation of the transport direction and the $k$-space tilt direction. For the transmission from a Type I WSM source lead to a Type II WSM drain lead, all valleys transmit equally when the tilt and transmission directions are perpendicular to each other. In contrast, large inter-valley scattering is required for transmission when the tilt and transport directions are parallel to each other, leading to valley-dependent transmission. We describe a Type III WSM phase intermediate between the Type I and Type II phases. An `anti-Klein' tunneling occurs between a Type I source and Type III drain where the transmission is totally suppressed for some valleys at normal incidence. This is in direct contrast to the usual Klein tunneling in Dirac materials where normally incident flux is perfectly transmitted. Owing to the ease of fabrication and experimental accessibility, TE circuits offer an excellent testbed to study the extraordinary transport phenomena in WSM based heterostructures.
Black holes are interesting astrophysical objects that have been studied as systems sensitive to quantum gravitational data. The accelerated geometry in the exterior of extremal black holes can induce large center-of-mass energies between particles with particular momenta at the horizon. This is known as the Ba\~nados-Silk-West (BSW) effect. For point particles, the BSW effect requires tuning to have the collision coincide with the horizon. However, this tuning is relaxed for string-theoretic objects. String scattering amplitudes are large in the Regge limit, occurring at large center-of-mass energies and shallow scattering angles, parametrically surpassing quantum field theoretic amplitudes. In this limit, longitudinal string spreading is induced between strings with a large difference in light-cone momenta, and this spread can be used to 'detune' the BSW effect. With this in mind, quantum gravitational data, as described by string theory, may play an important role in near horizon dynamics of extremal Kerr black holes. Further, though it may be hard to realize astrophysically, this system acts as a natural particle accelerator for probing the nature of small-scale physics at Planckian energies.
We analyse $f(R)$ theories of gravity from a dynamical system perspective, showing how the $R^2$ correction in Starobinsky's model plays a crucial role from the viewpoint of the inflationary paradigm. Then, we propose a modification of Starobinsky's model by adding an exponential term to the $f(R)$ Lagrangian. We show how this modification could allow one to test the robustness of the model by means of the predictions on the scalar spectral index $n_s$.
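Schematically, such a Lagrangian has the shape below, where the first two terms are the standard Starobinsky form and the exponential correction is written with placeholder parameters $\alpha$ and $R_*$; the precise form, sign conventions and coefficients are those specified in the paper:

```latex
% Illustrative shape only; alpha and R_* are placeholders, not the paper's values.
f(R) = R + \frac{R^2}{6M^2} + \alpha\, e^{-R/R_*}
```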
Group fairness is an important concern for machine learning researchers, developers, and regulators. However, the strictness to which models must be constrained to be considered fair is still under debate. The focus of this work is on constraining the expected outcome of subpopulations in kernel regression and, in particular, decision tree regression, with application to random forests, boosted trees and other ensemble models. While individual constraints were previously addressed, this work addresses concerns about incorporating multiple constraints simultaneously. The proposed solution does not affect the order of computational or memory complexity of the decision trees and is easily integrated into models post-training.
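One simple post-training scheme in this spirit (our illustration; the paper's construction for kernel and tree-ensemble regression differs in detail) shifts each subpopulation's predictions so that its expected outcome matches a common target, leaving within-group ordering untouched:

```python
# Hedged sketch: equalize the mean predicted outcome across groups by a
# per-group additive shift applied after training.
import numpy as np

def constrain_group_means(preds, groups, target):
    preds = np.asarray(preds, dtype=float)
    groups = np.asarray(groups)
    out = preds.copy()
    for g in np.unique(groups):
        mask = groups == g
        out[mask] += target - preds[mask].mean()   # shift group mean to target
    return out

preds = [0.2, 0.4, 0.9, 0.7]
groups = ["a", "a", "b", "b"]
print(constrain_group_means(preds, groups, target=0.55))
# both group means are now 0.55; within-group ordering is preserved
```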
The presence of magnetic noise in magnetoresistive-based magnetic sensors degrades their detection limit at low frequencies. In this paper, different ways of stabilizing the magnetic sensing layer to suppress magnetic noise are investigated by applying a pinning field, either by an external field, internally in the stack, or by shape anisotropy. We show that these three methods are equivalent and can be combined, and that there is a competition between noise suppression and sensitivity reduction, which results in an optimum total pinning field for which the detection limit of the sensor is improved by up to a factor of ten.
The scotogenic model is one of the simplest scenarios for physics beyond the Standard Model that can account for neutrino masses and dark matter at the TeV scale. It contains an additional scalar doublet and three singlet fermions (N_i), all odd under a Z_2 symmetry. In this paper, we examine the possibility that the dark matter candidate, N_1, does not reach thermal equilibrium in the early Universe, so that it behaves as a Feebly Interacting Massive Particle (FIMP). In that case, the freeze-in production of dark matter is found to be entirely dominated by the decays of the odd scalars. We compute the resulting dark matter abundance and study its dependence on the parameters of the model. The freeze-in mechanism is shown to be able to account for the observed relic density over a wide range of dark matter masses, from the keV to the TeV scale. In addition to freeze-in, the N_1 relic density receives a further contribution from the late decay of the next-to-lightest odd particle, which we also analyze. Finally, we consider the possibility that the dark matter particle is a WIMP but receives an extra contribution to its relic density from the decay of the FIMP (N_1). In this case, important signals at direct and indirect detection experiments are generally expected.
We extend the Abrams-Strogatz model for competition between two languages [Nature 424, 900 (2003)] to the case of n(>=2) competing states (i.e., languages). Although the Abrams-Strogatz model for n=2 can be interpreted as modeling either majority preference or minority aversion, the two mechanisms are distinct when n>=3. We find that the condition for the coexistence of different states is independent of n under the pure majority preference, whereas it depends on n under the pure minority aversion. We also show that the stable coexistence equilibrium and stable monopoly equilibria can be multistable under the minority aversion and not under the majority preference. Furthermore, we obtain the phase diagram of the model when the effects of the majority preference and minority aversion are mixed, under the condition that different states have the same attractiveness. We show that the multistability is a generic property of the model facilitated by large n.
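A hedged sketch of this kind of mean-field dynamics (the paper's exact transition rates, and its minority-aversion variant and mixtures, may differ): $n$ states with equal attractiveness and majority-preference transition rates proportional to $x_j^a$, integrated with an Euler step:

```python
# Sketch of n-state competition with rates P(i -> j) = s_j * x_j**a and equal
# attractiveness s_j = 1/n (majority preference); exact forms follow the paper.
import numpy as np

def step(x, a=1.31, dt=0.01):
    n = len(x)
    s = np.ones(n) / n
    gain = x.sum() * s * x**a        # inflow into each state j
    loss = x * (s * x**a).sum()      # outflow from each state i
    return x + dt * (gain - loss)    # self-transition terms cancel here

x = np.array([0.36, 0.32, 0.32])     # slightly unequal initial shares
for _ in range(200_000):
    x = step(x)
print(np.round(x, 3))                # with a > 1 the largest state takes over
```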
We investigate necessary conditions of optimality for the Bolza-type infinite horizon problem with free right end. The optimality is understood in the sense of weakly uniformly overtaking optimal control. No prior knowledge of the asymptotic behaviour of trajectories or adjoint variables is necessary. Following Seierstad's idea, we obtain the necessary boundary condition at infinity in the form of a transversality condition for the maximum principle. These transversality conditions may be expressed in integral form through Aseev--Kryazhimskii-type formulae for co-state arcs. The connection between these formulae and limiting gradients of the payoff function at infinity is identified; several conditions under which it is possible to explicitly specify the co-state arc through those Aseev--Kryazhimskii-type formulae are found. For the infinite horizon problem of Bolza type, an example is given to clarify the use of the Aseev--Kryazhimskii formula as an explicit expression for the co-state arc.
Considering galaxies as self-gravitating systems of many collisionless particles allows one to use the methods of statistical mechanics to infer the distribution function of these stellar systems. However, the long-range nature of the gravitational force contrasts with the underlying assumptions of Boltzmann statistics, where the interactions among particles are assumed to be short-ranged. A particular generalization of the classical Boltzmann formalism is available within the nonextensive context of Tsallis q-statistics, subject to the non-additivity of the entropies of sub-systems. Assuming stationarity and isotropy in velocity space, it is possible to solve the generalized collisionless Boltzmann equation to derive the galaxy distribution function and density profile. We present a particular set of nonextensive models and investigate their dynamical and observable properties. As a test of the viability of this generalized context, we fit the rotation curve of M33, showing that the proposed approach leads to dark matter haloes in excellent agreement with the observed data.
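For reference, the Tsallis construction replaces the Boltzmann exponential with the $q$-exponential (a standard definition, which recovers the ordinary exponential as $q\to 1$; the specific distribution functions and density profiles are derived in the paper):

```latex
% Standard q-exponential underlying Tsallis statistics; q -> 1 gives e^x.
e_q(x) \equiv \left[1 + (1-q)\,x\right]^{\frac{1}{1-q}}, \qquad \lim_{q\to 1} e_q(x) = e^{x}
```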
The process e+e- --> p anti-p gamma is studied using 469 fb-1 of integrated luminosity collected with the BABAR detector at the PEP-II collider, at an e+e- center-of-mass energy of 10.6 GeV. From the analysis of the p anti-p invariant mass spectrum, the energy dependence of the cross section for e+e- --> p anti-p is measured from threshold to 4.5 GeV. The energy dependence of the ratio of electric and magnetic form factors, |G_E/G_M|, and the asymmetry in the proton angular distribution are measured for p anti-p masses below 3 GeV. We also measure the branching fractions for the decays J/psi --> p anti-p and psi(2S) --> p anti-p.
The optical properties of a hexagonal Boron Nitride (BN) monolayer across the UV spectrum are studied by tuning its planar buckling. The strong $\sigma\text{-}\sigma$ bond through sp$^2$ hybridization of a flat BN monolayer can be changed to a stronger $\sigma\text{-}\pi$ bond through sp$^3$ hybridization by increasing the planar buckling. This gives rise to $s$- and $p$-orbital contributions that form a density of states around the Fermi energy, and these states shift to lower energy in the presence of increased planar buckling. Consequently, the wide band gap of a flat BN monolayer is reduced to a smaller band gap in a buckled BN monolayer, enhancing its optical activity in the Deep-UV region. The optical properties such as the dielectric function, the reflectivity, the absorption, and the optical conductivity spectra are investigated. It is shown that the absorption rate can be enhanced by $(12\text{-}15)\%$ for intermediate values of planar buckling in the Deep-UV region, and $(15\text{-}20)\%$ at higher values of planar buckling in the near-UV region. Furthermore, the optical conductivity is enhanced by increased planar buckling in both the visible and the Deep-UV regions depending on the direction of the polarization of the incoming light. Our results may be useful for optoelectronic BN monolayer devices in the UV range including UV spectroscopy, deep-UV communications, and UV photodetectors.
We propose a nested weighted Tchebycheff multi-objective Bayesian optimization (MOBO) framework in which a regression model selection procedure is built from an ensemble of models, towards better estimation of the uncertain parameters of the weighted-Tchebycheff expensive black-box multi-objective function. In existing work, a weighted Tchebycheff MOBO approach has been demonstrated which attempts to estimate the unknown utopia point when formulating the acquisition function, through calibration using an a priori selected regression model. However, the existing MOBO model lacks flexibility in selecting the appropriate regression model given the guided sampled data and therefore can under-fit or over-fit as the iterations of the MOBO progress, reducing the overall MOBO performance. As it is too complex to guarantee a best model a priori in general, this motivates us to consider a portfolio of different families of predictive models fitted with the current training data, guided by the weighted Tchebycheff MOBO; the best model is selected following a user-defined prediction root-mean-square-error-based approach. The proposed approach is implemented in optimizing a multi-modal benchmark problem and a thin tube design under constant loading of temperature and pressure, minimizing the risk of creep-fatigue failure and design cost. Finally, the performance of the nested weighted Tchebycheff MOBO model is compared with different MOBO frameworks with respect to accuracy in parameter estimation, Pareto-optimal solutions and function evaluation cost. The method is general enough to consider different families of predictive models in the portfolio for best model selection, and the overall design architecture allows for solving high-dimensional (multiple functions) complex black-box problems and can be extended to other global criterion multi-objective optimization methods where prior knowledge of the utopia point is required.
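The scalarization at the heart of the framework can be sketched as follows (our minimal illustration; in the paper the utopia point is unknown and is estimated via the calibrated regression models, and the nested model-selection loop is omitted here):

```python
# Weighted Tchebycheff scalarization: score a design by the worst weighted
# deviation of its objectives from the utopia point z_star; smaller is better.
import numpy as np

def weighted_tchebycheff(f_vals, weights, z_star):
    return np.max(weights * np.abs(np.asarray(f_vals) - np.asarray(z_star)))

# Two objectives with an assumed utopia point (0, 0) and equal weights.
designs = {"A": (1.0, 4.0), "B": (2.5, 2.5), "C": (4.0, 1.0)}
w = np.array([0.5, 0.5])
scores = {k: weighted_tchebycheff(v, w, (0.0, 0.0)) for k, v in designs.items()}
print(min(scores, key=scores.get))   # "B": the balanced trade-off wins here
```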
A wealth of X-ray and radio observations has revealed in the past decade a growing diversity of neutron stars (NSs) with properties spanning orders of magnitude in magnetic field strength and ages, and with emission processes explained by a range of mechanisms dictating their radiation properties. However, serious difficulties exist with the magneto-dipole model of isolated neutron star fields and their inferred ages, such as a large range of observed braking indices ($n$, with values often $<$3) and a mismatch between the neutron star and associated supernova remnant (SNR) ages. This problem arises primarily from the assumptions of a constant magnetic field with $n$=3, and an initial spin period that is much smaller than the observed current period. It has been suggested that a solution to this problem involves magnetic field evolution, with some NSs having magnetic fields buried within the crust by accretion of fall-back supernova material following their birth. In this work we explore a parametric phenomenological model for magnetic field growth that generalizes previously suggested field evolution functions, and apply it to a variety of NSs with both secure SNR associations and known ages. We explore the flexibility of the model by recovering the results of previous work on buried magnetic fields in young neutron stars. Our model fits suggest that apparently disparate classes of NSs may be related to one another through the time-evolution of the magnetic field.
We obtain some interesting results about the parity of the Fourier coefficients of hauptmoduln $j_{N}(z)$ and $j_{N}^{+}(z),$ for some positive integers $N$. We use elementary methods and the techniques of O. Kolberg's proof for the parity of the partition function.
We report the existence of broad and weakly asymmetric features in the high-energy (G) Raman modes of freely suspended metallic carbon nanotubes of defined chiral index. A significant variation in peak width (from 12 cm-1 to 110 cm-1) is observed as a function of the nanotube's chiral structure. When the nanotubes are electrostatically gated, the peak widths decrease. The broadness of the Raman features is understood as the consequence of coupling of the phonon to electron-hole pairs, the strength of which varies with the nanotube chiral index and the position of the Fermi energy.
Integrative analysis of data from multiple sources is critical to making generalizable discoveries. Associations that are consistently observed across multiple source populations are more likely to be generalized to target populations with possible distributional shifts. In this paper, we model the heterogeneous multi-source data with multiple high-dimensional regressions and make inferences for the maximin effect (Meinshausen, B{\"u}hlmann, AoS, 43(4), 1801--1830). The maximin effect provides a measure of stable associations across multi-source data. A significant maximin effect indicates that a variable has commonly shared effects across multiple source populations, and these shared effects may be generalized to a broader set of target populations. There are challenges associated with inferring maximin effects because its point estimator can have a non-standard limiting distribution. We devise a novel sampling method to construct valid confidence intervals for maximin effects. The proposed confidence interval attains a parametric length. This sampling procedure and the related theoretical analysis are of independent interest for solving other non-standard inference problems. Using genetic data on yeast growth in multiple environments, we demonstrate that the genetic variants with significant maximin effects have generalizable effects under new environments.
Robots operating in households must find objects on shelves, under tables, and in cupboards. In such environments, it is crucial to search efficiently in 3D while coping with a limited field of view and the complexity of searching for multiple objects. Principled approaches to object search frequently use a Partially Observable Markov Decision Process (POMDP) as the underlying framework for computing search strategies, but constrain the search space to 2D. In this paper, we present a POMDP formulation for multi-object search in a 3D region with a frustum-shaped field of view. To efficiently solve this POMDP, we propose a multi-resolution planning algorithm based on online Monte-Carlo tree search. In this approach, we design a novel octree-based belief representation to capture uncertainty about the target objects at different resolution levels, then derive abstract POMDPs at lower resolutions with dramatically smaller state and observation spaces. Evaluation in a simulated 3D domain shows that our approach finds objects more efficiently and more reliably than a set of baselines without the resolution hierarchy in larger instances under the same computational requirements. We demonstrate our approach on a mobile robot, finding objects placed at different heights in two 10m$^2 \times 2$m regions by moving its base and actuating its torso.
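The aggregation behind an octree-based belief can be sketched as follows (our simplified illustration: the belief of a coarse cell is the sum over its eight children, so lower-resolution abstract POMDPs reuse the same normalized distribution; the paper's representation includes more machinery):

```python
# Sketch: belief over leaf cells of a 3D grid, aggregated octree-style.
import itertools

def coarse_belief(belief, x, y, z, level):
    """Belief of a cell at `level` (0 = finest) by summing its 8^level leaves."""
    if level == 0:
        return belief.get((x, y, z), 0.0)
    return sum(coarse_belief(belief, 2 * x + dx, 2 * y + dy, 2 * z + dz, level - 1)
               for dx, dy, dz in itertools.product((0, 1), repeat=3))

# Uniform belief over a 4x4x4 grid: each level-1 cell aggregates 8 leaves.
belief = {(x, y, z): 1 / 64 for x in range(4) for y in range(4) for z in range(4)}
print(coarse_belief(belief, 0, 0, 0, 1))   # 8/64 = 0.125
print(coarse_belief(belief, 0, 0, 0, 2))   # whole region: 1.0
```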
Many-body theories such as dynamical mean field theory (DMFT) have enabled the description of the electron exchange-correlation interactions that are missing in current density functional theory (DFT) calculations. However, there has been relatively little focus on the wavefunctions from these theories. We present the methodology of the newly developed Elk-TRIQS interface and show how to calculate DFT with DMFT (DFT+DMFT) wavefunctions, which can be used to compute DFT+DMFT wavefunction-dependent quantities. We illustrate this by calculating the electron localization function (ELF) in monolayer SrVO$_3$ and in CaFe$_2$As$_2$, which provides a means of visualizing their chemical bonds. Monolayer SrVO$_3$ ELFs are sensitive to the charge redistribution between the DFT, one-shot DFT+DMFT and fully charge self-consistent DFT+DMFT calculations. In both the tetragonal and collapsed tetragonal CaFe$_2$As$_2$ phases, the ELF changes weakly with the correlation-induced charge redistribution of the hybridized As-p and Fe-d states. Nonetheless, the interlayer As-As bond in the collapsed tetragonal structure is robust to the changes at and around the Fermi level.
We theoretically investigate the electrostatic properties between two charged surfaces, each bearing a grafted polyelectrolyte layer, in an aqueous electrolyte solution using the Poisson-Boltzmann approach accounting for ion partitioning. To consider the ion partitioning effect, we focus on changes in the electrostatic properties due to the difference in dielectric permittivity between the polyelectrolyte layer and the aqueous electrolyte solution. We find that ion partitioning enhances the electrostatic potential in the region between the two charged soft surfaces and hence increases the electrostatic interaction between them. The ion partitioning effect on the osmotic pressure is enhanced not only by an increase in the thickness of the polyelectrolyte layer and the Debye length but also by a decrease in the ion radius.
Soft robot serial chain manipulators with the capability for growth, stiffness control, and discrete joints have the potential to approach the dexterity of traditional robot arms, while improving safety, lowering cost, and providing an increased workspace, with potential application in home environments. This paper presents an approach for design optimization of such robots to reach specified targets while minimizing the number of discrete joints and thus construction and actuation costs. We define a maximum number of allowable joints, as well as hardware constraints imposed by the materials and actuation available for soft growing robots, and we formulate and solve an optimization problem to output a planar robot design, i.e., the total number of potential joints and their locations along the robot body, which reaches all the desired targets, avoids known obstacles, and maximizes the workspace. We demonstrate a process to rapidly construct the resulting soft growing robot design. Finally, we use our algorithm to evaluate the ability of this design to reach new targets and demonstrate the algorithm's utility as a design tool to explore robot capabilities given various constraints and objectives.
Given a finitely generated free monoid $X$ and a morphism $\phi : X\to X$, we show that one can construct an algebra, which we call an iterative algebra, in a natural way. We show that many ring theoretic properties of iterative algebras can be easily characterized in terms of linear algebra and combinatorial data from the morphism and that, moreover, it is decidable whether or not an iterative algebra has these properties. Finally, we use our construction to answer several questions of Greenfeld, Leroy, Smoktunowicz, and Ziembowski by constructing a primitive graded nilpotent algebra with Gelfand-Kirillov dimension two that is finitely generated as a Lie algebra.
We measure the imprint of baryon acoustic oscillations (BAOs) in the galaxy clustering pattern at the highest redshift achieved to date, z=0.6, using the distribution of N=132,509 emission-line galaxies in the WiggleZ Dark Energy Survey. We quantify BAOs using three statistics: the galaxy correlation function, power spectrum and the band-filtered estimator introduced by Xu et al. (2010). The results are mutually consistent, corresponding to a 4.0% measurement of the cosmic distance-redshift relation at z=0.6 (in terms of the acoustic parameter "A(z)" introduced by Eisenstein et al. (2005) we find A(z=0.6) = 0.452 +/- 0.018). Both BAOs and power spectrum shape information contribute toward these constraints. The statistical significance of the detection of the acoustic peak in the correlation function, relative to a wiggle-free model, is 3.2-sigma. The ratios of our distance measurements to those obtained using BAOs in the distribution of Luminous Red Galaxies at redshifts z=0.2 and z=0.35 are consistent with a flat Lambda Cold Dark Matter model that also provides a good fit to the pattern of observed fluctuations in the Cosmic Microwave Background (CMB) radiation. The addition of the current WiggleZ data results in a ~ 30% improvement in the measurement accuracy of a constant equation-of-state, w, using BAO data alone. Based solely on geometric BAO distance ratios, accelerating expansion (w < -1/3) is required with a probability of 99.8%, providing a consistency check of conclusions based on supernovae observations. Further improvements in cosmological constraints will result when the WiggleZ Survey dataset is complete.
This review summarises results of recent magnetic and chemical abundance surface mapping studies of early-type stars. We discuss main trends uncovered by observational investigations and consider reliability of spectropolarimetric inversion techniques used to infer these results. A critical assessment of theoretical attempts to interpret empirical magnetic and chemical maps in the framework of, respectively, the fossil field and atomic diffusion theories is also presented. This confrontation of theory and observations demonstrates that 3D MHD models of fossil field relaxation are successful in matching the observed range of surface magnetic field geometries. At the same time, even the most recent time-dependent atomic diffusion calculations fail to reproduce diverse horizontal abundance distributions found in real magnetic hot stars.
We derive a Voronoi-type series approximation for the local weighted mean of an arithmetical function that is associated to Dirichlet series satisfying a functional equation with gamma factors. The series is exploited to study the oscillation frequency with a method of Heath-Brown and Tsang [7]. A by-product is another proof of the well-known result that there is no element in the Selberg class of degree $0 < d < 1$. Our major applications include the sign-change problem for the coefficients of automorphic L-functions for GL$_m$, which improves significantly some results of Liu and Wu [14]. The cases of modular forms of half-integral weight and Siegel eigenforms are also considered.
Let $\psi$ be a positive function defined near the origin such that $\lim_{t\to 0^{+}}\psi(t)=0$. We consider the operator \begin{equation*} T_\theta f(x) = \lim_{\varepsilon\to 0^+} \int_\varepsilon^1 e^{i\gamma(t)}f(x-t) \frac{dt}{t^{\theta}\psi(t)^{1-\theta}}, \end{equation*} where $\gamma$ is a real function with $\lim_{t\to 0^+}|\gamma(t)| = \infty$ and $0 \le \theta \le 1$. Assuming certain regularity and growth conditions on $\psi$ and $\gamma$, we show that $T_1$ is of weak type $(1,1)$.