text | summary
---|---|
Over the past few years, intensive studies of ultrathin epitaxial films of perovskite oxides have often revealed exciting properties like giant magnetoresistive tunnelling and electric field effects. Spinel oxides appear even more versatile owing to their more complex structure and the resulting many degrees of freedom. Here we show that the epitaxial growth of nanometric NiFe2O4 films onto perovskite substrates allows the stabilization of novel ferrite phases with properties differing dramatically from bulk ones. Indeed, NiFe2O4 films a few nanometres thick have a saturation magnetization at least twice that of the bulk compound, and their resistivity can be tuned by orders of magnitude depending on the growth conditions. By integrating such thin NiFe2O4 layers into spin-dependent tunnelling heterostructures, we demonstrate that this versatile material can be useful for spintronics, either as a conductive electrode in magnetic tunnel junctions or as a spin-filtering insulating barrier in the little-explored type of tunnel junction called the spin filter. Our findings thus open the way for the realisation of monolithic spintronics architectures integrating several layers of a single material, where the layers are functionalised in a controlled manner.
|
Spinel ferrites: old materials bring new opportunities for spintronics
|
We identify the quantum numbers of baryon exotics in the Quark Model, the Skyrme Model and QCD, and show that they agree for arbitrary colors and flavors. We define exoticness, E, which can be used to classify the states. The exotic baryons include the recently discovered qqqq qbar pentaquarks (E=1), as well as exotic baryons with additional q qbar pairs (E >=1). The mass formula for non-exotic and exotic baryons is given as an expansion in 1/N, and allows one to relate the moment of inertia of the Skyrme soliton to the mass of a constituent quark.
|
Baryon Exotics in the Quark Model, the Skyrme Model and QCD
|
We report on the fabrication of large-scale arrays of suspended molybdenum disulfide (MoS2) atomic layers, as two-dimensional (2D) MoS2 nanomechanical resonators. We employ a water-assisted lift-off process to release chemical vapor deposited (CVD) MoS2 atomic layers from a donor substrate, followed by an all-dry transfer onto microtrench arrays. The resultant large arrays of suspended single- and few-layer MoS2 drumhead resonators (0.5 to 2 um in diameter) offer fundamental resonances (f_0) in the very high frequency (VHF) band (up to ~120 MHz) and excellent figures of merit up to f_0*Q ~ 3*10^10 Hz. A stretched circular diaphragm model allows us to estimate low pre-tension levels of typically ~15 mN/m in these devices. Compared to previous approaches, our transfer process features high yield and uniformity with minimal liquid and chemical exposure (only involving DI water), resulting in high-quality MoS2 crystals and exceptional device performance and homogeneity; our process is also readily applicable to other 2D materials.
|
Large-Scale Arrays of Single- and Few-Layer MoS2 Nanomechanical Resonators
|
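The stretched circular diaphragm model mentioned in the MoS2 resonator abstract above can be inverted to estimate pre-tension from a measured fundamental frequency. A minimal sketch, assuming the standard tensioned circular-membrane relation f_0 = (mu_01 / (2*pi*a)) * sqrt(T / sigma) and an illustrative monolayer areal density; the numeric inputs are assumptions, not values from the paper:

```python
import math

def pretension(f0_hz, radius_m, areal_density_kg_m2):
    """Invert the tensioned circular-membrane relation
    f0 = (mu01 / (2*pi*a)) * sqrt(T / sigma) for the pre-tension T (N/m)."""
    mu01 = 2.4048  # first zero of the Bessel function J0
    return areal_density_kg_m2 * (2 * math.pi * radius_m * f0_hz / mu01) ** 2

# Illustrative numbers (assumed): monolayer-MoS2 areal mass density
# ~3.3e-6 kg/m^2, a 1-um-diameter drum, f0 = 60 MHz.
T = pretension(60e6, 0.5e-6, 3.3e-6)  # about 20 mN/m with these inputs
```

With these assumed numbers the estimate lands in the tens of mN/m, the same order as the ~15 mN/m quoted in the abstract.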
We give a short proof of Bose Einstein Condensation of dilute Bose gases on length scales much longer than the Gross-Pitaevskii scale.
|
Length scales for BEC in the dilute Bose gas
|
In recent years, the assessment of fundamental movement skills integrated with physical education has focused on both teaching practice and the feasibility of assessment. The object of assessment has shifted from multiple ages to subdivided ages, while the content of assessment has changed from complex and time-consuming to concise and efficient. We therefore apply deep learning to physical fitness evaluation and propose the CAMSA Physical Fitness Evaluation System (CPFES), which evaluates children's physical fitness based on the Canadian Agility and Movement Skill Assessment (CAMSA) and gives recommendations based on the scores obtained by CPFES to help children grow. We have designed a landmark detection module and a pose estimation module, as well as a pose evaluation module for the CAMSA criteria that can effectively evaluate the actions of the child being tested. Our experimental results demonstrate the high accuracy of the proposed system.
|
CPFES: Physical Fitness Evaluation Based on Canadian Agility and Movement Skill Assessment
|
We present measurements of the optical and X-ray continua of 108 AGN (Seyfert 1s and quasars) from the Rosat International X-ray/Optical Survey (RIXOS). The sample covers a wide range in redshift (0<z<3.3), in X-ray spectral slope (-1.5<ax<2.6) and in optical-to-X-ray ratio (0.4<aox<1.5). A correlation is found between ax and aox; similar correlations have recently been reported in other X-ray and optical samples. We also identify previously unreported relationships between the optical slope (aopt) and ax (particularly at high redshifts) and between aopt and aox. These trends show that the overall optical-to-X-ray continuum changes from convex to concave as ax hardens, demonstrating a strong behavioural link between the optical/UV big blue bump (BBB) and the soft X-ray excess, which is consistent with them being part of the same spectral component. By constructing models of the optical-to-X-ray continuum, we demonstrate that the observed correlations are consistent with an intrinsic spectrum which is absorbed through different amounts of cold gas and dust. The intrinsic spectrum is the sum of an optical-to-soft X-ray `big bump' component and an ax=1 power law; the column density of the cold gas ranges from 0 to ~4E21 cm-2 and the dust-to-gas ratio is assumed to be Galactic. The `big bump' may be represented by a T~1E6 K thermal bremsstrahlung or an accretion disk with a surrounding hot corona. The scatter in the data can accommodate a wide range in big bump temperature (or black hole mass) and strength. A source for the absorbing gas may be the dusty, molecular torus which lies beyond the broad line-emitting regions, although with a much lower column density than observed in Seyfert 2 galaxies. Alternatively, it may be the bulge of a spiral host galaxy or an elliptical host galaxy.
|
Optical and X-ray properties of the RIXOS AGN: I - The continua
|
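The optical-to-X-ray ratio aox used in the RIXOS abstract above is a two-point spectral index between the optical and X-ray bands. A minimal sketch, assuming the common convention alpha = -log(f_x/f_o)/log(nu_x/nu_o) with the standard 2500 A and 2 keV anchor points; the survey's exact definition may differ in sign convention or anchors:

```python
import math

def two_point_index(f_o, nu_o, f_x, nu_x):
    """Two-point spectral index alpha between (f_o, nu_o) and (f_x, nu_x),
    defined via f_nu ~ nu**(-alpha)."""
    return -math.log10(f_x / f_o) / math.log10(nu_x / nu_o)

# Frequencies of the usual aox anchor points: 2500 A and 2 keV.
nu_o = 2.998e8 / 2500e-10                 # ~1.2e15 Hz
nu_x = 2.0e3 * 1.602e-19 / 6.626e-34      # ~4.8e17 Hz

# Sanity check: a flux ratio of (nu_x/nu_o)**-1 must give aox = 1 exactly.
f_o = 1.0
f_x = f_o * (nu_x / nu_o) ** (-1.0)
aox = two_point_index(f_o, nu_o, f_x, nu_x)
```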
The standard relativistic mean-field model is extended by including dynamical effects that arise in the coupling of single-nucleon motion to collective surface vibrations. A phenomenological scheme, based on a linear ansatz for the energy dependence of the scalar and vector components of the nucleon self-energy for states close to the Fermi surface, allows a simultaneous description of binding energies, radii, deformations and single-nucleon spectra in a self-consistent relativistic framework. The model is applied to the spherical, doubly closed-shell nuclei 132Sn and 208Pb.
|
Beyond the relativistic Hartree mean-field approximation: energy dependent effective mass
|
We introduce a new class of Green-Naghdi type models for the propagation of internal waves between two (1+1)-dimensional layers of homogeneous, immiscible, ideal, incompressible, irrotational fluids, vertically delimited by a flat bottom and a rigid lid. These models are tailored to improve the frequency dispersion of the original bi-layer Green-Naghdi model, and in particular to manage high-frequency Kelvin-Helmholtz instabilities, while maintaining its precision in the sense of consistency. Our models preserve the Hamiltonian structure, symmetry groups and conserved quantities of the original model. We provide a rigorous justification of a class of our models thanks to consistency, well-posedness and stability results. These results apply in particular to the original Green-Naghdi model as well as to the Saint-Venant (hydrostatic shallow-water) system with surface tension.
|
A new class of two-layer Green-Naghdi systems with improved frequency dispersion
|
We investigate the hydrodynamics of three-dimensional classical Bondi-Hoyle accretion. A totally absorbing sphere of different sizes (1, 0.1 and 0.02 accretion radii) exerts gravity on and moves at different Mach numbers (0.6, 1.4, 3.0 and 10) relative to a homogeneous and slightly perturbed medium, which is taken to be an ideal gas ($\gamma=4/3$). We examine the influence of Mach number of the flow and size of the accretor upon the physical behaviour of the flow and the accretion rates. The hydrodynamics is modeled by the ``Piecewise Parabolic Method'' (PPM). The resolution in the vicinity of the accretor is increased by multiply nesting several $32^3$-zone grids around the sphere, each finer grid being a factor of two smaller in zone dimension than the next coarser grid. This allows us to include a coarse model for the surface of the accretor (vacuum sphere) on the finest grid while at the same time evolving the gas on the coarser grids. For small Mach numbers (0.6 and~1.4) the flow patterns tend towards a steady state, while in the case of supersonic flow (Mach~3 and~10) and small enough accretors, (radius of~0.1 and~0.02 accretion radii) an unstable Mach cone develops, destroying axisymmetry. Our 3D models do not show the highly dynamic flip-flop flow so prominent in 2D calculations performed by other authors. In the gamma=4/3 models, the shock front remains closer to the accretor and the mass accretion rates are higher than in the gamma=5/3 models, whereas the rms of the specific angular momentum accreted does not change.
|
THREE-DIMENSIONAL HYDRODYNAMIC BONDI-HOYLE ACCRETION. IV. SPECIFIC HEAT RATIO 4/3.
|
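The accretor sizes in the Bondi-Hoyle abstract above are quoted in units of the accretion radius. A hedged sketch of the classical scales, assuming the textbook expressions for R_acc and the interpolated Bondi-Hoyle-Lyttleton rate; the simulations themselves solve the full hydrodynamics, and the numeric inputs below are illustrative, not from the paper:

```python
import math

G = 6.674e-11  # gravitational constant, SI units

def accretion_radius(M, v_inf, c_s):
    """Accretion radius R_acc = 2GM / (v_inf^2 + c_s^2): the length unit
    in which the accretor sizes (1, 0.1, 0.02 R_acc) are quoted."""
    return 2.0 * G * M / (v_inf**2 + c_s**2)

def bhl_rate(M, rho, v_inf, c_s):
    """Interpolated Bondi-Hoyle-Lyttleton rate,
    Mdot ~ 4*pi*G^2*M^2*rho / (v_inf^2 + c_s^2)^(3/2)."""
    return 4.0 * math.pi * G**2 * M**2 * rho / (v_inf**2 + c_s**2) ** 1.5

# Illustrative values (assumed): 1 Msun accretor, rho = 1e-21 kg/m^3,
# sound speed 10 km/s, and the four Mach numbers studied in the paper.
Msun, rho, cs = 1.989e30, 1e-21, 1e4
radii = {mach: accretion_radius(Msun, mach * cs, cs)
         for mach in (0.6, 1.4, 3.0, 10.0)}
```

Both scales shrink as the relative Mach number grows, which is why the supersonic runs probe a proportionally smaller accretor.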
We review recent developments in the understanding of meson properties as solutions of the Bethe-Salpeter equation in rainbow-ladder truncation. Included are recent results for the pseudoscalar and vector meson masses and leptonic decay constants, ranging from pions up to c\bar{c} bound states; extrapolation to b\bar{b} states is explored. We also present a new and improved calculation of F_\pi(Q^2) and an analysis of the \pi\gamma\gamma transition form factor for both \pi(140) and \pi(1330). Lattice-QCD results for propagators and the quark-gluon vertex are analyzed, and the effects of quark-gluon vertex dressing and the three-gluon coupling upon meson masses are considered.
|
QCD modeling of hadron physics
|
We present deep optical and X-ray follow-up observations of the bright unassociated Fermi-LAT gamma-ray source 1FGL J1311.7-3429. The source was already known as an unidentified EGRET source (3EG J1314-3431, EGR J1314-3417), hence its nature has remained uncertain for the past two decades. For the putative counterpart, we detected a quasi-sinusoidal optical modulation of delta_m\sim2 mag with a period of ~1.5 hr in the Rc, r' and g' bands. Moreover, we found that the amplitude of the modulation and the peak intensity changed by > 1 mag and 0.5 mag respectively, over our total of six nights of observations in 2012 March and May. Combined with Swift UVOT data, the optical-UV spectrum is consistent with a blackbody temperature, kT \sim1 eV, and an emission volume radius Rbb\sim 1.5x10^4 km. In contrast, deep Suzaku observations conducted in 2009 and 2011 revealed strong X-ray flares with a lightcurve characterized by a power spectrum density P(f) propto f^(-2), but the folded X-ray light curves suggest an orbital modulation also in X-rays. Together with the non-detection of a radio counterpart, the significantly curved gamma-ray spectrum and the non-detection of gamma-ray variability, the source may be the second radio-quiet gamma-ray emitting millisecond pulsar candidate after 1FGL J2339.7-0531, although the origin of the flaring X-ray and optical variability remains an open question.
|
Toward Identifying the Unassociated Gamma-ray Source 1FGL J1311.7-3429 with X-ray and Optical Observations
|
Retrieval-based language models (LMs) have demonstrated improved interpretability, factuality, and adaptability compared to their parametric counterparts by incorporating retrieved text from external datastores. While it is well known that parametric models are prone to leaking private data, it remains unclear how the addition of a retrieval datastore impacts model privacy. In this work, we present the first study of privacy risks in retrieval-based LMs, particularly $k$NN-LMs. Our goal is to explore the optimal design and training procedure in domains where privacy is of concern, aiming to strike a balance between utility and privacy. Crucially, we find that $k$NN-LMs are more susceptible to leaking private information from their private datastore than parametric models. We further explore mitigations of privacy risks. When private information is targeted and readily detected in the text, we find that a simple sanitization step completely eliminates the risks, while decoupling the query and key encoders achieves an even better utility-privacy trade-off. Otherwise, we consider strategies of mixing public and private data in both the datastore and encoder training. While these methods offer modest improvements, they leave considerable room for future work. Together, our findings provide insights for practitioners to better understand and mitigate privacy risks in retrieval-based LMs. Our code is available at: https://github.com/Princeton-SysML/kNNLM_privacy .
|
Privacy Implications of Retrieval-Based Language Models
|
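A kNN-LM of the kind studied in the privacy abstract above interpolates a parametric next-token distribution with a distribution built from retrieved datastore entries; the retrieved tokens come verbatim from the datastore, which is the root of the leakage risk. A toy sketch with illustrative names and a made-up neighbor list (the real model retrieves nearest neighbors over dense key vectors):

```python
import math

def knn_lm_interpolate(p_lm, neighbors, lam=0.25, temperature=1.0):
    """Interpolate a parametric LM distribution with a kNN distribution built
    from retrieved (token, distance) pairs: p = lam*p_knn + (1-lam)*p_lm."""
    # kNN distribution: softmax over negative distances, aggregated per token.
    weights, z = {}, 0.0
    for token, dist in neighbors:
        w = math.exp(-dist / temperature)
        weights[token] = weights.get(token, 0.0) + w
        z += w
    p_knn = {t: w / z for t, w in weights.items()}
    vocab = set(p_lm) | set(p_knn)
    return {t: lam * p_knn.get(t, 0.0) + (1 - lam) * p_lm.get(t, 0.0)
            for t in vocab}

# Toy example: a 3-token vocabulary and two retrieved neighbors.
p_lm = {"a": 0.7, "b": 0.2, "c": 0.1}
p = knn_lm_interpolate(p_lm, [("b", 0.1), ("c", 0.5)])
```

Because the kNN term upweights exactly what sits in the datastore, a private string stored there can surface in `p` even when the parametric model assigns it low probability.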
Generative models, especially diffusion models (DMs), have achieved promising results for generating feature-rich geometries and advancing foundational science problems such as molecule design. Inspired by the recent huge success of Stable (latent) Diffusion models, we propose a novel and principled method for 3D molecule generation named Geometric Latent Diffusion Models (GeoLDM). GeoLDM is the first latent DM model for the molecular geometry domain, composed of autoencoders encoding structures into continuous latent codes and DMs operating in the latent space. Our key innovation is that for modeling the 3D molecular geometries, we capture its critical roto-translational equivariance constraints by building a point-structured latent space with both invariant scalars and equivariant tensors. Extensive experiments demonstrate that GeoLDM can consistently achieve better performance on multiple molecule generation benchmarks, with up to 7\% improvement for the valid percentage of large biomolecules. Results also demonstrate GeoLDM's higher capacity for controllable generation thanks to the latent modeling. Code is provided at \url{https://github.com/MinkaiXu/GeoLDM}.
|
Geometric Latent Diffusion Models for 3D Molecule Generation
|
Atomically resolved electron energy-loss spectroscopy experiments are commonplace in modern aberration-corrected transmission electron microscopes. Energy resolution has also been increasing steadily with the continuous improvement of electron monochromators. Electronic excitations, however, are known to be delocalised due to the long-range interaction of the charged accelerated electrons with the electrons in a sample. This has made several scientists question the value of combined high spatial and energy resolution for mapping interband transitions and possibly phonon excitation in crystals. In this paper we demonstrate experimentally that atomic resolution information is indeed available at very low energy losses around 100 meV, expressed as a modulation of the broadening of the zero loss peak. Careful data analysis allows us to get a glimpse of what are likely phonon excitations with both an energy loss and gain part. These experiments confirm recent theoretical predictions on the strong localisation of phonon excitations as opposed to electronic excitations, and show that a combination of atomic resolution and recent developments in increased energy resolution will offer great benefit for mapping phonon modes in real space.
|
Atomic resolution mapping of phonon excitations in STEM-EELS experiments
|
One of the most puzzling aspects of the high $T_c$ superconductors is the appearance of Fermi arcs in the normal state of the underdoped cuprate materials. These are loci of low energy excitations covering part of the Fermi surface, which suddenly appear above $T_c$ in place of the nodal quasiparticles. Based on a semiclassical theory, we argue that partial Fermi surfaces arise naturally in a d-wave superconductor that is destroyed by thermal phase fluctuations. Specifically, we show that the electron spectral function develops a square root singularity at low frequencies for wave-vectors positioned on the bare Fermi surface. We predict a temperature dependence of the arc length that can partially account for results of recent angle-resolved photoemission (ARPES) experiments.
|
Evolution of the Fermi surface in phase fluctuating d-wave superconductors
|
The explosive growth of bandwidth-hungry Internet applications has led to the rapid development of new generation mobile network technologies that are expected to provide broadband access to the Internet in a pervasive manner. For example, 6G networks are capable of providing high-speed network access by exploiting higher frequency spectrum; high-throughput satellite communication services are also adopted to achieve pervasive coverage in remote and isolated areas. In order to enable seamless access, Integrated Satellite-Terrestrial Communication Networks (ISTCN) have emerged as an important research area. ISTCN aims to provide high speed and pervasive network services by integrating broadband terrestrial mobile networks with satellite communication networks. As terrestrial mobile networks begin to use higher frequency spectrum (between 3 GHz and 40 GHz), which overlaps with that of satellite communication (4 GHz to 8 GHz for C band and 26 GHz to 40 GHz for Ka band), there are both opportunities and challenges. On one hand, satellite terminals can potentially access terrestrial networks in an integrated manner; on the other hand, there will be more congestion and interference in this spectrum, hence more efficient spectrum management techniques are required. In this paper, we propose a new technique to improve spectrum sharing performance by introducing Non-Orthogonal Multiple Access (NOMA) and Cognitive Radio (CR) into the spectrum sharing of ISTCN. In essence, NOMA improves spectrum efficiency by allowing different users to transmit on the same carrier, distinguishing users by their power levels, while CR improves spectrum efficiency through dynamic spectrum sharing. Furthermore, some open research problems and challenges in ISTCN are discussed.
|
Spectrum Sharing for 6G Integrated Satellite-Terrestrial Communication Networks Based on NOMA and Cognitive Radio
|
We compare the structure of star-forming molecular clouds in different regions of Orion A to determine how the column density probability distribution function (N-PDF) varies with environmental conditions such as the fraction of young protostars. A correlation between the N-PDF slope and Class 0 protostar fraction has been previously observed in a low-mass star-formation region (Perseus) by Sadavoy; here we test if a similar correlation is observed in a high-mass star-forming region. We use Herschel data to derive a column density map of Orion A. We use the Herschel Orion Protostar Survey catalog for accurate identification and classification of the Orion A young stellar object (YSO) content, including the short-lived Class 0 protostars (with a $\sim$ 0.14 Myr lifetime). We divide Orion A into eight independent 13.5 pc$^2$ regions; in each region we fit the N-PDF distribution with a power-law, and we measure the fraction of Class 0 protostars. We use a maximum likelihood method to measure the N-PDF power-law index without binning. We find that the Class 0 fraction is higher in regions with flatter column density distributions. We test the effects of incompleteness, YSO misclassification, resolution, and pixel-scale. We show that these effects cannot account for the observed trend. Our observations demonstrate an association between the slope of the power-law N-PDF and the Class 0 fractions within Orion A. Various interpretations are discussed, including timescales based on the Class 0 protostar fraction assuming a constant star-formation rate. The observed relation suggests that the N-PDF can be related to an "evolutionary state" of the gas. If universal, such a relation permits an evaluation of the evolutionary state from the N-PDF power-law index at much greater distances than those accessible with protostar counts. (abridged)
|
Evolution of column density distributions within Orion~A
|
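The maximum likelihood fit of a power-law index without binning mentioned in the Orion A abstract above is commonly done with the continuous (Hill-type) estimator; a sketch under that assumption, checked on synthetic samples drawn by inverse-transform sampling (the paper's exact estimator and column-density units may differ):

```python
import math, random

def mle_powerlaw_index(samples, x_min):
    """Continuous maximum-likelihood (Hill) estimator for alpha in
    p(x) ~ x**(-alpha), x >= x_min, with no binning of the data."""
    tail = [x for x in samples if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

# Check on synthetic data drawn from a known power law.
random.seed(0)
alpha_true, x_min = 2.5, 1.0
data = [x_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
        for _ in range(20000)]
alpha_hat = mle_powerlaw_index(data, x_min)
```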
In this paper, a privacy-preserving image classification method is proposed under the use of ConvMixer models. To protect the visual information of test images, a test image is divided into blocks, and then every block is encrypted by using a random orthogonal matrix. Moreover, a ConvMixer model trained with plain images is transformed by the random orthogonal matrix used for encrypting test images, on the basis of the embedding structure of ConvMixer. The proposed method allows us not only to maintain the same classification accuracy as ConvMixer models used without privacy protection but also to enhance robustness against various attacks compared to conventional privacy-preserving learning.
|
A Privacy Preserving Method with a Random Orthogonal Matrix for ConvMixer Models
|
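The key algebraic fact behind the ConvMixer scheme above is that an orthogonal key cancels between the encrypted block and the transformed embedding weights: (W Q^T)(Q x) = W x because Q^T Q = I. A toy 2x2 illustration using a rotation as the orthogonal matrix; real block sizes and the exact weight transform follow the paper, not this sketch:

```python
import math, random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def random_orthogonal_2d(rng):
    """A random 2x2 rotation: the simplest orthogonal key matrix."""
    t = rng.uniform(0.0, 2.0 * math.pi)
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

rng = random.Random(42)
Q = random_orthogonal_2d(rng)
W = [[0.3, -1.2], [0.7, 0.5]]   # stand-in for patch-embedding weights
x = [[2.0], [-1.0]]             # stand-in for a flattened image block

plain = matmul(W, x)                                     # plain inference
cipher = matmul(matmul(W, transpose(Q)), matmul(Q, x))   # transformed model, encrypted block
```

The two outputs agree, which is why the accuracy of the transformed model on encrypted inputs matches the plain model.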
Spectral leakage from windowing and the picket fence effect from discretization have been standard textbook topics for many decades. Spectral leakage and the picket fence effect cause distortions in the amplitude, frequency, and phase of signals; these have long been of concern, and many attempts have been made to overcome them. This paper proposes two novel decomposition theorems that can totally eliminate spectral leakage and the picket fence effect, and could broaden the knowledge of signal processing. First, two generalized eigenvalue equations are constructed for multifrequency discrete real signals and complex signals. The two decomposition theorems are then proved. On this basis, exact decomposition methods for real and complex signals are proposed. For a noise-free multifrequency real signal with m sinusoidal components, the frequency, amplitude, and phase of each component can be exactly calculated using just 4m-1 discrete values and their second-order derivatives. For a multifrequency complex signal, only 2m-1 discrete values and their first-order derivatives are needed. The numerical experiments show that the proposed methods have very high resolution, and the sampling rate need not obey the Nyquist sampling theorem. With noisy signals, the proposed methods retain extraordinary accuracy.
|
Exact Decomposition of Multifrequency Discrete Real and Complex Signals
|
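The role of derivative information in the decomposition abstract above can be seen already in the single-component (m=1) case: for x(t) = A sin(w t + phi), x'' = -w^2 x, so one sample of the signal and its second derivative fixes the frequency with no window, no FFT, and hence no leakage. A minimal sketch of just this special case; the paper's full method handles m components via generalized eigenvalue equations:

```python
import math

def freq_from_second_derivative(x, x2):
    """For a single sinusoid x(t) = A*sin(w*t + phi), x'' = -w^2 * x,
    so the angular frequency follows from one sample pair (x != 0)."""
    return math.sqrt(-x2 / x)

# Synthetic check: w = 5, sampled at an arbitrary instant.
w, A, phi, t = 5.0, 1.3, 0.4, 0.23
x = A * math.sin(w * t + phi)
x2 = -w * w * A * math.sin(w * t + phi)
w_hat = freq_from_second_derivative(x, x2)
```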
Recently, higher-order topological phases that do not obey the usual bulk-edge correspondence principle have been introduced in electronic insulators and brought into classical systems, featuring in-gap corner/hinge states. So far, second-order topological insulators have been realized in mechanical metamaterials, microwave circuits, topolectrical circuits and acoustic metamaterials. Here, using near-field scanning measurements, we show the direct observation of corner states in second-order topological photonic crystal (PC) slabs consisting of periodic dielectric rods on a perfect electric conductor (PEC). Based on the generalized two-dimensional (2D) Su-Schrieffer-Heeger (SSH) model, we show that the emergence of corner states is rooted in the nonzero edge dipolar polarization rather than a nonzero bulk quadrupole polarization. We demonstrate the topological transition of 2D Zak phases of PC slabs by tuning intra-cell distances between two neighboring rods. We also directly observe in-gap 1D edge states and 0D corner states in the microwave regime. Our work establishes the PC slab as a powerful platform for directly observing topological states, and paves the way to study higher-order photonic topological insulators.
|
Direct observation of corner states in second-order topological photonic crystal slabs
|
In this paper, we develop a unified framework for studying constrained robust optimal control problems with adjustable uncertainty sets. In contrast to standard constrained robust optimal control problems with known uncertainty sets, we treat the uncertainty sets in our problems as additional decision variables. In particular, given a finite prediction horizon and a metric for adjusting the uncertainty sets, we address the question of determining the optimal size and shape of the uncertainty sets, while simultaneously ensuring the existence of a control policy that will keep the system within its constraints for all possible disturbance realizations inside the adjusted uncertainty set. Since our problem subsumes the classical constrained robust optimal control design problem, it is computationally intractable in general. We demonstrate that by restricting the families of admissible uncertainty sets and control policies, the problem can be formulated as a tractable convex optimization problem. We show that our framework captures several families of (convex) uncertainty sets of practical interest, and illustrate our approach on a demand response problem of providing control reserves for a power system.
|
Robust Optimal Control with Adjustable Uncertainty Sets
|
Repeated root Cyclic and Negacyclic codes over Galois rings have been studied much less than their simple root counterparts. This situation is beginning to change. For example, repeated root codes of length $p^s$, where $p$ is the characteristic of the alphabet ring, have been studied under some additional hypotheses. In each one of those cases, the ambient space for the codes has turned out to be a chain ring. In this paper, all remaining cases of cyclic and negacyclic codes of length $p^s$ over a Galois ring alphabet are considered. In these cases the ambient space is a local ring with simple socle but not a chain ring. Nonetheless, by reducing the problem to one dealing with uniserial subambients, a method for computing the Hamming distance of these codes is provided.
|
On the Hamming weight of Repeated Root Cyclic and Negacyclic Codes over Galois Rings
|
The synchronization between two dynamical systems is one of the most appealing phenomena occurring in Nature. Already observed by Huygens in the case of two pendula, it is a current area of research in the case of chaotic systems, with numerous applications in Physics, Biology or Engineering. We present an elementary but detailed exploration of the theory behind this phenomenon, including some graphical animations, with the aid of the free CAS Maxima, but the code can be easily ported to other CASs. The examples used are the Lorenz attractor and a pair of coupled pendula, because these are well-known models of dynamical systems, but the procedures are applicable to any system described by a system of first-order differential equations.
|
Synchronization of dynamical systems: an approach using a Computer Algebra System
|
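The synchronization phenomenon described above can be reproduced in a few lines outside a CAS as well. A minimal Python sketch, assuming Pecora-Carroll complete-replacement coupling (the response subsystem receives the drive's x variable) and plain Euler integration with illustrative parameters; this is one standard scheme, not necessarily the one used in the paper:

```python
# Pecora-Carroll x-drive synchronization of two Lorenz systems (plain Euler).
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0
dt, steps = 0.002, 25000

xd, yd, zd = 1.0, 1.0, 1.0   # drive system
yr, zr = -5.0, 10.0          # response (its x is replaced by the drive's x)

for _ in range(steps):
    dxd = SIGMA * (yd - xd)
    dyd = xd * (RHO - zd) - yd
    dzd = xd * yd - BETA * zd
    # response (y, z) subsystem driven by the drive's x
    dyr = xd * (RHO - zr) - yr
    dzr = xd * yr - BETA * zr
    xd += dt * dxd; yd += dt * dyd; zd += dt * dzd
    yr += dt * dyr; zr += dt * dzr

err = abs(yr - yd) + abs(zr - zd)  # synchronization error after t = 50
```

The error decays to numerical precision because the x-driven (y, z) subsystem of the Lorenz equations has negative conditional Lyapunov exponents.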
Explainability is an important factor to drive user trust in the use of neural networks for tasks with material impact. However, most of the work done in this area focuses on image analysis and does not take into account 3D data. We extend the saliency methods that have been shown to work on image data to deal with 3D data. We analyse the features in point clouds and voxel spaces and show that edges and corners in 3D data are deemed as important features while planar surfaces are deemed less important. The approach is model-agnostic and can provide useful information about learnt features. Driven by the insight that 3D data is inherently sparse, we visualise the features learnt by a voxel-based classification network and show that these features are also sparse and can be pruned relatively easily, leading to more efficient neural networks. Our results show that the Voxception-ResNet model can be pruned down to 5\% of its parameters with negligible loss in accuracy.
|
3D Point Cloud Feature Explanations Using Gradient-Based Methods
|
Effective Retrospective meetings are vital for ensuring productive development processes because they provide the means for Agile software development teams to discuss and decide on future improvements of their collaboration. Retrospective agendas often include activities that encourage sharing ideas and motivate participants to discuss possible improvements. The outcomes of these activities steer the future directions of team dynamics and influence team happiness. However, few empirical evaluations of Retrospective activities are currently available. Additionally, most activities rely on team members' experiences and neglect to take existing project data into account. With this paper we want to make a case for data-driven decision-making principles, which have largely been adopted in other business areas. Towards this goal we review existing Retrospective activities and highlight activities that already use project data as well as activities that could be augmented to take advantage of additional, more subjective data sources. We conclude that data-driven decision-making principles are advantageous, and yet underused, in modern Agile software development. Making use of project data in Retrospective activities would strengthen this principle and is a viable approach, as such data can support teams in making decisions on process improvement.
|
Experience vs Data: A Case for More Data-informed Retrospective Activities
|
We present Lyncs-API, a Python API for Lattice QCD applications currently under development. Lyncs aims to bring several widely used libraries for Lattice QCD under a common framework. Lyncs flexibly links to libraries for CPUs and GPUs in a way that can accommodate additional computing architectures as these arise, achieving performance portability for the calculations while maintaining the same high-level workflow. Lyncs distributes calculations using Dask and mpi4py, with bindings to the libraries automatically generated by cppyy. While Lyncs is designed to allow linking to multiple libraries, we focus on a set of targeted packages that include DDalphaAMG, tmLQCD, QUDA and c-lime. More libraries will be added in the future. We also develop general-purpose tools for facilitating the usage of Python in Lattice QCD and HPC in general. The project is open-source, community-oriented and available on GitHub.
|
Lyncs-API: a Python API for Lattice QCD applications
|
We consider a scattering theory for convolution operators on $\mathcal{H}=\ell^2(\mathbb{Z}^d; \mathbb{C}^n)$ perturbed with a long-range potential $V:\mathbb{Z}^d\to\mathbb{R}^n$. One of the motivating examples is discrete Schr\"odinger operators on $\mathbb{Z}^d$-periodic graphs. We construct time-independent modifiers, so-called Isozaki-Kitada modifiers, and we prove that the corresponding modified wave operators exist and are complete.
|
Construction of Isozaki-Kitada modifiers for discrete Schr\"odinger operators on general lattices
|
In this paper, we focus on a space-time fractional diffusion equation with the generalized Caputo fractional derivative operator and a general space nonlocal operator (with the fractional Laplace operator as a special case). A weak Harnack inequality is established by using a special test function and some properties of the space nonlocal operator. Based on the weak Harnack inequality, a strong maximum principle is obtained, which is an important characterization of fractional parabolic equations. With these tools, we establish a uniqueness result for an inverse source problem on the determination of the temporal component of the inhomogeneous term.
|
Studies on an inverse source problem for a space-time fractional diffusion equation by constructing a strong maximum principle
|
We report the detection of very high-energy gamma-ray emission from the intermediate-frequency-peaked BL Lacertae object W Comae (z=0.102) by VERITAS. The source was observed between January and April 2008. A strong outburst of gamma-ray emission was measured in the middle of March, lasting for only four days. The energy spectrum measured during the two highest flare nights is fit by a power-law and is found to be very steep, with a differential photon spectral index of Gamma = 3.81 +- 0.35_stat +- 0.34_syst. The integral photon flux above 200GeV during those two nights corresponds to roughly 9% of the flux from the Crab Nebula. Quasi-simultaneous Swift observations at X-ray energies were triggered by the VERITAS observations. The spectral energy distribution of the flare data can be described by synchrotron-self-Compton (SSC) or external-Compton (EC) leptonic jet models, with the latter offering a more natural set of parameters to fit the data.
|
VERITAS Discovery of >200GeV Gamma-ray Emission from the Intermediate-frequency-peaked BL Lac Object W Comae
|
We analyze rigorously the dynamics of the entanglement between two qubits which interact only through collective and local environments. Our approach is based on the resonance perturbation theory which assumes a small interaction between the qubits and the environments. The main advantage of our approach is that the expressions for (i) characteristic time-scales, such as decoherence, disentanglement, and relaxation, and (ii) observables are not limited by finite times. We introduce a new classification of decoherence times based on clustering of the reduced density matrix elements. The characteristic dynamical properties such as creation and decay of entanglement are examined. We also discuss possible applications of our results for superconducting quantum computation and quantum measurement technologies.
|
Evolution of Entanglement of Two Qubits Interacting through Local and Collective Environments
|
Markets for zero-day exploits (software vulnerabilities unknown to the vendor) have a long history and a growing popularity. We study these markets from a revenue-maximizing mechanism design perspective. We first propose a theoretical model for zero-day exploits markets. In our model, one exploit is being sold to multiple buyers. There are two kinds of buyers, which we call the defenders and the offenders. The defenders are buyers who buy vulnerabilities in order to fix them (e.g., software vendors). The offenders, on the other hand, are buyers who intend to utilize the exploits (e.g., national security agencies and police). Our model is more than a single-item auction. First, an exploit is a piece of information, so one exploit can be sold to multiple buyers. Second, buyers have externalities. If one defender wins, then the exploit becomes worthless to the offenders. Third, if we disclose the details of the exploit to the buyers before the auction, then they may leave with the information without paying. On the other hand, if we do not disclose the details, then it is difficult for the buyers to come up with their private valuations. Considering the above, our proposed mechanism discloses the details of the exploit to all offenders before the auction. The offenders then pay to delay the exploit being disclosed to the defenders.
|
Revenue Maximizing Markets for Zero-Day Exploits
|
Modern Deep Neural Networks (DNNs) have achieved very high performance at the expense of computational resources. To decrease the computational burden, several techniques have been proposed to extract, from a given DNN, efficient subnetworks which are able to preserve performance while reducing the number of network parameters. The literature provides a broad set of techniques to discover such subnetworks, but few works have studied the peculiar topologies of such pruned architectures. In this paper, we propose a novel \emph{unrolled input-aware} bipartite Graph Encoding (GE) that is able to generate, for each layer in either a sparse or dense neural network, its corresponding graph representation based on its relation with the input data. We also extend it into a multipartite GE, to capture the relation between layers. Then, we leverage topological properties to study the differences between existing pruning algorithms and algorithm categories, as well as the relation between topologies and performance.
|
Peeking inside Sparse Neural Networks using Multi-Partite Graph Representations
|
In this study, an early fire detection algorithm is proposed based on a low-cost array sensing system utilizing gas sensors, a dust-particle sensor, and ambient sensors such as temperature and humidity sensors. The odor or smell-print emanating from various fire sources and building construction materials at an early stage is measured. For this purpose, odor profile data from five common fire sources and three common building construction materials were used to develop the classification model. Normalized feature extraction of the smell-print data was performed before the data were passed to the prediction classifier. These features represent the odor signals in the time domain. The obtained features undergo the proposed multi-stage feature selection technique and are lastly further reduced by Principal Component Analysis (PCA), a dimension reduction technique. The hybrid PCA-PNN based approach has been applied on different datasets from the in-house developed system and the portable electronic nose unit. Experimental classification results show that the dimension reduction process performed by PCA has improved the classification accuracy and provided high reliability, regardless of ambient temperature and humidity variation, baseline sensor drift, differing gas concentration levels and exposure to different heating temperature ranges.
|
Multi-Stage Feature Selection Based Intelligent Classifier for Classification of Incipient Stage Fire in Building
|
It is shown that $n$ points and $e$ lines in the complex Euclidean plane ${\mathbb C}^2$ determine $O(n^{2/3}e^{2/3}+n+e)$ point-line incidences. This bound is the best possible, and it generalizes the celebrated theorem by Szemer\'edi and Trotter about point-line incidences in the real Euclidean plane ${\mathbb R}^2$.
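The quantity being bounded can be made concrete with a small brute-force check in the real plane. The sketch below counts point-line incidences for an arbitrarily chosen grid-and-lines configuration (a hypothetical example; the constant hidden in the $O(\cdot)$ bound is omitted, so only the shape of the bound is shown):

```python
from itertools import product

# Points: a 5 x 5 integer grid; lines: y = a*x + b for small integer a, b.
k = 5
points = [(x, y) for x in range(k) for y in range(k)]
lines = [(a, b) for a in range(3) for b in range(5)]  # line y = a*x + b

# An incidence is a (point, line) pair with the point lying on the line.
incidences = sum(1 for (x, y), (a, b) in product(points, lines) if y == a * x + b)

n, e = len(points), len(lines)
bound = n ** (2 / 3) * e ** (2 / 3) + n + e  # shape of the Szemeredi-Trotter bound
print(incidences, n, e, round(bound, 1))     # 49 25 15 92.0
```

Grids of points crossed by families of lines are exactly the configurations that show the $n^{2/3}e^{2/3}$ term is attained up to a constant.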
|
The Szemeredi-Trotter Theorem in the Complex Plane
|
We use a theoretical model of the $\gamma ~d \to ~K^+ K^- ~n ~p $ reaction adapted to the experiment done at LEPS where a peak was observed and associated to the $\Theta^{+}(1540)$ pentaquark. The study shows that the method used in the experiment to associate momenta to the undetected proton and neutron, together with the chosen cuts, necessarily creates an artificial broad peak in the assumed $K^+ n$ invariant mass in the region of the claimed $\Theta^{+}(1540)$, such that the remaining strength seen for the experimental peak is compatible with a fluctuation of 2$\sigma$ significance.
|
A novel interpretation of the "$\Theta^{+}(1540)$ pentaquark" peak
|
We study a relation between Hadamard powers and polynomial kernel perceptrons. The rank of Hadamard powers for the special case of a Boolean matrix and for the generic case of a real matrix is computed explicitly. These results are interpreted in terms of the classification capacities of perceptrons.
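As an illustration of the rank growth quantified in such results, the following sketch computes numerical ranks of entrywise (Hadamard) powers of a rank-2 real matrix $A_{ij} = x_i + y_j$. This is a hypothetical example chosen for illustration: the binomial expansion of $(x_i + y_j)^k$ into $k+1$ rank-one terms shows the $k$-th Hadamard power has rank at most $k+1$, consistent with the general bound $\operatorname{rank}(A^{\circ k}) \le \binom{k+r-1}{k}$ for a rank-$r$ matrix.

```python
import numpy as np

# Build a generically rank-2 matrix A_ij = x_i + y_j from random vectors.
rng = np.random.default_rng(0)
x = rng.normal(size=8)
y = rng.normal(size=8)
A = x[:, None] + y[None, :]

# Numerical rank of the k-th entrywise power, k = 1..4.
ranks = [int(np.linalg.matrix_rank(A ** k)) for k in range(1, 5)]
print(ranks)  # [2, 3, 4, 5] for generic x, y
```

For generic distinct entries the bound $k+1$ is attained, since the Vandermonde-type factors in the expansion have full column rank.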
|
Hadamard Powers and Kernel Perceptrons
|
Quantum Cryptography is a rapidly developing field of research that benefits from the properties of Quantum Mechanics in performing cryptographic tasks. Quantum walks are a powerful model for quantum computation and very promising for quantum information processing. In this paper, we present a quantum public-key cryptographic system based on quantum walks. In particular, in the proposed protocol the public key is given by a quantum state generated by performing a quantum walk. We show that the protocol is secure and analyze the complexity of public-key generation and encryption/decryption procedures.
|
Quantum walks public key cryptographic system
|
There are recent cryptographic protocols that are based on Multiple Simultaneous Conjugacy Problems in braid groups. We improve an algorithm, due to Sang Jin Lee and Eonkyung Lee, to solve these problems, by applying a method developed by the author and Nuno Franco, originally intended to solve the Conjugacy Search Problem in braid groups.
|
Improving an algorithm to solve Multiple Simultaneous Conjugacy Problems in braid groups
|
Let G be a simply connected simple algebraic group over an algebraically closed field K of characteristic p>0 with root system R, and let ${\mathfrak g}={\cal L}(G)$ be its restricted Lie algebra. Let V be a finite dimensional ${\mathfrak g}$-module over K. For any point $v\in V$, the {\it isotropy subalgebra} of $v$ in $\mathfrak g$ is ${\mathfrak g}_v=\{x\in{\mathfrak g}/x\cdot v=0\}$. A restricted ${\mathfrak g}$-module V is called exceptional if for each $v\in V$ the isotropy subalgebra ${\mathfrak g}_v$ contains a non-central element (that is, ${\mathfrak g}_v\not\subseteq {\mathfrak z(\mathfrak g)}$). This work is devoted to classifying irreducible exceptional $\mathfrak g$-modules. A necessary condition for a $\mathfrak g$-module to be exceptional is found and a complete classification of modules over groups of exceptional type is obtained. For modules over groups of classical type, the general problem is reduced to a short list of unclassified modules. The classification of exceptional modules is expected to have applications in modular invariant theory and in classifying modular simple Lie superalgebras.
|
Exceptional representations of simple algebraic groups in prime characteristic
|
We present deep VI photometry of stars in the globular cluster M5 (NGC 5904) based on images taken with the Hubble Space Telescope. The resulting color-magnitude diagram reaches below V ~ 27 mag, revealing the upper 2-3 magnitudes of the white dwarf cooling sequence, and main sequence stars eight magnitudes and more below the turn-off. We fit the main sequence to subdwarfs of known parallax to obtain a true distance modulus of (m-M)_0 = 14.45 +/- 0.11 mag. A second distance estimate based on fitting the cluster white dwarf sequence to field white dwarfs with known parallax yielded (m-M)_0 = 14.67 +/- 0.18 mag. We couple our distance estimates with extensive photometry of the cluster's RR Lyrae variables to provide a calibration of the RR Lyrae absolute magnitude yielding M_V(RR) = 0.42 +/- 0.10 mag at [Fe/H] = -1.11 dex. We provide another luminosity calibration in the form of reddening-free Wasenheit functions. Comparison of our calibrations with predictions based on recent models combining stellar evolution and pulsation theories shows encouraging agreement. (Abridged)
|
Deep Photometry of the Globular Cluster M5: Distance Estimates from White Dwarf and Main Sequence Stars
|
Eavesdropping techniques have advanced from the theoretical to the experimental stage. Current techniques focus on loopholes in the components used, such as the modulator, laser, and detector, all of which are classical components commonly used in communication systems. The "detector blinding" technique introduced by Vadim Makarov et al. exploits the unavailability of a true single-photon detector. The avalanche photodiode (APD) detectors used in almost all quantum systems are vulnerable to such quantum attacks, which limits the ability to detect the presence of an eavesdropper.
|
Avoiding Fake State or Bright Light attack on the Single-Photon Detector
|
Information leakage rate is an intuitive metric that reflects the level of security in a wireless communication system; however, few studies have taken it into consideration. Existing work on information leakage rate has two major limitations due to the complicated expression for the leakage rate: 1) the analytical and numerical results give few insights into the trade-off between system throughput and information leakage rate; and 2) the corresponding optimal designs of transmission rates are not analytically tractable. To overcome such limitations and obtain an in-depth understanding of information leakage rate in secure wireless communications, we propose an approximation for the average information leakage rate in the fixed-rate transmission scheme. Different from the complicated expression for information leakage rate in the literature, our proposed approximation has a low-complexity expression, and hence, it is easy for further analysis. Based on our approximation, the corresponding approximate optimal transmission rates are obtained for two transmission schemes with different design objectives. Through analytical and numerical results, we find that for the system maximizing throughput subject to an information leakage rate constraint, the throughput is an upward convex non-decreasing function of the security constraint and an overly loose security constraint does not contribute to higher throughput; while for the system minimizing information leakage rate subject to a throughput constraint, the average information leakage rate is a lower convex increasing function of the throughput constraint.
|
On Secure Transmission Design: An Information Leakage Perspective
|
We consider approximate dynamic programming in $\gamma$-discounted Markov decision processes and apply it to approximate planning with linear value-function approximation. Our first contribution is a new variant of Approximate Policy Iteration (API), called Confident Approximate Policy Iteration (CAPI), which computes a deterministic stationary policy with an optimal error bound scaling linearly with the product of the effective horizon $H$ and the worst-case approximation error $\epsilon$ of the action-value functions of stationary policies. This improvement over API (whose error scales with $H^2$) comes at the price of an $H$-fold increase in memory cost. Unlike Scherrer and Lesner [2012], who recommended computing a non-stationary policy to achieve a similar improvement (with the same memory overhead), we are able to stick to stationary policies. This allows for our second contribution, the application of CAPI to planning with local access to a simulator and $d$-dimensional linear function approximation. As such, we design a planning algorithm that applies CAPI to obtain a sequence of policies with successively refined accuracies on a dynamically evolving set of states. The algorithm outputs an $\tilde O(\sqrt{d}H\epsilon)$-optimal policy after issuing $\tilde O(dH^4/\epsilon^2)$ queries to the simulator, simultaneously achieving the optimal accuracy bound and the best known query complexity bound, while earlier algorithms in the literature achieve only one of them. This query complexity is shown to be tight in all parameters except $H$. These improvements come at the expense of a mild (polynomial) increase in memory and computational costs of both the algorithm and its output policy.
|
Confident Approximate Policy Iteration for Efficient Local Planning in $q^\pi$-realizable MDPs
|
We define subvarieties of $\mathcal{M}_{0,n}$ equipped with algebraic functions that are solutions to the generic double shuffle equations satisfied by multiple polylogarithms on $\mathcal{M}_{0,n}$.
|
On generic double shuffle relations, localized multiple polylogarithms and algebraic functions
|
The study of five-fold (P even, T odd) correlation in the interaction of slow polarized neutrons with aligned nuclei is a possible way of testing the time reversal invariance due to the expected enhancement of T violating effects in compound resonances. Possible nuclear targets are discussed which can be aligned both dynamically as well as by the "brute force" method at low temperature. A statistical estimation is performed of the five-fold correlation for low lying p wave compound resonances of the $^{121}$Sb, $^{123}$Sb and $^{127}$I nuclei. It is shown that a significant improvement can be achieved for the bound on the intensity of the fundamental parity conserving time violating (PCTV) interaction.
|
Testing T Invariance in the Interaction of Slow Neutrons with Aligned Nuclei
|
Let $G$ be a graph and let $f$ be a positive integer-valued function on $V(G)$. Assume that for all $S\subseteq V(G)$, $$\sum_{v\in I(G\setminus S)}f(v)(f(v)+1)\le |S|,$$ where $I(G\setminus S)$ denotes the set of isolated vertices of $G\setminus S$. In this paper, we show that if for all $S\subseteq V(G)$, $$\omega(G\setminus S)\le \sum_{v\in S}(f(v)-1)+1,$$ and $\sum_{v\in V(G)}f(v)$ is even, then $G$ has a factor $F$ such that for each vertex $v$, $d_F(v)=f(v)$, where $\omega(G\setminus S)$ denotes the number of components of $G\setminus S$. Moreover, we show that if for all $S\subseteq V(G)$, $$\omega(G\setminus S)\le \frac{1}{4}|S|+1,$$ and $f\ge 2$, then $G$ has a connected factor $H$ such that for each vertex $v$, $d_H(v)\in \{f(v),f(v)+1\}$.
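The first hypothesis of the theorem can be checked exhaustively on a tiny graph. The sketch below does so for the complete graph $K_4$ with $f \equiv 1$, a hypothetical choice made purely for illustration (the theorem allows any positive integer-valued $f$ and also requires the second, component-counting condition):

```python
from itertools import combinations

# Check, for every S subseteq V(G), that
#   sum over isolated vertices v of G \ S of f(v)(f(v)+1) <= |S|.
V = list(range(4))
E = [frozenset(p) for p in combinations(V, 2)]  # complete graph K_4
f = {v: 1 for v in V}

def isolated(S):
    """Vertices of G \\ S all of whose neighbours lie in S."""
    S = set(S)
    return [v for v in V if v not in S
            and all(e & S for e in E if v in e)]

ok = all(sum(f[v] * (f[v] + 1) for v in isolated(S)) <= len(S)
         for k in range(len(V) + 1) for S in combinations(V, k))
print(ok)  # True: in K_4 a vertex is isolated only when its 3 neighbours are removed
```

Note that the condition genuinely restricts the graph: on the cycle $C_6$, for instance, removing the three alternate vertices isolates the other three and the inequality fails for $f \equiv 1$.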
|
Factors and connected factors in tough graphs with high isolated toughness
|
Observing a human demonstrator manipulate objects provides a rich, scalable and inexpensive source of data for learning robotic policies. However, transferring skills from human videos to a robotic manipulator poses several challenges, not least a difference in action and observation spaces. In this work, we use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies. Thanks to the diversity of this training data, the learned reward function sufficiently generalizes to image observations from a previously unseen robot embodiment and environment to provide a meaningful prior for directed exploration in reinforcement learning. We propose two methods for scoring states relative to a goal image: through direct temporal regression, and through distances in an embedding space obtained with time-contrastive learning. By conditioning the function on a goal image, we are able to reuse one model across a variety of tasks. Unlike prior work on leveraging human videos to teach robots, our method, Human Offline Learned Distances (HOLD) requires neither a priori data from the robot environment, nor a set of task-specific human demonstrations, nor a predefined notion of correspondence across morphologies, yet it is able to accelerate training of several manipulation tasks on a simulated robot arm compared to using only a sparse reward obtained from task completion.
|
Learning Reward Functions for Robotic Manipulation by Observing Humans
|
In this note, we give a partial answer to the long-open conjecture which states that any closed embeddable strictly pseudoconvex CR $3$-manifold admits a contact form $\theta$ with vanishing CR $Q$-curvature. More precisely, we deform the contact form according to a CR analogue of the $Q$-curvature flow in a closed strictly pseudoconvex CR $3$-manifold $(M,\ J,[\theta_{0}])$ of vanishing first Chern class $c_{1}(T_{1,0}M)$. Suppose that $M$ is embeddable and the CR Paneitz operator $P_{0}$ is nonnegative with kernel consisting of the CR pluriharmonic functions. We show that the solution of the CR $Q$-curvature flow exists for all time and has smooth asymptotic convergence on $M\times \lbrack 0,\infty )$. As a consequence, we are able to affirm the conjecture in a closed strictly pseudoconvex CR $3$-manifold of vanishing first Chern class and vanishing torsion.
|
Global existence and convergence for the CR Q-curvature flow in a closed strictly pseudoconvex CR 3-manifold
|
Network coding has been widely used as a technology to ensure efficient and reliable communication. The ability to recode packets at the intermediate nodes is a major benefit of network coding implementations. This allows the intermediate nodes to choose a different code rate and fine-tune the outgoing transmission to the channel conditions, decoupling the requirement for the source node to compensate for cumulative losses over a multi-hop network. Block network coding solutions already have practical recoders but an on-the-fly recoder for sliding window network coding has not been studied in detail. In this paper, we present the implementation details of a practical recoder for sliding window network coding for the first time along with a comprehensive performance analysis of a multi-hop network using the recoder. The sliding window recoder ensures that the network performs closest to its capacity and that each node can use its outgoing links efficiently.
|
Practical Sliding Window Recoder: Design, Analysis, and Usecases
|
Multidrug resistance consists of a series of genetic and epigenetic alterations that involve multifactorial and complex processes, which are a challenge to successful cancer treatments. Accompanied by advances in biotechnology and high-dimensional data analysis techniques that are bringing in new opportunities in modeling biological systems with continuous phenotypic structured models, we study a cancer cell population model that considers a multi-dimensional continuous resistance trait to multiple drugs to investigate multidrug resistance. We compare our continuous resistance trait model with classical models that assume a discrete resistance state and classify the cases when the continuum and discrete models yield different dynamical patterns in the emerging heterogeneity in response to drugs. We also compute the maximal fitness resistance trait for various continuum models and study the effect of epimutations. Finally, we demonstrate how our approach can be used to study tumor growth regarding the turnover rate and the proliferating fraction, and show that a continuous resistance level may result in a different dynamics when compared with the predictions of other discrete models.
|
Modeling continuous levels of resistance to multidrug therapy in cancer
|
The increased uncertainty and complexity of nonlinear systems have motivated investigators to consider generalized approaches to defining an entropy function. New insights are achieved by defining the average uncertainty in the probability domain as a transformation of entropy functions. The Shannon entropy when transformed to the probability domain is the weighted geometric mean of the probabilities. For the exponential and Gaussian distributions, we show that the weighted geometric mean of the distribution is equal to the density of the distribution at the location plus the scale, i.e. at the width of the distribution. The average uncertainty is generalized via the weighted generalized mean, in which the moment is a function of the nonlinear source. Both the Renyi and Tsallis entropies transform to this definition of the generalized average uncertainty in the probability domain. For the generalized Pareto and Student's t-distributions, which are the maximum entropy distributions for these generalized entropies, the appropriate weighted generalized mean also equals the density of the distribution at the location plus scale. A coupled entropy function is proposed, which is equal to the normalized Tsallis entropy divided by one plus the coupling.
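The identities stated above can be checked numerically. The sketch below verifies, under assumed parameter choices, that $\prod_i p_i^{p_i} = e^{-H}$ in the discrete case and that, for an exponential distribution, $e^{-h}$ (with $h$ the differential entropy) equals the density evaluated at location plus scale:

```python
import numpy as np

# Discrete case: the weighted geometric mean of the probabilities,
# prod(p_i ** p_i), equals exp(-H), with H the Shannon entropy in nats.
p = np.array([0.5, 0.25, 0.125, 0.125])
H = -np.sum(p * np.log(p))
wgm = np.prod(p ** p)

# Continuous case: exponential density f(x) = exp(-x/sigma)/sigma on [0, inf),
# location 0, scale sigma.  Its differential entropy is h = 1 + ln(sigma), so
# exp(-h) = exp(-1)/sigma = f(sigma), the density at location + scale.
sigma = 2.0
x = np.linspace(1e-8, 60 * sigma, 2_000_000)
fx = np.exp(-x / sigma) / sigma
h = -np.sum(fx * np.log(fx)) * (x[1] - x[0])  # Riemann sum for -integral f ln f
print(np.isclose(wgm, np.exp(-H)),
      np.isclose(np.exp(-h), np.exp(-1) / sigma, rtol=1e-3))  # True True
```

The same location-plus-scale property for the Gaussian can be checked the same way by swapping in its density.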
|
On the average uncertainty for systems with nonlinear coupling
|
The Diffuse Ionised Gas (DIG) in galaxies traces photoionisation feedback from massive stars. Through three dimensional photoionisation simulations, we study the propagation of ionising photons, photoionisation heating and the resulting distribution of ionised and neutral gas within snapshots of magnetohydrodynamic simulations of a supernova driven turbulent interstellar medium. We also investigate the impact of non-photoionisation heating on observed optical emission line ratios. Inclusion of a heating term which scales less steeply with electron density than photoionisation is required to produce diagnostic emission line ratios similar to those observed with the Wisconsin H$\alpha$ Mapper. Once such heating terms have been included, we are also able to produce temperatures similar to those inferred from observations of the DIG, with temperatures increasing to above 15000 K at heights |z| > 1 kpc. We find that ionising photons travel through low density regions close to the midplane of the simulations, while travelling through diffuse low density regions at large heights. The majority of photons travel small distances (< 100pc); however some travel kiloparsecs and ionise the DIG.
|
Photoionisation and Heating of a Supernova Driven, Turbulent, Interstellar Medium
|
Universal properties of two-dimensional conformal interfaces are encoded by the flux of energy transmitted and reflected during a scattering process. We develop an innovative method that allows us to use results for the energy transmission in thin-brane holographic models to find the energy transmission for general smooth domain-wall solutions of three-dimensional gravity. Our method is based on treating the continuous geometry as a discrete set of branes. As an application, we compute the transmission coefficient of a Janus interface in terms of its deformation parameter.
|
Energy Transport for Thick Holographic Branes
|
Alkali halide (100) surfaces are anomalously poorly wetted by their own melt at the triple point. We carried out simulations for NaCl(100) within a simple (BMHFT) model potential. Calculations of the solid-vapor, solid-liquid and liquid-vapor free energies showed that solid NaCl(100) is a nonmelting surface, and that the incomplete wetting can be traced to the conspiracy of three factors: surface anharmonicities stabilizing the solid surface; a large density jump causing bad liquid-solid adhesion; incipient NaCl molecular correlations destabilizing the liquid surface, reducing in particular its entropy much below that of solid NaCl(100). Presently, we are making use of the nonmelting properties of this surface to conduct case study simulations of hard tips sliding on a hot stable crystal surface. Preliminary results reveal novel phenomena whose applicability is likely of greater generality.
|
Physics and Nanofriction of Alkali Halide Solid Surfaces at the Melting Point
|
Headline generation is a special type of text summarization task. While the amount of available training data for this task is almost unlimited, it still remains challenging, as learning to generate headlines for news articles implies that the model has strong reasoning about natural language. To overcome this issue, we applied recent Universal Transformer architecture paired with byte-pair encoding technique and achieved new state-of-the-art results on the New York Times Annotated corpus with ROUGE-L F1-score 24.84 and ROUGE-2 F1-score 13.48. We also present the new RIA corpus and reach ROUGE-L F1-score 36.81 and ROUGE-2 F1-score 22.15 on it.
|
Self-Attentive Model for Headline Generation
|
[Abridged] We combine HST images from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey with JWST images from the Cosmic Evolution Early Release Science (CEERS) survey to measure the stellar and dust-obscured star formation distributions of a mass-complete ($>10^{10}M_\odot$) sample of 95 star-forming galaxies (SFGs) at $0.1<z<2.5$. Rest-mid-infrared (rest-MIR) morphologies (sizes and S\'ersic indices) are determined using their sharpest Mid-InfraRed Instrument (MIRI) images dominated by dust emission. Rest-MIR S\'ersic indices are only measured for the brightest MIRI sources ($S/N>75$; 38 galaxies). At lower $S/N$, simulations show that simultaneous measurements of the size and S\'ersic index become unreliable. We extend our study to fainter sources ($S/N>10$; 95 galaxies) by fixing their S\'ersic index to unity. The S\'ersic index of bright galaxies ($S/N>75$) has a median value of 0.7, which, together with their axis ratio distribution, suggests a disk-like morphology in the rest-MIR. Galaxies above the main sequence (MS; i.e., starbursts) have rest-MIR sizes that are a factor 2 smaller than their rest-optical sizes. The median rest-optical to rest-MIR size ratio of MS galaxies increases with stellar mass, from 1.1 at $10^{9.8}M_\odot$ to 1.6 at $10^{11}M_\odot$. This mass-dependent trend resembles the one found in the literature between the rest-optical and rest-near-infrared sizes of SFGs, suggesting that it is due to dust attenuation gradients affecting rest-optical sizes and that the sizes of the stellar and star-forming components of SFGs are, on average, consistent at all masses. There is, however, a small population of SFGs (15%) with a compact star-forming component embedded in a larger stellar structure. This could be the missing link between galaxies with an extended stellar component and those with a compact stellar component; the so-called blue nuggets.
|
CEERS: MIRI deciphers the spatial distribution of dust-obscured star formation in galaxies at $0.1<z<2.5$
|
A theory is presented showing that, under appropriate conditions, a ferroelectric in a cavity resonator can emit superradiant pulses. Initially, the ferroelectric has to be prepared in a nonequilibrium state from which it relaxes emitting a coherent pulse in the infrared region. Polarization dipolar waves play the role of the triggering mechanism initiating the beginning of the process.
|
Superradiance by ferroelectrics in cavity resonators
|
Considering coordinates as operators whose measured values are expectations between generalized coherent states based on the group SO(N,1) leads to coordinate noncommutativity together with full $N$ dimensional rotation invariance. Through the introduction of a gauge potential this theory can additionally be made invariant under $N$ dimensional translations. Fluctuations in coordinate measurements are determined by two scales. For small distances these fluctuations are fixed at the noncommutativity parameter while for larger distances they are proportional to the distance itself divided by a {\em very} large number. Limits on this number will be available from LIGO measurements.
|
Coherent States and N Dimensional Coordinate Noncommutativity
|
Recent high-resolution measurements suggest that the soft X-ray spectrum of obscured Radio Galaxies (RG) exhibits signatures of photoionised gas (e.g. 3C 445 and 3C 33) similar to those observed in radio-quiet obscured Active Galactic Nuclei (AGN). While signatures of warm absorbing gas covering a wide range of temperature and ionisation states have been detected in about one half of the population of nearby Seyfert 1 galaxies, no traces of warm absorber gas have been reported to date in the high-resolution spectra of Broad Line Radio Galaxies (BLRG). We present here the first detection of a soft X-ray warm absorber in the powerful FRII BLRG 3C 382 using the Reflection Grating Spectrometer (RGS) on-board XMM-Newton. The absorption gas appears to be highly ionised, with column density of the order of 10^{22} cm^{-2}, ionisation parameter log\xi>2 erg cm s^{-1} and outflow velocities of the order of 10^{3} km s^{-1}. The absorption lines may come from regions located outside the torus, however at distances less than 60 pc. This result may indicate that a plasma ejected at velocities near the speed of light and a photoionised gas with slower, outflow velocities can coexist in the same source beyond the Broad Line Regions.
|
First high-resolution detection of a warm absorber in the Broad Line Radio Galaxy 3C 382
|
This paper presents a Predictive Maneuver Planning with Deep Reinforcement Learning (PMP-DRL) model for maneuver planning. Traditional rule-based maneuver planning approaches often struggle to handle the variability of real-world driving scenarios. By learning from its experience, a Reinforcement Learning (RL)-based driving agent can adapt to changing driving conditions and improve its performance over time. Our proposed approach combines a predictive model and an RL agent to plan comfortable and safe maneuvers. The predictive model is trained using historical driving data to predict the future positions of other surrounding vehicles. The surrounding vehicles' past and predicted future positions are embedded in context-aware grid maps. At the same time, the RL agent learns to make maneuvers based on this spatio-temporal context information. Performance evaluation of PMP-DRL has been carried out using simulated environments generated from the publicly available NGSIM US101 and I80 datasets. The training sequence shows continuous improvement in driving experience, indicating that the proposed PMP-DRL can learn the trade-off between safety and comfort. The decisions generated by a recent imitation learning-based model are compared with those of the proposed PMP-DRL for unseen scenarios. The results clearly show that PMP-DRL can handle complex real-world scenarios and make more comfortable and safer maneuver decisions than rule-based and imitative models.
|
Predictive Maneuver Planning with Deep Reinforcement Learning (PMP-DRL) for comfortable and safe autonomous driving
|
Two popular approaches to model-free continuous control tasks are SAC and TD3. At first glance these approaches seem rather different: SAC aims to solve the entropy-augmented MDP by minimising the KL-divergence between a stochastic proposal policy and a hypothetical energy-based soft Q-function policy, whereas TD3 is derived from DPG, which uses a deterministic policy to perform policy gradient ascent along the value function. In reality, both approaches are remarkably similar, and belong to a family of approaches we call `Off-Policy Continuous Generalized Policy Iteration'. This illuminates their similar performance in most continuous control benchmarks, and indeed, when hyperparameters are matched, their performance can be statistically indistinguishable. To further remove any differences due to implementation, we provide OffCon$^3$ (Off-Policy Continuous Control: Consolidated), a code base featuring state-of-the-art versions of both algorithms.
|
OffCon$^3$: What is state of the art anyway?
|
The magnetic properties of the alloy system Fe3-xMnxSi have been studied by measuring magnetization for samples with x = 0, 0.1, 0.25, 0.5, and by thermal scanning techniques for samples with x = 0, 0.1. The results reveal that the system is ferromagnetic in this composition range. Zero field cooling and field cooling magnetization measurements indicate a similar magnetic ordering and magnetic anisotropy in all samples. The saturation magnetization for the annealed samples was higher than that for as prepared samples. This is attributed to the reduction of magnetic domain boundaries rather than to improving magnetic order as a result of annealing. Further, TC values determined from thermal DSC measurements are in good agreement with previously reported results based on magnetic measurements.
|
Magnetization measurements on as prepared and annealed Fe3-xMnxSi alloys
|
In federated optimization, heterogeneity in the clients' local datasets and computation speeds results in large variations in the number of local updates performed by each client in each communication round. Naive weighted aggregation of such models causes objective inconsistency, that is, the global model converges to a stationary point of a mismatched objective function which can be arbitrarily different from the true objective. This paper provides a general framework to analyze the convergence of federated heterogeneous optimization algorithms. It subsumes previously proposed methods such as FedAvg and FedProx and provides the first principled understanding of the solution bias and the convergence slowdown due to objective inconsistency. Using insights from this analysis, we propose FedNova, a normalized averaging method that eliminates objective inconsistency while preserving fast error convergence.
|
Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization
|
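The normalized averaging idea behind FedNova can be illustrated with a minimal sketch (hypothetical helper names; constant-learning-rate local SGD assumed): each client's accumulated update is divided by its own number of local steps before aggregation, so clients that ran more steps no longer skew the implicit objective.

```python
import numpy as np

def local_updates(x, grad_fn, steps, lr=0.1):
    # Run `steps` local SGD steps from the global model x and
    # return the accumulated update (x_local - x).
    x_local = x.copy()
    for _ in range(steps):
        x_local -= lr * grad_fn(x_local)
    return x_local - x

def naive_avg(updates, weights):
    # FedAvg-style aggregation of raw updates: biased toward clients
    # that performed more local steps.
    return sum(w * u for w, u in zip(updates, weights))

def fednova_avg(updates, weights, steps):
    # FedNova: normalize each update by its local step count, then
    # rescale by the effective number of steps tau_eff.
    normalized = [u / t for u, t in zip(updates, steps)]
    tau_eff = sum(w * t for w, t in zip(weights, steps))
    return tau_eff * sum(w * n for w, n in zip(weights, normalized))
```

With the normalization, two clients following the same gradient but running different numbers of local steps contribute identical per-step directions to the aggregate.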
Vortices have been postulated at a range of size scales in the universe including at the stellar size-scale. Whilst hydrodynamically simulating the wind from an asymptotic giant branch (AGB) star moving through and sweeping up its surrounding interstellar medium (ISM), we have found vortices on the size scale of 10^-1 pc to 10^1 pc in the wake of the star. These vortices appear to be the result of instabilities at the head of the bow shock formed upstream of the AGB star. The instabilities peel off downstream and form vortices in the tail of AGB material behind the bow shock, mixing with the surrounding ISM. We suggest such structures are visible in the planetary nebula Sh 2-188.
|
Vortices in the wakes of AGB stars
|
Neural networks have achieved impressive breakthroughs in both industry and academia. How to effectively develop neural networks on quantum computing devices is a challenging open problem. Here, we propose a new quantum neural network model for quantum neural computing using (classically-controlled) single-qubit operations and measurements on real-world quantum systems with naturally occurring environment-induced decoherence, which greatly reduces the difficulty of physical implementation. Our model circumvents the problem that the state-space size grows exponentially with the number of neurons, thereby greatly reducing memory requirements and allowing for fast optimization with traditional optimization algorithms. We benchmark our model on handwritten digit recognition and other nonlinear classification tasks. The results show that our model has strong nonlinear classification ability and robustness to noise. Furthermore, our model allows quantum computing to be applied in a wider context and suggests that a quantum neural computer may be developed earlier than standard quantum computers.
|
Quantum Neural Network for Quantum Neural Computing
|
Dielectric particles in weakly conducting fluids rotate spontaneously when subject to strong electric fields. Such Quincke rotation near a plane electrode leads to particle translation that enables physical models of active matter. Here, we show that Quincke rollers can also exhibit oscillatory dynamics, whereby particles move back and forth about a fixed location. We explain how oscillations arise for micron-scale particles commensurate with the thickness of a field-induced boundary layer in the nonpolar electrolyte. This work enables the design of colloidal oscillators.
|
Quincke oscillations of colloids at planar electrodes
|
Reference-based super-resolution (RefSR) has gained considerable success in the field of super-resolution with the addition of high-resolution reference images to reconstruct low-resolution (LR) inputs with more high-frequency details, thereby overcoming some limitations of single image super-resolution (SISR). Previous research in the field of RefSR has mostly focused on two crucial aspects. The first is accurate correspondence matching between the LR and the reference (Ref) image. The second is the effective transfer and aggregation of similar texture information from the Ref images. Nonetheless, the role of perceptual loss and adversarial loss has been underestimated, which adversely affects texture transfer and reconstruction. In this study, we propose a feature reuse framework that guides the step-by-step texture reconstruction process through different stages, reducing the negative impacts of perceptual and adversarial loss. The feature reuse framework can be used for any RefSR model, and several RefSR approaches have improved their performance after being retrained using our framework. Additionally, we introduce a single image feature embedding module and a texture-adaptive aggregation module. The single image feature embedding module assists in reconstructing the features of the LR input itself and effectively lowers the possibility of including irrelevant textures. The texture-adaptive aggregation module dynamically perceives and aggregates texture information between the LR inputs and the Ref images using dynamic filters. This enhances the utilization of the reference texture while reducing reference misuse. The source code is available at https://github.com/Yi-Yang355/FRFSR.
|
A Feature Reuse Framework with Texture-adaptive Aggregation for Reference-based Super-Resolution
|
In this paper we study the best decay rate of the solutions of a damped plate equation in a square with homogeneous Dirichlet boundary conditions. We show that the fastest decay rate is given by the supremum of the real part of the spectrum of the infinitesimal generator of the underlying semigroup, if the damping coefficient is in $L^\infty(\Omega)$. Moreover, we give some numerical illustrations by spectral computation of the spectrum associated to the damped plate equation. The numerical results obtained for various cases of damping are in good agreement with the theoretical ones. Computations of the spectrum and energy of the discrete solution of the damped plate equation show that the best decay rate is given by the spectral abscissa of the numerical solution.
|
The best decay rate of the damped plate equation in a square
|
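The key relation — the decay rate equals the spectral abscissa of the generator — can be checked in a finite-dimensional analogue (a toy damped-oscillator matrix, not the plate operator itself): for $x' = Ax$, solutions decay like $e^{\alpha t}$ with $\alpha = \sup \mathrm{Re}\,\sigma(A)$.

```python
import numpy as np

# Toy analogue of the semigroup generator: a damped oscillator
# u'' + 0.5 u' + 4 u = 0 written as a first-order system x' = A x.
A = np.array([[0.0, 1.0],
              [-4.0, -0.5]])

# The spectral abscissa: supremum of the real parts of the spectrum.
alpha = max(np.linalg.eigvals(A).real)  # -0.25 for this damping
```

Here both eigenvalues have real part -0.25, so every solution decays like exp(-0.25 t); increasing the damping coefficient shifts the spectrum and changes the decay rate accordingly.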
It is shown that distributions arising in the Renyi-Tsallis maximum entropy setting are related to the Generalized Pareto Distributions (GPD) that are widely used for modeling the tails of distributions. The relevance of such modeling, as well as the ubiquity of the GPD in practical situations, follows from the Balkema-De Haan-Pickands theorem on the distribution of excesses (over a high threshold). We provide an entropic view of this result, by showing that the distribution of a suitably normalized excess variable converges to the solution of a maximum Tsallis entropy problem, which is the GPD. This highlights the relevance of the so-called Tsallis distributions in many applications, as well as of the corresponding entropy.
|
An entropic view of Pickands' theorem
|
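The excess-over-threshold picture behind the Balkema-De Haan-Pickands theorem is easy to reproduce numerically (a sketch, assuming a Pareto-type tail and using SciPy's `genpareto` for the fit): excesses of a heavy-tailed sample over a high threshold are approximately GPD-distributed, with shape parameter equal to the reciprocal of the tail index.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

# Heavy-tailed (Lomax/Pareto-type) sample with tail index 3, so the
# limiting GPD shape parameter should be 1/3.
data = rng.pareto(3.0, size=200_000)

# Excesses over a high threshold (here the 99th percentile).
threshold = np.quantile(data, 0.99)
excesses = data[data > threshold] - threshold

# Fit a GPD to the excesses, with location pinned at zero.
shape, loc, scale = genpareto.fit(excesses, floc=0.0)
```

The fitted shape parameter lands close to 1/3, illustrating why GPD (equivalently, Tsallis-type) distributions are ubiquitous in tail modeling.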
Higher-order guiding-center polarization and magnetization effects are introduced in gyrokinetic theory by keeping first-order terms in background magnetic-field nonuniformity. These results confirm the consistency of the two-step perturbation analysis used in modern gyrokinetic theory.
|
Guiding-center polarization and magnetization effects in gyrokinetic theory
|
In their work on `Coxeter-like complexes', Babson and Reiner introduced a simplicial complex $\Delta_T$ associated to each tree $T$ on $n$ nodes, generalizing chessboard complexes and type A Coxeter complexes. They conjectured that $\Delta_T$ is $(n-b-1)$-connected when the tree has $b$ leaves. We provide a shelling for the $(n-b)$-skeleton of $\Delta_T$, thereby proving this conjecture. In the process, we introduce notions of weak order and inversion functions on the labellings of a tree $T$ which imply shellability of $\Delta_T$, and we construct such inversion functions for a large enough class of trees to deduce the aforementioned conjecture and also recover the shellability of chessboard complexes $M_{m,n}$ with $n \ge 2m-1$. We also prove that the existence or nonexistence of an inversion function for a fixed tree governs which networks with a tree structure admit greedy sorting algorithms by inversion elimination and provide an inversion function for trees where each vertex has capacity at least its degree minus one.
|
Shelling Coxeter-like Complexes and Sorting on Trees
|
The development of fair and ethical AI systems requires careful consideration of bias mitigation, an area often overlooked or ignored. In this study, we introduce a novel and efficient approach for addressing biases called Targeted Data Augmentation (TDA), which leverages classical data augmentation techniques to tackle the pressing issue of bias in data and models. Unlike the laborious task of removing biases, our method proposes to insert biases instead, resulting in improved performance. To identify biases, we annotated two diverse datasets: a dataset of clinical skin lesions and a dataset of male and female faces. These bias annotations are published for the first time in this study, providing a valuable resource for future research. Through Counterfactual Bias Insertion, we discovered that biases associated with the frame, ruler, and glasses had a significant impact on models. By randomly introducing biases during training, we mitigated these biases and achieved a substantial decrease in bias measures, ranging from two-fold to more than 50-fold, while maintaining a negligible increase in the error rate.
|
Targeted Data Augmentation for bias mitigation
|
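The bias-insertion step of TDA can be sketched as follows (the `insert_frame` artifact is a hypothetical stand-in for the frame bias the study identifies in skin-lesion images): rather than scrubbing the bias out of the data, the augmentation randomly stamps it onto training images so the model learns to ignore it.

```python
import numpy as np

def insert_frame(image, thickness=4, value=0.0):
    # Draw a dark frame around the image, mimicking the frame
    # artifact found in dermoscopic photographs.
    out = image.copy()
    out[:thickness, :] = value
    out[-thickness:, :] = value
    out[:, :thickness] = value
    out[:, -thickness:] = value
    return out

def targeted_augment(batch, bias_fn, p=0.5, rng=None):
    # Apply the bias artifact to a random fraction p of the batch:
    # the TDA idea of inserting biases instead of removing them.
    rng = rng or np.random.default_rng()
    return np.stack([bias_fn(img) if rng.random() < p else img
                     for img in batch])
```

Other annotated biases (ruler marks, glasses) would be additional `bias_fn` implementations plugged into the same augmentation loop.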
Text classification is a method of categorising documents by mapping texts onto a category hierarchy, which makes it invaluable for understanding both their semantic meaning and their relevance. It combines computer text classification and natural language processing to analyse text in aggregate, providing a descriptive categorization of the text with features such as content type, object field, lexical characteristics, and style traits. In this research, the authors use natural language feature extraction methods to train basic machine learning models such as Naive Bayes, Logistic Regression, and Support Vector Machine. These models are used to detect when a discussion drifts off-topic and a teacher must get involved.
|
Classifying text using machine learning models and determining conversation drift
|
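A minimal version of the described pipeline — text features feeding a Naive Bayes classifier for off-topic detection — might look like the sketch below (the toy examples and labels are assumptions; the paper's classroom-discussion dataset is not reproduced here).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical on-topic / off-topic snippets from a math discussion.
texts = [
    "derivative of the function with respect to x",
    "solve the integral using substitution",
    "did you watch the game last night",
    "let's order pizza after class",
]
labels = ["on_topic", "on_topic", "off_topic", "off_topic"]

# TF-IDF feature extraction followed by a Naive Bayes classifier,
# one of the basic models the study trains.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(texts, labels)
```

The same pipeline object accepts Logistic Regression or a Support Vector Machine in place of `MultinomialNB`, matching the other models compared in the study.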
We study the gluon and ghost propagators of SU(2) lattice Landau gauge theory and find their low-momentum behavior being sensitive to the lowest non-trivial eigenvalue (\lambda_1) of the Faddeev-Popov operator. If the gauge-fixing favors Gribov copies with small (large) values for \lambda_1 both the ghost dressing function and the gluon propagator get enhanced (suppressed) at low momentum. For larger momenta no dependence on Gribov copies is seen. We compare our lattice data to the corresponding (decoupling) solutions from the DSE/FRGE study of Fischer, Maas and Pawlowski [Annals Phys. 324 (2009) 2408] and find qualitatively good agreement.
|
Another look at the Landau-gauge gluon and ghost propagators at low momentum
|
Let $p_1,p_2,p_3$ be three distinct points in the plane, and, for $i=1,2,3$, let $\mathcal C_i$ be a family of $n$ unit circles that pass through $p_i$. We address a conjecture made by Székely, and show that the number of points incident to a circle of each family is $O(n^{11/6})$, improving an earlier bound for this problem due to Elekes, Simonovits, and Szabó [Combin. Probab. Comput., 2009]. The problem is a special instance of a more general problem studied by Elekes and Szabó [Combinatorica, 2012] (and by Elekes and Rónyai [J. Combin. Theory Ser. A, 2000]).
|
On triple intersections of three families of unit circles
|
We present integrated photometry and color-magnitude diagrams (CMDs) for 24 star clusters in M33, of which 12 were previously uncataloged. These results are based on Advanced Camera for Surveys observations from the Hubble Space Telescope of two fields in M33. Our integrated V magnitudes and V-I colors for the previously identified objects are in good agreement with published photometry. We are able to estimate ages for 21 of these clusters using features in the CMDs, including isochrone fitting to the main sequence turnoffs for 17 of the clusters. Comparisons of these ages with the clusters' integrated colors and magnitudes suggest that simple stellar population models perform reasonably well in predicting these properties.
|
Newly Identified Star Clusters in M33. I. Integrated Photometry and Color-Magnitude Diagrams
|
After a short biographical summary of the scientific life of Oskar Klein, a more detailed and hopefully didactic presentation of his derivation of the relativistic Klein-Gordon wave equation is given. It was a result coming out of his unification of electromagnetism and gravitation based on Einstein's general theory of relativity in a five-dimensional spacetime. This idea had previously been explored by Kaluza, but Klein made it more acceptable by suggesting that the extra dimension could be compactified and therefore remain unobservable when it is small enough.
|
Oskar Klein and the fifth dimension
|
We show that time-dependent density functional theory (TDDFT) is applicable to coherent optical phonon generation by intense laser pulses in solids. The two mechanisms invoked in phenomenological theories, namely impulsively stimulated Raman scattering and displacive excitation, are present in the TDDFT. Taking the example of crystalline Si, we find that the theory reproduces the phenomena observed experimentally: dependence on polarization, strong growth at the direct band gap, and the change of phase from below to above the band gap. We conclude that the TDDFT offers a predictive ab initio framework to treat coherent optical phonon generation.
|
Ab initio theory of coherent phonon generation by laser excitation
|
The W-Band ($75-110\; \mathrm{GHz}$) sky contains a plethora of information about star formation, galaxy evolution and the cosmic microwave background. We have designed and fabricated a dual-purpose superconducting circuit to facilitate the next generation of astronomical observations in this regime by providing proof-of-concept for both a millimeter-wave low-loss phase shifter, which can operate as an on-chip Fourier transform spectrometer (FTS) and a traveling wave kinetic inductance parametric amplifier (TKIP). Superconducting transmission lines have a propagation speed that depends on the inductance in the line which is a combination of geometric inductance and kinetic inductance in the superconductor. The kinetic inductance has a non-linear component with a characteristic current, $I_*$, and can be modulated by applying a DC current, changing the propagation speed and effective path length. Our test circuit is designed to measure the path length difference or phase shift, $\Delta \phi$, between two symmetric transmission lines when one line is biased with a DC current. To provide a measurement of $\Delta\phi$, a key parameter for optimizing a high gain W-Band TKIP, and modulate signal path length in FTS operation, our $3.6 \times 2.5\; \mathrm{cm}$ chip employs a pair of $503\; \mathrm{mm}$ long NbTiN inverted microstrip lines coupled to circular waveguide ports through radial probes. For a line of width $3\; \mathrm{\mu m}$ and film thickness $20\; \mathrm{nm}$, we predict $\Delta\phi\approx1767\; \mathrm{rad}$ at $90\; \mathrm{GHz}$ when biased at close to $I_*$. We have fabricated a prototype with $200\; \mathrm{nm}$ thick Nb film and the same line length and width. The predicted phase shift for our prototype is $\Delta\phi\approx30\; \mathrm{rad}$ at $90\; \mathrm{GHz}$ when biased at close to $I_*$ for Nb.
|
A Superconducting Phase Shifter and Traveling Wave Kinetic Inductance Parametric Amplifier for W-Band Astronomy
|
In traditional robot behavior programming, the edit-compile-simulate-deploy-run cycle creates a large mental disconnect between program creation and eventual robot behavior. This significantly slows down behavior development because there is no immediate mental connection between the program and the resulting behavior. With live programming the development cycle is made extremely tight, realizing such an immediate connection. In our work on programming of ROS robots in a more dynamic fashion through PhaROS, we have experimented with the use of the Live Robot Programming language. This has given rise to a number of requirements for such live programming of robots. In this text we introduce these requirements and illustrate them using an example robot behavior.
|
Towards Live Programming in ROS with PhaROS and LRP
|
We describe a new and computationally efficient Bayesian methodology for inferring species trees and demographics from unlinked binary markers. Likelihood calculations are carried out using diffusion models of allele frequency dynamics combined with a new algorithm for numerically computing likelihoods of quantitative traits. The diffusion approach allows for analysis of datasets containing hundreds or thousands of individuals. The method, which we call Snapper, has been implemented as part of the Beast2 package. We introduce the models, the efficient algorithms, and report performance of Snapper on simulated data sets and on SNP data from rattlesnakes and freshwater turtles.
|
Bayesian inference of species trees using diffusion models
|
The ground state of a spin-orbit-coupled Bose gas in a one-dimensional optical lattice is known to exhibit a mixed regime, where the condensate wave function is given by a superposition of multiple Bloch-wave components, and an unmixed one, in which the atoms occupy a single Bloch state. The unmixed regime features two unpolarized Bloch-wave phases, having quasimomentum at the center or at the edge of the first Brillouin zone, and a polarized Bloch-wave phase at intermediate quasimomenta. By calculating the critical values of the Raman coupling and of the lattice strength at the transitions among the various phases, we show the existence of a tricritical point where the mixed, the polarized and the edge-quasimomentum phases meet, and whose appearance is a consequence of the spin-dependent interaction. Furthermore, we evaluate the excitation spectrum in the unmixed regime and we characterize the behavior of the phonon and the roton modes, pointing out the instabilities occurring when a phase transition is approached.
|
Quantum phases and collective excitations of a spin-orbit-coupled Bose-Einstein condensate in a one-dimensional optical lattice
|
In this paper, we study the design and the delay-exponent of anytime codes over a three-terminal relay network. We propose a bilayer anytime code based on anytime spatially coupled low-density parity-check (LDPC) codes and investigate its anytime characteristics through density evolution analysis. Using a mathematical induction technique, we find analytical expressions for the delay-exponent of the proposed code. Through comparison, we show that the analytical delay-exponent closely matches the delay-exponent obtained from numerical results.
|
Delay-Exponent of Bilayer Anytime Code
|
The integer Cech cohomology of canonical projection tilings of dimension three and codimension three is derived. These formulae are then evaluated for several icosahedral tilings known from the literature. Rather surprisingly, the cohomologies of all these tilings turn out to have torsion. This is the case even for the Danzer tiling, which is, in some sense, the simplest of all icosahedral tilings. This result is in contrast to the case of two-dimensional canonical projection tilings, where many examples without torsion are known.
|
Integer Cech Cohomology of Icosahedral Projection Tilings
|
It is known in previous literature that if a Wess-Zumino model with an R-symmetry gives a supersymmetric vacuum, the superpotential vanishes at the vacuum. In this work, we establish a formal notion of genericity, and show that if the R-symmetric superpotential has generic coefficients, the superpotential vanishes term-by-term at a supersymmetric vacuum. This result constrains the form of the superpotential which leads to a supersymmetric vacuum. It may contribute to a refined classification of R-symmetric Wess-Zumino models, and find applications in string constructions of vacua with small superpotentials. A similar result for a scalar potential system with a scaling symmetry is discussed.
|
A formal notion of genericity and term-by-term vanishing superpotentials at supersymmetric vacua from R-symmetric Wess-Zumino models
|
We present spectroscopic observations of the short-period cataclysmic variable SW Ursa Majoris, obtained by the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite while the system was in quiescence. The data include the resonance lines of O VI at 1031.91 and 1037.61 A. These lines are present in emission, and they exhibit both narrow (~ 150 km/s) and broad (~ 2000 km/s) components. The narrow O VI emission lines exhibit unusual double-peaked and redshifted profiles. We attribute the source of this emission to a cooling flow onto the surface of the white dwarf primary. The broad O VI emission most likely originates in a thin, photoionized surface layer on the accretion disk. We searched for emission from H_2 at 1050 and 1100 A, motivated by the expectation that the bulk of the quiescent accretion disk is in the form of cool, molecular gas. If H_2 is present, then our limits on the fluxes of the H_2 lines are consistent with the presence of a surface layer of atomic H that shields the interior of the disk. These results may indicate that accretion operates primarily in the surface layers of the disk in SW UMa. We also investigate the far-UV continuum of SW UMa and place an upper limit of 15,000 K on the effective temperature of the white dwarf.
|
FUSE Observations of the Dwarf Nova SW UMa During Quiescence
|
Multicomponent methods seek to treat select nuclei, typically protons, fully quantum mechanically and equivalent to the electrons of a chemical system. In such methods, it is well known that due to the neglect of electron-proton correlation, a Hartree-Fock (HF) description of the electron-proton interaction catastrophically fails leading to qualitatively incorrect protonic properties. In single-component quantum chemistry, the qualitative failure of HF is normally indicative of the need for multireference methods such as complete active space self-consistent field (CASSCF). While a multicomponent CASSCF method was implemented nearly twenty years ago, it is only able to perform calculations with very small active spaces (~10^5 multicomponent configurations). Therefore, in order to extend the realm of applicability of the multicomponent CASSCF method, this study derives and implements a new two-step multicomponent CASSCF method that uses multicomponent heat-bath configuration interaction for the configuration interaction step, enabling calculations with very large active spaces (up to 16 electrons in 48 orbitals). We find that large electronic active spaces are needed to obtain qualitatively accurate protonic densities for the HCN and FHF- molecules. Additionally, the multicomponent CASSCF method implemented here should have further applications for double-well protonic potentials and systems that are inherently electronically multireference.
|
Multicomponent CASSCF Revisited: Large Active Spaces are Needed for Qualitatively Accurate Protonic Densities
|
In [1], KKLT give a mechanism to generate de Sitter vacua in string theory, and the Landscape scenario is suggested to explain the problem of the cosmological constant. In this paper, adopting a simple potential describing the landscape, we investigate the decay of the vacuum and the evolution of the universe after the decay. We find that the big crunch of the universe is inevitable. However, according to the modified Friedmann equation in [11], the singularity of the big crunch is avoided. Furthermore, we find that this gives a cyclic cosmological model.
|
A Cyclic Cosmological Model in the Landscape Scenario
|
The alpha-rich freezeout from equilibrium occurs during the core-collapse explosion of a massive star when the supernova shock wave passes through the Si-rich shell of the star. The nuclei are heated to high temperature and broken down into nucleons and alpha particles. These subsequently reassemble as the material expands and cools, thereby producing new heavy nuclei, including a number of important supernova observables. In this paper we introduce two web-based applications. The first displays the results of a reaction-rate sensitivity study of alpha-rich freezeout yields. The second allows the interested reader to run parameterized explosive silicon burning calculations with their own input parameters. These tools are intended to aid in the identification of nuclear reaction rates important for experimental study. We then analyze several iron-group isotopes (59Ni, 57Co, 56Co, and 55Fe) in terms of their roles as observables and examine the reaction rates that are important in their production.
|
Nuclear Reactions Important in Alpha-Rich Freezeouts
|
The study of Music Cognition and neural responses to music has been invaluable in understanding human emotions. Brain signals, though, manifest a highly complex structure that makes processing and retrieving meaningful features challenging, particularly of abstract constructs like affect. Moreover, the performance of learning models is undermined by the limited amount of available neuronal data and their severe inter-subject variability. In this paper we extract efficient, personalized affective representations from EEG signals during music listening. To this end, we employ music signals as a supervisory modality to EEG, aiming to project their semantic correspondence onto a common representation space. We utilize a bi-modal framework by combining an LSTM-based attention model to process EEG and a pre-trained model for music tagging, along with a reverse domain discriminator to align the distributions of the two modalities, further constraining the learning process with emotion tags. The resulting framework can be utilized for emotion recognition both directly, by performing supervised predictions from either modality, and indirectly, by providing relevant music samples to EEG input queries. The experimental findings show the potential of enhancing neuronal data through stimulus information for recognition purposes and yield insights into the distribution and temporal variance of music-induced affective features.
|
Enhancing Affective Representations of Music-Induced EEG through Multimodal Supervision and latent Domain Adaptation
|
The influence of relative humidity (RH) on quasistatic current-voltage ${(I-V)}$ characteristics of Bifidobacterium animalis subsp. lactis BB-12 thin layers has been studied for the first time. The electrical conductivity at 75$\%$ RH was found to be of the order of 10$^{-7}$ (ohm cm)$^{-1}$, six orders of magnitude higher than that observed in a dry atmosphere. We also demonstrate that RH plays a key role in the hysteresis behaviour of the measured ${(I-V)}$ characteristics. FTIR measurements showed that in a water-moisture environment the bonds associated with the amine and carboxyl groups were greatly strengthened, which was the source of free charge carriers after ionization. The surface charge of Bifidobacterium animalis subsp. lactis BB-12 was found to be negative by zeta potential measurements, indicating that electrons were the charge carriers.
|
Charge Transport in Bifidobacterium animalis subsp. lactis BB-12 under Various Atmospheres
|
Deep learning hardware designs have been bottlenecked by conventional memories such as SRAM due to density, leakage and parallel computing challenges. Resistive devices can address the density and volatility issues, but have been limited by peripheral circuit integration. In this work, we demonstrate a scalable RRAM based in-memory computing design, termed XNOR-RRAM, which is fabricated in a 90nm CMOS technology with monolithic integration of RRAM devices between metal 1 and 2. We integrated a 128x64 RRAM array with CMOS peripheral circuits including row/column decoders and flash analog-to-digital converters (ADCs), which collectively become a core component for scalable RRAM-based in-memory computing towards large deep neural networks (DNNs). To maximize the parallelism of in-memory computing, we assert all 128 wordlines of the RRAM array simultaneously, perform analog computing along the bitlines, and digitize the bitline voltages using ADCs. The resistance distribution of low resistance states is tightened by write-verify scheme, and the ADC offset is calibrated. Prototype chip measurements show that the proposed design achieves high binary DNN accuracy of 98.5% for MNIST and 83.5% for CIFAR-10 datasets, respectively, with energy efficiency of 24 TOPS/W and 158 GOPS throughput. This represents 5.6X, 3.2X, 14.1X improvements in throughput, energy-delay product (EDP), and energy-delay-squared product (ED2P), respectively, compared to the state-of-the-art literature. The proposed XNOR-RRAM can enable intelligent functionalities for area-/energy-constrained edge computing devices.
|
High-Throughput In-Memory Computing for Binary Deep Neural Networks with Monolithically Integrated RRAM and 90nm CMOS
|
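Digitally, the bitline computation that the XNOR-RRAM array performs in the analog domain is an XNOR-and-popcount binary dot product. A software sketch (assuming +1/-1 binarized weights and activations, as in binary DNNs):

```python
import numpy as np

def xnor_popcount(w_pm1, x_pm1):
    # Binary (+1/-1) dot product computed as XNOR + popcount: the
    # operation each bitline evaluates in analog before the flash
    # ADCs digitize the result.
    w = w_pm1 > 0  # map {-1, +1} -> {0, 1}
    x = x_pm1 > 0
    matches = np.count_nonzero(~np.logical_xor(w, x))  # XNOR popcount
    return 2 * matches - len(w_pm1)  # equals the +1/-1 dot product

rng = np.random.default_rng(1)
w = rng.choice([-1, 1], size=128)  # one column of the 128x64 array
x = rng.choice([-1, 1], size=128)  # inputs on all 128 wordlines at once
```

Asserting all 128 wordlines simultaneously corresponds to evaluating this full-length dot product in a single analog step per bitline, which is the source of the design's parallelism.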
The electromagnetic transients accompanying compact binary mergers ($\gamma$-ray bursts, afterglows and 'macronovae') are crucial to pinpoint the sky location of gravitational wave sources. Macronovae are caused by the radioactivity from freshly synthesised heavy elements, e.g. from dynamic ejecta and various types of winds. We study macronova signatures by using multi-dimensional radiative transfer calculations. We employ the radiative transfer code SuperNu and state-of-the-art LTE opacities for a few representative elements from the wind and dynamical ejecta (Cr, Pd, Se, Te, Br, Zr, Sm, Ce, Nd, U) to calculate synthetic light curves and spectra for a range of ejecta morphologies. The radioactive power of the resulting macronova is calculated with the detailed input of decay products. We assess the detection prospects for our most complex models, based on the portion of viewing angles that are sufficiently bright, at different cosmological redshifts ($z$). The brighter emission from the wind is unobscured by the lanthanides (or actinides) in some of the models, permitting non-zero detection probabilities for redshifts up to $z=0.07$. We also find the nuclear mass model and the resulting radioactive heating rate are crucial for the detectability. While for the most pessimistic heating rate (from the FRDM model) no reasonable increase in the ejecta mass or velocity, or wind mass or velocity, can possibly make the light curves agree with the observed nIR excess after GRB130603B, a more optimistic heating rate (from the Duflo-Zuker model) leads to good agreement. We conclude that future reliable macronova observations would constrain nuclear heating rates, and consequently help constrain nuclear mass models.
|
Impact of ejecta morphology and composition on the electromagnetic signatures of neutron star mergers
|
We use recent microlensing observations toward the central bulge of the Galaxy to probe the overall stellar plus brown dwarf initial mass function (IMF) in these regions well within the brown dwarf domain. We find that the IMF is consistent with the same Chabrier (2005) IMF characteristic of the Galactic disk. In contrast, other IMFs suggested in the literature overpredict the number of short-timescale events, and thus of very-low-mass stars and brown dwarfs, compared with observations. This, again, supports the suggestion that brown dwarfs and stars form predominantly via the same mechanism. We show that claims for different IMFs in the stellar and substellar domains rather arise from an incorrect parameterization of the IMF. Furthermore, we show that the IMF in the central regions of the bulge seems to be bottom-heavy, as illustrated by the large number of short-timescale events compared with the other regions. This recalls our previous analysis of the IMF in massive early-type galaxies and suggests the same kind of two-phase formation scenario, with the central bulge initially formed under more violent, burst-like conditions than the rest of the Galaxy.
|
Probing the Milky Way stellar and brown dwarf initial mass function with modern microlensing observations
|
A self-consistent exact solution for a Reissner-Nordstr\"om black-and-white hole formed as a result of accretion is considered. Prior to the formation of the black-and-white hole, a bulk charged sphere resides at the center of the system. The occurrence of a turning (bounce) point in the newly formed black-and-white hole is investigated using the example of a solution for the accretion of a neutral spherical dust shell. The model equations are written with allowance for the cosmological $\Lambda$-term. Within the model under consideration, both the black-and-white hole and its turning point are formed in the already existing Universe; therefore, the black-and-white hole in this model is not "eternal".
|
Occurrence of a turning point in the dynamic solution for a Reissner-Nordstr\"om black hole
|
The paper addresses the problem of providing suitable reference trajectories in motion planning problems for autonomous vehicles. Among the various approaches to compute a reference trajectory, our aim is to find those trajectories which optimize a given performance criterion, for instance fuel consumption, comfort, safety, time, and obey constraints, e.g. collision avoidance, safety regions, control bounds. This task can be approached by geometric shortest path problems or by optimal control problems, which need to be solved efficiently. To this end we use direct discretization schemes and model-predictive control in combination with sensitivity updates to predict optimal solutions in the presence of perturbations. Applications arising in autonomous driving are presented. In particular, a distributed control algorithm for traffic scenarios with several autonomous vehicles that use car-to-car communication is introduced.
|
Optimization-based Motion Planning in Virtual Driving Scenarios with Application to Communicating Autonomous Vehicles
|
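The receding-horizon idea in the entry above — repeatedly solve a finite-horizon optimal control problem over a discretized model and apply only the first control — can be sketched for a toy double-integrator vehicle model. This is a minimal illustration under assumed dynamics and cost weights, not the distributed algorithm of the paper; the names `riccati_gains` and `mpc_step` are hypothetical helpers:

```python
import numpy as np

# Discrete double integrator (dt = 0.1 s): state = [position, velocity],
# control = acceleration. All matrices below are assumed for illustration.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.diag([1.0, 0.1])   # penalty on deviation from the reference (origin)
R = np.array([[0.01]])    # penalty on control effort
N = 20                    # prediction horizon length

def riccati_gains(A, B, Q, R, N):
    """Backward Riccati recursion for the finite-horizon LQ problem;
    returns the feedback gains ordered from stage 0 to stage N-1."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]

def mpc_step(x):
    """Receding horizon: re-solve over the horizon, apply only u_0."""
    K0 = riccati_gains(A, B, Q, R, N)[0]
    return float(-(K0 @ x))

# Regulate the vehicle from position 1.0 toward the reference trajectory.
x = np.array([1.0, 0.0])
for _ in range(100):
    u = mpc_step(x)
    x = A @ x + (B * u).flatten()
```

In the paper's setting, the quadratic stage cost is replaced by criteria such as fuel consumption or comfort, the linear dynamics by a vehicle model with collision-avoidance and control constraints, and the exact Riccati solve by a direct discretization with sensitivity updates against perturbations.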
This is a survey of the three main methods developed in the last 15 years to prove the existence of integral canonical models of Shimura varieties of Hodge type. The only new part is formed by corrections to results of Kisin.
|
Three methods to prove the existence of integral canonical models of Shimura varieties of Hodge type
|
We present a theoretical study for adhesion-induced lateral phase separation for a membrane with short stickers, long stickers and repellers confined between two hard walls. The effects of confinement and repellers on lateral phase separation are investigated. We find that the critical potential depth of the stickers for lateral phase separation increases as the distance between the hard walls decreases. This suggests confinement-induced or force-induced mixing of stickers. We also find that stiff repellers tend to enhance, while soft repellers tend to suppress adhesion-induced lateral phase separation.
|
Adhesion-induced lateral phase separation of multi-component membranes: the effect of repellers and confinement
|
Observations from the first flight of the Medium Scale Anisotropy Measurement (MSAM) are analyzed to place limits on Gaussian fluctuations in the Cosmic Microwave Background Radiation (CMBR). This instrument chops a $30'$ beam in a 3-position pattern with a throw of $\pm 40'$; the resulting data are analyzed in statistically independent single and double difference datasets. We observe in four spectral channels at 5.6, 9.0, 16.5, and 22.5 cm$^{-1}$, allowing the separation of interstellar dust emission from CMBR fluctuations. The dust component is correlated with the IRAS 100 $\mu$m map. The CMBR component has two regions where the signature of an unresolved source is seen. Rejecting these two source regions, we obtain a detection of fluctuations which match CMBR in our spectral bands of $0.6 \times 10^{-5} < \Delta T/T < 2.2 \times 10^{-5}$ (90% CL interval) for total rms Gaussian fluctuations with correlation angle $0.5^\circ$, using the single difference demodulation. For the double difference demodulation, the result is $1.1 \times 10^{-5} < \Delta T/T < 3.1 \times 10^{-5}$ (90% CL interval) at a correlation angle of $0.3^\circ$.
|
A Measurement of the Medium-Scale Anisotropy in the Cosmic Microwave Background Radiation
|
A major practical limitation of the Maddah-Ali-Niesen coded caching techniques is their high subpacketization level. For the simple network with a single server and multiple users, Yan \emph{et al.} proposed an alternative scheme with the so-called placement delivery arrays (PDA). Such a scheme requires slightly higher transmission rates but significantly reduces the subpacketization level. In this paper, we extend the PDA framework and propose three low-subpacketization schemes for combination networks, i.e., networks with a single server, multiple relays, and multiple cache-aided users that are connected to subsets of relays. One of the schemes achieves the cutset lower bound on the link rate when the cache memories are sufficiently large. Our other two schemes apply only to \emph{resolvable} combination networks. For these networks and for a wide range of cache sizes, the new schemes perform closely to the coded caching schemes that directly apply Maddah-Ali-Niesen scheme while having significantly reduced subpacketization levels.
|
Placement Delivery Array Design for Combination Networks with Edge Caching
|
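A placement delivery array, as used in the entry above, is an $F \times K$ array over `'*'` and integers whose structure simultaneously encodes the cache placement (the stars) and the multicast delivery (equal integers). A small sketch of the usual defining conditions, with a hypothetical checker `is_pda` (this follows the standard single-server PDA definition, not the combination-network extension of the paper):

```python
def is_pda(P, Z):
    """Check the defining PDA conditions on an F x K array P whose
    entries are '*' or integers; Z is the per-column star count."""
    F, K = len(P), len(P[0])
    # Condition 1: '*' appears exactly Z times in each column
    # (each user caches the same fraction Z/F of every file).
    if any(sum(P[f][k] == '*' for f in range(F)) != Z for k in range(K)):
        return False
    cells = [(f, k) for f in range(F) for k in range(K) if P[f][k] != '*']
    # Condition 2: two equal integers must sit in distinct rows and
    # columns, with '*' at the two crossing positions -- so each user
    # can cancel the other's subfile from the XORed multicast signal.
    for (f1, k1) in cells:
        for (f2, k2) in cells:
            if (f1, k1) < (f2, k2) and P[f1][k1] == P[f2][k2]:
                if f1 == f2 or k1 == k2:
                    return False
                if P[f1][k2] != '*' or P[f2][k1] != '*':
                    return False
    return True

# Maddah-Ali--Niesen scheme for K = 2 users with cache ratio 1/2,
# viewed as a 2 x 2 PDA: files split into F = 2 subfiles, one
# multicast transmission (the integer 1).
P = [['*', 1],
     [1, '*']]
```

The subpacketization level is the number of rows $F$; the scheme of Yan et al. trades a slightly higher number of transmissions (distinct integers $S$) for a much smaller $F$ than the Maddah-Ali-Niesen construction.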