This paper studies the synchronization of a finite number of Kuramoto oscillators in a frequency-dependent bidirectional tree network. We assume that the coupling strength of each link in each direction is equal to the product of a common coefficient and the exogenous frequency of its corresponding head oscillator. We derive a sufficient condition for the common coupling strength in order to guarantee frequency synchronization in tree networks. Moreover, we discuss the dependency of the obtained bound on both the graph structure and the way that exogenous frequencies are distributed. Further, we present an application of the obtained result by means of an event-triggered algorithm for achieving frequency synchronization in a star network assuming that the common coupling coefficient is given.
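The frequency-synchronization behaviour described above can be illustrated numerically. The sketch below is a simplified variant with a single common, frequency-independent coupling strength (the paper's frequency-dependent link weights are replaced by one constant K); the star topology, the natural frequencies, the value K = 10, and the forward-Euler integrator are all illustrative assumptions, not taken from the paper.

```python
import math

# Kuramoto oscillators on a 3-leaf star (node 0 is the hub).
# With a sufficiently large common coupling K, the instantaneous
# frequencies d(theta_i)/dt converge to a common value.
omega = [1.0, 0.5, 1.5, 0.8]        # exogenous (natural) frequencies
edges = [(0, 1), (0, 2), (0, 3)]    # star topology
K = 10.0                            # common coupling strength
theta = [0.0, 1.0, 2.0, 3.0]        # arbitrary initial phases

def rhs(theta):
    """Kuramoto vector field: dtheta_i/dt = omega_i + K*sum_j sin(theta_j - theta_i)."""
    d = list(omega)
    for i, j in edges:
        d[i] += K * math.sin(theta[j] - theta[i])
        d[j] += K * math.sin(theta[i] - theta[j])
    return d

dt = 1e-3
for _ in range(50_000):             # simple forward-Euler integration
    d = rhs(theta)
    theta = [t + dt * v for t, v in zip(theta, d)]

freqs = rhs(theta)                  # instantaneous frequencies at the end
spread = max(freqs) - min(freqs)
print(spread)                       # essentially zero: frequency synchronization
```

For a sufficiently large K the instantaneous frequencies collapse onto a common value, mirroring the kind of frequency synchronization that the paper's sufficient condition guarantees for tree networks.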
A detailed understanding of quantized conductance (QC), its correlation with resistive switching phenomena, and the controlled manipulation of quantized states is crucial for realizing atomic-scale multilevel memory elements. Here, we demonstrate highly stable and reproducible quantized conductance states (QC-states) in Al/niobium oxide/Pt resistive switching devices. The three levels of control over the QC-states required for multilevel quantized-state memories, namely switching ON to different quantized states, switching OFF from quantized states, and controlled inter-state switching from one QC-state to another, are demonstrated by imposing limiting conditions of stop voltage and current compliance. The well-defined multiple QC-states, along with a working principle for switching among the various states, show promise for the implementation of multilevel memory devices.
A semiconductor model of rocks is shown to describe unipolar magnetic pulses, a phenomenon that has been observed prior to earthquakes. These pulses are observable because their extremely long wavelength allows them to pass through the Earth's crust. Interestingly, the source of these pulses may be triangulated to pinpoint locations where stress is building deep within the crust. We couple a semiconductor drift-diffusion model to a magnetic field in order to describe the electromagnetic effects associated with electrical currents flowing within rocks. The resulting system of equations is solved numerically and it is seen that a volume of rock may act as a diode that produces transient currents when it switches bias. These unidirectional currents are expected to produce transient unipolar magnetic pulses similar in form, amplitude, and duration to those observed before earthquakes, and this suggests that the pulses could be the result of geophysical semiconductor processes.
Motivated by the theory of Riemann surfaces and specifically the significance of Weierstrass points, we classify all finite simple groups that have a faithful transitive action with fixity 4, along with details about all possible such actions.
In this paper we give a classification of two-dimensional real evolution algebras. We also study the classification dynamics of several chains of evolution algebras.
In this paper we give an extension of the Barbashin-Krasovskii-LaSalle Theorem to a class of time-varying dynamical systems, namely the class of systems for which the vector field restricted to the zero-set of the time derivative of the Liapunov function is time-invariant and this set includes some trajectories. Our goal is to improve the sufficient conditions for the case of uniform asymptotic stability of the equilibrium. We obtain an extension of a well-known linear result to the case of zero-state detectability (given a detectable pair (C,A), if there exists a positive semidefinite matrix P>=0 such that A^TP+PA+C^TC=0, then A is Hurwitz, i.e. all of its eigenvalues have negative real part), as well as a result about robust stabilizability of nonlinear affine control systems.
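The linear result quoted in parentheses can be checked numerically on a hand-picked example. The matrices A and C below are illustrative choices of ours, and the Lyapunov equation is solved in closed form because A is diagonal.

```python
import numpy as np

# Sanity check of the quoted linear result: if (C, A) is detectable and
# P >= 0 solves A^T P + P A + C^T C = 0, then A is Hurwitz.
A = np.diag([-1.0, -2.0])
C = np.array([[1.0, 1.0]])

# For diagonal A the Lyapunov equation decouples entrywise:
# P_ij = -(C^T C)_ij / (lambda_i + lambda_j).
CtC = C.T @ C
lam = np.diag(A)
P = np.array([[-CtC[i, j] / (lam[i] + lam[j]) for j in range(2)]
              for i in range(2)])

assert np.allclose(A.T @ P + P @ A + CtC, 0)   # P solves the equation
assert np.all(np.linalg.eigvalsh(P) >= 0)      # P is positive semidefinite
assert np.all(np.linalg.eigvals(A).real < 0)   # A is indeed Hurwitz
```

Here P = [[1/2, 1/3], [1/3, 1/4]], which is positive definite, consistent with A having all eigenvalues in the open left half-plane.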
In this paper we formulate a conjecture on the relationship between the equivariant \epsilon-constants (associated to a local p-adic representation V and a finite extension of local fields L/K) and local Galois cohomology groups of a Galois stable \mathbb{Z}_{p}-lattice T of V. We prove the conjecture for L/K being an unramified extension of degree prime to p and T being a p-adic Tate module of a one-dimensional Lubin-Tate group defined over \mathbb{Z}_{p} by extending the ideas of \cite{Breu} from the case of the multiplicative group \mathbb{G}_{m} to arbitrary one-dimensional Lubin-Tate groups. For the connection to the different formulations of the \epsilon-conjecture in \cite{BB}, \cite{FK}, \cite{Breu}, \cite{BlB} and \cite{BF} see \cite{Iz}.
The plan for the International Linear Collider (ILC) is now being prepared as a staged design, with the first stage of 2 ${\rm ab^{-1}}$ at 250 GeV and later stages achieving the full project specifications with 4 ${\rm ab^{-1}}$ at 500 GeV. The talk presents the capabilities for precision Higgs boson measurements at 250 GeV. It is shown that the 250 GeV stage of the ILC will already provide many compelling results in Higgs physics, with new measurements unavailable at the Large Hadron Collider, model-independent determinations of key parameters, and possible discrimination of a variety of scenarios for new physics.
We apply complex network analysis to the state spaces of random Boolean networks (RBNs). An RBN contains $N$ Boolean elements each with $K$ inputs. A directed state space network (SSN) is constructed by linking each dynamical state, represented as a node, to its temporal successor. We study the heterogeneity of an SSN at both local and global scales, as well as sample-to-sample fluctuations within an ensemble of SSNs. We use in-degrees of nodes as a local topological measure, and the path diversity [Phys. Rev. Lett. 98, 198701 (2007)] of an SSN as a global topological measure. RBNs with $2 \leq K \leq 5$ exhibit non-trivial fluctuations at both local and global scales, with K=2 exhibiting the largest sample-to-sample, possibly non-self-averaging, fluctuations. We interpret the observed ``multi-scale'' fluctuations in the SSNs as indicative of the criticality and complexity of K=2 RBNs. ``Garden of Eden'' (GoE) states are nodes on an SSN that have in-degree zero. While in-degrees of non-GoE nodes for $K>1$ SSNs can assume any integer value between 1 and $2^N$, for K=1 all the non-GoE nodes in an SSN have the same in-degree, which is always a power of two.
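The K=1 in-degree claim can be illustrated on a toy network. Everything below (the wiring and the three update functions) is a hand-picked example of ours, not taken from the paper:

```python
from itertools import product

# Build the full state space network (SSN) of a 3-node K=1 Boolean
# network and check that every non-Garden-of-Eden state has the same
# in-degree, which is a power of two.
N = 3
inputs = [1, 2, 0]                 # node i reads node inputs[i]
funcs = [lambda b: b,              # node 0: identity
         lambda b: 1 - b,          # node 1: negation
         lambda b: 1]              # node 2: constant 1

def successor(state):
    return tuple(funcs[i](state[inputs[i]]) for i in range(N))

indeg = {s: 0 for s in product((0, 1), repeat=N)}
for s in list(indeg):
    indeg[successor(s)] += 1

nonzero = {d for d in indeg.values() if d > 0}
print(nonzero)                     # {2}: all non-GoE states share in-degree 2
```

Here the successor map sends (a, b, c) to (b, 1-c, 1), so the four states ending in 1 each have exactly two preimages (in-degree 2, a power of two), while the four ending in 0 are GoE states.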
Quantum bits (qubits) are prone to several types of errors due to uncontrolled interactions with their environment. Common strategies to correct these errors are based on architectures of qubits involving daunting hardware overheads. A hopeful path forward is to build qubits that are inherently protected against certain types of errors, so that the overhead required to correct the remaining ones is significantly reduced. However, the foreseen benefit rests on a severe condition: quantum manipulations of the qubit must not break the protection that has been so carefully engineered. A recent qubit - the cat-qubit - is encoded in the manifold of metastable states of a quantum dynamical system, thereby acquiring continuous and autonomous protection against bit-flips. Here, in a superconducting circuit experiment, we implement a cat-qubit with bit-flip times exceeding 10 seconds. This is a four-order-of-magnitude improvement over previous cat-qubit implementations. We prepare and image quantum superposition states, and measure phase-flip times above 490 nanoseconds. Most importantly, we control the phase of these quantum superpositions without breaking bit-flip protection. This experiment demonstrates the compatibility of quantum control and inherent bit-flip protection at an unprecedented level, showing the viability of these dynamical qubits for future quantum technologies.
We consider a population constituted by two types of individuals; each of them can produce offspring in two different islands (as a particular case, the islands can be interpreted as active or dormant individuals). We model the evolution of the population of each type using a two-type Feller diffusion with immigration, and we study the frequency of one of the types, in each island, when the total population size in each island is forced to be constant at a dense set of times. This leads to the solution of an SDE which we call the asymmetric two-island frequency process. We derive properties of this process and obtain a large population limit when the total size of each island tends to infinity. Additionally, we compute the fluctuations of the process around its deterministic limit. We establish conditions under which the asymmetric two-island frequency process has a moment dual. The dual is a continuous-time two-dimensional Markov chain that can be interpreted in terms of mutation, branching, pairwise branching, coalescence, and a novel mixed selection-migration term. Also, we conduct a stability analysis of the limiting deterministic dynamical system and present some numerical results to study fixation and a new form of balancing selection. When restricting to the seedbank model, we observe that some combinations of the parameters lead to balancing selection. Besides finding yet another way in which genetic reservoirs increase the genetic variability, we find that if a population that sustains a seedbank competes with one that does not, the seed producers will have a selective advantage if they reproduce faster, but will not have a selective disadvantage if they reproduce slower: their worst-case scenario is balancing selection.
We model the emission lines generated in the photoionised debris of a tidally disrupted horizontal branch star. We find that at late times, the brightest optical emission lines are [N II] \lambda\lambda 6548,6583 and [O III] \lambda\lambda 4959,5007. Models of a red clump horizontal branch star undergoing mild disruption by a massive (50 -- 100 M_\sun) black hole yield an emission line spectrum that is in good agreement with that observed in the NGC 1399 globular cluster hosting the ultraluminous X-ray source CXOJ033831.8 - 352604. We make predictions for the UV emission line spectrum that can verify the tidal disruption scenario and constrain the mass of the BH.
Flip-sort is a natural sorting procedure which raises fascinating combinatorial questions. It finds its roots in the seminal work of Knuth on stack-based sorting algorithms and leads to many links with permutation patterns. We present several structural, enumerative, and algorithmic results on permutations that need few (resp. many) iterations of this procedure to be sorted. In particular, we give the shape of the permutations after one iteration, and characterize several families of permutations related to the best and worst cases of flip-sort. En passant, we also give some links between pop-stack sorting, automata, and lattice paths, and introduce several tactics of bijective proofs which are of independent interest.
Mrk 841 is a bright Seyfert 1 galaxy known to harbor a strong soft excess and a variable K$\alpha$ iron line. It has been observed during 3 different periods by XMM for a total cumulated exposure time of $\sim$108 ks. We present in this paper a broad-band spectral analysis of the complete EPIC-pn data sets. We were able to test two different models for the soft excess, a relativistically blurred photoionized reflection (\r model) and a relativistically smeared ionized absorption (\a model). The continuum is modeled by a simple cut-off power law, and we also add a neutral reflection. These observations reveal the extreme and puzzling spectral and temporal behaviors of the soft excess and iron line. The 0.5-3 keV soft X-ray flux decreases by a factor of 3 between 2001 and 2005, and the line shape appears to be a mixture of broad and narrow components. We succeed in describing this complex broad-band 0.5-10 keV spectral variability using either \r or \a to fit the soft excess. Both models give statistically equivalent results, even including simultaneous BeppoSAX data up to 200 keV. Both models are consistent with the presence of remote reflection characterized by a constant narrow component in the data. However, they differ in the presence of a broad line component, present in \r but not needed in \a. This study also reveals the sporadic presence of relativistically redshifted narrow iron lines.
A graph with $v$ vertices is $(r)$-pancyclic if it contains precisely $r$ cycles of every length from 3 to $v$. A bipartite graph with an even number of vertices $v$ is said to be $(r)$-bipancyclic if it contains precisely $r$ cycles of each even length from 4 to $v$. A bipartite graph with an odd number of vertices $v$ and minimum degree at least 2 is said to be oddly $(r)$-bipancyclic if it contains precisely $r$ cycles of each even length from 4 to $v-1$. In this paper, using computer search, we classify all $(r)$-pancyclic and $(r)$-bipancyclic graphs with $v$ vertices and at most $v+5$ edges. We also classify all oddly $(r)$-bipancyclic graphs with $v$ vertices and at most $v+4$ edges.
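The definition of $(r)$-pancyclicity can be made concrete with a brute-force cycle counter (illustrative code of ours, not the search procedure used in the paper). For example, $K_4$ has 4 triangles but only 3 Hamiltonian cycles, so no single $r$ works for it:

```python
from itertools import combinations, permutations

# Count the cycles of each length k (3 <= k <= v) in a small graph.
# Each undirected cycle is counted once by fixing its smallest vertex
# as the starting point and fixing a traversal orientation.
def cycle_counts(vertices, edges):
    edges = {frozenset(e) for e in edges}
    counts = {}
    for k in range(3, len(vertices) + 1):
        total = 0
        for subset in combinations(sorted(vertices), k):
            first, rest = subset[0], subset[1:]
            for perm in permutations(rest):
                if perm[0] > perm[-1]:     # skip the reversed traversal
                    continue
                cycle = (first,) + perm
                if all(frozenset((cycle[i], cycle[(i + 1) % k])) in edges
                       for i in range(k)):
                    total += 1
        counts[k] = total
    return counts

K4 = cycle_counts(range(4), combinations(range(4), 2))
print(K4)   # {3: 4, 4: 3}
```

Since the counts differ across lengths, $K_4$ is not $(r)$-pancyclic for any $r$; a graph qualifying for the classification must have all these counts equal.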
Suppose $X^{N}$ is a closed oriented manifold, $\alpha \in H^*(X;\mathbb{R})$ is a cohomology class, and $Z \in H_{N-k}(X)$ is an integral homology class. We ask the following question: is there an oriented embedded submanifold $Y^{N-k} \subset X$ with homology class $Z$ such that $\alpha|_Y = 0 \in H^*(Y;\mathbb{R})$? In this article, we provide a family of computable obstructions to the existence of such 'exact' submanifolds in a given homology class which arise from studying formal deformations of the de Rham complex. In the final section, we apply these obstructions to prove that the following symplectic manifolds admit no non-separating exact (a fortiori contact-type) hypersurfaces: K\"ahler manifolds, symplectically uniruled manifolds, and the Kodaira--Thurston manifold.
We introduce a notion of uniform Ding stability for a projective manifold with big anticanonical class, and prove that the existence of a unique K\"ahler-Einstein metric on such a manifold implies uniform Ding stability. The main new techniques are to develop a general theory of Deligne functionals - and corresponding slope formulas - for singular metrics, and to prove a slope formula for the Ding functional in the big setting. This extends work of Berman in the Fano situation, when the anticanonical class is actually ample, and proves one direction of the analogue of the Yau-Tian-Donaldson conjecture in this setting. We also speculate about the relevance of uniform Ding stability and K-stability to moduli in the big setting.
We consider an elastic string (rod) of large mass M, whose left end is attached to a body of small mass m; a second body of mass m is attached to the right end of the string. A force of delta-function form is applied to the left end of the string. We find the propagation of a pulse in the system. The problem is related to the quark-string model of mesons.
We focus on a comparison of the space densities of FRI and FRII extended radio sources at different epochs, and find that FRI and FRII sources show similar space density enhancements in various redshift ranges, possibly implying a common evolution.
In this work we characterize a full Kostant-Toda system in terms of a family of matrix polynomials orthogonal with respect to a complex matrix measure. In order to study the solution of this dynamical system we give explicit expressions for the Weyl function and we also obtain, under some conditions, a representation of the vector of linear functionals associated with this system.
In this work we introduce a concept of complexity for undirected graphs in terms of the spectral analysis of the Laplacian operator defined by the incidence matrix of the graph. Precisely, we compute the norm of the vector of eigenvalues of both the graph and its complement and take their product. Doing so, we obtain a quantity that satisfies two basic properties expected of a measure of complexity. First, the complexity of fully connected and fully disconnected graphs vanishes. Second, the complexities of complementary graphs coincide. This notion of complexity allows us to distinguish different kinds of graphs by placing them in a "croissant-shaped" region of the link density - complexity plane, highlighting features such as connectivity, concentration, uniformity or regularity, and the existence of clique-like clusters. Indeed, considering graphs with a fixed number of nodes, by plotting the link density versus the complexity we find that graphs generated by different methods occupy different regions of the plane. We consider some generated graphs, in particular the Erd\H{o}s-R\'enyi, the Watts-Strogatz and the Barab\'asi-Albert models. We also place some particular deterministic graphs, namely lattices, stars, hyper-concentrated graphs, and graphs containing cliques. It is worth noticing that these deterministic classical models of graphs delineate the boundary of the croissant-shaped region. Finally, as an application to graphs generated by real measurements, we consider the brain connectivity graphs of two epileptic patients obtained from magnetoencephalography (MEG) recordings, both in a baseline period and in ictal periods. In this case, our definition of complexity could be used as a tool for discerning between states through the analysis of differences at distinct frequencies of the MEG recording.
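As a minimal sketch of the proposed measure, assuming that the "norm of the vector of eigenvalues" is the Euclidean norm of the Laplacian spectrum (an assumption on our part), the two basic properties can be verified numerically:

```python
import numpy as np

# c(G) = ||eig L(G)|| * ||eig L(G^c)||, with L = D - A the graph Laplacian.
def laplacian(adj):
    adj = np.asarray(adj, dtype=float)
    return np.diag(adj.sum(axis=1)) - adj

def complexity(adj):
    adj = np.asarray(adj, dtype=float)
    comp = 1.0 - adj - np.eye(len(adj))        # adjacency of the complement
    norm = lambda a: np.linalg.norm(np.linalg.eigvalsh(laplacian(a)))
    return norm(adj) * norm(comp)

# Complete graph on 4 vertices: its complement is empty, so c vanishes.
K4 = 1 - np.eye(4)
assert np.isclose(complexity(K4), 0.0)

# Complementary graphs get the same value (a path P4 and its complement).
P4 = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]])
assert np.isclose(complexity(P4), complexity(1 - P4 - np.eye(4)))
```

The symmetry under complementation is immediate from the definition, since complementing twice returns the original graph and the product is commutative.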
Radio relics are diffuse radio sources observed in galaxy clusters, probably produced by shock acceleration during cluster-cluster mergers. Their large size, of the order of 1 Mpc, indicates that the emitting electrons need to be (re)accelerated locally. The usually invoked Diffusive Shock Acceleration models have been challenged by recent observations and theory. We report the discovery of complex radio emission in the galaxy cluster PLCKG287.0+32.9, which hosts two relics, a radio halo, and several regions of filamentary radio emission. Optical observations suggest that the cluster is elongated, likely along an intergalactic filament, and displays a significant amount of substructure. The peculiar features of this radio relic are that (i) it appears to be connected to the lobes of a radio galaxy and (ii) the radio spectrum steepens on either side of the radio relic. We discuss the origins of these features in the context of particle re-acceleration.
We formulate conjectures regarding percolation on planar triangulations suggested by assuming (quasi) invariance under coarse conformal uniformization.
It was recently found that the stabilizer and hypergraph entropy cones coincide for four parties, leading to a conjecture of their equivalence at higher party numbers. In this note, we show this conjecture to be false by proving new inequalities obeyed by all hypergraph entropy vectors that exclude particular stabilizer states on six qubits. By further leveraging this connection, we improve the characterization of stabilizer entropies and show that all linear rank inequalities at five parties, except for classical monotonicity, form facets of the stabilizer cone. Additionally, by studying minimum cuts on hypergraphs, we prove some structural properties of hypergraph representations of entanglement and generalize the notion of entanglement wedge nesting in holography.
During Big Bang Nucleosynthesis (BBN), in the first 20 minutes of the evolution of the Universe, the light nuclides D, 3He, 4He, and 7Li were synthesized in astrophysically interesting abundances. The Cosmic Microwave Background Radiation (CMB) observed at present was last scattered some 400 thousand years later. BBN and the CMB (supplemented by more recent Large Scale Structure data) provide complementary probes of the early evolution of the Universe and enable constraints on the high temperature/energy physical processes in it. In this overview the predictions and observations of two physical quantities, the baryon density parameter and the expansion rate parameter, are compared to see if there is agreement between theory and observation at these two widely separated epochs. After answering this question in the affirmative, the consequences of this concordance for physics beyond the standard models of particle physics and cosmology are discussed.
Results are presented from the measurement by ATLAS of long-range ($|\Delta\eta|>2$) dihadron angular correlations in $\sqrt{s}=8$ and 13 TeV $pp$ collisions containing a $Z$ boson. The analysis is performed using 19.4 $\mathrm{fb}^{-1}$ of $\sqrt{s}=8$ TeV data recorded during Run 1 of the LHC and 36.1 $\mathrm{fb}^{-1}$ of $\sqrt{s}=13$ TeV data recorded during Run 2. Two-particle correlation functions are measured as a function of relative azimuthal angle over the relative pseudorapidity range $2<|\Delta\eta|<5$ for different intervals of charged-particle multiplicity and transverse momentum. The measurements are corrected for the presence of background charged particles generated by collisions that occur during one passage of two colliding proton bunches in the LHC. Contributions to the two-particle correlation functions from hard processes are removed using a template-fitting procedure. Sinusoidal modulation in the correlation functions is observed and quantified by the second Fourier coefficient of the correlation function, $v_{2,2}$, which in turn is used to obtain the single-particle anisotropy coefficient $v_{2}$. The $v_{2}$ values in the $Z$-tagged events, integrated over $0.5<p_{\mathrm{T}}<5$ GeV, are found to be independent of multiplicity and $\sqrt{s}$, and consistent within uncertainties with previous measurements in inclusive $pp$ collisions. As a function of charged-particle $p_{\mathrm{T}}$, the $Z$-tagged and inclusive $v_{2}$ values are consistent within uncertainties for $p_{\mathrm{T}}<3$ GeV.
We discuss the phase diagram of moderately dense, locally neutral three-flavor quark matter using the framework of an effective model of quantum chromodynamics with a local interaction. The phase diagrams in the plane of temperature and quark chemical potential as well as in the plane of temperature and lepton-number chemical potential are discussed.
We provide a concise, yet fairly complete discussion of the concept of essential closures of subsets of the real axis and their intimate connection with the topological support of absolutely continuous measures. As an elementary application of the notion of the essential closure of subsets of $\mathbb{R}$ we revisit the fact that CMV, Jacobi, and Schr\"odinger operators, reflectionless on a set E of positive Lebesgue measure, have absolutely continuous spectrum on the essential closure of the set E (with uniform multiplicity two on E). Though this result in the case of Schr\"odinger and Jacobi operators is known to experts, we feel it nicely illustrates the concept and usefulness of essential closures in the spectral theory of classes of reflectionless differential and difference operators.
Vector meson correlators are studied in effective chiral quark models with both local and non-local regulators. A set of sum rules based on dispersion relations is derived, which allows for a comparison of model predictions to data. We show that the two Weinberg sum rules for vector correlators hold in the non-local model.
The proof of the Khalfin Theorem for the neutral meson complex is analyzed. It is shown that the unitarity of the time evolution operator for the total system under consideration assures that Khalfin's Theorem holds. The consequences of this Theorem for the neutral meson system are discussed: it is shown, e.g., that the diagonal matrix elements of the exact effective Hamiltonian for the neutral meson complex cannot be equal if CPT symmetry holds and CP symmetry is violated. Properties of time evolution governed by a time-independent effective Hamiltonian acting in the neutral meson subspace of states are considered. Using Khalfin's Theorem, it is shown that if such a Hamiltonian is time-independent, then the evolution operator for the total system containing the neutral meson complex cannot be a unitary operator. It is shown graphically for a specific model how Khalfin's Theorem works. It is also shown for this model how the difference of the mentioned diagonal matrix elements of the effective Hamiltonian varies in time.
The new multi-epoch near-infrared VVV survey (VISTA Variables in the Via Lactea) is sampling 562 sq. deg of the Galactic bulge and adjacent regions of the disk. Accurate astrometry established for the region surveyed allows the VVV data to be merged with overlapping surveys (e.g., GLIMPSE, WISE, 2MASS, etc.), thereby enabling the construction of longer baseline spectral energy distributions for astronomical targets. However, in order to maximize use of the VVV data, a set of transformation equations is required to place the VVV JHKs photometry onto the 2MASS system. The impetus for this work is to develop those transformations via a comparison with 2MASS targets in 152 VVV fields sampling the Galactic disk. The transformation coefficients derived exhibit a reliance on variables such as extinction. The transformed data were subsequently employed to establish a mean reddening law of E_{J-H}/E_{H-Ks}=2.13 +/- 0.04, which is the most precise determination to date and emphasizes the pertinence of the VVV data for determining such important parameters.
An element $a$ in a ring $R$ is strongly J-clean if it is the sum of an idempotent and an element of the Jacobson radical that commute with each other. We characterize the strongly J-clean $2\times 2$ matrices over 2-projective-free non-commutative rings.
An improvement of the author's result, proved in 1961, concerning necessary and sufficient conditions for the compactness of an imbedding operator is given.
Network slicing has been introduced in 5G/6G networks to address the challenge of providing new services with different and sometimes conflicting requirements. With SDN and NFV technologies being used in the design of 5G and 6G wireless network slicing, as well as the centralization of control over these technologies, new services such as resource calendaring can also be offered in wireless networks. In bandwidth calendaring, high-volume traffic with low latency sensitivity is shifted to later time slots so that applications with high latency sensitivity can be served instead. We discuss how to calendar radio resources in the C-RAN architecture, which also makes use of network slicing; this is referred to as Slice-Aware Radio Resource Calendaring. The problem is modeled as an ILP, and, due to the complexity of the optimal solution, two heuristic algorithms are proposed for solving it. Evaluations show that when resources are shared between tenants, the number of accepted requests increases.
In the literature, the performance of quantum data transmission systems is usually evaluated in the absence of thermal noise. A more realistic approach taking thermal noise into account is intrinsically more difficult because it requires dealing with Glauber coherent states in an infinite-dimensional space. In particular, the exact evaluation of the optimal measurement operators is a very difficult task, and numerical approximation is unavoidable. The paper addresses the problem by approximating the P-representation of the noisy quantum states with a large but finite number of terms and applying the square root measurement (SRM) approach to them. Comparisons with the exact solution obtained with convex semidefinite programming show that the SRM approach gives quite accurate results. As an application, the performance of quadrature amplitude modulation (QAM) and phase shift keying (PSK) systems is considered. Although the SRM approach is not optimal and overestimates the error probability, in these cases too the quantum detection maintains its superiority with respect to classical homodyne detection.
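The SRM itself is straightforward to sketch in a finite-dimensional truncation. The example below applies it to two equiprobable pure states (a caricature of binary signaling; the overlap 0.6 is chosen arbitrarily by us). For this special case the SRM is known to attain the Helstrom bound, which the final assertion checks.

```python
import numpy as np

# Square root measurement for two equiprobable pure states.
# For two pure states the SRM attains the Helstrom bound
#   P_e = (1 - sqrt(1 - |<psi0|psi1>|^2)) / 2.
c = 0.6                                       # overlap <psi0|psi1>
psi = np.array([[1.0, c],
                [0.0, np.sqrt(1 - c**2)]])    # states as columns

S = psi @ psi.conj().T                        # sum_i |psi_i><psi_i|
w, V = np.linalg.eigh(S)
S_inv_sqrt = V @ np.diag(w ** -0.5) @ V.conj().T

mu = S_inv_sqrt @ psi                         # SRM detection vectors
p_correct = np.mean([abs(mu[:, i].conj() @ psi[:, i]) ** 2
                     for i in range(2)])
p_error = 1 - p_correct

helstrom = 0.5 * (1 - np.sqrt(1 - c**2))
assert np.isclose(p_error, helstrom)
print(p_error)                                # ~0.1 for overlap 0.6
```

For larger constellations such as QAM and PSK the construction is the same, with one column per state, which is why the SRM scales so gracefully once the noisy states are truncated to a finite basis.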
Single self-assembled InAs/GaAs quantum dots are promising bright sources of indistinguishable photons for quantum information science. However, their distribution in emission wavelength, due to inhomogeneous broadening inherent to their growth, has limited the ability to create multiple identical sources. Quantum frequency conversion can overcome this issue, particularly if implemented using scalable chip-integrated technologies. Here, we report the first demonstration of quantum frequency conversion of a quantum dot single-photon source on a silicon nanophotonic chip. Single photons from a quantum dot in a micropillar cavity are shifted in wavelength with an on-chip conversion efficiency of approximately 12 %, limited by the linewidth of the quantum dot photons. The intensity autocorrelation function g(2)(tau) for the frequency-converted light is antibunched with g(2)(0) = 0.290 +/- 0.030, compared to the before-conversion value g(2)(0) = 0.080 +/- 0.003. We demonstrate the suitability of our frequency conversion interface as a resource for quantum dot sources by characterizing its effectiveness across a wide span of input wavelengths (840 nm to 980 nm), and its ability to achieve tunable wavelength shifts difficult to obtain by other approaches.
By developing the method of multipliers, we establish sufficient conditions which guarantee the total absence of eigenvalues of the Laplacian in the half-space, subject to variable complex Robin boundary conditions. As a further application of this technique, uniform resolvent estimates are derived under the same assumptions on the potential. Some of the results are new even in the self-adjoint setting, where we obtain quantum-mechanically natural conditions.
Specialized compute blocks have been developed for efficient DNN execution. However, due to the vast amount of data and parameter movements, the interconnects and on-chip memories form another bottleneck, impairing power and performance. This work addresses this bottleneck by contributing a low-power technique for edge-AI inference engines that combines overhead-free coding with a statistical analysis of the data and parameters of neural networks. Our approach reduces the interconnect and memory power consumption by up to 80% for state-of-the-art benchmarks while providing additional power savings for the compute blocks by up to 39%. These power improvements are achieved with no loss of accuracy and negligible hardware cost.
The semi-classical study of a 1-dimensional Schr\"odinger operator near a non-degenerate maximum of the potential has led Colin de Verdi\`ere and Parisse to prove a microlocal normal form theorem for any 1-dimensional pseudo-differential operator with the same kind of singularity. We present here a generalization of this result to pseudo-differential integrable systems of any finite degree of freedom with a Morse singularity. Our results are based upon Eliasson's study of critical integrable systems.
We discuss path integrals for quantum mechanics with a potential which is a perturbation of the upside-down oscillator. We express the path integral (in real time) by the Wiener measure. We obtain the Feynman integral for perturbations which are the Fourier-Laplace transforms of a complex measure and for polynomials of the form $x^{4n}$ and $x^{4n+2}$ (where $n$ is a natural number). We extend the method to quantum field theory (QFT) with complex scaled spatial coordinates ${\bf x}\rightarrow i{\bf x}$. We show that such a complex extension of the path integral (in real time) allows a rigorous path integral treatment of a large class of potentials, including ones unbounded from below.
Pathological diagnosis is the gold standard for cancer diagnosis, but it is labor-intensive, with tasks such as cell detection, classification, and counting being particularly prominent. A common solution for automating these tasks is nucleus segmentation. However, it is hard to train a robust nucleus segmentation model due to several challenging problems: nucleus adhesion, stacking, and excessive fusion with the background. Recently, some researchers have proposed a series of automatic nucleus segmentation methods based on point annotation, which can significantly improve model performance. Nevertheless, the point annotations need to be marked by experienced pathologists. In order to take advantage of segmentation methods based on point annotation, further alleviate the manual workload, and make cancer diagnosis more efficient and accurate, it is necessary to develop an automatic nucleus detection algorithm that can automatically and efficiently locate the position of each nucleus in the pathological image and extract valuable information for pathologists. In this paper, we propose a W-shaped network for automatic nucleus detection. Different from traditional U-Net based methods, which map the original pathology image to the target mask directly, our proposed method splits the detection task into two sub-tasks. The first sub-task maps the original pathology image to a binary mask; the second maps the binary mask to a density mask. After the task is split, the difficulty of each sub-task is significantly reduced, and the network's overall performance is improved.
Mathematical models are fundamental building blocks in the design of dynamical control systems. As control systems are becoming increasingly complex and networked, approaches for obtaining such models based on first principles reach their limits. Data-driven methods provide an alternative. However, without structural knowledge, these methods are prone to finding spurious correlations in the training data, which can hamper generalization capabilities of the obtained models. This can significantly lower control and prediction performance when the system is exposed to unknown situations. A preceding causal identification can prevent this pitfall. In this paper, we propose a method that identifies the causal structure of control systems. We design experiments based on the concept of controllability, which provides a systematic way to compute input trajectories that steer the system to specific regions in its state space. We then analyze the resulting data leveraging powerful techniques from causal inference and extend them to control systems. Further, we derive conditions that guarantee the discovery of the true causal structure of the system. Experiments on a robot arm demonstrate reliable causal identification from real-world data and enhanced generalization capabilities.
Consider a 2-dimensional soft random geometric graph $G(\lambda,s,\phi)$, obtained by placing a Poisson($\lambda s^2$) number of vertices uniformly at random in a square of side $s$, with edges placed between each pair $x,y$ of vertices with probability $\phi(\|x-y\|)$, where $\phi: {\bf R}_+ \to [0,1]$ is a finite-range connection function. This paper is concerned with the asymptotic behaviour of the graph $G(\lambda,s,\phi)$ in the large-$s$ limit with $(\lambda,\phi)$ fixed. We prove that the proportion of vertices in the largest component converges in probability to the percolation probability for the corresponding random connection model, which is a random graph defined similarly for a Poisson process on the whole plane. We do not cover the case where $\lambda$ equals the critical value $\lambda_c(\phi)$.
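As an illustration of the model just defined (a sketch, not taken from the paper), the following Python code samples a soft random geometric graph with a hypothetical finite-range connection function $\phi(d)=e^{-d}\mathbf{1}_{d<1.5}$ and computes the proportion of vertices in the largest component; for simplicity the Poisson($\lambda s^2$) vertex count is replaced by its mean, and all parameter values are illustrative.

```python
import math
import random
from collections import Counter

def phi(d):
    """Finite-range connection function: exp(-d) for d < 1.5, else 0 (illustrative)."""
    return math.exp(-d) if d < 1.5 else 0.0

def largest_component_fraction(lam=2.0, s=15.0, seed=3):
    """Sample a soft RGG on [0, s]^2 and return the proportion of
    vertices lying in its largest connected component."""
    rng = random.Random(seed)
    n = int(lam * s * s)  # deterministic stand-in for the Poisson(lam * s^2) count
    pts = [(rng.uniform(0, s), rng.uniform(0, s)) for _ in range(n)]
    # union-find over the vertex set
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    # connect each pair independently with probability phi(distance)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(pts[i], pts[j])
            if d < 1.5 and rng.random() < phi(d):
                parent[find(i)] = find(j)
    sizes = Counter(find(i) for i in range(n))
    return max(sizes.values()) / n

print(largest_component_fraction())
```

With these (supercritical) parameters the largest component contains most of the vertices; in the large-$s$ limit this proportion is the quantity the theorem identifies with the percolation probability of the random connection model.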
We consider Hamilton-Jacobi-Bellman equations in an infinite dimensional Hilbert space, with quadratic (respectively superquadratic) Hamiltonian and with continuous (respectively Lipschitz continuous) final conditions. This allows us to study stochastic optimal control problems for suitable controlled Ornstein-Uhlenbeck processes with unbounded control processes.
This paper describes an architecture for predicting the price of cryptocurrencies for the next seven days using the Adaptive Network Based Fuzzy Inference System (ANFIS). The cryptocurrencies and indexes whose historical data are considered are Bitcoin (BTC), Ethereum (ETH), Bitcoin Dominance (BTC.D), and Ethereum Dominance (ETH.D), in a daily timeframe. The methods used to train on the data are hybrid and backpropagation algorithms, together with grid partition, subtractive clustering, and Fuzzy C-means (FCM) algorithms, which are used for data clustering. The performance of the architecture designed in this paper has been compared with different inputs and neural network models in terms of statistical evaluation criteria. Finally, the proposed method can predict the price of digital currencies in a short time.
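As a toy illustration of one of the clustering components mentioned above, here is a minimal one-dimensional fuzzy C-means iteration, a sketch following the standard FCM update rules with fuzzifier m = 2; the data values and initialization are invented and this is not the paper's implementation.

```python
def fcm(points, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy C-means: alternate membership and centroid updates."""
    centers = [min(points), max(points)][:c]  # simple spread-out initialization
    for _ in range(iters):
        # membership u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = []
        for x in points:
            d = [abs(x - v) or 1e-12 for v in centers]  # guard exact hits
            row = [1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(c))
                   for i in range(c)]
            U.append(row)
        # centroid update weighted by memberships raised to m
        centers = [
            sum(U[k][i] ** m * points[k] for k in range(len(points)))
            / sum(U[k][i] ** m for k in range(len(points)))
            for i in range(c)
        ]
    return sorted(centers)

print(fcm([1.0, 1.2, 0.9, 8.0, 8.3, 7.9]))
```

On this toy data the two centers converge near the means of the two obvious groups (about 1.03 and 8.07).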
Reconstruction of gene regulatory networks is the process of identifying gene dependencies from gene expression profiles through computational techniques. In the human body, all cells possess essentially the same genetic material, but their activation states may vary. This variation in gene activation helps researchers understand more about the function of cells. Through microarray technology and related techniques, researchers gain insight into diseases such as mental illness, infectious disease, cancer, and heart disease. In this study, a cancer-specific gene regulatory network has been constructed using a simple and novel machine learning approach. In the first step, a linear regression algorithm identified the significant genes that are differentially expressed. Next, the regulatory relationships between the identified genes were computed using the Pearson correlation coefficient. Finally, the obtained results were validated against the available databases and literature. The identified hub genes can be targeted for cancer diagnosis.
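A minimal sketch of the second step described above, computing pairwise Pearson correlations between expression profiles and thresholding them into network edges; the gene names, expression values, and threshold are purely illustrative, not from the study.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def build_network(profiles, threshold=0.8):
    """Draw an edge between gene pairs whose |correlation| exceeds the threshold."""
    genes = list(profiles)
    edges = []
    for i, g1 in enumerate(genes):
        for g2 in genes[i + 1:]:
            r = pearson(profiles[g1], profiles[g2])
            if abs(r) >= threshold:
                edges.append((g1, g2, round(r, 3)))
    return edges

profiles = {
    "geneA": [1.0, 2.0, 3.0, 4.0],
    "geneB": [2.1, 3.9, 6.2, 8.0],   # tracks geneA closely
    "geneC": [5.0, 1.0, 4.0, 2.0],   # only weakly related
}
print(build_network(profiles))  # → [('geneA', 'geneB', 0.999)]
```

Hub genes would then be the vertices of highest degree in the resulting edge list.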
We propose a first-order augmented Lagrangian algorithm (FALC) to solve the composite norm minimization problem min |sigma(F(X)-G)|_alpha + |C(X)- d|_beta subject to A(X)-b in Q; where sigma(X) denotes the vector of singular values of X, the matrix norm |sigma(X)|_alpha denotes either the Frobenius, the nuclear, or the L2-operator norm of X, the vector norm |.|_beta denotes either the L1-norm, L2-norm or the L infty-norm; Q is a closed convex set and A(.), C(.), F(.) are linear operators from matrices to vector spaces of appropriate dimensions. Basis pursuit, matrix completion, robust principal component pursuit (PCP), and stable PCP problems are all special cases of the composite norm minimization problem. Thus, FALC is able to solve all these problems in a unified manner. We show that any limit point of the FALC iterate sequence is an optimal solution of the composite norm minimization problem. We also show that for all epsilon > 0, the FALC iterates are epsilon-feasible and epsilon-optimal after O(log(1/epsilon)) iterations, which require O(1/epsilon) constrained shrinkage operations and Euclidean projections onto the set Q. Surprisingly, on the problem sets we tested, FALC required only O(log(1/epsilon)) constrained shrinkage operations, instead of the O(1/epsilon) worst case bound, to compute an epsilon-feasible and epsilon-optimal solution. To the best of our knowledge, FALC is the first algorithm with a known complexity bound that solves the stable PCP problem.
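For intuition, the elementwise shrinkage (soft-thresholding) operator, the proximal map of the L1-norm that underlies the shrinkage operations counted above, can be sketched as follows; this is the unconstrained scalar version only, not FALC itself.

```python
def soft_threshold(v, tau):
    """Elementwise soft-thresholding: the prox of tau * ||.||_1.
    Shrinks each entry toward zero by tau, clipping small entries to 0."""
    out = []
    for x in v:
        if x > tau:
            out.append(x - tau)
        elif x < -tau:
            out.append(x + tau)
        else:
            out.append(0.0)
    return out

print(soft_threshold([3.0, -0.5, 1.2, 0.0], 1.0))
```

Applying the same scalar rule to the singular values of a matrix gives the singular value thresholding step used for the nuclear norm.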
We discuss the recent developments in the theory of diffusive shock acceleration (DSA) by using both first-principle kinetic plasma simulations and analytical theory based on the solution of the convection/diffusion equation. In particular, we show how simulations reveal that the spectra of accelerated particles are significantly steeper than the $E^{-2}$ predicted by the standard theory of DSA for strong shocks, in agreement with several observational pieces of evidence. We single out which standard assumptions of test-particle and non-linear DSA are violated in the presence of strong (self-generated) magnetic turbulence and put forward a novel theory in better agreement with the particle spectra inferred with multi-wavelength observations of young SN remnants, radio-supernovae, and Galactic cosmic rays.
A relation between sigma-additivity and linearizability, conjectured by Jacob Feldman in 1971 for continuous products of probability spaces, is established by relating both notions to a recent idea of noise stability/sensitivity.
Wave energy is a fast-developing and promising renewable energy resource. The primary goal of this research is to maximise the total harnessed power of a large wave farm consisting of fully-submerged three-tether wave energy converters (WECs). Energy maximisation for large farms is a challenging search problem due to the costly calculations of the hydrodynamic interactions between WECs in a large wave farm and the high dimensionality of the search space. To address this problem, we propose a new hybrid multi-strategy evolutionary framework combining smart initialisation, binary population-based evolutionary algorithm, discrete local search and continuous global optimisation. For assessing the performance of the proposed hybrid method, we compare it with a wide variety of state-of-the-art optimisation approaches, including six continuous evolutionary algorithms, four discrete search techniques and three hybrid optimisation methods. The results show that the proposed method performs considerably better in terms of convergence speed and farm output.
The existence of normalizable zero modes of the twisted Dirac operator is proven for a class of static Einstein-Yang-Mills background fields with a half-integer Chern-Simons number. The proof holds for any gauge group and applies to Dirac spinors in an arbitrary representation of the gauge group. The class of background fields contains all regular, asymptotically flat, CP-symmetric configurations with a connection that is globally described by a time-independent spatial one-form which vanishes sufficiently fast at infinity. A subset is provided by all neutral, spherically symmetric configurations which satisfy a certain genericity condition, and for which the gauge potential is purely magnetic with real magnetic amplitudes.
We completely classify the orientable infinite-type surfaces $S$ such that $\operatorname{PMap}(S)$, the pure mapping class group, has automatic continuity. This classification includes surfaces with noncompact boundary. In the case of surfaces with finitely many ends and no noncompact boundary components, we prove the mapping class group $\operatorname{Map}(S)$ does not have automatic continuity. We also completely classify the surfaces such that $\overline{\operatorname{PMap}_c(S)}$, the subgroup of the pure mapping class group composed of elements with representatives that can be approximated by compactly supported homeomorphisms, has automatic continuity. In some cases when $\overline{\operatorname{PMap}_c(S)}$ has automatic continuity, we show any homomorphism from $\overline{\operatorname{PMap}_c(S)}$ to a countable group is trivial.
In this note, we give an alternative proof of the generating function of $p$-Bernoulli numbers. Our argument is based on Euler's integral representation.
In this article, we look into geodesics in the Schwarzschild-Anti-de Sitter metric in (3+1) spacetime dimensions. We investigate the class of marginally bound geodesics (timelike and null), while comparing their behavior with the normal Schwarzschild metric. Using $\textit{Mathematica}$, we calculate the shear and rotation tensors, along with other components of the Raychaudhuri equation in this metric and we argue that marginally bound timelike geodesics, in the equatorial plane, always have a turning point, while their null analogues have at least one family of geodesics that are unbound. We also present associated plots for the geodesics and geodesic congruences, in the equatorial plane.
This paper is an invited commentary on Tamas Budavari's presentation, "On statistical cross-identification in astronomy," for the Statistical Challenges in Modern Astronomy V conference held at Pennsylvania State University in June 2011. I begin with a brief review of previous work on probabilistic (Bayesian) assessment of directional and spatio-temporal coincidences in astronomy (e.g., cross-matching or cross-identification of objects across multiple catalogs). Then I discuss an open issue in the recent innovative work of Budavari and his colleagues on large-scale probabilistic cross-identification: how to assign prior probabilities that play an important role in the analysis. With a simple toy problem, I show how Bayesian multilevel modeling (hierarchical Bayes) provides a principled framework that justifies and generalizes pragmatic rules of thumb that have been successfully used by Budavari's team to assign priors.
Multiconfigurational Hartree-Fock theory is presented and implemented in an investigation of the fragmentation of a Bose-Einstein condensate made of identical bosonic atoms in a double well potential at zero temperature. The approach builds in the effects of the condensate mean field and of atomic correlations by describing generalized many-body states that are composed of multiple configurations which incorporate atomic interactions. Nonlinear and linear optimization is utilized in conjunction with the variational and Hylleraas-Undheim theorems to find the optimal ground and excited states of the interacting system. The resulting energy spectrum and associated eigenstates are presented as a function of double well barrier height. Delocalized and localized single configurational states are found in the extreme limits of the simple and fragmented condensate ground states, while multiconfigurational states and macroscopic quantum superposition states are revealed throughout the full extent of barrier heights. Comparison is made to existing theories that either neglect mean field or correlation effects and it is found that contributions from both interactions are essential in order to obtain a robust microscopic understanding of the condensate's atomic structure throughout the fragmentation process.
We address the classification of ancient solutions to the Gauss curvature flow under the assumption that the solutions are contained in a cylinder of bounded cross section. For each cylinder of convex bounded cross-section, we show that there are only two ancient solutions which are asymptotic to this cylinder: the non-compact translating soliton and the compact oval solution obtained by gluing two translating solitons approaching each other from time $-\infty$ from two opposite ends.
We present a next-to-leading order (NLO) treatment of the production of a new charged heavy vector boson, generically called W', at hadron colliders via the Drell-Yan process. We fully consider the interference effects with the Standard Model W boson and allow for arbitrary chiral couplings to quarks and leptons. We present results at both leading order (LO) and NLO in QCD using the MC@NLO/Herwig++ and POWHEG methods. We derive theoretical observation curves on the mass-width plane for both the LO and NLO cases at different collider luminosities. The event generator used, Wpnlo, is fully customisable and publicly available.
The very high rates of second generation star formation detected and inferred in high redshift objects should be accompanied by intense millimetre-wave emission from hot core molecules. We calculate the molecular abundances likely to arise in hot cores associated with massive star formation at high redshift, using several independent models of metallicity in the early Universe. If the number of hot cores exceeds that in the Milky Way Galaxy by a factor of at least one thousand, then a wide range of molecules in high redshift hot cores should have detectable emission. It should be possible to distinguish between independent models for the production of metals and hence hot core molecules should be useful probes of star formation at high redshift.
In this work, we present a systematic derivation of the distribution of eigenfrequencies for oscillations of the ground state of a repulsive Bose-Einstein condensate in the semi-classical (Thomas-Fermi) limit. Our calculations are performed in 1-, 2- and 3-dimensional settings. Connections with the earlier work of Stringari, with numerical computations, and with theoretical expectations for invariant frequencies based on symmetry principles are also given.
We introduce a signed variant of (valued) quivers and a mutation rule that generalizes the classical Fomin-Zelevinsky mutation of quivers. To any signed valued quiver we associate a matrix that is a signed analogue of the Cartan counterpart appearing in the theory of cluster algebras. From this matrix, we construct a Lie algebra via a "Serre-like" presentation. In the mutation Dynkin case, we define root systems using the signed Cartan counterpart and show compatibility with mutation of roots as defined by Parsons. Using results from Barot-Rivera and P\'erez-Rivera, we show that mutation equivalent signed quivers yield isomorphic Lie algebras, giving presentations of simple complex Lie algebras.
In an algorithmic complexity attack, a malicious party takes advantage of the worst-case behavior of an algorithm to cause denial-of-service. A prominent algorithmic complexity attack is regular expression denial-of-service (ReDoS), in which the attacker exploits a vulnerable regular expression by providing a carefully-crafted input string that triggers worst-case behavior of the matching algorithm. This paper proposes a technique for automatically finding ReDoS vulnerabilities in programs. Specifically, our approach automatically identifies vulnerable regular expressions in the program and determines whether an "evil" input string can be matched against a vulnerable regular expression. We have implemented our proposed approach in a tool called REXPLOITER and found 41 exploitable security vulnerabilities in Java web applications.
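To illustrate the vulnerability class (a generic textbook example, not output of REXPLOITER): a nested quantifier makes a backtracking matcher explore exponentially many partitions of the input on a near-miss string.

```python
import re

# Classic vulnerable pattern: the nested quantifiers force the
# backtracking engine to try exponentially many ways of splitting
# a run of 'a's when the overall match fails.
vulnerable = re.compile(r"^(a+)+$")

# Equivalent language with no nested quantifiers: rejects in linear time.
safe = re.compile(r"^a+$")

assert vulnerable.match("aaaa") and safe.match("aaaa")
assert not vulnerable.match("aab") and not safe.match("aab")
# A crafted "evil" input such as "a" * 40 + "b" would stall the
# vulnerable pattern for hours, while `safe` rejects it instantly.
print("patterns agree on short inputs")
```

The two steps in the paper's approach correspond to spotting patterns like `vulnerable` and then synthesizing the evil input that triggers the blow-up.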
In this work we focus on two classes of games: XOR nonlocal games and XOR* sequential games with monopartite resources. XOR games have been widely studied in the literature of nonlocal games, and we introduce XOR* games as their natural counterpart within the class of games where a resource system is subjected to a sequence of controlled operations and a final measurement. Examples of XOR* games are $2\rightarrow 1$ quantum random access codes (QRAC) and the CHSH* game introduced by Henaut et al. in [PRA 98,060302(2018)]. We prove, using the diagrammatic language of process theories, that under certain assumptions these two classes of games can be related via an explicit theorem that connects their optimal strategies, and so their classical (Bell) and quantum (Tsirelson) bounds. We also show that two of such assumptions -- the reversibility of transformations and the bi-dimensionality of the resource system in the XOR* games -- are strictly necessary for the theorem to hold by providing explicit counterexamples. We conclude with several examples of pairs of XOR/XOR* games and by discussing in detail the possible resources that power the quantum computational advantages in XOR* games.
Discovered in 1909, the Evershed effect represents strong mass outflows in the sunspot penumbra, where the magnetic field of sunspots is filamentary and almost horizontal. These flows play an important role in sunspots and have been studied in detail using large ground-based and space telescopes, but a basic understanding of their mechanism is still missing. We present results of realistic numerical simulations of the Sun's subsurface dynamics, and argue that the key mechanism of this effect is non-linear magnetoconvection that has the properties of traveling waves in the presence of a strong, highly inclined magnetic field. The simulations reproduce many observed features of the Evershed effect, including the high-speed "Evershed clouds", the filamentary structure of the flows, and the non-stationary quasi-periodic behavior. The results provide a synergy of previous theoretical models and lead to an interesting prediction of a large-scale organization of the outflows.
While Artificial Intelligence (AI) technologies are being progressively developed, artists and researchers are investigating their role in artistic practices. In this work, we present an AI-based Brain-Computer Interface (BCI) in which humans and machines interact to express feelings artistically. This system and its production of images give opportunities to reflect on the complexities and range of human emotions and their expressions. In this discussion, we seek to understand the dynamics of this interaction to reach better co-existence in fairness, inclusion, and aesthetics.
We study modifications of the Hawking emission in the evaporation of miniature black holes possibly produced in accelerators when their mass approaches the fundamental scale of gravity, set to 1 TeV according to some extra dimension models. Back-reaction and quantum gravity corrections are modelled by employing modified relations between the black hole mass and temperature. We release the assumption that black holes explode at 1 TeV or leave a remnant, and let them evaporate to much smaller masses. We have implemented such modified decay processes into an existing micro-black hole event generator, performing a study of the decay products in order to search for phenomenological evidence of quantum gravity effects.
The notion of non-degenerate solutions for the dispersionless Toda hierarchy is generalized to the universal Whitham hierarchy of genus zero with $M+1$ marked points. These solutions are characterized by a Riemann-Hilbert problem (generalized string equations) with respect to two-dimensional canonical transformations, and may be thought of as a kind of general solutions of the hierarchy. The Riemann-Hilbert problem contains $M$ arbitrary functions $H_a(z_0,z_a)$, $a = 1,...,M$, which play the role of generating functions of two-dimensional canonical transformations. The solution of the Riemann-Hilbert problem is described by period maps on the space of $(M+1)$-tuples $(z_\alpha(p) : \alpha = 0,1,...,M)$ of conformal maps from $M$ disks of the Riemann sphere and their complements to the Riemann sphere. The period maps are defined by an infinite number of contour integrals that generalize the notion of harmonic moments. The $F$-function (free energy) of these solutions is also shown to have a contour integral representation.
Understanding stochastic gradient descent (SGD) and its variants is essential for machine learning. However, most of the preceding analyses are conducted under amenable conditions such as unbiased gradient estimators and bounded objective functions, which do not encompass many sophisticated applications, such as variational Monte Carlo, entropy-regularized reinforcement learning and variational inference. In this paper, we consider the SGD algorithm that employs the Markov chain Monte Carlo (MCMC) estimator to compute the gradient, called MCMC-SGD. Since MCMC reduces the sampling complexity significantly, it is an asymptotically convergent biased estimator in practice. Moreover, by incorporating a general class of unbounded functions, it becomes much more difficult to analyze the MCMC sampling error. Therefore, we assume that the function is sub-exponential and use the Bernstein inequality for non-stationary Markov chains to derive error bounds of the MCMC estimator. Consequently, MCMC-SGD is proven to have a first order convergence rate $O(\log K/\sqrt{nK})$ with $K$ iterations and a sample size $n$. This partially explains how MCMC influences the behavior of SGD. Furthermore, we verify the correlated negative curvature condition under reasonable assumptions. It is shown that MCMC-SGD escapes from saddle points and reaches $(\epsilon,\epsilon^{1/4})$ approximate second order stationary points or $\epsilon^{1/2}$-variance points in at most $O(\epsilon^{-11/2}\log^{2}(1/\epsilon))$ steps with high probability. Our analysis unveils the convergence pattern of MCMC-SGD across a broad class of stochastic optimization problems, and interprets the convergence phenomena observed in practical applications.
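The following self-contained sketch illustrates the idea of MCMC-SGD on a toy objective $L(\theta)=\mathbb{E}_{x\sim\mathcal N(0,1)}[(x-\theta)^2]$: the gradient is estimated from a finite Metropolis chain, so it is biased for any finite chain length but asymptotically consistent. All step sizes and chain lengths below are illustrative choices, not values from the paper's analysis.

```python
import math
import random

def mcmc_samples(n, burn=50, step=1.0, seed=0):
    """Metropolis chain targeting a standard normal; a finite chain
    yields a biased (but asymptotically consistent) sample average."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for i in range(burn + n):
        prop = x + rng.uniform(-step, step)
        u = rng.random() or 1e-300  # guard against log(0)
        # accept with probability min(1, exp(-prop^2/2) / exp(-x^2/2))
        if math.log(u) < 0.5 * (x * x - prop * prop):
            x = prop
        if i >= burn:
            out.append(x)
    return out

def mcmc_sgd(theta=5.0, iters=200, n=100, lr=0.1):
    """SGD on L(theta) = E[(x - theta)^2], with the gradient
    -2 E[x - theta] estimated from MCMC samples at each step."""
    for k in range(iters):
        xs = mcmc_samples(n, seed=k)
        grad = sum(-2.0 * (x - theta) for x in xs) / n
        theta -= lr * grad
    return theta

print(mcmc_sgd())  # converges near the target mean 0
```

The chain's autocorrelation and burn-in bias are exactly the error sources that the sub-exponential assumption and the Bernstein inequality for non-stationary Markov chains are used to control in the analysis.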
A numerical program is presented which facilitates a computation pertaining to the full set of one-gluon loop diagrams (including ghost loop contributions), with M attached external gluon lines in all possible ways. The feasibility of such a task rests on a suitably defined master formula, which is expressed in terms of a set of Grassmann and a set of Feynman parameters. The program carries out the Grassmann integration and performs the Lorentz trace on the involved functions, expressing the result as a compact sum of parametric integrals. The computation is based on tracing the structure of the final result, thus avoiding all intermediate unnecessary calculations and directly writing the output. Similar terms entering the final result are grouped together. The running time of the program demonstrates its effectiveness, especially for large M.
We demonstrate that the form and location (not the size or spacing) of the energetically preferred geometrical structure of the crystalline quark-hadron mixed phase in a neutron star is very sensitive to finite size terms beyond the surface term. We consider two independent approaches of including further finite size terms, namely the multiple reflection expansion of the bag model and an effective surface tension description. Thus care should be taken in any model requiring detailed knowledge of these crystalline structures.
We present in this letter a high resolution (22'' FWHM) extended map at 2.1mm of the Sunyaev-Zel'dovich effect toward the most luminous X-ray cluster, RXJ1347-1145. These observations have been performed with the DIABOLO photometer working at the focus of the 30m IRAM radiotelescope. We have derived a projected gas mass of $(1.1 \pm 0.1)\times 10^{14} h_{50}^{-5/2}$M$_{\odot}$ within an angular radius of $\theta=74''$ (i.e., a projected radius of 0.6Mpc, $H_{0}=50$km/s/Mpc, $\Omega_m=0.3$, $\Omega_\Lambda=0.7$). This result matches very well the gas mass expected from cluster models of the X-ray data. With an unprecedented sensitivity level, our measurement does not show significant departure from a spherical distribution. The data analysis also allows us to characterize the 2.1mm flux of a well known radio source lying in the center of the cluster: $F_{RS}(2.1\textrm{mm})=5.7\pm 1.6$mJy.
Recent interest in very thin single phase Ga1-xMnxN dilute magnetic layers has increased the need for precise, non-destructive, and relatively fast characterization methods, with key issues being the macroscopic lateral Mn distribution and the absolute value of the Mn concentration x. We report on resonantly enhanced UV Raman scattering studies of high quality Ga1-xMnxN layers grown on GaN-templated sapphire by molecular beam epitaxy with 4 < x < 9%. The main advantage of the UV excitation is the restriction of the light penetration depth to nearly a hundred nanometers, eliminating the signal from the GaN buffer. Under these conditions we determine the dependence of the 1LO phonon frequency on x, which allows for a fine mapping of its lateral distribution over the entire surface of the samples. Our Raman scanning clearly confirms a substantial lateral distribution of Mn atoms across the layer, which is radial with respect to its center. From the distributions established in two deliberately chosen layers, the magnitude of the optimal growth temperature for the most efficient incorporation of Mn atoms in epitaxial GaN has been confirmed. It is shown that the combination of the 1LO line width and its energy provides an assessment of the crystalline quality of the investigated layers.
It is shown that in some special cases the Cherenkov radiation from a charged particle moving along the axis of cylindrical waveguide filled with a semi-infinite material consisting of dielectric plates alternated with vacuum gaps is many times stronger than that in the waveguide filled with semi-infinite solid dielectric without vacuum gaps.
In this paper we report the use of a superconducting transmon qubit in a 3D cavity for quantum machine learning and photon counting applications. We first describe the realization and characterization of a transmon qubit coupled to a 3D resonator, providing a detailed description of the simulation framework and of the experimental measurement of important parameters, such as the dispersive shift and the qubit anharmonicity. We then report on a quantum machine learning application implemented on the single-qubit device to fit the u-quark parton distribution function of the proton. In the final section of the manuscript we present a new microwave photon detection scheme based on two qubits coupled to the same 3D resonator. This scheme could in principle decrease the dark count rate, favouring applications such as axion dark matter searches.
If one is not ready to pay a large fine-tuning price within supersymmetric models given the current measurement of the Higgs boson mass, one can envisage a scenario where the supersymmetric spectrum is made of heavy scalar sparticles and much lighter fermionic superpartners. We offer a cosmological explanation of why nature might have chosen such a mass pattern: the opposite mass pattern is not observed experimentally because it is not compatible with the plausible idea that the universe went through a period of primordial inflation.
Visual Geo-localization (VG) has emerged as a significant research area, aiming to identify geolocation based on visual features. Most VG approaches use learnable feature extractors for representation learning. Recently, Self-Supervised Learning (SSL) methods have also demonstrated comparable performance to supervised methods by using numerous unlabeled images for representation learning. In this work, we present a novel unified VG-SSL framework with the goal to enhance performance and training efficiency on a large VG dataset by SSL methods. Our work incorporates multiple SSL methods tailored for VG: SimCLR, MoCov2, BYOL, SimSiam, Barlow Twins, and VICReg. We systematically analyze the performance of different training strategies and study the optimal parameter settings for the adaptation of SSL methods for the VG task. The results demonstrate that our method, without the significant computation and memory usage associated with Hard Negative Mining (HNM), can match or even surpass the VG performance of the baseline that employs HNM. The code is available at https://github.com/arplaboratory/VG_SSL.
Deep optical and near-infrared imaging of the entire Galactic plane is essential for understanding our Galaxy's stars, gas, and dust. The second data release of the DECam Plane Survey (DECaPS2) extends the five-band optical and near-infrared survey of the southern Galactic plane to cover $6.5\%$ of the sky, |b| < 10{\deg} and 6{\deg} > l > -124{\deg}, complementary to coverage by Pan-STARRS1. Typical single-exposure effective depths, including crowding effects and other complications, are 23.5, 22.6, 22.1, 21.6, and 20.8 mag in $g$, $r$, $i$, $z$, and $Y$ bands, respectively, with around 1 arcsecond seeing. The survey comprises 3.32 billion objects built from 34 billion detections in 21.4 thousand exposures, totaling 260 hours open shutter time on the Dark Energy Camera (DECam) at Cerro Tololo. The data reduction pipeline features several improvements, including the addition of synthetic source injection tests to validate photometric solutions across the entire survey footprint. A convenient functional form for the detection bias in the faint limit was derived and leveraged to characterize the photometric pipeline performance. A new post-processing technique was applied to every detection to de-bias and improve uncertainty estimates of the flux in the presence of structured backgrounds, specifically targeting nebulosity. The images and source catalogs are publicly available at http://decaps.skymaps.info/.
We show that the neutrino chirality flip, which can take place in the core of a neutron star at birth, is an efficient process to allow neutrinos to anisotropically escape, thus providing a mechanism to induce neutron star kick velocities. The process is not subject to the {\it no-go theorem} since, although the flip from left- to right-handed neutrinos happens at equilibrium, the reverse process does not take place given that right-handed neutrinos do not interact with matter, and therefore detailed balance is lost. For simplicity, we model the neutron star core as being made of strange quark matter. We find that the process is efficient when the neutrino magnetic moment is not smaller than $4.7 \times 10^{-15}\mu_B$, where $\mu_B$ is the Bohr magneton. When this lower bound is combined with the most stringent upper bound, which uses the luminosity data obtained from the analysis of SN 1987A, our results set a range for the neutrino magnetic moment given by $4.7 \times 10^{-15} \leq \mu_\nu/\mu_B \leq (0.1 - 0.4)\times 10^{-11}$. The obtained kick velocities for natal conditions are consistent with the observed ones and span the correct range of radii for typical magnetic field intensities.
We prove that an element $g$ of prime order $>3$ belongs to the solvable radical $R(G)$ of a finite (or, more generally, a linear) group if and only if for every $x\in G$ the subgroup generated by $g, xgx^{-1}$ is solvable. This theorem implies that a finite (or a linear) group $G$ is solvable if and only if in each conjugacy class of $G$ every two elements generate a solvable subgroup.
Very-high-energy gamma rays produce electron-positron pairs in interactions with low-energy photons of the extragalactic background light during propagation through the intergalactic medium. The electron-positron pairs generate secondary gamma rays detectable by gamma-ray telescopes. This secondary emission can be used to detect intergalactic magnetic fields (IGMF) in the voids of the large-scale structure. A new gamma-ray observatory, the Cherenkov Telescope Array (CTA), will provide increased sensitivity for detection of this secondary gamma-ray emission and enable the measurement of its properties for sources at cosmological distances. The interpretation of the CTA data, including detection of the IGMF and study of its properties and origins, will require precision modeling of the primary and secondary gamma-ray fluxes. We assess the precision of the modeling of the secondary gamma-ray emission using model calculations with the publicly available Monte Carlo codes CRPropa and ELMAG, and compare their predictions with theoretical expectations and with model calculations of the newly developed CRbeam code. We find that the model predictions of the different codes differ by up to 50% for low-redshift sources, with discrepancies increasing to the order-of-magnitude level with increasing source redshift. We identify the origin of these discrepancies and demonstrate that, after eliminating the inaccuracies found, the discrepancies between the three codes are reduced to 10% when modeling nearby sources with z~0.1. We argue that the new CRbeam code provides reliable predictions for the spectral, timing and imaging properties of the secondary gamma-ray signal for both nearby and distant sources with z~1. Thus, it can be used to study gamma-ray sources and the IGMF with a level of precision appropriate for the prospective CTA study of the effects of gamma-ray propagation through the intergalactic medium.
Let $\Omega$ be an open set in Euclidean space with finite Lebesgue measure $|\Omega|$. We obtain some properties of the set function $F:\Omega\mapsto \R^+$ defined by $$ F(\Omega)=\frac{T(\Omega)\lambda_1(\Omega)}{|\Omega|} ,$$ where $T(\Omega)$ and $\lambda_1(\Omega)$ are the torsional rigidity and the first eigenvalue of the Dirichlet Laplacian respectively. We improve the classical P\'olya bound $F(\Omega)\le 1,$ and show that $$F(\Omega)\le 1- \nu_m T(\Omega)|\Omega|^{-1-\frac2m},$$ where $\nu_m$ depends only on $m$. For any $m=2,3,\dots$ and $\epsilon\in (0,1)$ we construct an open set $\Omega_{\epsilon}\subset \R^m$ such that $F(\Omega_{\epsilon})\ge 1-\epsilon$.
For the cubic Schr\"odinger system with trapping potentials in $\mathbb{R}^N$, $N\leq3$, or in bounded domains, we investigate the existence and the orbital stability of standing waves having components with prescribed $L^2$-mass. We provide a variational characterization of such solutions, which gives information on the stability through a condition of Grillakis-Shatah-Strauss type. As an application, we show existence of conditionally orbitally stable solitary waves when: a) the masses are small, for almost all scattering lengths; and b) in the defocusing, weakly interacting case, for any masses.
Oscillatory complex networks in the metastable regime have been used to study the emergence of integrated and segregated activity in the brain, which are hypothesised to be fundamental for cognition. Yet, the parameters and the underlying mechanisms necessary to achieve the metastable regime are hard to identify, often relying on maximising the correlation with empirical functional connectivity dynamics. Here we show that the brain's hierarchically modular mesoscale structure alone can give rise to robust metastable dynamics and (metastable) chimera states in the presence of phase frustration. We construct unweighted $3$-layer hierarchical networks of identical Kuramoto-Sakaguchi oscillators, parameterized by the average degree of the network and a structural parameter determining the ratio of connections between and within blocks in the upper two layers. Together, these parameters affect the characteristic timescales of the system. Away from the critical synchronization point, we detect the emergence of metastable states in the lowest hierarchical layer coexisting with chimera and metastable states in the upper layers. Using the Laplacian renormalization group flow approach, we uncover two distinct pathways towards achieving the metastable regimes detected in these distinct layers. In the upper layers, we show how the symmetry-breaking states depend on the slow eigenmodes of the system. In the lowest layer, instead, metastable dynamics can be achieved as the separation of timescales between layers reaches a critical threshold. Our results show an explicit relationship between metastability, chimera states, and the eigenmodes of the system, bridging the gap between harmonic-based studies of empirical data and oscillatory models.
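The Kuramoto-Sakaguchi dynamics at the heart of this construction can be sketched in a few lines (an illustrative toy integration, not the authors' code; the all-to-all network, coupling strength, and step size used here are placeholder assumptions rather than the paper's hierarchical setup):

```python
import numpy as np

def kuramoto_sakaguchi_step(theta, A, K, alpha, dt):
    """One Euler step for identical Kuramoto-Sakaguchi oscillators:
    dtheta_i/dt = (K/N) * sum_j A_ij * sin(theta_j - theta_i - alpha),
    with identical natural frequencies (set to zero) and phase lag alpha."""
    diff = theta[None, :] - theta[:, None]            # diff[i, j] = theta_j - theta_i
    coupling = (A * np.sin(diff - alpha)).sum(axis=1)
    return theta + dt * K / len(theta) * coupling

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r ~ 1 indicates synchrony."""
    return abs(np.exp(1j * theta).mean())

# Toy example: all-to-all network of 50 oscillators, no frustration (alpha = 0).
rng = np.random.default_rng(0)
N = 50
A = np.ones((N, N)) - np.eye(N)
theta = rng.uniform(0, 2 * np.pi, N)
for _ in range(2000):
    theta = kuramoto_sakaguchi_step(theta, A, K=2.0, alpha=0.0, dt=0.01)
print(order_parameter(theta))  # close to 1: the population synchronizes
```

In the paper's regime, $A$ would instead encode the 3-layer hierarchical modular structure and $\alpha \neq 0$ introduces the frustration that enables chimera and metastable states.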
The Chromospheric Lyman Alpha Spectropolarimeter (CLASP) observed the Sun in H I Lyman-{\alpha} during a suborbital rocket flight on September 3, 2015. The Interface Region Imaging Spectrograph (IRIS) coordinated with the CLASP observations and recorded nearly simultaneous and co-spatial observations in the Mg II h&k lines. The Mg II h and Ly-{\alpha} lines are important transitions, energetically and diagnostically, in the chromosphere. The canonical solar atmosphere model predicts that these lines form in close proximity to each other, so we expect the line profiles to exhibit similar variability. In this analysis, we present these coordinated observations and discuss how the two profiles compare over a region of quiet Sun at viewing angles that approach the limb. In addition to the observations, we synthesize both line profiles using a 3D radiation-MHD simulation. In the observations, we find that the peak width and the peak intensities are well correlated between the lines. For the simulation, we do not find the same relationship. We have attempted to mitigate the instrumental differences between IRIS and CLASP and to reproduce the instrumental factors in the synthetic profiles. The model indicates that the formation heights of the lines differ in a somewhat regular fashion related to the magnetic geometry. This variation explains to some degree the lack of correlation, observed and synthesized, between Mg II and Ly-{\alpha}. Our analysis will aid in the definition of future observatories that aim to link dynamics in the chromosphere and transition region.
Electronic nematicity has attracted a great deal of interest in unconventional superconductivity research. We present an experimental method to investigate the reflectivity anisotropy induced by electronic nematicity and its non-equilibrium dynamics. Our experimental results on a Ba(Fe$_{0.955}$Co$_{0.045}$)$_{2}$As$_{2}$ single crystal clearly show broken four-fold symmetry along the directions parallel to the Fe-Fe bonds. Numerical simulations using WVASE from Woollam Co. demonstrate that our method is highly reliable under various experimental conditions. Finally, we perform time-resolved reflectivity anisotropy measurements confirming that photo-excitation by 1.55 eV photons suppresses the reflectivity anisotropy due to the nematic order.
Reconstructing human dynamic vision from brain activity is a challenging task of great scientific significance. The difficulty stems from two primary issues: (1) vision-processing mechanisms in the brain are highly intricate and not fully revealed, making it challenging to directly learn a mapping between fMRI and video; (2) the temporal resolution of fMRI is significantly lower than that of natural videos. To overcome these issues, this paper proposes a two-stage model named Mind-Animator, which achieves state-of-the-art performance on three public datasets. Specifically, during the fMRI-to-feature stage, we decouple semantic, structural, and motion features from fMRI through fMRI-vision-language tri-modal contrastive learning and sparse causal attention. In the feature-to-video stage, these features are merged into videos by an inflated Stable Diffusion model. We substantiate that the reconstructed video dynamics are indeed derived from fMRI, rather than hallucinations of the generative model, through permutation tests. Additionally, the visualization of voxel-wise and ROI-wise importance maps confirms the neurobiological interpretability of our model.
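The permutation-test logic used to rule out generative-model hallucination can be sketched generically (a minimal illustration under assumed inputs; the similarity matrix and metric below are placeholders, not the paper's actual evaluation pipeline):

```python
import numpy as np

def permutation_p_value(sim, rng, n_perm=2000):
    """One-sided permutation p-value for matched vs. shuffled similarity.

    sim: (n, n) similarity between reconstructed video i and ground-truth
    video j; the diagonal holds the matched pairs. Under the null that the
    reconstructions carry no stimulus-specific information, shuffling the
    pairing should do as well as the true matching.
    """
    n = sim.shape[0]
    observed = np.trace(sim) / n          # mean matched similarity
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        if sim[np.arange(n), perm].mean() >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)     # add-one correction

# Toy check: a similarity matrix with genuine diagonal (matched) structure.
rng = np.random.default_rng(1)
sim = rng.normal(0.0, 1.0, (40, 40)) + 3.0 * np.eye(40)
p = permutation_p_value(sim, rng)
print(p)  # small p-value: matched pairs beat shuffled pairings
```

A small p-value here indicates the reconstructions are tied to their specific stimuli rather than being generic outputs of the generative model.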
We consider a dynamic server control problem for two parallel queues with randomly varying connectivity and server switchover time between the queues. At each time slot the server decides either to stay with the current queue or switch to the other queue based on the current connectivity and the queue length information. The introduction of switchover time is a new modeling component of this problem, which makes the problem much more challenging. We develop a novel approach to characterize the stability region of the system by using state action frequencies, which are stationary solutions to a Markov Decision Process (MDP) formulation of the corresponding saturated system. We characterize the stability region explicitly in terms of the connectivity parameters and develop a frame-based dynamic control (FBDC) policy that is shown to be throughput-optimal. In fact, the FBDC policy provides a new framework for developing throughput-optimal network control policies using state action frequencies. Furthermore, we develop simple Myopic policies that achieve more than 96% of the stability region. Finally, simulation results show that the Myopic policies may achieve the full stability region and are more delay efficient than the FBDC policy in most cases.
We perform a multi-orbital analysis of the novel superconductivity in Na_{x}CoO_{2} \cdot yH_{2}O. We construct a three-orbital model which reproduces the band structure expected from the LDA calculation. The effective interaction leading to the pairing is estimated by means of perturbation theory. It is shown that spin-triplet superconductivity is stabilized in a wide parameter region, basically owing to the ferromagnetic character of the spin fluctuations. The p-wave and f-wave superconductivity are nearly degenerate: the former is realized when the Hund's rule coupling is large, and the latter when it is small. In a part of the parameter space, d-wave superconductivity is also stabilized. We point out that the orbital degeneracy plays an essential role for these results through the wave function of the quasi-particles. The near degeneracy of p-wave and f-wave superconductivity is explained by analysing the orbital character of each Fermi surface. We discuss the validity of some reduced models. While the single-band Hubbard model reproducing the Fermi surface is qualitatively inappropriate, we find an effective two-orbital model appropriate for studying the superconductivity. We investigate the vertex corrections higher than third order on the basis of the two-orbital model. It is shown that the vertex correction induces a screening effect but does not affect the qualitative results.
Magnetic nanoparticles and their magnetization dynamics play an important role in many applications. We focus on magnetization dynamics in large ensembles of single-domain nanoparticles characterized by either Brownian or N\'{e}el rotation mechanisms. Simulations of the respective behavior are obtained by solving advection-diffusion equations on the sphere, for which a unified computational framework is developed and investigated. This forms the basis for solving two parameter identification problems, which are formulated in the context of the chosen application, magnetic particle imaging. The functionality of the computational framework is illustrated by numerical results for the parameter identification problems, compared either quantitatively or qualitatively to measured data.
Competing magnetically ordered structures in polymerized orthorhombic A1C60 are studied. A mean-field theory for the equilibrium phases is developed using an Ising model and a classical Heisenberg model to describe the competition between inter- and intra-chain magnetic order in the solid. In the Ising model, the limiting commensurate one-dimensional and three-dimensional phases are separated by a commensurate three-sublattice state and by two sectors containing higher-order commensurate phases. For the Heisenberg model the quasi-1D phase is never the equilibrium state; instead the 3D commensurate phase exhibits a transition to a continuum of coplanar spiral magnetic phases.
Many physical properties of metals can be understood in terms of the free electron model, as exemplified by the Wiedemann-Franz law. According to this model, the electronic thermal conductivity ($\kappa_{el}$) can be inferred from the Boltzmann transport equation (BTE). However, the BTE does not perform well for some complex metals, such as Cu. Moreover, the BTE cannot clearly describe the origin of the thermal energy carried by electrons or how this energy is transported in metals. The charge distribution of conduction electrons in metals is known to reflect the electrostatic potential (EP) of the ion cores. Based on this premise, we develop a new methodology for evaluating $\kappa_{el}$ by combining the free electron model and non-equilibrium ab initio molecular dynamics (NEAIMD) simulations. We demonstrate that the kinetic energy of thermally excited electrons originates from the energy of the spatial electrostatic potential oscillation (EPO), which is induced by the thermal motion of ion cores. This method directly predicts the $\kappa_{el}$ of pure metals with a high degree of accuracy.
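For context, the classical Wiedemann-Franz estimate that such methods are benchmarked against follows directly from $\kappa_{el} = L_0 \sigma T$ (an illustrative textbook calculation; the copper conductivity below is an approximate handbook value, not a result of this work):

```python
import math

K_B = 1.380649e-23        # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19  # elementary charge, C

def lorenz_number():
    """Sommerfeld value L0 = (pi^2/3) * (k_B/e)^2 ~ 2.44e-8 W Ohm / K^2."""
    return (math.pi ** 2 / 3) * (K_B / E_CHARGE) ** 2

def kappa_el(sigma, temperature):
    """Electronic thermal conductivity via the Wiedemann-Franz law."""
    return lorenz_number() * sigma * temperature

# Copper at room temperature (approximate electrical conductivity).
sigma_cu = 5.96e7  # S/m
print(round(kappa_el(sigma_cu, 300.0)))  # ~ 437 W/(m K)
```

The result is close to the measured room-temperature thermal conductivity of copper (~400 W/(m K)), illustrating both the usefulness and the few-percent-level limitations of the free-electron estimate that motivate more direct methods.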
Perturbative probability conservation provides a strong constraint on the presence of new interactions of the Higgs boson. In this work we consider CP violating Higgs interactions in conjunction with unitarity constraints in the gauge-Higgs and fermion-Higgs sectors. Injecting signal strength measurements of the recently discovered Higgs boson allows us to make concrete and correlated predictions of how CP-violation in the Higgs sector can be directly constrained through collider searches for either characteristic new states or tell-tale enhancements in multi-Higgs processes.
Bell's theorem has long been considered to establish local realism as the fundamental principle that contradicts quantum mechanics. It is therefore surprising that the quantum pigeonhole effect points to the pigeonhole principle as yet another source of contradiction [Aharonov et al., Proc. Natl. Acad. Sci. USA, 113, 532-535 (2016)]. Here we construct two new forms of Bell's inequality with the pigeonhole principle, then reconstruct Aharonov et al.'s weak measurement on a bipartite system. We show that in both cases it is counterfactual reasoning under the assumption of realism, rather than the pigeonhole principle, that is violated by quantum mechanics. We further show that the quantum pigeonhole effect is in fact a new version of Bell's theorem without inequality. With the pigeonhole principle as the common conduit, a comparison between the two versions of Bell's theorem becomes straightforward, as it relies only on the orthonormality of the Bell states.
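The standard inequality-based version of Bell's theorem referenced here can be illustrated with the textbook CHSH computation (a generic example, not the new inequalities constructed in this work):

```python
import math

def E(a, b):
    """Singlet-state correlation for spin measurements along angles a, b."""
    return -math.cos(a - b)

def chsh(a, a2, b, b2):
    """CHSH combination; local realism bounds |S| <= 2."""
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Optimal measurement angles: quantum mechanics reaches |S| = 2*sqrt(2) > 2.
S = chsh(0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4)
print(abs(S), 2 * math.sqrt(2))  # both ~ 2.8284
```

Any local-realist (counterfactually definite) assignment of outcomes keeps |S| at or below 2, so the value 2*sqrt(2) quantifies the conflict that the pigeonhole-based constructions recast without an inequality.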
The {\alpha}-element abundances of the globular cluster (GC) and field star populations of galaxies encode information about the formation of each of these components. We use the E-MOSAICS cosmological simulations of ~L* galaxies and their GCs to investigate the [{\alpha}/Fe]-[Fe/H] distribution of field stars and GCs in 25 Milky Way-mass galaxies. The [{\alpha}/Fe]-[Fe/H] distribution of GCs largely follows that of the field stars, and GCs can therefore also be used as tracers of the [{\alpha}/Fe]-[Fe/H] evolution of the galaxy. Due to the difference in their star formation histories, GCs associated with stellar streams (i.e. which have recently been accreted) have systematically lower [{\alpha}/Fe] at fixed [Fe/H]. Therefore, if a GC is observed to have low [{\alpha}/Fe] for its [Fe/H] there is an increased probability that this GC was accreted recently alongside a dwarf galaxy. There is a wide range of shapes for the field star [{\alpha}/Fe]-[Fe/H] distribution, with a notable subset of galaxies exhibiting bimodal distributions, in which the high-[{\alpha}/Fe] sequence is mostly comprised of stars in the bulge, a high fraction of which are from disrupted GCs. We calculate the contribution of disrupted GCs to the bulge component of the 25 simulated galaxies and find values between 0.3 and 14 per cent, where this fraction correlates with the galaxy's formation time. The upper range of these fractions is compatible with observationally-inferred measurements for the Milky Way, suggesting that in this respect the Milky Way is not typical of L* galaxies, having experienced a phase of unusually rapid growth at early times.
Experimental quantum physics and computing platforms rely on sophisticated computer control and timing systems that must be deterministic. An exemplar is the sequence used to create a Bose-Einstein condensate at the University of Illinois, which involves 46,812 analog and digital transitions over 100 seconds with 20 ns timing precision and nanosecond timing drift. We present a control and sequencing platform, using industry-standard National Instruments hardware to generate the necessary digital and analog signals, that achieves this level of performance. The system uses a master 10 MHz reference clock that is conditioned to the Global Positioning System (GPS) constellation and leverages low-phase-noise clock distribution hardware for timing stability. A Python-based user front-end provides a flexible language to describe experimental procedures and easy-to-implement version control. A library of useful peripheral hardware that can be purchased as low-cost evaluation boards provides enhanced capabilities. We provide a GitHub repository containing example Python sequences and libraries for peripheral devices as a resource for the community.
We evaluate nuclear shadowing of the total cross section of charm particle production in DIS within the framework of the Gribov theory of nuclear shadowing, using as input the recent QCD Pomeron parton density analysis of the HERA diffractive data. Assuming that the QCD factorization theorem is applicable to charm production off nuclei, we also calculate shadowing of the gluon densities in nuclei and find it sufficiently large for heavy nuclei: $G_{A\sim 200}(x,Q^2)/AG_N(x,Q^2) \sim 0.45-0.5 \cdot (A/200)^{-0.15}$ for $x \sim 10^{-3\div -4}, Q^2 \sim 10 \div 40 GeV^2$, to influence significantly the physics of heavy ion collisions at the LHC. We also discuss some properties of the final states for $\gamma^*A$ processes dominated by the scattering off small-$x$ gluons, such as high-$p_t$ jet and charm production.
Using the QCD factorization approach, we reexamine the two-body hadronic charmless $B$-meson decays to final states involving a pseudoscalar~($P$) and a vector~($V$) meson, with inclusion of the penguin contractions of spectator-scattering amplitudes induced by the $b\to D g^\ast g^\ast$~(where $D=d$ or $s$, and $g^\ast$ denotes an off-shell gluon) transitions, which are of order $\alpha_s^2$. Their impacts on the CP-averaged branching ratios and CP-violating asymmetries are examined. We find that these higher-order penguin contraction contributions have significant impacts on some specific decay modes. Since $B\to \pi K^{\ast}$, $K \rho$ decays involve the same electroweak physics as the $B\to \pi K$ puzzles, we present a detailed analysis of these decays and find that the five R-ratios for the $B\to \pi K^{\ast}$, $K \rho$ system are in agreement with experimental data except for $R(\pi K^*)$. Generally, these new contributions are found to be important for penguin-dominated $B\to PV$ decays.
The electrification revolution in the automobile industry and beyond demands an annual battery production capacity on the order of 10^2 gigawatt-hours, which presents a twofold challenge: the supply of key materials such as cobalt and nickel, and recycling when the batteries retire. Pyrometallurgical and hydrometallurgical recycling are currently used in industry but suffer from complexity, high costs, and secondary pollution. Here we report a direct-recycling method in molten salts (MSDR) that is environmentally benign and value-creating, based on a techno-economic analysis using real-world data and price information. We also experimentally demonstrate the feasibility of MSDR by upcycling a low-nickel polycrystalline LiNi0.5Mn0.3Co0.2O2 (NMC) cathode material that is widely used in early-year electric vehicles into Ni-rich (Ni > 65%) single-crystal NMCs with increased energy density (>10% increase) and outstanding electrochemical performance (>94% capacity retention after 500 cycles in pouch-type full cells). This work opens up new opportunities for closed-loop recycling of electric vehicle batteries and for manufacturing next-generation NMC cathode materials.
We present our XMM-Newton RGS observations of X Comae, an AGN behind the Coma cluster. We detect absorption by NeIX and OVIII at the redshift of Coma with equivalent widths of 3.3+/-1.8 eV and 1.7+/-1.3 eV, respectively (90% confidence errors, or 2.3 sigma and 1.9 sigma confidence detections determined from Monte Carlo simulations). The combined significance of both lines is 3.0 sigma, again determined from Monte Carlo simulations. The same observation yields a high-statistics EPIC spectrum of the Coma cluster gas at the position of X Comae. We detect emission by NeIX with a flux of 2.5+/-1.2 x 10^-8 photons cm^-2 s^-1 arcmin^-2 (90% confidence errors, or a 3.4 sigma confidence detection). These data permit a number of diagnostics to determine the properties of the material causing the absorption and producing the emission. Although a wide range of properties is permitted, values near the midpoint of the range are T = 4 x 10^6 K, n_H = 6 x 10^-6 cm^-3 (corresponding to an overdensity with respect to the mean of 32), and a line-of-sight path length through the material of 41 (Z/Zsolar)^-1 Mpc, where Z/Zsolar is the neon metallicity relative to solar. All of these properties are as predicted for the warm-hot intergalactic medium (WHIM), so we conclude that we have detected the WHIM associated with the Coma cluster.
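The quoted overdensity of 32 can be checked directly against the mean cosmic hydrogen density (an illustrative back-of-the-envelope calculation; the values Omega_b h^2 ~ 0.022 and hydrogen mass fraction 0.76 are standard cosmological assumptions, not taken from the abstract):

```python
# Verify that n_H = 6e-6 cm^-3 corresponds to an overdensity of ~32
# relative to the mean cosmic hydrogen density.
OMEGA_B_H2 = 0.022        # baryon density parameter times h^2 (assumed)
RHO_CRIT_H2 = 1.879e-29   # critical density / h^2, g cm^-3
X_H = 0.76                # hydrogen mass fraction (assumed)
M_H = 1.6726e-24          # proton mass, g

n_H_mean = X_H * OMEGA_B_H2 * RHO_CRIT_H2 / M_H  # mean H density, cm^-3
overdensity = 6e-6 / n_H_mean
print(f"{n_H_mean:.2e}", round(overdensity, 1))  # ~1.9e-07 cm^-3, ~32
```

The h^2 factors cancel between Omega_b h^2 and rho_crit/h^2, so the estimate is independent of the Hubble constant and reproduces the quoted overdensity.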
We propose a realistic scheme to detect topological edge states in an optical lattice subjected to a synthetic magnetic field, based on a generalization of Bragg spectroscopy sensitive to angular momentum. We demonstrate that, using a well-designed laser probe, the Bragg spectra provide an unambiguous signature of the topological edge states that establishes their chiral nature. This signature is present for a variety of boundaries, from a hard wall to a smooth harmonic potential added on top of the optical lattice. Experimentally, the Bragg signal should be very weak. To make it detectable, we introduce a "shelving method", based on Raman transitions, which transfers angular momentum and changes the internal atomic state simultaneously. This scheme allows us to detect the weak signal from the selected edge states on a dark background, and drastically improves the detectability. It also opens the possibility to directly visualize the topological edge states using in situ imaging, offering a unique and instructive view on topological insulating phases.