A family of quantum measures like the Shannon distinguishability is presented. These measures are defined over the two classes of POVM measurements and related to separate parts in the expression for mutual information. Changes of Ky Fan's norms and the partitioned trace distances under the operation of partial trace are discussed. Upper and lower bounds on the introduced quantities are obtained in terms of partitioned trace distances and Uhlmann's partial fidelities. These inequalities provide a kind of generalization of the well-known bounds on the Shannon distinguishability. The notion of cryptographic exponential indistinguishability for quantum states is revisited. When exponentially fast convergence is required, all the metrics induced by unitarily invariant norms are shown to be equivalent.
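For reference, the Ky Fan $k$-norms mentioned above are the standard ones (stated here for completeness; the precise definition of the partitioned trace distances is not reproduced in the abstract):
$$ \|A\|_{(k)} \;=\; \sum_{j=1}^{k}\sigma_j(A), \qquad k=1,\dots,\operatorname{rank}(A), $$
where $\sigma_1(A)\ge\sigma_2(A)\ge\cdots$ are the singular values of $A$ in non-increasing order; taking $k=\operatorname{rank}(A)$ recovers the trace norm.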
We recently hypothesized that a distortion parameter exists such that its signed sum for all images of singular gravitational lensing of a source vanishes identically [K. S. Virbhadra, Phys. Rev. D {\bf 106}, 064038 (2022)]. We found a distortion parameter (the ratio of the tangential to radial magnifications) that satisfied the hypothesis for the images of Schwarzschild lensing with flying colors. We now show that another distortion parameter (the difference of tangential and radial magnifications) also magnificently supports our hypothesis when we perform computations with the primary-secondary and relativistic images. The distortion parameters, which satisfy the aesthetically appealing hypothesis, will likely aid in developing gravitational lensing theory. Finally, we discuss the conservation of distortion of images in gravitational lensing.
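In symbols, writing $\mu_t^{(i)}$ and $\mu_r^{(i)}$ for the tangential and radial magnifications of the $i$-th image, the two distortion parameters discussed above and the vanishing-signed-sum hypothesis can be summarized as (the symbols $\Delta$ are chosen here only for illustration):
$$ \Delta^{(i)}_{\mathrm{ratio}} = \frac{\mu_t^{(i)}}{\mu_r^{(i)}}, \qquad \Delta^{(i)}_{\mathrm{diff}} = \mu_t^{(i)} - \mu_r^{(i)}, \qquad \sum_{i\,(\text{all images})} \Delta^{(i)} = 0 . $$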
A powerful tool for studying geometrical problems in Hilbert space is developed. In particular, we study the quantum pure state tomography problem in finite dimensions from the point of view of dynamical systems and bifurcation theory. First, we introduce a generalization of the Hellinger metric for probability distributions which allows us to find a geometrical interpretation of the quantum state tomography problem. Thereafter, we prove that every solution to the state tomography problem is an attractive fixed point of the so-called physical imposition operator. Additionally, we demonstrate that multiple states corresponding to the same experimental data are associated with bifurcations of this operator. Such bifurcations occur only when an informationally incomplete set of observables is considered. Finally, we prove that the physical imposition operator has a non-contractive Lipschitz constant 2 with respect to the Bures metric. This value of the Lipschitz constant manifests the existence of the quantum tomography problem for pure states.
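For context, the standard Hellinger metric on discrete probability distributions, of which the paper introduces a generalization, is (stated here for completeness; the generalized metric itself is not reproduced in the abstract):
$$ d_H(p,q) \;=\; \Bigl(\tfrac{1}{2}\sum_{k}\bigl(\sqrt{p_k}-\sqrt{q_k}\,\bigr)^{2}\Bigr)^{1/2}. $$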
Glucose homeostasis is controlled by the islets of Langerhans, which are equipped with alpha-cells that increase the blood glucose level, beta-cells that decrease it, and delta-cells whose precise role still needs to be identified. Although intercellular communications between these endocrine cells have recently been observed, their roles in glucose homeostasis are not clearly understood. In this study, we construct a mathematical model for an islet consisting of two-state alpha-, beta-, and delta-cells, and analyze the effects of the known chemical interactions between them, with emphasis on their combined effects. In particular, such features as paracrine signals of neighboring cells and cell-to-cell variations in response to external glucose concentrations, as well as glucose dynamics depending on the insulin and glucagon hormones, are considered explicitly. Our model predicts three possible benefits of the cell-to-cell interactions: First, the asymmetric interaction between alpha- and beta-cells contributes to dynamic stability as the perturbed glucose level recovers to the normal level. Second, the inhibitory interactions of delta-cells on glucagon and insulin secretion prevent their wasteful co-secretion at the normal glucose level. Finally, the glucose dose-response of insulin secretion is modified to become more pronounced at high glucose levels due to the inhibition by delta-cells. It is thus concluded that the intercellular communications in islets of Langerhans should contribute to the effective control of glucose homeostasis.
In this paper, we explore an efficient uncoupled unsourced random access (UURA) scheme for 6G massive communication. UURA is a typical framework of unsourced random access that addresses the problems of codeword detection and message stitching without the use of check bits. Firstly, we establish a framework for UURA that allows for immediate decoding of sub-messages upon arrival. Thus, the processing delay is effectively reduced owing to the shorter waiting time. Next, we propose an integrated decoding algorithm for sub-messages by leveraging matrix information geometry (MIG) theory. Specifically, MIG is applied to measure the feature similarities of codewords belonging to the same user equipment, so that a sub-message can be stitched as soon as it is received. This enables the timely recovery of a portion of the original message by simultaneously detecting and stitching codewords within the current sub-slot. Furthermore, we analyze the performance of the proposed integrated decoding-based UURA scheme in terms of computational complexity and convergence rate. Finally, we present extensive simulation results to validate the effectiveness of the proposed scheme in 6G wireless networks.
In the clinic, resected tissue samples are stained with Hematoxylin-and-Eosin (H&E) and/or Immunohistochemistry (IHC) stains and presented to pathologists on glass slides or as digital scans for diagnosis and assessment of disease progression. Cell-level quantification, e.g. in IHC protein expression scoring, can be extremely inefficient and subjective. We present DeepLIIF (https://deepliif.org), the first free online platform for efficient and reproducible IHC scoring. DeepLIIF outperforms current state-of-the-art approaches (which rely on manual, error-prone annotations) by virtually restaining clinical IHC slides with more informative multiplex immunofluorescence staining. Our DeepLIIF cloud-native platform supports (1) more than 150 proprietary/non-proprietary input formats via the Bio-Formats standard, (2) interactive adjustment, visualization, and downloading of the IHC quantification results and the accompanying restained images, (3) consumption of an exposed workflow API programmatically or through interactive plugins for open-source whole slide image viewers such as QuPath/ImageJ, and (4) auto-scaling to efficiently scale GPU resources based on user demand.
In this article we show the following result: if $C$ is an $n$-dimensional convex and compact subset, $f:C\rightarrow[0,\infty)$ is concave, and $\phi:[0,\infty)\rightarrow[0,\infty)$ is a convex function with $\phi(0)=0$, we then characterize the class of sets and concave functions that attain the supremum \[ \sup_{C,f}\int_C\phi(f(x))dx, \] where the supremum ranges over all sets $C$ with $n$-dimensional volume $|C|=c$ and the additional condition that $f(x_{C,f})=k$ for some point $x_{C,f}\in C$ that we introduce in the article, for two non-negative constants $c,k>0$. As a consequence, we extend some results of Milman and Pajor in [MP] and some in [Thm. 1.2, GoMe]. Besides, we also obtain some new estimates on the volume of particular sections of a convex set $K$ passing through a new point of $K$.
The main purpose of this study is to introduce a semi-classical model describing betting scenarios in which, at variance with conventional approaches, the payoff of the gambler is encoded into the internal degrees of freedom of a quantum memory element. In our scheme, we assume that the invested capital is explicitly associated with the quantum analog of the free energy (i.e. the ergotropy functional of Allahverdyan, Balian, and Nieuwenhuizen) of a single mode of the electromagnetic radiation which, depending on the outcome of the betting, experiences attenuation or amplification processes that model losses and winning events. The resulting stochastic evolution of the quantum memory resembles the dynamics of random lasing, which we characterize within the theoretical setting of Bosonic Gaussian channels. As in the classical Kelly criterion for optimal betting, we define the asymptotic doubling rate of the model and identify the optimal gambling strategy for fixed odds and probabilities of winning. The performance of the model is then studied as a function of the input capital state under the assumption that the latter belongs to the set of Gaussian density matrices (i.e. displaced, squeezed thermal Gibbs states), revealing that the best option for the gambler is to devote all of her/his initial resources to the coherent-state amplitude.
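As background, a minimal Python sketch of the classical Kelly quantities referenced above, for fixed b-to-1 odds and win probability p (the function names and numbers are illustrative, not taken from the paper):

import numpy as np

def doubling_rate(f, p, b):
    # Expected log2 growth per bet when wagering a fraction f of capital
    # at b-to-1 odds with win probability p (classical Kelly setting).
    return p * np.log2(1.0 + f * b) + (1.0 - p) * np.log2(1.0 - f)

def kelly_fraction(p, b):
    # Fraction maximizing the doubling rate: f* = p - (1 - p) / b.
    return p - (1.0 - p) / b

p, b = 0.6, 1.0                   # illustrative win probability and odds
f_star = kelly_fraction(p, b)     # 0.2 for these numbers
print(f_star, doubling_rate(f_star, p, b))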
The partition function of a massless scalar field on a Euclidean spacetime manifold $\mathbb{R}^{d-1}\times\mathbb{T}^2$, with the momentum operator in the compact spatial dimension coupled through a purely imaginary chemical potential, is computed. It is modular covariant and admits a simple expression in terms of a real analytic SL$(2,\mathbb{Z})$ Eisenstein series with $s=(d+1)/2$. Different techniques for computing the partition function illustrate complementary aspects of the Eisenstein series: the functional approach gives its series representation, the operator approach yields its Fourier series, while the proper time/heat kernel/world-line approach shows that it is the Mellin transform of a Riemann theta function. High/low temperature duality is generalized to the case of a non-vanishing chemical potential. By clarifying the dependence of the partition function on the geometry of the torus, we discuss how modular covariance is a consequence of full SL$(2,\mathbb{Z})$ invariance. When the spacetime manifold is $\mathbb{R}^p\times\mathbb{T}^{q+1}$, the partition function is given in terms of an SL$(q+1,\mathbb{Z})$ Eisenstein series, again with $s=(d+1)/2$. In this case, we obtain the high/low temperature duality through a suitably adapted dual parametrization of the lattice defining the torus. On $\mathbb{T}^{d+1}$, the computation is more subtle: an additional divergence leads to a harmonic anomaly.
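For reference, the real analytic SL$(2,\mathbb{Z})$ Eisenstein series appearing above is, up to normalization conventions, the standard non-holomorphic series (with $\tau=\tau_1+i\tau_2$ the modulus of the torus):
$$ E(\tau,s) \;=\; \sum_{(m,n)\in\mathbb{Z}^2\setminus\{(0,0)\}} \frac{\tau_2^{\,s}}{|m+n\tau|^{2s}}, $$
which is invariant under SL$(2,\mathbb{Z})$ transformations $\tau \to (a\tau+b)/(c\tau+d)$.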
A quantum walk algorithm can detect the presence of a marked vertex on a graph quadratically faster than the corresponding random walk algorithm (Szegedy, FOCS 2004). However, quantum algorithms that actually find a marked element quadratically faster than a classical random walk were only known for the special case when the marked set consists of just a single vertex, or in the case of some specific graphs. We present a new quantum algorithm for finding a marked vertex in any graph, with any set of marked vertices, that is (up to a log factor) quadratically faster than the corresponding classical random walk.
We have calculated the rate coefficients for D(1s) + H^+ <--> D^+ + H(1s) using recently published theoretical cross sections. We present results for temperatures T from 1 K to 2x10^5 K and provide fits to our data for use in plasma modeling. Our calculations are in good agreement with previously published rate coefficients for 25 <= T <= 300 K, which covers most of the limited range for which those results were given. Our new rate coefficients for T >~ 100 K are significantly larger than the values most commonly used for modeling the chemistry of the early universe and of molecular clouds. This may have important implications for the predicted HD abundance in these environments. Using our results, we have modeled the ionization balance in high redshift QSO absorbers. We find that the new rate coefficients decrease the inferred D/H ratio by <~ 0.4%. This is a factor of >~ 25 smaller than the current >~ 10% uncertainties in QSO absorber D/H measurements.
Large language model (LLM) performance on reasoning problems typically does not generalize out of distribution. Previous work has claimed that this can be mitigated with chain of thought prompting-a method of demonstrating solution procedures-with the intuition that it is possible to in-context teach an LLM an algorithm for solving the problem. This paper presents a case study of chain of thought on problems from Blocksworld, a classical planning domain, and examines the performance of two state-of-the-art LLMs across two axes: generality of examples given in prompt, and complexity of problems queried with each prompt. While our problems are very simple, we only find meaningful performance improvements from chain of thought prompts when those prompts are exceedingly specific to their problem class, and that those improvements quickly deteriorate as the size n of the query-specified stack grows past the size of stacks shown in the examples. We also create scalable variants of three domains commonly studied in previous CoT papers and demonstrate the existence of similar failure modes. Our results hint that, contrary to previous claims in the literature, CoT's performance improvements do not stem from the model learning general algorithmic procedures via demonstrations but depend on carefully engineering highly problem specific prompts. This spotlights drawbacks of chain of thought, especially the sharp tradeoff between possible performance gains and the amount of human labor necessary to generate examples with correct reasoning traces.
These Monte Carlo studies describe the impact of higher-order effects in both QCD and EW $t\bar{t}W$ production. Both inclusive next-to-leading-order and multileg setups are studied for $t\bar{t}W$ QCD production.
Essential to understanding the cuprate pseudogap phase is a study of the charge (and spin) response functions, which we address here via a consistent approach to the Fermi arcs and the Fermi pockets scenario of Yang, Rice and Zhang (YRZ). The two schemes are demonstrated to be formally similar, and to share a common physics platform; we use this consolidation to address the inclusion of vertex corrections which have been omitted in YRZ applications. We show vertex corrections can be easily implemented in a fashion analytically consistent with sum rules and that they yield important contributions to most observables. A study of the charge ordering susceptibility of the YRZ scenario makes their simple physics evident: they represent the inclusion of charged bosonic, spin singlet degrees of freedom, and are found to lead to a double peak structure.
We provide an estimate of the absolute emission rate of photon pairs produced by spontaneous parametric down-conversion in a bulk crystal when all interacting fields are in single transverse Gaussian modes. Both collinear and non-collinear configurations are covered, and we arrive at a fully analytical expression for the collinear case. Our results agree reasonably well with values found in typical experiments, which allows this model to be used for understanding the dependence on the relevant experimental parameters.
SARS-CoV-2 is a positive single-strand RNA-based macromolecule that has caused the death of more than 6.3 million people as of June 2022. Moreover, by disturbing global supply chains through lockdowns, the virus has indirectly caused devastating damage to the global economy. It is vital to design and develop drugs for this virus and its various variants. In this paper, we developed an in-silico hybrid framework to repurpose existing therapeutic agents and find drug-like bioactive molecules that could cure Covid-19. We applied the Lipinski rules to the molecules retrieved from the ChEMBL database and found 133 drug-like bioactive molecules against the SARS coronavirus 3CL Protease. Based on the standard IC50, the dataset was divided into three classes: active, inactive, and intermediate. Our comparative analysis demonstrated that the proposed Extra Tree Regressor (ETR) based QSAR model has improved prediction results for the bioactivity of chemical compounds compared to Gradient Boosting, XGBoost, Support Vector, Decision Tree, and Random Forest based regressor models. ADMET analysis was carried out to identify thirteen bioactive molecules with ChEMBL IDs 187460, 190743, 222234, 222628, 222735, 222769, 222840, 222893, 225515, 358279, 363535, 365134 and 426898. These molecules are highly suitable drug candidates for the SARS-CoV-2 3CL Protease. In the next step, the efficacy of the bioactive molecules was computed in terms of binding affinity using molecular docking, which shortlisted six bioactive molecules with ChEMBL IDs 187460, 222769, 225515, 358279, 363535, and 365134. These molecules can be suitable drug candidates for SARS-CoV-2. It is anticipated that pharmacologists/drug manufacturers will further investigate these six molecules to find suitable drug candidates for SARS-CoV-2 and can adopt these promising compounds for their downstream drug development stages.
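As an illustration of the Extra Tree Regressor step of the QSAR pipeline described above, a minimal scikit-learn sketch on placeholder descriptors (the descriptor matrix, target values, and hyperparameters are synthetic stand-ins, not the paper's ChEMBL data):

import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(133, 20))     # placeholder molecular descriptors for 133 molecules
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.1, size=133)   # placeholder pIC50 targets

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out molecules:", r2_score(y_te, model.predict(X_te)))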
This paper studies the gap between quantum one-way communication complexity $Q(f)$ and its classical counterpart $C(f)$, under the {\em unbounded-error} setting, i.e., it is enough that the success probability is strictly greater than 1/2. It is proved that for {\em any} (total or partial) Boolean function $f$, $Q(f)=\lceil C(f)/2 \rceil$, i.e., the former is always exactly one half as large as the latter. The result has an application to obtaining (again an exact) bound for the existence of $(m,n,p)$-QRAC which is the $n$-qubit random access coding that can recover any one of $m$ original bits with success probability $\geq p$. We can prove that $(m,n,>1/2)$-QRAC exists if and only if $m\leq 2^{2n}-1$. Previously, only the construction of QRAC using one qubit, the existence of $(O(n),n,>1/2)$-RAC, and the non-existence of $(2^{2n},n,>1/2)$-QRAC were known.
Multistage robust optimization problems can be interpreted as two-person zero-sum games between two players. We exploit this game-like nature and utilize a game tree search in order to solve quantified integer programs (QIPs). In this algorithmic environment, relaxations are repeatedly called to assess the quality of a branching variable and to generate bounds. A useful relaxation, however, must be well balanced with regard to its quality and its computing time. We present two relaxations that incorporate scenarios from the uncertainty set, whereby the considered set of scenarios is continuously adapted according to the latest information gathered during the search process. Using selection, assignment, and runway scheduling problems as a testbed, we show the impact of our findings.
Cyclic Prefix Direct Sequence Spread Spectrum (CP-DSSS) is a promising solution for futuristic 6G ultra-reliable low latency communications (URLLC) and massive machine type communication (mMTC) applications. We propose that in such applications, the CP-DSSS waveform would operate as a secondary network at the same frequencies as the primary network but at much lower SNR. In this paper, we evaluate per-user capacity of CP-DSSS when simple matched filtering (MF) is performed on the uplink (UL) and time-reversal (TR) precoding is used on the downlink (DL). In this setting when operating in the low SNR regime, CP-DSSS achieves a per-user capacity that is near the optimum single-user capacity. TR precoding converges to the optimal capacity as the number of antennas at the hub/gateway increases. Using the estimated channel impulse response for MF and TR introduces little to no capacity loss. Given the near-optimal performance of MF detection and TR precoding for each of the users, CP-DSSS can be implemented with simple device transceiver structures, reducing per-unit cost for massively deployed 6G networks.
While convergence of the Alternating Direction Method of Multipliers (ADMM) on convex problems is well studied, convergence on nonconvex problems is only partially understood. In this paper, we consider the Gaussian phase retrieval problem, formulated as a linear constrained optimization problem with a biconvex objective. The particular structure allows for a novel application of the ADMM. It can be shown that the dual variable is zero at the global minimizer. This motivates the analysis of a block coordinate descent algorithm, which is equivalent to the ADMM with the dual variable fixed to be zero. We show that the block coordinate descent algorithm converges to the global minimizer at a linear rate, when starting from a deterministically achievable initialization point.
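As a generic illustration of the block-coordinate structure discussed above (an alternating-minimization sketch for the real Gaussian magnitude model with a simple spectral initialization; this is not the paper's exact formulation, ADMM variant, or initialization):

import numpy as np

rng = np.random.default_rng(1)
n, m = 50, 400
x_true = rng.normal(size=n)
A = rng.normal(size=(m, n))              # Gaussian measurement matrix
y = np.abs(A @ x_true)                   # magnitude-only measurements

# Spectral initialization: top eigenvector of (1/m) sum_i y_i^2 a_i a_i^T, rescaled
M = (A.T * y**2) @ A / m
w, V = np.linalg.eigh(M)
x = V[:, -1] * np.sqrt(np.mean(y**2))

for _ in range(100):
    s = np.sign(A @ x)                               # block 1: update the signs
    x, *_ = np.linalg.lstsq(A, s * y, rcond=None)    # block 2: least-squares update

err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)) / np.linalg.norm(x_true)
print("relative recovery error (up to global sign):", err)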
We present a preliminary report on a calculation of scattering length for I=2 $S$-wave two-pion system directly from the two-pion wave function. Results are compared with those calculated from the time dependence of two-pion four-point functions. Calculations are made with an RG-improved action for gluons and improved Wilson action for quarks at $a^{-1}=1.207(12)$ GeV on $20^3 \times 48$ and $24^3 \times 48$ lattices.
Quantum kinetic equations of motion for the description of the exciton spin dynamics in II-VI diluted magnetic semiconductor quantum wells with laser driving are derived. The model includes the magnetic as well as the nonmagnetic carrier-impurity interaction, the Coulomb interaction, Zeeman terms, and the light-matter coupling, allowing for an explicit treatment of arbitrary excitation pulses. Based on a dynamics-controlled truncation scheme, contributions to the equations of motion up to second order in the generating laser field are taken into account. The correlations between the carrier and the impurity subsystems are treated within the framework of a correlation expansion. For vanishing magnetic field, the Markov limit of the quantum kinetic equations formulated in the exciton basis agrees with existing theories based on Fermi's golden rule. For narrow quantum wells excited at the $1s$ exciton resonance, numerical quantum kinetic simulations reveal pronounced deviations from the Markovian behavior. In particular, the spin decays initially with approximately half the Markovian rate and a non-monotonic decay in the form of an overshoot of up to $10\,\%$ of the initial spin polarization is predicted.
We report scanning transmission X-ray microscopy of mixed helical and skyrmion magnetic states in thin FeGe lamellae. This imaging of the out-of-plane magnetism allows clear identification of the different magnetic states, and reveals details about the coexistence of helical and skyrmion states. In particular, our data show that finite length helices are continuously deformable down to the size of individual skyrmions and are hence topologically equivalent to skyrmions. Furthermore, we observe transition states between helical and skyrmion states across the thickness of the lamella that are evidence for frozen Bloch points in the sample after field cooling.
Polymer nanocomposites based on 2D materials as fillers are a target of the industrial sector, but the ability to manufacture them on a large scale is very limited, and tools to scale up the manufacturing process of these nanocomposites are lacking. Here, for the first time, a systematic and fundamental study is described showing how 2D materials are inserted into the polymeric matrix to obtain nanocomposites using conventional, industrially scalable polymer processing machines, leading to large-scale manufacturing. Two new strategies were used to insert pre-exfoliated 2D material into the polymer matrix: a liquid-phase feeder and solid-solid deposition. Characterization extended beyond the micro- and nanoscale, allowing the evaluation of the morphology of millimetre-sized samples. The methodologies described here are extendable to all thermoplastic polymers and 2D materials, providing nanocomposites with a morphology suitable for obtaining singular properties and also triggering the start of large-scale manufacturing.
We consider the effect of electron-phonon interactions on edge states in quantum Hall systems with a single edge branch. The presence of electron-phonon interactions modifies the single-particle propagator for general quantum Hall edges, and, in particular, destroys the Fermi liquid even at integer filling. The effect of the electron-phonon interactions may be detected experimentally in the AC conductance or in the tunneling conductance between integer quantum Hall edges.
The new 1.4 MeV/u front end HSI (HochStromInjektor) of the Unilac accelerates ions with A/q ratios of up to 65 and with beam intensities in emA of up to 0.25 A/q. The maximum beam pulse power is up to 1300 kW. During the stepwise linac commissioning from April to September 1999, the beam behind each cavity was analysed within two weeks. Mainly a very stable Ar1+ beam from the MUCIS volume plasma source was used. The measured normalized 80% emittance areas of around 0.45 pi mm mrad are close to the results from beam simulations. Up to 80% of the design intensity at the linac exit was achieved. In February 2000, a U4+ beam from the MEVVA source was accelerated for the first time.
Among the many experimental techniques available, those providing directional information have the potential of yielding an unambiguous observation of WIMPs even in the presence of insidious backgrounds. A measurement of the distribution of arrival direction of WIMPs can also discriminate between Galactic Dark Matter halo models. In this article, I will discuss the motivation for directional detectors and review the experimental techniques used by the various experiments. I will then describe one of them, the DMTPC detector, in more detail.
This work is an extension of previous work by the authors. The dynamics of a baby skyrmion configuration, in a model Landau-Lifshitz equation, was studied in the presence of various potential obstructions. The baby skyrmion configuration was constructed from two Q=1 hedgehog solutions of the new baby Skyrme model in $(2+1)$ dimensions. The potential obstructions were created by introducing a new term into the Lagrangian, which resulted in a localised inhomogeneity in the coefficient of the potential term. In the barrier system, the normal circular path was deformed as the skyrmions traversed the barrier, after which the skyrmions orbited the boundary of the system. For critical values of the barrier height and width the skyrmions were no longer bound, although the unbound behaviour is not clearly distinct from the bound one. In the case of a potential hole, the dynamics of the baby skyrmions depends upon the binding energy of the system, and the skyrmions' behaviour varies with its value. The angular momentum must be modified to ensure overall conservation. We show that there exists a link between the oscillation in the skyrmion's energy density and the periods of non-conservation of the angular momentum in Landau-Lifshitz models.
Spectral estimation can be performed using the so-called THREE-like approach. This method leads to a convex optimization problem whose solution is characterized through its dual problem. In this paper, we show that the dual problem can be seen as a new parametric spectral estimation problem. This interpretation implies that the THREE-like solution is optimal in terms of closeness to the correlogram over a certain parametric class of spectral densities, enriching in this way its meaningfulness.
Let $p^k m^2$ be an odd perfect number with special prime $p$. In this article, we provide an alternative proof for the biconditional that $\sigma(m^2) \equiv 1 \pmod 4$ holds if and only if $p \equiv k \pmod 8$. We then give an application of this result to the case when $\sigma(m^2)/p^k$ is a square.
This work is concerned with the development of a numerical modelling approach for studying the time-accurate response of aerospace fasteners subjected to high electrical current loading from a simulated lightning strike. The electromagnetic, thermal and elastoplastic response of individual fastener components is captured by this method allowing a critical analysis of fastener design and material layering. Under high electrical current loading, ionisation of gas filled cavities in the fastener assembly can lead to viable current paths across internal voids. This ionisation can lead to localised pockets of high pressure plasma through the Joule heating effect. The multi-physics approach developed in this paper extends an existing methodology that allows a two-way dynamic non-linear coupling of the plasma arc, the titanium aerospace fastener components, the surrounding aircraft panels, the internal supporting structure and internal plasma-filled cavities. Results from this model are compared with experimental measurements of a titanium fastener holding together carbon composite panels separated by thin dielectric layers. The current distribution measurements are shown to be accurately reproduced. A parameter study is used to assess the internal cavity modelling strategy and to quantify the relation between the internal cavity plasma pressure, the electrical current distribution and changes in the internal cavity geometry.
We report the first terahertz Kerr measurements on bulk crystals of the topological insulator Bi2Se3. At T=10 K and fields up to 8 T, the real and imaginary Kerr angle and the reflectance were measured at a frequency of 5.24 meV utilizing both linearly and circularly polarized incident radiation. A single-fluid free-carrier bulk response cannot describe the line shape. Surface states with a small mass and a surprisingly large associated spectral weight quantitatively fit all data. However, carrier concentration inhomogeneity has not been ruled out. A method employing a gate is shown to be promising for separating surface from bulk effects.
We present self-consistent chemical and spectro-photometric evolution models applied to spiral and irregular galaxies. Evolutionary synthesis models, usually used to explain the spectro-photometric data of the stellar component, are combined with chemical evolution models to determine precisely the evolutionary history of spiral and irregular galaxies. In this work we show the results obtained for a wide grid of modeled theoretical galaxies.
Efficiency in passage times is an important issue in designing networks, such as transportation or computer networks. The small-world networks have structures that yield high efficiency, while keeping the network highly clustered. We show that among all networks with the small-world structure, the most efficient ones have a single ``center'', from which all shortcuts are connected to uniformly distributed nodes over the network. The networks with several centers and a connected subnetwork of shortcuts are shown to be ``almost'' as efficient. Genetic-algorithm simulations further support our results.
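A minimal sketch of the efficiency measure behind this comparison, using networkx on a toy ring lattice with all shortcuts attached to a single center (an illustrative construction, not the exact network ensemble or genetic-algorithm setup of the paper):

import networkx as nx

N, k = 100, 10
G = nx.watts_strogatz_graph(N, 4, 0)     # clustered ring lattice, no rewired edges yet
center = 0
for node in range(0, N, N // k):         # shortcuts from one center to uniformly spaced nodes
    if node != center:
        G.add_edge(center, node)

# Global efficiency: average of 1/d(u, v) over all pairs of distinct nodes
print("global efficiency:", nx.global_efficiency(G))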
We present two ways of regularizing a one-parameter family of piecewise smooth dynamical systems undergoing a codimension-one grazing-sliding global bifurcation of periodic orbits. First we use the Sotomayor-Teixeira regularization and prove that the regularized family has a saddle-node bifurcation of periodic orbits. Then we perform a hysteretic regularization and show that the regularized family has chaotic dynamics. Our result shows that, although the two regularizations give the same dynamics in the sliding modes, the hysteretic process generates chaotic dynamics when a tangency appears.
We study the stability of the electroweak vacuum in the supersymmetric (SUSY) standard model (SM), paying particular attention to its relation to the SUSY contribution to the muon anomalous magnetic moment $a_\mu$. If the SUSY contribution to $a_\mu$ is sizable, the electroweak vacuum may become unstable because of enhanced trilinear scalar interactions in particular when the sleptons are heavy. Consequently, assuming enhanced SUSY contribution to $a_\mu$, an upper bound on the slepton masses is obtained. We give a detailed prescription to perform a full one-loop calculation of the decay rate of the electroweak vacuum for the case that the SUSY contribution to $a_\mu$ is enhanced. We also give an upper bound on the slepton masses as a function of the SUSY contribution to $a_\mu$.
In this article, we employed the nanosecond Z-scan technique to demonstrate the nonlinear optical response of Ge30Se55Bi15 thin films after thermal and photo annealing. The intensity-dependent open-aperture Z-scan traces reveal that all the samples, i.e. the as-prepared, thermally annealed, and photo-annealed thin films, exhibit reverse saturable absorption (RSA). The experimental results indicate that both thermal and photo annealing can be used to efficiently enhance the nonlinear absorption coefficient compared to the as-prepared sample. We further demonstrate that the beta values of the thermally annealed and as-prepared samples increase significantly at higher intensities. On the contrary, beta for the photo-annealed sample does not exhibit appreciable changes against the intensity variation.
Through a series of papers in the 1980's, Bouchet introduced isotropic systems and the Tutte-Martin polynomial of an isotropic system. Then, Arratia, Bollob\'as, and Sorkin developed the interlace polynomial of a graph in [ABS00] in response to a DNA sequencing application. The interlace polynomial has generated considerable recent attention, with new results including realizing the original interlace polynomial by a closed form generating function expression instead of by the original recursive definition (see Aigner and van der Holst [AvdH04], and Arratia, Bollob\'as, and Sorkin [ABS04b]). Now, Bouchet [Bou05] recognizes the vertex-nullity interlace polynomial of a graph as the Tutte-Martin polynomial of an associated isotropic system. This suggests that the machinery of isotropic systems may be well-suited to investigating properties of the interlace polynomial. Thus, we present here an alternative proof for the closed form presentation of the vertex-nullity interlace polynomial using the machinery of isotropic systems. This approach both illustrates the intimate connection between the vertex-nullity interlace polynomial and the Tutte-Martin polynomial of an isotropic system and also provides a concrete example of manipulating isotropic systems. We also provide a brief survey of related work.
An Automata Network is a map ${f:Q^n\rightarrow Q^n}$ where $Q$ is a finite alphabet. It can be viewed as a network of $n$ entities, each holding a state from $Q$, and evolving according to a deterministic synchronous update rule in such a way that each entity only depends on its neighbors in the network's graph, called interaction graph. A major trend in automata network theory is to understand how the interaction graph affects dynamical properties of $f$. In this work we introduce the following property called expansivity: the observation of the sequence of states at any given node is sufficient to determine the initial configuration of the whole network. Our main result is a characterization of interaction graphs that allow expansivity. Moreover, we show that this property is generic among linear automata networks over such graphs with large enough alphabet. We show however that the situation is more complex when the alphabet is fixed independently of the size of the interaction graph: no alphabet is sufficient to obtain expansivity on all admissible graphs, and only non-linear solutions exist in some cases. Finally, among other results, we consider a stronger version of expansivity where we ask to determine the initial configuration from any large enough observation of the system. We show that it can be achieved for any number of nodes and naturally gives rise to maximum distance separable codes.
We present a numerically efficient approach for learning a risk-neutral measure for paths of simulated spot and option prices up to a finite horizon under convex transaction costs and convex trading constraints. This approach can then be used to implement a stochastic implied volatility model in the following two steps: 1. Train a market simulator for option prices, as discussed, for example, in our recent work; 2. Find a risk-neutral density, specifically the minimal entropy martingale measure. The resulting model can be used for risk-neutral pricing, or for Deep Hedging in the case of transaction costs or trading constraints. To motivate the proposed approach, we also show that market dynamics are free from "statistical arbitrage" in the absence of transaction costs if and only if they follow a risk-neutral measure. We additionally provide a more general characterization in the presence of convex transaction costs and trading constraints. These results can be seen as an analogue of the fundamental theorem of asset pricing for statistical arbitrage under trading frictions and are of independent interest.
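As a toy illustration of step 2 (the minimal entropy martingale measure), reduced to a one-period, single-asset, zero-rate setting where the entropy-minimizing reweighting of simulated scenarios is an exponential tilt; this is a simplification for illustration, not the paper's multi-period algorithm or its handling of transaction costs:

import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
n_paths = 500
S0 = 100.0
ST = S0 * np.exp(0.05 + 0.2 * rng.standard_normal(n_paths))  # simulated terminal spot prices
p = np.full(n_paths, 1.0 / n_paths)                          # statistical (real-world) weights

def tilted_weights(theta):
    # Minimal-entropy candidate: exponential tilt q_i proportional to p_i * exp(theta * (ST_i - S0))
    w = p * np.exp(theta * (ST - S0))
    return w / w.sum()

def martingale_gap(theta):
    # Expected gain under the tilted measure; it vanishes at the risk-neutral tilt
    q = tilted_weights(theta)
    return float(q @ (ST - S0))

theta_star = brentq(martingale_gap, -1.0, 1.0)   # solve E_q[ST - S0] = 0
q = tilted_weights(theta_star)
print("E_q[ST] =", q @ ST, "vs S0 =", S0)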
Propagation effects are one of the main sources of noise in high-precision pulsar timing. For pulsars below an ecliptic latitude of $5^\circ$, the ionised plasma in the solar wind can introduce dispersive delays of order 100 microseconds around solar conjunction at an observing frequency of 300 MHz. A common approach to mitigate this assumes a spherical solar wind with a time-constant amplitude. However, this has been shown to be insufficient to describe the solar wind. We present a linear, Gaussian-process piecewise Bayesian approach to fit a spherical solar wind of time-variable amplitude, which has been implemented in the pulsar software run_enterprise. Through simulations, we find that the current EPTA+InPTA data combination is not sensitive to such variations; however, solar wind variations will become important in the near future with the addition of new InPTA data and data collected with the low-frequency LOFAR telescope. We also compare our results for different high-precision timing datasets (EPTA+InPTA, PPTA, and LOFAR) of three millisecond pulsars (J0030$+$0451, J1022$+$1001, J2145$-$0450), and find that the solar-wind amplitudes are generally consistent for any individual pulsar, but they can vary from pulsar to pulsar. Finally, we compare our results with those of an independent method on the same LOFAR data of the three millisecond pulsars. We find that differences between the results of the two methods can be mainly attributed to the modelling of dispersion variations in the interstellar medium, rather than the solar wind modelling.
We study the effect of an inhomogeneous gas density on positive streamer discharges in air using a 3D fluid model with stochastic photoionization, generalizing earlier work with a 2D axisymmetric model by Starikovskiy and Aleksandrov (2019 Plasma Sources Sci. Technol. 28 095022). We consider various types of planar and (hemi)spherical gas density gradients. Streamers propagate from a region of density n0 towards a region of higher or lower gas density n1, where n0 corresponds to 300 K and 1 bar. We observe that streamers can always propagate into a region with a lower gas density. When streamers enter a region with a higher gas density, branching can occur at the density gradient, with branches growing in a flower-like pattern over the gradient surface. Depending on the gas density ratio, the gradient width and other factors, narrow branches are able to propagate into the higher-density gas. In a planar geometry, we find that such propagation is possible up to a gas density slope of 3.5n0/mm, although this value depends on a number of conditions, such as the gradient angle. Surprisingly, a higher applied voltage makes it more difficult for streamers to penetrate into the high-density region, due to an increase of the primary streamer's radius.
The Semimicroscopic Algebraic Cluster Model (SACM) is applied to 12C as a system of three alpha-clusters. The microscopic model space, which observes the Pauli exclusion principle (PEP), is constructed. It is shown that the 12C nucleus can effectively be treated as a two-cluster system 8Be+alpha. The experimental spectrum is well reproduced. The geometrical mapping is discussed, and it is shown that the ground state must correspond to a triangular structure, which is in agreement with other microscopic calculations. The non-zero B(E2; 0_2+ --> 2_1+) transition requires a mixing of SU(3) irreducible representations (irreps), whose consequences are discussed. The Hoyle state turns out to contain large shell excitations. The results are compared to another phenomenological model, which assumes a triangular structure and, using simple symmetry arguments, can reproduce the states observed at low energy. This model does not observe the PEP, and one objective of our contribution is to verify the extent to which the PEP is important.
In this paper, we present a novel method for dynamically expanding Convolutional Neural Networks (CNNs) during training, aimed at meeting the increasing demand for efficient and sustainable deep learning models. Our approach, drawing from the seminal work on Self-Expanding Neural Networks (SENN), employs a natural expansion score as the expansion criterion to address the common issue of over-parameterization in deep convolutional neural networks, thereby ensuring that the model's complexity is finely tuned to the task's specific needs. A significant benefit of this method is its eco-friendly nature, as it obviates the necessity of training multiple models of different sizes. We employ a strategy where a single model is dynamically expanded, facilitating the extraction of checkpoints at various complexity levels, effectively reducing computational resource use and energy consumption while also expediting the development cycle by offering diverse model complexities from a single training session. We evaluate our method on the CIFAR-10 dataset, and our experimental results validate this approach, demonstrating that dynamically adding layers not only maintains but also improves CNN performance, underscoring the effectiveness of our expansion criterion. This approach marks a considerable advancement in developing adaptive, scalable, and environmentally considerate neural network architectures, addressing key challenges in the field of deep learning.
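A minimal PyTorch sketch of the dynamic-expansion mechanism described above (the residual block layout is illustrative and the expansion trigger is left as a placeholder comment; the natural expansion score itself is defined in SENN and this paper and is not reproduced here):

import torch
import torch.nn as nn

class ExpandableCNN(nn.Module):
    # Toy CNN whose body can grow one conv block at a time during training.
    def __init__(self, channels=16, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        self.blocks = nn.ModuleList()
        self.head = nn.Linear(channels, num_classes)
        self.channels = channels

    def add_block(self):
        # Called when the expansion criterion (e.g. the natural expansion score
        # exceeding a threshold) indicates the model needs more capacity.
        self.blocks.append(nn.Sequential(
            nn.Conv2d(self.channels, self.channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(self.channels),
            nn.ReLU()))

    def forward(self, x):
        x = torch.relu(self.stem(x))
        for blk in self.blocks:
            x = x + blk(x)            # residual connection: new blocks perturb, not replace
        x = x.mean(dim=(2, 3))        # global average pooling
        return self.head(x)

model = ExpandableCNN()
x = torch.randn(4, 3, 32, 32)         # CIFAR-10-sized input batch
print(model(x).shape)                 # torch.Size([4, 10])
model.add_block()                     # checkpointing before/after yields models of two sizes
print(model(x).shape)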
It has been an ultimate but seemingly distant goal of nanofluidics to controllably fabricate capillaries with dimensions approaching the size of small ions and water molecules. We report ion transport through ultimately narrow slits that are fabricated by effectively removing a single atomic plane from a bulk crystal. The atomically flat angstrom-scale slits exhibit little surface charge, allowing elucidation of the role of steric effects. We find that ions with hydrated diameters larger than the slit size can still permeate through, albeit with reduced mobility. The confinement also leads to a notable asymmetry between anions and cations of the same diameter. Our results provide a platform for studying effects of angstrom-scale confinement, which is important for development of nanofluidics, molecular separation and other nanoscale technologies.
The electrical and magnetic properties of p-type cubic (Ga,Mn)N thin films grown by plasma-assisted molecular beam epitaxy are reported. Hole concentrations in excess of 10^18 cm^-3 at room temperature are observed. Activated behaviour is observed down to around 150 K, characterised by an acceptor ionisation energy of around 45-60 meV. The dependence of hole concentration and ionisation energy on Mn concentration indicates that the shallow acceptor level is not simply due to unintentional co-doping. Thermopower measurements on freestanding films, CV profilometry, and the dependence of conductivity on thickness and growth temperature all show that the conduction is not due to diffusion into the substrate. We therefore associate the p-type conductivity with the presence of the Mn in the cubic GaN films. Magnetometry measurements indicate a small room temperature ferromagnetic phase, and a significantly larger magnetic coupling at low temperatures.
We characterize the existence of a nonnegative, sublinear and continuous order-preserving function for a not necessarily complete preorder on a real convex cone in an arbitrary topological real vector space. As a corollary of the main result, we present necessary and sufficient conditions for the existence of such an order-preserving function for a complete preorder.
Recent evidence shows that heteroclinic bifurcations in magnetic islands may be caused by the amplitude variation of resonant magnetic perturbations in tokamaks. To investigate the onset of these bifurcations, we consider a large aspect ratio tokamak with an ergodic limiter composed of two pairs of rings that create external primary perturbations with two sets of wave numbers. An individual pair produces hyperbolic and elliptic periodic points, and their associated islands, that are consistent with the Poincar\'e-Birkhoff fixed point theorem. However, for two pairs producing external perturbations resonant on the same rational surface, we show that different configurations of isochronous island chains may appear in phase space according to the amplitude of the electric currents in each pair of the ergodic limiter. When one of the electric currents increases, isochronous bifurcations take place and new islands are created with the same winding number as the preceding islands. We present examples of bifurcation sequences displaying (a) direct transitions from the island chain configuration generated by one of the pairs to the configuration produced by the other pair, and (b) transitions with intermediate configurations produced by the coupling of the limiter pairs. Furthermore, we identify shearless bifurcations inside some isochronous islands, originating nonmonotonic local winding number profiles with associated shearless invariant curves.
Monolayer 2H-NbSe2 has recently been shown to be a two-dimensional superconductor with a coexisting charge-density wave (CDW). As both phenomena are intimately related to the electron-lattice interaction, a natural question is how superconductivity and the CDW are interrelated through electron-phonon coupling (EPC), which is important to the understanding of two-dimensional superconductivity. This work investigates the superconductivity of monolayer NbSe2 in the CDW phase using the anisotropic Migdal-Eliashberg formalism based on first-principles calculations. The mechanism of the competition between and coexistence of the superconductivity and CDW is studied in detail by analyzing the EPC. It is found that the intra-pocket scattering is related to superconductivity, leading to an almost constant value of the superconducting gaps on parts of the Fermi surface. The inter-pocket scattering is found to be responsible for the CDW, leading to a partial or full bandgap on the remaining Fermi surface. Recent experiments indicate a transition from regular superconductivity in thin-film NbSe2 to two-gap superconductivity in the bulk, which is shown here to originate in the extent of the CDW-induced Fermi surface gapping of the K and K' pockets. Overall blue shifts of the phonons and a sharp decrease of the Eliashberg spectrum are found when the CDW forms.
We explore how the dilepton production rate is modified near the critical temperature of color superconductivity and the QCD critical point by the soft modes inherently associated with these second-order phase transitions. It is shown that the soft modes affect the photon self-energy significantly through the so-called Aslamazov-Larkin, Maki-Thompson, and density-of-states terms, which are known to be responsible for the paraconductivity in metallic superconductors, and cause an anomalous enhancement of the production rate in the low energy/momentum region.
In the present paper, we consider the initial value problem for the bipolar Vlasov-Poisson-Boltzmann (bVPB) system and its corresponding modified Vlasov-Poisson-Boltzmann (mVPB) system. We give the spectrum analysis of the linearized bVPB and mVPB systems around their equilibrium state and show the optimal convergence rate of global solutions. It is shown that the electric field decays exponentially and the distribution function tends to the absolute Maxwellian at the optimal convergence rate $(1+t)^{-3/4}$ for the bVPB system, whereas both the electric field and the distribution function converge to the equilibrium state at the optimal rate $(1+t)^{-3/4}$ for the mVPB system.
We introduce an analytic function $\Psi(s_1,\ldots,s_r;w)$ that interpolates truncated multiple zeta functions $\zeta_N(s_1,\ldots,s_r)$. We represent this interpolant as a Mellin transform of a function $G(q_1,\ldots,q_r;w)$ and, using this expression, give the analytic continuation. Further, the harmonic product relations for $\Psi$ and $G$ are established via relevant Hopf algebra structures, and some properties of the function $G$ are provided.
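For orientation, the truncated multiple zeta functions being interpolated may be written, in one common convention (the paper's exact convention for the ordering of indices may differ):
$$ \zeta_N(s_1,\ldots,s_r) \;=\; \sum_{N\,\ge\, n_1 > n_2 > \cdots > n_r \,\ge\, 1} \frac{1}{n_1^{s_1} n_2^{s_2}\cdots n_r^{s_r}} . $$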
Recent three-dimensional radiation hydrodynamic simulations by Wedemeyer et al. (2004) suggest that the solar chromosphere is highly structured in space and time on scales of only 1000 km and 20-25 s, respectively. The resulting pattern consists of a network of hot gas and enclosed cool regions which are due to the propagation and interaction of shock fronts. In contrast to many other diagnostics, the radio continuum at millimeter wavelengths is formed in LTE and provides a rather direct measure of the thermal structure. It thus facilitates the comparison between numerical models and observations. While the involved time and length scales are not accessible with today's equipment for that wavelength range, the next generation of instruments, such as the Atacama Large Millimeter Array (ALMA), will provide a big step towards the required resolution. Here we present results of radiative transfer calculations at mm and sub-mm wavelengths with emphasis on the spatial and temporal resolution, which is crucial for the ongoing discussion about the chromospheric temperature structure.
Detection jitter quantifies variance introduced by the detector in the determination of photon arrival time. It is a crucial performance parameter for systems using superconducting nanowire single photon detectors (SNSPDs). In this work, we have demonstrated that the detection timing jitter is limited in part by the spatial variation of photon detection events along the length of the wire. This distribution causes the generated electrical pulses to arrive at the readout at varied times. We define this jitter source as geometric jitter since it is related to the length and area of the SNSPD. To characterize the geometric jitter, we have constructed a novel differential cryogenic readout with less than 7 ps of electronic jitter that can amplify the pulses generated from the two ends of an SNSPD. By differencing the measured arrival times of the two electrical pulses, we were able to partially cancel out the difference of the propagation times and thus reduce the uncertainty of the photon arrival time. Our experimental data indicates that the variation of the differential propagation time was a few ps for a 3 {\mu}m x 3 {\mu}m device while it increased up to 50 ps for a 20 {\mu}m x 20 {\mu}m device. In a 20 {\mu}m x 20 {\mu}m large SNSPD, we achieved a 20% reduction in the overall detection timing jitter for detecting telecom-wavelength photons by using the differential cryogenic readout. The geometric jitter hypothesis was further confirmed by studying jitter in devices that consisted of long wires with 1-{\mu}m-long narrowed regions used for sensing photons.
The immense scalability of continuous-variable cluster states motivates their study as a platform for quantum computing, with fault tolerance possible given sufficient squeezing and appropriately encoded qubits [Menicucci, PRL 112, 120504 (2014)]. Here, we expand the scope of that result by showing that additional anti-squeezing has no effect on the fault-tolerance threshold, removing the purity requirement for experimental continuous-variable cluster-state quantum computing. We emphasize that the appropriate experimental target for fault-tolerant applications is to directly measure 15-17 dB of squeezing in the cluster state rather than the more conservative upper bound of 20.5 dB.
Robust, high-precision global localization is fundamental to a wide range of outdoor robotics applications. Conventional fusion methods use low-accuracy pseudorange-based GNSS measurements ($\gg 5$ m errors) and can only yield a coarse registration to the global earth-centered-earth-fixed (ECEF) frame. In this paper, we leverage high-precision GNSS carrier-phase positioning and aid it with local visual-inertial odometry (VIO) tracking using an extended Kalman filter (EKF) framework that better resolves the integer ambiguity concerned with the GNSS carrier phase. We also propose an algorithm for accurate GNSS-antenna-to-IMU extrinsics calibration to accurately align VIO to the ECEF frame. Together, our system achieves robust global positioning, demonstrated by real-world hardware experiments in severely occluded urban canyons, and outperforms the state-of-the-art RTKLIB by a significant margin in terms of integer ambiguity solution fix rate and positioning RMSE accuracy.
In the context of the AdS/CFT correspondence we discuss the gravity dual of a high energy collision in a strongly coupled ${\cal N}=4$ SYM gauge theory. We suggest a setting in which the two colliding objects are made of non-dynamical heavy quarks and antiquarks, which allows us to treat the process in the classical string approximation. The collision ``debris'' consists of closed as well as open strings; the latter have ends on the two outgoing charges and are thus being ``stretched'' along the collision axis. We discuss the motion in AdS of some simple objects first -- massless and massive particles -- and then focus on open strings. We study the latter in considerable detail, concluding that they rapidly become ``rectangular'' in proper time-spatial rapidity ($\tau$-$y$) coordinates, with a well separated fragmentation part and a near-free-falling rapidity-independent central part. Assuming that in collisions of ``walls'' of charges multiple stretching strings are created, we also consider the motion of a 3d stretching membrane. We then argue that a complete solution can be approximated by two different vacuum solutions of the Einstein equations, with the matter membrane separating them. We identify one of these solutions with the Janik-Peschanski stretching black hole solution, and show that all objects approach its (retreating) horizon in a universal manner.
Employing ab initio electronic calculations, we propose a new type of two-dimensional (2D) topological insulator (TI), monolayer (ML) low-buckled (LB) mercury telluride (HgTe) and mercury selenide (HgSe), with tunable band gaps. Monolayer LB HgTe undergoes a transition to a topologically nontrivial phase under an appropriate in-plane tensile strain ($\epsilon$ > 2.6%) due to the combined effects of strain and spin-orbit coupling (SOC). For tensile strains 2.6% < $\epsilon$ < 4.2%, the band inversion and the topologically nontrivial gap are induced by the SOC. For $\epsilon$ > 4.2%, the band inversion is already realized by strain but the topological gap is induced by the SOC. The band gap of the monolayer LB HgTe TI phase can be tuned over a wide range from 0 eV to 0.20 eV as the tensile strain increases from 2.6% to 7.4%. Similarly, the topological phase transition of monolayer LB HgSe is induced by strain and SOC for strains $\epsilon$ > 3.1%. The topological band gap reaches 0.05 eV as the strain increases to about 4.6%. The large band gaps of the 2D LB HgTe and HgSe monolayers make this type of material suitable for practical applications at room temperature.
The hyper-velocity star S5-HVS1, ejected 5 Myr ago from the Galactic Center at 1800 km/s, was most likely produced by tidal break-up of a tight binary by the supermassive black hole SgrA*. Taking a Monte Carlo approach, we show that the former companion of S5-HVS1 was likely a main-sequence star between 1.2 and 6 solar masses and was captured into a highly eccentric orbit with pericenter distance in the range 1-10 AU and semimajor axis about $10^3$ AU. We then explore the fate of the captured star. We find that the heat deposited by tidally excited stellar oscillation modes leads to runaway disruption if the pericenter distance is smaller than about 3 AU. Over the past 5 Myr, its angular momentum has been significantly modified by orbital relaxation, which may stochastically drive the pericenter inwards below 3 AU and cause tidal disruption. We find an overall survival probability in the range 5% to 50%, depending on the local relaxation time in the close environment of the captured star, and the initial pericenter at capture. The pericenter distance of the surviving star has migrated to 10-100 AU, making it potentially the most extreme member of the S-star cluster. From the ejection rate of S5-HVS1-like stars, we estimate that there may currently be a few stars in such highly eccentric orbits. They should be detectable (typically Ks < 18.5 mag) by the GRAVITY instrument and by future Extremely Large Telescopes and hence provide an extraordinary probe of the spin of SgrA*.
The topological sigma model with a semi-infinite cigar-like target space (black hole geometry) is considered. The model is shown to possess unsuppressed instantons. The noncompactness of the moduli space of these instantons is responsible for unusual physics. There is a stable vacuum state in which the vacuum energy is zero and correlation functions are numbers, so the model is in the topological phase. However, there are other vacuum states in which correlation functions show coordinate dependence. An estimate of the vacuum energy indicates that it is nonzero. These states are interpreted as ones with broken BRST symmetry.
When any extraordinary event takes place anywhere in the world, it is social media that acts as the fastest carrier of the news, along with the consequences of that event. One can gather much information through social networks regarding the sentiments, behavior, and opinions of people. In this paper, we focus mainly on sentiment analysis of Twitter data from India comprising COVID-19 tweets. We show how the Twitter data were extracted and how sentiment analysis queries were run on them. This is helpful for analyzing the information in the tweets, where opinions are highly unstructured, heterogeneous, and either positive, negative, or in some cases neutral.
The torque on a moving electric or magnetic dipole in slow motion is deduced using the Lorentz transformation of the fields to first order in v/c. It is shown that the resulting equations are independent of the model adopted for the dipole, whether of Amperian or Gilbertian type, thus showing the complete validity of the Amp\`ere equivalence principle even in dynamical conditions. The torque consists of three terms: besides the direct torque on the dipole, there is a term due to the torque on the associated perpendicular dual dipole caused by the motion, and an inertial torque due to the displacement of the dipole, which carries with it the field linear momentum, or hidden momentum.
We review our recent results on short-time approximations, with emphasis on applications for which the system-environment interactions involve a general non-Hermitian system operator and its conjugate. We evaluate the onset of decoherence at low temperatures in open quantum systems. The developed approach is complementary to Markovian approximations and appropriate for the evaluation of quantum computing schemes. An example of a spin system coupled to a bosonic heat bath is discussed.
We give an elementary introduction to the theory of triangulated categories covering their axioms, homological algebra in triangulated categories, triangulated subcategories, and Verdier localization. We try to use a minimal set of axioms for triangulated categories and derive all other statements from these, including the existence of biproducts. We conclude with a list of examples.
We report dc transport and magnetization measurements of Jc in MgB2 wires fabricated by the powder-in-tube method, using commercial MgB2 powder with 5 at.% Mg powder added as an additional source of magnesium and stainless steel as the sheath material. By appropriate heat treatments, we have been able to increase Jc by more than one order of magnitude over that of the as-drawn wire. We show that one beneficial effect of the annealing is the elimination of most of the micro-cracks, and we correlate the increase in Jc with the disappearance of the weak-link-type behavior.
Given the closeness of the two open clusters Cr 135 and UBC 7 on the sky, we investigate the possibility that the two clusters are physically related. We aim to recover the present-day stellar membership of the open clusters Collinder 135 and UBC 7 (300 pc from the Sun), to constrain their kinematic parameters, ages and masses, and to restore their primordial phase-space configuration. The most reliable cluster members are selected with our traditional method, modified for the use of Gaia DR2 data. Numerical simulations use the integration of cluster trajectories backwards in time with our original high-order Hermite4 code \PGRAPE. We constrain the age, spatial coordinates and velocities, radii and masses of the clusters. We estimate the actual separation of the cluster centres to be 24 pc. The orbital integration shows that the clusters were much closer in the past if their current line-of-sight velocities are very similar and the total mass is more than 7 times larger than the mass of the determined most reliable members. We conclude that the two clusters Cr 135 and UBC 7 might very well have formed a physical pair, based on the observational evidence as well as the numerical simulations. The probability of a chance coincidence is only about $2\%$.
We study the Tevatron signatures of promptly-decaying slepton co-NLSPs in the context of General Gauge Mediation (GGM). The signatures consist of trileptons plus MET and same-sign dileptons plus MET. Focusing first on electroweak production, where the Tevatron has an advantage over the early LHC, we establish four simple benchmark scenarios within the parameter space of GGM which qualitatively capture all the relevant phenomenology. We derive limits on these benchmarks from existing searches, estimate the discovery potential with 10 fb^-1, and discuss ways in which these searches can be optimized for slepton co-NLSPs. We also analyze the Tevatron constraints on a scenario with light gluinos that could be discovered at the early LHC. Overall, we find that the Tevatron still has excellent reach for the discovery of SUSY in multilepton final states. Finally, we comment on the possible interpretation of a mild "excess" in the CDF same-sign dilepton search in terms of slepton co-NLSPs.
We predict magnitudes for young planets embedded in transition discs, still affected by extinction due to material in the disc. We focus on Jupiter-size planets at a late stage of their formation, when the planet has carved a deep gap in the gas and dust distributions and the disc starts being transparent to the planet flux in the infrared (IR). Column densities are estimated by means of three-dimensional hydrodynamical models, performed for several planet masses. Expected magnitudes are obtained by using typical extinction properties of the disc material and evolutionary models of giant planets. For the simulated cases located at $5.2$ AU in a disc with local unperturbed surface density of $127$ $\mathrm{g} \cdot \mathrm{cm}^{-2}$, a $1$ $M_J$ planet is highly extincted in J-, H- and K-bands, with predicted absolute magnitudes $\ge 50$ mag. In L- and M-bands extinction decreases, with planet magnitudes between $25$ and $35$ mag. In the N-band, due to the silicate feature on the dust opacities, the expected magnitude increases to $40$ mag. For a $2$ $M_J$ planet, the magnitudes in J-, H- and K-bands are above $22$ mag, while for L-, M- and N-bands the planet magnitudes are between $15$ and $20$ mag. For the $5$ $M_J$ planet, extinction does not play a role in any IR band, due to its ability to open deep gaps. Contrast curves are derived for the transition discs in CQ Tau, PDS70, HL Tau, TW Hya and HD163296. Planet mass upper-limits are estimated for the known gaps in the last two systems.
We determine the mass profile of an ensemble cluster built from 3056 galaxies in 59 nearby clusters observed in the ESO Nearby Abell Cluster Survey. The mass profile is derived from the distribution and kinematics of the Early-type (elliptical and S0) galaxies only, which are most likely to meet the conditions for the application of the Jeans equation. We assume that the Early-type galaxies have isotropic orbits, as supported by the shape of their velocity distribution. The brightest ellipticals (with M_R < -22+5 log h), and the Early-type galaxies in subclusters are excluded from the sample. Application of the Jeans equation yields a non-parametric estimate of the cumulative mass profile M(<r), which has a logarithmic slope of -2.4 +/- 0.4 in the density profile at the virial radius. We compare our result with several analytical models from the literature (NFW, Moore et al. 1999, softened isothermal sphere, and Burkert 1995) and find that all are acceptable. However, our data do not provide compelling evidence for the existence of a core; as a matter of fact, the best-fitting core models have core-radii well below 100/h kpc. The upper limit we put on the size of the core-radius provides a constraint for the scattering cross-section of dark matter particles. The total-mass density appears to be traced remarkably well by the luminosity density of the Early-type galaxies. On the contrary, the luminosity density of the brightest ellipticals increases faster towards the center than the mass density, while the luminosity density profiles of the early and late spirals are somewhat flatter than the mass density profile. (Abridged)
Centralized training methods have shown promising results in MR image reconstruction, but privacy concerns arise when gathering data from multiple institutions. Federated learning, a distributed collaborative training scheme, can utilize multi-center data without the need to transfer data between institutions. However, existing federated learning MR image reconstruction methods rely on manually designed models with a large number of parameters and suffer from performance degradation when facing heterogeneous data distributions. To address this, this paper proposes a novel FederAted neUral archiTecture search approach fOr MR Image reconstruction (FedAutoMRI). The proposed method utilizes differentiable architecture search to automatically find the optimal network architecture. In addition, an exponential moving average method is introduced to improve the robustness of the client model and address the data heterogeneity issue. To the best of our knowledge, this is the first work to use federated neural architecture search for MR image reconstruction. Experimental results demonstrate that our proposed FedAutoMRI can achieve promising performance while utilizing a lightweight model with only a small number of parameters compared with classical federated learning methods.
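The abstract above does not spell out how the exponential moving average (EMA) is applied; the following minimal NumPy sketch, under the assumption that the EMA smooths each client's copy of the aggregated global parameters, illustrates the general idea. All names (federated_round, ema_decay, the toy shapes) are illustrative and are not part of the FedAutoMRI code.

```python
import numpy as np

def federated_round(global_params, client_params_list, ema_params_list, ema_decay=0.99):
    """One illustrative round: FedAvg aggregation plus a per-client EMA of parameters.

    global_params      : list of numpy arrays (current server model)
    client_params_list : list of per-client parameter lists after local training
    ema_params_list    : list of per-client EMA parameter lists (same shapes)
    """
    # Server step: plain FedAvg (uniform client weighting for simplicity).
    new_global = [np.mean([c[i] for c in client_params_list], axis=0)
                  for i in range(len(global_params))]

    # Client step: each client smooths the incoming global model with its EMA,
    # which damps abrupt shifts caused by heterogeneous data across centers.
    new_ema_list = []
    for ema in ema_params_list:
        new_ema = [ema_decay * e + (1.0 - ema_decay) * g
                   for e, g in zip(ema, new_global)]
        new_ema_list.append(new_ema)
    return new_global, new_ema_list

# Toy usage with two clients and a two-tensor "model".
rng = np.random.default_rng(0)
shapes = [(3, 3), (3,)]
global_params = [rng.normal(size=s) for s in shapes]
clients = [[p + rng.normal(scale=0.1, size=p.shape) for p in global_params] for _ in range(2)]
emas = [[p.copy() for p in global_params] for _ in range(2)]
global_params, emas = federated_round(global_params, clients, emas)
```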
A statistical structure $(g, T)$ on a smooth manifold $M$ induced by $(\tilde M, \tilde g, \tilde T)$ is said to be {\em robust} if there exists an open neighborhood of $(g,T)$ in the fine $C^{\infty}$-topology consisting of statistical structures induced by $(\tilde M, \tilde g, \tilde T)$. Using the Nash--Gromov implicit function theorem, we show robustness of the generic statistical structure induced on $M$ by the standard linear statistical structure on $\mathbb{R}^N$, for $N$ sufficiently large.
Space-based instruments provide new and, in some cases, unique opportunities to search for dark matter. In particular, if dark matter comprises sterile neutrinos, the X-ray detection of their decay line is the most promising strategy for discovery. Sterile neutrinos with masses in the keV range could solve several long-standing astrophysical puzzles, from supernova asymmetries and the pulsar kicks to star formation, reionization, and baryogenesis. The best current limits on sterile neutrinos come from Chandra and XMM-Newton. Future advances can be achieved with high-resolution X-ray spectrometry in space.
Here, we investigate the role of the interlayer magnetic ordering of CrSBr in the framework of $\textit{ab initio}$ calculations and by using optical spectroscopy techniques. These combined studies allow us to unambiguously determine the nature of the optical transitions. In particular, photoreflectance measurements, sensitive to the direct transitions, have been carried out for the first time. We demonstrate that the optically induced band-to-band transitions visible in the optical measurements are remarkably well assigned to the band structure via the momentum matrix elements and energy differences for the magnetic ground state (A-AFM). In addition, our study reveals significant differences in the electronic properties for the two different interlayer magnetic phases. When the magnetic ordering is changed from A-AFM to FM, the band structure is crucially modified, as reflected in a direct-to-indirect band gap transition and a significant splitting of the conduction bands along the $\Gamma-Z$ direction. Furthermore, Raman measurements demonstrate a splitting between the in-plane modes $B^2_{2g}$/$B^2_{3g}$, which is temperature dependent and can be assigned to the different interlayer magnetic states, as corroborated by the DFT+U study. Moreover, the $B^2_{2g}$ mode has not been experimentally observed before. Finally, our results point to an origin of the interlayer magnetism that can be attributed to electronic rather than structural properties. They reveal a new approach for tuning the optical and electronic properties of van der Waals magnets by controlling the interlayer magnetic ordering in adjacent layers.
First-order methods with momentum such as Nesterov's fast gradient method are very useful for convex optimization problems, but can exhibit undesirable oscillations that yield slow convergence rates for some applications. An adaptive restarting scheme can improve the convergence rate of the fast gradient method when the strong convexity parameter of the cost function is unknown or when the iterates of the algorithm enter a locally strongly convex region. Recently, we introduced the optimized gradient method, a first-order algorithm that has an inexpensive per-iteration computational cost similar to that of the fast gradient method, yet has a worst-case cost-function bound that decreases twice as fast as that of the fast gradient method and is optimal for large-dimensional smooth convex problems. Building upon the success of accelerating the fast gradient method using adaptive restart, this paper investigates similar heuristic acceleration of the optimized gradient method. We first derive a new first-order method that resembles the optimized gradient method for strongly convex quadratic problems with known function parameters, yielding a linear convergence rate that is faster than that of the analogous version of the fast gradient method. We then provide a heuristic analysis and numerical experiments that illustrate that adaptive restart can accelerate the convergence of the optimized gradient method. Numerical results also illustrate that adaptive restart is helpful for a proximal version of the optimized gradient method for nonsmooth composite convex functions.
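For illustration, here is a minimal sketch of function-value adaptive restart applied to an accelerated first-order method; for simplicity it uses Nesterov's fast gradient method rather than the optimized gradient method discussed above, and the ill-conditioned quadratic test problem is an assumption chosen only for demonstration.

```python
import numpy as np

def fgm_with_restart(grad, f, x0, L, iters=500, restart=True):
    """Nesterov's fast gradient method with function-value adaptive restart
    (an illustrative sketch, not the paper's optimized gradient method)."""
    x = x0.copy()
    y = x0.copy()
    t = 1.0
    f_prev = f(x0)
    for _ in range(iters):
        x_new = y - grad(y) / L                       # gradient step from the extrapolated point
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x) # momentum extrapolation
        x, t = x_new, t_new
        if restart and f(x) > f_prev:                 # restart: drop momentum when the objective rises
            y, t = x.copy(), 1.0
        f_prev = f(x)
    return x

# Ill-conditioned quadratic f(x) = 0.5 x^T A x; restart suppresses momentum-induced oscillations.
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star = fgm_with_restart(grad, f, np.array([1.0, 1.0]), L=100.0)
print(np.linalg.norm(x_star))  # close to 0, the minimizer
```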
Monte Carlo simulation has been performed on a classical two-dimensional XY model with a modified form of the interaction potential to investigate the role of topological defects in the phase transition exhibited by the model. In simulations in a restricted ensemble without defects, the system appears to remain ordered at all temperatures. Suppression of topological defects on the square plaquettes of the modified XY model leads to complete elimination of the phase transition observed in this model.
Territoriality is a phenomenon exhibited throughout nature. On the individual level, it is the process by which organisms exclude others of the same species from certain parts of space. On the population level, it is the segregation of space into separate areas, each used by subsections of the population. Proving mathematically that such individual-level processes can cause observed population-level patterns to form is necessary for linking these two levels of description in a non-speculative way. Previous mathematical analysis has relied upon assuming that animals are attracted to a central area. This can either be a fixed geographical point, such as a den or nest site, or a region that they have previously visited. However, recent simulation-based studies suggest that this attractive potential is not necessary for territorial pattern formation. Here, we construct a partial differential equation (PDE) model of territorial interactions based on the individual-based model (IBM) from those simulation studies. The resulting PDE does not rely on attraction to spatial locations, but purely on conspecific avoidance, mediated via scent-marking. We show analytically that steady-state patterns can form, as long as (i) the scent does not decay faster than it takes the animal to traverse the terrain, and (ii) the spatial scale over which animals detect scent is incorporated into the PDE. As part of the analysis, we develop a general method for taking the PDE limit of an IBM that avoids destroying any intrinsic spatial scale in the underlying behavioral decisions.
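As a rough illustration of scent-mediated conspecific avoidance, the following sketch integrates a simple 1D diffusion-advection system in which each group moves down the gradient of the other group's scent field, while scent is deposited where animals are and decays at a constant rate. The equations, parameters and discretization are illustrative assumptions and are not the PDE derived in the paper.

```python
import numpy as np

# Illustrative 1D scent-avoidance model (not the paper's exact PDE):
#   du_i/dt = D u_i'' + c (u_i * dp_j/dx)'   (advect away from the other group's scent)
#   dp_i/dt = u_i - mu * p_i                  (scent deposition and decay)
N, Lx = 200, 10.0
dx = Lx / N
dt = 1e-4
D, c, mu = 0.05, 0.5, 1.0

x = np.linspace(0, Lx, N, endpoint=False)
u1 = np.exp(-((x - 4.5) ** 2))   # group 1 starts slightly to the left
u2 = np.exp(-((x - 5.5) ** 2))   # group 2 starts slightly to the right
p1 = np.zeros(N)
p2 = np.zeros(N)

def ddx(a):    # centred first derivative, periodic boundary
    return (np.roll(a, -1) - np.roll(a, 1)) / (2 * dx)

def d2dx2(a):  # second derivative, periodic boundary
    return (np.roll(a, -1) - 2 * a + np.roll(a, 1)) / dx ** 2

for _ in range(20000):
    u1 = u1 + dt * (D * d2dx2(u1) + c * ddx(u1 * ddx(p2)))
    u2 = u2 + dt * (D * d2dx2(u2) + c * ddx(u2 * ddx(p1)))
    p1 = p1 + dt * (u1 - mu * p1)
    p2 = p2 + dt * (u2 - mu * p2)

# Overlap of the two densities (smaller means stronger spatial segregation).
print(float((u1 * u2).sum() * dx))
```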
The presence of cosmological fluctuations influences the background cosmology in which the perturbations evolve. This back-reaction arises as a second-order effect in the cosmological perturbation expansion. The effect is cumulative in the sense that all fluctuation modes contribute to the change in the background geometry, and as a consequence the back-reaction effect can be large even if the amplitude of the fluctuation spectrum is small. We review two approaches used to quantify back-reaction. In the first approach, the effect of the fluctuations on the background is expressed in terms of an effective energy-momentum tensor. We show that in the context of an inflationary background cosmology, the long-wavelength contributions to the effective energy-momentum tensor take the form of a negative cosmological constant, whose absolute value increases as a function of time since the phase space of infrared modes is increasing. This then leads to the speculation that gravitational back-reaction may lead to a dynamical cancellation mechanism for a bare cosmological constant, and yield a scaling fixed point in the asymptotic future in which the remnant cosmological constant satisfies $\Omega_{\Lambda} \sim 1$. We then discuss how infrared modes affect local observables (as opposed to mathematical background quantities) and find that the leading infrared back-reaction contributions cancel in single-field inflationary models. However, we expect non-trivial back-reaction of infrared modes in models with more than one matter field.
Observations by LIGO--Virgo of binary black hole mergers suggest a possible anti-correlation between black hole mass ratio ($q=m_{2}/m_{1}$) and the effective inspiral spin parameter $\chi_{\rm eff}$, the mass-weighted spin projection onto the binary orbital angular momentum (Callister et al. 2021). We show that such an anti-correlation can naturally occur for binary black holes assembled in active galactic nuclei (AGN) due to spherical and planar symmetry-breaking effects. We describe a phenomenological model in which: 1) heavier black holes live in the AGN disk and tend to spin up into alignment with the disk; 2) lighter black holes with random spin orientations live in the nuclear spheroid; 3) the AGN disk is dense enough to rapidly capture a fraction of the spheroid component, but small enough in radial extent to limit the number of bulk disk mergers; 4) migration within the disk is non-uniform, likely disrupted by feedback from migrators or disk turbulence; 5) dynamical encounters in the disk are common and preferentially disrupt binaries that are retrograde around their center of mass, particularly at stalling orbits, or traps. This model may explain trends in LIGO--Virgo data while offering falsifiable predictions. Comparisons of predictions in ($q,\chi_{\rm eff}$) parameter space for the different channels may allow us to distinguish their fractional contributions to the observed merger rates.
When using a recently developed method of Doppler-Zeeman mapping (Vasilchenko et al., 1996) for the analysis of a real star and real observational data, we are confronted with limitations due to model simplifications and unavoidable errors in the observed spectra. We discuss the errors introduced by probable inaccuracies of the mathematical model: the analytical fit of the local Stokes parameters, the influence of the magneto-optical effect, the lack of knowledge of the true atmosphere model used to compute the local Stokes profiles, and non-uniform surface brightness. The magnetic field configuration is found in the form of an arbitrarily shifted dipole and of a sum of a dipole and a quadrupole, along with the distribution of Si, Ti, Cr and Fe over the surface of the star. Lines of different elements lead to the same magnetic field configuration, which is reliably determined for the part of the stellar surface that faces the observer. This allows us to compare the magnetic field and chemical maps of the surface of HD 215441. A large-scale ring structure with the magnetic pole at its center is clearly seen on the abundance maps. Si, Cr and Ti are highly deficient where the magnetic field lines are vertical (near the magnetic pole), while Fe is highly overabundant there.
We present an efficient implementation of the highly robust and scalable GenEO preconditioner in the high-performance PDE framework DUNE. The GenEO coarse space is constructed by combining low energy solutions of a local generalised eigenproblem using a partition of unity. In this paper we demonstrate both weak and strong scaling for the GenEO solver on over 15,000 cores by solving an industrially motivated problem with over 200 million degrees of freedom. Further, we show that for highly complex parameter distributions arising in certain real-world applications, established methods become intractable while GenEO remains fully effective. The purpose of this paper is two-fold: to demonstrate the robustness and high parallel efficiency of the solver and to document the technical details that are crucial to the efficiency of the code.
High-precision spectrographs play a key role in exoplanet searches using the radial velocity technique. But at the accuracy level of 1 m s$^{-1}$ required for super-Earth characterization, the stability of fiber-fed spectrograph performance is crucial, considering variable observing conditions such as seeing, guiding and centering errors, and telescope vignetting. In fiber-fed spectrographs such as HARPS or SOPHIE, the scrambling properties of the fiber link are one of the main issues. Both the stability of the fiber near-field uniformity at the spectrograph entrance and that of the far-field illumination on the echelle grating (pupil) are critical for high-precision radial velocity measurements, due to the spectrograph geometrical field and aperture aberrations. We conducted tests on the SOPHIE spectrograph at the 1.93-m OHP telescope to measure the instrument sensitivity to the fiber link light feeding conditions: star decentering, telescope vignetting by the dome, and defocusing. To significantly improve on the current precision, we designed a fiber link modification considering the spectrograph operational constraints. We have developed a new link which includes a piece of octagonal-section fiber, which has good scrambling properties, within the former circular-section fiber link, and we tested the concept on a bench to characterize the near-field and far-field scrambling properties. This modification was implemented in spring 2011 on the SOPHIE spectrograph fibers and tested for the first time directly on the sky to demonstrate the gain compared with the previous fiber link. Scientific validation for exoplanet search and characterization has been conducted by observing standard stars.
On his way to General Relativity (GR), Einstein gave several arguments as to why a special relativistic theory of gravity based on a massless scalar field could be ruled out merely on grounds of theoretical considerations. We re-investigate his two main arguments, which relate to energy conservation and some form of the principle of the universality of free fall. We find such a theory-based a priori abandonment not to be justified. Rather, the theory seems formally perfectly viable, though in clear contradiction with (later) experiments. This may be of interest to those who teach GR and/or have an active interest in its history.
The system of N scalar particles with Grassmann-valued color charges plus the color SU(3) Yang-Mills field is reformulated on spacelike hypersurfaces. The Dirac observables are found and the physical invariant mass of the system in the Wigner-covariant rest-frame instant form of dynamics (covariant Coulomb gauge) is given. From the reduced Hamilton equations we extract the second order equations of motion both for the reduced transverse color field and the particles. Then, we study this relativistic scalar quark model, deduced from the classical QCD Lagrangian and with the color field present, in the N=2 (meson) case. A special form of the requirement of having only color singlets, suited for a field-independent quark model, produces a ``pseudoclassical asymptotic freedom" and a regularization of the quark self-energy.
Corrected trapezoidal rules are proved for $\int_a^b f(x)\,dx$ under the assumption that $f''\in L^p([a,b])$ for some $1\leq p\leq\infty$. Such quadrature rules involve the trapezoidal rule modified by the addition of a term $k[f'(a)-f'(b)]$. The coefficient $k$ in the quadrature formula that minimizes the error estimate is found. It is shown that when $f'$ is merely assumed to be continuous, the optimal rule is the trapezoidal rule itself. In this case the error estimates are in terms of the Alexiewicz norm. This includes the case when $f''$ is integrable in the Henstock--Kurzweil sense or as a distribution. All error estimates are shown to be sharp for the given assumptions on $f''$. It is shown how to make these formulas exact for all cubic polynomials $f$. Composite formulas are computed for uniform partitions.
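For reference, the classical composite form of such an endpoint-corrected rule uses the coefficient $k = h^2/12$ on the endpoint derivatives and is exact for cubic polynomials. The sketch below assumes a smooth integrand; the point of the paper is that much weaker assumptions on $f''$ still admit sharp error bounds.

```python
import numpy as np

def corrected_trapezoid(f, fprime, a, b, n):
    """Composite trapezoidal rule with the endpoint-derivative correction
    (h^2 / 12) * [f'(a) - f'(b)]; exact for all cubic polynomials."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    trap = h * (0.5 * f(a) + f(x[1:-1]).sum() + 0.5 * f(b))
    return trap + h ** 2 / 12.0 * (fprime(a) - fprime(b))

# Exactness check on a cubic, and a smooth test integrand.
f  = lambda x: x ** 3 - 2 * x + 1
fp = lambda x: 3 * x ** 2 - 2
print(corrected_trapezoid(f, fp, 0.0, 2.0, 4) - 2.0)            # ~0 (exact value of the integral is 2)
print(corrected_trapezoid(np.sin, np.cos, 0.0, np.pi, 16) - 2.0) # small error (exact value is 2)
```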
We present an active visual search model for finding objects in unknown environments. The proposed algorithm guides the robot towards the sought object using the relevant stimuli provided by the visual sensors. Existing search strategies are either purely reactive or use simplified sensor models that do not exploit all the visual information available. In this paper, we propose a new model that actively extracts visual information via visual attention techniques and, in conjunction with a non-myopic decision-making algorithm, leads the robot to search more relevant areas of the environment. The attention module couples both top-down and bottom-up attention models enabling the robot to search regions with higher importance first. The proposed algorithm is evaluated on a mobile robot platform in a 3D simulated environment. The results indicate that the use of visual attention significantly improves search, but the degree of improvement depends on the nature of the task and the complexity of the environment. In our experiments, we found that performance enhancements of up to 42\% in structured and 38\% in highly unstructured cluttered environments can be achieved using visual attention mechanisms.
We have shown that a particular class of non-local free field theories has conformal symmetry in arbitrary dimensions. Using the local field theory counterpart of this class, we have found the Noether currents and Ward identities of the translation, rotation and scale symmetries. The operator product expansion of the energy-momentum tensor with quasi-primary fields is also investigated.
This contribution investigates the Degrees-of-Freedom (DoF) region of a two-user frequency-correlated Multiple-Input-Single-Output (MISO) Broadcast Channel (BC) with imperfect Channel State Information at the transmitter (CSIT). We assume that the system consists of an arbitrary number of subbands, denoted as $L$. Moreover, the CSIT state varies across users and subbands. A tight outer bound is found as a function of the minimum average CSIT quality between the two users. Based on the CSIT states across the subbands, the DoF region is interpreted as a weighted sum of the optimal DoF regions in the scenarios where the CSIT of both users is perfect, alternatingly perfect, or not known. Inspired by the weighted-sum interpretation and identifying the benefit of the optimal scheme for unmatched CSIT proposed by Chen et al., we also design a scheme achieving the upper bound for the general $L$-subband scenario in the frequency-domain BC, thus establishing the optimality of the DoF region.
Spatial heterogeneity can have dramatic effects on the biochemical networks that drive cell regulation and decision-making. For this reason, a number of methods have been developed to model spatial heterogeneity and incorporated into widely used modeling platforms. Unfortunately, the standard approaches for specifying and simulating chemical reaction networks become untenable when dealing with multi-state, multi-component systems that are characterized by combinatorial complexity. To address this issue, we developed MCell-R, a framework that extends the particle-based spatial Monte Carlo simulator, MCell, with the rule-based model specification and simulation capabilities provided by BioNetGen and NFsim. The BioNetGen syntax enables the specification of biomolecules as structured objects whose components can have different internal states that represent such features as covalent modification and conformation and which can bind components of other molecules to form molecular complexes. The network-free simulation algorithm used by NFsim enables efficient simulation of rule-based models even when the size of the network implied by the biochemical rules is too large to enumerate explicitly, which frequently occurs in detailed models of biochemical signaling. The result is a framework that can efficiently simulate systems characterized by combinatorial complexity at the level of spatially-resolved individual molecules over biologically relevant time and length scales.
Anonymous social networks present a number of new and challenging problems for existing Social Network Analysis techniques. Traditionally, methods for analysing graph structure, such as community detection, have required global knowledge of the graph structure, which implies that a centralised entity must be given access to the edge list of each node in the graph. This is impossible for anonymous social networks and other settings where privacy is valued by the participants. In addition, using the graph structure as input for learning tasks defeats the purpose of anonymity. In this work, we hypothesise that the HyperANF (a.k.a. HyperBall) algorithm -- intended for approximate diameter estimation -- can be re-purposed for privacy-preserving community detection in friend-recommendation systems that learn from an anonymous representation of the social network graph structure with limited privacy impact. This is possible because the core data structure maintained by HyperBall is a HyperLogLog counter of the number of neighbours reachable from a given node. Exchanging this data structure in future decentralised learning deployments gives away no information about the neighbours of the node and therefore preserves the privacy of the graph structure.
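A toy sketch of the underlying idea follows: each node keeps a HyperLogLog counter seeded with itself and repeatedly merges in its neighbours' counters, so that after $t$ rounds the counter estimates the size of the ball of radius $t$ around the node, while the exchanged registers reveal only approximate set sizes rather than neighbour lists. The register size, hash choice and small-range correction are illustrative simplifications of a production HyperBall implementation.

```python
import hashlib
import math

P = 6                       # 2^6 = 64 registers per counter (toy precision)
M = 1 << P
ALPHA = 0.709               # standard bias-correction constant for m = 64 registers

def h64(x):
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest()[:8], "big")

def add(reg, item):
    x = h64(item)
    j = x & (M - 1)                       # register index from the low bits
    w = x >> P                            # remaining (64 - P) bits
    rho = (64 - P) - w.bit_length() + 1   # position of the leftmost 1-bit, counting from 1
    reg[j] = max(reg[j], rho)

def merge(a, b):
    return [max(x, y) for x, y in zip(a, b)]

def estimate(reg):
    z = sum(2.0 ** (-r) for r in reg)
    e = ALPHA * M * M / z                 # raw HyperLogLog estimate
    zeros = reg.count(0)
    if e <= 2.5 * M and zeros:            # small-range (linear counting) correction
        e = M * math.log(M / zeros)
    return e

# HyperBall-style iteration on a toy undirected graph given as adjacency lists.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
counters = {}
for v in graph:
    reg = [0] * M
    add(reg, v)                           # ball of radius 0 contains only the node itself
    counters[v] = reg

for _ in range(3):                        # after t rounds, counter[v] approximates |ball(v, t)|
    new_counters = {}
    for v, neighbours in graph.items():
        reg = list(counters[v])
        for u in neighbours:
            reg = merge(reg, counters[u])
        new_counters[v] = reg
    counters = new_counters

print({v: round(estimate(r), 1) for v, r in counters.items()})
```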
Within leading-order perturbation theory, the Casimir-Polder potential of a ground-state atom placed within an arbitrary arrangement of dispersing and absorbing linear bodies can be expressed in terms of the polarizability of the atom and the scattering Green tensor of the body-assisted electromagnetic field. Based on a Born series of the Green tensor, a systematic expansion of the Casimir-Polder potential in powers of the susceptibilities of the bodies is presented. The Born expansion is used to show how and under which conditions the Casimir-Polder force can be related to microscopic many-atom van der Waals forces, for which general expressions are presented. As an application, the Casimir-Polder potentials of an atom near a dielectric ring and an inhomogeneous dielectric half space are studied and explicit expressions are presented that are valid up to second order in the susceptibility.
Distillation protocols enable generation of high quality entanglement even in the presence of noise. Existing protocols ignore the presence of local information in mixed states produced from some noise sources such as photon loss, amplitude damping or thermalization. We propose new protocols that exploit local information in mixed states. Our protocols converge to higher fidelities in fewer rounds, and when local information is significant one of our protocols consistently improves yields by 10 fold or more. We demonstrate that our protocols can be compacted into an entanglement-pumping scheme, allowing quantum computation in distributed systems with a few qubits per location.
The binding energy of the $1^-$ state (ortho-positronium) in QED is calculated using the one-photon-exchange Bethe-Salpeter equation in the Feynman and Coulomb gauges for different coupling constants $\alpha$. The calculations show a remarkable difference between the binding-energy values obtained in these two gauges for the various coupling constants.
This article provides a thorough meta-analysis of the anomaly detection problem. To accomplish this we first identify approaches to benchmarking anomaly detection algorithms across the literature and produce a large corpus of anomaly detection benchmarks that vary in their construction across several dimensions we deem important to real-world applications: (a) point difficulty, (b) relative frequency of anomalies, (c) clusteredness of anomalies, and (d) relevance of features. We apply a representative set of anomaly detection algorithms to this corpus, yielding a very large collection of experimental results. We analyze these results to understand many phenomena observed in previous work. First we observe the effects of experimental design on experimental results. Second, results are evaluated with two metrics, ROC Area Under the Curve and Average Precision. We employ statistical hypothesis testing to demonstrate the value (or lack thereof) of our benchmarks. We then offer several approaches to summarizing our experimental results, drawing several conclusions about the impact of our methodology as well as the strengths and weaknesses of some algorithms. Last, we compare results against a trivial solution as an alternate means of normalizing the reported performance of algorithms. The intended contributions of this article are many; in addition to providing a large publicly-available corpus of anomaly detection benchmarks, we provide an ontology for describing anomaly detection contexts, a methodology for controlling various aspects of benchmark creation, guidelines for future experimental design and a discussion of the many potential pitfalls of trying to measure success in this field.
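The two evaluation metrics named above can be computed as in the following sketch (scikit-learn is an assumed dependency, and the labels and scores are synthetic); it also shows the kind of trivial random-scoring baseline the article compares against.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

# Synthetic anomaly-detection output: 1000 points, 2% anomalies (label 1),
# with anomaly scores that separate the classes only partially.
y_true = np.zeros(1000, dtype=int)
y_true[rng.choice(1000, size=20, replace=False)] = 1
scores = rng.normal(size=1000) + 2.0 * y_true

print("ROC AUC           :", roc_auc_score(y_true, scores))
print("Average Precision :", average_precision_score(y_true, scores))

# A trivial baseline of the kind used for normalization: random scores give
# ROC AUC ~ 0.5 and Average Precision ~ the anomaly frequency (~0.02 here).
random_scores = rng.normal(size=1000)
print("Baseline AP       :", average_precision_score(y_true, random_scores))
```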
In recent work arXiv:2109.07820 we have shown the equivalence of the widely used nonconvex (generalized) branched transport problem with a shape optimization problem of a street or railroad network, known as the (generalized) urban planning problem. The argument was solely based on an explicit construction and characterization of competitors. In the current article we instead analyse the dual perspective associated with both problems. In more detail, the shape optimization problem involves the Wasserstein distance between two measures with respect to a metric depending on the street network. We show a Kantorovich--Rubinstein formula for Wasserstein distances on such street networks under mild assumptions. Further, we provide a Beckmann formulation for such Wasserstein distances under assumptions which generalize our previous result in arXiv:2109.07820. As an application we then give an alternative, duality-based proof of the equivalence of both problems under a growth condition on the transportation cost, which reveals that urban planning and branched transport can both be viewed as two bilinearly coupled convex optimization problems.
A long-standing problem of astrophysical research is how to simultaneously obtain spectra of thousands of sources randomly positioned in the field of view of a telescope. Digital Micromirror Devices, used as optical switches, provide a most powerful solution, allowing the design of a new generation of instruments with unprecedented capabilities. We illustrate the key factors (opto-mechanical, cryo-thermal, cosmic radiation environment, ...) that constrain the design of DMD-based multi-object spectrographs, with particular emphasis on the IR spectroscopic channel onboard the EUCLID mission, currently considered by the European Space Agency for a 2017 launch date.
We study analogues of well-known relationships between Muckenhoupt weights and $BMO$ in the setting of Bekoll\'e-Bonami weights. For Bekoll\'e-Bonami weights of bounded hyperbolic oscillation, we provide distance formulas of Garnett and Jones type, in the context of $BMO$ on the unit disc and hyperbolic Lipschitz functions. This leads to a characterization of all weights in this class for which any power of the weight is a Bekoll\'e-Bonami weight, which in particular reveals an intimate connection between Bekoll\'e-Bonami weights and Bloch functions. On the open problem of characterizing the closure of bounded analytic functions in the Bloch space, we provide a counter-example to a related recent conjecture. This sheds light on the difficulty of preserving harmonicity in approximation problems in norms equivalent to the Bloch norm. Finally, we apply our results to study certain spectral properties of Ces\`aro operators.
Art history has linked some early 20th Century avant-garde visual art movements to contemporary systems of ideas in mathematics and theoretical physics. One of the proposed connections is the one that might have existed between Cubism and Relativity, or more precisely, between Picasso and Einstein. The suggested links are similarity (in a weak version) or identity (in a strong version) in matters of space, time and simultaneity. It is possible, however, that these supposed links between Einstein and avant-garde art movements were more the product of the imagination of historians and critics than the result of connections between painters and scientists. On the one hand, the visual arts (in contrast to music, as far as we know) were of no interest to Einstein, who, moreover, did not seem inclined or knowledgeable enough to appreciate advanced forms. On the other hand, Einstein's theories fell outside the artists' ken, let alone their understanding, although there are firm clues pointing to the fact that repercussions of those theories in the press and in literary circles could have fired the imagination of some artists.
In this work, Machine Learning (ML) methods are used to efficiently identify the unassociated sources and the Blazar Candidates of Uncertain type (BCUs) in the Fermi-LAT Third Source Catalog (3FGL). The aims are twofold: 1) to distinguish the Active Galactic Nuclei (AGNs) from the others (non-AGNs) among the unassociated sources; 2) to classify the BCUs into BL Lacertae objects (BL Lacs) or Flat Spectrum Radio Quasars (FSRQs). Two dimension-reduction methods are presented to decrease the computational complexity, and Random Forest (RF), Multilayer Perceptron (MLP) and Generative Adversarial Nets (GAN) are trained as individual models. To achieve better performance, an ensemble technique is further explored. It is also demonstrated that grid search helps to choose the hyper-parameters of the models and to decide the final predictor, with which we have identified 748 AGNs out of the 1010 unassociated sources, with an accuracy of 97.04%. Among the 573 BCUs, 326 have been identified as BL Lacs and 247 as FSRQs, with an accuracy of 92.13%.
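A minimal scikit-learn sketch of this kind of workflow (dimension reduction, individual RF and MLP models, a soft-voting ensemble, and grid search over hyper-parameters) is given below. The synthetic features, parameter grids and omission of the GAN component are illustrative assumptions, not the actual 3FGL pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline

# Placeholder data standing in for gamma-ray source features (spectral index,
# variability index, flux ratios, ...): a binary task such as AGN vs non-AGN.
X, y = make_classification(n_samples=1500, n_features=12, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = Pipeline([
    ("reduce", PCA()),                                  # dimension-reduction step
    ("vote", VotingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("mlp", MLPClassifier(max_iter=1000, random_state=0))],
        voting="soft")),                                # average predicted probabilities
])

param_grid = {
    "reduce__n_components": [4, 8],
    "vote__rf__n_estimators": [100, 200],
    "vote__mlp__hidden_layer_sizes": [(32,), (64, 32)],
}
search = GridSearchCV(ensemble, param_grid, cv=3, n_jobs=-1)
search.fit(X_tr, y_tr)
print("best params:", search.best_params_)
print("held-out accuracy:", search.best_estimator_.score(X_te, y_te))
```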
Medical Dialogue Generation (MDG) aims to build a medical dialogue system for intelligent consultation that can communicate with patients in real time, thereby improving the efficiency of clinical diagnosis, with broad application prospects. This paper presents our framework for the Chinese MDG task organized as a competition by the 2021 China Conference on Knowledge Graph and Semantic Computing (CCKS), which requires generating context-consistent and medically meaningful responses conditioned on the dialogue history. In our framework, we propose a pipeline system composed of entity prediction and entity-aware dialogue generation, adding the predicted entities to the dialogue model through a fusion mechanism and thereby utilizing information from different sources. At the decoding stage, we propose a new decoding mechanism named Entity-revised Diverse Beam Search (EDBS) to improve entity correctness and promote the length and quality of the final response. The proposed method won both the CCKS competition and the International Conference on Learning Representations (ICLR) 2021 Workshop Machine Learning for Preventing and Combating Pandemics (MLPCP) Track 1 Entity-aware MED competition, which demonstrates the practicality and effectiveness of our method.
Scintillator detector response modelling has become an essential tool in various research fields such as particle and nuclear physics, astronomy or geophysics. Yet, due to the system complexity and the requirement for accurate electron response measurements, model inference and calibration remain a challenge. Here, we propose Compton edge probing to perform non-proportional scintillation model (NPSM) inference for inorganic scintillators. We use laboratory-based gamma-ray radiation measurements with a NaI(Tl) scintillator to perform Bayesian inference on an NPSM. Further, we apply machine learning to emulate the detector response obtained by Monte Carlo simulations. We show that the proposed methodology successfully constrains the NPSM and thereby quantifies the intrinsic resolution. Moreover, using the trained emulators, we can predict the spectral Compton edge dynamics as a function of the parameterized scintillation mechanisms. The presented framework offers a novel way to infer NPSMs for any inorganic scintillator without the need for additional electron response measurements.