Models with an orbifolded universal extra dimension receive important loop-induced corrections to the masses and couplings of Kaluza-Klein (KK) particles. The dominant contributions stem from so-called boundary terms which violate KK number. Previously, only the parts of these boundary terms proportional to $\ln(\Lambda R)$ have been computed, where $R$ is the radius of the extra dimension and $\Lambda$ is the cut-off scale. However, for typical values of $\Lambda R \sim 10 \cdots 50$, the logarithms are not particularly large and non-logarithmic contributions may be numerically important. In this paper, these remaining finite terms are computed and their phenomenological impact is discussed. It is shown that the finite terms have a significant impact on the KK mass spectrum. Furthermore, one finds new KK-number violating interactions that do not depend on $\ln(\Lambda R)$ but nevertheless are non-zero. These lead to new production and decay channels for level-2 KK particles at colliders.
Radiative corrections to masses and couplings in Universal Extra Dimensions
The aim of this article is to deepen notions of quantification from the perspective of modulated logic. To this end, it considers the modulated logic of the plausible, which seeks to formalize the quantifier of ubiquity. The text presents a proposal, introduced by Paul Halmos, to interpret a classical quantifier in algebraic models and, as an original contribution, extends this model to an algebraic model for the quantifier of ubiquity.
An algebraic model of the quantifier of ubiquity
Three-body collisions of ultracold identical Bose atoms under tight cylindrical confinement are analyzed. A Feshbach resonance in two-body collisions is described by a two-channel zero-range interaction. Elimination of the closed channel in the three-body problem reduces the interaction to a one-channel zero-range one with an energy dependent strength. The related problem with an energy independent strength (the Lieb-Liniger-McGuire model) has an exact solution and forbids all chemical processes, such as three-atom association and diatom dissociation, as well as reflection in atom-diatom collisions. The resonant case is analyzed by a numerical solution of the Faddeev-Lovelace equations. The results demonstrate that as the internal symmetry of the Lieb-Liniger-McGuire model is lifted, the reflection and chemical reactions become allowed and may be observed in experiments.
One-dimensional Bose chemistry: effects of non-integrability
In this review we compare the three existing sets of theoretical yields of zero-metal massive stars available in the literature. We also show how each of these three sets of yields fits the element abundance ratios observed in the extremely metal-poor star CD -38 245. We find that, at present, no theoretical set of yields of zero-metal massive stars is able to satisfactorily reproduce the elemental ratios [X/Fe] of this star.
The Chemical Yields Produced by Zero Metal Massive Stars
(TMTTF)2AsF6 undergoes two phase transitions upon cooling from 300 K. At Tco=103 K a charge-ordering (CO) occurs, and at Tsp(B=9 T)=11 K the material undergoes a spin-Peierls (SP) transition. Within the intermediate, CO phase, the charge disproportionation ratio is found to be at least 3:1 from carbon-13 NMR 1/T1 measurements on spin-labeled samples. Above Tsp, up to about 3Tsp, 1/T1 is independent of temperature, indicative of low-dimensional magnetic correlations. With the application of about 0.15 GPa pressure, Tsp increases substantially, while Tco is rapidly suppressed, demonstrating that the two orders are competing. The experiments are compared to results obtained from calculations on the 1D extended Peierls-Hubbard model.
Competition and coexistence of bond and charge orders in (TMTTF)2AsF6
This paper introduces a new surgical end-effector probe, which allows a contact force to be accurately applied to a tissue while at the same time allowing high-resolution and highly repeatable probe movement. These are achieved by implementing a cable-driven parallel manipulator arrangement, which is deployed at the distal end of a robotic instrument. The combination of the offered qualities can be advantageous in several ways, with possible applications including: large-area endomicroscopy and multi-spectral imaging, micro-surgery, tissue palpation, and safe energy-based and conventional tissue resection. To demonstrate the concept and its adaptability, the probe is integrated with a modified da Vinci robot instrument.
A cable-driven parallel manipulator with force sensing capabilities for high-accuracy tissue endomicroscopy
Using a state-of-the-art full-potential electronic structure method within the local spin density approximation, we study the electronic and magnetic structure of Mn$_2$V-based full Heusler alloys: Mn$_2$VZ (Z=Al, Ga, In, Si, Ge, and Sn). We show that a small expansion of the calculated theoretical equilibrium lattice constants restores the half-metallic ferrimagnetism in these compounds. Moreover, a small degree of disorder between the V and Z atoms, although it induces some states within the gap, preserves the Slater-Pauling behaviour of the spin magnetic moments, and the alloys keep a high degree of spin polarisation at the Fermi level, opening the way for a half-metallic compensated ferrimagnet.
Search for half-metallic ferrimagnetism in V-based Heusler alloys Mn$_2$VZ (Z$=$Al, Ga, In, Si, Ge, Sn)
We present polarimetric imaging of the protoplanetary nebula RAFGL 2688 obtained at 4.5 microns with the Infrared Space Observatory (ISO). We have deconvolved the images to remove the signature of the point spread function of the ISO telescope, to the extent possible. The deconvolved 4.5 micron image and polarimetric map reveal a bright point source with faint, surrounding reflection nebulosity. The reflection nebula is brightest to the north-northeast, in agreement with previous ground- and space-based infrared imaging. Comparison with previous near-infrared polarimetric imaging suggests that the polarization of starlight induced by the dust grains in RAFGL 2688 is more or less independent of wavelength between 2 microns and 4.5 microns. This, in turn, indicates that scattering dominates over thermal emission at wavelengths as long as ~5 microns, and that the dust grains have characteristic radii < 1 micron.
Infrared Space Observatory Polarimetric Imaging of the Egg Nebula (RAFGL 2688)
We derive new results related to the portfolio choice problem for power and logarithmic utilities. Assuming that the portfolio returns follow an approximate log-normal distribution, the closed-form expressions of the optimal portfolio weights are obtained for both utility functions. Moreover, we prove that both optimal portfolios belong to the set of mean-variance feasible portfolios and establish necessary and sufficient conditions such that they are mean-variance efficient. Furthermore, an application to the stock market is presented and the behavior of the optimal portfolio is discussed for different values of the relative risk aversion coefficient. It turns out that the assumption of log-normality does not seem to be a strong restriction.
Mean-Variance Efficiency of Optimal Power and Logarithmic Utility Portfolios
We study in this paper a new family of stable algebraic symplectic vector bundles of rank $ 2n $ on the complex projective space $P^{2n+1}$, to which the classical null correlation bundles belong. We show that these bundles are invariant under a miniversal deformation. We also study sufficient cohomological conditions for a symplectic vector bundle on a projective variety to be stable.
Weighted 0-correlation vector bundles on the space $P^{2n+1}$
The close interplay between superconductivity and antiferromagnetism in several quantum materials can lead to the appearance of an unusual thermodynamic state in which both orders coexist microscopically, despite their competing nature. A hallmark of this coexistence state is the emergence of a spin-triplet superconducting gap component, called $\pi$-triplet, which is spatially modulated by the antiferromagnetic wave-vector, reminiscent of a pair-density wave. In this paper, we investigate the impact of these $\pi$-triplet degrees of freedom on the phase diagram of a system with competing antiferromagnetic and superconducting orders. Although we focus on a microscopic two-band model that has been widely employed in studies of iron pnictides, most of our results follow from a Ginzburg-Landau analysis, and as such should be applicable to other systems of interest, such as cuprates and heavy fermions. The Ginzburg-Landau functional reveals not only that the $\pi$-triplet gap amplitude couples tri-linearly with the singlet gap amplitude and the staggered magnetization magnitude, but also that the $\pi$-triplet $d$-vector couples linearly with the magnetization direction. While in the mean field level this coupling forces the $d$-vector to align parallel or anti-parallel to the magnetization, in the fluctuation regime it promotes two additional collective modes - a Goldstone mode related to the precession of the $d$-vector around the magnetization and a massive mode, related to the relative angle between the two vectors, which is nearly degenerate with a Leggett-like mode associated with the phase difference between the singlet and triplet gaps. We also investigate the impact of magnetic fluctuations on the superconducting-antiferromagnetic phase diagram, showing that due to their coupling with the $\pi$-triplet order parameter, the coexistence region is enhanced.
Induced spin-triplet pairing in the coexistence state of antiferromagnetism and singlet superconductivity: collective modes and microscopic properties
End-to-end weakly supervised semantic segmentation aims at optimizing a segmentation model in a single-stage training process based only on image annotations. Existing methods adopt an online-trained classification branch to provide pseudo annotations for supervising the segmentation branch. However, this strategy makes the classification branch dominate the whole concurrent training process, hindering the two branches from assisting each other. In our work, we treat the two branches equally by viewing them as diverse ways to generate the segmentation map, and add interactions on both their supervision and their operation to achieve mutual promotion. To this end, a bidirectional supervision mechanism is elaborated to enforce consistency between the outputs of the two branches, so that the segmentation branch can also give feedback to the classification branch and enhance the quality of its localization seeds. Moreover, we design interaction operations between the two branches to exchange knowledge and assist each other. Experiments show that our method outperforms existing end-to-end weakly supervised segmentation methods.
Branches Mutual Promotion for End-to-End Weakly Supervised Semantic Segmentation
This dissertation explores the impact of bias in deep neural networks and presents methods for reducing its influence on model performance. The first part begins by categorizing and describing potential sources of bias and errors in data and models, with a particular focus on bias in machine learning pipelines. The next chapter outlines a taxonomy and methods of Explainable AI as a way to justify predictions and control and improve the model. Then, as an example of a laborious manual data inspection and bias discovery process, a skin lesion dataset is manually examined. A Global Explanation for the Bias Identification method is proposed as an alternative semi-automatic approach to manual data exploration for discovering potential biases in data. Relevant numerical methods and metrics are discussed for assessing the effects of the identified biases on the model. Whereas identifying errors and bias is critical, improving the model and reducing the number of flaws in the future is an absolute priority. Hence, the second part of the thesis focuses on mitigating the influence of bias on ML models. Three approaches are proposed and discussed: Style Transfer Data Augmentation, Targeted Data Augmentations, and Attribution Feedback. Style Transfer Data Augmentation aims to address shape and texture bias by merging a style of a malignant lesion with a conflicting shape of a benign one. Targeted Data Augmentations randomly insert possible biases into all images in the dataset during the training, as a way to make the process random and, thus, destroy spurious correlations. Lastly, Attribution Feedback is used to fine-tune the model to improve its accuracy by eliminating obvious mistakes and teaching it to ignore insignificant input parts via an attribution loss. The goal of these approaches is to reduce the influence of bias on machine learning models, rather than eliminate it entirely.
Data augmentation and explainability for bias discovery and mitigation in deep learning
A decrement in the Cosmic Microwave Background (CMB) has been observed by the Ryle Telescope towards a pair of, possibly lensed, quasars (PC1643+4631 A&B). Assuming that the decrement is due to the Sunyaev-Zel'dovich (S-Z) effect, this is indicative of a very rich intervening cluster, although no X-ray emission has yet been observed in that direction. In order to investigate these problems, we present a new model for the formation of distant spherically symmetric clusters in an expanding Universe. Computation of photon paths allows us to evaluate the gravitational effects on CMB photons passing through the evolving mass (i.e. the Rees-Sciama effect). The lensing properties of the cluster are also considered, so that the model can be applied to the PC1643+4631 case to retrieve both the S-Z flux and the separation of the quasar pair. We find that the Rees-Sciama effect might contribute significantly to the overall observed CMB decrement.
Invisible Clusters and CMB Decrements
In August 2019, we introduced to our members and customers the idea of moving LinkedIn's two core talent products -- Jobs and Recruiter -- onto a single platform to help talent professionals be even more productive. This single platform is called the New Recruiter & Jobs. A critical and difficult part of this effort was migrating customers' existing data from the legacy database to the new database while ensuring no data discrepancy and no downtime. In this article, we discuss the general architecture for a successful data migration and the thought process we followed. We then apply these ideas to our circumstances and explain our specific challenges and solutions in more detail. In the Ramp Process section, we explain the inherent difficulties in satisfying our success criteria and describe how we overcame these difficulties and fulfilled the success criteria in practice.
New Recruiter and Jobs: The Largest Enterprise Data Migration at LinkedIn
Motivated by the prevalence of the ``price protection guarantee'', which allows a customer who purchased a product in the past to receive a refund from the seller during the so-called price protection period (typically defined as a certain time window after the purchase date) in case the seller decides to lower the price, we study the impact of such a policy on the design of online learning algorithms for data-driven dynamic pricing with initially unknown customer demand. We consider a setting where a firm sells a product over a horizon of $T$ time steps. For this setting, we characterize how the value of $M$, the length of the price protection period, affects the optimal regret of the learning process. We show that the optimal regret is $\tilde{\Theta}(\sqrt{T}+\min\{M,\,T^{2/3}\})$ by first establishing a fundamental impossibility regime with novel regret lower bound instances. Then, we propose LEAP, a phased exploration type algorithm for \underline{L}earning and \underline{EA}rning under \underline{P}rice Protection, which matches this lower bound up to logarithmic factors, or even doubly logarithmic factors when there are only two prices available to the seller. Our results reveal surprising phase transitions of the optimal regret with respect to $M$. Specifically, when $M$ is not too large, the optimal regret is essentially the same as in the classic setting with no price protection guarantee. We also show that there exists an upper limit on how much the optimal regret can deteriorate as $M$ grows large. Finally, we conduct extensive numerical experiments to show the benefit of LEAP over other heuristic methods for this problem.
Phase Transitions in Learning and Earning under Price Protection Guarantee
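The phase transition described in this abstract can be read off directly from the stated bound: for small $M$ the $\sqrt{T}$ term dominates, while for $M \gtrsim T^{2/3}$ the $\min$ saturates and the regret stops deteriorating. A minimal sketch of this scaling (illustrative arithmetic only, up to constants and log factors; this is not the LEAP algorithm itself, and the function name is ours):

```python
import math

def optimal_regret_scaling(T: int, M: int) -> float:
    """Scaling of the optimal regret ~ sqrt(T) + min(M, T^(2/3)),
    as stated in the abstract, ignoring constants and log factors."""
    return math.sqrt(T) + min(M, T ** (2 / 3))

# For T = 10^6: sqrt(T) = 10^3 and T^(2/3) = 10^4, so the regret
# grows linearly in M only until M reaches T^(2/3), then plateaus.
T = 10 ** 6
for M in (0, 100, 10_000, 10 ** 6):
    print(M, optimal_regret_scaling(T, M))
```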
Exploiting the emerging nanoscale periodicities in epitaxial, single-crystal thin films is an exciting direction in quantum materials science: confinement and periodic distortions induce novel properties. The structural motifs of interest are ferroelastic, ferroelectric, multiferroic, and, more recently, topologically protected magnetization and polarization textures. A critical step towards heterostructure engineering is understanding their nanoscale structure, best achieved through real-space imaging. X-ray Bragg coherent diffractive imaging visualizes sub-picometer crystalline displacements with tens of nanometers spatial resolution. Yet, it is limited to objects spatially confined in all three dimensions and requires highly coherent, laser-like x-rays. Here we lift the confinement restriction by developing real-space imaging of periodic lattice distortions: we combine an iterative phase retrieval algorithm with unsupervised machine learning to invert the diffuse scattering in conventional x-ray reciprocal-space mapping into real-space images of polar and elastic textures in thin epitaxial films. We first demonstrate our imaging in PbTiO3/SrTiO3 superlattices to be consistent with published phase-field model calculations. We then visualize strain-induced ferroelastic domains emerging during the metal-insulator transition in Ca2RuO4 thin films. Instead of homogeneously transforming into a low-temperature structure (like in bulk), the strained Mott insulator splits into nanodomains with alternating lattice constants, as confirmed by cryogenic scanning transmission electron microscopy. Our study reveals the type, size, orientation, and crystal displacement field of the nano-textures. The non-destructive imaging of textures promises to improve models for their dynamics and enable advances in quantum materials and microelectronics.
Real-space imaging of polar and elastic nano-textures in thin films via inversion of diffraction data
In this paper, we prove by means of a counterexample that there exist pairs of integers $(n,p)$ with $n\geq 3$, $2\leq p\leq n-1$, and open sets $D$ in $C^{n}$ which are cohomologically $p$-complete with respect to the structure sheaf of $D$, such that the cohomology group $H_{n+p}(D,Z)$ does not vanish. In particular, $D$ is not $p$-complete.
On the integral homology and counterexamples to the Andreotti-Grauert conjecture
Computational modeling is usually applied to aid experimental exploration of advanced materials and to better understand the fundamental plasticity mechanisms at play during mechanical testing. In this work, we perform molecular dynamics (MD) simulations to emulate experimental room-temperature spherical nanoindentation of crystalline W matrices with different interatomic potentials: EAM, modified EAM, and a recently developed machine-learning-based tabulated Gaussian approximation potential (tabGAP) describing the W-W interaction. Results show similarities between the load-displacement and stress-strain curves, regardless of the numerical model. However, a discrepancy is observed at early stages of the elastic-to-plastic deformation transition, showing different mechanisms for dislocation nucleation and evolution, which is attributed to differences in Burgers vector magnitudes, stacking fault energies, and dislocation glide energies. In addition, the contact pressure is investigated for large indenter sizes, which provides a detailed analysis of screw and edge dislocations during the loading process. Furthermore, the glide barriers of these dislocations are reported for all the interatomic potentials, showing that the tabGAP model gives the most accurate results with respect to density functional theory calculations and good qualitative agreement with reported experimental data.
Atomistic simulations of nanoindentation in single crystalline tungsten: The role of interatomic potentials
We present a short derivation and discussion of the master equation for an open quantum system weakly coupled to a heat bath, and then its generalization to the case of periodic external driving based on Floquet theory. Further, the single heat bath is replaced by several ones. We also present a definition of heat currents which satisfies the second law of thermodynamics, and apply the general results to a simple model of a periodically modulated qubit.
Periodically driven quantum open systems: Tutorial
We determine the support of the irreducible spherical representation (i.e., the irreducible quotient of the polynomial representation) of the rational Cherednik algebra of a finite Coxeter group for any value of the parameter c. In particular, we determine for which values of c this representation is finite dimensional. This generalizes a result of Varagnolo and Vasserot, arXiv:0705.2691, who classified finite dimensional spherical representations in the case of Weyl groups and equal parameters (i.e., when c is a constant function). As an application, we compute the zero set of the kernel of the Macdonald pairing in the trigonometric case (for equal parameters). Our proof is based on the Macdonald-Mehta integral and the elementary theory of distributions.
Supports of irreducible spherical representations of rational Cherednik algebras of finite Coxeter groups
We use high-resolution K-band VLT/HAWK-I imaging over 0.25 square degrees to study the structural evolution of massive early-type galaxies since z~1. Mass-selected samples, complete down to log(M/M_sun)~10.7 such that `typical' L* galaxies are included at all redshifts, are drawn from pre-existing photometric redshift surveys. We then separate the samples into different redshift slices and classify the galaxies as late- or early-type on the basis of their specific star-formation rate. Axis-ratio measurements for the ~400 early-type galaxies in the redshift range 0.6<z<1.8 are accurate to 0.1 or better. The projected axis-ratio distributions are then compared with lower redshift samples. We find strong evidence for evolution of the population properties: early-type galaxies at z>1 are, on average, flatter than at z<1, and the median projected axis ratio at a fixed mass decreases with redshift. However, we also find that at all epochs z<~2 the very most massive early-type galaxies (log(M/M_sun)>11.3) are the roundest, with a pronounced lack among them of galaxies that are flat in projection. Merging is a plausible mechanism that can explain both results: at all epochs merging is required for early-type galaxies to grow beyond log(M/M_sun)~11.3, and all early types over time gradually and partially lose their disk-like characteristics.
Shape Evolution of Massive Early-Type Galaxies: Confirmation of Increased Disk Prevalence at z>1
We propose generalized quantization axioms for Nambu-Poisson manifolds, which allow for a geometric interpretation of n-Lie algebras and their enveloping algebras. We illustrate these axioms by describing extensions of Berezin-Toeplitz quantization to produce various examples of quantum spaces of relevance to the dynamics of M-branes, such as fuzzy spheres in diverse dimensions. We briefly describe preliminary steps towards making the notion of quantized 2-plectic manifolds rigorous by extending the groupoid approach to quantization of symplectic manifolds.
Branes, Quantization and Fuzzy Spheres
An approach to utilizing adaptive mesh refinement algorithms for storm surge modeling is proposed. Numerical models currently exist that can resolve the details of coastal regions, but they are often too costly to run in an ensemble forecasting framework without significant computing resources. The application of adaptive mesh refinement algorithms substantially lowers the computational cost of a storm surge model run while retaining much of the desired coastal resolution. The approach presented is implemented in the GeoClaw framework and compared to ADCIRC for Hurricane Ike, along with observed tide gauge data and the computational cost of each model run.
Adaptive Mesh Refinement for Storm Surge
Using a femtosecond laser writing technique, we fabricate and characterise three-waveguide digital adiabatic passage devices, with the central waveguide digitised into five discrete waveguidelets. Strongly asymmetric behaviour was observed: devices operated with high fidelity in the counter-intuitive scheme while strongly suppressing transmission in the intuitive one. The low differential loss of the digital adiabatic passage designs potentially offers additional functionality for adiabatic-passage-based devices. These devices operate with a high contrast ($>\!90\%$) over a 60 nm bandwidth centered at $\sim 823$ nm.
Digital Waveguide Adiabatic Passage Part 2: Experiment
Aims. To determine the credentials of nine candidate intermediate polars, in order to confirm whether or not they are magnetic cataclysmic variables. Methods. Frequency analysis of RXTE and XMM data was used to search for temporal variations that could be associated with the spin period of the magnetic white dwarf. X-ray spectral analysis was carried out to characterise the emission and absorption properties of each target. Results. The hard X-ray light curve of V2069 Cyg shows a pulse period of 743.2 s, and its spectrum is fit by an absorbed bremsstrahlung model with an iron line, confirming this to be a genuine intermediate polar. The hard X-ray light curve of the previously confirmed intermediate polar IGR J00234+6141 is shown to be consistent with the previous low-energy X-ray detection of a 563.5 s pulse period. The likely polar IGR J14536-5522 shows no coherent modulation at the previously identified period of 3.1 hr, but does exhibit a clear signal at periods likely to be harmonically related to it. Whilst RX J0153.3+7447, Swift J061223.0+701243.9, V436 Car and DD Cir are largely too faint in our RXTE observations to give any definitive results, the observations of IGR J16167-4957 and V2487 Oph show some characteristics of intermediate polars, and these objects remain good candidates. Conclusions. We confirmed one new hard X-ray selected intermediate polar from our sample, V2069 Cyg.
RXTE and XMM observations of intermediate polar candidates
We give a group-theoretic interpretation of relativistic holography as an equivalence between representations of the anti-de Sitter algebra describing bulk fields and boundary fields. Our main result is the explicit construction of the boundary-to-bulk operators for arbitrary integer spin in the framework of representation theory. Further, we show that these operators and the bulk-to-boundary operators are intertwining operators. In analogy to the de Sitter case, we show that each bulk field has two boundary (shadow) fields with conjugated conformal weights. These fields are related by another intertwining operator given by a two-point function on the boundary.
Intertwining Operator Realization of anti de Sitter Holography
We perform a global fit of the most relevant neutrinoless double beta decay experiments within the standard model with massive Majorana neutrinos. Using Bayesian inference makes it possible to take into account the theoretical uncertainties on the nuclear matrix elements in a fully consistent way. First, we analyze the data used to claim the observation of neutrinoless double beta decay in Ge-76, and find strong evidence (according to Jeffreys' scale) for a peak in the spectrum and moderate evidence that the peak is actually close to the energy expected for the neutrinoless decay. We also find a significantly larger statistical error than the original analysis, which we include in the comparison with other data. Then, we statistically test the consistency of this claim with recent measurements using Xe-136. We find that the two data sets are about 40 to 80 times more probable under the assumption that they are inconsistent, depending on the nuclear matrix element uncertainties and the prior on the smallest neutrino mass. Hence, there is moderate to strong evidence of incompatibility, and for equal prior probabilities the posterior probability of compatibility is between 1.3% and 2.5%. If one, despite such evidence for incompatibility, combines the two data sets, we find that the total evidence for neutrinoless double beta decay is negligible. If one ignores the claim, there is weak evidence against the existence of the decay. We also perform approximate frequentist tests of compatibility for fixed ratios of the nuclear matrix elements, as well as of the no-signal hypothesis. Generalization to other sets of experiments, as well as to other mechanisms mediating the decay, is possible.
Combining and comparing neutrinoless double beta decay experiments using different nuclei
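The posterior probabilities quoted in this abstract follow from the stated Bayes factors under equal prior odds: if the data are $B$ times more probable under incompatibility, the posterior probability of compatibility is $1/(1+B)$. A quick check of the quoted numbers (illustrative arithmetic only; the function name is ours):

```python
def posterior_compatibility(bayes_factor: float) -> float:
    """Posterior probability of compatibility, given a Bayes factor
    favouring incompatibility and equal prior probabilities."""
    return 1.0 / (1.0 + bayes_factor)

# Bayes factors of 40 to 80 give posteriors of roughly 2.4% down to
# 1.2%, consistent with the 1.3%-2.5% range quoted in the abstract.
for b in (40, 80):
    print(b, posterior_compatibility(b))
```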
Let $k$ be an algebraically closed field of characteristic 0, and $A$ a Cohen-Macaulay graded domain with $A_0=k$. If $A$ is semi-standard graded (i.e., $A$ is finitely generated as a $k[A_1]$-module), it has the $h$-vector $(h_0, h_1, ..., h_s)$, which encodes the Hilbert function of $A$. From now on, assume that $s=2$. It is known that if $A$ is standard graded (i.e., $A=k[A_1]$), then $A$ is level. We will show that, in the semi-standard case, if $A$ is not level, then $h_1+1$ divides $h_2$. Conversely, for any positive integers $h$ and $n$, there is a non-level $A$ with the $h$-vector $(1, h, (h+1)n)$. Moreover, such examples can be constructed as Ehrhart rings (equivalently, normal toric rings).
Non-level semi-standard graded Cohen-Macaulay domain with $h$-vector $(h_0,h_1,h_2)$
We prove existence and uniqueness of $L^2$ solutions to the inhomogeneous wave equation on $\mathbb{R}^{n-1}\times\mathbb{R}$ under the assumption that the inhomogeneous data lies in $L^p(\mathbb{R}^n)$ for $p=2n/(n+4)$. We also require the Fourier transform of the inhomogeneous data to vanish on an infinite cone where the solution could become singular. Subsequently, we show sharpness of the exponent $p$. This extends work of Michael Goldberg, in which similar Fourier-analytic techniques were used to study the inhomogeneous Helmholtz equation.
The Inhomogeneous Wave Equation with $L^p$ Data
Identifying physical traits and emotions based on system-sensed physical activities is a challenging problem in the realm of human-computer interaction. Our work contributes in this context by investigating an underlying connection between head movements and corresponding traits and emotions. To do so, we utilize a head-movement measuring device called eSense, which provides the acceleration and rotation of the head. First, we conduct a thorough study of head movement data collected from 46 persons using eSense while inducing five different emotional states in them in isolation. Our analysis reveals several new head-movement-based findings, which, in turn, lead us to a novel unified solution for identifying different human traits and emotions by exploiting machine learning techniques over head movement data. Our analysis confirms that the proposed solution achieves high accuracy on the collected data. Accordingly, we develop an integrated real-time system for emotion and trait identification from head movement data, leveraging the outcomes of our analysis.
As You Are, So Shall You Move Your Head: A System-Level Analysis between Head Movements and Corresponding Traits and Emotions
Quantum machine learning is an emerging field at the intersection of machine learning and quantum computing. Classical cross entropy plays a central role in machine learning and is equal to the negative log-likelihood. We define its quantum generalization, the quantum cross entropy, prove lower bounds for it, and investigate its relation to quantum fidelity. In the classical case, minimizing cross entropy is equivalent to maximizing likelihood. In the quantum case, this relation holds when the quantum cross entropy is constructed from quantum data undisturbed by quantum measurements. When the quantum cross entropy is instead obtained through an empirical density matrix based on measurement outcomes, it is lower-bounded by the negative log-likelihood. These two scenarios illustrate the information loss incurred by making quantum measurements. We conclude that, to achieve the goal of fully quantum machine learning, it is crucial to utilize the deferred measurement principle.
Quantum Cross Entropy and Maximum Likelihood Principle
The operator of double differentiation, perturbed by the composition of the differentiation operator and a convolution one, on a finite interval with Dirichlet boundary conditions is considered. We obtain uniform stability of recovering the convolution kernel from the spectrum in a weighted $L_2$-norm and in a weighted uniform norm. For this purpose, we successively prove uniform stability of each step of the algorithm for solving this inverse problem in both the norms. Besides justifying the numerical computations, the obtained results reveal some essential difference from the classical inverse Sturm-Liouville problem.
Uniform stability of the inverse spectral problem for a convolution integro-differential operator
We present measurements of transmission and reflection spectra of a microwave photonic crystal composed of 874 metallic cylinders arranged in a triangular lattice. The spectra show clear evidence of a Dirac point, a characteristic of a spectrum of relativistic massless fermions. In fact, Dirac points are a peculiar property of the electronic band structure of graphene, whose properties consequently can be described by the relativistic Dirac equation. In the vicinity of the Dirac point, the measured reflection spectra resemble those obtained by conductance measurements in scanning tunneling microscopy of graphene flakes.
Observation of a Dirac point in microwave experiments with a photonic crystal modeling graphene
In this expository article we discuss the relations between Sasakian geometry, reduced holonomy and supersymmetry. It is well known that the Riemannian manifolds other than the round spheres that admit real Killing spinors are precisely Sasaki-Einstein manifolds, 7-manifolds with a nearly parallel G2 structure, and nearly Kaehler 6-manifolds. We then discuss the relations between the latter two and Sasaki-Einstein geometry.
Sasakian Geometry, Holonomy, and Supersymmetry
Recent works have shown that generative sequence models (e.g., language models) have a tendency to memorize rare or unique sequences in the training data. Since useful models are often trained on sensitive data, it is critical to identify and mitigate such unintended memorization to ensure the privacy of the training data. Federated Learning (FL) has emerged as a novel framework for large-scale distributed learning tasks. However, it differs in many aspects from the well-studied central learning setting, where all the data is stored at the central server. In this paper, we initiate a formal study of the effect of different components of canonical FL on unintended memorization in trained models, compared with the central learning setting. Our results show that several differing components of FL play an important role in reducing unintended memorization. Specifically, we observe that the clustering of data according to users---which happens by design in FL---has a significant effect in reducing such memorization, and that using the method of Federated Averaging for training causes a further reduction. We also show that training with a strong user-level differential privacy guarantee results in models that exhibit the least amount of unintended memorization.
Understanding Unintended Memorization in Federated Learning
Localization of flux lines to splayed columnar pins is studied. A sine-Gordon-type renormalization group study reveals the existence of a splay glass phase and yields an analytic form for the transition temperature into the glass phase. As an independent test, the $I$-$V$ characteristics are determined via a molecular dynamics code. The glass transition temperature obtained convincingly supports the RG results. The full phase diagram of the model is constructed.
Phase Diagram for Splay Glass Superconductivity
We examine the spectral function of the single-electron Green function at finite temperatures for the Tomonaga-Luttinger model, which consists of a mutual interaction with only forward scattering. The spectral weight, calculated as a function of frequency at fixed wave number, shows that several peaks originating from the excitation spectra of charge and spin fluctuations merge into a single peak as the temperature increases.
Effect of Thermal Fluctuation on Spectral Function for the Tomonaga-Luttinger Model
In this paper, we analyze and evaluate suitable preconditioning techniques to improve the performance of the $L^p$-norm phase unwrapping method. We consider five preconditioning techniques commonly found in the literature and analyze their performance for different sizes of wrapped-phase maps. Keywords: phase unwrapping, $L^p$-norm based method, preconditioning techniques.
On the performance of preconditioned methods to solve \(L^p\)-norm phase unwrapping
We consider non-interacting multi-qubit systems as controllable probes of an environment of defects/impurities modelled as a composite spin-boson environment. The spin-boson environment consists of a small number of quantum-coherent two-level fluctuators (TLFs) damped by independent bosonic baths. A master equation of the Lindblad form is derived for the probe-plus-TLF system. We discuss how correlation measurements in the probe system encode information about the environment structure and could be exploited to efficiently discriminate between different experimental preparation techniques, with particular focus on the quantum correlations (entanglement) that build up in the probe as a result of the TLF-mediated interaction. We also investigate the harmful effects of the composite spin-boson environment on initially prepared entangled bipartite qubit states of the probe and on entangling gate operations. Our results offer insights into the area of quantum computation using superconducting devices, where defects/impurities are believed to be a major source of decoherence.
Probing a composite spin-boson environment
We show that SU(n) Bethe Ansatz equations with arbitrary `twist' parameters are hidden inside certain nth order ordinary differential equations, and discuss various consequences of this fact.
Differential equations for general SU(n) Bethe ansatz systems
We study the parametric excitation of free thermal convection in a horizontal layer and in a rectangular cell by random vertical vibrations. The mathematical formulation we use allows one to explore the cases of heating from below and from above, as well as low-gravity conditions. The excitation threshold of the second moments of the flow velocity and the temperature perturbations is derived. The heat flux through the system, quantified by the Nusselt number, is shown to be related to the second moment of the temperature perturbations; therefore, the threshold of the stochastic excitation of second moments gives the threshold for the excitation of convective heat transfer. Comparison of the stochastic parametric excitation with the effect of high-frequency periodic modulation reveals a dramatic dissimilarity between the two.
Stochastic parametric excitation of convective heat transfer
We investigate under what circumstances an embedded planet in a protoplanetary disc may sculpt the dust distribution such that it observationally presents as a `transition' disc. We concern ourselves with `transition' discs that have large holes ($\gtrsim 10$ AU) and high accretion rates ($\sim 10^{-9}-10^{-8}$ M$_\odot$ yr$^{-1}$), particularly those discs which photoevaporative models struggle to explain. Assuming the standard picture for how massive planets sculpt their parent discs, along with the observed accretion rates in `transition' discs, we find that the accretion luminosity from the forming planet is significant and can dominate over the stellar luminosity at the gap edge. This planetary accretion luminosity can apply a significant radiation pressure to small ($s\lesssim 1\mu$m) dust particles provided they are suitably decoupled from the gas. Secular evolution calculations that account for the evolution of the gas and dust components in a disc with an embedded, accreting planet show that only with the addition of the radiation pressure can we explain the full observed characteristics of a `transition' disc (NIR dip in the SED, mm cavity and high accretion rate). At suitably high planet masses ($\gtrsim 3-4$ M$_J$), radiation pressure from the accreting planet is able to hold back the small dust particles, producing a heavily dust-depleted inner disc that is optically thin (vertically and radially) to Infra-Red radiation. We use our models to calculate synthetic observations and present an observational evolutionary scenario for a forming planet sculpting its parent disc. The planet-disc system will present as a `transition' disc with a dip in the SED only when the planet mass and planetary accretion rate are high enough. At other times it will present as a disc with a primordial SED, but with a cavity in the mm, as observed in a handful of protoplanetary discs.
Accreting planets as dust dams in `transition' discs
We investigate a chain of spinless fermions with nearest-neighbour interactions that is subject to a local loss process. We determine the time evolution of the system using matrix product state methods. We find that at intermediate times a metastable state is formed, which has very different properties from usual equilibrium states. In particular, in a region around the loss, the filling is reduced, while Friedel oscillations with a period corresponding to the original filling continue to exist. The associated momentum distribution is depleted at all momenta by the loss process, and the Fermi edge remains approximately at its original value. Even in the presence of strong interactions, where a redistribution by the scattering is naively expected, such a regime can exist over a long time-scale. Additionally, we point out the existence of such a state in the system.
Non-equilibrium metastable state in a chain of interacting spinless fermions with localized loss
We study equitable 2-partitions of the Johnson graphs J(n,w) with a quotient matrix containing the eigenvalue lambda_2(w,n) = (w-2)(n-w-2)-2 in its spectrum. For any w>=4 and n>=2w, we find all admissible quotient matrices of such partitions, and characterize all these partitions for w>=4, n>2w, and for w>=7, n = 2w, up to equivalence.
Equitable 2-partitions of Johnson graphs with the second eigenvalue
In this paper, we give a faster width-dependent algorithm for mixed packing-covering LPs. Mixed packing-covering LPs are fundamental to combinatorial optimization in computer science and operations research. Our algorithm finds a $1+\epsilon$ approximate solution in time $O(Nw/\epsilon)$, where $N$ is the number of nonzero entries in the constraint matrix and $w$ is the maximum number of nonzeros in any constraint. This run-time is better than that of Nesterov's smoothing algorithm, which requires $O(N\sqrt{n}w/\epsilon)$, where $n$ is the dimension of the problem. Our work utilizes the framework of area convexity introduced in [Sherman-FOCS'17] to obtain the best dependence on $\epsilon$ while breaking the infamous $\ell_{\infty}$ barrier to eliminate the factor of $\sqrt{n}$. The current best width-independent algorithm for this problem runs in time $O(N/\epsilon^2)$ [Young-arXiv-14] and hence has worse running-time dependence on $\epsilon$. Many real-life instances of the mixed packing-covering problems exhibit small width, and for such cases our algorithm can report higher-precision results when compared to width-independent algorithms. As a special case of our result, we report a $1+\epsilon$ approximation algorithm for the densest subgraph problem which runs in time $O(md/\epsilon)$, where $m$ is the number of edges in the graph and $d$ is the maximum graph degree.
Faster width-dependent algorithm for mixed packing and covering LPs
Calibration and uncertainty estimation are crucial topics in high-risk environments. We introduce a new diversity regularizer for classification tasks that uses out-of-distribution samples and increases the overall accuracy, calibration and out-of-distribution detection capabilities of ensembles. Following the recent interest in the diversity of ensembles, we systematically evaluate the viability of explicitly regularizing ensemble diversity to improve calibration on in-distribution data as well as under dataset shift. We demonstrate that diversity regularization is highly beneficial in architectures where weights are partially shared between the individual members, and it even makes it possible to use fewer ensemble members to reach the same level of robustness. Experiments on CIFAR-10, CIFAR-100, and SVHN show that regularizing diversity can have a significant impact on calibration and robustness, as well as out-of-distribution detection.
Improving robustness and calibration in ensembles with diversity regularization
We consider two or more simple symmetric walks on some graphs, e.g. the real line, the plane, or the two-dimensional comb lattice, and investigate the properties of the distance between the walkers.
About the distance between random walkers on some graphs
We consider the Topological String/Spectral theory duality on toric Calabi-Yau threefolds obtained from the resolution of the cone over the $Y^{N,0}$ singularity. Assuming the Kyiv formula, we demonstrate this duality in a special regime, thanks to an underlying connection between spectral determinants of quantum mirror curves and the non-autonomous (q)-Toda system. We further exploit this link to connect small- and large-time expansions in the Toda equations. In particular, we provide an explicit expression for their tau functions at large time in terms of a strong-coupling version of irregular $W_N$ conformal blocks at $c=N-1$. These are related to a special class of multi-cut matrix models which describe the strong-coupling regime of four-dimensional, $\mathcal{N}=2$ $SU(N)$ super Yang-Mills.
Connecting topological strings and spectral theory via non-autonomous Toda equations
We show that the Cancellation Conjecture does not hold for the affine space A^3_k over any field k of positive characteristic. We prove that an example of T. Asanuma provides a three-dimensional k-algebra A for which A is not isomorphic to k[X_1,X_2,X_3] although A[T] is isomorphic to k[X_1, X_2, X_3, X_4].
A Counter-example to the Cancellation Problem for the Affine Space A^3 in characteristic p
The detection of high energy neutrinos ($10^{15}-10^{20}$ eV or $1-10^{5}$ PeV) is an important step toward understanding the most energetic cosmic accelerators and would enable tests of fundamental physics at energy scales that cannot easily be achieved on Earth. In this energy range, there are two expected populations of neutrinos: the astrophysical flux observed with IceCube at lower energies ($\sim1$ PeV) and the predicted cosmogenic flux at higher energies ($\sim10^{18}$ eV). Radio detector arrays such as RICE, ANITA, ARA, and ARIANNA exploit the Askaryan effect and the radio transparency of glacial ice, which together enable enormous volumes of ice to be monitored with sparse instrumentation. We describe here the design for a phased radio array that would lower the energy threshold of radio techniques to the PeV scale, allowing measurement of the astrophysical flux observed with IceCube over an extended energy range. Meaningful energy overlap with optical Cherenkov telescopes could be used for energy calibration. The phased radio array design would also provide more efficient coverage of the large effective volume required to discover cosmogenic neutrinos.
A Technique for Detection of PeV Neutrinos Using a Phased Radio Array
The security of quantum cryptography is guaranteed by the no-cloning theorem, which implies that an eavesdropper copying transmitted qubits in unknown states causes their disturbance. Nevertheless, in real cryptographic systems some level of disturbance has to be allowed to cover, e.g., transmission losses. An eavesdropper can attack such systems by replacing a noisy channel by a better one and by performing approximate cloning of transmitted qubits, which disturbs them but only below the noise level assumed by legitimate users. We experimentally demonstrate such symmetric individual eavesdropping on the quantum key distribution protocols of Bennett and Brassard (BB84) and the trine-state spherical code of Renes (R04) with two-level probes prepared using a recently developed photonic multifunctional quantum cloner [K. Lemr et al., Phys. Rev. A 85, 050307(R) (2012)]. We demonstrated that our optimal cloning device, with its high success rate, makes the eavesdropping possible by hiding it in the usual transmission losses. We believe that this experiment can stimulate the quest for other operational applications of quantum cloning.
Experimental Eavesdropping Based on Optimal Quantum Cloning
An easy way to define and visualize geometry for PHITS input files is introduced. The proposed FitsGeo Python package helps define surfaces as Python objects and manipulate them conveniently. VPython is used to view the defined geometry interactively, which speeds up geometry development and helps with complicated cases. Every class that defines a surface object provides methods and some extra properties. In addition to geometry generation for PHITS input, additional modules were developed for material and cell definition. Any user with very basic knowledge of Python can define geometry in a convenient way and use it in further research related to particle transport.
FitsGeo -- Python package for PHITS geometry development and visualization
This work contains a first attempt to treat the problem of routing in networks with energy-harvesting units. We propose HDR, a hysteresis-based routing algorithm, and analyse it in a simple diamond network. We also consider a network with three forwarding nodes. The results are used to give insight into its application to general-topology networks and to general harvesting patterns.
HDR - A Hysteresis-Driven Routing Algorithm for Energy Harvesting Tag Networks
A new procedure is presented which, based on Kendall's $\tau$, allows one to test for partial correlation in the presence of censored data. Further, a significance level can be assigned to the partial correlation -- a problem which has not been addressed in the past, even for uncensored data. The results of various tests with simulated data are reported. Finally, we apply this newly developed methodology to estimate the influence of selection effects on the correlation between the soft X-ray luminosity and both the total and core radio luminosity in a complete sample of Active Galactic Nuclei.
A test for partial correlation with censored astronomical data
Anomalies in the abundance measurements of short-lived radionuclides in meteorites indicate that the protosolar nebula was irradiated by a high flux of energetic particles (E$\gtrsim$10 MeV). The particle flux of the contemporary Sun cannot explain these anomalies. However, similar to T Tauri stars, the young Sun was more active and probably produced enough high-energy particles to explain them. We want to study the interaction of stellar energetic particles with the gas component of the disk and identify possible observational tracers of this interaction. We use a 2D radiation thermo-chemical protoplanetary disk code to model a disk representative of T Tauri stars. We use a particle energy distribution derived from solar flare observations and an enhanced stellar particle flux proposed for T Tauri stars. For this particle spectrum, we calculate the stellar particle ionization rate throughout the disk with an accurate particle transport model. We study the impact of stellar particles for models with varying X-ray and cosmic-ray ionization rates. We find that stellar particle ionization has a significant impact on the abundances of the common disk ionization tracers HCO$^+$ and N$_2$H$^+$, especially in models with low cosmic-ray ionization rates. In contrast to cosmic rays and X-rays, stellar particles cannot reach the midplane of the disk. Therefore, molecular ions residing in the disk surface layers are more affected by stellar particle ionization than molecular ions tracing the cold layers/midplane of the disk. Spatially resolved observations of molecular ions tracing different vertical layers of the disk allow one to disentangle the contribution of stellar particle ionization from other competing ionization sources. Modeling such observations with a model like the one presented here allows one to constrain the stellar particle flux in disks around T Tauri stars.
Stellar energetic particle ionization in protoplanetary disks around T Tauri stars
Visual tracking (VT) is the process of locating a moving object of interest in a video. It is a fundamental problem in computer vision, with various applications in human-computer interaction, security and surveillance, robot perception, traffic control, etc. In this paper, we address this problem for the first time in the quantum setting, and present a quantum algorithm for VT based on the framework proposed by Henriques et al. [IEEE Trans. Pattern Anal. Mach. Intell., 7, 583 (2015)]. Our algorithm comprises two phases: training and detection. In the training phase, in order to discriminate the object and background, the algorithm trains a ridge regression classifier in the quantum state form where the optimal fitting parameters of ridge regression are encoded in the amplitudes. In the detection phase, the classifier is then employed to generate a quantum state whose amplitudes encode the responses of all the candidate image patches. The algorithm is shown to be polylogarithmic in scaling, when the image data matrices have low condition numbers, and therefore may achieve exponential speedup over the best classical counterpart. However, only quadratic speedup can be achieved when the algorithm is applied to implement the ultimate task of Henriques's framework, i.e., detecting the object position. We also discuss two other important applications related to VT: (1) object disappearance detection and (2) motion behavior matching, where much more significant speedup over the classical methods can be achieved. This work demonstrates the power of quantum computing in solving computer vision problems.
Quantum algorithm for visual tracking
Multipathing in communication networks is gaining momentum due to its attractive features of increased reliability, throughput, fault tolerance, and load balancing capabilities. In particular, wireless environments and datacenters are envisioned to become largely dependent on the power of multipathing for seamless handovers, virtual machine (VM) migration and in general, pooling less proficient resources together for achieving overall high proficiency. The transport layer, with its knowledge about end-to-end path characteristics, is well placed to enhance performance through better utilization of multiple paths. Realizing the importance of transport-layer multipath, this paper investigates the modernization of traditional connection establishment, flow control, sequence number splitting, acknowledgement, and flow scheduling mechanisms for use with multiple paths. Since congestion control defines a fundamental feature of the transport layer, we study the working of multipath rate control and analyze its stability and convergence. We also discuss how various multipath congestion control algorithms differ in their window increase and decrease functions, their TCP-friendliness, and responsiveness. To the best of our knowledge, this is the first in-depth survey paper that has chronicled the evolution of the transport layer of the Internet from the traditional single-path TCP to the recent development of the modern multipath TCP (MPTCP) protocol. Along with describing the history of this evolution, we also highlight in this paper the remaining challenges and research issues.
The Past, Present, and Future of Transport-Layer Multipath
Shell models of turbulence are representations of the turbulence equations in the Fourier domain. Various shell models and their existence theory, along with numerical simulations, have been studied earlier. In this work we study control problems related to the sabra shell model of turbulence. We associate two cost functionals: one ensures minimizing turbulence in the system, and the other addresses the need to take the flow near an a priori known state. We derive optimal controls in terms of the solutions of the adjoint equations for the corresponding linearized problems. In this work, we also establish feedback controllers which preserve prescribed physical constraints. Since fluid equations have certain fundamental invariants, we would like to preserve these quantities via a control in feedback form. We utilize the theory of nonlinear semigroups and represent the feedback control as a multi-valued feedback term which lies in the normal cone of the convex constraint space under consideration.
Control Problems and Invariant Subspaces for the Sabra Shell Model of Turbulence
We show that the simplified $3D$ relativistic Vlasov-Maxwell (sRVM) system, in which there is no magnetic field, possesses a global solution for a class of arbitrarily large cylindrically symmetric initial data. In particular, a vanishing-order condition imposed on the initial data for the relativistic Vlasov-Poisson (RVP) system in Wang (2022) is not imposed for the sRVM system.
Remarks on the Large data global solutions of $3D$ RVP system and $3D$ RVM system
We use data mining techniques for finding 82 previously unreported common proper motion pairs from the PPM-Extended catalogue. Special-purpose software automating the different phases of the process has been developed. The software simplifies the detection of the new pairs by integrating a set of basic operations over catalogues. The operations can be combined by the user in scripts representing different filtering criteria. This procedure facilitates testing the software and employing the same scripts for different projects.
New Common Proper-Motion Pairs From the PPMX Catalog
We study protoplanetary disc evolution assuming that angular momentum transport is driven by gravitational instability at large radii, and magnetohydrodynamic (MHD) turbulence in the hot inner regions. At radii of the order of 1 AU such discs develop a magnetically layered structure, with accretion occurring in an ionized surface layer overlying quiescent gas that is too cool to sustain MHD turbulence. We show that layered discs are subject to a limit cycle instability, in which accretion onto the protostar occurs in bursts with an accretion rate of 10^{-5} solar masses / yr, separated by quiescent intervals where the accretion rate is 10^{-8} solar masses / yr. Such bursts could lead to repeated episodes of strong mass outflow in Young Stellar Objects. The transition to this episodic mode of accretion occurs at an early epoch (t < 1 Myr), and the model therefore predicts that many young pre-main-sequence stars should have low rates of accretion through the inner disc. At ages of a few Myr, the discs are up to an order of magnitude more massive than the minimum mass solar nebula, with most of the mass locked up in the quiescent layer of the disc at around 1 AU. The predicted rate of low mass planetary migration is reduced at the outer edge of the layered disc, which could lead to an enhanced probability of giant planet formation at radii of 1-3 AU.
Episodic accretion in magnetically layered protoplanetary discs
Industrial, metrological, and medical applications provide a strong technological pull for advanced nanoscale sensors exploiting the unique sensitivity of quantum coherent systems to their environments. Essential to the functionality of these devices is the availability of control protocols which shape the sensor's response to the environment in frequency space. However, a key challenge in these applications is that common control routines result in out-of-band spectral leakage which complicates interpretation of the sensor's signal. In this work we demonstrate provably optimal narrowband control protocols ideally suited to quantum sensing. Our results, based on experiments with trapped ions using modulation in the form of discrete prolate spheroidal sequences (aka Slepian functions), demonstrate reduction of spectral leakage by orders of magnitude over conventional controls. We tune the narrowband sensitivity using concepts from RF engineering and experimentally reconstruct complex noise spectra using engineered noise for quantitative performance evaluation. We then deploy these techniques to identify previously immeasurable frequency-resolved amplitude noise in our qubit synthesis chain with calibrated sensitivity better than 0.001 dB.
Application of optimal band-limited control protocols to quantum noise sensing
In this paper we construct an explicit quasi-isomorphism to study the cyclic cohomology of a deformation quantization over a riemannian \'etale groupoid. Such a quasi-isomorphism allows us to propose a general algebraic index problem for riemannian \'etale groupoids. We discuss solutions to this index problem when the groupoid is proper or defined by a constant Dirac structure on a 3-dimensional torus.
On the algebraic index for riemannian \'etale groupoids
The competition between singlet and triplet superconductivity is examined in consideration of correlations on an extended Hubbard model. It is shown that triplet superconductivity may not arise in the common Hubbard model, since the strong correlation favors singlet superconductivity; thus, triplet superconductivity should be induced by the electron-phonon interaction and the ferromagnetic exchange interaction. We also present a criterion for superconductivity according to which magnetism is detrimental to superconductivity.
Competition between singlet and triplet superconductivity
A theory of local temperature measurement of an interacting quantum electron system far from equilibrium via a floating thermoelectric probe is developed. It is shown that the local temperature so defined is consistent with the zeroth, first, second, and third laws of thermodynamics, provided the probe-system coupling is weak and broad band. For non-broad-band probes, the local temperature obeys the Clausius form of the second law and the third law exactly, but there are corrections to the zeroth and first laws that are higher-order in the Sommerfeld expansion. The corrections to the zeroth and first laws are related, and can be interpreted in terms of the error of a nonideal temperature measurement. These results also hold for systems at negative absolute temperature.
Local temperature of an interacting quantum system far from equilibrium
We discuss the path-integral formulation of quantum cosmology with a massless scalar field as a sum-over-histories, with particular reference to loop quantum cosmology. Exploiting the analogy with the relativistic particle, we give a complete overview of the possible two-point functions, deriving vertex expansions and composition laws they satisfy. We clarify the tie between definitions using a group averaging procedure and those in a deparametrised framework. We draw some conclusions about the physics of a single quantum universe and multiverse field theories where the role of these sectors and the inner product are reinterpreted.
2-point functions in quantum cosmology
Organisations are required to show that their procedures and processes satisfy the relevant regulatory requirements. The computational complexity of proving regulatory compliance is known to be hard in general. However, for some of its simpler variants the computational complexity is still unknown. We focus on the eight variants of the problem that can be identified by the following binary properties: whether the requirements consist of one or multiple obligations, whether the obligations are conditional or always in force, and whether only propositional literals or formulae can be used to describe the obligations. This paper in particular shows that proving full compliance of a model against a single unconditional obligation whose elements can be described using formulae is coNP-complete. Finally, we show how this result allows us to fully map the computational complexity of these variants for proving full and non-compliance, while for partial compliance the complexity result of one of the variants is still missing.
Proving Regulatory Compliance: Full Compliance Against an Expressive Unconditional Obligation is coNP-Complete
Recent gender policies in the Middle East and North Africa (MENA) region have improved legal equality for women, with noticeable effects in some countries. The implications of these policies for science, however, are not well understood. This study applies a bibliometric lens to describe the landscape of gender disparities in scientific research in MENA. Specifically, we examine 1.7 million papers indexed in the Web of Science published by 1.1 million authors from MENA between 2008 and 2020. We used bibliometric indicators to analyse potential disparities between men and women in the share of authors, research productivity, and seniority in authorship. The results show that gender parity is far from being achieved in MENA. Overall, men obtain higher representation, research productivity, and seniority. But some countries stand out: Tunisia, Lebanon, Turkey, Algeria and Egypt have higher shares of women researchers than the rest of the MENA countries. The UAE, Qatar, and Jordan have shown progress in terms of women's participation in science, but Saudi Arabia lags behind. We find that women are more likely to stop publishing than men and that men publish on average between 11% and 51% more than women, with this gap increasing over time. Finally, men, on average, achieved senior positions in authorship faster than women. Our longitudinal study contributes to a better understanding of gender disparities in science in MENA, a region that is catching up in terms of policy engagement and women's representation. However, the results suggest that the effects of the policy changes have yet to materialize into a distinct improvement in women's participation and performance in science.
On the lack of women researchers in the Middle East & North Africa
A holomorphic foliation on $\mathbb P^2_{\mathbb C}$, or a real analytic foliation on $\mathbb{P}^{2}_{\mathbb{R}},$ is said to be convex if its leaves other than straight lines have no inflection points. The classification of the convex foliations of degree $2$ on $\mathbb P^2_{\mathbb C}$ was established in $2015$ by C.~\textsc{Favre} and J.~\textsc{Pereira}. The main ingredient of this classification was a result obtained in~$2004$ by~D.~\textsc{Schlomiuk} and N.~\textsc{Vulpe} concerning the real polynomial vector fields of degree $2$ whose associated foliation on $\mathbb{P}^{2}_{\mathbb{R}}$ is convex. We present here a new proof of this classification that is simpler, does not use this result, and does not leave the holomorphic framework. It is based on the properties of certain models of convex foliations of $\mathbb P^2_{\mathbb C}$ of arbitrary degree and on the discriminant of the dual web of a foliation of $\mathbb P^2_{\mathbb C}$.
Une nouvelle d\'emonstration de la classification des feuilletages convexes de degr\'{e} deux sur $\mathbb P^2_{\mathbb C}$
We propose a reinforcement learning based approach to query object localization, for which an agent is trained to localize objects of interest specified by a small exemplary set. We learn a transferable reward signal formulated using the exemplary set by ordinal metric learning. Our proposed method enables test-time policy adaptation to new environments where the reward signals are not readily available, and outperforms fine-tuning approaches that are limited to annotated images. In addition, the transferable reward allows repurposing the trained agent from one specific class to another class. Experiments on corrupted MNIST, CU-Birds, and COCO datasets demonstrate the effectiveness of our approach.
Learning Transferable Reward for Query Object Localization with Policy Adaptation
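As a toy illustration of the kind of annotation-free signal described above (the function, embeddings, and exemplar values below are hypothetical, not the paper's actual formulation), an ordinal reward can compare the agent's current crop embedding against the exemplary set:

```python
import numpy as np

# Hypothetical sketch: an ordinal, exemplar-based reward that needs no
# ground-truth boxes at test time. The agent is rewarded when its action
# moves the current crop embedding closer (in an assumed learned metric)
# to the exemplar embeddings than the previous crop was.

def exemplar_reward(prev_emb, curr_emb, exemplars):
    prev_d = np.linalg.norm(exemplars - prev_emb, axis=1).mean()
    curr_d = np.linalg.norm(exemplars - curr_emb, axis=1).mean()
    return 1.0 if curr_d < prev_d else -1.0  # +1 only for moves toward the exemplars

# Two mock exemplar embeddings of the query object class.
exemplars = np.array([[1.0, 0.0], [0.9, 0.1]])
```

Because such a signal depends only on the ordering of distances to the exemplars, not on annotated boxes, it remains available for test-time policy adaptation in a new environment.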
We studied experimentally the role of phonon dimensionality on electron-phonon (e-p) interaction in thin copper wires evaporated either on suspended silicon nitride membranes or on bulk substrates, at sub-Kelvin temperatures. The power emitted from electrons to phonons was measured using sensitive normal metal-insulator-superconductor (NIS) tunnel junction thermometers. Membrane thicknesses ranging from 30 nm to 750 nm were used to clearly see the onset of the effects of a two-dimensional (2D) phonon system. We observed for the first time that a 2D phonon spectrum clearly changes the temperature dependence and strength of the e-p scattering rate, with the interaction becoming stronger at the lowest temperatures below $\sim$ 0.5 K for the 30 nm membranes.
Influence of Phonon dimensionality on Electron Energy Relaxation
The anomaly in the anomalous Nernst effect (ANE) was observed for a C1b-type NiMnSb half-Heusler alloy thin film deposited on an MgO (001) substrate. The Nernst angle ($\theta_{ANE}$) peaked with decreasing temperature, reaching 0.15 at 80 K, which is attributed to the cross-over from half-metal to normal ferromagnet in NiMnSb at low temperature. This anomaly was also observed in the transport properties, that is, in both the resistivity and the anomalous Hall resistivity over the same temperature range.
Anomaly in anomalous Nernst effect at low temperature for C1b-type NiMnSb half-Heusler alloy thin film
The STAR collaboration reports the first measurements of the transverse momentum asymmetry $A_J$ of di-jet pairs in central gold-gold collisions and minimum bias proton-proton collisions at $\sqrt{s_{NN}}=200$ GeV at RHIC. We focus on anti-$k_T$ di-jets with a leading jet $p_T>20$ GeV/$c$ and a subleading jet $p_T>10$ GeV/$c$, with a constituent cut of 2 GeV/$c$, which reduces the effect of the underlying heavy-ion background. We examine the evolution of $A_J$ while reclustering these same di-jets with a lower constituent cut of 200 MeV/$c$. For the low $p_T$ constituent cut with a resolution parameter of $R=0.4$, the balance between the di-jets is restored to the level of p+p collisions which indicates the lost energy observed for di-jets with a constituent cut of $p_T ^{\text{Cut}}>2$ GeV/$c$ is recovered. Further variations of $R$ and the constituent $p_T$ cutoff indicate that the lost energy is redistributed in the form of soft particles, accompanied by a broadening of the jet structure.
Di-Jet Imbalance Measurements in Central Au+Au Collisions at $\sqrt{s_{NN}}=200$ GeV from STAR
The framework of slowly evolving horizons is generalized to the case of black branes in asymptotically anti-de Sitter spaces in arbitrary dimensions. The results are used to analyze the behavior of both event and apparent horizons in the gravity dual to boost-invariant flow. These considerations are motivated by the fact that at second order in the gradient expansion the hydrodynamic entropy current in the dual Yang-Mills theory appears to contain an ambiguity. This ambiguity, in the case of boost-invariant flow, is linked with a similar freedom on the gravity side. This leads to a phenomenological definition of the entropy of black branes. Some insights on fluid/gravity duality and the definition of entropy in a time-dependent setting are elucidated.
Black brane entropy and hydrodynamics: the boost-invariant case
Discrete non-Abelian symmetries have been extensively used to reproduce the lepton mixings. In particular, the S4 group has turned out to be suitable for describing predictive mixing patterns, such as the well-known Tri-Bimaximal and Bimaximal schemes, which both represent possible first approximations of the experimental lepton mixing matrix. We review the main applications of the S4 discrete group as a flavour symmetry, first dealing with the formalism and later with the phenomenological implications. In particular, we summarize the main features of flavour models based on S4, commenting on their ability to reproduce a reactor angle in agreement with the recent data and on their predictions for lepton flavour violating transitions.
Neutrino Mixings and the S4 Discrete Flavour Symmetry
The Hilbert program was actually a specific approach for proving consistency. Quantifiers were supposed to be replaced by $\epsilon$-terms. $\epsilon{x}A(x)$ was supposed to denote a witness to $\exists{x}A(x)$, arbitrary if there is none. The Hilbertians claimed that in any proof in a number-theoretic system $S$, each $\epsilon$-term can be replaced by a numeral, making each line provable and true. This implies that $S$ must not only be consistent, but also 1-consistent ($\Sigma_{1}^{0}$-correct). Here we show that if the result is supposed to be provable within $S$, a statement about all $\Pi_{2}^{0}$ statements that subsumes itself within its own scope must be provable, yielding a contradiction. The result resembles G\"odel's but arises naturally out of the Hilbert program itself.
The Collapse of the Hilbert Program: A Variation on the G\"odelian Theme
The nature of the progenitors of type Ia supernovae is still under controversial debate. KPD 1930+2752 is one of the best SN Ia progenitor candidates known today. The object is a double degenerate system consisting of a subluminous B star and a massive white dwarf. Maxted et al. (2000) conclude that the system mass exceeds the Chandrasekhar mass. This conclusion, however, rests on the assumption that the sdB mass is 0.5 Mo, whereas recent binary population synthesis calculations suggest that the mass of an sdB star may range from 0.3 Mo to more than 0.7 Mo. It is therefore important to measure the mass of the sdB star simultaneously with that of the white dwarf. Since the rotation of the sdB star is tidally locked to the orbit, the inclination of the system can be constrained. An analysis of the ellipsoidal variations in the light curve allows us to tighten the constraints derived from spectroscopy. We derive the mass-radius relation for the sdB star from a quantitative spectral analysis. The projected rotational velocity is determined for the first time from high-resolution spectra. In addition, a reanalysis of the published light curve is performed. The atmospheric and orbital parameters are measured with unprecedented accuracy. In particular, the projected rotational velocity vrotsini = 92.3 +/- 1.5 km/s is determined. The mass of the sdB is limited to between 0.45 Mo and 0.52 Mo. The total mass of the system ranges from 1.36 Mo to 1.48 Mo and hence is likely to exceed the Chandrasekhar mass. KPD 1930+2752 thus qualifies as an excellent double degenerate supernova Ia progenitor candidate.
The subdwarf B + white dwarf binary KPD 1930+2752 - a Supernova Type Ia progenitor candidate
Recent neuroimaging studies that focus on predicting brain disorders via modern machine learning approaches commonly include a single modality and rely on supervised over-parameterized models. However, a single modality provides only a limited view of the highly complex brain. Critically, supervised models in clinical settings lack accurate diagnostic labels for training. Coarse labels do not capture the long-tailed spectrum of brain disorder phenotypes, which leads to a loss of model generalizability that makes such models less useful in diagnostic settings. This work presents a novel multi-scale coordinated framework for learning multiple representations from multimodal neuroimaging data. We propose a general taxonomy of informative inductive biases to capture unique and joint information in multimodal self-supervised fusion. The taxonomy forms a family of decoder-free models with reduced computational complexity and a propensity to capture multi-scale relationships between local and global representations of the multimodal inputs. We conduct a comprehensive evaluation of the taxonomy using functional and structural magnetic resonance imaging (MRI) data across a spectrum of Alzheimer's disease phenotypes and show that self-supervised models reveal disorder-relevant brain regions and multimodal links without access to the labels during pre-training. The proposed multimodal self-supervised learning yields representations with improved classification performance for both modalities. The concomitant rich and flexible unsupervised deep learning framework captures complex multimodal relationships and provides predictive performance that meets or exceeds that of a more narrow supervised classification analysis. We present elaborate quantitative evidence of how this framework can significantly advance our search for missing links in complex brain disorders.
Self-supervised multimodal neuroimaging yields predictive representations for a spectrum of Alzheimer's phenotypes
Writing the Poisson equation for the pressure in the vorticity-strain form, we show that the pressure has a finite inertial range spectrum for high Reynolds number isotropic turbulence only if the anomalous scaling exponents $\mu$ and $\mu_{\omega}$ for the dissipation and enstrophy (squared vorticity) are equal. Since a finite inertial range pressure spectrum requires only very weak assumptions about high Reynolds number turbulence, we conclude that the inference from experiment and direct numerical simulation that these exponents are different must be a finite range scaling result which will not survive taking the high Reynolds number limit.
Enstrophy and dissipation must have the same scaling exponent in the high Reynolds number limit of fluid turbulence
The introduction of the mm-Wave spectrum into 5G NR promises to bring about unprecedented data throughput in future mobile wireless networks, but comes with several challenges. Network densification has been proposed as a viable solution to increase RAN resilience, and the newly introduced Integrated Access and Backhaul (IAB) is considered a key enabling technology with compelling cost-reducing opportunities for such dense deployments. Reconfigurable Intelligent Surfaces (RIS) have recently gained extreme popularity as they can create Smart Radio Environments by EM wave manipulation and behave as inexpensive passive relays. However, it is not yet clear what role this technology can play in a large RAN deployment. To fill this gap, we study the blockage resilience of realistic mm-Wave RAN deployments that use IAB and RIS. The RAN layouts have been optimised by means of a novel mm-Wave planning tool based on a MILP formulation. Numerical results show how adding RISs to IAB deployments can provide high levels of blockage resistance while significantly reducing the overall network planning cost.
Boosting 5G mm-Wave IAB Reliability with Reconfigurable Intelligent Surfaces
Ever since the pioneering work of Schmidt a half-century ago, there has been great interest in finding an appropriate empirical relation that would directly link some property of interstellar gas with the process of star formation within it. Schmidt conjectured that this might take the form of a power-law relation between the rate of star formation (SFR) and the surface density of interstellar gas. However, recent observations suggest that a linear scaling relation between the total SFR and the amount of dense gas within molecular clouds is the underlying physical relation that most directly connects star formation with interstellar gas, from scales of individual GMCs to those encompassing entire galaxies both near and far. Although Schmidt relations are found to exist within local GMCs, there is no Schmidt relation observed between GMCs. The implications of these results for interpreting and understanding the Kennicutt-Schmidt scaling law for galaxies are discussed.
On Schmidt's Conjecture and Star Formation Scaling Laws
We present here some studies on noise-induced order and synchronous firing in a system of bidirectionally coupled generic type-I neurons. We find that transitions from unsynchronized to completely synchronized states occur beyond a critical value of noise strength that has a clear functional dependence on neuronal coupling strength and input values. For an inhibitory-excitatory (IE) synaptic coupling, the approach to a partially synchronized state is shown to vary qualitatively depending on whether the input is less or more than a critical value. We find that introduction of noise can cause a delay in the bifurcation of the firing pattern of the excitatory neuron for IE coupling.
Noise-induced synchronization in bidirectionally coupled type-I neurons
Assuming the existence of the dS/CFT correspondence, we construct local scalar fields with $m^2>\left( \frac{d}{2} \right)^2$ in de Sitter space by smearing over conformal field theory operators on the future/past boundary. To maintain bulk micro-causality and recover the bulk Wightman function in the Euclidean vacuum, the smearing prescription must involve two sets of single-trace operators with dimensions $\Delta$ and $d-\Delta$. Thus the local operator prescription in de Sitter space differs from the analytic continuation of the prescription in anti-de Sitter space. Pushing a local operator in the global patch to future/past infinity is shown to lead to an operator relation between single-trace operators in conformal field theories at $\mathcal{I}^\pm$, which can be interpreted as a basis transformation, also identified as the relation between an operator in a CFT and its shadow operator. Construction of spin-$s$ gauge field operators is discussed; it is shown that the construction of higher spin gauge fields in de Sitter space is equivalent to constructing scalar fields with specific values of the mass parameter $m^2<\left( \frac{d}{2} \right)^2$. An acausal higher spin bulk operator that matches onto the boundary higher spin current is constructed. Implementation of the scalar operator constructions in AdS and dS with the embedding formalism is briefly described.
Holographic Representation of Local Operators In De Sitter Space
The detection of frauds in credit card transactions is a major topic in financial research, of profound economic implications. While this has hitherto been tackled through data analysis techniques, the resemblances between this and other problems, like the design of recommendation systems and of diagnostic / prognostic medical tools, suggest that a complex network approach may yield important benefits. In this contribution we present a first hybrid data mining / complex network classification algorithm, able to detect illegal instances in a real card transaction data set. It is based on a recently proposed network reconstruction algorithm that allows creating representations of the deviation of one instance from a reference group. We show how the inclusion of features extracted from the network data representation improves the score obtained by a standard, neural network-based classification algorithm; and additionally how this combined approach can outperform a commercial fraud detection system in specific operation niches. Beyond these specific results, this contribution represents a new example on how complex networks and data mining can be integrated as complementary tools, with the former providing a view to data beyond the capabilities of the latter.
Credit card fraud detection through parenclitic network analysis
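A minimal sketch of the parenclitic idea on synthetic data (the feature relations and example values below are invented for illustration and are not the paper's pipeline): fit the pairwise relations between features on a reference group of licit transactions, then represent each new instance by its residuals from those fits, which play the role of the edge weights of its deviation network and can be appended to the raw features fed to a standard classifier.

```python
import numpy as np
from itertools import combinations

# Hypothetical sketch of a parenclitic (deviation-network) representation.
rng = np.random.default_rng(1)
n_ref = 500
x0 = rng.normal(size=n_ref)
ref = np.column_stack([
    x0,                                        # feature 0
    0.9 * x0 + 0.1 * rng.normal(size=n_ref),   # feature 1 tracks feature 0
    -0.8 * x0 + 0.2 * rng.normal(size=n_ref),  # feature 2 anti-tracks feature 0
])

# Fit y = a*x + b for every feature pair on the reference (licit) group.
fits = {}
for i, j in combinations(range(ref.shape[1]), 2):
    a, b = np.polyfit(ref[:, i], ref[:, j], deg=1)
    resid_std = np.std(ref[:, j] - (a * ref[:, i] + b))
    fits[(i, j)] = (a, b, resid_std)

def deviation_network(x):
    """Edge weights: standardized residual of each pair's fitted relation."""
    return np.array([abs(x[j] - (a * x[i] + b)) / s
                     for (i, j), (a, b, s) in fits.items()])

licit = np.array([1.0, 0.9, -0.8])   # respects the reference relations
fraud = np.array([1.0, -0.9, 0.8])   # same scale, but the relations are broken
```

The anomalous instance yields systematically larger edge weights even though its individual feature values look unremarkable, which is the extra view of the data that the network representation contributes beyond standard data mining features.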
Wehrenberg et al. [Nature 550, 496 (2017)] used ultrafast in situ x-ray diffraction at the LCLS x-ray free-electron laser facility to measure large lattice rotations resulting from slip and deformation twinning in a shock-compressed, laser-driven [110] fibre-textured tantalum polycrystal. We employ a crystal plasticity finite element method model, with slip kinetics based closely on the isotropic dislocation-based Livermore Multiscale Model [Barton et al., J. Appl. Phys. 109 (2011)], to analyse this experiment. We elucidate the link between the degree of lattice rotation and the kinetics of plasticity, demonstrating that a transition occurs at shock pressures of $\sim$27 GPa between a regime of relatively slow kinetics, resulting in a balanced pattern of slip system activation and therefore relatively small net lattice rotation, and a regime of fast kinetics, due to the onset of nucleation, resulting in a lop-sided pattern of deformation-system activation and therefore large net lattice rotations. We demonstrate a good fit between this model and experimental x-ray diffraction data of lattice rotation, and show that these data constrain the deformation kinetics.
Crystal plasticity finite element simulation of lattice rotation and x-ray diffraction during laser shock-compression of Tantalum
We discuss the methods employed to photometrically calibrate the data acquired by the Low Frequency Instrument on Planck. Our calibration is based on a combination of the Orbital Dipole plus the Solar Dipole, caused respectively by the motion of the Planck spacecraft with respect to the Sun and by motion of the Solar System with respect to the CMB rest frame. The latter provides a signal of a few mK with the same spectrum as the CMB anisotropies and is visible throughout the mission. In this data release we rely on the characterization of the Solar Dipole as measured by WMAP. We also present preliminary results (at 44GHz only) on the study of the Orbital Dipole, which agree with the WMAP value of the Solar System speed within our uncertainties. We compute the calibration constant for each radiometer roughly once per hour, in order to keep track of changes in the detectors' gain. Since non-idealities in the optical response of the beams proved to be important, we implemented a fast convolution algorithm which considers the full beam response in estimating the signal generated by the dipole. Moreover, in order to further reduce the impact of residual systematics due to sidelobes, we estimated time variations in the calibration constant of the 30GHz radiometers (the ones with the largest sidelobes) using the signal of a reference load. We have estimated the calibration accuracy in two ways: we have run a set of simulations to assess the impact of statistical errors and systematic effects in the instrument and in the calibration procedure, and we have performed a number of consistency checks on the data and on the brightness temperature of Jupiter. Calibration errors for this data release are expected to be about 0.6% at 44 and 70 GHz, and 0.8% at 30 GHz. (Abridged.)
Planck 2013 results. V. LFI calibration
Generative models estimate the underlying distribution of a dataset to generate realistic samples according to that distribution. In this paper, we present the first membership inference attacks against generative models: given a data point, the adversary determines whether or not it was used to train the model. Our attacks leverage Generative Adversarial Networks (GANs), which combine a discriminative and a generative model, to detect overfitting and recognize inputs that were part of training datasets, using the discriminator's capacity to learn statistical differences in distributions. We present attacks based on both white-box and black-box access to the target model, against several state-of-the-art generative models, over datasets of complex representations of faces (LFW), objects (CIFAR-10), and medical images (Diabetic Retinopathy). We also discuss the sensitivity of the attacks to different training parameters, and their robustness against mitigation strategies, finding that defenses are either ineffective or lead to significantly worse performances of the generative models in terms of training stability and/or sample quality.
LOGAN: Membership Inference Attacks Against Generative Models
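A stripped-down sketch of the score-based attack idea on synthetic 2-D data (the "discriminator" below is a kernel score standing in for a trained GAN discriminator; it is an assumption for illustration, not the paper's models): because an overfit discriminator assigns higher confidence to memorized training points, ranking candidates by that score separates members from non-members.

```python
import numpy as np

# Hypothetical sketch of a score-based membership inference attack.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(50, 2))        # points the "GAN" was trained on
non_members = rng.normal(0.0, 1.0, size=(50, 2))  # held-out points, same distribution

def discriminator_score(x, bandwidth=0.1):
    """Stand-in for an overfit GAN discriminator: high output near
    memorized training points (an assumption for illustration)."""
    d2 = ((train - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean()

# Attack: score every candidate and predict "member" above the median score.
candidates = np.vstack([train, non_members])
labels = np.array([1] * len(train) + [0] * len(non_members))
scores = np.array([discriminator_score(x) for x in candidates])
predicted = (scores > np.median(scores)).astype(int)
accuracy = (predicted == labels).mean()  # fraction of correct membership calls
```

The white-box setting corresponds to querying the discriminator directly, as above; in the black-box setting the attacker first trains a surrogate GAN on samples drawn from the target model and uses its discriminator instead.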
This is a contribution for the discussion on "Unbiased Markov chain Monte Carlo with couplings" by Pierre E. Jacob, John O'Leary and Yves F. Atchad\'e to appear in the Journal of the Royal Statistical Society Series B.
Discussion of "Unbiased Markov chain Monte Carlo with couplings" by Pierre E. Jacob, John O'Leary and Yves F. Atchad\'e
The characterisation of the physical properties of nanoparticles in their native environment plays a central role in a wide range of fields, from nanoparticle-enhanced drug delivery to environmental nanopollution assessment. Standard optical approaches require long trajectories of nanoparticles dispersed in a medium with known viscosity to characterise their diffusion constant and, thus, their size. However, often only short trajectories are available, while the medium viscosity is unknown, e.g., in most biomedical applications. In this work, we demonstrate a label-free method to quantify size and refractive index of individual subwavelength particles using two orders of magnitude shorter trajectories than required by standard methods, and without assumptions about the physicochemical properties of the medium. We achieve this by developing a weighted average convolutional neural network to analyse the holographic images of the particles. As a proof of principle, we distinguish and quantify size and refractive index of silica and polystyrene particles without prior knowledge of solute viscosity or refractive index. As an example of an application beyond the state of the art, we demonstrate how this technique can monitor the aggregation of polystyrene nanoparticles, revealing the time-resolved dynamics of the monomer number and fractal dimension of individual subwavelength aggregates. This technique opens new possibilities for nanoparticle characterisation with a broad range of applications from biomedicine to environmental monitoring.
Holographic characterisation of subwavelength particles enhanced by deep learning
Machine learning (ML) techniques, in particular supervised regression algorithms, are a promising new way to use multiple observables to predict a cluster's mass or other key features. To investigate this approach we use the \textsc{MACSIS} sample of simulated hydrodynamical galaxy clusters to train a variety of ML models, mimicking different datasets. We find that compared to predicting the cluster mass from the $\sigma -M$ relation, the scatter in the predicted-to-true mass ratio is reduced by a factor of 4, from $0.130\pm0.004$ dex (${\simeq} 35$ per cent) to $0.031 \pm 0.001$ dex (${\simeq} 7$ per cent) when using the same, interloper-contaminated, spectroscopic galaxy sample. Interestingly, omitting line-of-sight galaxy velocities from the training set has no effect on the scatter when the galaxies are taken from within $r_{200c}$. We also train ML models to reproduce estimated masses derived from mock X-ray and weak lensing analyses. While the weak lensing masses can be recovered with a similar scatter to that when training on the true mass, the hydrostatic mass suffers from significantly higher scatter of ${\simeq} 0.13$ dex (${\simeq} 35$ per cent). Training models using dark matter only simulations does not significantly increase the scatter in predicted cluster mass compared to training on simulated clusters with hydrodynamics. In summary, we find ML techniques to offer a powerful method to predict masses for large samples of clusters, a vital requirement for cosmological analysis with future surveys.
An application of machine learning techniques to galaxy cluster mass estimation using the MACSIS simulations
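The scatter reduction can be illustrated with a toy regression on synthetic data (the observables, scalings, and noise levels below are invented and bear no relation to the MACSIS values): combining several noisy mass tracers in one least-squares fit tightens the predicted-to-true residuals relative to a single scaling relation.

```python
import numpy as np

# Toy illustration: multiple mock observables reduce the scatter in
# predicted log-mass compared with a single scaling relation.
rng = np.random.default_rng(42)
n = 2000
log_mass = rng.uniform(14.0, 15.5, n)            # true log10 cluster mass

# Mock observables, each a noisy linear tracer of log-mass
# (slopes and noise levels are purely illustrative assumptions).
sigma_v = (1 / 3) * log_mass + rng.normal(0, 0.10, n)   # velocity dispersion
richness = 1.0 * log_mass + rng.normal(0, 0.15, n)      # galaxy richness
lx = 1.5 * log_mass + rng.normal(0, 0.20, n)            # X-ray luminosity

def ols_predict(X, y):
    """Least-squares fit with intercept; returns in-sample predictions."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

pred_single = ols_predict(sigma_v[:, None], log_mass)    # sigma-M relation only
pred_multi = ols_predict(np.column_stack([sigma_v, richness, lx]), log_mass)

scatter_single = np.std(pred_single - log_mass)
scatter_multi = np.std(pred_multi - log_mass)
assert scatter_multi < scatter_single   # combining observables tightens the estimate
```

The same logic carries over to the non-linear regressors used in practice, which can additionally exploit correlations between the tracers' noise terms.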
We study a class of quark mass matrix models with the $\cal CP$-violating phase determined through making the $\cal CP$-violating parameter $J$ an extremum. These models assume that $m_u \ll m_c \ll m_t$ and that $m_d \ll m_s \ll m_b$. They have $\left|{V_{ub}\over{V_{cb}}}\right|\approx{\sqrt{m_u \over{m_c}}}$, $\left|{V_{td}\over{V_{ts}}}\right|\approx{\sqrt{m_d \over{m_s}}}$ and $|V_{us}|\approx|V_{cd}|\approx{\sqrt{{m_d \over{m_s}}+{m_u \over{m_c}}}}$. The Wolfenstein parameters $\rho$ and $\eta$ are found to be related by $\rho\approx\eta^2$. Finally, we examine a special class of such models where the masses are constrained to be roughly in geometric progression. Further application of the extremal condition to $J$ then leads to ${\sqrt{m_d \over{m_s}}}\approx 3{\sqrt{m_u \over{m_c}}}$ and hence $\eta\approx\frac{1}{3}$, for maximal $J$.
CP-Violation and the Quark Mass Matrices
The formation of the composite photonic-excitonic particle known as a polariton is a phenomenon that emerges in materials strongly coupled to light. Besides strong light-matter coupling, organic-based materials also exhibit strong interaction between electronic and vibrational degrees of freedom. We study the vibration-assisted evolution of the polariton wavefunction, treating both types of interaction as equally strong. Using the multiconfiguration Hartree approach, we derive the equations of motion for the polariton wavefunction, in which the vibrational degrees of freedom interact with the polariton quantum field through the mean-field Hartree term.
Hartree method for molecular polaritons
Large-scale simulations of the Centaur population are carried out. The evolution of 23328 particles based on the orbits of 32 well-known Centaurs is followed for up to 3 Myr in the forward and backward direction under the influence of the 4 massive planets. The objects exhibit a rich variety of dynamical behaviour with half-lives ranging from 540 kyr (1996 AR20) to 32 Myr (2000 FZ53). The mean half-life of the entire sample of Centaurs is 2.7 Myr. The data are analyzed using a classification scheme based on the controlling planets at perihelion and aphelion, previously given in Horner et al (2003). Transfer probabilities are computed and show the main dynamical pathways of the Centaur population. The total number of Centaurs with diameters larger than 1 km is estimated as roughly 44300, assuming an inward flux of one new short-period comet every 200 yrs. The flux into the Centaur region from the Edgeworth-Kuiper belt is estimated to be 1 new object every 125 yrs. Finally, the flux from the Centaur region to Earth-crossing orbits is 1 new Earth-crosser every 880 yrs.
Simulations of the Population of Centaurs I: The Bulk Statistics
Relativistic current sheets have been proposed as the sites of dissipation in pulsar winds, jets in active galaxies and other Poynting-flux dominated flows. It is shown that the steady versions of these structures differ from their nonrelativistic counterparts because they do not permit transformation to a de Hoffmann/Teller reference frame, in which the electric field vanishes. Instead, their generic form is that of a true neutral sheet: one in which the linking magnetic field component normal to the sheet is absent. Taken together with Alfven's limit on the total cross-field potential, this suggests plasma is ejected from the sheet in the cross-field direction rather than along it. The maximum energy to which such structures can accelerate particles is derived, and used to compute the maximum frequency of the subsequent synchrotron radiation. This can be substantially in excess of standard estimates. In the magnetically driven gamma-ray burst scenario, acceleration of electrons is possible to energies sufficient to enable photon-photon pair production after an inverse Compton scattering event.
Particle acceleration in relativistic current sheets
We present a study of the secular evolution of a spherical stellar system with a central star-accreting black hole (BH) using the anisotropic gaseous model. This method numerically solves moment equations of the full Fokker-Planck equation, with Boltzmann-Vlasov terms on the left-hand side and collisional terms on the right-hand side. We study the growth of the central BH due to star accretion at its tidal radius and the feedback of this process onto the core collapse, as well as the post-collapse evolution of the surrounding stellar cluster, in a self-consistent manner. Diffusion in velocity space into the loss cone is approximated by a simple model. The results show that the self-regulated growth of the BH reaches a certain fraction of the total cluster mass and agrees with other methods. Our approach is much faster than competing ones (Monte Carlo, $N$-body) and provides detailed information about the time- and space-dependent evolution of all relevant properties of the system. In this work we present the method and study simple models (equal stellar masses, no stellar evolution or collisions). Nonetheless, a generalisation to include such effects is conceptually simple and under way.
Accretion of stars on to a massive black hole: A realistic diffusion model and numerical studies
To analyze data supported by arbitrary graphs G, digital signal processing (DSP) has been extended to Graph Signal Processing (GSP) by redefining traditional DSP concepts such as shift, filtering, and the Fourier transform, among others. This paper revisits modulation, convolution, and sampling of graph signals as natural extensions of the corresponding DSP concepts. To define these in both the vertex and the graph frequency domains, we associate with a generic data graph G and its graph shift A a graph spectral shift M and a spectral graph Gs. This leads to a spectral GSP theory that parallels in the graph frequency domain the existing GSP theory in the vertex domain. The paper applies this to design sampling and recovery techniques for data supported by arbitrary directed graphs.
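As a minimal illustration of the vertex-domain notions being extended (not the paper's spectral construction), the sketch below uses a directed cycle graph, for which the graph shift reduces to the classical time shift and the graph Fourier basis to the DFT basis; the graph size and the first-order filter are illustrative choices.

```python
import numpy as np

# Directed cycle on N vertices: the graph shift A is the cyclic
# permutation matrix, so (A x)[n] = x[n-1 mod N], the classic DSP shift.
N = 4
A = np.roll(np.eye(N), 1, axis=0)

# Graph Fourier transform: diagonalize the shift, A = V diag(lam) V^{-1}.
lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

x = np.array([1.0, 2.0, 3.0, 4.0])   # a graph signal on the vertices
x_hat = Vinv @ x                     # its graph frequency content

# A graph filter is a polynomial in the shift; spectrally it acts by
# pointwise multiplication with the polynomial evaluated at lam.
h = lambda s: 0.5 + 0.5 * s          # simple first-order filter
y_vertex = 0.5 * x + 0.5 * (A @ x)   # filtering in the vertex domain
y_spectral = V @ (h(lam) * x_hat)    # the same filtering, spectrally

assert np.allclose(y_vertex, y_spectral)
```

For this graph the eigenvalues `lam` are the N-th roots of unity, which is why the two domains coincide with ordinary DSP here; for a general directed graph the same eigendecomposition of A defines the graph frequency domain.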
Graph Signal Processing: Modulation, Convolution, and Sampling
Images can vary with changes in viewpoint, resolution, noise, and illumination. In this paper, we aim to learn image representations that are robust to wide variations in such environmental conditions, using training pairs of matching and non-matching local image patches collected under various environmental conditions. We present a regularized discriminant analysis that emphasizes two challenging categories among the given training pairs: (1) matching but far-apart pairs and (2) non-matching but close pairs in the original feature space (e.g., the SIFT feature space). Compared to existing work on metric learning and discriminant analysis, our method better distinguishes relevant images from irrelevant but look-alike images.
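A toy sketch of how the two challenging categories of pairs can be identified in the original feature space. The synthetic descriptors stand in for real SIFT features, and the quantile-based selection rule is an assumed heuristic for illustration, not the paper's criterion.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for 128-D SIFT descriptors of local patches:
# b[i] matches a[i] (same patch under perturbed conditions), c[i] does not.
a = rng.normal(size=(200, 128))
b = a + rng.normal(scale=0.5, size=(200, 128))   # matching partners
c = rng.normal(size=(200, 128))                  # non-matching partners

d_pos = np.linalg.norm(a - b, axis=1)   # distances of matching pairs
d_neg = np.linalg.norm(a - c, axis=1)   # distances of non-matching pairs

# The two "challenging" categories the method emphasizes:
# (1) matching but far-apart pairs, (2) non-matching but close pairs,
# selected here as the extreme deciles of each distance distribution.
hard_pos = d_pos > np.quantile(d_pos, 0.9)
hard_neg = d_neg < np.quantile(d_neg, 0.1)
```

A discriminant objective would then upweight the pairs flagged by `hard_pos` and `hard_neg`, so the learned embedding pulls hard matching pairs together and pushes hard non-matching pairs apart.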
Regularized Discriminant Embedding for Visual Descriptor Learning
We prove that any extended formulation that approximates the matching polytope on $n$-vertex graphs up to a factor of $(1+\varepsilon)$ for any $\frac2n \le \varepsilon \le 1$ must have at least $\binom{n}{{\alpha}/{\varepsilon}}$ defining inequalities where $0<\alpha<1$ is an absolute constant. This is tight as exhibited by the $(1+\varepsilon)$ approximating linear program obtained by dropping the odd set constraints of size larger than $({1+\varepsilon})/{\varepsilon}$ from the description of the matching polytope. Previously, a tight lower bound of $2^{\Omega(n)}$ was only known for $\varepsilon = O\left(\frac{1}{n}\right)$ [Rothvoss, STOC '14; Braun and Pokutta, IEEE Trans. Information Theory '15] whereas for $\frac2n \le \varepsilon \le 1$, the best lower bound was $2^{\Omega\left({1}/{\varepsilon}\right)}$ [Rothvoss, STOC '14]. The key new ingredient in our proof is a close connection to the non-negative rank of a lopsided version of the unique disjointness matrix.
Lower Bounds for Approximating the Matching Polytope
Heterogeneous computing is one of the most important computational solutions to meet rapidly increasing demands on system performance. It typically allows the main flow of an application to be executed on a CPU while the most computationally intensive tasks are assigned to one or more accelerators, such as GPUs and FPGAs. The refactoring of systems for execution on such platforms is highly desired but also difficult to perform, mainly due to the inherent increase in software complexity. After exploration, we have identified a current need for a systematic approach that supports engineers in the refactoring process -- from CPU-centric applications to software that is executed on heterogeneous platforms. In this paper, we introduce a decision framework that assists engineers in the task of refactoring software to incorporate heterogeneous platforms. It covers the software engineering lifecycle through five steps, consisting of questions to be answered in order to successfully address the aspects that are relevant to the refactoring procedure. We evaluate the feasibility of the framework in two ways. First, we capture practitioners' impressions, concerns, and suggestions through a questionnaire. Then, we conduct a case study showing the step-by-step application of the framework to a computer vision application in the automotive domain.
HPM-Frame: A Decision Framework for Executing Software on Heterogeneous Platforms