We report new results from our effort to identify obscured Wolf-Rayet stars in the Galaxy. Candidates were selected by their near-infrared (2MASS) and mid-infrared (Spitzer/GLIMPSE) color excesses, which are consistent with free-free emission from ionized stellar winds and thermal excess from hot dust. We have confirmed 12 new Wolf-Rayet stars in the Galactic disk, including 9 of the nitrogen subtype (WN), and 3 of the carbon subtype (WC); this raises the total number of Wolf-Rayet stars discovered with our approach to 27. We classify one of the new stars as a possible dust-producing WC9d+OBI colliding-wind binary, as evidenced by an infrared excess resembling that of known WC9d stars, the detection of OBI features superimposed on the WC9 spectrum, and hard X-ray emission detected by XMM-Newton. A WC8 star in our sample appears to be a member of the stellar cluster Danks 1, in contrast to the rest of the confirmed Wolf-Rayet stars that generally do not appear to reside within dense stellar clusters. Either the majority of the stars are runaways from clusters, or they formed in relative isolation. We briefly discuss prospects for the expansion and improvement of the search for Wolf-Rayet stars throughout the Milky Way Galaxy.
With the first direct detections of gravitational waves (GWs) from the coalescence of compact binaries observed by the Advanced LIGO and Virgo interferometers, the era of GW astronomy has begun. Whilst there is strong evidence that the observed GWs are connected to the merger of two black holes (BH) or two neutron stars (NS), future detections may present a less consistent picture. Indeed, the possibility that the observed GW signal was created by a merger of exotic compact objects (ECOs) such as boson stars (BS) or axion stars (AS) has not yet been fully excluded. For a detailed understanding of the late stages of the coalescence, full 3D numerical relativity simulations are essential. In this paper, we extend the infrastructure of the numerical relativity code BAM to permit the simultaneous simulation of baryonic matter with bosonic scalar fields, thus enabling the study of BS-BS, BS-NS, and BS-BH mergers. We present a large number of single-star evolutions to test the newly implemented routines and to quantify the numerical challenges of such simulations, which we find to differ in part from the default NS case. We also compare head-on BS-BS simulations with independent numerical relativity codes, namely the SpEC and GRChombo codes, and find good general agreement. Finally, we present what are, to the best of our knowledge, the first full NR simulations of BS-NS mergers, a first step towards identifying the hallmarks of BS-NS interactions in the strong gravity regime, as well as possible GW and electromagnetic observables.
The thermal conductivity of the heavy-fermion superconductor CeCoIn$_5$ has been studied in a magnetic field rotating within the 2D planes. A clear fourfold symmetry of the thermal conductivity, which is characteristic of a superconducting gap with nodes along the $(\pm\pi,\pm\pi)$ directions, is resolved. The thermal conductivity measurement also reveals a first-order transition at $H_{c2}$, indicating a Pauli-limited superconducting state. These results indicate that the gap symmetry most likely belongs to $d_{x^2-y^2}$, implying that anisotropic antiferromagnetic fluctuations are relevant to the superconductivity.
The recovery of a signal from the magnitudes of its transformation, such as the Fourier transform, is known as the phase retrieval problem and is of great relevance in various fields of engineering and applied physics. In this paper, we present a fast inertial/momentum-based algorithm for the phase retrieval problem, and we prove a convergence guarantee both for the new algorithm and for the Fast Griffin-Lim algorithm (FGLA), whose convergence had remained unproven in the past decade. In the final chapter, we compare the new algorithm for short-time Fourier transform phase retrieval with the Griffin-Lim algorithm, FGLA, and other iterative algorithms typically used for this type of problem.
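To make the inertial step concrete, here is a minimal sketch of the Fast Griffin-Lim iteration for short-time Fourier transform phase retrieval, written with NumPy and SciPy; the momentum value, the STFT settings, and the frame-alignment shortcut are illustrative choices, not the paper's implementation.

```python
import numpy as np
from scipy.signal import stft, istft

def fast_griffin_lim(target_mag, alpha=0.99, n_iter=100, nperseg=256):
    """Inertial (momentum) Griffin-Lim: recover a signal from STFT magnitudes.

    target_mag: magnitude spectrogram of shape (nperseg // 2 + 1, frames),
    i.e. a one-sided STFT magnitude. alpha is the momentum parameter in
    [0, 1]; alpha = 0 recovers the plain Griffin-Lim iteration.
    """
    rng = np.random.default_rng(0)
    # Start from the target magnitudes with random phases.
    c = target_mag * np.exp(2j * np.pi * rng.random(target_mag.shape))
    t_prev = c
    for _ in range(n_iter):
        # Projection onto consistent spectrograms: inverse STFT, then STFT.
        _, x = istft(c, nperseg=nperseg)
        _, _, c_proj = stft(x, nperseg=nperseg)
        m = min(c_proj.shape[1], target_mag.shape[1])   # guard frame mismatch
        # Projection onto the prescribed magnitudes, keeping the current phase.
        t = c.copy()
        t[:, :m] = target_mag[:, :m] * np.exp(1j * np.angle(c_proj[:, :m]))
        # Inertial/momentum extrapolation (the FGLA step).
        c = t + alpha * (t - t_prev)
        t_prev = t
    _, x = istft(t_prev, nperseg=nperseg)
    return x
```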
Unpacking and comprehending how black-box machine learning algorithms make decisions has been a persistent challenge for researchers and end-users. Explaining time-series predictive models is useful for clinical applications with high stakes to understand the behavior of prediction models. However, existing approaches to explain such models are frequently limited to data whose features have no time-varying component. In this paper, we introduce WindowSHAP, a model-agnostic framework for explaining time-series classifiers using Shapley values. We intend for WindowSHAP to mitigate the computational complexity of calculating Shapley values for long time-series data as well as improve the quality of explanations. WindowSHAP is based on partitioning a sequence into time windows. Under this framework, we present three distinct algorithms, Stationary, Sliding, and Dynamic WindowSHAP, each evaluated against the baseline approaches KernelSHAP and TimeSHAP using perturbation and sequence analysis metrics. We applied our framework to clinical time-series data from both a specialized clinical domain (Traumatic Brain Injury, TBI) and a broad clinical domain (critical care medicine). The experimental results demonstrate that, based on the two quantitative metrics, our framework is superior at explaining clinical time-series classifiers, while also reducing the computational complexity. We show that for time-series data with 120 time steps (hours), merging 10 adjacent time points can reduce the CPU time of WindowSHAP by 80% compared to KernelSHAP. We also show that our Dynamic WindowSHAP algorithm focuses more on the most important time steps and provides more understandable explanations. As a result, WindowSHAP not only accelerates the calculation of Shapley values for time-series data, but also delivers more understandable explanations of higher quality.
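The window idea can be illustrated with a small permutation-sampling Shapley estimator in which each fixed-length time window, rather than each individual time step, acts as one player. This is only a sketch of the stationary variant under assumed array shapes, not the authors' KernelSHAP-based implementation; with w windows instead of T time steps, the number of players shrinks by a factor of T/w.

```python
import numpy as np

def stationary_window_shap(model, x, background, win_len=10, n_samples=200, seed=0):
    """Monte-Carlo Shapley values where each *time window* is one player.

    model: callable mapping a batch (n, T, F) to predictions (n,).
    x: instance to explain, shape (T, F); background: baseline series, (T, F).
    Returns one attribution per window, shape (T // win_len,).
    """
    rng = np.random.default_rng(seed)
    n_win = x.shape[0] // win_len
    phi = np.zeros(n_win)

    def compose(mask):
        # Take window w from x where mask[w] is True, else from the background.
        z = background.copy()
        for w in np.flatnonzero(mask):
            z[w * win_len:(w + 1) * win_len] = x[w * win_len:(w + 1) * win_len]
        return z

    for _ in range(n_samples):
        order = rng.permutation(n_win)
        mask = np.zeros(n_win, dtype=bool)
        prev = model(compose(mask)[None])[0]
        for w in order:                     # add windows one by one
            mask[w] = True
            cur = model(compose(mask)[None])[0]
            phi[w] += cur - prev            # marginal contribution of window w
            prev = cur
    return phi / n_samples
```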
Covers are a kind of quasiperiodicity in strings. A string $C$ is a cover of another string $T$ if any position of $T$ is inside some occurrence of $C$ in $T$. The shortest and longest cover arrays of $T$ have the lengths of the shortest and longest covers of each prefix of $T$, respectively. The literature has proposed linear-time algorithms computing the longest and shortest cover arrays taking border arrays as input. An equivalence relation $\approx$ over strings is called a substring consistent equivalence relation (SCER) iff $X \approx Y$ implies (1) $|X| = |Y|$ and (2) $X[i:j] \approx Y[i:j]$ for all $1 \le i \le j \le |X|$. In this paper, we generalize the notion of covers for SCERs and prove that the existing algorithms computing the shortest cover array and the longest cover array of a string $T$ under the identity relation work for any SCER, taking the correspondingly generalized border arrays as input.
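For the identity relation, the classical linear-time computation of the shortest cover array from the border array (in the style of Breslauer's on-line algorithm) can be sketched as below; the generalization in the paper replaces the border array with its SCER analogue. This Python sketch is for orientation only.

```python
def border_array(T):
    """B[i] = length of the longest proper border of the prefix T[:i+1]."""
    n, B, k = len(T), [0] * len(T), 0
    for i in range(1, n):
        while k > 0 and T[i] != T[k]:
            k = B[k - 1]
        if T[i] == T[k]:
            k += 1
        B[i] = k
    return B

def shortest_cover_array(T):
    """C[i] = length of the shortest cover of T[:i+1], driven by the border array."""
    n = len(T)
    B = border_array(T)
    C = [0] * n          # shortest cover length of each prefix
    R = [0] * (n + 1)    # R[c] = rightmost position covered by the live cover of length c
    for i in range(1, n + 1):            # prefix length i (1-based)
        b = B[i - 1]
        c = C[b - 1] if b > 0 else 0
        # The shortest cover of the border reoccurs as a suffix ending at i;
        # it extends to a cover of T[:i] iff its occurrences leave no gap.
        if b > 0 and R[c] >= i - c:
            C[i - 1] = c
            R[c] = i
        else:                            # otherwise the prefix only covers itself
            C[i - 1] = i
            R[i] = i
    return C

print(shortest_cover_array("abaaba"))    # [1, 2, 3, 4, 5, 3]: "aba" covers "abaaba"
```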
Many studies have shown that Physarum polycephalum slime mold is able to find the shortest path in a maze. In this paper we study this behavior in a network, using a hyperbolic model of chemotaxis. Suitable transmission and boundary conditions at each node are considered to mimic the behavior of such an organism in the feeding process. Several numerical tests are presented for special network geometries to show the qualitative agreement between our model and the observed behavior of the mold.
We perform a detailed study of mesonic properties in a class of holographic models of QCD, which is described by the Yang-Mills plus Chern-Simons action. By decomposing the 5-dimensional gauge field into resonances and integrating out the massive ones, we reproduce the Chiral Perturbation Theory Lagrangian up to ${\cal O}(p^6)$ and obtain all the relevant low energy constants (LECs). The numerical predictions of the LECs show minor model dependence, and agree reasonably with determinations from other approaches. Interestingly, various model-independent relations appear among them. Some of these relations are found to be the large-distance limits of universal relations between form factors of the anomalous and even-parity sectors of QCD.
The charged Higgs associated production with a W boson has a smooth cross section as a function of the charged Higgs mass at muon colliders. The cross section in the Minimal Supersymmetric Standard Model is about 25 fb in the range 200 GeV < mH < 400 GeV with tanbeta = 50. This is much larger than the corresponding cross section at an e+e- collider, which reaches only a fraction of a femtobarn. The observability of this channel at a muon collider has been studied in an earlier work, with the result that with 1 ab-1, a 5sigma signal can be observed throughout the aforementioned mass range. In this paper, results of a study based on a general two Higgs doublet model (type II and III) are presented and the cross section of this process in the most sensitive parameter space is evaluated. It is concluded that the cross section of this process increases with increasing masses of the neutral Higgs bosons involved in the s-channel diagram and can be as large as several picobarn with tanbeta = 50. The region of "physical Higgs boson mass" parameter space which could lead to a 5 sigma signal at 50 fb-1 is specified.
Buildings were introduced by J. Tits in order to study semi-simple algebraic groups from a geometrical point of view. One of the most important results in the theory of buildings is the classification of irreducible spherical buildings of rank at least 3. About 25 years ago, M. Ronan and J. Tits defined the class of twin buildings, which generalize spherical buildings in a natural way. The motivation for their definition is provided by the theory of Kac-Moody groups. A 2-spherical twin building is uniquely determined by its local structure in almost all cases: the so-called foundation is the union of the rank 2 residues which contain an (arbitrary) chamber. Therefore, the classification of 2-spherical twin buildings reduces to the classification of all foundations which can be realized as the local structure of such a twin building. We call such a foundation "integrable". By a result of Tits, an integrable foundation is Moufang, which means that the rank 2 buildings in the foundation are Moufang polygons, and that the glueings are compatible with the Moufang structures induced on the rank 1 residues. As a consequence, the classification of Moufang polygons and the solution of the isomorphism problem for Moufang sets are essential to work out which Moufang polygons fit together in order to form a foundation. The present thesis contributes to establishing complete lists of integrable foundations for certain types of diagrams, namely for simply laced diagrams and for 443 triangle diagrams. In this process, we closely follow the approach for the classification of spherical buildings. However, we have to refine the techniques used there, since in general, foundations do not depend only on the diagram and the defining field. Moreover, one of the main results in the context of Moufang sets is the solution of the isomorphism problem for Moufang sets of pseudo-quadratic spaces.
In a recent work, Fouli and Lin generalized a result of Villarreal and showed that if each connected component of the line graph of a squarefree monomial ideal contains at most one odd cycle, then the ideal is of linear type. In this short note, we reprove this result with Villarreal's original ideas together with a method of Conca and De Negri. We also propose a class of squarefree monomial ideals of linear type.
The densities of Yang-Lee zeros for the Ising ferromagnet on the $L\times L$ square lattice are evaluated from the exact grand partition functions ($L=3\sim16$). The properties of the density of Yang-Lee zeros are discussed as a function of temperature $T$ and system size $L$. The three different classes of phase transitions for the Ising ferromagnet, first-order phase transition, second-order phase transition, and Yang-Lee edge singularity, are clearly distinguished by estimating the magnetic scaling exponent $y_h$ from the densities of zeros for finite-size systems. The divergence of the density of zeros at the Yang-Lee edge at high temperatures (Yang-Lee edge singularity), which until now had been detected only by series expansion for the square-lattice Ising ferromagnet, is obtained from the finite-size data. The identification of the orders of phase transitions in small systems is also discussed using the density of Yang-Lee zeros.
This is a sequel to our previous paper (joint with Furusho). It gives a more natural framework for constructing elements in the Hopf algebra of framed mixed Tate motives according to Bloch and Kriz. This framework allows us to extend our previous results to interpret all multiple zeta values (including the divergent ones) and the multiple polylogarithms in one variable as elements of this Hopf algebra. It implies that the pro-unipotent completion of the torsor of paths on the projective line minus three points is a mixed Tate motive in the sense of Bloch-Kriz. It also allows us to interpret the multiple logarithm as an element of this Hopf algebra as long as the products of consecutive arguments are not 1.
In this paper we prove the existence of a uniform bound for Frobenius test exponents for parameter ideals of a local ring $(R, \frak m)$ of prime characteristic in the following cases: (1) $R$ is generalized Cohen-Macaulay; our proof is much simpler than the original proof of Huneke, Katzman, Sharp and Yao. (2) The Frobenius actions on all lower local cohomologies $H^i_{\frak m}(R)$, $i < \dim R$, are nilpotent.
We study the geometry of two-dimensional models of conformal space-time based on the group of Moebius transformations. The natural geometric invariants, called cycles, are used to linearise the Moebius action. Conformal completion of the space-time is achieved through the addition of a zero-radius cycle at infinity. We pay attention to the natural condition of non-reversibility of the time arrow in order to get a correct compactification in the hyperbolic case.
Selecting suitable charge transport layers and suppressing non-radiative recombination at interfaces to the absorber layer are vital to maximize the efficiency of halide perovskite solar cells. In this work, high-quality perovskite thin films and devices are fabricated with different fullerene-based electron transport layers and different self-assembled monolayers as hole transport layers. We then perform a comparative study of a significant variety of different electrical, optical and photoemission-based characterization techniques to quantify the properties of the solar cells, the individual layers and importantly the interfaces between them. In addition, we highlight the limitations and problems of the different measurements, the insights gained by combining different methods and the different strategies to extract information from the experimental raw data.
The geometric, aesthetic, and mathematical elegance of origami is being recognized as a powerful pathway to self-assembly of micro and nano-scale machines with programmable mechanical properties. The typical approach to designing the mechanical response of an ideal origami machine is to include mechanisms where mechanical constraints transform applied forces into a desired motion along a narrow set of degrees of freedom. In fact, to date, most design approaches focus on building up complex mechanisms from simple ones in ways that preserve each individual mechanism's degree of freedom (DOF), with examples ranging from simple robotic arms to homogeneous arrays of identical vertices, such as the well-known Miura-ori. However, such approaches typically require tight fabrication tolerances, and often suffer from parasitic compliance. In this work, we demonstrate a technique in which high-degree-of-freedom mechanisms associated with single vertices are heterogeneously combined so that the coupled phase spaces of neighboring vertices are pared down to a controlled range of motions. This approach has the advantage that it produces mechanisms that retain the DOF at each vertex, are robust against fabrication tolerances and parasitic compliance, but nevertheless effectively constrain the range of motion of the entire machine. We demonstrate the utility of this approach by mapping out the configuration space for the modified Miura-ori vertex of degree 6, and show that when strung together, their combined configuration spaces create mechanisms that isolate deformations, constrain the configuration topology of neighboring vertices, or lead to sequential bistable folding throughout the entire origami sheet.
For the SPECTRAP experiment at GSI, Germany, detectors with single-photon counting capability in the visible and near-infrared regime are required. For the wavelength region up to 1100 nm we investigate the performance of 2x2 mm^2 avalanche photo diodes (APDs) of type S0223 manufactured by Radiation Monitoring Devices. To minimize thermal noise, the APDs are cooled to approximately -170 deg. C using liquid nitrogen. By operating the diodes close to the breakdown voltage it is possible to achieve relative gains in excess of 2x10^4. Custom-made low-noise preamplifiers are used to read out the devices. The measurements presented in this paper have been obtained at a relative gain of 2.2x10^4. At a discriminator threshold of 6 mV the resulting dark count rate is in the region of 230/s. With these settings the studied APDs are able to detect single photons at 628 nm wavelength with a photo detection efficiency of (67+-7)%. Measurements at 1020 nm wavelength have been performed using the attenuated output of a grating spectrograph with a light bulb as photon source. With this setup the photo detection efficiency at 1020 nm has been determined to be (13+-3)%, again at a threshold of 6 mV.
Recently, it has become increasingly popular to equip mobile RGB cameras with Time-of-Flight (ToF) sensors for active depth sensing. However, for off-the-shelf ToF sensors, one must tackle two problems in order to obtain high-quality depth with respect to the RGB camera, namely 1) online calibration and alignment; and 2) complicated error correction for ToF depth sensing. In this work, we propose a framework for joint alignment and refinement via deep learning. First, a cross-modal optical flow between the RGB image and the ToF amplitude image is estimated for alignment. The aligned depth is then refined via an improved kernel predicting network that performs kernel normalization and applies the bias prior to the dynamic convolution. To enrich our data for end-to-end training, we have also synthesized a dataset using tools from computer graphics. Experimental results demonstrate the effectiveness of our approach, achieving state-of-the-art performance for ToF refinement.
We consider the inverse problem of H\"older-stably determining the time- and space-dependent coefficients of the Schr\"odinger equation on a simple Riemannian manifold with boundary of dimension $n\geq2$ from knowledge of the Dirichlet-to-Neumann map. Assuming the divergence of the magnetic potential is known, we show that the electric and magnetic potentials can be H\"older-stably recovered from these data. Here we also remove the smallness assumption for the solenoidal part of the magnetic potential present in previous results.
The main purpose of this paper is to present a general method for controlling the stability of a system of fractional-order differential equations around its equilibrium states. This method is applied to analyze and control the stability of the fractional two-dimensional Toda lattice with one linear control.
We derive Kubo formulae for first-order spin hydrodynamics based on the non-equilibrium statistical operator method. In first-order spin hydrodynamics, there are two new transport coefficients besides the ordinary ones appearing in first-order viscous hydrodynamics. They emerge due to the incorporation of the spin degree of freedom into fluids and the spin-orbital coupling. Zubarev's non-equilibrium statistical operator method is well suited to investigating these quantum effects in fluids. The Kubo formulae, derived with this method, are related to equilibrium (imaginary-time) infrared Green's functions, and all the transport coefficients can be determined once the microscopic theory is specified.
We clarify three aspects of non-compact elliptic genera. Firstly, we give a path integral derivation of the elliptic genus of the cigar conformal field theory from its non-linear sigma-model description. The result is a manifestly modular sum over a lattice. Secondly, we discuss supersymmetric quantum mechanics with a continuous spectrum. We regulate the theory and analyze the dependence on the temperature of the trace weighted by the fermion number. The dependence is dictated by the regulator. From a detailed analysis of the dependence on the infrared boundary conditions, we argue that in non-compact elliptic genera right-moving supersymmetry combined with modular covariance is anomalous. Thirdly, we further clarify the relation between the flat space elliptic genus and the infinite level limit of the cigar elliptic genus.
The environmental effect is commonly used to explain the excess of gas-poor galaxies in galaxy clusters. Meanwhile, the presence of gas-poor galaxies at cluster outskirts, where galaxies have not spent enough time to feel the cluster environmental effect, hints at the presence of pre-processing. Using cosmological hydrodynamic simulations of 16 clusters, we investigate the mechanisms of gas depletion of galaxies found inside clusters. The gas depletion mechanisms can be categorized into three channels based on where and when they took place. First, 34$\%$ of our galaxies are gas-poor before entering clusters (`pre-processing'). They are mainly satellites that have undergone the environmental effect inside group halos. Second, 43$\%$ of the sample quickly became gas-deficient in clusters before their first pericentric pass (`fast cluster processing'). Some of them were group satellites that were low in gas at the time of cluster entry compared to the galaxies coming directly from the field. Even the galaxies with large gas fractions take this channel if they fall into massive clusters ($> 10^{14.5}\, \rm M_{\odot}$) or approach cluster centers on radial orbits. Third, 24$\%$ of our sample retain gas even after their first pericentric pass (`slow cluster processing'), as they fall into less massive clusters and/or have circular orbits. The relative importance of each channel varies with cluster mass, while the exact degree of significance is subject to large uncertainties. Group pre-processing accounts for a third of the total gas depletion; but it also determines the gas fraction of galaxies at their cluster entry, which in turn determines whether a galaxy takes the fast or the slow cluster processing channel.
This paper presents an approach to tackle the person re-identification problem. This is a challenging problem due to the large variation of pose, illumination, and camera view. More and more datasets are available to train machine learning models for person re-identification. These datasets vary in conditions (number of cameras, camera positions, location, season), in size (number of images and of distinct identities), and in labeling (some datasets are annotated with attributes while others are not). To deal with this variety of datasets, we present an approach that takes information from different datasets to build a system which performs well on all of them. Our model is based on a Convolutional Neural Network (CNN) and trained using multitask learning. Several losses are used to extract the different information available in the different datasets. Our main task is learned with a classification loss. To reduce the intra-class variation, we experiment with the center loss. Our paper ends with a performance evaluation in which we discuss the influence of the different losses on the global re-identification performance. We show that with our method, we are able to build a system that performs well on different datasets and simultaneously extracts attributes. We also show that our system outperforms recent re-identification works on two datasets.
The non-perturbative dynamics of quantum field theories is studied using theoretical tools inspired by string formalism. Two main lines are developed: the analysis of stringy instantons in a class of four-dimensional N=2 gauge theories and the holographic study of the minimal model for a strongly coupled unbalanced superconductor. The field theory instanton calculus admits a natural and efficient description in terms of D-brane models. In addition, the string viewpoint offers the possibility of generalizing the ordinary instanton configurations. Even though such generalized, or stringy, instantons would be absent in a purely field-theoretical, low-energy treatment, we demonstrate that they do alter the IR effective description of the brane dynamics by introducing contributions related to the string scale. In the first part of this thesis we compute explicitly the stringy instanton corrections to the effective prepotential in a class of quiver gauge theories. In the second part of the thesis, we present a detailed analysis of the minimal holographic setup yielding an effective description of a superconductor with two Abelian currents. The model contains a scalar field whose condensation produces a spontaneous symmetry breaking which describes the transition to a superfluid phase. This system has important applications in both QCD and condensed matter physics; moreover, it allows us to study mixed electric-spin transport properties (i.e. spintronics) at strong coupling.
We present a new method to identify connected components on triangular grids used in atmosphere and climate models to discretize the horizontal dimension. In contrast to structured latitude-longitude grids, triangular grids are unstructured, and the neighbors of a grid cell do not simply follow from the grid cell index. This complicates the identification of connected components compared to structured grids. Here, we show that this complication can be addressed by employing the mathematical tool of cubulation, which allows one to map the 2-d cells of the triangular grid onto the vertices of the 3-d cells of a cubic grid. Because the latter is structured, connected components can be readily identified by previously developed software packages for cubic grids. Computing the cubulation can be expensive, but importantly needs to be done only once for a given grid. We implement our method in a Python package that we name TriCCo and make available via pypi, gitlab and zenodo. We document the package and demonstrate its application using simulation output from the ICON atmosphere model. Finally, we characterize its computational performance and compare it to graph-based identification of connected components using breadth-first search. The latter shows that TriCCo is ready for triangular grids with up to 500,000 cells, but that its speed and memory requirements should be improved for application to larger grids.
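For comparison, the graph-based baseline is straightforward to state: a breadth-first search over a cell-neighbor table. The sketch below assumes a hypothetical (n_cells, 3) neighbor array and a boolean mask of active cells; TriCCo's own API is not reproduced here.

```python
from collections import deque
import numpy as np

def connected_components(neighbors, active):
    """Label connected components among 'active' cells of an unstructured grid.

    neighbors: (n_cells, 3) int array with the edge-neighbor indices of each
    triangle (-1 where a neighbor is missing, e.g. at domain boundaries).
    active: (n_cells,) bool array, e.g. cells where a field exceeds a threshold.
    Returns labels: (n_cells,) int array, -1 for inactive cells, else component id.
    """
    labels = np.full(len(active), -1, dtype=int)
    comp = 0
    for seed in range(len(active)):
        if not active[seed] or labels[seed] != -1:
            continue
        queue = deque([seed])          # breadth-first search from each new seed
        labels[seed] = comp
        while queue:
            c = queue.popleft()
            for nb in neighbors[c]:
                if nb >= 0 and active[nb] and labels[nb] == -1:
                    labels[nb] = comp
                    queue.append(nb)
        comp += 1
    return labels
```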
The measured masses of the Higgs boson and top quark indicate that the effective potential of the standard model either develops an unstable electroweak vacuum or stands stable all the way up to the Planck scale. In the latter case in which the top quark mass is about $2\sigma$ below its present central value, the Higgs boson can be the inflaton with the help of a large nonminimal coupling to curvature in four dimensions. We propose a scenario in which the Higgs boson can be the inflaton in a five-dimensional Gauss-Bonnet braneworld model to solve both the unitarity and stability problems which usually plague Higgs inflation. We find that in order for Higgs inflation to happen successfully in the Gauss-Bonnet regime, the extra dimension scale must appear roughly in the range between the TeV scale and the instability scale of standard model. At the tree level, our model can give rise to a naturally small nonminimal coupling $\xi\sim\mathcal{O}(1)$ for the Higgs quartic coupling $\lambda\sim\mathcal{O}(0.1)$ if the extra dimension scale lies at the TeV scale. At the loop level, the inflationary predictions at the tree level are preserved. Our model can be confronted with future experiments and observations from both particle physics and cosmology.
A quantum-kinetic approach to the ultrafast dynamics of carrier multiplication in semiconductor quantum dots is presented. We investigate the underlying dynamics in the electronic subband occupations and the time-resolved optical emission spectrum, focusing on the interplay between the light-matter and the Coulomb interaction. We find a transition between qualitatively differing behaviors of carrier multiplication, which is controlled by the ratio of the interaction-induced time scale and the pulse duration of the exciting light pulse. On short time scales, i.e., before intra-band relaxation, this opens the possibility of detecting carrier multiplication without referring to measurements of (multi-)exciton lifetimes.
For a Grothendieck category having a noetherian generator, we prove that there are only finitely many minimal atoms. This is a noncommutative analogue of the fact that every noetherian scheme has only finitely many irreducible components. It is also shown that each minimal atom is represented by a compressible object.
The $B(E2;0^+\to2^+)$ value in $^{68}$Ni has been measured using Coulomb excitation at safe energies. The $^{68}$Ni radioactive beam was post-accelerated at the ISOLDE facility (CERN) to 2.9 MeV/u. The emitted $\gamma$ rays were detected by the MINIBALL detector array. A kinematic particle reconstruction was performed in order to increase the measured c.m. angular range of the excitation cross section. The obtained value of $2.8^{+1.2}_{-1.0}\times 10^2$ e$^2$fm$^4$ is in good agreement with the value measured at intermediate-energy Coulomb excitation, confirming the low $0^+\to2^+$ transition probability.
Optical nanoresonators are fundamental building blocks in a number of nanotechnology applications (e.g. in spectroscopy) due to their ability to efficiently confine light at the nanoscale. Recently, nanoresonators based on the excitation of phonon polaritons (PhPs) $-$ light coupled to lattice vibrations $-$ in polar crystals (e.g. SiC, or h-BN) have attracted much attention due to their strong field confinement, high quality factors, and potential to enhance the photonic density of states at mid-infrared (IR) frequencies. Here, we go one step further by introducing PhP nanoresonators that not only exhibit these extraordinary properties but also incorporate a new degree of freedom $-$ twist tuning, i.e. the possibility to be spectrally controlled by a simple rotation. To that end, we both take advantage of the low-loss in-plane hyperbolic propagation of PhPs in the van der Waals crystal $\alpha$-MoO$_3$, and realize dielectric engineering of a pristine $\alpha$-MoO$_3$ slab placed on top of a metal ribbon grating, which preserves the high quality of the polaritonic resonances. By simply rotating the $\alpha$-MoO$_3$ slab in the plane (from 0 to 45$^{\circ}$), we demonstrate via far- and near-field measurements that the narrow polaritonic resonances (with quality factors Q up to 200) can be tuned in a broad range (up to 32 cm$^{-1}$, i.e. up to ~6 times the full width at half maximum, FWHM ~ 5 cm$^{-1}$). Our results open the door to the development of tunable low-loss nanotechnologies at IR frequencies with applications in sensing, emission or photodetection.
A $(G,[k_1,\dots,k_t],\lambda)$ {\it partitioned difference family} (PDF) is a partition $\cal B$ of an additive group $G$ into sets ({\it blocks}) of sizes $k_1$, \dots, $k_t$, such that the list of differences of ${\cal B}$ covers exactly $\lambda$ times every non-zero element of $G$. It is called {\it Hadamard} (HPDF) if the order of $G$ is $2\lambda$. The study of HPDFs is motivated by the fact that each of them gives rise, recursively, to infinitely many other PDFs. Apart from the {\it elementary} HPDFs consisting of a Hadamard difference set and its complement, only one HPDF was known. In this article we present three new examples in several groups and we start a general investigation on the possible existence of HPDFs with assigned parameters by means of simple arguments.
We calculate the in-plane modes of the vortex lattice in a rotating Bose condensate from the Thomas-Fermi to the mean-field quantum Hall regimes. The Tkachenko mode frequency goes from linear in the wavevector, $k$, for lattice rotational velocities, $\Omega$, much smaller than the lowest sound wave frequency in a finite system, to quadratic in $k$ in the opposite limit. The system also supports an inertial mode of frequency $\ge 2\Omega$. The calculated frequencies are in good agreement with recent observations of Tkachenko modes at JILA, and provide evidence for the decrease in the shear modulus of the vortex lattice at rapid rotation.
The evolution over time of the non-linear slip behavior of a polydimethylsiloxane (PDMS) polymer melt on a weakly adsorbing surface, made of short non-entangled PDMS chains densely end-grafted to the surface of a fused silica prism, has been measured. The critical shear rate at which the melt enters the nonlinear slip regime has been shown to increase with time. The adsorption kinetics of the melt on the same surface has been determined independently using ellipsometry. We show that the evolution of slip can be explained by the slow adsorption of melt chains, using the Brochard-de Gennes model.
In this letter I present results from a correlation analysis of three galaxy redshift catalogs: the SSRS2, the CfA2 and the PSCz. I will focus on the observation that the amplitude of the two--point correlation function rises if the depth of the sample is increased. There are two competing explanations for this observation, one in terms of a fractal scaling, the other based on luminosity segregation. I will show that there is strong evidence that the observed growth is due to a luminosity dependent clustering of the galaxies.
This paper introduces a novel skiagraphic method for shading toroidal forms in architectural illustrations, addressing the challenges of traditional techniques. Skiagraphy projects 3D objects onto 2D surfaces to display geometric properties. Traditional shading of tori involves extensive manual calculations and multiple projections, leading to high complexity and inaccuracies. The proposed method simplifies this by focusing on the elevation view, eliminating the need for multiple projections and complex math. Utilizing descriptive geometry, it reduces labor and complexity. Accuracy was validated through comparisons with SketchUp-generated shading and various torus configurations. This technique streamlines shading toroidal shapes while maintaining the artistic value of traditional illustration. Additionally, it has potential applications in 3D model generation from architectural shade casts, contributing to the evolving field of architectural visualization and representation.
Recently, a rigorous yet concise formula has been derived to evaluate the information flow, and hence the causality in a quantitative sense, between time series. To assess the importance of a resulting causality, it needs to be normalized. The normalization is achieved through distinguishing three types of fundamental mechanisms that govern the marginal entropy change of the flow recipient. A normalized, or relative, flow measures its importance relative to other mechanisms. In analyzing realistic series, both absolute and relative information flows need to be taken into account, since the normalizers for a pair of reverse flows belong to two different entropy balances; it is quite normal that two identical flows may differ a lot in relative importance in their respective balances. We have reproduced these results with several autoregressive models. We have also shown applications to a climate change problem and a financial analysis problem. For the former, the role of the Indian Ocean Dipole as an uncertainty source for El Ni\~no prediction is reconfirmed. This might partly account for the unpredictability of certain aspects of El Ni\~no that has led to the recent portentous but spurious forecasts of the 2014 "Monster El Ni\~no". For the latter, an unusually strong one-way causality has been identified from IBM (International Business Machines Corporation) to GE (General Electric Company) in their early era, revealing to us an old story, which has almost faded into oblivion, about the "Seven Dwarfs" competing with a giant for the mainframe computer market.
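For orientation, a minimal bivariate estimator of the (un-normalized) information flow rate, in the maximum-likelihood form given by Liang (2014), can be written in a few lines of NumPy; the series names and unit time step are illustrative, and the normalization step discussed above is not shown.

```python
import numpy as np

def liang_information_flow(x1, x2, dt=1.0):
    """Estimate the information flow rate T(2->1) from series x2 to series x1.

    A sketch of the bivariate maximum-likelihood estimator of Liang (2014);
    turning this into a relative (normalized) flow is a separate step.
    """
    # Euler forward-difference estimate of the derivative of x1.
    dx1 = (x1[1:] - x1[:-1]) / dt
    x1, x2 = x1[:-1], x2[:-1]
    C = np.cov(x1, x2)                   # C[0,0]=C11, C[0,1]=C12, C[1,1]=C22
    c1d1 = np.cov(x1, dx1)[0, 1]         # Cov(x1, dx1/dt)
    c2d1 = np.cov(x2, dx1)[0, 1]         # Cov(x2, dx1/dt)
    num = C[0, 0] * C[0, 1] * c2d1 - C[0, 1] ** 2 * c1d1
    den = C[0, 0] ** 2 * C[1, 1] - C[0, 0] * C[0, 1] ** 2
    return num / den
```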
Besides the pursuit of narrow-bandwidth X-ray pulses, large-bandwidth free-electron laser pulses are also in strong demand to satisfy a wide range of scientific user experiments. In this paper, using a transversely tilted beam enabled by a deflecting cavity and/or a corrugated structure, the potential of large-bandwidth X-ray free-electron laser generation with the natural gradient of the planar undulator is discussed. Simulations confirm the theoretical prediction, and the X-ray free-electron laser bandwidth shows an increase of one order of magnitude with the optimized parameters.
Picking permutations at random, the expected number of k-cycles is known to be 1/k and is, in particular, independent of the size of the permuted set. This short note gives similar size-independent statistics of finite general linear groups: ones that depend only on small minors. The proof technique uses combinatorics of categories, motivated by representation stability, and applies simultaneously to symmetric groups, finite linear groups and many other settings.
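The size-independence of the classical statistic is easy to check empirically; the following short Monte-Carlo sketch estimates the expected number of k-cycles of a uniformly random permutation and compares it with 1/k.

```python
import random
from collections import Counter

def cycle_type(perm):
    """Count cycles of each length in a permutation given as a 0-indexed list."""
    seen = [False] * len(perm)
    counts = Counter()
    for start in range(len(perm)):
        if seen[start]:
            continue
        length, j = 0, start
        while not seen[j]:          # walk the cycle containing 'start'
            seen[j] = True
            j = perm[j]
            length += 1
        counts[length] += 1
    return counts

# Monte-Carlo check that E[#k-cycles] = 1/k, independently of n.
n, trials = 50, 20000
totals = Counter()
for _ in range(trials):
    p = list(range(n))
    random.shuffle(p)
    totals.update(cycle_type(p))
for k in (1, 2, 3, 4):
    print(k, totals[k] / trials, "vs", 1 / k)
```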
We calculate the two-boson-exchange (TBE) corrections to the parity asymmetry of the elastic electron-proton scattering in a model using the formalism of generalized parton distributions (GPDs).
This paper is devoted to a stochastic differential game of functional forward-backward stochastic differential equations (FBSDEs, for short). The associated upper and lower value functions of the stochastic differential game are defined by controlled functional backward stochastic differential equations (BSDEs, for short). Applying the Girsanov transformation method introduced by Buckdahn and Li [1], the upper and the lower value functions are shown to be deterministic. We also generalize the Hamilton-Jacobi-Bellman-Isaacs (HJBI, for short) equations to path-dependent ones. By establishing the dynamic programming principle (DPP, for short), the upper and the lower value functions are shown to be the viscosity solutions of the corresponding upper and lower path-dependent HJBI equations, respectively.
The quantum phase transition in an atom-molecule conversion system with atomic hopping between different hyperfine states is studied. In the mean-field approximation (MFA), we give the phase diagram whose phase boundary depends only on the atomic hopping strength and the atom-molecule energy detuning but not on the atomic interaction. Such a phase boundary is further confirmed by the fidelity of the ground state and the energy gap between the first-excited state and the ground one. In comparison to the mean-field approximation, we also study the quantum phase transition with a full quantum method, where the phase boundary can be affected by the particle number of the system. With the help of finite-size scaling behaviors of the energy gap, fidelity susceptibility and the first-order derivative of the entanglement entropy, we show that one can obtain the same phase boundary by the MFA and full quantum methods in the limit of $N\rightarrow \infty$. Additionally, our results show that the quantum phase transition can happen at the critical value of the atomic hopping strength even if the atom-molecule energy detuning is fixed at a certain value, which provides a new way to control the quantum phase transition.
Due to the finite kinetic energy in the intermediate $N\Delta$ state, the (internal) energy available for mesonic decay is decreased, and consequently the effective $N\Delta$ width is suppressed in $NN$ scattering. The same can happen also in the $\Delta\Delta$ case. The $N\Delta$ angular momentum suppresses the width as well, while the effect of the initial $NN$ angular momentum is more subtle. The state dependence affects e.g. pion production observables and can also be seen as the origin of T=1 "dibaryons".
This survey is an introduction to asymptotic methods for portfolio-choice problems with small transaction costs. We outline how to derive the corresponding dynamic programming equations and simplify them in the small-cost limit. This allows one to obtain explicit solutions in a wide range of settings, which we illustrate for a model with mean-reverting expected returns and proportional transaction costs. For even more complex models, we present a policy iteration scheme that allows one to compute the solution numerically.
Intersecting D-brane theories motivate the existence of exotic U(1) gauge bosons that only interact with the Standard Model through kinetic mixing with hypercharge. We analyze an effective field theory description of this effect and describe the implications of these exotic gauge bosons on precision electroweak, LHC and ILC observables.
This letter studies the synchrophasor measurement error of electric power distribution systems with on-line and off-line measurements using graphical and numerical tests. It demonstrates that the synchrophasor measurement error follows a non-Gaussian distribution instead of the traditionally-assumed Gaussian distribution. It suggests the need to use non-Gaussian or Gaussian mixture models to represent the synchrophasor measurement error. These models are more realistic to accurately represent the error than the traditional Gaussian model. The measurements and underlying analysis will be helpful for the understanding of distribution system measurement characteristics, and also for the modeling and simulation of distribution system applications.
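As an illustration of the suggested modeling route, a Gaussian mixture can be fitted to an error sample with scikit-learn in a few lines; the synthetic 'errors' vector below merely stands in for real synchrophasor error data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Illustrative stand-in for a vector of synchrophasor measurement errors
# (measured minus reference), which the letter argues is non-Gaussian.
rng = np.random.default_rng(0)
errors = np.concatenate([rng.normal(0.0, 0.01, 5000),
                         rng.normal(0.02, 0.05, 1000)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(errors.reshape(-1, 1))
print("weights:", gmm.weights_)
print("means:  ", gmm.means_.ravel())
print("stds:   ", np.sqrt(gmm.covariances_).ravel())
# The number of components can be selected by comparing gmm.bic(...) values.
```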
There has been a great interest in magnetic field induced quantum spin liquids in Kitaev magnets after the discovery of neutron scattering continuum and half quantized thermal Hall conductivity in the material $\alpha$-RuCl$_3$. In this work, we provide a semiclassical analysis of the relevant theoretical models on large system sizes, and compare the results to previous studies on quantum models with small system sizes. We find a series of competing magnetic orders with fairly large unit cells at intermediate magnetic fields, which are most likely missed by previous approaches. We show that quantum fluctuations are typically strong in these large unit cell orders, while their magnetic excitations may resemble a scattering continuum and give rise to a large thermal Hall conductivity. Our work provides an important basis for a thorough investigation of emergent spin liquids and competing phases in Kitaev magnets.
The primary obstacle to developing technologies for low-resource languages is the lack of usable data. In this paper, we report the adoption and deployment of 4 technology-driven methods of data collection for Gondi, a low-resource vulnerable language spoken by around 2.3 million tribal people in south and central India. In the process of data collection, we also help in its revival by expanding access to information in Gondi through the creation of linguistic resources that can be used by the community, such as a dictionary, children's stories, an app with Gondi content from multiple sources and an Interactive Voice Response (IVR) based mass awareness platform. At the end of these interventions, we collected a little less than 12,000 translated words and/or sentences and identified more than 650 community members whose help can be solicited for future translation efforts. The larger goal of the project is collecting enough data in Gondi to build and deploy viable language technologies like machine translation and speech to text systems that can help take the language onto the internet.
We describe a method of solving the nuclear Skyrme-Hartree-Fock problem by using a deformed Cartesian harmonic oscillator basis. The complete list of expressions required to calculate local densities, total energy, and self-consistent fields is presented, and an implementation of the self-consistent symmetries is discussed. Formulas to calculate matrix elements in the Cartesian harmonic oscillator basis are derived for the nuclear and Coulomb interactions.
We consider a wave equation with a potential on the half-line as a model problem for wave propagation close to an extremal horizon, or the asymptotically flat end of a black hole spacetime. We propose a definition of quasinormal frequencies (QNFs) as eigenvalues of the generator of time translations for a null foliation, acting on an appropriate (Gevrey based) Hilbert space. We show that this QNF spectrum is discrete in a subset of $\mathbb{C}$ which includes the region $\{$Re$(s) >-b$, $|$Im $(s)|> K\}$ for any $b>0$ and some $K=K(b) \gg 1$. As a corollary we establish the meromorphicity of the scattering resolvent in a sector $|$arg$(s)| <\varphi_0$ for some $\varphi_0 > \frac{2\pi}{3}$, and show that the poles occur only at quasinormal frequencies according to our definition. This result applies in situations where the method of complex scaling cannot be directly applied, as our potentials need not be analytic. Finally, we show that QNFs computed by the continued fraction method of Leaver are necessarily QNFs according to our new definition. This paper is a companion to [D. Gajic and C. Warnick, Quasinormal modes in extremal Reissner-Nordstr\"om spacetimes, preprint (2019)], which deals with the QNFs of the wave equation on the extremal Reissner-Nordstr\"om black hole.
Collisions of deformed uranium nuclei provide a unique opportunity to study the spatial dependence of charmonium in-medium effects. By selecting the orientations of the colliding nuclei, different path lengths through the nuclear medium could be selected within the same experimental environment. In addition, higher energy densities can be achieved in U+U collisions relative to Au+Au collisions. In this paper, we investigate the prospects for charmonium studies with U+U collisions. We discuss the effects of shadowing and nuclear absorption on the J/\psi\ yield. We introduce a new observable which could help distinguish between different types of J/\psi\ interactions in hot and dense matter.
System operators employ operating reserves to deal with unexpected variations of demand and generation and to guarantee the security of supply. However, they face new challenges in ensuring this mission with the increasing share of renewable generation. This article focuses on the operational approach adopted by the French transmission system operator RTE for dynamically sizing the required margins in the context of its dynamic margin monitoring strategy. It relies on continuous forecasts of the main drivers of the uncertainties of the system imbalance. Four types of forecast errors, assumed to be independent, are considered in this approach: the errors in wind and photovoltaic power generation, in the production of conventional power units, and in electricity consumption. The required margin is then the result of comparing the global forecast error, computed as the convolution of these independent errors, with a security-of-supply criterion. This study presents the results of this method as implemented at RTE and used in real-time operation.
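The sizing step can be illustrated schematically: discretized densities of the four independent forecast errors are convolved into a global error density, whose upper quantile is compared with a risk criterion. All numbers below are illustrative, not RTE's operational values.

```python
import numpy as np

def required_margin(error_pdfs, grid_step, risk=0.01):
    """Size the upward margin so that P(global error > margin) <= risk.

    error_pdfs: discretized, zero-centered error PDFs on a common grid
    (e.g. wind, PV, conventional units, consumption), assumed independent.
    """
    total = error_pdfs[0]
    for pdf in error_pdfs[1:]:
        total = np.convolve(total, pdf)      # density of the sum of independent errors
    total = total / total.sum()              # renormalize after discretization
    # Grid of the convolved distribution, still centered on zero.
    half = (len(total) - 1) / 2
    grid = (np.arange(len(total)) - half) * grid_step
    cdf = np.cumsum(total)
    return grid[np.searchsorted(cdf, 1.0 - risk)]

# Example with four Gaussian-shaped error components (illustrative values in MW).
x = np.arange(-2000, 2001, 10.0)
pdfs = [np.exp(-0.5 * (x / s) ** 2) for s in (300, 150, 400, 250)]
pdfs = [p / p.sum() for p in pdfs]
print(f"required margin at 1% risk: {required_margin(pdfs, 10.0):.0f} MW")
```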
We define symplectic fractional twists, which generalize Dehn twists, and use these in open books to investigate contact structures. The resulting contact structures are invariant under a circle action, and share several similarities with the invariant contact structures that were studied by Lutz and Giroux. We show that left-handed fractional twists often give rise to non-fillable contact manifolds. These manifolds are in fact "algebraically overtwisted", yet they do not seem to contain bLobs, nor are they directly related to negative stabilizations. We also show that the Weinstein conjecture holds for the non-fillable contact manifolds we construct, and we investigate the symplectic isotopy problem for fractional twists.
The synthesis of a trifunctional (electrically conducting, optical, and magnetic) nano-chain of Ni-core/Au-shell particles is discussed here. Our investigation indicates that such a material, attached to the biomolecule DNA in chain form, has great potential for medical instruments and bio-computer devices.
Greybox fuzzing is a lightweight testing approach that effectively detects bugs and security vulnerabilities. However, greybox fuzzers randomly mutate program inputs to exercise new paths; this makes it challenging to cover code that is guarded by complex checks. In this paper, we present a technique that extends greybox fuzzing with a method for learning new inputs based on already explored program executions. These inputs can be learned such that they guide exploration toward specific executions, for instance, ones that increase path coverage or reveal vulnerabilities. We have evaluated our technique and compared it to traditional greybox fuzzing on 26 real-world benchmarks. In comparison, our technique significantly increases path coverage (by up to 3X) and detects more bugs (up to 38% more), often orders-of-magnitude faster.
We investigate the influence of the inner profile of lens objects on gravitational lens statistics, taking into account the effect of magnification bias and both the evolution and the scatter of halo profiles. We take the dark halos as the lens objects and consider the following three models for the density profile of dark halos: SIS (singular isothermal sphere), the NFW (Navarro-Frenk-White) profile, and the generalized NFW profile which has a different slope at smaller radii. The mass function of dark halos is assumed to be given by the Press-Schechter function. We find that the magnification bias for the NFW profile is an order of magnitude larger than that for SIS. We estimate the sensitivity of the lensing probability of distant sources to the inner profile of lenses and to the cosmological parameters. It turns out that the lensing probability is strongly dependent on the inner density profile as well as on the cosmological constant. We compare the predictions with the largest observational sample, the Cosmic Lens All-Sky Survey. The absence or presence of large-splitting events in larger surveys currently underway, such as the 2dF and SDSS, could set constraints on the inner density profile of dark halos.
We study an ensemble of individuals playing the two games of the so-called Parrondo paradox. In our study, players are allowed to choose the game to be played by the whole ensemble in each turn. The choice cannot conform to the preferences of all the players and, consequently, they face a simple frustration phenomenon that requires some strategy to make a collective decision. We consider several such strategies and analyze how fluctuations can be used to improve the performance of the system.
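For readers unfamiliar with the setup, the sketch below simulates an ensemble in which a single collective choice of game is drawn at random each turn, using the canonical Parrondo game parameters (game A a slightly losing coin, game B capital-dependent); the more elaborate voting strategies analyzed in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.005
N, turns = 1000, 5000            # ensemble size and number of turns
capital = np.zeros(N, dtype=int)

def play(game, capital):
    """Canonical Parrondo games: A is a slightly losing coin toss;
    B depends on each player's capital modulo 3."""
    if game == "A":
        p = np.full(len(capital), 0.5 - eps)
    else:  # game B
        p = np.where(capital % 3 == 0, 0.1 - eps, 0.75 - eps)
    wins = rng.random(len(capital)) < p
    return capital + np.where(wins, 1, -1)

for _ in range(turns):
    game = rng.choice(["A", "B"])    # one collective choice applied to everyone
    capital = play(game, capital)

# Either game alone loses on average; the random mixture drifts upward.
print("mean capital after random collective choice:", capital.mean())
```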
We show that the $k$-point bound of de Laat, Machado, Oliveira, and Vallentin, a hierarchy of upper bounds for the independence number of a topological packing graph derived from the Lasserre hierarchy, converges to the independence number.
We consider stationary viscous Mean-Field Games systems in the case of local, decreasing and unbounded coupling. These systems arise in ergodic mean-field game theory, and describe Nash equilibria of games with a large number of agents aiming at aggregation. We show how the dimension of the state space, the behavior of the coupling and the Hamiltonian at infinity affect the existence and non-existence of regular solutions. Our approach relies on the study of Sobolev regularity of the invariant measure and a blow-up procedure which is calibrated on the scaling properties of the system. In very special cases we observe uniqueness of solutions. Finally, we apply our methods to obtain new existence results for MFG systems with competition, namely when the coupling is local and increasing.
We study the dynamical stability of Proca-Higgs stars in spherical symmetry. These are solutions of the Einstein-Proca-Higgs model, which features a Higgs-like field coupled to a Proca field, both of which are minimally coupled to the gravitational field. The corresponding stars can be regarded as Proca stars with self-interactions, while avoiding the hyperbolicity issues of self-interacting Einstein-Proca models. We report that these configurations are stable near the Proca limit in the candidate stable branches, but exhibit instabilities in certain parts of the parameter space, even in the candidate stable branches, regaining their stability for very strong self-interactions. This shows that for these models, unlike various examples of scalar boson stars, self-interactions can deteriorate, rather than improve, the dynamical robustness of bosonic stars.
Data association, the problem of reasoning over correspondence between targets and measurements, is a fundamental problem in tracking. This paper presents a graphical model formulation of data association and applies an approximate inference method, belief propagation (BP), to obtain estimates of marginal association probabilities. We prove that BP is guaranteed to converge, and bound the number of iterations necessary. Experiments reveal a favourable comparison to prior methods in terms of accuracy and computational complexity.
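A compact version of the bipartite-graph BP iteration for single-scan data association (in the form popularized by Williams and Lau) can be written directly in NumPy; the weight matrices below are illustrative, not taken from the paper's experiments.

```python
import numpy as np

def bp_association(psi, psi0, n_iter=100):
    """Approximate marginal association probabilities by belief propagation.

    psi:  (n_targets, n_meas) single-scan association weights (likelihood ratios).
    psi0: (n_targets,) missed-detection weights.
    Returns p of shape (n_targets, n_meas + 1); column 0 is the miss probability.
    """
    mu = np.ones_like(psi)                       # messages measurement -> target
    for _ in range(n_iter):
        s = psi0[:, None] + (psi * mu).sum(1, keepdims=True)
        nu = psi / (s - psi * mu)                # exclude j's own term from the sum
        mu = 1.0 / (1.0 + nu.sum(0, keepdims=True) - nu)
    p = np.concatenate([psi0[:, None], psi * mu], axis=1)
    return p / p.sum(1, keepdims=True)

# Two targets and two measurements with ambiguous cross terms.
psi = np.array([[9.0, 1.0],
                [1.0, 9.0]])
print(bp_association(psi, psi0=np.array([0.1, 0.1])))
```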
In this work we continue our study initiated in \cite{GFGP} on the uniqueness properties of real solutions to the IVP associated to the Benjamin-Ono (BO) equation. In particular, we shall show that the uniqueness results established in \cite{GFGP} do not extend to any pair of non-vanishing solutions of the BO equation. Also, we shall prove that the uniqueness result established in \cite{GFGP} under a hypothesis involving information of the solution at three different times cannot be relaxed to two different times.
We propose a general framework for the recommendation of possible customers (users) to advertisers (e.g., brands) based on the comparison between On-line Social Network profiles. In particular, we represent both user and brand profiles as trees where nodes correspond to categories and sub-categories in the associated On-line Social Network. When categories involve posts and comments, the comparison is based on word embedding, which allows us to take into account the similarity between topics popular in the brand profile and user preferences. Results on real datasets show that our approach is successful in identifying the most suitable set of users to be used as target for a given advertisement campaign.
Multiple string matching is the problem of locating all occurrences of a given set of patterns in an arbitrary string. It is used in bio-computing applications, where such algorithms are commonly employed for information retrieval tasks such as sequence analysis and gene/protein identification. Extremely large amounts of data in the form of strings have to be processed in such bio-computing applications. Therefore, improving the performance of multiple string matching algorithms is always desirable. Multicore architectures are capable of providing better performance by parallelizing the multiple string matching algorithms. The Aho-Corasick algorithm is the one most commonly used for exact multiple string matching. The focus of this paper is the acceleration of the Aho-Corasick algorithm through a multicore CPU based software implementation. Through our implementation and evaluation of the results, we show that our method performs better compared to the state of the art.
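The standard way to parallelize such a search on a multicore CPU is to split the text into chunks that overlap by one less than the longest pattern length and to keep only matches starting inside each worker's own chunk. The sketch below shows this with a plain-Python Aho-Corasick automaton and a process pool; it illustrates the chunking scheme, not the paper's optimized implementation.

```python
from collections import deque
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def build_automaton(patterns):
    """Aho-Corasick: goto trie (list of dicts), failure links, output pattern ids."""
    goto, fail, out = [{}], [0], [[]]
    for pid, pat in enumerate(patterns):
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append([])
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].append(pid)
    queue = deque(goto[0].values())
    while queue:                                  # BFS sets the failure links
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t].extend(out[fail[t]])           # inherit suffix matches
    return goto, fail, out

def scan(text, patterns, automaton, span):
    """Scan one chunk; keep matches that *start* inside [span[0], span[1])."""
    goto, fail, out = automaton
    start, end = span
    maxlen = max(map(len, patterns))
    s, hits = 0, []
    for i in range(start, min(end + maxlen - 1, len(text))):
        ch = text[i]
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        for pid in out[s]:
            begin = i - len(patterns[pid]) + 1
            if start <= begin < end:              # avoid duplicates across chunks
                hits.append((begin, patterns[pid]))
    return hits

def parallel_match(text, patterns, n_workers=4):
    chunk = -(-len(text) // n_workers)            # ceil division
    spans = [(i, min(i + chunk, len(text))) for i in range(0, len(text), chunk)]
    job = partial(scan, text, patterns, build_automaton(patterns))
    with ProcessPoolExecutor(max_workers=n_workers) as ex:
        return sorted(h for hits in ex.map(job, spans) for h in hits)

if __name__ == "__main__":
    print(parallel_match("gattacagattaca" * 4, ["gat", "taca", "agat"]))
```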
We discuss techniques and results for the extraction of the nucleon's spin-dependent parton distributions and their uncertainties from data for polarized deep-inelastic lepton-nucleon and proton-proton scattering by means of a global QCD analysis. Computational methods are described that significantly increase the speed of the required calculations to a level that allows one to perform the full analysis consistently at next-to-leading order accuracy. We examine how the various data sets help to constrain different aspects of the quark, anti-quark, and gluon helicity distributions. Uncertainty estimates are performed using both the Lagrange multiplier and the Hessian approaches. We use the extracted parton distribution functions and their estimated uncertainties to predict spin asymmetries for high-transverse momentum pion and jet production in polarized proton-proton collisions at 500 GeV center-of-mass system energy at BNL-RHIC, as well as for W boson production.
Developing proper representations for simulating high-speed flows with strong shock waves, rarefactions, and contact discontinuities has been a long-standing question in numerical analysis. Herein, we employ neural operators to solve Riemann problems encountered in compressible flows for extreme pressure jumps (up to a $10^{10}$ pressure ratio). In particular, we first consider the DeepONet, which we train in a two-stage process following the recent work of \cite{lee2023training}: in the first stage, a basis is extracted from the trunk net, orthonormalized, and subsequently used in the second stage to train the branch net. This simple modification of DeepONet has a profound effect on its accuracy, efficiency, and robustness, and leads to very accurate solutions of Riemann problems compared to the vanilla version. It also enables us to interpret the results physically, as the hierarchical data-driven basis reflects all the flow features that would otherwise have to be introduced using ad hoc feature expansion layers. We also compare the results with another neural operator, based on U-Net, for low, intermediate, and very high pressure ratios; it is very accurate for Riemann problems, especially for large pressure ratios, owing to its multiscale nature, but is computationally more expensive. Overall, our study demonstrates that simple neural network architectures, if properly pre-trained, can achieve very accurate solutions of Riemann problems for real-time forecasting. The source code, along with its corresponding data, can be found at the following URL: https://github.com/apey236/RiemannONet/tree/main
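The two-stage idea can be caricatured in a few lines of linear algebra (a sketch under our own simplifications: the trunk outputs are given as a precomputed matrix, and a direct projection stands in for gradient training of the branch):

```python
import numpy as np

def two_stage_branch_targets(T, U):
    """Stage-two setup for a two-stage DeepONet fit (simplified sketch).

    T : (n_points, p) trunk-net outputs on a fixed grid (from stage one).
    U : (n_points, n_samples) solution snapshots to reproduce.
    Returns the orthonormalized basis Q and the coefficients the branch
    net would be trained to output, one column per input sample.
    """
    Q, _ = np.linalg.qr(T)      # orthonormalize the learned trunk basis
    coeffs = Q.T @ U            # projection; valid since Q is orthonormal
    return Q, coeffs            # branch net regresses input -> coeffs
```

Orthonormalizing the basis decouples the coefficients, which is one intuition for why the two-stage training is better conditioned than jointly fitting branch and trunk.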
This research paper presents a thorough economic analysis of Bitcoin and its impact. We delve into its fundamental principles and its technological evolution into a prominent decentralized digital currency. Analysing Bitcoin's economic dynamics, we explore aspects such as transaction volume, market capitalization, mining activities, and macro trends. Moreover, we investigate Bitcoin's role in the economic ecosystem, considering its implications for traditional financial systems, monetary policies, and financial inclusivity. We utilize statistical and analytical tools to assess market equilibrium and behaviour. Insights from this analysis provide a comprehensive understanding of Bitcoin's economic significance and its transformative potential in shaping the future of global finance. This research contributes to informed decision-making for individuals, institutions, and policymakers navigating the evolving landscape of decentralized finance.
In this paper, we apply the theory of inverse semigroups to the $C^{*}$-algebra $U[\mathbb{Z}]$ considered in \cite{Cuntz}. We show that the $C^{*}$-algebra $U[\mathbb{Z}]$ is generated by an inverse semigroup of partial isometries. We explicitly identify the groupoid $\mathcal{G}_{tight}$ associated to the inverse semigroup and show that $\mathcal{G}_{tight}$ is exactly the same groupoid obtained in \cite{Cuntz-Li}.
The Askaryan Radio Array (ARA) reports an observation of radio emission coincident with the "Valentine's Day" solar flare on Feb. 15$^{\rm{th}}$, 2011 in the prototype "Testbed" station. We find $\sim2000$ events that passed our neutrino search criteria during the 70 minute period of the flare, all of which reconstruct to the location of the sun. A signal analysis of the events reveals them to be consistent with bright thermal noise correlated across antennas. This is the first natural source of radio emission reported by ARA that is tightly reconstructable on an event-by-event basis. The observation is also the first in which ARA points radio from individual events to an extraterrestrial source on the sky. We comment on how solar flares, coupled with improved systematic uncertainties in reconstruction algorithms, could aid in mapping any above-ice radio emission, such as that from cosmic-ray air showers, to astronomical locations on the sky.
OPERA is a neutrino oscillation experiment designed to perform a nu\_tau appearance search at long distance in the future CNGS beam from CERN to Gran Sasso. It is based on the nuclear emulsion technique to distinguish, among the neutrino interaction products, the track of a tau produced by a nu\_tau and its decay tracks. The OPERA detector is presently under construction in the Gran Sasso underground laboratory, 730 km from CERN, and will receive its first neutrinos in 2006. The experimental technique is reviewed and the development of the project is described. The foreseen performance in measuring nu\_tau appearance, and also in searching for nu\_e appearance, is discussed.
In this paper, we present a proof of the Riemann hypothesis. We show that the zeros of the Riemann zeta function should lie on the line with real value 1/2, in the region where the real part of the complex variable is between 0 and 1.
We construct a new class of exact string solutions with a four dimensional target space metric of signature ($-,+,+,+$) by gauging the independent left and right nilpotent subgroups with `null' generators of WZNW models for rank 2 non-compact groups $G$. The `null' property of the generators (${\rm Tr }(N_n N_m)=0$) implies the consistency of the gauging and the absence of $\alpha'$-corrections to the semiclassical backgrounds obtained from the gauged WZNW models. In the case of the maximally non-compact groups ($G= SL(3), SO(2,2), SO(2,3), G_2$) the construction corresponds to gauging some of the subgroups generated by the nilpotent `step' operators in the Gauss decomposition. The rank 2 case is a particular example of a general construction leading to conformal backgrounds with one time-like direction. The conformal theories obtained by integrating out the gauge field can be considered as sigma model analogs of Toda models (their classical equations of motion are equivalent to Toda model equations). The procedure of `null gauging' applies also to other non-compact groups.
In this paper we study the locus of singular tuples of a complex valued multisymmetric tensor. The main problem that we focus on is: given the set of singular tuples of some general tensor, which are all the tensors that admit those same singular tuples? We assume that the triangle inequality holds, which is exactly the condition under which the dual variety to the Segre-Veronese variety is a hypersurface, or equivalently, the hyperdeterminant exists. We show that in this case, when at least one component has odd degree, the tensor is projectively unique. On the other hand, if all the degrees are even, the fiber is a $1$-dimensional space.
One-way functions are widely used for encrypting the secret in public key cryptography; they are regarded as plausibly one-way but have not been proven so. Here we discuss a public key cryptosystem based on systems of higher order Diophantine equations. In this system the Diophantine equations are used as public keys for sender and recipient, and the sender can recover the secret from the Diophantine equation returned by the recipient using a trapdoor. In general a system of Diophantine equations is hard to solve when it is positive-dimensional, which implies that the Diophantine equations in this cryptosystem work as a possible one-way function. We also discuss some implementation problems, which are caused by the additional complexity necessary for constructing Diophantine equations that resist attacks by tamperers.
Spannotation is an open source, user-friendly tool developed for image annotation for semantic segmentation, specifically in autonomous navigation tasks. This study provides an evaluation of Spannotation, demonstrating its effectiveness in generating accurate segmentation masks for various environments such as agricultural crop rows, off-road terrains and urban roads. Unlike other popular annotation tools, which require about 40 seconds to annotate an image for semantic segmentation in a typical navigation task, Spannotation achieves a similar result in about 6.03 seconds. The tool's utility was validated through the utilization of its generated masks to train a U-Net model, which achieved a validation accuracy of 98.27% and a mean Intersection Over Union (mIOU) of 96.66%. The accessibility, simple annotation process and no-cost features have all contributed to the adoption of Spannotation, evident from its download count of 2098 (as of February 25, 2024) since its launch. Future enhancements of Spannotation aim to broaden its application to complex navigation scenarios and incorporate additional automation functionalities. Given its increasing popularity and promising potential, Spannotation stands as a valuable resource in autonomous navigation and semantic segmentation. For detailed information and access to Spannotation, readers are encouraged to visit the project's GitHub repository at https://github.com/sof-danny/spannotation
The shaping of nuclear spin polarization profiles and the induction of nuclear resonances are demonstrated within a parabolic quantum well using an externally applied gate voltage. Voltage control of the electron and hole wave functions results in nanometer-scale sheets of polarized nuclei positioned along the growth direction of the well. RF voltages across the gates induce resonant spin transitions of selected isotopes. This depolarizing effect depends strongly on the separation of electrons and holes, suggesting that a highly localized mechanism accounts for the observed behavior.
Discussion on "Random-projection ensemble classification" by T. Cannings and R. Samworth. We believe that the proposed approach can find many applications in economics such as credit scoring (e.g. Altman (1968)) and can be extended to more general type of classifiers. In this discussion we would like to draw authors attention to the copula-based discriminant analysis (Han et al. (2013) and He et al. (2016)).
We develop a theory of quantum harmonic analysis on lattices in $\mathbb{R}^{2d}$. Convolutions of a sequence with an operator and of two operators are defined over a lattice, and using corresponding Fourier transforms of sequences and operators we develop a version of harmonic analysis for these objects. We prove analogues of results from classical harmonic analysis and the quantum harmonic analysis of Werner, including Tauberian theorems and a Wiener division lemma. Gabor multipliers from time-frequency analysis are described as convolutions in this setting. The quantum harmonic analysis is thus a conceptual framework for the study of Gabor multipliers, and several of the results include results on Gabor multipliers as special cases.
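For orientation, in Werner-style quantum harmonic analysis the two convolutions, restricted to a lattice $\Lambda\subset\mathbb{R}^{2d}$, are usually written as follows (our paraphrase of the standard definitions, with $\pi(\lambda)$ the time-frequency shift and $P$ the parity operator; the paper's normalizations may differ):

```latex
a \star S = \sum_{\lambda \in \Lambda} a(\lambda)\, \pi(\lambda)\, S\, \pi(\lambda)^{*},
\qquad
(S \star_{\Lambda} T)(\lambda) = \operatorname{tr}\bigl(S\, \pi(\lambda)\, \check{T}\, \pi(\lambda)^{*}\bigr),
\quad \check{T} := P\,T\,P.
```

The first maps a sequence and an operator to an operator; taking $S$ to be a rank-one operator $h\otimes g$ recovers the Gabor multiplier with mask $a$, which is the connection the abstract alludes to.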
The practical damage to silicon bipolar devices subjected to mixed ionization and displacement irradiations is usually evaluated as the sum of the separate ionization and displacement damages. However, recent experiments show a clear difference between the practical and summed damages, indicating significant irradiation synergistic effects (ISEs). Understanding the behaviors and mechanisms of ISEs is essential to predict the practical damage. In this work, we first give a brief review of the state of the art, critically emphasizing the difficulty encountered in previous models in understanding the dose rate dependence of the ISEs. We then introduce in detail our models explaining this basic phenomenon, which can be described as follows. Firstly, we present our experimental work on PNP and NPN transistors. A setup with variable neutron fluence and $\gamma$-ray dose is adopted. Fluence-dependent `tick'-like and sublinear dose profiles are observed for PNP and NPN transistors, respectively. Secondly, we describe our theoretical investigations of the positive ISE in NPN transistors. We propose an atomistic model of transformation and annihilation of $\rm V_2$ displacement defects in p-type silicon under ionization irradiation, which is totally different from the traditional picture of Coulomb interaction of oxide trapped charges in silica acting on charge carriers in irradiated silicon. The predicted novel dose and fluence dependences are fully verified by the experimental data. Thirdly, the mechanism of the observed negative ISE in PNP transistors is investigated in a similar way to the NPN transistor case. The difference is that in n-type silicon, VO displacement defects also undergo an ionization-induced transformation and annihilation process. Our results show that the evolution of displacement defects due to carrier-enhanced defect diffusion and reaction is the dominant mechanism of the ISEs.
The present work introduces floodlight, an open source Python package built to support and automate team sport data analysis. It is specifically designed for the scientific analysis of spatiotemporal tracking data, event data, and game codes in disciplines such as match and performance analysis, exercise physiology, training science, and collective movement behavior analysis. It is completely provider- and sports-independent and includes a high-level interface suitable for programming beginners. The package includes routines for most aspects of the data analysis process, including dedicated data classes, file parsing functionality, public dataset APIs, pre-processing routines, common data models and several standard analysis algorithms previously used in the literature, as well as basic visualization functionality. The package is intended to make team sport data analysis more accessible to sport scientists, foster collaborations between sport and computer scientists, and strengthen the community's culture of open science and inclusion of previous works in future works.
Textural and structural features can be regarded as "two-view" feature sets. Inspired by recent progress in multi-view learning, we propose a novel two-view classification method that models each feature set and optimizes the process of merging these views efficiently. Examples of implementation of this approach in the classification of real-world data are presented, with special emphasis on medical images. We first decompose fully-textured images into two layers of representation, corresponding to natural stochastic textures (NST) and a structural layer, respectively. The structural, edge-and-curve-type, information is mostly represented by the local spatial phase, whereas the pure NST has random phase and is characterized by Gaussianity and self-similarity. Therefore, the NST is modeled by the 2D self-similar process, fractional Brownian motion (fBm). The Hurst parameter, characteristic of fBm, specifies the roughness or irregularity of the texture. This leads us to its estimation and use, alongside other features extracted from the structural layer, to build the "two-view" feature sets used in our classification scheme. A shallow neural net (NN) is exploited to execute the process of merging these feature sets in a straightforward and efficient manner.
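As a toy illustration of the roughness feature (our own minimal 1D example, not the paper's 2D pipeline): for fBm, increments at lag $\tau$ have variance proportional to $\tau^{2H}$, so the Hurst parameter can be read off a log-log regression:

```python
import numpy as np

def hurst_from_increments(x, lags=range(2, 20)):
    """Estimate the Hurst exponent of a 1D fBm-like signal x.

    Uses var(x[t+lag] - x[t]) ~ lag**(2H): half the slope of the
    log-log fit of increment variance against lag.
    """
    lags = np.asarray(list(lags))
    v = [np.var(x[lag:] - x[:-lag]) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return slope / 2.0
```

Larger H means a smoother texture; the estimated H then joins the structural-layer features as one coordinate of the "two-view" representation.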
The main result of this note is an efficient presentation of the $S^1$-equivariant cohomology ring of Peterson varieties (in type $A$) as a quotient of a polynomial ring by an ideal $\mathcal{J}$, in the spirit of the well-known Borel presentation of the cohomology of the flag variety. Our result simplifies previous presentations given by Harada-Tymoczko and Bayegan-Harada. In particular, our result gives an affirmative answer to a conjecture of Bayegan and Harada that the defining ideal $\mathcal{J}$ is generated by quadratics.
The aim of this paper is to give not only an explicit upper bound on the total Q-curvature but also an induced isoperimetric deficit formula for complete conformal metrics on $\mathbb R^n$, $n\ge 3$, with scalar curvature nonnegative near infinity and absolutely convergent Q-curvature.
The nonextensive statistics based on the $q$-entropy $S_q=-\frac{\sum_{i=1}^v(p_i-p_i^q)}{1-q}$ has so far been applied to systems in which the $q$ value is uniformly distributed. For systems containing different $q$'s, the applicability of the theory is still a matter of investigation. The difficulty is that the class of systems to which the theory can be applied is actually limited by the usual nonadditivity rule of entropy, which is no longer valid when the systems contain a nonuniform distribution of $q$ values. In this paper, within the framework of the so-called incomplete information theory, we propose a more general nonadditivity rule of entropy prescribed by the zeroth law of thermodynamics. This new nonadditivity generalizes the usual one in a simple way and can be proved to lead uniquely to the $q$-entropy.
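For context, the usual nonadditivity rule referred to above is the standard pseudo-additivity of the $q$-entropy for independent subsystems $A$ and $B$ sharing a single $q$ (a standard fact we restate, not the paper's new rule):

```latex
S_q(A+B) = S_q(A) + S_q(B) + (1-q)\, S_q(A)\, S_q(B).
```

It is exactly this composition law that has no meaning when $A$ and $B$ carry different $q$ values, which is the gap the proposed generalized rule is designed to fill.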
Few-shot and one-shot learning have been the subject of active and intensive research in recent years, with mounting evidence pointing to successful implementation and exploitation of few-shot learning algorithms in practice. Classical statistical learning theories do not fully explain why few- or one-shot learning is at all possible, since traditional generalisation bounds normally require large training and testing samples to be meaningful. This sharply contrasts with numerous examples of successful one- and few-shot learning systems and applications. In this work we present mathematical foundations for a theory of one-shot and few-shot learning and reveal conditions specifying when such learning schemes are likely to succeed. Our theory is based on intrinsic properties of high-dimensional spaces. We show that if the ambient or latent decision space of a learning machine is sufficiently high-dimensional, then a large class of objects in this space can indeed be easily learned from few examples, provided that certain data non-concentration conditions are met.
We prove that a compact quaternionic-K\"{a}hler manifold of dimension $4n\geq 8$ admitting a conformal-Killing 2-form which is not Killing, is isomorphic to the quaternionic projective space, with its standard quaternionic-K\"{a}hler structure.
We compare theoretical and experimental predictions of two main classes of models addressing fermion mass hierarchies and flavour changing neutral currents (FCNC) effects in supersymmetry: Froggatt-Nielsen (FN) U(1) gauged flavour models and Nelson-Strassler/extra dimensional models with hierarchical wave functions for the families. We show that whereas the two lead to identical predictions in the fermion mass matrices, the second class generates a stronger suppression of FCNC effects. We prove that, whereas at first sight the FN setup is more constrained due to anomaly cancellation conditions, imposing unification of gauge couplings in the second setup generates conditions which precisely match the mixed anomaly constraints in the FN setup. Finally, we provide an economical extra dimensional realisation of the hierarchical wave functions scenario in which the leptonic FCNC can be efficiently suppressed due to the strong coupling (CFT) origin of the electron mass.
Recently, Grabowska and Kaplan constructed a four-dimensional lattice formulation of chiral gauge theories on the basis of the chiral overlap operator. At least in the tree-level approximation, the left-handed fermion is coupled only to the original gauge field~$A$, while the right-handed one is coupled only to the gauge field~$A_\star$, a deformation of~$A$ by the gradient flow with infinite flow time. In this paper, we study the fermion one-loop effective action in their formulation. We show that the continuum limit of this effective action contains local interaction terms between $A$ and~$A_\star$, even if the anomaly cancellation condition is met. These non-vanishing terms would lead to an undesired perturbative spectrum in the formulation.
User-driven applications belong to a new type of program in which users get full control over WHAT appears on the screen, WHEN, and HOW. Such programs can exist only if the screen view is organized not according to a predetermined scenario written by the developers, but in such a way that any screen object can be moved, resized, and reconfigured by any user at any moment. This article describes an algorithm by which an object of arbitrary shape can be made moveable and resizable. It also explains some rules of such design and a technique that can be useful in many cases. Both the individual movements of objects and their synchronous movements are analysed. After discussing individually moveable controls, different types of groups are analysed and the arbitrary grouping of controls is considered.
Motivated by applications from computer vision to bioinformatics, the field of shape analysis deals with problems where one wants to analyze geometric objects, such as curves, while ignoring actions that preserve their shape, such as translations, rotations, or reparametrizations. Mathematical tools have been developed to define notions of distances, averages, and optimal deformations for geometric objects. One such framework, which has proven to be successful in many applications, is based on the square root velocity (SRV) transform, which allows one to define a computable distance between spatial curves regardless of how they are parametrized. This paper introduces a supervised deep learning framework for the direct computation of SRV distances between curves, which usually requires an optimization over the group of reparametrizations that act on the curves. The benefits of our approach in terms of computational speed and accuracy are illustrated via several numerical experiments.
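To make the objects concrete (the SRV definition is standard; the discretization choices below are ours): the SRV transform of a curve $c$ is $q = \dot c/\sqrt{\lVert\dot c\rVert}$, and before optimizing over reparametrizations the SRV 'distance' is simply the $L^2$ distance between transforms:

```python
import numpy as np

def srv_transform(c, dt):
    """SRV transform of a discretized curve c of shape (n_points, dim)."""
    v = np.gradient(c, dt, axis=0)                    # velocity c'(t)
    speed = np.linalg.norm(v, axis=1, keepdims=True)
    return v / np.sqrt(np.maximum(speed, 1e-12))      # q = c' / sqrt(|c'|)

def srv_l2_distance(c1, c2, dt):
    """L2 distance between SRV transforms of two equally sampled curves.
    (No reparametrization alignment here; the paper's network is trained
    to predict the aligned distance directly.)"""
    q1, q2 = srv_transform(c1, dt), srv_transform(c2, dt)
    return float(np.sqrt(np.sum((q1 - q2) ** 2) * dt))
```

The expensive part that the supervised network replaces is the minimization of this quantity over reparametrizations of one of the curves.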
We report C, N, and Si isotopic data for 59 highly 13C-enriched presolar submicron- to micron-sized SiC grains from the Murchison meteorite, including eight putative nova grains (PNGs) and 29 15N-rich (14N/15N<=solar) AB grains, and their Mg-Al, S, and Ca-Ti isotope data when available. These 37 grains are enriched in 13C, 15N and 26Al with the PNGs showing more extreme enhancements. The 15N-rich AB grains show systematically higher 26Al and 30Si excesses than the 14N-rich AB grains. Thus, we propose to divide the AB grains into groups 1 (14N/15N<solar) and 2 (14N/15N>=solar). For the first time, we have obtained both S and Ti isotopic data for five AB1 grains and one PNG, and found 32S and/or 50Ti enhancements. Interestingly, one AB1 grain had the largest 32S and 50Ti excesses, strongly suggesting a neutron-capture nucleosynthetic origin of the 32S excess and thus the initial presence of radiogenic 32Si (t1/2=153 yr). More importantly, we found that the 15N and 26Al excesses of AB1 grains form a trend that extends to the region in the N-Al isotope plot occupied by C2 grains, strongly indicating a common stellar origin for both AB1 and C2 grains. Comparison of supernova models with the AB1 and C2 grain data indicates that these grains came from SNe that experienced H ingestion into the He/C zones of their progenitors.
Learning object-centric representations from complex natural environments enables both humans and machines to reason from low-level perceptual features. To capture the compositional entities of a scene, we propose cyclic walks between perceptual features extracted from vision transformers and object entities. First, a slot-attention module interfaces with these perceptual features and produces a finite set of slot representations. These slots can bind to any object entities in the scene via inter-slot competition for attention. Next, we establish entity-feature correspondence with cyclic walks along paths of high transition probability, based on the pairwise similarity between perceptual features (aka "parts") and slot-bound object representations (aka "wholes"). The whole is greater than its parts and the parts constitute the whole. The part-whole interactions form cycle consistencies which serve as supervisory signals to train the slot-attention module. Our rigorous experiments on \textit{seven} image datasets in \textit{three} \textit{unsupervised} tasks demonstrate that networks trained with our cyclic walks can disentangle foregrounds and backgrounds, discover objects, and segment semantic objects in complex scenes. In contrast to object-centric models attached to a decoder for pixel-level or feature-level reconstruction, our cyclic walks provide strong learning signals while avoiding computation overheads and enhancing memory efficiency. Our source code and data are available at: \href{https://github.com/ZhangLab-DeepNeuroCogLab/Parts-Whole-Object-Centric-Learning/}{link}.
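A schematic of the cyclic walk as we read it (the temperature, normalization, and loss form below are our assumptions; the paper gives the precise formulation): features walk to slots and back, and the round-trip transition matrix is pushed toward the identity:

```python
import torch
import torch.nn.functional as F

def cyclic_walk_loss(feats, slots, tau=0.1):
    """Cycle-consistency loss between patch features and slots (sketch).

    feats : (n, d) patch features from a vision transformer.
    slots : (k, d) slot representations from a slot-attention module.
    """
    feats = F.normalize(feats, dim=-1)
    slots = F.normalize(slots, dim=-1)
    p_fs = F.softmax(feats @ slots.t() / tau, dim=-1)   # parts -> wholes
    p_sf = F.softmax(slots @ feats.t() / tau, dim=-1)   # wholes -> parts
    roundtrip = p_sf @ p_fs           # slot -> slot; rows are distributions
    target = torch.arange(slots.size(0))
    return F.nll_loss(torch.log(roundtrip + 1e-8), target)
```

Driving the round-trip matrix toward the identity forces each slot to claim a coherent set of patches, which is the part-whole consistency described above.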
We study the possible exotic states with $J^{PC} = 0^{+-}$ using the tetraquark interpolating currents with the QCD sum rule approach. The extracted masses are around 4.85 GeV for the charmonium-like states and 11.25 GeV for the bottomonium-like states. There is no working region for the light tetraquark currents, which implies the light $0^{+-}$ state may not exist below 2 GeV.
In this paper, we address the problem of motion planning and control at the limits of handling, under locally varying traction conditions. We propose a novel solution method where traction variations over the prediction horizon are represented by time-varying tire force constraints, derived from a predictive friction estimate. A constrained finite time optimal control problem is solved in a receding horizon fashion, imposing these time-varying constraints. Furthermore, our method features an integrated sampling augmentation procedure that addresses the problems of infeasibility and sensitivity to local minima that arise at abrupt constraint alterations, e.g., due to sudden friction changes. We validate the proposed algorithm on a Volvo FH16 heavy-duty vehicle, in a range of critical scenarios. Experimental results indicate that traction adaptive motion planning and control improves the vehicle's capacity to avoid accidents, both when adapting to low local traction, by ensuring dynamic feasibility of the planned motion, and when adapting to high local traction, by realizing high traction utilization.
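The time-varying tire force constraints can be pictured as friction circles whose radii track the predicted local friction along the horizon (our schematic rendering of the idea; the paper states the exact constraint set):

```latex
\sqrt{F_{x,k}^{2} + F_{y,k}^{2}} \;\le\; \hat{\mu}_k\, F_{z,k},
\qquad k = 0, \dots, N-1,
```

where $\hat{\mu}_k$ is the predictive friction estimate at stage $k$. A patch of low predicted traction shrinks the admissible force set exactly where the planned trajectory traverses it, which is what makes the planned motion dynamically feasible.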
Recent developments show that Large Language Models (LLMs) produce state-of-the-art performance on natural language (NL) to code generation for resource-rich general-purpose languages like C++, Java, and Python. However, their practical usage for structured domain-specific languages (DSLs) such as YAML and JSON is limited due to domain-specific schemas, grammars, and customizations generally unseen by LLMs during pre-training. Efforts have been made to mitigate this challenge via in-context learning with relevant examples or by fine-tuning. However, these approaches suffer from problems such as limited DSL samples and prompt sensitivity, whereas enterprises maintain good documentation of their DSLs. Therefore, we propose DocCGen, a framework that leverages this rich knowledge by breaking the NL-to-Code generation task for structured code languages into a two-step process. First, it detects the correct libraries using the library documentation that best matches the NL query. Then, it utilizes schema rules extracted from the documentation of these libraries to constrain the decoding. We evaluate our framework for two complex structured languages, Ansible YAML and Bash command, in two settings: Out-of-domain (OOD) and In-domain (ID). Our extensive experiments show that DocCGen consistently improves different-sized language models across all six evaluation metrics, reducing syntactic and semantic errors in structured code. We plan to open-source the datasets and code to motivate research in constrained code generation.
Falling raindrops are usually considered purely negative factors for traditional optical imaging because they generate not only rain streaks but also rain fog, resulting in a decrease in the visual quality of images. However, this work demonstrates that the image degradation caused by falling raindrops can be eliminated by the raindrops themselves. The temporal second-order correlation of the photon number fluctuation introduced by falling raindrops has a remarkable attribute: rain streak photons and rain fog photons lack a stable second-order photon number correlation, while this stable correlation exists for photons that do not interact with raindrops. This fundamental difference indicates that the noise caused by falling raindrops can be eliminated by measuring the second-order photon number fluctuation correlation in the time domain. The simulation and experimental results demonstrate that the rain removal effect of this method is even better than that of deep learning methods when the integration time of each measurement event is short. This highly efficient quantum rain removal method can be used independently or integrated into deep learning algorithms to provide front-end processing and high-quality materials for deep learning.
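The selection principle admits a simple numerical caricature (our own toy model, not the authors' optical setup): correlating photon-number fluctuations between two detection sequences retains the component they share and suppresses the independent rain contribution:

```python
import numpy as np

def fluctuation_correlation(i1, i2):
    """Normalized second-order photon-number fluctuation correlation of two
    count sequences: <dI1 dI2> / (<I1> <I2>), i.e. g2 - 1."""
    d1, d2 = i1 - i1.mean(), i2 - i2.mean()
    return float(np.mean(d1 * d2) / (i1.mean() * i2.mean()))

rng = np.random.default_rng(0)
signal = rng.poisson(20.0, 100_000)   # photons unaffected by raindrops
rain1 = rng.poisson(30.0, 100_000)    # independent rain photons, detector 1
rain2 = rng.poisson(30.0, 100_000)    # independent rain photons, detector 2
print(fluctuation_correlation(signal + rain1, signal + rain2))  # > 0
print(fluctuation_correlation(rain1, rain2))                    # ~ 0
```

The positive correlation survives only for the shared (non-rain) component, which is the stable second-order correlation the measurement exploits.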
The financial industry poses great challenges in risk modeling and profit generation, both of which are intricately tied to the prediction of stock movements. A stock forecaster must untangle the randomness and ever-changing behaviors of the stock market. Stock movements are influenced by a myriad of factors, including company history, performance, and economic-industry connections. However, other factors are not traditionally included, such as social media and correlations between stocks. Social platforms such as Reddit, Facebook, and X (Twitter) create opportunities for niche communities to share their sentiment on financial assets. By aggregating these opinions from social media in various mediums such as posts, interviews, and news updates, we propose a more holistic approach that includes these "media moments" in stock market movement prediction. We introduce a method that combines financial data, social media, and correlated stock relationships via a graph neural network in a hierarchical temporal fashion. Through numerous trials on current S&P 500 index data, with results showing a 28% improvement in cumulative returns, we provide empirical evidence of our tool's applicability for use in investment decisions.
Based on 471 million BB pairs collected with the BABAR detector at the PEP-II e+e- collider, we perform a series of measurements on rare decays B->K(*)l+l-, where l+l- is either e+e- or mu+mu-. The measurements include total branching fractions, and partial branching fractions in six bins of di-lepton mass-squared. We also measure isospin asymmetries in the same six bins. Furthermore, we measure direct CP and lepton flavor asymmetries for di-lepton mass below and above the J/Psi resonance. Our measurements show good agreement with both Standard Model predictions and measurements from other experiments.
The current trend of scaling language models involves increasing both parameter count and training dataset size. Extrapolating this trend suggests that training dataset size may soon be limited by the amount of text data available on the internet. Motivated by this limit, we investigate scaling language models in data-constrained regimes. Specifically, we run a large set of experiments varying the extent of data repetition and compute budget, ranging up to 900 billion training tokens and 9 billion parameter models. We find that with constrained data for a fixed compute budget, training with up to 4 epochs of repeated data yields negligible changes to loss compared to having unique data. However, with more repetition, the value of adding compute eventually decays to zero. We propose and empirically validate a scaling law for compute optimality that accounts for the decreasing value of repeated tokens and excess parameters. Finally, we experiment with approaches mitigating data scarcity, including augmenting the training dataset with code data or removing commonly used filters. Models and datasets from our 400 training runs are freely available at https://github.com/huggingface/datablations.
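The reported behaviour can be sanity-checked with arithmetic on the general shape of such a law (the saturation constant below is a placeholder, not the paper's fitted value): repeated tokens contribute an effective unique-token count that saturates exponentially in the number of repetitions:

```python
import math

def effective_data(unique_tokens, epochs, r_star=15.0):
    """Effective data under repetition, in the spirit of the paper's law:
    D_eff = U * (1 + r_star * (1 - exp(-(epochs - 1) / r_star))).
    r_star is a placeholder decay constant, not the fitted value."""
    reps = max(epochs - 1.0, 0.0)
    return unique_tokens * (1.0 + r_star * (1.0 - math.exp(-reps / r_star)))

# 4 epochs of 100B unique tokens ~ 93% as useful as 400B fresh tokens...
print(effective_data(100e9, 4) / 400e9)
# ...but 40 epochs recover only ~37% of the value of 4T fresh tokens.
print(effective_data(100e9, 40) / 4000e9)
```

This reproduces the qualitative finding: a few epochs of repetition are nearly free, while heavy repetition drives the marginal value of extra compute toward zero.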