In this paper we analyze the performance issues involved in the generation of automated traffic reports for large IT infrastructures. Such reports allow the IT manager to proactively detect possible abnormal situations and roll out the corresponding corrective actions. With the ever-increasing bandwidth of current networks, the design of automated traffic report generation systems is very challenging. In a first step, the huge volumes of collected traffic are transformed into enriched flow records obtained from diverse collectors and dissectors. Then, such flow records, along with time series obtained from the raw traffic, are further processed to produce a usable report. As will be shown, the data volume in flow records is very large as well and requires careful selection of the Key Performance Indicators (KPIs) to be included in the report. In this regard, we discuss the use of high-level languages versus low-level approaches, in terms of speed and versatility. Furthermore, our design approach is targeted at rapid development on commodity hardware, which is essential to cost-effectively tackle demanding traffic analysis scenarios.
We introduce quantum Markov categories as a structure that refines and extends a synthetic approach to probability theory and information theory so that it includes quantum probability and quantum information theory. In this broader context, we analyze three successively more general notions of reversibility and statistical inference: ordinary inverses, disintegrations, and Bayesian inverses. We prove that each one is a strictly special instance of the next for certain subcategories, providing a categorical foundation for Bayesian inversion as a generalization of reversing a process. We unify the categorical and $C^*$-algebraic notions of almost everywhere (a.e.) equivalence. As a consequence, we prove many results including a universal no-broadcasting theorem for S-positive categories, a generalized Fisher--Neyman factorization theorem for a.e. modular categories, a relationship between error correcting codes and disintegrations, and the relationship between Bayesian inversion and Umegaki's non-commutative sufficiency.
We present results of targeted searches for signatures of non-radial oscillation modes (such as r- and g-modes) in neutron stars using {\it RXTE} data from several accreting millisecond X-ray pulsars (AMXPs). We search for potentially coherent signals in the neutron star rest frame by first removing the phase delays associated with the star's binary motion and computing FFT power spectra of continuous light curves with up to $2^{30}$ time bins. We search a range of frequencies in which both r- and g-modes are theoretically expected to reside. Using data from the discovery outburst of the 435 Hz pulsar XTE J1751$-$305 we find a single candidate, coherent oscillation with a frequency of $0.5727597 \times \nu_{spin} = 249.332609$ Hz, and a fractional Fourier amplitude of $7.46 \times 10^{-4}$. We estimate the significance of this feature at the $1.6 \times 10^{-3}$ level, slightly better than a $3\sigma$ detection. We argue that possible mode identifications include rotationally-modified g-modes associated with either a helium-rich surface layer or a density discontinuity due to electron captures on hydrogen in the accreted ocean. Alternatively, the frequency could be identified with that of an inertial mode or an r-mode modified by the presence of a solid crust, however, the r-mode amplitude required to account for the observed modulation amplitude would induce a large spin-down rate inconsistent with the observed pulse timing measurements. For the AMXPs XTE J1814$-$338 and NGC 6440 X-2 we do not find any candidate oscillation signals, and we place upper limits on the fractional Fourier amplitude of any coherent oscillations in our frequency search range of $7.8\times 10^{-4}$ and $5.6 \times 10^{-3}$, respectively. We briefly discuss the prospects and sensitivity for similar searches with future, larger X-ray collecting area missions.
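As a rough illustration of this pipeline (the orbital correction below assumes a circular orbit, all parameters are generic placeholders rather than the AMXP values, and the injected amplitude is exaggerated so the toy search succeeds):

```python
import numpy as np

def demodulate(t, a_sini_c, P_orb, T_asc):
    # Shift photon arrival times to the neutron star rest frame by
    # removing the light-travel-time delay of a circular orbit.
    return t - a_sini_c * np.sin(2.0 * np.pi * (t - T_asc) / P_orb)

def leahy_power(counts):
    # Leahy-normalized FFT power: pure Poisson noise averages to 2,
    # so a coherent signal stands out as a sharp peak.
    ft = np.fft.rfft(counts)
    return 2.0 * np.abs(ft) ** 2 / counts.sum()

# Toy search: inject a 249.332609 Hz sinusoid at 10% amplitude
# (real searches reach ~1e-3 by using far longer FFTs, up to 2^30 bins).
dt, n = 1.0 / 2048, 2 ** 20
t = np.arange(n) * dt
rate = 100.0 * (1.0 + 0.10 * np.sin(2.0 * np.pi * 249.332609 * t))
counts = np.random.default_rng(0).poisson(rate * dt)
p, f = leahy_power(counts), np.fft.rfftfreq(n, dt)
print(f[1:][p[1:].argmax()])  # ~249.33 Hz, the injected frequency
```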
The power-balanced hybrid optical imaging system is a special design of a diffractive computational camera, introduced in this paper, with image formation by a refractive lens and a Multilevel Phase Mask (MPM). This system provides a long focal depth with low chromatic aberrations thanks to the MPM and a high light-energy concentration due to the refractive lens. We introduce the concept of optical power balance between the lens and the MPM, which controls the contribution of each element to modulating the incoming light. Additional unique features of our MPM design are the quantization of the MPM's shape on the number of levels and on the Fresnel order (thickness) using a smoothing function. To optimize the optical power balance as well as the MPM, we build a fully differentiable image formation model for joint optimization of the optical and imaging parameters of the proposed camera using neural network techniques. Additionally, we optimize a single Wiener-like optical transfer function (OTF), invariant to depth, to reconstruct a sharp image. We numerically and experimentally compare the designed system with its counterparts, lensless and just-lens optical systems, for the visible wavelength interval (400-700 nm) and the depth-of-field range (0.5 m-$\infty$ for simulations and 0.5-2 m for experiments). The attained results demonstrate that the proposed system equipped with the optimal OTF outperforms its counterparts (even when they are used with an optimized OTF) in terms of reconstruction quality for off-focus distances. The simulation results also reveal that optimizing the optical power balance, the Fresnel order, and the number of levels is essential for system performance, attaining an improvement of up to 5 dB in PSNR with the optimized OTF compared with the counterpart lensless setup.
Impurity nuclear spin relaxation is studied theoretically. A single impurity generates a bound state localized around the impurity atom in unconventional superconductors. The relaxation rate $T_1^{-1}$ is suppressed as the impurity potential increases. However, it has a peak at low temperatures due to the impurity bound state. The peak disappears at non-impurity sites. Impurity-site NMR measurements, which detect the local electronic structure just on the impurity atom, are very useful for identifying unconventional pairing states.
Landmark localization in images and videos is a classic problem solved in various ways. Nowadays, with deep networks prevailing throughout machine learning, there is renewed interest in pushing facial landmark detection technologies to handle more challenging data. Most efforts use network objectives based on L1 or L2 norms, which have several disadvantages. First of all, the locations of landmarks are determined from generated heatmaps (i.e., confidence maps) from which predicted landmark locations (i.e., the means) get penalized without accounting for the spread: a high scatter corresponds to low confidence and vice-versa. For this, we introduce a LaplaceKL objective that penalizes low confidence. Another issue is the dependency on labeled data, which are expensive to obtain and susceptible to error. To address both issues we propose an adversarial training framework that leverages unlabeled data to improve model performance. Our method achieves state-of-the-art on all of the 300W benchmarks and ranks second-best on the Annotated Facial Landmarks in the Wild (AFLW) dataset. Furthermore, our model remains robust at a reduced size: with 1/8 the number of channels (i.e., 0.0398MB) it is comparable to the state-of-the-art in real-time on CPU. Thus, we show that our method is of high practical value to real-life applications.
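For reference, one standard closed form that such an objective can build on (a sketch of the idea, not necessarily the paper's exact loss) is the KL divergence between two Laplace densities: $$\mathrm{KL}\big(\mathrm{La}(\mu_1,b_1)\,\|\,\mathrm{La}(\mu_2,b_2)\big)=\log\frac{b_2}{b_1}+\frac{d+b_1\,e^{-d/b_1}}{b_2}-1,\qquad d=|\mu_1-\mu_2|.$$ Because the divergence depends on the scales $b_1, b_2$ and not only on the locations, a diffuse (low-confidence) predicted heatmap is penalized even when its mean lands on the ground-truth landmark, which is exactly what an L2 penalty on the mean alone cannot do.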
Unified dark matter cosmologies economically combine missing matter and energy in a single fluid. Of these models, the standard Chaplygin gas is theoretically motivated, but faces problems in explaining large scale structure if linear perturbations are directly imposed on the homogeneous fluid. However, early formation of a clustered component of small halos is sufficient (and necessary) for hierarchical clustering to proceed in a CDM-like component as in the standard scenario, with the remaining homogeneous component acting as dark energy. We examine this possibility. A linear analysis shows that a critical Press-Schechter threshold for collapse can generally only be reached for generalized Chaplygin gas models that mimic $\Lambda$CDM, or ones where superluminal sound speeds occur. But the standard Chaplygin gas case turns out to be marginal, with overdensities reaching order one in the linear regime. This motivates a nonlinear analysis. A simple infall model suggests that collapse is indeed possible for perturbations of order 1~kpc and above; for, as opposed to standard gases, pressure forces decrease with increasing densities, allowing for the collapse of linearly stable systems. This suggests that a cosmological scenario based on the standard Chaplygin gas may not be ruled out from the viewpoint of structure formation, as often assumed. On the other hand, a 'nonlinear Jeans scale', restricting growth to scales $R \gtrsim {\rm kpc}$, which may be relevant to the small scale problems of CDM, is predicted. Finally, the background dynamics of clustered Chaplygin gas cosmologies is examined and confronted with observational datasets. It is found to be viable (at 1-sigma), with a mildly larger $H_0$ than $\Lambda$CDM, if the clustered fraction is larger than $90\%$.
Researchers have developed neural network verification algorithms motivated by the need to characterize the robustness of deep neural networks. The verifiers aspire to answer whether a neural network guarantees certain properties with respect to all inputs in a space. However, many verifiers inaccurately model floating point arithmetic and do not thoroughly discuss the consequences. We show that neglecting floating point error leads to unsound verification that can be systematically exploited in practice. For a pretrained neural network, we present a method that efficiently searches for inputs as witnesses for the incorrectness of robustness claims made by a complete verifier. We also present a method to construct neural network architectures and weights that induce wrong results from an incomplete verifier. Our results highlight that, to achieve practically reliable verification of neural networks, any verification system must accurately (or conservatively) model the effects of any floating point computations in the network inference or verification system.
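A self-contained illustration of the underlying gap (this is a toy example of the phenomenon, not the paper's attack): reasoning as if arithmetic were exact disagrees with what float32 actually computes.

```python
import numpy as np

# A verifier that reasons in exact real arithmetic concludes the
# accumulated sum is exactly 100000.0, but float32 inference does not.
x = np.float32(0.1)
s = np.float32(0.0)
for _ in range(10 ** 6):
    s += x                      # what float32 hardware actually does
print(float(s))                 # ~100958.34, about 1% off "exact" 100000
# Any robustness certificate derived from the exact-arithmetic value can
# therefore be violated by the deployed network; sound verifiers must
# model (or conservatively bound) such rounding.
```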
A large fraction of electronic health records consists of clinical measurements collected over time, such as blood tests, which provide important information about a patient's health status. These sequences of clinical measurements are naturally represented as time series, characterized by multiple variables and the presence of missing data, both of which complicate analysis. In this work, we propose a surgical site infection detection framework for patients undergoing colorectal cancer surgery that is completely unsupervised, hence alleviating the problem of getting access to labelled training data. The framework is based on powerful kernels for multivariate time series that account for missing data when computing similarities. Our approach shows superior performance compared to baselines that have to resort to imputation techniques and performs comparably to a supervised classification baseline.
Recently, the label consistent K-SVD (LC-KSVD) algorithm has been successfully applied to image classification. The objective function of LC-KSVD consists of the reconstruction error, the classification error, and the discriminative sparse-codes error, with an L0-norm sparse regularization term. The L0-norm, however, leads to an NP-hard problem. Although methods such as orthogonal matching pursuit can help solve this problem to some extent, it is quite difficult to find the optimal sparse solution. To overcome this limitation, we propose a label embedded dictionary learning (LEDL) method that utilises the L1-norm as the sparse regularization term, so that we can avoid the hard-to-optimize problem by solving a convex optimization problem. The alternating direction method of multipliers and the blockwise coordinate descent algorithm are then exploited to optimize the corresponding objective function. Extensive experimental results on six benchmark datasets illustrate that the proposed algorithm achieves superior performance compared to some conventional classification algorithms.
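To make the convex relaxation concrete, here is a minimal sketch of L1-regularized sparse coding via iterative soft-thresholding (ISTA); the paper itself optimizes its objective with ADMM and blockwise coordinate descent, which are not reproduced here:

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1-norm
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_code(D, y, lam, n_iter=300):
    # minimize 0.5*||y - D s||_2^2 + lam*||s||_1 over the code s
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ s - y)
        s = soft_threshold(s - grad / L, lam / L)
    return s

rng = np.random.default_rng(0)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
y = D[:, :3] @ np.array([1.0, -2.0, 0.5])  # 3-sparse ground truth
s = sparse_code(D, y, lam=0.05)
print(np.nonzero(np.abs(s) > 1e-2)[0])     # support concentrated on {0,1,2}
```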
Long-range spatiotemporal correlations may play important roles in nonequilibrium surface growth processes. In order to investigate the effects of long-range temporal correlation on the dynamic scaling of growing surfaces, we perform extensive numerical simulations of the (1+1)- and (2+1)-dimensional Kardar-Parisi-Zhang (KPZ) growth system in the presence of temporally correlated noise, and compare our results with previous theoretical predictions and numerical simulations. We find that surface morphologies are markedly affected by long-range temporal correlations and, as the temporal correlation exponent increases, the KPZ surfaces gradually develop faceted patterns in the saturated growth regimes. Our results show that the temporally correlated KPZ system displays evidently nontrivial dynamic properties when $0<\theta<0.5$: the characteristic roughness exponents satisfy $\alpha<\alpha_s$, and $\alpha_{loc}$ exhibits non-universal scaling within local window sizes, which differs from the existing dynamic scaling classifications, in both the (1+1)- and (2+1)-dimensional cases.
We study ground states of two-dimensional Bose-Einstein condensates with attractive interactions in a trap $V(x)$ rotating at the velocity $\Omega $. It is known that there exist a critical rotational velocity $0<\Omega ^*:=\Omega^*(V)\leq \infty$ and a critical number $0<a^*<\infty$ such that for any rotational velocity $0\le \Omega <\Omega ^*$, ground states exist if and only if the coupling constant $a$ satisfies $a<a^*$. For a general class of traps $V(x)$, which may not be symmetric, we prove in this paper that up to a constant phase, there exists a unique ground state as $a\nearrow a^*$, where $\Omega\in(0,\Omega^*)$ is fixed. This result essentially extends our recent uniqueness result, where only radially symmetric traps $V(x)$ could be handled.
Support vector machines (SVMs) appeared in the early nineties as optimal margin classifiers in the context of Vapnik's statistical learning theory. Since then SVMs have been successfully applied to real-world data analysis problems, often providing improved results compared with other techniques. The SVMs operate within the framework of regularization theory by minimizing an empirical risk in a well-posed and consistent way. A clear advantage of the support vector approach is that sparse solutions to classification and regression problems are usually obtained: only a few samples are involved in the determination of the classification or regression functions. This fact facilitates the application of SVMs to problems that involve a large amount of data, such as text processing and bioinformatics tasks. This paper is intended as an introduction to SVMs and their applications, emphasizing their key features. In addition, some algorithmic extensions and illustrative real-world applications of SVMs are shown.
We point out interesting effects of additional massless Dirac fermions with N_F colors upon the critical behavior of the Ginzburg-Landau model. For increasing N_F, the model is driven into the type II regime of superconductivity. The critical exponents are given as a function of N_F.
Hydrogenated amorphous silicon alloy films are generally deposited by the radio frequency plasma enhanced chemical vapor deposition (RF PECVD) technique on various types of substrates. Generally it is assumed that film quality remains unchanged whether deposited on textured or non-textured substrates. Here we analyze the difference in growth of thin film silicon layers when deposited on a textured and on a non-textured surface. In this investigation, the characteristics of two solar cells were compared, where one cell was prepared on a textured surface (Cell-A) while the other was prepared on a non-textured surface (Cell-B). Defect analysis of the devices was carried out by simulation and device modeling. It shows that the intrinsic film deposited on a textured surface was more defective ($2.4\times 10^{17}$ cm$^{-3}$) than that deposited on a flat surface ($3.2\times 10^{16}$ cm$^{-3}$). Although the primary differences between these two cells were the thickness of the active layer and the nature of the surface texturing, the simulation results show that a thin film deposited on a textured surface may acquire a higher defect density than one deposited on a flat surface. A lower effective flux density of $SiH_{3}$ precursors on the textured surface can be one of the reasons for the higher defect density in the film deposited on the textured surface. Improved light coupling can be achieved by using a thinner doped window layer. By changing the thickness from 15 nm to 3 nm, the short circuit current density increased from 16.4 mA/cm$^{2}$ to 20.96 mA/cm$^{2}$ and the efficiency increased from $9.4\%$ to $12.32\%$.
This document introduces the background and the usage of the Hyperspectral City Dataset and its benchmark. The documentation first starts with the background and motivation of the dataset. Following that, we briefly describe the method of collecting the dataset and the processing from the raw data to the final released dataset, specifically version 1.0. We also provide detailed usage instructions for the dataset and the evaluation metric for submitting results to the 2019 Hyperspectral City Challenge.
We study a non-linear convective-diffusive equation, local in space and time, which has its background in the dynamics of the thickness of a wetting film. The presence of non-linear diffusion predicts the existence of fronts as well as shock fronts. Despite the absence of memory effects, solutions in the case of pure non-linear diffusion exhibit an anomalous sub-diffusive scaling. Due to a balance between non-linear diffusion and convection we, in particular, show that solitary waves appear. For large times they merge into a single solitary wave exhibiting a topological stability. Even though our results concern a specific equation, numerical simulations support the view that the anomalous diffusion and the solitary waves disclosed here will be general features of such non-linear convective-diffusive dynamics.
The magnetization relaxation rate of small gamma-Fe2O3 particles dispersed in a silica matrix has been measured from 60 mK to 5 K. It shows a minimum around 150 mK, that can be discussed in terms of either thermal or quantum relaxation regime.
We derive semiclassical equations of motion for an accelerated wavepacket in a two-band system. We show that these equations can be formulated in terms of the static band geometry described by the quantum metric. We consider the specific cases of the Rashba Hamiltonian with and without a Zeeman term. The semiclassical trajectories are in full agreement with the ones found by solving the Schr\"odinger equation. This formalism successfully describes the adiabatic limit and the anomalous Hall effect traditionally attributed to Berry curvature. It also describes the opposite limit of coherent band superposition giving rise to a spatially oscillating Zitterbewegung motion. At $k=0$, such a wavepacket exhibits a circular trajectory in real space, with its radius given by the square root of the quantum metric. This quantity appears as a universal length scale, providing a geometrical origin of the Compton wavelength.
Hybrid quantum systems (HQSs) have attracted considerable research interest in recent years. In this Letter, we report on the design, fabrication, and characterization of a novel diamond architecture for HQSs that consists of a high-quality thin circular diamond membrane with embedded near-surface nitrogen-vacancy centers (NVCs). To demonstrate this architecture, we employed the NVCs, by means of their optical and spin interfaces, as nanosensors of the motion of the membrane under static pressure and in-resonance vibration, as well as of the residual stress of the membrane. Driving the membrane at its fundamental resonance mode, we observed coupling of this vibrational mode to the spin of the NVCs via the Hahn echo signal. Our realization of this architecture will enable future HQS-based applications in diamond piezometry and vibrometry, as well as spin-mechanical and mechanically mediated spin-spin coupling in quantum information science.
Abnormal event detection, which refers to mining unusual interactions among involved entities, plays an important role in many real applications. Previous works mostly over-simplify this task as detecting abnormal pair-wise interactions. However, real-world events may contain multi-typed attributed entities and complex interactions among them, which forms an Attributed Heterogeneous Information Network (AHIN). With the boom of social networks, abnormal event detection in AHIN has become an important, but seldom explored task. In this paper, we present the first study of the unsupervised abnormal event detection problem in AHIN. The events are considered as star-schema instances of AHIN and are further modeled by hypergraphs. A novel hypergraph contrastive learning method, named AEHCL, is proposed to fully capture abnormal event patterns. AEHCL designs intra-event and inter-event contrastive modules to exploit self-supervised AHIN information. The intra-event contrastive module captures the pair-wise and multivariate interaction anomalies within an event, and the inter-event module captures the contextual anomalies among events. These two modules collaboratively boost the performance of each other and improve the detection results. During the testing phase, a contrastive learning-based abnormal event score function is further proposed to measure the abnormality degree of events. Extensive experiments on three datasets in different scenarios demonstrate the effectiveness of AEHCL, and the results improve on state-of-the-art baselines by up to 12.0% in Average Precision (AP) and 4.6% in Area Under Curve (AUC), respectively.
We utilise the final catalogue from the Pan-Andromeda Archaeological Survey to investigate the links between the globular cluster system and field halo in M31 at projected radii $R_p=25-150$ kpc. In this region the cluster radial density profile exhibits a power-law decline with index $\Gamma=-2.37\pm0.17$, matching that for the stellar halo component with [Fe/H] $<-1.1$. Spatial density maps reveal a striking correspondence between the most luminous substructures in the metal-poor field halo and the positions of many globular clusters. By comparing the density of metal-poor halo stars local to each cluster with the azimuthal distribution at commensurate radius, we reject the possibility of no correlation between clusters and field overdensities with high confidence. We use our stellar density measurements and previous kinematic data to demonstrate that $\approx35-60\%$ of clusters exhibit properties consistent with having been accreted into the outskirts of M31 at late times with their parent dwarfs. Conversely, at least $\sim40\%$ of remote clusters show no evidence for a link with halo substructure. The radial density profile for this subgroup is featureless and closely mirrors that observed for the apparently smooth component of the metal-poor stellar halo. We speculate that these clusters are associated with the smooth halo; if so, their properties appear consistent with a scenario where the smooth halo was built up at early times via the destruction of primitive satellites. In this picture the entire M31 globular cluster system outside $R_p=25$ kpc comprises objects accumulated from external galaxies over a Hubble time of growth.
Despite its remarkable empirical success as a highly competitive branch of artificial intelligence, deep learning is often blamed for its widely acknowledged low interpretability and its lack of a firm and rigorous mathematical foundation. Moreover, most theoretical effort has been devoted to the discriminative case, whose complementary part is generative deep learning. To the best of our knowledge, we are the first to characterize the landscape of the empirical error in the generative case, completing the full picture, through a careful design of image super-resolution under norm-based capacity control. Our theoretical interpretation of the training dynamics is developed from both mathematical and biological perspectives.
We aim to investigate the bolometric $L_{\mathrm{X}}-T$ relation for galaxy groups, and study the impact of gas cooling, feedback from supermassive black holes, and selection effects on it. With a sample of 26 galaxy groups, we obtained the best fit $L_{\mathrm{X}}-T$ relation for five different cases depending on the ICM core properties and central AGN radio emission, and determined the slopes, normalisations, and intrinsic and statistical scatters for both temperature and luminosity. Simulations were undertaken to correct for selection effects (e.g. Malmquist bias) and the bias-corrected relations for groups and clusters were compared. The slope of the bias-corrected $L_{\mathrm{X}}-T$ relation is marginally steeper but consistent with that of clusters ($\sim 3$). Groups with a central cooling time less than 1 Gyr (SCC groups) show indications of having the steepest slope and the highest normalisation. For the groups, the bias-corrected intrinsic scatter in $L_{\mathrm{X}}$ is larger than the observed scatter for most cases, which is reported here for the first time. Lastly, we see indications that groups with an extended central radio source (CRS) have a much steeper slope than those groups which have a CRS with only core emission. Additionally, we also see indications that the more powerful radio AGN are preferentially located in non-SCC (NSCC) groups rather than SCC groups.
The "similarity" degree of a unital operator algebra $A$ was defined and studied in two recent papers of ours, where in particular we showed that it coincides with the "length" of an operator algebra. This paper brings several complements: we give direct proofs (with slight improvements) of several known facts on the length which were only known via the degree, and we show that the length of a type $II_1$ factor with property $\Gamma$ is at most 5, improving on a previous bound ($\le 44$) due to E. Christensen.
When seeking information not covered in patient-friendly documents, like medical pamphlets, healthcare consumers may turn to the research literature. Reading medical papers, however, can be a challenging experience. To improve access to medical papers, we introduce a novel interactive interface, Paper Plain, with four features powered by natural language processing: definitions of unfamiliar terms, in-situ plain language section summaries, a collection of key questions that guide readers to answering passages, and plain language summaries of the answering passages. We evaluate Paper Plain, finding that participants who use it have an easier time reading and understanding research papers without a loss in paper comprehension compared to those who use a typical PDF reader. Altogether, the study results suggest that guiding readers to relevant passages and providing plain language summaries, or "gists," alongside the original paper content can make reading medical papers easier and give readers more confidence to approach these papers.
High-quality Quantitative Precipitation Estimation (QPE) at high spatiotemporal resolution is crucial to many hydrologic and hydro-meteorological designs. Optimal QPE of rainfall improves the accuracy of river and flash flood forecasts. In this study, we aim to merge multiple rainfall estimates, including rain gauge, radar, Inverse Distance Weighting (IDW), Ordinary Co-Kriging (OCK), and Adaptive Conditional Bias Penalized Co-Kriging (CBPCK), through two of the most common model selection techniques: the Least Absolute Shrinkage and Selection Operator (LASSO) and Bayesian Model Averaging (BMA). The methods were applied to the entire United States for a certain period. Statistical measures such as RMSE, ME, NSE, and the correlation coefficient are used to investigate the accuracy and reliability of the estimation models. It is shown that both BMA and LASSO improve the precipitation estimation when all ranges of rainfall observations are included. However, the OCK and CBPCK techniques outperform the other methods for rainfall exceeding 10 mm. The IDW estimates show a small bias but poor overall estimation, due to the limitation of not using radar as a secondary variable. OCK and CBPCK address this problem by adding radar rainfall estimates as the second variable.
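As a toy illustration of the LASSO-based merging step (synthetic data; scikit-learn's LassoCV stands in for whatever implementation the study used):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic stand-ins: "gauge" is treated as ground truth, and four
# noisy columns play the roles of radar, IDW, OCK and CBPCK estimates.
rng = np.random.default_rng(0)
gauge = rng.gamma(2.0, 5.0, size=500)
X = np.column_stack([gauge + rng.normal(0.0, s, 500) for s in (1, 2, 3, 4)])

model = LassoCV(cv=5).fit(X, gauge)       # LASSO selects/weights inputs
merged = model.predict(X)                 # blended precipitation estimate
print("weights:", model.coef_, "intercept:", model.intercept_)
```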
We show that the quasi-adiabatic evolution of a system governed by the Dicke Hamiltonian can be described in terms of a self-induced quantum many-body metrological protocol. This effect relies on the sensitivity of the ground state to a small symmetry-breaking perturbation at the quantum phase transition, which leads to the collapse of the wavefunction into one of two possible ground states. The scaling of the final state properties with the number of atoms and with the intensity of the symmetry-breaking field can be interpreted in terms of the precession time of an effective quantum metrological protocol. We show that our ideas can be tested with spin-phonon interactions in trapped ion setups. Our work points to a classification of quantum phase transitions in terms of the capability of many-body quantum systems for parameter estimation.
We have recently studied a simplified version of the path integral for a particle on a sphere, and more generally on maximally symmetric spaces, and proved that Riemann normal coordinates allow the use of a quadratic kinetic term in the particle action. The emerging linear sigma model contains a scalar effective potential that reproduces the effects of the curvature. We present here further details on the construction, and extend its perturbative evaluation to orders high enough to read off the type-A trace anomalies of a conformal scalar in dimensions d = 14 and d = 16.
A long standing question in the theory of orthogonal matrix polynomials is the matrix Bochner problem, the classification of $N \times N$ weight matrices $W(x)$ whose associated orthogonal polynomials are eigenfunctions of a second order differential operator. Based on techniques from noncommutative algebra (semiprime PI algebras of Gelfand-Kirillov dimension one), we construct a framework for the systematic study of the structure of the algebra $\mathcal D(W)$ of matrix differential operators for which the orthogonal polynomials of the weight matrix $W(x)$ are eigenfunctions. The ingredients for this algebraic setting are derived from the analytic properties of the orthogonal matrix polynomials. We use the representation theory of the algebras $\mathcal D(W)$ to resolve the matrix Bochner problem under the two natural assumptions that the sum of the sizes of the matrix algebras in the central localization of $\mathcal D(W)$ equals $N$ (fullness of $\mathcal D(W)$) and the leading coefficient of the second order differential operator multiplied by the weight $W(x)$ is positive definite. In the case of $2\times 2$ weights, it is proved that fullness is satisfied as long as $\mathcal D(W)$ is noncommutative. The two conditions are natural in that without them the problem is equivalent to much more general ones by artificially increasing the size of the matrix $W(x)$.
This is the second paper in a series of four in which we use space adiabatic methods in order to incorporate backreactions among the homogeneous and between the homogeneous and inhomogeneous degrees of freedom in quantum cosmological perturbation theory. The purpose of the present paper is twofold. On the one hand, it illustrates the formalism of space adiabatic perturbation theory (SAPT) for two simple quantum mechanical toy models. On the other, it proves the main point, namely that backreactions lead to additional correction terms in effective Hamiltonians that one would otherwise neglect in a crude Born-Oppenheimer approximation. The first model that we consider is a harmonic oscillator coupled to an anharmonic oscillator. We chose it because it displays many similarities with the more interesting second model describing the coupling between an inflaton and gravity restricted to the purely homogeneous and isotropic sector. These results have potential phenomenological consequences in particular for quantum cosmological theories describing big bounces such as Loop Quantum Cosmology (LQC).
The locomotion of Caenorhabditis elegans exhibits complex patterns. In particular, the worm combines mildly curved runs and sharp turns to steer its course. Both runs and sharp turns of various types are important components of taxis behavior. The statistics of sharp turns have been intensively studied. However, there have been few studies on runs, except for those on klinotaxis (also called weathervane mechanism), in which the worm gradually curves toward the direction with a high concentration of chemicals; this phenomenon was discovered recently. We analyzed the data of runs by excluding sharp turns. We show that the curving rate obeys long-tail distributions, which implies that large curving rates are relatively frequent. This result holds true for locomotion in environments both with and without a gradient of NaCl concentration; it is independent of klinotaxis. We propose a phenomenological computational model on the basis of a random walk with multiplicative noise. The assumption of multiplicative noise posits that the fluctuation of the force is proportional to the force exerted. The model reproduces the long-tail property present in the experimental data.
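A phenomenological sketch of such a walk (the discretization and parameters here are illustrative assumptions, not the paper's fitted model):

```python
import numpy as np

# 2D walker whose curving rate kappa carries multiplicative noise:
# the fluctuation of the turning "force" is proportional to the force.
rng = np.random.default_rng(1)
dt, n = 0.01, 200_000
kappa = np.empty(n)
kappa[0] = 0.1                            # curving rate [rad / unit time]
for i in range(1, n):
    kappa[i] = kappa[i - 1] * (1.0 + np.sqrt(dt) * rng.normal())
theta = np.cumsum(kappa) * dt             # heading angle
x = np.cumsum(np.cos(theta)) * dt         # trajectory coordinates
y = np.cumsum(np.sin(theta)) * dt
# kappa develops a heavy (roughly log-normal) tail, so large curving
# rates occur far more often than under additive noise.
```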
Usually the first course in mathematics is calculus. It is a core course in the curricula of Business, Engineering, and the Sciences. However, many students face difficulties in learning calculus, often caused by a prior fear of mathematics. Students today cannot live without using computer technology, and the use of computers for teaching and learning can transform the boring traditional methodology of teaching into a more active and attractive one. In this paper, we show how Excel can be used in teaching calculus to improve our students' learning and understanding through different types of applications, ranging from Business to Engineering. The effectiveness of the proposed methodology was tested on a random sample of 45 students from different majors over a period of two semesters.
We have obtained contemporaneous light, color, and radial velocity data for three proto-planetary nebulae (PPNe) over the years 2007 to 2015. The light and velocity curves of each show similar periods of pulsation, with photometric periods of 42 and 50 days for IRAS 17436+5003, 102 days for IRAS 18095+2704, and 35 days for IRAS 19475+3119. The light and velocity curves are complex with multiple periods and small, variable amplitudes. Nevertheless, at least over limited time intervals, we were able to identify dominant periods in the light, color, and velocity curves and compare the phasing of each. The color curves appear to peak with or slightly after the light curves while the radial velocity curves peak about a quarter of a cycle before the light curves. Similar results were found previously for two other PPNe, although for them the light and color appeared to be in phase. Thus it appears that PPNe are brightest when smallest and hottest. These phase results differ from those found for classical Cepheid variables, where the light and velocity differ by half a cycle, and are hottest at about average size and expanding. However, they do appear to have similar phasing to the larger amplitude pulsations seen in RV Tauri variables. Presently, few pulsation models exist for PPNe, and these do not fit the observations well, especially the longer periods observed. Model fits to these new light and velocity curves would allow masses to be determined for these post-AGB objects, and thereby provide important constraints to post-AGB stellar evolution models of low and intermediate-mass stars.
We demonstrate the selective control of the magnetic response and photoluminescence properties of Er3+ centers with light, by associating them with a highly conjugated beta-diketonate (1,3-di(2-naphthyl)-1,3-propanedione) ligand. We demonstrate this system to be an optically-pumped molecular compound emitting in the infrared, which can be employed as a precise heat-driving and detecting unit for low temperatures.
The sufficiently scattered condition (SSC) is a key condition in the study of identifiability of various matrix factorization problems, including nonnegative, minimum-volume, symmetric, simplex-structured, and polytopic matrix factorizations. The SSC allows one to guarantee that the computed matrix factorization is unique/identifiable, up to trivial ambiguities. However, this condition is NP-hard to check in general. In this paper, we show that it can however be checked in a reasonable amount of time in realistic scenarios, when the factorization rank is not too large. This is achieved by formulating the problem as a non-convex quadratic optimization problem over a bounded set. We use the global non-convex optimization software Gurobi, and showcase the usefulness of this code on synthetic data sets and on real-world hyperspectral images.
We prove that a "random" free group outer automorphism is an ageometric fully irreducible outer automorphism whose ideal Whitehead graph is a union of triangles. In particular, we show that its attracting (and repelling) tree is a nongeometric $\mathbb R$-tree all of whose branch points are trivalent.
We present two new methods for linear elasticity that simultaneously yield stress and displacement approximations of optimal accuracy in both the mesh size $h$ and the polynomial degree $p$. This is achieved within the recently developed discontinuous Petrov-Galerkin (DPG) framework. In this framework, both the stress and the displacement approximations are discontinuous across element interfaces. We study locking-free convergence properties and the interrelationships between the two DPG methods.
In this paper we use a generalization of Oevel's theorem about master symmetries to relate them to superintegrability and quadratic algebras.
SMS messaging is a popular medium of communication. Because of its popularity and privacy, it could be used for many illegal purposes. Additionally, since they are part of day-to-day life, SMSes can be used as evidence in many legal disputes. Since a cellular phone might be accessible to people close to the owner, it is important to establish that the sender of a message is indeed the owner of the phone. For this purpose, the straightforward solution seems to be the use of popular stylometric methods. However, in comparison with the data used for stylometry in the literature, SMSes have unusual characteristics that make it hard or impossible to apply these methods in a conventional way. Our goal is to come up with a method of authorship detection for SMS messages that still gives usable accuracy. We argue that, among the methods of author attribution, the best suited to SMS messages is an n-gram method. To prove our point, we checked two different methods of distribution comparison with varying amounts of training and testing data. We specifically compare how well our algorithms work with small amounts of testing data and a large number of candidate authors (which we believe to be the real-world scenario) against controlled tests with fewer authors and selected SMSes with large numbers of words. To counter the lack of information in a single SMS message, we propose stacking together a few SMSes.
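A minimal sketch of character n-gram attribution (profile overlap is one simple distribution-comparison choice; the paper's two specific comparison methods are not reproduced here):

```python
from collections import Counter

def ngram_profile(texts, n=3):
    # Relative frequencies of character n-grams over a set of messages
    c = Counter()
    for t in texts:
        c.update(t[i:i + n] for i in range(len(t) - n + 1))
    total = sum(c.values())
    return {g: k / total for g, k in c.items()}

def overlap(p, q):
    # Shared mass of two n-gram distributions, in [0, 1]
    return sum(min(v, q.get(g, 0.0)) for g, v in p.items())

def attribute(msgs, candidates, n=3):
    # candidates: dict mapping author -> list of known SMSes
    test = ngram_profile(msgs, n)
    return max(candidates,
               key=lambda a: overlap(test, ngram_profile(candidates[a], n)))

known = {"alice": ["c u at 8 :)", "lol k c u then"],
         "bob": ["Will arrive at eight.", "Okay, see you then."]}
print(attribute(["ok lol c u l8r"], known))  # -> 'alice' (toy example)
```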
Both experiments and direct numerical simulations have been used to demonstrate that riblets can reduce turbulent drag by as much as $10\%$, but their systematic design remains an open challenge. In this paper, we develop a model-based framework to quantify the effect of streamwise-aligned spanwise-periodic riblets on kinetic energy and skin-friction drag in turbulent channel flow. We model the effect of riblets as a volume penalization in the Navier-Stokes equations and use the statistical response of the eddy-viscosity-enhanced linearized equations to quantify the effect of background turbulence on the mean velocity and skin-friction drag. For triangular riblets, our simulation-free approach reliably predicts drag-reducing trends as well as mechanisms that lead to performance deterioration for large riblets. We investigate the effect of height and spacing on drag reduction and demonstrate a correlation between energy suppression and drag-reduction for appropriately sized riblets. We also analyze the effect of riblets on drag reduction mechanisms and turbulent flow structures including very large scale motions. Our results demonstrate the utility of our approach in capturing the effect of riblets on turbulent flows using models that are tractable for analysis and optimization.
Self-supervised representation learning has achieved impressive results in recent years, with experiments primarily performed on ImageNet or other similarly large internet imagery datasets. There has been little to no work applying these methods to other, smaller domains, such as satellite, textural, or biological imagery. We experiment with several popular methods on an unprecedented variety of domains. We discover, among other findings, that Rotation is by far the most semantically meaningful task, with much of the performance of Jigsaw and Instance Discrimination being attributable to the nature of their induced distribution rather than to semantic understanding. Additionally, there are several areas, such as fine-grained classification, where all tasks underperform. We quantitatively and qualitatively diagnose the reasons for these failures and successes via novel experiments studying pretext generalization, random labelings, and implicit dimensionality. Code and models are available at https://github.com/BramSW/Extending_SSRL_Across_Domains/.
We determine the density of monic integer polynomials of given degree $n>1$ that have squarefree discriminant; in particular, we prove for the first time that the lower density of such polynomials is positive. Similarly, we prove that the density of monic integer polynomials $f(x)$, such that $f(x)$ is irreducible and $\mathbb Z[x]/(f(x))$ is the ring of integers in its fraction field, is positive, and is in fact given by $\zeta(2)^{-1}$. It also follows from our methods that there are $\gg X^{1/2+1/n}$ monogenic number fields of degree $n$ having associated Galois group $S_n$ and absolute discriminant less than $X$, and we conjecture that the exponent in this lower bound is optimal.
Quadratic functions have applications in cryptography. In this paper, we investigate the modular quadratic equation $$ax^2+bx+c\equiv 0 \pmod{2^n},$$ and provide a complete analysis of it. More precisely, we determine when this equation has a solution and, in that case, we not only determine the number of solutions but also give the set of solutions in $O(n)$ time. One of the interesting results of our research is that, when this equation has a solution, the number of solutions is a power of two.
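A brute-force illustration of the power-of-two claim for small $n$ (the paper's $O(n)$ solver is not reproduced here):

```python
def solutions(a, b, c, n):
    # Brute-force enumeration of roots of a*x^2 + b*x + c mod 2^n
    m = 1 << n
    return [x for x in range(m) if (a * x * x + b * x + c) % m == 0]

for a, b, c in [(1, 1, 0), (1, 2, 1), (3, 4, 5), (2, 6, 4)]:
    s = solutions(a, b, c, 8)
    print((a, b, c), "->", len(s), "solutions mod 2^8")
# Whenever the count is nonzero, it comes out as a power of two,
# e.g. (1,2,1) = (x+1)^2 has 16 roots mod 256.
```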
This paper is a study of power series, where the coefficients are binomial expressions (iterated finite differences). Our results can be used for series summation, for series transformation, or for asymptotic expansions involving Stirling numbers of the second kind. In certain cases we obtain asymptotic expansions involving Bernoulli polynomials, poly-Bernoulli polynomials, or Euler polynomials. We also discuss connections to Euler series transformations and other series transformation formulas.
We discuss the relation between cluster integrable systems and spin chains in the context of their correspondence with 5d supersymmetric gauge theories. It is shown that the $\mathfrak{gl}_N$ XXZ-type spin chain on $M$ sites is isomorphic to a cluster integrable system with an $N \times M$ rectangular Newton polygon and an $N \times M$ fundamental domain of a 'fence net' bipartite graph. The Casimir functions of the Poisson bracket, labeled by the zig-zag paths on the graph, correspond to the inhomogeneities, on-site Casimirs and twists of the chain, supplemented by the total spin. The symmetry of the cluster formulation implies a natural spectral duality, relating the $\mathfrak{gl}_N$-chain on $M$ sites with the $\mathfrak{gl}_M$-chain on $N$ sites. For these systems we construct explicitly a subgroup of the cluster mapping class group $\mathcal{G}_\mathcal{Q}$ and show that it acts by permutations of zig-zags and, as a consequence, by permutations of twists and inhomogeneities. Finally, we derive Hirota bilinear equations describing the dynamics of the tau-functions, or A-cluster variables, under the action of some generators of $\mathcal{G}_\mathcal{Q}$.
Molecular dynamics (MD) is an important research tool extensively applied in materials science. Running MD on a graphics processing unit (GPU) is an attractive new approach for accelerating MD simulations. Currently, GPU implementations of MD usually run in a one-host-process-one-GPU (OHPOG) scheme. This scheme may pose a limitation on the system size that an implementation can handle due to the small device memory relative to the host memory. In this paper, we present a one-host-process-multiple-GPU (OHPMG) implementation of MD with embedded-atom-model or semi-empirical tight-binding many-body potentials. Because more device memory is available in an OHPMG process, the system size that can be handled is increased to a few million or more atoms. In comparison with the CPU implementation, in which Newton's third law is applied to improve the computational efficiency, our OHPMG implementation achieves a 28.9x to 86.0x speedup in double precision, depending on the system size, the cut-off ranges and the number of GPUs. The implementation can also handle a group of small boxes in one run by combining them into a large box. This approach greatly improves GPU computing efficiency when a large number of MD simulations for small boxes are needed for statistical purposes.
The recent interest in beta- radionuclides for radio-guided surgery derives from the ability of beta radiation to release its energy within a few millimeters of tissue. This feature can be used to locate residual tumors with a probe placed in their immediate vicinity, determining the resection margins with millimeter accuracy. The drawback of this technique is that it does not allow the identification of tumors hidden behind more than a few mm of tissue. Conversely, the bremsstrahlung X-rays emitted by the interaction of the beta- radiation with the nuclei of the tissue are relatively penetrating. To complement the beta- probes, we have therefore developed a detector based on cadmium telluride, an X-ray detector with high quantum efficiency working at room temperature. We measured the secondary emission of bremsstrahlung photons in a target of polymethylmethacrylate (PMMA), which has a density similar to living tissue. The results show that this device can detect a 1 ml residual or lymph node with an activity of 1 kBq hidden under a layer of 10 mm of PMMA with a 3:1 signal to noise ratio, i.e. with a five sigma discrimination, in less than 5 s.
We have analyzed an efficient integration of the multi-qubit echo quantum memory into the quantum computer scheme on the atomic resonant ensembles in quantum electrodynamics cavity. Here, one atomic ensemble with controllable inhomogeneous broadening is used for the quantum memory node and other atomic ensembles characterized by the homogeneous broadening of the resonant line are used as processing nodes. We have found optimal conditions for efficient integration of multi-qubit quantum memory modified for this analyzed physical scheme and we have determined a specified shape of the self temporal modes providing a perfect reversible transfer of the photon qubits between the quantum memory node and arbitrary processing nodes. The obtained results open the way for realization of full-scale solid state quantum computing based on using the efficient multi-qubit quantum memory.
On the affine space containing the space $\mathcal{S}$ of quantum states of finite-dimensional systems there are contravariant tensor fields by means of which it is possible to define Hamiltonian and gradient vector fields encoding relevant geometrical properties of $\mathcal{S}$. Guided by Dirac's analogy principle, we will use them as inspiration to define contravariant tensor fields, Hamiltonian and gradient vector fields on the affine space containing the space of fair probability distributions on a finite sample space and analyse their geometrical properties. Most of our considerations will be dealt with for the simple example of a three-level system.
In this talk I review recent progress made in extracting V_{ub} from the cut electron energy and hadronic mass spectra of inclusive B meson decays utilizing the data from radiative decays. It is shown that an extraction is possible without modeling the B meson structure function. I discuss the issues involving the assumptions of local duality in various extractions. I also comment on the recent CLEO extraction of V_{ub}.
We propose a novel method for continuous-time feature tracking in event cameras. To this end, we track features by aligning events along an estimated trajectory in space-time such that the projection on the image plane results in maximally sharp event patch images. The trajectory is parameterized by $n^{th}$ order B-splines, which are continuous up to $(n-2)^{th}$ derivative. In contrast to previous work, we optimize the curve parameters in a sliding window fashion. On a public dataset we experimentally confirm that the proposed sliding-window B-spline optimization leads to longer and more accurate feature tracks than in previous work.
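For concreteness, a sketch of the trajectory parameterization (the knots and control points below are illustrative placeholders, not optimized values; scipy's BSpline stands in for whatever spline machinery the authors use):

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                      # degree (order n = k + 1 = 4)
ctrl = np.array([[0.0, 0.0], [1.0, 2.0],   # 2D control points (x, y)
                 [2.0, 1.5], [3.0, 3.0]])
knots = np.r_[np.zeros(k + 1), np.ones(k + 1)]  # clamped, single span
curve = BSpline(knots, ctrl, k)            # vector-valued spline x(t)
print(curve(0.5))                          # feature position at t = 0.5
# An order-n spline is C^{n-2}: smooth enough to differentiate the
# trajectory when aligning events along it.
```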
The Solar Tower Atmospheric Cherenkov Effect Experiment (STACEE) is a new ground-based atmospheric Cherenkov telescope for gamma-ray astronomy. STACEE uses the large mirror area of a solar heliostat facility to achieve a low energy threshold. A prototype experiment which uses 32 heliostat mirrors with a total mirror area of $\sim 1200$ m$^2$ has been constructed. This prototype, called STACEE-32, was used to search for high energy gamma-ray emission from the Crab Nebula and Pulsar. Observations taken between November 1998 and February 1999 yield a strong statistical excess of gamma-like events from the Crab, with a significance of $+6.75\sigma$ in 43 hours of on-source observing time. No evidence for pulsed emission from the Crab Pulsar was found, and the upper limit on the pulsed fraction of the observed excess was $<5.5\%$ at the 90% confidence level. A subset of the data was used to determine the integral flux of gamma rays from the Crab. We report an energy threshold of $E_{\rm th} = 190 \pm 60$ GeV, and a measured integral flux of $I(E > E_{\rm th}) = (2.2 \pm 0.6 \pm 0.2) \times 10^{-10}$ photons cm$^{-2}$ s$^{-1}$. The observed flux is in agreement with a continuation to lower energies of the power-law spectrum seen at TeV energies.
Geographical information systems are ideal candidates for the application of parallel programming techniques, mainly because they usually handle large data sets. To help us deal with complex calculations over such data sets, we investigated the performance constraints of a classic master-worker parallel paradigm over a message-passing communication model. To this end, we present a new approach that employs an external database in order to improve the calculation/communication overlap, thus reducing the idle times for the worker processes. The presented approach is implemented as part of a parallel radio-coverage prediction tool for the GRASS environment. The prediction calculation employs digital elevation models and land-usage data in order to analyze the radio coverage of a geographical area. We provide an extended analysis of the experimental results, which are based on real data from an LTE network currently deployed in Slovenia. Based on the results of the experiments, which were performed on a computer cluster, the new approach exhibits better scalability than the traditional master-worker approach. We successfully tackled real-world data sets, while greatly reducing the processing time and saturating the hardware utilization.
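For reference, a minimal sketch of the classic master-worker paradigm over message passing that the paper starts from (using mpi4py; the external-database overlap optimization is not reproduced here):

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
TASK, STOP = 1, 2

if rank == 0:                                  # master process
    tasks = list(range(100))                   # e.g. radio-coverage tiles
    results, status = [], MPI.Status()
    for w in range(1, size):                   # seed each worker
        comm.send(tasks.pop(), dest=w, tag=TASK)
    pending = size - 1
    while pending:
        results.append(comm.recv(source=MPI.ANY_SOURCE, status=status))
        src = status.Get_source()
        if tasks:                              # refill the idle worker
            comm.send(tasks.pop(), dest=src, tag=TASK)
        else:                                  # no work left: shut it down
            comm.send(None, dest=src, tag=STOP)
            pending -= 1
    print(len(results), "results collected")
else:                                          # worker process
    status = MPI.Status()
    while True:
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == STOP:
            break
        comm.send(task * task, dest=0)         # stand-in computation
```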
Let $X$ be a smooth scheme over a finitely generated flat $\mathbb{Z}$-, $\mathbb{Z}_{(p)}$- or $\mathbb{Z}_p$-algebra $R$. Evaluated at finite truncation sets $S$, the relative de Rham-Witt complex $W_S\Omega_{X/R}^{\bullet}$ is a quotient of the de Rham complex $\Omega^{\bullet}_{W_S(X)/W_S(R)}$, which can be computed affine locally via explicit, but complicated relations. In this paper we prove that $W_S\Omega_{X/R}^{\bullet}$ is the torsionless quotient of the usual de Rham complex $\Omega^{\bullet}_{W_S(X)/W_S(R)}$ on the singular scheme $W_S(X)$. This result was suggested by comparison with a similar modification of the de Rham complex in the theory of singular varieties.
Natural language processing (NLP) is the field that attempts to make human language accessible to computers, and it relies on applying a mathematical model to express the meaning of symbolic language. One such model, DisCoCat, defines how to express both the meaning of individual words and their compositional nature. This model can be naturally implemented on quantum computers, leading to the field of quantum NLP (QNLP). Recent experimental work used quantum machine learning techniques to map from text to class label using the expectation value of the quantum encoded sentence. Theoretical work has been done on computing the similarity of sentences but relies on an unrealized quantum memory store. The main goal of this thesis is to leverage the DisCoCat model to design a quantum-based kernel function that can be used by a support vector machine (SVM) for NLP tasks. Two similarity measures were studied: (i) the transition amplitude approach and (ii) the SWAP test. A simple NLP meaning classification task from previous work was used to train the word embeddings and evaluate the performance of both models. The Python module lambeq and its related software stack were used for the implementation. The explicit model from previous work was used to train word embeddings and achieved a testing accuracy of $93.09 \pm 0.01$%. It was shown that both SVM variants achieved higher testing accuracies: $95.72 \pm 0.01$% for approach (i) and $97.14 \pm 0.01$% for (ii). The SWAP test was then simulated under a noise model defined by a real quantum device, ibmq_guadalupe. The explicit model achieved an accuracy of $91.94 \pm 0.01$% while the SWAP test SVM achieved 96.7% on the testing dataset, suggesting that the kernelized classifiers are resilient to noise. These are encouraging results and motivate further investigation of our proposed kernelized QNLP paradigm.
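A minimal classical-simulation sketch of the transition-amplitude idea (assuming sentence state vectors are already available as arrays; the DisCoCat/lambeq encoding and any hardware execution are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVC

def fidelity_kernel(states_a, states_b):
    # k(x, y) = |<psi_x|psi_y>|^2 on classically simulated state vectors
    overlaps = states_a.conj() @ states_b.T
    return np.abs(overlaps) ** 2

rng = np.random.default_rng(0)
n, dim = 40, 8                             # 40 sentences, 3-qubit states
states = rng.normal(size=(n, dim)) + 1j * rng.normal(size=(n, dim))
states /= np.linalg.norm(states, axis=1, keepdims=True)
labels = rng.integers(0, 2, size=n)        # toy binary meaning labels

K = fidelity_kernel(states, states)        # Gram matrix for the SVM
svm = SVC(kernel="precomputed").fit(K, labels)
print(svm.predict(K)[:5])                  # predictions on training data
```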
In this paper, we systematically study the dynamic snap-through behavior of a pre-deformed elastic ribbon by combining theoretical analysis, discrete numerical simulations, and experiments. By rotating one of its clamped ends with controlled angular speed, we observe two snap-through transition paths among the multiple stable configurations of a ribbon in three-dimensional (3D) space, which differs from the classical snap-through of a two-dimensional (2D) bistable beam. Our theoretical model for the static bifurcation analysis is derived from the Kirchhoff equations, and dynamical numerical simulations are conducted using the Discrete Elastic Rods (DER) algorithm. The planar beam model is also employed for the asymptotic analysis of dynamic snap-through behaviors. The results show that, since the snap-through processes of both planar beams and 3D ribbons are governed by the saddle-node bifurcation, the same scaling law for the delay applies. We further demonstrate that, in elastic ribbons, by controlling the rotating velocity at the end, distinct snap-through pathways can be realized by selectively skipping specific modes; moreover, particular final modes can be strategically achieved. Through a parametric study using numerical simulations, we construct general phase diagrams for both mode skipping and mode selection of snapping ribbons. The work serves as a benchmark for future investigations of dynamic snap-through in thin elastic structures and provides guidelines for the novel design of intelligent mechanical systems.
A sensitive optical diffractometry method is developed and utilized for advanced tomography of laser-induced air plasma formations. Using transverse diffractometry and Supergaussian plasma distribution modelling, we extract the main parameters of the plasma, namely its density, width, and shape, with 20 micrometer spatial resolution throughout the plasma formation. The experimentally recorded diffraction patterns fitted by the Supergaussian plasma model are found to capture unprecedentedly delicate traits in the evolution of the plasma from its effective birth onward. Key features in the spatial evolution of the plasma, such as the 'escape position', the 'turning point', and the refocusing dynamics of the beam, are identified and explored in detail. Our work provides experimental and theoretical access to the highly nonlinear dynamics of laser-induced air plasma.
I present an analysis of the gamma-ray and afterglow energies of the complete sample of 17 short duration GRBs with prompt X-ray follow-up. I find that 80% of the bursts exhibit a linear correlation between their gamma-ray fluence and the afterglow X-ray flux normalized to t=1 d, a proxy for the kinetic energy of the blast wave (F_{X,1} ~ F_{gamma}^{1.01}). An even tighter correlation is evident between E_{gamma,iso} and L_{X,1} for the subset of 13 bursts with measured or constrained redshifts. The remaining 20% of the bursts have values of F_{X,1}/F_{gamma} that are suppressed by about three orders of magnitude, likely because of low circumburst densities (Nakar 2007). These results have several important implications: (i) The X-ray luminosity is generally a robust proxy for the blast wave kinetic energy, indicating nu_X>nu_c and hence a circumburst density n>0.05 cm^{-3}; (ii) most short GRBs have a narrow range of gamma-ray efficiency, with <epsilon_{gamma}>~0.85 and a spread of 0.14 dex; and (iii) the isotropic-equivalent energies span 10^{48}-10^{52} erg. Furthermore, I find tentative evidence for jet collimation in the two bursts with the highest E_{gamma,iso}, perhaps indicative of the same inverse correlation that leads to a narrow distribution of true energies in long GRBs. I find no clear evidence for a relation between the overall energy release and host galaxy type, but a positive correlation with duration may be present, albeit with a large scatter. Finally, I note that the outlier fraction of 20% is similar to the proposed fraction of short GRBs from dynamically-formed neutron star binaries in globular clusters. This scenario may naturally explain the bimodality of the F_{X,1}/F_{gamma} distribution and the low circumburst densities without invoking speculative kick velocities of several hundred km/s.
A pair of variables that tend to rise and fall either together or in opposition are said to be monotonically associated. For certain phenomena, this tendency is causally restricted to a subpopulation, as, for example, an allergic reaction to an irritant. Previously, Yu et al. (2011) devised a method of rearranging observations to test paired data to see if such an association might be present in a subpopulation. However, the computational intensity of the method limited its application to relatively small samples of data, and the test itself only judges if association is present in some subpopulation; it does not clearly identify the subsample that came from this subpopulation, especially when the whole sample tests positive. The present paper adds a "top-K" feature (Sampath and Verducci (2013)) based on a multistage ranking model, that identifies a concise subsample that is likely to contain a high proportion of observations from the subpopulation in which the association is supported. Computational improvements incorporated into this top-K tau-path (TKTP) algorithm now allow the method to be extended to thousands of pairs of variables measured on sample sizes in the thousands. A description of the new algorithm along with measures of computational complexity and practical efficiency help to gauge its potential use in different settings. Simulation studies catalog its accuracy in various settings, and an example from finance illustrates its step-by-step use.
We study the soldering of two Siegel chiral bosons into one scalar field in a gravitational background.
The motion of faint propagating disturbances (PD) in the solar corona reveals an intricate structure which must be defined by the magnetic field. Applied to quiet Sun observations by the Atmospheric Imaging Assembly (AIA)/Solar Dynamics Observatory (SDO), a novel method reveals a cellular network, with cells of typical diameters 50\arcsec\ in the cool 304\AA\ channel, and 100\arcsec\ in the coronal 193\AA\ channel. The 193\AA\ cells can overlie several 304\AA\ cells, although both channels share common source and sink regions. The sources are points, or narrow corridors, of divergence that occupy the centres of cells. They are significantly aligned with photospheric network features and enhanced magnetic elements. This shows that the bright network is important to the production of PDs, and confirms that the network is host to the source footpoint of quiet coronal loops. The other footpoints, the sinks of the PDs, form the boundaries of the coronal cells. These are not significantly aligned with the photospheric network - they are generally situated above the dark internetwork photosphere. They form compact points or corridors, often without an obvious signature in the underlying photosphere. We argue that these sink points can either be concentrations of closed field footpoints associated with minor magnetic elements in the internetwork, or concentrations of upward-aligned open field. The link between the coronal velocity and magnetic fields is strengthened by a comparison with a magnetic extrapolation, which shows several general and specific similarities; thus, the velocity maps offer a valuable additional constraint on models.
In a composite model of the weak bosons, the excited bosons, in particular the p-wave bosons, are studied. The state with the lowest mass is identified with the boson recently discovered at the Large Hadron Collider at CERN. Specific properties of the excited weak bosons are studied, in particular their decays into weak bosons and into photons.
Short-term load forecasting is a critical element of power system energy management. In recent years, probabilistic load forecasting (PLF) has gained increased attention for its ability to provide uncertainty information that helps to improve the reliability and economics of system operation. This paper proposes a two-stage probabilistic load forecasting framework that integrates the point forecast as a key feature of the probabilistic forecast. In the first stage, all related features are utilized to train a point forecast model and to obtain the feature importance. In the second stage, the probabilistic forecasting model is trained using the point forecasts together with the selected feature subsets. During testing, point forecasts are produced first and then fed into the probabilistic model, so that both point and probabilistic forecasts are obtained. Numerical results on ISO New England demand data demonstrate the effectiveness of the proposed approach for hour-ahead load forecasting, using gradient boosting regression for the point forecasting and quantile regression neural networks for the probabilistic forecasting.
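A minimal sketch of the two-stage idea, on synthetic data: a gradient boosting point forecast is trained first and its prediction is appended as a feature for the probabilistic stage. Quantile gradient boosting stands in here for the quantile regression neural networks used in the paper.

```python
# Sketch of the two-stage idea: a point forecast is produced first and then
# fed as an extra feature into quantile models. Quantile gradient boosting
# stands in for the paper's quantile regression neural networks, and the
# data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                    # e.g. lagged load, temperature
y = X[:, 0] * 3 + np.sin(X[:, 1]) + rng.normal(scale=0.5, size=500)
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

# Stage 1: point forecast model (also exposes feature importances).
point_model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("feature importances:", point_model.feature_importances_.round(2))

# Stage 2: append the point forecast as a feature and fit one model per quantile.
X_tr2 = np.column_stack([X_tr, point_model.predict(X_tr)])
X_te2 = np.column_stack([X_te, point_model.predict(X_te)])
quantile_models = {
    q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X_tr2, y_tr)
    for q in (0.1, 0.5, 0.9)
}
for q, m in quantile_models.items():
    print(f"q={q}: first prediction {m.predict(X_te2[:1])[0]:.2f}")
```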
The Feistel Boomerang Connectivity Table (FBCT) was proposed as the Feistel counterpart of the Boomerang Connectivity Table. The entries of the FBCT are in fact related to the second-order zero differential spectrum. Recently, several results on the second-order zero differential uniformity of some functions have been introduced. However, almost all of them focus on power functions, and there are only a few results on non-power functions. In this paper, we investigate the second-order zero differential uniformity of swapped inverse functions, i.e., functions obtained by swapping two points of the inverse function. We also present the second-order zero differential spectrum of the swapped inverse functions for certain cases. In particular, this paper is the first to characterize classes of non-power functions with second-order zero differential uniformity equal to 4 in even characteristic.
Let $R$ be a commutative chain ring. We use a variation of Gr\"obner bases to study the lattice of ideals of $R[x]$. Let $I$ be a proper ideal of $R[x]$. We are interested in the following two questions: When is $R[x]/I$ Frobenius? When is $R[x]/I$ Frobenius and local? We develop algorithms for answering both questions. When the nilpotency of $\text{rad}\,R$ is small, the algorithms provide explicit answers to the questions.
A real-time coding system with lookahead consists of a memoryless source, a memoryless channel, an encoder that encodes the source symbols sequentially with knowledge of future source symbols up to a fixed finite lookahead d (with or without feedback of past channel output symbols), and a decoder that sequentially reconstructs the source symbols using the channel output. The objective is to minimize the expected per-symbol distortion. For a fixed finite lookahead d>=1, we invoke the theory of controlled Markov chains to obtain an average cost optimality equation (ACOE), the solution of which, denoted by D(d), is the minimum expected per-symbol distortion. With increasing d, D(d) bridges the gap between causal encoding, d=0, where symbol-by-symbol encoding-decoding is optimal, and the infinite-lookahead case, d=\infty, where Shannon-theoretic arguments show that separation is optimal. We extend the analysis to a system with finite-state decoders, with or without noise-free feedback. For a Bernoulli source and binary symmetric channel, under Hamming loss, we compute the optimal distortion for various source and channel parameters, and thus obtain computable bounds on D(d). We also identify regions of source and channel parameters where symbol-by-symbol encoding-decoding is suboptimal. Finally, we demonstrate the wide applicability of our approach by applying it to additional coding scenarios, such as the case where the sequential decoder can take cost-constrained actions affecting the quality or availability of side information about the source.
With the decrease in system inertia, frequency security becomes an issue for power systems around the world. Energy storage systems (ESS), due to their excellent ramping capabilities, are considered a natural choice for the improvement of frequency response following major contingencies. In this manuscript, we propose a new strategy for energy storage -- frequency shaping control -- that completely eliminates the frequency Nadir, one of the main issues in frequency security, and at the same time tunes the rate of change of frequency (RoCoF) to a desired value. With the Nadir eliminated, frequency security assessment can be performed via simple algebraic calculations, as opposed to dynamic simulations for conventional control strategies. Moreover, our proposed control is also very efficient in terms of the requirements on storage peak power, requiring up to 40% less power than the conventional virtual inertia approach for the same performance.
We prove that a (globally) subanalytic p-adic function which is locally Lipschitz continuous with some constant C is piecewise (globally on each piece) Lipschitz continuous with possibly some other constant, where the pieces can be taken subanalytic. We also prove the analogous result for a subanalytic family of functions depending on p-adic parameters. The statements also hold in a semi-algebraic set-up and also in finite extensions of the field of p-adic numbers. These results are p-adic analogues of results of K. Kurdyka over the real numbers. To encompass the total disconnectedness of p-adic fields, we need to introduce new methods adapted to the p-adic situation.
We study amplify-and-forward (AF)-based two-way relaying (TWR) with multiple source pairs exchanging information through the relay. Each source has a single antenna, while the relay has multiple antennas. The optimal beamforming matrix structure that achieves the maximum signal-to-interference-plus-noise ratio (SINR) for TWR with multiple source pairs is derived. We then present two new non-zero-forcing based beamforming schemes for TWR, which take into consideration the tradeoff between preserving the desired signals and suppressing inter-pair interference between different source pairs. A joint grouping and beamforming scheme is proposed to achieve a better SINR when the total number of source pairs is large and the signal-to-noise ratio (SNR) at the relay is low.
The Unified University Inventory System (UUIS) is an inventory system created for the Imaginary University of Arctica (IUfA) to facilitate inventory management across all faculties in a single system. Team 1 elucidates the functions of the system and the characteristics of the users who have access to these functions. It shows the access restrictions to different functionalities of the system provided to users, who are the staff and students of the University. Team 1 also emphasises the steps required to ensure the security of the system and its data.
The concept of nanopublications was first proposed about six years ago, but it lacked openly available implementations. The library presented here is the first to become an official implementation of the nanopublication community. Its core features are stable, but it also contains unofficial and experimental extensions: for publishing to a decentralized server network, for defining sets of nanopublications with indexes, for informal assertions, and for digitally signing nanopublications. Most of the features of the library can also be accessed via an online validator interface.
We present an adaptation of Stein's method of normal approximation to the study of both discrete- and continuous-time dynamical systems. We obtain new correlation-decay conditions on dynamical systems for a multivariate central limit theorem augmented by a rate of convergence. We then present a scheme for checking these conditions in actual examples. The principal contribution of our paper is the method, which yields a convergence rate essentially with the same amount of work as the central limit theorem, together with a multiplicative constant that can be computed directly from the assumptions.
Event-triggered control is often argued to lower the average triggering rate compared to time-triggered control while still achieving a desired control goal, e.g., the same performance level. However, this property, often called consistency, cannot be taken for granted and can be hard to analyze in many settings. In particular, although numerous decentralized event-triggered control schemes have been proposed in the past years, their performance properties with respect to time-triggered control remain mostly unexplored. In this paper, we therefore examine the performance properties of event-triggered control (relative to time-triggered control) for a single-integrator consensus problem with a level-triggering rule. We consider the long-term average quadratic deviation from consensus as a performance measure. For this setting, we show that enriching the information the local controllers use improves the performance of the consensus algorithm but renders a previously consistent event-triggered control scheme inconsistent. In addition, we do so while deploying optimal control inputs which we derive for both information cases and all triggering schemes. With this insight, we can furthermore explain the relationship between two contrasting consistency results from the literature on decentralized event-triggered control. We support our theoretical findings with simulation results.
Time series forecasting (TSF) is one of the most important tasks in data science, given that accurate time series (TS) predictive models play a major role across a wide variety of domains including finance, transportation, health care, and power systems. Real-world utilization of machine learning (ML) typically involves (pre-)training models on collected, historical data and then applying them to unseen data points. However, in real-world applications, time series data streams are usually non-stationary, and trained ML models face, over time, the problem of data or concept drift. To address this issue, models must be periodically retrained or redesigned, which takes significant human and computational resources. Additionally, historical data may not even exist to retrain or redesign models with. As a result, it is highly desirable that models are designed and trained in an online fashion. This work presents the Online NeuroEvolution-based Neural Architecture Search (ONE-NAS) algorithm, a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks. Without any pre-training, ONE-NAS utilizes populations of RNNs that are continuously updated with new network structures and weights in response to new multivariate input data. ONE-NAS is tested on real-world, large-scale multivariate wind turbine data as well as the univariate Dow Jones Industrial Average (DJIA) dataset. Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods, including online linear regression, fixed long short-term memory (LSTM) and gated recurrent unit (GRU) models trained online, as well as state-of-the-art online ARIMA strategies.
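For context, here is a minimal sketch of one of the baselines mentioned above, online linear regression, implemented as recursive least squares with a forgetting factor so the model adapts to drift one observation at a time; the lag window and hyperparameters are illustrative choices, not taken from the paper.

```python
# Sketch of an online linear regression baseline, implemented as recursive
# least squares so the model updates with every new observation instead of
# being periodically retrained.
import numpy as np

class RecursiveLeastSquares:
    def __init__(self, n_features, lam=0.99, delta=1000.0):
        self.w = np.zeros(n_features)        # weights
        self.P = np.eye(n_features) * delta  # inverse covariance estimate
        self.lam = lam                       # forgetting factor for drift

    def update(self, x, y):
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)
        self.w += gain * (y - x @ self.w)
        self.P = (self.P - np.outer(gain, Px)) / self.lam

    def predict(self, x):
        return x @ self.w

# Stream a noisy series and forecast one step ahead from a window of lags.
rng = np.random.default_rng(2)
series = np.sin(np.arange(600) * 0.1) + rng.normal(scale=0.1, size=600)
lags, errs = 8, []
model = RecursiveLeastSquares(lags)
for t in range(lags, len(series)):
    x = series[t - lags:t]
    errs.append(abs(model.predict(x) - series[t]))
    model.update(x, series[t])
print("mean abs error, last 100 steps:", np.mean(errs[-100:]).round(3))
```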
Under what circumstances might every extension of a combinatorial structure contain more copies of another one than the original did? This property, which we call prolificity, holds universally in some cases (e.g., finite linear orders) and only trivially in others (e.g., permutations). Integer compositions, or equivalently layered permutations, provide a middle ground. In that setting, there are prolific compositions for a given pattern if and only if that pattern begins and ends with 1. For each pattern, there is an easily constructed automaton that recognises prolific compositions for that pattern. Some instances where there is a unique minimal prolific composition for a pattern are classified.
The sigma-convergence concept has so far been used to derive macroscopic models in full space dimension. In this work, we generalize it to thin heterogeneous domains, giving rise to phenomena in lower space dimensions. More precisely, we provide a new approach to the sigma-convergence method that is suitable for the study of phenomena occurring in thin heterogeneous media. This is done through a systematic study of the sigma-convergence method for thin heterogeneous domains. Assuming that the thin heterogeneous layer is made of microstructures that are distributed inside it in a deterministic way, including as special cases the periodic and almost periodic distributions, we make use of the concept of algebras with mean value to state and prove the main compactness results. As an illustration, we upscale a Darcy-Lapwood-Brinkmann micro-model for thin flow. We prove that, according to the magnitude of the permeability of the porous domain, we obtain the Darcy law in lower dimensions as the effective model. The effective models are derived through the solvability of either the local Darcy-Brinkmann problems or the local Hele-Shaw problems.
In the context of a linear model with a sparse coefficient vector, exponential weights methods have been shown to achieve oracle inequalities for prediction. We show that such methods also succeed at variable selection and estimation under the necessary identifiability condition on the design matrix, rather than the much stronger assumptions required by other methods such as the Lasso or the Dantzig Selector. The same analysis yields consistency results for Bayesian methods and BIC-type variable selection under similar conditions.
The KLOE detector at DAFNE collected about 30 pb-1 by the end of the year 2000, allowing, among other things, accurate measurements of several decay channels of the K0S meson. With data acquired in the year 2000 run, we have measured the ratio of the branching ratios of the K0S into two charged and into two neutral pions to 1.5 percent accuracy. The branching ratio of the semileptonic decay of the K0S is also measured to 5 percent accuracy, the best measurement of this BR to date.
Training at the edge utilizes continuously evolving data generated at different locations. Privacy concerns prohibit the co-location of this spatially as well as temporally distributed data, making it crucial to design training algorithms that enable efficient continual learning over decentralized private data. Decentralized learning allows serverless training with spatially distributed data. A fundamental barrier in such distributed learning is the high bandwidth cost of communicating model updates between agents. Moreover, existing works under this training paradigm are not inherently suitable for learning a temporal sequence of tasks while retaining the previously acquired knowledge. In this work, we propose CoDeC, a novel communication-efficient decentralized continual learning algorithm that addresses these challenges. We mitigate catastrophic forgetting while learning a task sequence in a decentralized learning setup by combining orthogonal gradient projection with gossip averaging across decentralized agents. Further, CoDeC includes a novel lossless communication compression scheme based on the gradient subspaces. We express layer-wise gradients as a linear combination of the basis vectors of these gradient subspaces and communicate the associated coefficients. We theoretically analyze the convergence rate of our algorithm and demonstrate through an extensive set of experiments that CoDeC successfully learns distributed continual tasks with minimal forgetting. The proposed compression scheme results in up to a 4.8x reduction in communication costs with iso-performance relative to the full communication baseline.
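The coefficient-communication idea admits a compact linear-algebra illustration: if a layer's gradient lies in a k-dimensional subspace with an orthonormal basis, transmitting the k coefficients is lossless. The sketch below uses random data and illustrative dimensions; it is not the CoDeC implementation.

```python
# Sketch of the lossless-compression idea: if a layer's gradient lies in a
# low-dimensional subspace with an orthonormal basis, an agent only needs to
# send the k basis coefficients rather than the full d-dimensional gradient.
import numpy as np

rng = np.random.default_rng(3)
d, k = 1000, 32

# Orthonormal basis of a k-dimensional gradient subspace (rows of B).
Q, _ = np.linalg.qr(rng.normal(size=(d, k)))
B = Q.T  # shape (k, d), orthonormal rows

# A gradient constructed to lie in the subspace, as assumed after projection.
g = B.T @ rng.normal(size=k)

coeffs = B @ g            # what gets communicated: k numbers instead of d
g_reconstructed = B.T @ coeffs

print("compression ratio:", d / k)
print("reconstruction error:", np.linalg.norm(g - g_reconstructed))
```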
We outline the evaluation of the cosmological constant in the framework of the standard field-theoretical treatment of vacuum energy and discuss the relation between the vacuum energy problem and the gauge-group spontaneous symmetry breaking. We suggest possible extensions of the 't Hooft-Nobbenhuis symmetry, in particular, its complexification till duality symmetry and discuss the compatible implementation on gravity. We propose to use the discrete time-reflection transform to formulate a framework in which one can eliminate the huge contributions of vacuum energy into the effective cosmological constant and suggest that the breaking of time--reflection symmetry could be responsible for a small observable value of this constant.
In this investigation of character tables of finite groups we study basic sets and associated representation theoretic data for complementary sets of conjugacy classes. For the symmetric groups we find unexpected properties of characters on restricted sets of conjugacy classes, like beautiful combinatorial determinant formulae for submatrices of the character table and Cartan matrices with respect to basic sets; we observe that similar phenomena occur for the transition matrices between power sum symmetric functions on bounded partitions and the $k$-Schur functions introduced by Lapointe and Morse. Arithmetic properties of the numbers occurring in this context are studied via generating functions.
We present ab initio two-dimensional extended Hubbard-type multiband models for EtMe_3Sb[Pd(dmit)_2]_2 and \kappa-(BEDT-TTF)_2Cu(NCS)_2, using a downfolding scheme based on the constrained random phase approximation (cRPA) and maximally-localized Wannier orbitals, together with dimensional downfolding. In the Pd(dmit)_2 salt, the antibonding state of the highest occupied molecular orbital (HOMO) and the bonding/antibonding states of the lowest unoccupied molecular orbital (LUMO) are considered as the orbital degrees of freedom, while in the \kappa-BEDT-TTF salt, the HOMO antibonding/bonding states are considered. Accordingly, a three-band model for the Pd(dmit)_2 salt and a two-band model for the \kappa-(BEDT-TTF) salt are derived. We also derive single-band models for the HOMO-antibonding state for both compounds.
The number of computers, tablets, and smartphones is increasing rapidly, which entails the ownership and use of multiple devices to perform online tasks. As people move across devices to complete these tasks, their identities become fragmented. Understanding the usage of, and transitions between, those devices is essential to develop efficient applications in a multi-device world. In this paper we present a solution for the cross-device identification of users based on semi-supervised machine learning methods, identifying which cookies belong to an individual using a device. The method proposed in this paper scored third in the ICDM 2015 Drawbridge Cross-Device Connections challenge, demonstrating its good performance.
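The abstract does not spell out the exact features or algorithm, so the following is only a generic sketch of the semi-supervised pattern it describes: candidate (device, cookie) pairs with a small labelled fraction, classified via self-training on synthetic stand-in features.

```python
# Generic semi-supervised sketch: candidate (device, cookie) pairs are
# described by handcrafted features, a few pairs are labelled, and
# self-training propagates labels to the rest. Features and data are
# synthetic stand-ins; the actual challenge features are not in the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(4)
n = 1000
X = rng.normal(size=(n, 5))                  # e.g. IP overlap, time-of-day similarity
true = (X[:, 0] + X[:, 1] > 0).astype(int)   # 1 = same user, 0 = different

# Only 10% of pairs are labelled; the rest are marked -1 (unlabelled).
y = np.full(n, -1)
labelled = rng.choice(n, size=n // 10, replace=False)
y[labelled] = true[labelled]

clf = SelfTrainingClassifier(RandomForestClassifier(), threshold=0.9)
clf.fit(X, y)
print("accuracy on all pairs:", (clf.predict(X) == true).mean().round(3))
```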
In directed graphs, we investigate the problems of finding: 1) a minimum feedback vertex set (also called the Feedback Vertex Set problem, or MFVS), 2) a feedback vertex set inducing an acyclic graph (also called the Vertex 2-Coloring without Monochromatic Cycles problem, or Acyclic FVS) and 3) a minimum feedback vertex set inducing an acyclic graph (Acyclic MFVS). We show that these problems are strongly related to (variants of) Monotone 3-SAT and Monotone NAE 3-SAT, where monotone means that all literals are in positive form. As a consequence, we deduce several NP-completeness results on restricted versions of these problems. In particular, we define the 2-Choice version of an optimization problem to be its restriction where the optimum value is known to be either D or D+1 for some integer D, and the problem is reduced to decide which of D or D+1 is the optimum value. We show that the 2-Choice versions of MFVS, Acyclic MFVS, Min Ones Monotone 3-SAT and Min Ones Monotone NAE 3-SAT are NP-complete. The two latter problems are the variants of Monotone 3-SAT and respectively Monotone NAE 3-SAT requiring that the truth assignment minimize the number of variables set to true. Finally, we propose two classes of directed graphs for which Acyclic FVS is polynomially solvable, namely flow reducible graphs (for which MFVS is already known to be polynomially solvable) and C1P-digraphs (defined by an adjacency matrix with the Consecutive Ones Property).
A volcano plot displays an unstandardized signal (e.g. log-fold-change) against a noise-adjusted/standardized signal (e.g. the t-statistic or -log10(p-value) from the t test). We review the basic and interactive uses of the volcano plot, and its crucial role in understanding the regularized t-statistic. The joint filtering gene selection criterion based on regularized statistics has a curved discriminant line in the volcano plot, as compared to the two perpendicular lines of the "double filtering" criterion. This review attempts to provide a unifying framework for discussions on alternative measures of differential expression, improved methods for estimating variance, and the visual display of a microarray analysis result. We also discuss the possibility of applying volcano plots to other fields beyond microarrays.
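A minimal sketch of the plot itself, on synthetic two-group expression data, with the two perpendicular "double filtering" lines mentioned above; all cutoffs and sample sizes are illustrative.

```python
# Minimal volcano plot on synthetic two-group "expression" data:
# unstandardized signal (log-fold-change) on x, noise-adjusted signal
# (-log10 p from the t test) on y.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
n_genes, n_per_group = 2000, 6
base = rng.normal(size=(n_genes, 2 * n_per_group))
base[:100, n_per_group:] += 1.5        # 100 genes truly up-regulated

group_a, group_b = base[:, :n_per_group], base[:, n_per_group:]
log_fc = group_b.mean(axis=1) - group_a.mean(axis=1)   # data already log-scale
t_stat, p_val = stats.ttest_ind(group_b, group_a, axis=1)

plt.scatter(log_fc, -np.log10(p_val), s=4, alpha=0.4)
plt.axvline(1, ls="--"); plt.axvline(-1, ls="--")      # fold-change filter
plt.axhline(-np.log10(0.01), ls="--")                  # p-value filter
plt.xlabel("log fold change"); plt.ylabel("-log10(p)")
plt.title("Volcano plot (double-filtering lines shown)")
plt.show()
```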
This paper considers a hierarchical caching system where a server connects with multiple mirror sites, each connecting with a distinct set of users, and both the mirror sites and the users are equipped with caching memories. Although there already exist works studying this setup and proposing coded caching schemes to reduce transmission loads, two main problems remain to be addressed: 1) the optimal communication load under uncoded placement for the first hop, denoted by $R_1$, is still unknown; 2) the previous schemes are based on Maddah-Ali and Niesen's data placement and delivery, which requires a high subpacketization level, and how to achieve a good tradeoff between transmission loads and subpacketization level for the hierarchical caching system is unclear. In this paper, we aim to address these two problems. We first propose a new combinatorial structure named the hierarchical placement delivery array (HPDA), which characterizes the data placement and delivery for any hierarchical caching system. Then we construct two classes of HPDAs, where the first class leads to a scheme achieving the optimal $R_1$ for some cases, and the second class requires a smaller subpacketization level at the cost of slightly increased transmission loads.
The post-recombination streaming of baryons through dark matter keeps baryons out of low mass (<10^6 solar masses) halos coherently on scales of a few comoving Mpc. It has been argued that this will have a large impact on the 21-cm signal before and after reionization, as it raises the minimum mass required to form ionizing sources. Using a semi-numerical code, we show that the impact of the baryon streaming effect on the 21-cm signal during reionization (redshifts z approximately 7-20) depends strongly on the cooling scenario assumed for star formation, and the corresponding virial temperature or mass at which stars form. For the canonical case of atomic hydrogen cooling at 10^4 Kelvin, the minimum mass for star formation is well above the mass of halos that are affected by the baryon streaming and there are no major changes to existing predictions. For the case of molecular hydrogen cooling, we find that reionization is delayed by a change in redshift of approximately 2 and that more relative power is found in large modes at a given ionization fraction. However, the delay in reionization is degenerate with astrophysical assumptions, such as the production rate of UV photons by stars.
Particle hopping is a common feature in heterogeneous media. We explore such motion using the widely applicable formalism of the continuous time random walk and focus on the statistics of rare events. Numerous experiments have shown that the decay of the positional probability density function P(X,t), describing the statistics of rare events, exhibits universal exponential decay. We show that such universality ceases to exist once the threshold of an exponential distribution of particle hops is crossed. While the mean hop length does not diverge and attains a finite value, the transition itself is critical. The exponential universality of rare events arises from the contribution of all the different states occupied during the process. Once the reported threshold is crossed, a single large event determines the statistics. In this realm, the big jump principle replaces the large deviation principle, and the spatial part of the decay is unaffected by the temporal properties of rare events.
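A toy simulation, under illustrative distributional choices, contrasts the displacement tails of a CTRW with exponential jump lengths against one with heavier (Pareto) jumps, where a single big jump can dominate rare events; this is a sketch, not the analysis of the paper.

```python
# Toy CTRW sampler to look at the tail of P(X, t): exponentially distributed
# jump lengths versus a heavier (Pareto) jump-length distribution.
import numpy as np

rng = np.random.default_rng(6)

def ctrw_displacement(t_max, jump_sampler, n_walkers=100_000):
    X = np.zeros(n_walkers)
    t = np.zeros(n_walkers)
    active = np.ones(n_walkers, dtype=bool)
    while active.any():
        t[active] += rng.exponential(1.0, size=active.sum())  # waiting times
        still = active & (t <= t_max)    # jumps completed before t_max
        signs = rng.choice([-1.0, 1.0], size=still.sum())
        X[still] += signs * jump_sampler(still.sum())
        active = still
    return X

t_max = 20.0
X_exp = ctrw_displacement(t_max, lambda n: rng.exponential(1.0, size=n))
X_fat = ctrw_displacement(t_max, lambda n: rng.pareto(3.0, size=n) + 1.0)

for name, X in (("exponential jumps", X_exp), ("Pareto jumps", X_fat)):
    q = np.quantile(np.abs(X), [0.99, 0.9999])
    print(f"{name}: 99% |X| = {q[0]:.1f}, 99.99% |X| = {q[1]:.1f}")
```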
Freezing of water is arguably one of the most common phase transitions on Earth and almost always happens heterogeneously. Despite its importance, we lack a fundamental understanding of what makes substrates efficient ice nucleators. Here we address this by computing the ice nucleation (IN) ability of numerous model hydroxylated substrates with diverse surface hydroxyl (OH) group arrangements. Overall, for the substrates considered, we find that neither the symmetry of the OH patterns nor the similarity between a substrate and ice correlate well with the IN ability. Instead, we find that the OH density and the substrate-water interaction strength are useful descriptors of a material's IN ability. This insight allows the rationalization of ice nucleation ability across a wide range of materials, and can aid the search and design of novel potent ice nucleators in the future.
We give a self-contained introduction to the theory of secondary polytopes and geometric bistellar flips in triangulations of polytopes and point sets, as well as a review of some of the known results and connections to algebraic geometry, topological combinatorics, and other areas. As a new result, we announce the construction of a point set in general position with a disconnected space of triangulations. This shows, for the first time, that the poset of strict polyhedral subdivisions of a point set is not always connected.
We study the product formula $(fg)(A) = f(A)g(A)$ in the framework of (unbounded) functional calculus of sectorial operators $A$. We give an abstract result, and, as corollaries, we obtain new product formulas for the holomorphic functional calculus, an extended Stieltjes functional calculus and an extended Hille-Phillips functional calculus. Our results generalise previous work of Hirsch, Martinez and Sanz, and Schilling.
How producers of public goods persist in microbial communities is a major question in evolutionary biology. Cooperation is evolutionarily unstable, since cheating strains can reproduce quicker and take over. Spatial structure has been shown to be a robust mechanism for the evolution of cooperation. Here we study how spatial assortment might emerge from native dynamics and show that fluid flow shear promotes cooperative behavior. Social structures arise naturally from our advection-diffusion-reaction model as self-reproducing Turing patterns. We computationally study the effects of fluid advection on these patterns as a mechanism to enable or enhance social behavior. Our central finding is that flow shear enables and promotes social behavior in microbes by increasing the group fragmentation rate and thereby limiting the spread of cheating strains. Regions of the flow domain with higher shear admit high cooperativity and large population density, whereas low shear regions are devoid of life due to opportunistic mutations.
A class of solutions of the gravitational field equations describing vacuum spacetimes outside rotating cylindrical sources is presented. A subclass of these solutions corresponds to the exterior gravitational fields of rotating cylindrical systems that emit gravitational radiation. The properties of these rotating gravitational wave spacetimes are investigated. In particular, we discuss the energy density of these waves using the gravitational stress-energy tensor.
Automatic Speaker Verification (ASV) is the process of identifying a person based on the voice presented to a system. Different synthetic approaches allow spoofing to deceive ASV systems (ASVs), whether using techniques to imitate a voice or to reconstruct its features. Attackers try to defeat ASVs using four general techniques: impersonation, speech synthesis, voice conversion, and replay. The last technique is considered a common and high-potential tool for spoofing purposes, since replay attacks are more accessible and require no technical knowledge from adversaries. In this study, we introduce a novel replay spoofing countermeasure for ASVs. Accordingly, we used Constant Q Cepstral Coefficient (CQCC) features fed into an autoencoder to attain more informative features and to capture the noise information of spoofed utterances for discrimination purposes. Finally, different configurations of the Siamese network were used for the first time in this context for classification. Experiments performed on the ASVspoof 2019 challenge dataset, using the Equal Error Rate (EER) and the Tandem Detection Cost Function (t-DCF) as evaluation metrics, show that the proposed system improved over the baseline by 10.73% in terms of EER and by 0.2344 in terms of t-DCF.
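As a hedged illustration of the classifier stage, the sketch below implements a generic Siamese network with a contrastive loss in PyTorch; the inputs are random stand-ins for the autoencoder-refined CQCC features, and no claim is made about the exact architecture or hyperparameters of the paper.

```python
# Generic Siamese-network sketch with a contrastive loss, standing in for the
# classifier stage described above. Inputs are random stand-ins for the
# autoencoder-refined CQCC features.
import torch
import torch.nn as nn

class Siamese(nn.Module):
    def __init__(self, in_dim=90, emb_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim)
        )

    def forward(self, x1, x2):
        # Both branches share the same encoder weights.
        return self.encoder(x1), self.encoder(x2)

def contrastive_loss(e1, e2, same, margin=1.0):
    """same = 1 for pairs of the same class (both genuine or both spoofed)."""
    d = torch.norm(e1 - e2, dim=1)
    return (same * d.pow(2) + (1 - same) * (margin - d).clamp(min=0).pow(2)).mean()

model = Siamese()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    x1, x2 = torch.randn(64, 90), torch.randn(64, 90)  # feature pairs
    same = torch.randint(0, 2, (64,)).float()          # pair labels
    loss = contrastive_loss(*model(x1, x2), same)
    opt.zero_grad(); loss.backward(); opt.step()
print("final loss:", loss.item())
```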
In comparison to conventional traffic designs, shared spaces promote a more pleasant urban environment with slower motorized movement, smoother traffic, and less congestion. In the foreseeable future, shared spaces will be populated with a mixture of autonomous vehicles (AVs) and vulnerable road users (VRUs) such as pedestrians and cyclists. However, a driverless AV lacks a way to communicate with VRUs when they have to reach agreement in a negotiation, which brings new challenges to the safety and smoothness of traffic. To find a feasible solution to integrating AVs seamlessly into shared-space traffic, we first identified the issues that existing shared-space designs have not considered regarding the role of AVs. An online questionnaire was then used to ask participants how they would like the driver of a manually driven vehicle to communicate with VRUs in a shared space. We found that when the driver wanted to give suggestions to the VRUs in a negotiation, participants considered communication via the driver's body behaviors necessary. Moreover, when the driver conveyed information about her/his intentions and cautions to the VRUs, participants selected different communication methods depending on their transport modes (as a driver, pedestrian, or cyclist). These results suggest that novel eHMIs may be useful for AV-VRU communication when the original drivers are not present. Hence, a potential eHMI design concept is proposed for different VRUs to meet their various expectations. Finally, we further discuss the effects of eHMIs on improving sociality in shared spaces and their implications for autonomous driving systems.
In this paper, we study the theory of geodesics with respect to the Tanaka-Webster connection in a pseudo-Hermitian manifold, aiming to generalize some comparison results in Riemannian geometry to the case of pseudo-Hermitian geometry. Some Hopf-Rinow type, Cartan-Hadamard type and Bonnet-Myers type results are established.
Extremely large aperture arrays (ELAAs) and reconfigurable intelligent surfaces (RISs) are candidate enablers for realizing the connectivity goals of sixth-generation (6G) wireless networks. For instance, ELAAs can provide orders-of-magnitude higher area throughput than what massive multiple-input multiple-output (MIMO) can deliver through spatial multiplexing, while RISs can improve the propagation conditions over wireless channels; however, a passively reflecting RIS must be large to be effective, and an active RIS with amplifiers can deal with this issue. In this paper, we study the distortion created by nonlinear amplifiers in both ELAAs and active RISs. We analytically obtain the angular directions and depth of the nonlinear distortion in both near- and far-field channels. The results are demonstrated numerically, and we conclude that nonlinearities can create both in-band and out-of-band distortion that is beamformed in entirely new directions and at new distances from the transmitter.
Production-destruction systems (PDS) of ordinary differential equations (ODEs) are used to describe physical and biological reactions in nature. The considered quantities are subject to natural laws and therefore preserve positivity and conservation of mass at the analytical level. In order to maintain these properties at the discrete level, the so-called modified Patankar-Runge-Kutta (MPRK) schemes are often used in this context. However, to our knowledge, the family of MPRK schemes has only been developed up to third order of accuracy. In this work, we propose a method for solving PDS problems that uses the Deferred Correction (DeC) process as the time integration method. Applying the modified Patankar approach to the DeC scheme results in provably conservative and positivity-preserving methods. Furthermore, we demonstrate that these modified Patankar DeC schemes can be constructed up to arbitrarily high order. Finally, we validate our theoretical analysis through numerical simulations.
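For concreteness, here is the classical first-order modified Patankar-Euler step, the building block that the Patankar-DeC construction generalizes to arbitrary order; it conserves mass and preserves positivity via a single linear solve. The two-species linear exchange system used below is an illustrative example, not taken from the paper.

```python
# First-order modified Patankar-Euler step for a conservative
# production-destruction system dy_i/dt = sum_j (p_ij(y) - d_ij(y)) with
# d_ij = p_ji.
import numpy as np

def mpe_step(y, p, dt):
    """One modified Patankar-Euler step; p(y) returns the production matrix."""
    P = p(y)                      # P[i, j]: production of i fed by j
    D = P.T                       # conservative system: destruction of i toward j
    n = len(y)
    M = np.eye(n)
    for i in range(n):
        M[i, i] += dt * D[i].sum() / y[i]          # weighted destruction terms
        for j in range(n):
            if j != i:
                M[i, j] -= dt * P[i, j] / y[j]     # weighted production terms
    return np.linalg.solve(M, y)  # M is an M-matrix, so the result stays positive

# Linear exchange between two species: dy1/dt = -5 y1 + y2, dy2/dt = 5 y1 - y2.
p = lambda y: np.array([[0.0, y[1]], [5.0 * y[0], 0.0]])
y, dt = np.array([0.9, 0.1]), 0.25
for _ in range(40):
    y = mpe_step(y, p, dt)
print("state:", y.round(4), "| mass conserved:", np.isclose(y.sum(), 1.0))
```

Because the column sums of the system matrix M equal one, total mass is conserved exactly at every step, and the large step size dt = 0.25 still yields positive states, which is the point of the Patankar weighting.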
For families of continuous plurisubharmonic functions we show that, in a local sense, separately bounded above implies bounded above.