Composites of strontium-doped lanthanum manganite (LSMO) with zinc oxide (ZnO) are candidate materials for energy harvesting by virtue of their magnetic and piezoelectric characteristics. They could be used to harvest energy from stray sources, such as the vibrations and electromagnetic noise from transformers and compressors within electrical grid power stations, to power small diagnostic sensors, among other applications. The LSMO/ZnO nanocomposites were made by (i) milling the two bulk powders and (ii) a wet chemical process that resulted in core-shell structures. The electrical, piezoelectric, and magnetoelectric properties showed a strong dependence on the fabrication method. Growth of ZnO nanopillars on the surface of the particulate LSMO core appears to have improved the piezoelectric properties. Moreover, the chemical bath deposition process can be easily modified to incorporate dopants to augment these properties further.
We introduce a concept that uses detuned arm cavities to increase the shot-noise-limited sensitivity of LIGO without increasing the light power inside the arm cavities. Numerical simulations show an increased sensitivity between 125 and 400 Hz, with a maximal improvement of about 80% around 225 Hz, while the sensitivity above 400 Hz is decreased. Furthermore, our concept is found to give a sensitivity similar to that of a conventional RSE configuration with a Signal-Recycling mirror of moderate reflectivity. In the near future, detuned arm cavities might be a beneficial alternative to RSE, due to the potentially less hardware-intensive implementation of the proposed concept.
In this paper we study the Diophantine problem in Chevalley groups $G_\pi (\Phi,R)$, where $\Phi$ is an indecomposable root system of rank $> 1$ and $R$ is an arbitrary commutative ring with $1$. We establish a variant of the double centralizer theorem for the elementary unipotents $x_\alpha(1)$. This theorem is valid for arbitrary commutative rings with $1$, and it is the key step in showing that every one-parameter subgroup $X_\alpha$, $\alpha \in \Phi$, is Diophantine in $G$. We then prove that the Diophantine problem in $G_\pi (\Phi,R)$ is polynomial time equivalent (more precisely, Karp equivalent) to the Diophantine problem in $R$. This fact gives rise to a number of model-theoretic corollaries for specific types of rings.
Identity switching remains one of the main difficulties Multiple Object Tracking (MOT) algorithms have to deal with. Many state-of-the-art approaches now use sequence models to solve this problem, but their training can be affected by biases that decrease their efficiency. In this paper, we introduce a new training procedure that confronts the algorithm with its own mistakes while explicitly attempting to minimize the number of switches, which results in better training. We propose an iterative scheme for building a rich training set and using it to learn a scoring function that is an explicit proxy for the target tracking metric. Whether using only simple geometric features or more sophisticated ones that also take appearance into account, our approach outperforms the state-of-the-art on several MOT benchmarks.
Perception, which involves multi-object detection and tracking, and trajectory prediction are two major tasks of autonomous driving. However, they are currently mostly studied separately, so most trajectory prediction modules are developed on ground truth trajectories, without accounting for the fact that trajectories extracted from the detection and tracking modules in real-world scenarios are noisy. These noisy trajectories can have a significant impact on the performance of the trajectory predictor and can lead to serious prediction errors. In this paper, we build an end-to-end framework for detection, tracking, and trajectory prediction called ODTP (Online Detection, Tracking and Prediction). It adopts the state-of-the-art online multi-object tracking model, QD-3DT, for perception and trains the trajectory predictor, DCENet++, directly on the detection results rather than relying purely on ground truth trajectories. We evaluate the performance of ODTP on the widely used nuScenes dataset for autonomous driving. Extensive experiments show that ODTP achieves high-performance end-to-end trajectory prediction. DCENet++, with the enhanced dynamic maps, predicts more accurate trajectories than its base model. It is also more robust than other generative and deterministic trajectory prediction models trained on noisy detection results.
We show that existing low energy experiments, searching for the breaking of local Lorentz invariance, set bounds upon string theory inspired quantum gravity models that induce corrections to the propagation of fields. In the D-particle recoil model we find M > 1.2 x 10^5 M_P and v < 2 x 10^{-27}c for the mass and recoil speed of the D-particle, respectively. These bounds are ~10^8 times stronger than the latest astrophysical bounds. These results indicate that the stringy scenario for modified dispersion relations is as vulnerable to these types of tests as the loop quantum gravity schemes.
We present an elaboration and application of the Spline Upwind (SU) stabilization method, designed in the space--time Isogeometric Analysis framework, in order to make this stabilization as suitable as possible for cardiac electrophysiology. Our aim is to propose a formulation that is as simple and efficient as possible, effective in preventing the spurious oscillations present in the plain Galerkin method, and reasonable in terms of computational cost. We validate the method's capability with numerical experiments, focusing on accuracy and computational aspects.
We present optical images of 50 nearby narrow-line Seyfert 1 galaxies (NLS1s), covering all the NLS1s at z<0.0666 and $\delta \ge -25^{\circ}$ known as of 2001. Among the 50 NLS1s, 40 images were newly obtained by our observations and 10 images were taken from archival data. Motivated by the hypothesis that NLS1s are in an early phase of super-massive black hole (BH) evolution, we present a study of NLS1 host galaxy morphology to examine the trigger mechanism(s) of active galactic nuclei (AGNs) by looking at this early phase. With these images, we performed morphological classification by eye inspection and by a quantitative method, and found a high bar frequency among the NLS1s in the optical band; the bar frequency is $85 \pm 7\%$ among disk galaxies ($64$--$71\%$ in the total sample), which is higher than that (40--70%) of broad-line Seyfert 1 galaxies (BLS1s) and normal disk galaxies, though the significance is marginal. Our results confirm the claim by Crenshaw et al. (2003), who made a similar analysis for 19 NLS1s. The frequency is comparable to that of HII/starburst galaxies. We also examined the bar frequency against the width of the broad H$\beta$ emission line, the Eddington ratio, and the black hole mass, but no clear trend is seen. Possible implications, such as an evolutionary sequence from NLS1s to BLS1s, are discussed briefly.
We perform a general chiral symmetry and unitarity based analysis of the local process of fermion-antifermion creation from the vacuum by a high-energy photon, as well as an explicit partial wave analysis of the vector current in QED and QCD. It turns out that such a local process necessarily proceeds via a certain superposition of the $S$- and $D$-wave contributions. These constraints from chiral symmetry and unitarity are then confronted with the well-known theoretical and experimental results on $e^+e^-\to\gamma\to e^+e^-$, $e^+e^-\to\gamma\to \mu^+\mu^-$, and $e^+e^-\to\gamma\to q\bar{q}$ in the ultrarelativistic limit. It is shown that these well-known results are consistent with the $S+D$-wave structure of the vertex and inconsistent with a pure $S$-wave interpretation of the vertex. Consequently, a free quark loop in the $1^{--}$ channel, representing the leading term in the Operator Product Expansion, contains both $S$-wave and $D$-wave contributions. This fact rules out the possibility that there is only one radial trajectory for the $\rho$-mesons with a fixed $S$-wave content. It also implies that all holographic models that assume a pure $S$-wave content of the $\rho$-meson must fail to satisfy the matching conditions at the ultraviolet border $z=0$.
In this article we give a purely noncommutative criterion for the characterization of the two-state normal distribution. We prove that families of two-state normal distributions can be described by relations which are similar to the conditional expectation in free probability, but have no classical analogue. We also show a generalization of the formula of Bozejko, Leinert and Speicher relating moments and noncommutative cumulants.
We revisit the classical population genetics model of a population evolving under multiplicative selection, mutation, and drift. The number of beneficial alleles in a multi-locus system can be considered a trait under exponential selection. Equations of motion are derived for the cumulants of the trait distribution in the diffusion limit and under the assumption of linkage equilibrium. Because of the additive nature of cumulants, this reduces to the problem of determining equations of motion for the expected allele distribution cumulants at each locus. The cumulant equations form an infinite-dimensional linear system, and in an appendix authored by Adam Prugel-Bennett a closed-form expression for these equations is provided. We derive approximate solutions which are shown to describe the dynamics well for a broad range of parameters. In particular, we introduce two approximate analytical solutions: (1) Perturbation theory is used to solve the dynamics for weak selection and arbitrary mutation rate. The resulting expansion for the system's eigenvalues reduces to the known diffusion theory results for the limiting cases with either mutation or selection absent. (2) For low mutation rates we observe a separation of time-scales between the slowest mode and the rest, which allows us to develop an approximate analytical solution for the dominant slow mode. The solution is consistent with the perturbation theory result and provides a good approximation for much stronger selection intensities.
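A minimal simulation sketch of the underlying model (our illustration; the paper works with the diffusion-limit cumulant equations rather than direct simulation): replicate single-locus Wright-Fisher dynamics under multiplicative selection, symmetric mutation, and binomial drift, then estimate the leading allele-frequency cumulants from the replicates. Per-locus cumulants add across loci for the multi-locus trait. All parameter values are illustrative.

```python
import numpy as np
from scipy.stats import kstat  # unbiased cumulant (k-statistic) estimators

def wright_fisher(N, s, u, generations, reps, rng):
    """Replicate Wright-Fisher runs at one locus: multiplicative
    selection s, symmetric mutation rate u, drift via binomial sampling."""
    p = np.full(reps, 0.5)
    for _ in range(generations):
        p_sel = p * (1 + s) / (1 + s * p)          # selection step
        p_mut = p_sel * (1 - u) + (1 - p_sel) * u  # mutation step
        p = rng.binomial(2 * N, p_mut) / (2 * N)   # drift step
    return p

rng = np.random.default_rng(0)
p = wright_fisher(N=500, s=0.01, u=1e-3, generations=200, reps=5000, rng=rng)
# First four cumulants of the allele-frequency distribution; by
# additivity, per-locus cumulants sum across loci for the trait.
print([kstat(p, n) for n in (1, 2, 3, 4)])
```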
This paper focuses on natural dualities for varieties of bilattice-based algebras. Such varieties have been widely studied as semantic models in situations where information is incomplete or inconsistent. The most popular tool for studying bilattice-based algebras is product representation. The authors recently set up a widely applicable algebraic framework which enables product representations over a base variety to be derived in a uniform and categorical manner. By combining this methodology with natural duality theory, we demonstrate how to build a natural duality for any bilattice-based variety which has a suitable product representation over a dualisable base variety. This procedure allows us to systematically present economical natural dualities for many bilattice-based varieties, for most of which no dual representation has previously been given. Among our results we highlight the one for bilattices with a generalised conflation operation (not assumed to be an involution or to commute with negation). Here both the associated product representation and the duality are new. Finally we outline analogous procedures for pre-bilattice-based algebras (where negation is absent).
To accomplish complex swarm robotic missions in the real world, one needs to plan and execute a combination of single-robot behaviors, group primitives such as task allocation, path planning, and formation control, and mission-specific objectives such as target search and group coverage. Most such missions are designed manually by teams of robotics experts. Recent work in automated approaches to learning swarm behavior has been limited to individual primitives, with sparse work on learning complete missions. This paper presents a systematic approach to learning tactical, mission-specific policies that compose primitives in a swarm to accomplish the mission efficiently, using neural networks with special input and output encoding. To learn swarm tactics in an adversarial environment, we employ a combination of 1) map-to-graph abstraction, 2) input/output encoding via Pareto filtering of points of interest and clustering of robots, and 3) learning via neuroevolution and policy-gradient approaches. We show that this combination is critical to providing tractable learning, especially given the computational cost of simulating swarm missions of this scale and complexity. Successful mission completion outcomes are demonstrated with up to 60 robots. In addition, a close match between the performance statistics in training and testing scenarios shows the potential generalizability of the proposed framework.
We show that the spectral action, when perturbed by a gauge potential, can be written as a series of Chern--Simons actions and Yang--Mills actions of all orders. In the odd orders, generalized Chern--Simons forms are integrated against an odd $(b,B)$-cocycle, whereas, in the even orders, powers of the curvature are integrated against $(b,B)$-cocycles that are Hochschild cocycles as well. In both cases, the Hochschild cochains are derived from the Taylor series expansion of the spectral action Tr$(f(D+V))$ in powers of $V=\pi_D(A)$, but unlike the Taylor expansion we expand in increasing order of the forms in $A$. This extends [Connes--Chamseddine 2006], which computes only the scale-invariant part of the spectral action, works in dimension at most 4, and assumes the vanishing tadpole hypothesis. In our situation, we obtain a truly infinite odd $(b,B)$-cocycle. The analysis involved draws from recent results in multiple operator integration, which also allows us to give conditions under which this cocycle is entire, and under which our expansion is absolutely convergent. As a consequence of our expansion and of the gauge invariance of the spectral action, we show that the odd $(b,B)$-cocycle pairs trivially with $K_1$.
We show, in the context of quantum combinatorial optimization, or quantum annealing, how the nonlinear Schr\"odinger-Langevin-Kostin equation can dynamically drive the system toward its ground state. We illustrate, moreover, how a frictional force of Kostin type can prevent the appearance of genuinely quantum problems such as Bloch oscillations and Anderson localization which would hinder an exhaustive search.
Using the stabilized jellium model with self-compression, we have calculated the dissociation energies and the barrier heights for the binary fragmentation of charged silver clusters. At each step of the calculation, we have used the relaxed-state sizes and energies of the clusters. The results for the doubly charged Ag clusters predict a critical size at which evaporation dominates over fission, in good agreement with experiment. Comparing the dissociation energies and the fission barrier heights with the experimental ones, we conclude that in the experiments the fragmentation occurs before the full structural relaxation expected after the ionization of the cluster. For the decays of Ag$_N^{4+}$ clusters, the results predict that charge-symmetric fission processes are dominant for smaller clusters, while charge-asymmetric fission processes become dominant for sufficiently large clusters.
The issue of non-Gaussianity is not only relevant to distinguishing between theories of the origin of primordial fluctuations, but is also crucial for the determination of cosmological parameters within the inflationary paradigm. We present an advanced method for testing non-Gaussianity of the whole-sky CMB anisotropies. This method is based on Kuiper's statistic, used to probe two-dimensional uniformity on a periodic mapping square associating phases: the return mapping of phases of the derived CMB (similar to an auto-correlation) and the cross correlations between phases of the derived CMB and the foregrounds. Since phases reflect morphology, detection of cross correlation of phases signifies contamination of the derived CMB map by foreground signals. The advantage of this method is that one can cross-check the auto- and cross-correlations of phases of the derived CMB and foregrounds, and mark off those multipoles in which the non-Gaussianity results from foreground contamination. We apply this statistic to the signals derived from the 1-year WMAP data. The auto-correlations of phases from the ILC map show significance above the 95% CL against the random-phase hypothesis for 17 spherical harmonic multipoles, among which some have pronounced cross correlations with the foreground maps. We conclude that most of the non-Gaussianity found in the derived CMB maps is from foreground contamination, except, among others, l=6. With this method we are better equipped to approach the issue of non-Gaussianity of primordial origin for the upcoming PLANCK mission.
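A minimal sketch (our illustration, not the paper's code) of the one-dimensional ingredient of the test: Kuiper's V statistic for uniformity of phases on a circle. Unlike the Kolmogorov-Smirnov statistic, V is invariant under cyclic shifts, which suits periodic quantities such as Fourier phases; the method above applies the same idea to two-dimensional uniformity on the periodic mapping square.

```python
import numpy as np

def kuiper_statistic(samples):
    """Kuiper's V = D+ + D- against the uniform distribution on [0, 1)."""
    u = np.sort(np.mod(samples, 1.0))
    n = len(u)
    i = np.arange(1, n + 1)
    d_plus = np.max(i / n - u)          # largest excursion above the CDF
    d_minus = np.max(u - (i - 1) / n)   # largest excursion below the CDF
    return d_plus + d_minus

# Phases mapped to [0, 1), e.g. phi_lm / (2 pi) for harmonic coefficients.
rng = np.random.default_rng(0)
phases = rng.uniform(0.0, 1.0, size=500)   # the random-phase hypothesis
print(kuiper_statistic(phases))
```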
The spiking activity of neurons engaged in learning and performing a task shows complex spatiotemporal dynamics. While the output of recurrent network models can learn to perform various tasks, the possible range of recurrent dynamics that emerge after learning remains unknown. Here we show that modifying the recurrent connectivity with a recursive least squares algorithm provides sufficient flexibility for the synaptic and spiking rate dynamics of spiking networks to produce a wide range of spatiotemporal activity. We apply the training method to learn arbitrary firing patterns, stabilize the irregular spiking activity of a balanced network, and reproduce the heterogeneous spiking rate patterns of cortical neurons engaged in motor planning and movement. We identify sufficient conditions for successful learning, characterize two types of learning errors, and assess the network capacity. Our findings show that synaptically coupled recurrent spiking networks possess a vast computational capability that can support the diverse activity patterns in the brain.
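A minimal sketch of a recursive least squares update of the kind referred to above (a FORCE-style rule, under our reading of the abstract; the paper's precise update, synaptic filtering, and spiking dynamics are not reproduced here): the incoming weights of each neuron are adjusted so that its recurrent synaptic drive tracks a target activity pattern.

```python
import numpy as np

class RLSTrainer:
    """Recursive least squares for one neuron's incoming recurrent weights.

    `r` is the vector of filtered presynaptic activities and `target`
    the desired synaptic drive at the current time step.
    """
    def __init__(self, n_pre, alpha=1.0):
        self.P = np.eye(n_pre) / alpha   # running inverse correlation estimate
        self.w = np.zeros(n_pre)         # incoming recurrent weights

    def step(self, r, target):
        Pr = self.P @ r
        k = Pr / (1.0 + r @ Pr)          # gain vector
        err = self.w @ r - target        # error before the update
        self.w -= err * k                # error-driven weight change
        self.P -= np.outer(k, Pr)        # rank-1 update of P
        return err

rng = np.random.default_rng(0)
trainer = RLSTrainer(n_pre=100)
for t in range(1000):
    r = rng.standard_normal(100)         # stand-in for filtered spike trains
    trainer.step(r, np.sin(0.01 * t))    # track an arbitrary target drive
```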
We compute the vacuum metric generated by a generic rotating object in arbitrary dimensions up to third post-Minkowskian order by computing the classical contribution of scattering amplitudes describing graviton emission by massive spin-1 particles up to two loops. The solution depends on the mass, the angular momenta, and on up to two parameters related to generic quadrupole moments. In $D=4$ spacetime dimensions, we recover the vacuum Hartle-Thorne solution describing a generic spinning object to second order in the angular momentum, of which the Kerr metric is a particular case obtained for a specific mass quadrupole moment dictated by the uniqueness theorem. At the level of the effective action, the case of minimal couplings corresponds to the Kerr black hole, while any other mass quadrupole moment requires non-minimal couplings. In $D>4$, the absence of black-hole uniqueness theorems implies that there are multiple spinning black hole solutions with different topology. Using scattering amplitudes, we find a generic solution depending on the mass, the angular momenta, the mass quadrupole moment, and a new stress quadrupole moment which does not exist in $D=4$. As special cases, we recover the Myers-Perry and the single-angular-momentum black ring solutions, to third and first post-Minkowskian order, respectively. Interestingly, at variance with the four-dimensional case, none of these solutions corresponds to minimal coupling in the effective action. This shows that, from the point of view of scattering amplitudes, black holes are the "simplest" General Relativity vacuum solutions only in $D=4$.
In this paper we study the family of cyclic codes whose minimum distance reaches the maximum of their BCH bounds. We also show a way to construct cyclic codes with that property by computing certain divisors of a polynomial of the form X^n-1. We apply our results to the study of those BCH codes C, with designed distance delta, that have minimum distance d(C) = delta. Finally, we present some examples of new binary BCH codes satisfying that condition. To do this, we make use of two related tools: the discrete Fourier transform and the notion of apparent distance of a code, originally defined for multivariate abelian codes.
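As a toy illustration of the binary case (our example, not a construction from the paper): a cyclic code of length n is generated by a divisor g(X) of X^n - 1 over GF(2), and its minimum distance can meet the designed distance, as for the [7, 4] Hamming code, a BCH code with d(C) = delta = 3.

```python
def gf2_mul(a, b):
    """Multiply polynomials over GF(2); coefficients packed in an int."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        a <<= 1
        b >>= 1
    return res

def gf2_mod(a, m):
    """Remainder of a modulo m over GF(2)."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

# Over GF(2): X^7 - 1 = (X + 1)(X^3 + X + 1)(X^3 + X^2 + 1).
n = 7
x_n_1 = (1 << n) ^ 1                 # X^7 + 1 (= X^7 - 1 over GF(2))
g = 0b1011                           # g(X) = X^3 + X + 1 divides X^7 - 1
assert gf2_mod(x_n_1, g) == 0
# g generates the [7, 4] Hamming code; its minimum weight equals the
# designed distance delta = 3.
codewords = [gf2_mul(m, g) for m in range(1 << (n - g.bit_length() + 1))]
print(min(bin(c).count("1") for c in codewords if c))   # 3
```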
We propose a framework for assessing how to optimally sort and grade students of heterogeneous ability. Potential employers face uncertainty regarding an individual's productive value. Knowing which school an individual went to is useful for two reasons: first, average student ability may differ across schools; second, different schools may use different grading rules and thus provide varying incentives to exert effort. An optimal school system exhibits coarse stratification with respect to ability, and more lenient grading at the top-tier schools than at the bottom-tier schools. Our paper contributes to the ongoing policy debate on tracking in secondary schools.
We define and study the vanishing sequence along a real valuation of sections of a line bundle on a projective variety. Building on previous work of the first author with Huayi Chen, we prove an equidistribution result for vanishing sequences of large powers of a big line bundle, and study the limit measure. In particular, the latter is described in terms of restricted volumes for divisorial valuations. We also show on an example that the associated concave function on the Okounkov body can be discontinuous at boundary points.
We introduce a gamma function $\Gamma(x,z)$ in two complex variables which extends the classical gamma function $\Gamma(z)$ in the sense that $\lim_{x\to 1}\Gamma(x,z)=\Gamma(z)$. We will show that many properties which $\Gamma(z)$ enjoys extend in a natural way to the function $\Gamma(x,z)$. Among other things we shall provide functional equations, a multiplication formula, and analogues of the Stirling formula with asymptotic estimates as consequences.
We present an analysis of the neutron star High Mass X-ray Binary (HMXB) 4U 1909+07, based mainly on Suzaku data. We extend the pulse period evolution, which behaves in a random-walk-like manner, indicative of direct wind accretion. Studying the spectral properties of 4U 1909+07 between 0.5 and 90 keV, we find that a power law with an exponential cutoff can describe the data well when additionally allowing for a blackbody or a partially covering absorber at low energies. We find no evidence for a cyclotron resonant scattering feature (CRSF), a feature seen in many other neutron star HMXB sources. By performing pulse-phase-resolved spectroscopy we investigate the origin of the strong energy dependence of the pulse profile, which evolves from a broad two-peak profile at low energies to a profile with a single, narrow peak at energies above 20 keV. Our data show that a higher folding energy in the high energy peak is very likely responsible for this behavior. This in turn suggests that we observe the two magnetic poles and their respective accretion columns at different phases, and that these accretion columns have slightly different physical conditions.
In this paper, we investigate the statefinder, deceleration, and equation of state parameters when the universe is composed of generalized holographic dark energy or generalized Ricci dark energy, for the Bianchi I universe model. These parameters are found for both interacting and non-interacting scenarios of generalized holographic or generalized Ricci dark energy with dark matter and generalized Chaplygin gas. We explore these parameters graphically for different situations. It is concluded that these models represent accelerated expansion of the universe.
We analyse the mass spectrum of the Constrained Minimal Supersymmetric Standard Model at the low tan beta fixed point. We find that the model only satisfies experimental and dark matter bounds in regions where the vacuum is meta-stable, i.e. where it violates `unbounded from below' (UFB) bounds. Adding a small amount of R-parity violation solves these problems, but the absolute upper bound on the lightest Higgs mass, m_{h^0} < 97 GeV, remains. We present the predicted sparticle mass spectrum as a function of the gluino mass m_g.
We have obtained optical spectra of the soft X-ray transient GRO J1655-40 during different X-ray spectral states (quiescence, high-soft and hard outburst) between 1994 Aug and 1997 Jun. Characteristic features observed during the 1996-97 high-soft state were: a) broad absorption lines at Halpha and Hbeta, probably formed in the inner disk; b) double-peaked HeII 4686 emission lines, formed in a temperature-inversion layer on the disk surface, created by the soft X-ray irradiation; c) double-peaked Halpha emission, with a strength associated with the hard X-ray flux, suggesting that it was probably emitted from deeper layers than He II 4686. The observed rotational velocities of all the double-peaked lines suggest that the disk was extended slightly beyond its tidal radius. Three classes of lines were identified in the spectra taken in 1994 Aug-Sep, during a period of low X-ray activity between two strong X-ray flares: broad absorption, broad (flat-topped) emission and narrow emission. We have found that the narrow emission lines (single-peaked or double-peaked) cannot be explained by a thin-disk model. We propose that the system was in a transient state, in which the accretion disk might have had an extended optically thin cocoon and significant matter outflow. After the onset of a hard X-ray flare the disk signatures disappeared, and strong single-peaked Halpha and Paschen emission was detected, suggesting that the cocoon became opaque to optical radiation. High-ionisation lines disappeared or weakened. Two weeks after the end of the flare, the cocoon appeared to be once again optically thin.
Subscribing to online services is typically a straightforward process, but cancelling them can be arduous and confusing -- causing many to give up and continue paying for services they no longer use. Making cancellation intentionally difficult is a recognized dark pattern called the Roach Motel. This paper characterizes the subscription and cancellation flows of popular news websites from four different countries and discusses them in the context of recent regulatory changes. We study the design features that make it difficult to cancel a subscription and find several cancellation flows that feature intentional barriers, such as forcing users to type in a phrase or call a representative. Further, we find many subscription flows that do not adequately inform users about recurring charges. Our results point to a growing need for effective regulation of designs that trick, coerce, or manipulate users into paying for subscriptions they do not want.
We show that Jacobian algebras arising from a sphere with $n$ punctures, with $n\geq 5$, are finite-dimensional algebras. We also consider a family of cyclically oriented quivers and prove that, for any primitive potential, the associated Jacobian algebra is finite-dimensional.
In this paper we present a parametric estimation method for certain multi-parameter heavy-tailed L\'evy-driven moving averages. The theory relies on recent multivariate central limit theorems obtained in [3] via Malliavin calculus on Poisson spaces. Our minimal contrast approach is related to the papers [14, 15], which propose to use the marginal empirical characteristic function to estimate the one-dimensional parameter of the kernel function and the stability index of the driving L\'evy motion. We extend their work to a multi-parametric framework that in particular includes the important examples of the linear fractional stable motion, the stable Ornstein-Uhlenbeck process, certain CARMA(2, 1) models, and Ornstein-Uhlenbeck processes with a periodic component, among other models. We present both the consistency and the associated central limit theorem of the minimal contrast estimator. Furthermore, we present a numerical analysis to uncover the finite-sample performance of our method.
Contextual commonsense inference is the task of generating various types of explanations around the events in a dyadic dialogue, including cause, motivation, emotional reaction, and others. Producing a coherent and non-trivial explanation requires awareness of the dialogue's structure and of how an event is grounded in the context. In this work, we create CICEROv2, a dataset consisting of 8,351 instances from 2,379 dialogues, containing multiple human-written answers for each contextual commonsense inference question, each representing one type of explanation: cause, subsequent event, motivation, or emotional reaction. We show that the inferences in CICEROv2 are more semantically diverse than in other contextual commonsense inference datasets. To solve the inference task, we propose a collection of pre-training objectives, including concept denoising and utterance sorting, to prepare a pre-trained model for the downstream contextual commonsense inference task. Our results show that the proposed pre-training objectives are effective at adapting the pre-trained T5-Large model to the contextual commonsense inference task.
In this paper we show that, for the purposes of dimensionality reduction, a certain class of structured random matrices behaves similarly to random Gaussian matrices. This class includes several matrices for which the matrix-vector multiply can be computed in log-linear time, providing efficient dimensionality reduction of general sets. In particular, we show that using such matrices any set in high dimensions can be embedded into lower dimensions with near optimal distortion. We obtain our results by connecting dimensionality reduction of any set to dimensionality reduction of sparse vectors via a chaining argument.
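One well-known member of such a class is the subsampled randomized Hadamard transform; the sketch below (our illustration, not taken from the paper) composes a random sign diagonal, a Hadamard matrix, and row subsampling. The dense matrix product is used for clarity; a fast Walsh-Hadamard transform would realize the log-linear running time.

```python
import numpy as np
from scipy.linalg import hadamard

def srht(X, m, rng):
    """Embed the rows of an (n x d) point set into m dimensions via
    sqrt(d/m) * S H D: D = random signs, H = scaled Hadamard, S = row
    subsampler."""
    n, d = X.shape
    assert d & (d - 1) == 0, "pad so the dimension is a power of two"
    D = rng.choice([-1.0, 1.0], size=d)          # random sign flips
    H = hadamard(d) / np.sqrt(d)                 # orthogonal Hadamard matrix
    rows = rng.choice(d, size=m, replace=False)  # uniform subsampling
    return np.sqrt(d / m) * (X * D) @ H[:, rows]

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 256))
Y = srht(X, 32, rng)
# Pairwise distances of Y approximate those of X, much as for a dense
# Gaussian projection.
```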
We classify the future of the universe for general cosmological models including matter and dark energy. If the equation of state of dark energy is less than -1, the age of the universe becomes finite. We compute the remaining age of the universe for such universe models. The behaviour of the future growth of matter density perturbations is also studied. We find that the collapse of a spherical overdensity region is greatly changed if the equation of state of dark energy is less than -1.
The X-ray color (hardness ratio) of optically undetected X-ray sources can be used to distinguish obscured active galactic nuclei (AGNs) at low and intermediate redshift from viable high-redshift (i.e., z>5) AGN candidates. This will help determine the space density, ionizing photon production, and X-ray background contribution of the earliest detectable AGNs. High-redshift AGNs should appear soft in X-rays, with hardness ratio HR ~ -0.5, even if there is strong absorption by a hydrogen column density N_H up to 10^23 cm^-2, simply because the absorption redshifts out of the soft X-ray band in the observed frame. Here the X-ray hardness ratio is defined as HR = (H-S)/(H+S), where S and H are the soft and hard band net counts detected by Chandra. High-redshift AGNs that are Compton thick (N_H >~ 10^24 cm^-2) could have HR ~ 0.0 at z>5. However, these should be rare in deep Chandra images, since they have to be >~10 times brighter intrinsically, which implies a >~100 times drop in their space density. Applying the hardness criterion (HR<0.0) can filter out about 50% of the candidate high-redshift AGNs selected from deep Chandra images.
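A minimal sketch of the selection criterion defined above, with illustrative counts rather than real catalog data:

```python
import numpy as np

def hardness_ratio(soft_counts, hard_counts):
    """X-ray hardness ratio HR = (H - S) / (H + S) from net band counts."""
    S = np.asarray(soft_counts, dtype=float)
    H = np.asarray(hard_counts, dtype=float)
    return (H - S) / (H + S)

# Illustrative source list: (soft S, hard H) net counts per source.
S = np.array([120.0, 15.0, 40.0])
H = np.array([30.0, 18.0, 55.0])

hr = hardness_ratio(S, H)
# Keep soft sources (HR < 0.0) as viable high-redshift AGN candidates;
# harder sources are likely obscured AGNs at lower redshift.
candidates = hr < 0.0
print(hr, candidates)
```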
Immersive video offers the freedom to navigate inside a virtualized environment. Instead of streaming bulky immersive videos in their entirety, viewport-adaptive streaming (the viewport is also referred to as the field of view, FoV) is preferred: high-quality content is streamed within the current viewport, while the quality of the representation elsewhere is reduced to save network bandwidth. Considering that the quality can be refined when the user focuses on a new FoV, in this paper we model the perceptual impact of quality variations (through adapting the quantization stepsize and spatial resolution) with respect to the refinement duration, and obtain a product of two closed-form exponential functions that well explains the joint quality impact induced by quantization and resolution. The analytical model is cross-validated using another set of data, where both the Pearson and Spearman's rank correlation coefficients are close to 0.98. Our work is devised to optimize adaptive FoV streaming of immersive video under limited network resources. Numerical results show that our proposed model significantly improves the quality of experience of users, with about 9.36% BD-Rate (Bjontegaard Delta Rate) improvement on average compared to other representative methods, particularly under limited bandwidth.
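How such a model could be fitted and cross-validated is sketched below; the exponential factors, parameters, and data are placeholders of our own, not the paper's closed forms or measurements:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def quality_model(X, a, b):
    """Illustrative product of two exponentials: perceived quality as a
    function of normalized quantization stepsize q and spatial
    resolution s (the paper specifies its own closed forms)."""
    q, s = X
    return np.exp(-a * q) * np.exp(-b * (1.0 - s))

# Hypothetical subjective scores at sampled (q, s) test points.
rng = np.random.default_rng(0)
q = rng.uniform(0, 1, 50)
s = rng.uniform(0.25, 1, 50)
scores = quality_model((q, s), 1.2, 0.8) + 0.02 * rng.normal(size=50)

params, _ = curve_fit(quality_model, (q, s), scores, p0=(1.0, 1.0))
pred = quality_model((q, s), *params)
# Cross-validation style checks via linear and rank correlation.
print(pearsonr(scores, pred)[0], spearmanr(scores, pred)[0])
```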
This study investigates the performance of two open-source intrusion detection systems (IDSs), namely Snort and Suricata, for accurately detecting malicious traffic on computer networks. Snort and Suricata were installed on two different but identical computers, and their performance was evaluated at a network speed of 10 Gbps. It was noted that Suricata could process network traffic at a higher speed than Snort with a lower packet drop rate, but it consumed more computational resources. Snort had higher detection accuracy and was thus selected for further experiments. It was observed that Snort triggered a high rate of false positive alarms. To solve this problem, an adaptive Snort plug-in was developed. To select the best-performing algorithm for the adaptive plug-in, an empirical study was carried out with different learning algorithms, and the Support Vector Machine (SVM) was selected. A hybrid version of SVM and fuzzy logic produced better detection accuracy, but the best result was achieved using an SVM optimised with the firefly algorithm, with an FPR (false positive rate) of 8.6% and an FNR (false negative rate) of 2.2%. The novelty of this work is the performance comparison of two IDSs at 10 Gbps and the application of hybrid and optimised machine learning algorithms to Snort.
Models of small-field inflation often suffer from the overshoot problem. A particularly efficient resolution to the problem was proposed recently in the context of string theory. We show that this resolution predicts the existence of giant spherically symmetric overdense regions with a radius of at least 110 Mpc. We argue that if such structures are found, they could offer an experimental window into string theory.
In certain instances, the particle paths predicted by Bohmian mechanics are thought to be at odds with classical intuition. A striking illustration arises in the interference experiments envisaged by Englert, Scully, S\"ussmann and Walther, which led the authors to claim that the Bohmian trajectories cannot be real and so must be `surreal'. Through a combined experimental and numerical study, we here demonstrate that individual trajectories in the hydrodynamic pilot-wave system exhibit the key features of their surreal Bohmian counterparts. These real surreal classical trajectories are rationalized in terms of the system's non-Markovian pilot-wave dynamics. Our study thus makes clear that the designation of Bohmian trajectories as surreal is based on misconceptions concerning the limitations of classical dynamics and a lack of familiarity with pilot-wave hydrodynamics.
Large-scale machine learning problems make the cost of hyperparameter tuning ever more prohibitive. This creates a need for algorithms that can tune themselves on-the-fly. We formalize the notion of "tuning-free" algorithms that can match the performance of optimally-tuned optimization algorithms up to polylogarithmic factors given only loose hints on the relevant problem parameters. We consider in particular algorithms that can match optimally-tuned Stochastic Gradient Descent (SGD). When the domain of optimization is bounded, we show tuning-free matching of SGD is possible and achieved by several existing algorithms. We prove that for the task of minimizing a convex and smooth or Lipschitz function over an unbounded domain, tuning-free optimization is impossible. We discuss conditions under which tuning-free optimization is possible even over unbounded domains. In particular, we show that the recently proposed DoG and DoWG algorithms are tuning-free when the noise distribution is sufficiently well-behaved. For the task of finding a stationary point of a smooth and potentially nonconvex function, we give a variant of SGD that matches the best-known high-probability convergence rate for tuned SGD at only an additional polylogarithmic cost. However, we also give an impossibility result that shows no algorithm can hope to match the optimal expected convergence rate for tuned SGD with high probability.
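As a concrete illustration, a sketch in the spirit of the DoG rule mentioned above (our paraphrase; details such as iterate averaging and the exact update order in the published algorithm are simplified): the step size is the maximal distance travelled from the initial point divided by the square root of the accumulated squared gradient norms, so only a loose initial scale r_eps is supplied.

```python
import numpy as np

def dog(grad, x0, steps, r_eps=1e-6):
    """Parameter-free SGD sketch in the spirit of DoG: step size =
    (max distance from x0 so far) / sqrt(sum of squared gradient norms)."""
    x = np.array(x0, dtype=float)
    r_bar, g_sq = r_eps, 0.0
    for _ in range(steps):
        g = grad(x)
        g_sq += np.dot(g, g)                      # accumulate gradient energy
        x = x - (r_bar / np.sqrt(g_sq)) * g       # distance-over-gradients step
        r_bar = max(r_bar, np.linalg.norm(x - x0))
    return x

# Toy problem: stochastic gradient of f(x) = 0.5 ||x||^2.
rng = np.random.default_rng(0)
noisy_grad = lambda x: x + 0.1 * rng.standard_normal(x.shape)
print(dog(noisy_grad, np.ones(5), 2000))
```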
In this note, we prove that, for a finite-dimensional Lie algebra $\mathfrak g$ over a field $\mathbb K$ of characteristic 0 which contains $\mathbb C$, the Chevalley--Eilenberg complex $\mathrm U(\mathfrak g)\otimes \wedge(\mathfrak g)$, which is in a natural way a deformation quantization of the Koszul complex of $\mathrm S(\mathfrak g)$, is $A_\infty$-quasi-isomorphic to the deformation quantization of the $A_\infty$-bimodule $K=\mathbb K$ provided by the Formality Theorem in presence of two branes (CFFR).
We study the positivity of the first Chern class of a rank r Ulrich vector bundle E on a smooth n-dimensional variety $X \subseteq \mathbb P^N$. We prove that $c_1(E)$ is very positive on every subvariety not contained in the union of lines in X. In particular if X is not covered by lines, then E is big and $c_1(E)^n \ge r^n$. Moreover we classify rank r Ulrich vector bundles E with $c_1(E)^2=0$ on surfaces and with $c_1(E)^2=0$ or $c_1(E)^3=0$ on threefolds (with some exceptions).
We study the tendency of AI systems to deceive by constructing a realistic simulation setting of a company AI assistant. The simulated company employees provide tasks for the assistant to complete, with these tasks spanning writing assistance, information retrieval and programming. We then introduce situations where the model might be inclined to behave deceptively, while taking care not to instruct or otherwise pressure the model to do so. Across different scenarios, we find that Claude 3 Opus 1) complies with a task of mass-generating comments to influence public perception of the company, later deceiving humans about having done so, 2) lies to auditors when asked questions, and 3) strategically pretends to be less capable than it is during capability evaluations. Our work demonstrates that even models trained to be helpful, harmless and honest sometimes behave deceptively in realistic scenarios, without notable external pressure to do so.
Bose-condensed gases are considered with an effective interaction strength varying over the whole range of values between zero and infinity. The consideration is based on the use of a representative statistical ensemble for Bose systems with broken global gauge symmetry. Practical calculations are illustrated for a uniform Bose gas at zero temperature, employing a self-consistent mean-field theory which is both conserving and gapless.
We consider the moderate deviations behavior of two (co-)volatility estimators: the generalised bipower variation and the Hayashi-Yoshida estimator. The results are obtained by using a new moderate deviations principle for m-dependent random variables based on a Chen-Ledoux type condition. In the last decade there has been a considerable development of the asymptotic theory for processes observed at high frequency. This was mainly motivated by financial applications, where data such as stock prices or currencies are observed very frequently. As, under no-arbitrage assumptions, price processes must follow a semimartingale, there was a need for probabilistic tools for functionals of semimartingales based on high-frequency observations. Inspired by potential applications, probabilists started to develop limit theorems for semimartingales, and statisticians applied the asymptotic theory to analyze the path properties of discretely observed semimartingales: for the estimation of certain volatility functionals and realised jumps, or for performing various test procedures. We consider $X_t = (X_{1,t}, X_{2,t})_{t\in[0,T]}$, a 2-dimensional semimartingale defined on the filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\in[0,T]}, P)$, of the form
Nonrelativistic string theory is a self-contained corner of string theory, with its string spectrum enjoying a Galilean-invariant dispersion relation. This theory is unitary and ultraviolet complete, and can be studied from first principles. In these notes, we focus on the bosonic closed string sector. In curved spacetime, nonrelativistic string theory is defined by a renormalizable quantum nonlinear sigma model in background fields, following certain symmetry principles that disallow any deformation towards relativistic string theory. We review previous proposals of such symmetry principles and propose a modified version that might be useful for supersymmetrizations. The appropriate target-space geometry determined by these local spacetime symmetries is string Newton-Cartan geometry. This geometry is equipped with a two-dimensional foliation structure that is restricted by torsional constraints. Breaking the symmetries that give rise to such torsional constraints in the target space will in general generate quantum corrections to a marginal deformation in the worldsheet quantum field theory. Such a deformation induces a renormalization group flow towards sigma models that describe relativistic strings.
The quality of fetal ultrasound images is significantly affected by motion blur, since the imaging system requires low motion during acquisition in order to capture accurate data. This can be addressed with a mathematical model of motion blur in the time or frequency domain. We propose a new model of linear motion blur in both the frequency and moment domains to analyse the invariant features of the blur convolution for ultrasound images. Moreover, the model also provides an estimation of the motion parameters, blur length and angle. These outcomes suggest great potential for this invariant method in ultrasound imaging applications.
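For reference, a minimal sketch of the linear motion blur model itself (a uniform line-segment point spread function; our illustration, the paper's moment-domain analysis is not reproduced): the spacing and orientation of the near-zeros of its frequency response encode the blur length and angle, which is what makes parameter estimation possible.

```python
import numpy as np

def motion_blur_kernel(length, angle_deg, size=None):
    """Discrete PSF of uniform linear motion blur: a normalized line
    segment of the given length and orientation."""
    size = size or (length | 1)            # odd support by default
    k = np.zeros((size, size))
    c = size // 2
    t = np.linspace(-(length - 1) / 2, (length - 1) / 2, length)
    th = np.deg2rad(angle_deg)
    xs = np.round(c + t * np.cos(th)).astype(int)
    ys = np.round(c - t * np.sin(th)).astype(int)
    k[ys, xs] = 1.0
    return k / k.sum()

# The magnitude spectrum is sinc-like: near-zero ridges spaced by the
# blur length, oriented by the blur angle.
K = motion_blur_kernel(9, 30.0, size=64)
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(K)))
```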
It is shown that for every $p\in (1,\infty)$ there exists a Banach space $X$ of finite cotype such that the projective tensor product $\ell_p\widehat{\otimes} X$ fails to have finite cotype. More generally, if $p_1,p_2,p_3\in (1,\infty)$ satisfy $\frac{1}{p_1}+\frac{1}{p_2}+\frac{1}{p_3}\le 1$, then $\ell_{p_1}\widehat{\otimes}\ell_{p_2}\widehat{\otimes}\ell_{p_3}$ does not have finite cotype. This is proved via a connection to the theory of locally decodable codes.
Accurate software effort estimation has been a challenge for many software practitioners and project managers. Underestimation leads to disruption in a project's estimated cost and delivery. On the other hand, overestimation causes outbidding and financial losses in business. Many software estimation models exist; however, none have been proven to be the best in all situations. In this paper, a decision tree forest (DTF) model is compared to a traditional decision tree (DT) model, as well as a multiple linear regression model (MLR). The evaluation was conducted using the ISBSG and Desharnais industrial datasets. Results show that the DTF model is competitive and can be used as an alternative in software effort prediction.
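A sketch of such a comparison (synthetic data of our own; scikit-learn's RandomForestRegressor stands in for a decision tree forest, and MMRE is used as one common effort-estimation criterion; the paper's datasets and evaluation protocol differ):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# X: project features (e.g. size, team experience), y: actual effort.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 5))
y = 100 * X[:, 0] + 30 * X[:, 1] ** 2 + rng.normal(5, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "DT":  DecisionTreeRegressor(random_state=0),
    "DTF": RandomForestRegressor(n_estimators=200, random_state=0),
    "MLR": LinearRegression(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    mmre = np.mean(np.abs(pred - y_te) / np.abs(y_te))  # mean relative error
    print(f"{name}: MMRE = {mmre:.3f}")
```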
Charge order appears to be a ubiquitous phenomenon in doped Mott insulators and is currently under intense experimental and theoretical investigation, particularly in the high-$T_c$ cuprates. This phenomenon is conventionally understood in terms of Hartree-Fock type mean field theory. Here we demonstrate a mechanism for charge modulation which is rooted in the many-particle quantum physics arising in the strong coupling limit. Specifically, we consider the problem of a single hole in a bipartite $t$-$J$ ladder. As a remnant of the fermion signs, the hopping hole picks up subtle phases determined by the fluctuating spins, the so-called phase string effect. We demonstrate the presence of charge modulations in density matrix renormalization group solutions, which disappear when the phase strings are switched off. This form of charge modulation can be understood analytically in a path-integral language, showing that the phase strings give rise to constructive interference leading to self-localization. When the latter occurs, left- and right-moving propagating modes emerge inside the localization volume, and their interference is responsible for the real-space charge modulation.
Functional connectivity refers to the temporal statistical relationship between spatially distinct brain regions and is usually inferred from the time series coherence/correlation in brain activity between regions of interest. In human functional brain networks, the network structure is often inferred from functional magnetic resonance imaging (fMRI) blood oxygen level dependent (BOLD) signal. Since the BOLD signal is a proxy for neuronal activity, it is of interest to learn the latent functional network structure. Additionally, despite a core set of observations about functional networks such as small-worldness, modularity, exponentially truncated degree distributions, and presence of various types of hubs, very little is known about the computational principles which can give rise to these observations. This paper introduces a Hidden Markov Random Field framework for the purpose of representing, estimating, and evaluating latent neuronal functional relationships between different brain regions using fMRI data.
We derive the first- and second-order expressions for the shear viscosity, the bulk viscosity, and the thermal conductivity of a relativistic hot boson gas in a magnetic field, using relativistic kinetic theory within the Chapman-Enskog method. The order-by-order off-equilibrium distribution function is obtained in terms of associated Laguerre polynomials with magnetic field-dependent coefficients using the relativistic Boltzmann-Uehling-Uhlenbeck transport equation. The order-by-order anisotropic transport coefficients are evaluated in powers of the dimensionless ratio of kinetic energy to fluid temperature for finite magnetic fields. In a magnetic field, the shear viscosity (at every order) splits into five different coefficients, four of which show a magnetic field dependence, as seen in a previous study \cite{Ashutosh1} using the relaxation time approximation for the collision kernel. On the other hand, the bulk viscosity, which splits into three components (at every order), is independent of the magnetic field. The thermal conductivity shows a similar splitting but is field-dependent. The differences between the first- and second-order results are more prominent for the thermal conductivities than for the shear viscosity; moreover, the difference between the two results is most evident at low temperatures. The first- and second-order results converge rapidly at high temperatures.
We define and study a simplicial complex which is a homogeneous space for the group $PGL(2, K)$ over a two-dimensional local field $K$. The complex is a generalization of the tree studied by F. Bruhat, J. Tits, J.-P. Serre and P. Cartier in the 1960s and early 1970s. Such a complex can be canonically attached to a triple $x \in C \subset X$, where $X$ is an algebraic surface, $C$ is an irreducible curve, and $x$ is a smooth point on $C$ and $X$. This construction can be used for a description of the set of isomorphism classes of vector bundles on $X$.
Successful quantitative measurement of carbon content in coal using laser-induced breakdown spectroscopy (LIBS) suffers from relatively low precision and accuracy. In the present work, the spectrum standardization method was combined with the dominant-factor-based partial least squares (PLS) method to improve the measurement accuracy of carbon content in coal by LIBS. The combination model employs the spectrum standardization method to convert the carbon line intensity into its standard state so as to calculate the dominant carbon concentration more accurately, and then applies PLS with full-spectrum information to correct the residual errors. The combination model was applied to the measurement of carbon content in 24 bituminous coal samples. The results demonstrate that the combination model further improves the measurement accuracy compared with both our previously established spectrum standardization model and the dominant-factor-based PLS model using spectral-area-normalized intensity for the dominant factor. For example, the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and the average relative error (ARE) for the combination model were 0.99, 1.75%, and 2.39%, respectively, while those values were 0.83, 2.71%, and 3.40% for the spectrum standardization method, and 0.99, 2.66%, and 3.64% for the dominant-factor-based PLS model.
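Structurally, the combination model is a two-stage regression, sketched below with stand-ins of our own (synthetic data, a linear dominant-factor model in place of the nonlinear spectrum-standardization model, and an arbitrary number of PLS components):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

# Hypothetical inputs: `spectra` (samples x channels), `carbon_line`
# (standardized carbon line intensity per sample), `c_ref` (reference
# carbon concentrations from standard analysis).
rng = np.random.default_rng(0)
n, p = 24, 500
spectra = rng.normal(size=(n, p))
c_ref = rng.uniform(50, 80, size=n)
carbon_line = 0.02 * c_ref + 0.01 * rng.normal(size=n)

# Stage 1: dominant factor -- concentration predicted from the
# standardized carbon line intensity alone.
dom = LinearRegression().fit(carbon_line[:, None], c_ref)
c_dom = dom.predict(carbon_line[:, None])

# Stage 2: PLS on the full spectrum corrects the residual errors.
pls = PLSRegression(n_components=5).fit(spectra, c_ref - c_dom)
c_pred = c_dom + pls.predict(spectra).ravel()
```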
The ability of an autonomous vehicle to perform 3D tracking is essential for safe planning and navigation in cluttered environments. The main challenges for multi-object tracking (MOT) in autonomous driving applications reside in the inherent uncertainties regarding the number of objects, when and where the objects may appear and disappear, and uncertainties regarding the objects' states. Random finite set (RFS) based approaches can naturally model these uncertainties accurately and elegantly, and they have been widely used in radar-based tracking applications. In this work, we develop an RFS-based MOT framework for 3D LiDAR data. In particular, we propose a Poisson multi-Bernoulli mixture (PMBM) filter to solve the amodal MOT problem for autonomous driving applications. To the best of our knowledge, this represents a first attempt at employing an RFS-based approach in conjunction with 3D LiDAR data for MOT applications, with comprehensive validation using challenging datasets made available by industry leaders. The superior experimental results of our PMBM tracker on the public Waymo and Argoverse datasets clearly illustrate that an RFS-based tracker outperforms many state-of-the-art deep learning-based and Kalman filter-based methods, and consequently, these results indicate great potential for further exploration of RFS-based frameworks for 3D MOT applications.
We discuss Hamiltonian learning in quantum field theories as a protocol for systematically extracting the operator content and coupling constants of effective field theory Hamiltonians from experimental data. Learning the Hamiltonian at varying spatial measurement resolutions gives access to field theories at different energy scales and allows one to learn a flow of Hamiltonians reminiscent of the renormalization group. Our method, which we demonstrate both in theoretical studies and on available data from a quantum gas experiment, promises new ways of addressing the emergence of quantum field theories in quantum simulation experiments.
The present status of investigations of fluctuations and correlations seen in high-energy multiparticle production processes, carried out using the notion of nonextensivity, is reviewed.
One of the important questions in high energy physics is the relation of quark and gluon spin to that of the nucleons which they comprise. Polarization experiments provide a mechanism to probe the spin properties of elementary particles and provide crucial tests of Quantum Chromodynamics (QCD). The theoretical and experimental status of this fundamental question will be reviewed in this paper.
We study non-parametric estimation of choice models, which were introduced to alleviate unreasonable assumptions of traditional parametric models and are prevalent in several application areas. The existing literature focuses only on the static observational setting where all of the observations are given upfront; these methods are not equipped with explicit convergence rate guarantees and consequently cannot provide an a priori analysis of the model accuracy vs. sparsity trade-off for the estimated model returned by their algorithms. In contrast, we focus on estimating a non-parametric choice model from observational data in a \emph{dynamic} setting, where observations are obtained over time. We show that choice model estimation can be cast as a convex-concave saddle-point (SP) joint estimation and optimization (JEO) problem, and we provide a primal-dual framework for deriving algorithms to solve it based on online convex optimization. By tailoring our framework carefully to the choice model estimation problem, we obtain tractable algorithms with provable convergence guarantees and explicit bounds on the sparsity of the estimated model. Our numerical experiments confirm the effectiveness of the algorithms derived from our framework.
A significant number of confined energetic electrons have been observed outside the Last Closed Flux Surface (LCFS) of the solenoid-free, ECRH-sustained plasmas in the EXL-50 spherical torus. Several diagnostics have been applied, for the first time, to investigate the key characteristics of these energetic electrons. Experiments reveal the existence of high-temperature, low-density electrons, which can carry a relatively large amount of the stored energy. The boundaries of the thermal plasma and of the energetic electron fluid appear to be clearly separated, and the distance between the two boundaries can reach tens of centimeters (around the size of the minor radius of the thermal plasma). This implies that the Grad-Shafranov equilibrium is not suitable for describing the equilibrium of the EXL-50 plasma and that a multi-fluid model is required. Full-orbit particle dynamics simulations show that energetic electrons can be well confined outside the LCFS. This is consistent with the experimental observations.
In this paper, we apply transformation-based optics to the derivation of a general class of transparent metamaterial slabs. By means of analytical and numerical full-wave studies, we explore their image displacement/formation capabilities and establish intriguing connections with configurations already known in the literature. Starting from these revisitations, we develop a number of nontrivial extensions and illustrate their possible applications to the design of perfect radomes, anti-cloaking devices, and focusing devices based on double-positive (possibly nonmagnetic) media. These designs show that such anomalous features may be achieved without necessarily relying on negative-index or strongly resonant metamaterials, suggesting more practical avenues for the realization of these devices.
Spin precession in rubidium atoms is investigated through a pump-probe technique. The excited wave packet corresponds to a precession of the spin and orbital angular momentum around the total angular momentum. We show that using shaped laser pulses allows us to control these dynamics. With a Fourier-transform-limited pulse, the wave packet is initially prepared in the bright state (coupled to the initial state), whereas a pulse presenting a $\pi$ step in the spectral phase prepares the wave packet in the dark state (uncoupled to the initial state).
If one thinks of a Riemannian metric $g_1$, by analogy, as the gradient of the corresponding distance function $d_1$ with respect to a background Riemannian metric $g_0$, then a natural question arises as to whether a corresponding theory of Sobolev inequalities exists between the Riemannian metric and its distance function. In this paper we study the sub-critical case $p < \frac{m}{2}$ and show that a Sobolev inequality exists in which an $L^{\frac{p}{2}}$ bound on a Riemannian metric implies an $L^q$ bound on its corresponding distance function. We then use this result to state a convergence theorem and show how this theorem can be useful in proving geometric stability results, by proving a version of Gromov's conjecture for tori with almost non-negative scalar curvature in the conformal case. Examples are given to show that the hypotheses of the main theorems are necessary.
In the spirit of previous papers, but using more general field configurations, the non-linear O(3) model in (2+1)-D, modified by the addition of both a potential-like term and a Skyrme-like term, is considered. The instanton solutions are numerically evolved in time and some of their stability properties studied. They are found to be stable, and a repulsive force is seen to exist among them. These results, which are restricted to the case of zero speed systems, confirm those obtained in previous investigations, in which a similar problem was studied for a different choice of the potential-like term.
The role of background in bosonic quantum statistics is discussed within the framework of a new approach in terms of coherent states. Bosons are indeed detected in different physical situations where they exhibit different and apparently unconnected properties. Besides the Bose gas, we consider bosons in particle physics and bosons in harmonic traps. In particle physics, bosons are dealt with in a context where the number of observed particles is finite: here the relevant features are the canonical commutation relations, which we show to be related to a Boltzmann-like distribution. A further case is the Bose condensate in harmonic traps, where the discrete spectrum leads us to predict, below T_c, a new critical temperature. The unified approach proposed shows that all the differences can be ascribed to the environment.
Removing speckle noise from medical ultrasound images while preserving image features, without introducing artifacts and distortion, is a major challenge in ultrasound image restoration. In this paper, we propose a multiframe-based adaptive despeckling (MADS) algorithm to reconstruct a high-resolution B-mode image from raw radio-frequency (RF) data, based on a multiple input single output (MISO) model. As a prior step to despeckling, the speckle pattern in each frame is estimated using a novel multiframe-based adaptive approach for ultrasonic speckle noise estimation (MSNE), based on a single input multiple output (SIMO) modeling of consecutive deconvolved ultrasound image frames. The elegance of the proposed despeckling algorithm is that it addresses the despeckling problem by completely following the signal generation model, unlike conventional ad-hoc smoothing or filtering-based approaches; therefore, it is likely to maximally preserve the image features. As deconvolution is a necessary pre-processing step for despeckling, we describe here a 2-D extension of the SIMO model-based 1-D deconvolution method. Finally, a complete framework for the generation of high-resolution ultrasound B-mode images is also established in this paper. The results show 8.55-15.91 dB and 8.24-14.94 dB improvements in terms of SNR and PSNR, respectively, for simulation data, and 2.22-3.17 and 13.24-32.85 improvements in terms of NIQE and BRISQUE, respectively, for in-vivo data, compared to traditional despeckling algorithms. Visual comparison shows the superior texture, resolution, and detail of the B-mode images produced by our method compared to those from a commercial scanner; hence, it may significantly improve the diagnostic quality of ultrasound images.
Hamilton-Jacobi (HJ) partial differential equations (PDEs) have diverse applications spanning physics, optimal control, game theory, and imaging sciences. This research introduces a first-order optimization-based technique for HJ PDEs, which formulates the time-implicit update of HJ PDEs as a saddle point problem. We remark that the saddle point formulation for HJ equations is aligned with the primal-dual formulation of optimal transport and potential mean-field games (MFGs). This connection enables us to extend MFG techniques and design numerical schemes for solving HJ PDEs. We employ the primal-dual hybrid gradient (PDHG) method to solve the saddle point problems, benefiting from simple structures that enable fast computation of the updates. Remarkably, the method caters to a broader range of Hamiltonians, encompassing non-smooth and spatiotemporally dependent cases. The approach's effectiveness is verified through various numerical examples in one and two dimensions, such as quadratic and $L^1$ Hamiltonians with spatial and time dependence.
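To make the update structure concrete, below is a minimal PDHG (Chambolle-Pock) iteration on a toy total-variation problem; this is a generic sketch of the algorithm, not the paper's HJ-PDE discretization, and the step sizes, operator, and test signal are illustrative choices.

```python
import numpy as np

# Generic PDHG demo on  min_x 0.5*||x - b||^2 + lam*||D x||_1,  written as
# the saddle point  min_x max_y <D x, y> + 0.5*||x - b||^2 - g*(y),
# where g*(y) is the indicator of {|y_i| <= lam}.
def pdhg(b, lam=0.5, n_iter=500):
    n = b.size
    D = np.diff(np.eye(n), axis=0)    # forward differences, ||D||^2 <= 4
    tau = sigma = 0.25                # step sizes with tau*sigma*||D||^2 <= 1
    x = np.zeros(n); x_bar = x.copy(); y = np.zeros(n - 1)
    for _ in range(n_iter):
        # dual step: ascent + projection onto {|y| <= lam} (prox of g*)
        y = np.clip(y + sigma * (D @ x_bar), -lam, lam)
        # primal step: descent + prox of f(x) = 0.5*||x - b||^2
        x_new = (x - tau * (D.T @ y) + tau * b) / (1.0 + tau)
        # extrapolation step
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

b = np.sign(np.sin(np.linspace(0, 6, 200))) + 0.1 * np.random.randn(200)
x_denoised = pdhg(b)   # recovers a roughly piecewise-constant signal
```

The same dual-step/primal-step/extrapolation pattern is what PDHG applies to the saddle point formulation of the time-implicit HJ update described above.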
In 2022, the second author found a prolific construction of strongly regular graphs, which is based on joining a coclique and a divisible design graph with certain parameters. The construction produces strongly regular graphs with the same parameters as the complement of the symplectic graph $\mathsf{Sp}(2d,q)$. In this paper, we determine the parameters of strongly regular graphs which admit a decomposition into a divisible design graph and a coclique attaining the Hoffman bound. In particular, it is shown that when the least eigenvalue of such a strongly regular graph is a prime power, its parameters coincide with those of the complement of $\mathsf{Sp}(2d,q)$. Furthermore, a generalization of the construction is discussed.
Axionic electrodynamics predicts many peculiar magnetoelectric properties. Hitherto, simple structures such as one-dimensional multilayers have been employed to explore these axionic magnetoelectric responses, and the Fabry-P\'{e}rot interference mechanism has frequently been applied to augment these effects. In this Letter, we propose a new mechanism, metamaterial-enhanced axionic magnetoelectric response, by taking advantage of the intense enhancement of localized electromagnetic fields associated with plasmonic resonances. Through numerical simulations, we show that a plasmonic metamaterial can enhance the axionic magnetoelectric effect by two orders of magnitude.
We empirically analyze the most volatile component of the electricity price time series from two North American wholesale electricity markets. We show that these time series exhibit fluctuations which are not described by a Brownian motion, as they show multi-scaling, high Hurst exponents, and sharp price movements. We use the generalized Hurst exponent (GHE, $H(q)$) to show that although these time series have strong cyclical components, the fluctuations exhibit persistent behaviour, i.e., $H(q)>0.5$. We investigate the effectiveness of the GHE as a predictive tool in a simple linear forecasting model, and study the forecast error as a function of $H(q)$, with $q=1$ and $q=2$. Our results suggest that the GHE can be used as a predictive tool for these time series when the Hurst exponent is dynamically evaluated on rolling time windows of size $\approx 50 - 100$ hours. These results are also compared to the case in which the cyclical components have been subtracted from the time series, showing the importance of cyclicality in the prediction power of the Hurst exponent.
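For concreteness, here is a minimal sketch of estimating $H(q)$ on rolling windows via the standard scaling relation $E[|X_{t+\tau}-X_t|^q] \sim \tau^{qH(q)}$; the window size and lag range below are illustrative, not the paper's exact settings.

```python
import numpy as np

def ghe(x, q=1, taus=range(1, 20)):
    """Generalized Hurst exponent H(q) from the scaling of q-th moments."""
    taus = np.asarray(list(taus))
    k_q = np.array([np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus])
    slope = np.polyfit(np.log(taus), np.log(k_q), 1)[0]  # slope = q*H(q)
    return slope / q

def rolling_ghe(series, window=100, q=1):
    """H(q) evaluated dynamically on rolling windows, as in the text."""
    return np.array([ghe(series[i - window:i], q=q)
                     for i in range(window, len(series))])
```

A value above 0.5 on a given window then flags locally persistent fluctuations, which is the regime in which the text reports predictive power.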
Let $R = k[x_1, \dotsc , x_n]$ denote the standard graded polynomial ring over a field $k$. We study certain classes of equigenerated monomial ideals with the property that the so-called complementary ideal has no linear relations on the generators. We then use iterated trimming complexes to deduce Betti numbers for such ideals. Furthermore, using a result on splitting mapping cones by Miller and Rahmati, we construct the minimal free resolutions for all ideals under consideration explicitly and conclude with questions about extra structure on these complexes.
Grid operators in EGEE use a dedicated dashboard as their central operational tool, which has remained stable and scalable over the last five years despite continuous upgrades driven by specifications from users, monitoring tools, and data providers. In EGEE-III, the regionalisation of operations led the tool developers to conceive a standalone instance of this tool. Here, we present the concept and its EELA-II implementation. Indeed, the re-engineering of this tool produced an easily deployable package that can connect to EELA-II specific information sources, such as EVENTUM, the EELA GOCDB-like database, or the SAM EELA instance, through the three components of the package: Lavoisier, a generic and scalable data access mechanism; Symfony, the widely used PHP framework, for configuration flexibility; and a MySQL database.
In this paper, we consider a cosmological model with non-minimal kinetic coupling terms and investigate its cosmological implications with respect to the logarithmic entropy-corrected holographic dark energy (LECHDE). The correspondence between LECHDE in flat FRW cosmology and the phantom dark energy model, with the aim of interpreting the current acceleration of the universe, is also examined.
We show that the limit at infinity of the vector-valued Brown-York-type quasi-local mass along any coordinate exhaustion of an asymptotically hyperbolic $3$-manifold satisfying the relevant energy condition on the scalar curvature has the conjectured causal character. Our proof uses spinors and relies on a Witten-type formula expressing the asymptotic limit of this quasi-local mass as a bulk integral which manifestly has the right sign under the above assumptions. In the spirit of recent work by Hijazi, Montiel and Raulot, we also provide another proof of this result which uses the theory of boundary value problems for Dirac operators on compact domains to show that a certain quasi-local mass, which converges to the Brown-York mass in the asymptotic limit, has the expected causal character under suitable geometric assumptions.
In massless QCD coupled to QED in an external magnetic field, a photon with linear polarization in the direction of the external magnetic field mixes with the charge-neutral pion through the triangle anomaly, leading to one gapless mode with the quadratic dispersion relation $\omega \sim k^2$ and one gapped mode. We show that this gapless mode can be interpreted as the so-called type-B Nambu-Goldstone (NG) mode associated with the spontaneous breaking of generalized global symmetries and that its presence is solely dictated by the anomalous commutator in the symmetry algebra. We also discuss a possible realization of such nonrelativistic NG modes in 3-dimensional Dirac semimetals.
A major challenge of research on non-English machine reading for question answering (QA) is the lack of annotated datasets. In this paper, we present GermanQuAD, a dataset of 13,722 extractive question/answer pairs. To improve the reproducibility of the dataset creation approach and foster QA research on other languages, we summarize lessons learned and evaluate reformulation of question/answer pairs as a way to speed up the annotation process. An extractive QA model trained on GermanQuAD significantly outperforms multilingual models and also shows that machine-translated training data cannot fully substitute for hand-annotated training data in the target language. Finally, we demonstrate the wide range of applications of GermanQuAD by adapting it to GermanDPR, a training dataset for dense passage retrieval (DPR), and train and evaluate the first non-English DPR model.
Poly(ionic liquid)s (PILs), like their ionic liquid (IL) analogues, present a nanostructure arising from local interactions. The influence of this nanostructure on the macromolecular conformation of polymer chains is investigated for the first time by means of extensive small-angle neutron scattering (SANS) on a series of poly(1-vinyl-3-alkylimidazolium)s with varying alkyl side-chain length and counter-anion, both in bulk and in dilute solutions. Radii of gyration are found to increase with the side-chain length in solution as a consequence of crowding interactions between neighboring monomers. In bulk, however, a non-monotonic evolution of the radius of gyration reflects a change in chain flexibility and a potential screening of electrostatic interactions. Additionally, at smaller scales, SANS provides an experimental estimation of both the chain diameter and the correlation length between neighboring chains, the comparison of which unveils clear evidence of interdigitation of the alkyl side-chains. These structural features provide valuable insights into the understanding of the dynamic properties of PILs.
We introduce the problem Synchronized Planarity. Roughly speaking, its input is a loop-free multi-graph together with synchronization constraints that, e.g., match pairs of vertices of equal degree by providing a bijection between their edges. Synchronized Planarity then asks whether the graph admits a crossing-free embedding into the plane such that the orders of edges around synchronized vertices are consistent. We show, on the one hand, that Synchronized Planarity can be solved in quadratic time, and, on the other hand, that it serves as a powerful modeling language that lets us easily formulate several constrained planarity problems as instances of Synchronized Planarity. In particular, this lets us solve Clustered Planarity in quadratic time, where the most efficient previously known algorithm has an upper bound of $O(n^{8})$.
Considering different universality classes of the QCD phase transitions, we perform Monte Carlo simulations of the 3-dimensional $O(1, 2, 4)$ models at vanishing and non-vanishing external field, respectively. High cumulants of the order parameter and energy from the O(1) (Ising) spin model, and cumulants of the energy from the O(2) and O(4) spin models, are presented. The critical features of the cumulants are discussed. They are instructive for understanding the high cumulants of the net baryon number in the QCD phase transitions.
The effects of cell size and deformability on the lateral migration and deformation of living cells flowing through a rectangular microchannel have been numerically investigated and compared with inertial-microfluidics data on the detection and separation of cells. The results of this work indicate that the cells move closer to the centerline if they are bigger and/or more deformable, and that their equilibrium position is largely determined by the solvent (cytosol) viscosity, which is much lower than the polymer (cytoskeleton) viscosity measured in most rheological systems. Simulations also suggest that decreasing channel dimensions leads to larger differences in equilibrium position for particles of different viscoelastic properties, giving design guidance for the next generation of microfluidic cell separation chips.
Many software systems today face uncertain operating conditions, such as sudden changes in the availability of resources or unexpected user behavior. Without proper mitigation these uncertainties can jeopardize the system goals. Self-adaptation is a common approach to tackle such uncertainties. When the system goals may be compromised, the self-adaptive system has to select the best adaptation option to reconfigure by analyzing the possible adaptation options, i.e., the adaptation space. Yet, analyzing large adaptation spaces using rigorous methods can be resource- and time-consuming, or even be infeasible. One approach to tackle this problem is by using online machine learning to reduce adaptation spaces. However, existing approaches require domain expertise to perform feature engineering to define the learner, and support online adaptation space reduction only for specific goals. To tackle these limitations, we present 'Deep Learning for Adaptation Space Reduction Plus' -- DLASeR+ in short. DLASeR+ offers an extendable learning framework for online adaptation space reduction that does not require feature engineering, while supporting three common types of adaptation goals: threshold, optimization, and set-point goals. We evaluate DLASeR+ on two instances of an Internet-of-Things application with increasing sizes of adaptation spaces for different combinations of adaptation goals. We compare DLASeR+ with a baseline that applies exhaustive analysis and two state-of-the-art approaches for adaptation space reduction that rely on learning. Results show that DLASeR+ is effective with a negligible effect on the realization of the adaptation goals compared to an exhaustive analysis approach, and supports three common types of adaptation goals beyond the state-of-the-art approaches.
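As a rough illustration of the general idea of learning-driven adaptation space reduction for a threshold goal (a hand-rolled sketch, not DLASeR+'s deep learning architecture; the feature encoding of adaptation options and the classifier choice are assumptions):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# An online classifier predicts which adaptation options will satisfy a
# threshold goal; only those are passed to the expensive rigorous analysis.
clf = SGDClassifier(loss="log_loss")   # supports incremental (online) learning

def bootstrap(features, labels):
    """Seed the model with one fully analyzed adaptation cycle."""
    clf.partial_fit(features, labels, classes=np.array([0, 1]))

def reduce_space(options, features):
    """Keep only the options predicted to satisfy the threshold goal."""
    keep = clf.predict(features) == 1
    return [opt for opt, k in zip(options, keep) if k]

def online_update(features, verified_labels):
    """Refine the model with labels verified by the rigorous analysis."""
    clf.partial_fit(features, verified_labels)
```

Note that DLASeR+ replaces the linear model and hand-crafted features above with deep learning precisely to avoid the feature engineering this sketch assumes.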
Traversable wormholes are objects that have attracted considerable interest in recent years because of their geometric features and their relation with exotic matter. In this paper we present a review of the principal characteristics of traversable Morris-Thorne wormholes, their construction process, and some aspects of the exotic matter that is needed in order to maintain them. Then, we use a junction process to obtain two specific wormhole solutions in the (2+1) gravity formalism with negative cosmological constant. The obtained solutions represent traversable wormholes joined to an external spacetime corresponding to the BTZ black hole solution without angular momentum and without electric charge. We also show that exotic matter, i.e., matter that violates the energy conditions, is needed to maintain these wormholes.
We report measurements of optical absorption in the zig-zag antiferromagnet $\alpha$-RuCl$_3$ as a function of temperature, $T$, magnetic field, $B$, and photon energy, $\hbar\omega$ in the range $\sim$ 0.3 to 8.3 meV, using time-domain terahertz spectroscopy. Polarized measurements show that 3-fold rotational symmetry is broken in the honeycomb plane from 2 K to 300 K. We find a sharp absorption peak at 2.56 meV upon cooling below the N\'eel temperature of 7 K at $B=0$ that we identify as magnetic-dipole excitation of a zero-wavevector magnon, or antiferromagnetic resonance (AFMR). With application of $B$, the AFMR broadens and shifts to lower frequency as long-range magnetic order is lost in a manner consistent with transitioning to a spin-disordered phase. From direct, internally calibrated measurement of the AFMR spectral weight, we place an upper bound on the contribution to the $dc$ susceptibility from a magnetic excitation continuum.
Every known artificial deep neural network (DNN) corresponds to an object in a canonical Grothendieck topos; its learning dynamic corresponds to a flow of morphisms in this topos. Invariance structures in the layers (like CNNs or LSTMs) correspond to Giraud's stacks. This invariance is supposed to be responsible for the generalization property, that is, extrapolation from learning data under constraints. The fibers represent pre-semantic categories (Culioli, Thom), over which artificial languages are defined, with internal logics: intuitionistic, classical, or linear (Girard). The semantic functioning of a network is its ability to express theories in such a language for answering questions in output about input data. Quantities and spaces of semantic information are defined by analogy with the homological interpretation of Shannon's entropy given by P. Baudot and D. Bennequin in 2015. They generalize the measures found by Carnap and Bar-Hillel (1952). Remarkably, the above semantic structures are classified by geometric fibrant objects in a closed model category of Quillen; they then give rise to homotopical invariants of DNNs and of their semantic functioning. Intensional type theories (Martin-Löf) organize these objects and the fibrations between them. Information contents and exchanges are analyzed by Grothendieck's derivators.
In this paper, we introduce the concept of the (higher order) Appell-Carlitz numbers, which unifies the definitions of several special numbers in positive characteristic, such as the Bernoulli-Carlitz numbers and the Cauchy-Carlitz numbers. Their generating function is usually called a Hurwitz series in function field arithmetic. By using Hasse-Teichm\"uller derivatives, we also obtain several properties of the (higher order) Appell-Carlitz numbers, including a recurrence formula, two closed-form expressions, and a determinant expression. The recurrence formula implies Carlitz's recurrence formula for Bernoulli-Carlitz numbers. The two closed-form expressions imply the corresponding results for Bernoulli-Carlitz and Cauchy-Carlitz numbers. The determinant expression implies the corresponding results for Bernoulli-Carlitz and Cauchy-Carlitz numbers, which are analogues of the classical determinant expressions of Bernoulli and Cauchy numbers stated in an article by Glaisher in 1875.
We consider the problem of partitioning the node set of a graph into $k$ sets of given sizes in order to \emph{minimize the cut} obtained using (removing) the $k$-th set. If the resulting cut has value $0$, then we have obtained a vertex separator. This problem is closely related to the graph partitioning problem. In fact, the model we use is the same as that for the graph partitioning problem except for a different \emph{quadratic} objective function. We look at known and new bounds obtained from various relaxations for this NP-hard problem. This includes: the standard eigenvalue bound, projected eigenvalue bounds using both the adjacency matrix and the Laplacian, quadratic programming (QP) bounds based on recent successful QP bounds for the quadratic assignment problems, and semidefinite programming bounds. We include numerical tests for large and \emph{huge} problems that illustrate the efficiency of the bounds in terms of strength and time.
A key issue in tropical geometry is the lifting of intersection points to a non-Archimedean field. Here, we ask: Where can classical intersection points of planar curves tropicalize to? An answer should have two parts: first, identifying constraints on the images of classical intersections, and, second, showing that all tropical configurations satisfying these constraints can be achieved. This paper provides the first part: images of intersection points must be linearly equivalent to the stable tropical intersection by a suitable rational function. Several examples provide evidence for the conjecture that our constraints may suffice for part two.
We analyze the $\gamma p \to \eta p$ process from threshold up to 1.2 GeV, employing an effective Lagrangian approach that allows for a mixing of eta couplings of pseudoscalar and pseudovector nature. The mixing ratio of the couplings may serve as a quantitative estimate of the $SU_L(3)\times SU_R(3)$ extended chiral symmetry violation in this energy regime. The data analyzed (differential cross sections and asymmetries) show a preference for the pseudoscalar coupling -- the best fit yields a 91% pseudoscalar coupling component. We stress that a more conclusive answer to this question requires a more complete electromagnetic multipole database than the presently available one.
Two applications of mean-field calculations based on 3D coordinate-space techniques are presented. The first concerns the structure of odd-N superheavy elements that have been recently observed experimentally and shows the ability of the method to describe, in a self-consistent way, very heavy odd-mass nuclei. Our results are consistent with the experimental data. The second application concerns the introduction of correlations beyond a mean-field approach by means of projection techniques and configuration mixing. Results for Mg isotopes demonstrate that the restoration of rotational symmetry plays a crucial role in the description of 32Mg.
Although the Turing pattern is one of the most universal mechanisms for pattern formation, in its standard model the number of stripes changes with the system size, since the wavelength of the pattern is invariant: it fails to preserve the proportionality of the pattern, i.e., the ratio of the wavelength to the size, that is often required in biological morphogenesis. To overcome this problem, we show that the Turing pattern can preserve proportionality by introducing a catalytic chemical whose concentration depends on the system size. Several plausible mechanisms for such size dependence of the concentration are discussed. Following this general discussion, two models are studied in which the arising Turing patterns indeed preserve proportionality. The relevance of the present mechanism to biological morphogenesis is discussed from the viewpoint of its generality, robustness, and evolutionary accessibility.
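A back-of-the-envelope way to see the mechanism (a dimensional sketch, not the paper's specific models): a Turing wavelength is set by a diffusion constant $D$ and an effective reaction rate $k$ as

$$\lambda \sim 2\pi\sqrt{D/k},$$

so if the catalytic chemical controls the effective rate and its concentration scales with system size $L$ as $k \propto L^{-2}$, then $\lambda \propto L$ and the ratio $\lambda/L$ is size-invariant.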
A scheme for fine tuning of quantum operations to improve their performance is proposed. A quantum system in $\Lambda$ configuration with two-photon Raman transitions is considered without adiabatic elimination of the excited (intermediate) state. Conditional dynamics of the system is studied with focus on improving fidelity of quantum operations. In particular, the $\pi$ pulse and $\pi/2$ pulse quantum operations are considered. The dressed states for the atom-field system, with an atom driven on one transition by a classical field and on the other by a quantum cavity field, are found. A discrete set of detunings is given for which high fidelity of desired states is achieved. Analytical solutions for the quantum state amplitudes are found in the first order perturbation theory with respect to the cavity damping rate $\kappa$ and the spontaneous emission rate $\gamma$. Numerical solutions for higher values of $\kappa$ and $\gamma$ indicate a stabilizing role of spontaneous emission in the $\pi$ and $\pi/2$ pulse quantum operations. The idea can also be applied for excitation pulses of different shapes.
We report a new search for 12CO(1-0) emission in high-velocity clouds (HVCs) performed with the IRAM 30 m telescope. This search was motivated by the recent detection of cold dust emission in the HVCs of Complex C. Despite three times better spatial resolution and twice the sensitivity of previous studies, no CO emission is detected in the HVCs of Complex C down to a best 5 sigma limit of 0.16 K km/s at a 22'' resolution. The CO emission non-detection does not provide any evidence in favor of large amounts of molecular gas in these HVCs, and hence in favor of the infrared findings. We discuss different configurations which, however, allow us to reconcile the negative CO result with the presence of molecular gas and cold dust emission. H2 column densities higher than our detection limit, N(H2) = 3x10^{19} cm^{-2}, are expected to be confined in very small and dense clumps, with sizes 20 times smaller than the 0.5 pc clumps resolved in our observations, according to the results obtained in cirrus clouds, and might thus still be highly diluted. As a consequence, the inter-clump gas at the 1 pc scale has a volume density lower than 20 cm^{-3} and already appears too diffuse to excite the CO molecules. The observed physical conditions in the HVCs of Complex C also work against a CO emission detection. It has been shown that the CO-to-H2 conversion factor in low-metallicity media is 60 times higher than at solar metallicity, leading, for a given H2 column density, to a 60 times weaker integrated CO intensity. And the very low dust temperature estimated in these HVCs implies the possible presence of gas cold enough (< 20 K) to cause CO condensation onto dust grains under interstellar medium pressure conditions, and thus CO depletion in gas-phase observations.
Since their introduction, Transformer architectures have emerged as the dominant architectures for both natural language processing and, more recently, computer vision applications. An intrinsic limitation of this family of "fully-attentive" architectures arises from the computation of the dot-product attention, which grows both in memory consumption and number of operations as $O(n^2)$, where $n$ stands for the input sequence length, thus limiting the applications that require modeling very long sequences. Several approaches have been proposed so far in the literature to mitigate this issue, with varying degrees of success. Our idea takes inspiration from the world of lossy data compression (such as the JPEG algorithm) to derive an approximation of the attention module by leveraging the properties of the Discrete Cosine Transform. An extensive set of experiments shows that our method takes up less memory for the same performance, while also drastically reducing inference time. This makes it particularly suitable for real-time contexts on embedded platforms. Moreover, we expect that the results of our research might serve as a starting point for a broader family of deep neural models with reduced memory footprint. The implementation will be made publicly available at https://github.com/cscribano/DCT-Former-Public
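The gist of the approach can be sketched as follows (a simplified illustration of the general idea; the actual DCT-Former architecture differs in details such as where the transform is applied and how coefficients are retained):

```python
import numpy as np
from scipy.fft import dct

def dct_attention(Q, K, V, m=64):
    """Attention with K, V compressed to their first m DCT coefficients.

    Q, K, V: (n, d) arrays. Keeping only m << n low-frequency components
    along the sequence axis reduces the cost from O(n^2) to O(n*m).
    """
    K_c = dct(K, type=2, axis=0, norm="ortho")[:m]    # (m, d)
    V_c = dct(V, type=2, axis=0, norm="ortho")[:m]    # (m, d)
    scores = Q @ K_c.T / np.sqrt(Q.shape[1])          # (n, m)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                 # row-wise softmax
    return w @ V_c                                    # (n, d)
```

As in JPEG, the working hypothesis is that most of the signal's energy lives in the low-frequency DCT coefficients, so truncation loses little accuracy.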
The soliton wave model of action potentials, and the proposal of induced lipid pores, are potentially paradigm-shifting ideas which challenge accepted views of the Hodgkin-Huxley model and of protein-based ion channels. These two proposals are reviewed, and possible tests of each are presented. Also, three key non-electrical features seen during action potentials are reviewed: a shift in birefringence, the pattern of heat emission and absorption, and the expansion of cell diameter. How the soliton wave model uses the lipid phase transition to account for each of these three phenomena is contrasted with alternatives which might be consistent with the Hodgkin-Huxley model of action potentials. It is suggested that changes in membrane potential during the action potential might have a significant effect on membrane proteins and contribute to the production of each of these phenomena. A key assumption of the soliton wave model is that lipid phase transitions are common and adaptive in life. However, a review of the literature suggests that lipid phase transitions are often damaging to cells, and so are often avoided. There is a need for clearer evidence as to whether or not neurons actually do have liquid-crystalline to gel lipid phase transitions happening in them during action potentials, as the soliton model assumes. Several major paradigm shifts are associated with the soliton wave model and with the proposal of induced lipid pores, and such large claims require additional testing and a great deal of supportive evidence if they are to achieve broader acceptance.
The aim of this paper is to introduce a new Monte Carlo method based on importance sampling techniques for the simulation of stochastic differential equations. The main idea is to combine random walk on squares or rectangles methods with importance sampling techniques. A first advantage of this approach is that the weights can be easily computed from the density of the one-dimensional Brownian motion. Compared to the Euler scheme, this method allows one to obtain a more accurate approximation of diffusions when one has to consider complex boundary conditions. The method also provides an interesting alternative for performing variance reduction and simulating rare events.
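As a minimal illustration of how such importance sampling weights work (generic drift tilting on Gaussian increments, not the authors' random walk on squares; the target probability and tilt parameter are arbitrary choices):

```python
import numpy as np

def tilted_estimate(theta=2.0, T=1.0, n_steps=100, n_paths=100_000, level=2.0):
    """Estimate P(W_T > level) for a Brownian motion W by drift tilting."""
    dt = T / n_steps
    rng = np.random.default_rng(0)
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    W = (dW + theta * dt).cumsum(axis=1)     # paths under the tilted measure
    # likelihood-ratio weight for a constant drift theta, computed in closed
    # form from the one-dimensional Gaussian density of W_T:
    weights = np.exp(-theta * W[:, -1] + 0.5 * theta**2 * T)
    return np.mean((W[:, -1] > level) * weights)

# sanity check: the exact value is P(W_1 > 2) = 1 - Phi(2) ~ 0.0228
```

The tilt pushes paths toward the rare region, and the weights, computed from the Gaussian density, undo the bias on average.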
We study D-instanton contributions to supergraviton scattering amplitudes in the ten-dimensional type IIB superstring theory beyond the leading non-perturbative order. In particular, we determine the one-D-instanton contribution to maximal R-symmetry violating (MRV) amplitudes with arbitrary momenta at the first subleading order in string coupling, as well as the effects of a D-/anti-D-instanton pair at leading nontrivial order in the momentum expansion. These results confirm a number of predictions of S-duality, and unveil some previously unknown pieces of type IIB string amplitudes. Our computation is based on the Neveu-Schwarz-Ramond formalism with picture changing operators and vertical integration. The naive on-shell prescription for D-instanton mediated amplitudes, based on integration over the moduli space of worldsheet geometries as well as the moduli space of D-instanton boundary conditions, suffers from potential open string divergences and regularization ambiguities that are in principle resolved in the framework of open+closed superstring field theory. In this paper, the "on-shell ambiguities" of one-D-instanton MRV amplitudes are resolved by arguments involving supersymmetry and soft limits, part of which is verified by a string field theoretic computation in a highly nontrivial manner.
We propose a family of renormalization group transformations characterized by free parameters that may be tuned in order to reduce the truncation effects. As a check we test them in the three dimensional XY model. The Schwinger--Dyson equations are used to study the renormalization group flow.
We extend the known classification of threefolds of general type that are complete intersections to various classes of non-complete intersections, and find other classes of polarised varieties, including Calabi-Yau threefolds with canonical singularities, that are not complete intersections. Our methods apply more generally to construct orbifolds described by equations in given Gorenstein formats.
Alzheimer's disease (AD), as a progressive brain disease, affects cognition, memory, and behavior. Similarly, limbic-predominant age-related TDP-43 encephalopathy (LATE) is a recently defined common neurodegenerative disease that mimics the clinical symptoms of AD. At present, the risk factors implicated in LATE and those distinguishing LATE from AD are largely unknown. We leveraged an integrated feature selection-based algorithmic approach, to identify important factors differentiating subjects with LATE and/or AD from Control on significantly imbalanced data. We analyzed two datasets ROSMAP and NACC and discovered that alcohol consumption was a top lifestyle and environmental factor linked with LATE and AD and their associations were differential. In particular, we identified a specific subpopulation consisting of APOE e4 carriers. We found that, for this subpopulation, light-to-moderate alcohol intake was a protective factor against both AD and LATE, but its protective role against AD appeared stronger than LATE. The codes for our algorithms are available at https://github.com/xinxingwu-uk/PFV.
We compare the diffraction-limited field of view (FOV) provided by four types of off-axis Gregorian telescopes: the classical Gregorian, the aplanatic Gregorian, and designs that cancel astigmatism and both astigmatism and coma. The analysis is carried out using telescope parameters that are appropriate for satellite and balloon-borne millimeter and sub-millimeter wave astrophysics. We find that the design that cancels both coma and astigmatism provides the largest flat FOV, about 21 square degrees. We also find that the FOV can be increased by about 15% by optimizing the shape and location of the focal surface.
Today's highly heterogeneous computing landscape places a burden on programmers wanting to achieve high performance on a reasonably broad cross-section of machines. To do so, computations need to be expressed in many different but mathematically equivalent ways, with, in the worst case, one variant per target machine. Loo.py, a programming system embedded in Python, meets this challenge by defining a data model for array-style computations and a library of transformations that operate on this model. Offering transformations such as loop tiling, vectorization, storage management, unrolling, instruction-level parallelism, change of data layout, and many more, it provides a convenient way to capture, parametrize, and re-unify the growth among code variants. Optional, deep integration with numpy and PyOpenCL provides a convenient computing environment where the transition from prototype to high-performance implementation can occur in a gradual, machine-assisted form.
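A small example in the spirit of the loopy tutorial (exact API details such as tag names may vary between versions): the kernel is stated once as a loop domain plus instructions, and transformations are then applied as ordinary function calls.

```python
import numpy as np
import pyopencl as cl
import loopy as lp

# state the computation once: loop domain + instruction
knl = lp.make_kernel(
    "{ [i]: 0 <= i < n }",
    "out[i] = 2*a[i]")

# transformation: split the loop and map it onto OpenCL work-groups/items
knl = lp.split_iname(knl, "i", 128, outer_tag="g.0", inner_tag="l.0")

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
a = np.arange(10**6, dtype=np.float32)
evt, (out,) = knl(queue, a=a)   # loopy infers n from the argument's shape
```

Swapping in a different split size or different tags yields a new code variant from the same kernel, which is the re-unification of variants mentioned above.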