A Fibonacci string is a length-n binary string containing no two consecutive 1s. Fibonacci cubes (FC), Extended Fibonacci cubes (EFC) and Lucas cubes (LC) are subgraphs of the hypercube defined in terms of Fibonacci strings. All these cubes were introduced in the last ten years as models for interconnection networks, and their network topology was shown to possess many interesting properties that are important in parallel processor network design and parallel applications. In this paper, we propose a new family of Fibonacci-like cubes, namely the Extended Lucas Cube (ELC). We address the following network simulation problem: given a linear array, a ring or a two-dimensional mesh, how can its nodes be assigned to ELC nodes so as to keep their adjacent nodes near each other in the ELC? We first show the simple fact that there is a Hamiltonian path and cycle in any ELC. We prove that any linear array and ring network can be embedded into its corresponding optimum ELC (the smallest ELC with at least the number of nodes in the ring) with dilation 1, which is optimum for most cases. Then, we describe dilation-1 embeddings of a class of meshes into their corresponding optimum ELC.
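A minimal illustration of the objects above (not from the paper): the Fibonacci strings indexing the nodes of these cubes are easy to enumerate, their count is the Fibonacci number F(n+2), and two strings are adjacent in the Fibonacci cube exactly when they differ in one bit.

# Python sketch: enumerate length-n binary strings with no two consecutive 1s.
from itertools import product

def fibonacci_strings(n):
    """All length-n binary strings containing no two consecutive 1s."""
    return ["".join(bits) for bits in product("01", repeat=n)
            if "11" not in "".join(bits)]

for n in range(1, 7):
    print(n, len(fibonacci_strings(n)))   # counts 2, 3, 5, 8, 13, 21 = F(n+2)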
Various results on factorisations of complete graphs into circulant graphs and on 2-factorisations of these circulant graphs are proved. As a consequence, a number of new results on the Oberwolfach Problem are obtained. For example, a complete solution to the Oberwolfach Problem is given for every 2-regular graph of order 2p, where p ≡ 5 (mod 8) is prime.
When a biased conductor is put in proximity with an unbiased conductor, a drag current can be induced in the absence of detailed balance. This is known as the Coulomb drag effect. However, even in this situation far from equilibrium, where detailed balance is explicitly broken, theory predicts that fluctuation relations are satisfied. This surprising effect has, to date, not been confirmed experimentally. Here we propose a system consisting of a capacitively coupled double quantum dot where the nonlinear fluctuation relations can be verified in the absence of detailed balance.
Let $\mathfrak q$ be a finite-dimensional Lie algebra, $\vartheta\in Aut(\mathfrak q)$ a finite order automorphism, and $\mathfrak q_0$ the subalgebra of fixed points of $\vartheta$. Using $\vartheta$ one can construct a pencil $\mathcal P$ of compatible Poisson brackets on $S(\mathfrak q)$, and thereby a `large' Poisson-commutative subalgebra $Z(\mathfrak q,\vartheta)$ consisting of $\mathfrak q_0$-invariants in $S(\mathfrak q)$. We study one particular bracket $\{\,\,,\,\}_{\infty}\in\mathcal P$ and the related Poisson centre ${\mathcal Z}_\infty$. It is shown that ${\mathcal Z}_\infty$ is a polynomial ring, if $\mathfrak q$ is reductive.
Structural balance modeling for signed graph networks shows how to model the sources of conflict. The state-of-the-art has focused on computing the frustration index of a signed graph as a critical step toward solving problems in social and sensor networks and in scientific modeling. However, the proposed approaches do not scale to modern large, sparse signed networks. Nor do they address the fact that in some networks there is more than one way to reach a consensus with the minimum number of edge-sign switches. We propose an efficient balanced-state discovery algorithm and a network frustration computation that discover the nearest balanced state for a graph network of \emph{any} size and compute the frustration of the network. The proposed method is around 300 times faster than the state-of-the-art for signed graphs with hundreds of thousands of edges. The technique successfully scales to find the balanced states and frustration of networks with millions of nodes and edges in real time, where the state-of-the-art fails.
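To fix terminology, a hypothetical brute-force sketch (not the paper's scalable algorithm): a signed graph is balanced iff its nodes admit a +/-1 assignment under which every edge sign equals the product of its endpoint signs, and the frustration index is the minimum number of violated edges over all such assignments.

# Python sketch; assumes networkx and an edge attribute "sign" in {+1, -1}.
import networkx as nx

def frustration(G, s):
    """Count edges whose sign disagrees with the product of endpoint signs."""
    return sum(1 for u, v, d in G.edges(data=True) if s[u] * s[v] != d["sign"])

G = nx.Graph()
G.add_edge("a", "b", sign=+1)
G.add_edge("b", "c", sign=-1)
G.add_edge("a", "c", sign=+1)   # a triangle with one negative edge is unbalanced
# Brute force over assignments (s["a"] fixed to +1 by the global flip symmetry):
print(min(frustration(G, {"a": 1, "b": sb, "c": sc})
          for sb in (-1, 1) for sc in (-1, 1)))   # frustration index = 1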
With growing awareness of the societal impact of artificial intelligence, fairness has become an important aspect of machine learning algorithms. The issue is that human biases towards certain groups of the population, defined by sensitive features like race and gender, are introduced into the training data through data collection and labeling. Two important directions of fairness-ensuring research have focused on (i) instance weighting, in order to decrease the impact of more biased instances, and (ii) adversarial training, in order to construct data representations informative of the target variable but uninformative of the sensitive attributes. In this paper we propose a Fair Adversarial Instance Re-weighting (FAIR) method, which uses adversarial training to learn an instance-weighting function that ensures fair predictions. Merging the two paradigms, it inherits desirable properties from both -- the interpretability of reweighting and the end-to-end trainability of adversarial training. We propose four different variants of the method and, among other things, demonstrate how the method can be cast in a fully probabilistic framework. Additionally, we provide an extensive theoretical analysis of the properties of FAIR models. We compare FAIR models to 7 other related and state-of-the-art models and demonstrate that FAIR is able to achieve a better trade-off between accuracy and unfairness. To the best of our knowledge, this is the first model that merges the reweighting and adversarial approaches by means of a weighting function that can provide interpretable information about the fairness of individual instances.
Team diversity can be seen as a double-edged sword: it brings additional cognitive resources to teams at the risk of increased conflict. Few studies have investigated how different types of diversity impact software teams. This study views diversity through the lens of the categorization-elaboration model (CEM). We investigated how diversity in gender, age, role, and cultural background impacts team effectiveness and conflict, and how these associations are moderated by psychological safety. Our sample consisted of 1,118 participants from 161 teams and was analyzed with Covariance-Based Structural Equation Modeling (CB-SEM). We found a positive effect of age diversity on team effectiveness and of gender diversity on relational conflict. Psychological safety contributed directly to effective teamwork and less conflict but did not moderate the diversity-effectiveness link. While our results are consistent with CEM theory for age and gender diversity, other types of diversity did not yield similar results. We discuss several reasons for this, including curvilinear effects, moderators such as task interdependence, or the presence of a diversity mindset. With this paper, we argue that a dichotomous view of diversity is oversimplified; it is rather a complex relationship in which context plays a pivotal role. A more nuanced understanding of diversity through the lens of theories such as the CEM may lead to more effective teamwork.
A series of Yb2Ti2O7 doped samples demonstrates the effects of off-stoichiometry on Yb2Ti2O7's structure, properties, and magnetic ground state via x-ray diffraction, specific heat, and magnetization measurements. A stoichiometric single crystal of Yb2Ti2O7 grown by the traveling solvent floating zone technique (solvent = 30 wt% rutile TiO2 and 70 wt% Yb2Ti2O7) is characterized and evaluated in light of this series. Our data shows that upon positive x doping, the cubic lattice parameter a increases and the Curie-Weiss temperature decreases. Heat capacity measurements of stoichiometric Yb2Ti2O7 samples exhibit a sharp, first-order peak at T = 268(4) mK that is suppressed in magnitude and temperature in samples doped off ideal stoichiometry. The full entropy recovered per Yb ion is 5.7 J/K ~ Rln2. Our work establishes the effects of doping on Yb2Ti2O7's physical properties, which provides further evidence indicating that previous crystals grown by the traditional floating zone method are doped off ideal stoichiometry. Additionally, we present how to grow high-quality colorless single crystals of Yb2Ti2O7 by the traveling solvent floating zone growth method.
Coincident D3-branes placed at a conical singularity are related to string theory on $AdS_5\times X_5$, for a suitable five-dimensional Einstein manifold $X_5$. For the example of the conifold, which leads to $X_5=T^{1,1}=(SU(2)\times SU(2))/U(1)$, the infrared limit of the theory on $N$ D3-branes was constructed recently. This is ${\cal N}=1$ supersymmetric $SU(N)\times SU(N)$ gauge theory coupled to four bifundamental chiral superfields and supplemented by a quartic superpotential which becomes marginal in the infrared. In this paper we consider D3-branes wrapped over the 3-cycles of $T^{1,1}$ and identify them with baryon-like chiral operators built out of products of $N$ chiral superfields. The supergravity calculation of the dimensions of such operators agrees with field theory. We also study the D5-brane wrapped over a 2-cycle of $T^{1,1}$, which acts as a domain wall in $AdS_5$. We argue that upon crossing it the gauge group changes to $SU(N)\times SU(N+1)$. This suggests a construction of supergravity duals of ${\cal N}=1$ supersymmetric $SU(N_1)\times SU(N_2)$ gauge theories.
A coincident D-brane - anti-D-brane pair has a tachyonic mode. We present an argument showing that at the classical minimum of the tachyonic potential the negative energy density associated with the potential exactly cancels the sum of the tension of the brane and the anti-brane, thereby giving a configuration of zero energy density and restoring space-time supersymmetry.
The higher topological complexity of a space $X$, $\text{TC}_r(X)$, $r=2,3,\ldots$, and the topological complexity of a map $f$, $\text{TC}(f)$, have been introduced by Rudyak and Pave\v{s}i\'{c}, respectively, as natural extensions of Farber's topological complexity of a space. In this paper we introduce a notion of higher topological complexity of a map~$f$, $\text{TC}_{r,s}(f)$, for $r\geq2$ and $1\leq s\leq r$, which simultaneously extends Rudyak's and Pave\v{s}i\'{c}'s notions. Our unified concept is relevant in the $r$-multitasking motion planning problem associated to a robot device when the forward kinematics map plays a role in $s$ prescribed stages of the motion task. We study the homotopy invariance and the behavior of $\text{TC}_{r,s}$ under products and compositions of maps, as well as the dependence of $\text{TC}_{r,s}$ on $r$ and $s$. We draw general estimates for $\text{TC}_{r,s}(f\colon X\to Y)$ in terms of categorical invariants associated to $X$, $Y$ and $f$. In particular, we describe within one the value of $\text{TC}_{r,s}$ in the case of the non-trivial double covering over real projective spaces, as well as for their complex counterparts.
In the field of multi-class anomaly detection, reconstruction-based methods derived from single-class anomaly detection face the well-known challenge of "learning shortcuts", wherein the model fails to learn the patterns of normal samples as it should, opting instead for shortcuts such as identity mapping or artificial noise elimination. Consequently, the model becomes unable to reconstruct genuine anomalies as normal instances, resulting in a failure of anomaly detection. To counter this issue, we present a novel unified feature-reconstruction-based anomaly detection framework termed RLR (Reconstruct features from a Learnable Reference representation). Unlike previous methods, RLR utilizes learnable reference representations to compel the model to learn normal feature patterns explicitly, thereby preventing the model from succumbing to the "learning shortcuts" issue. Additionally, RLR incorporates locality constraints into the learnable reference to facilitate more effective normal-pattern capture and utilizes a masked learnable-key attention mechanism to enhance robustness. Evaluation of RLR on the 15-category MVTec-AD dataset and the 12-category VisA dataset shows superior performance compared to state-of-the-art methods under the unified setting. The code of RLR will be made publicly available.
In arXiv:1303.1129, the authors provided a bound for the palindromic width of the free abelian-by-nilpotent group $AN_n$ of rank $n$ and of the free nilpotent group ${\rm N}_{n,r}$ of rank $n$ and step $r$. In the present paper we study the palindromic widths of the groups $\widetilde{AN}_n$ and $\widetilde{\rm N}_{n,r}$. We denote by $\widetilde{G}_n = G_n / \langle \langle x_1^2, \ldots, x_n^2 \rangle \rangle$ the quotient of the group $G_n = \langle x_1, \ldots, x_n \rangle$, which is free in some variety, by the normal subgroup generated by $x_1^2, \ldots, x_n^2$. We prove that the palindromic width of the quotient $\widetilde{AN}_n$ is finite and bounded by $3n$. We also prove that the palindromic width of the quotient $\widetilde{\rm N}_{n, 2}$ is precisely $2(n-1)$. We improve the lower bound on the palindromic width of ${\rm N}_{n, r}$: we prove that the palindromic width of ${\rm N}_{n, r}$, $r\geq 2$, is at least $2(n-1)$. We also improve the bound on the palindromic width of free metabelian groups, proving that the palindromic width of the free metabelian group of rank $n$ is at most $4n-1$.
Let $\varphi: G \times (M,d) \rightarrow (M,d)$ be a left action of a Lie group on a differentiable manifold endowed with a metric $d$ (distance function) compatible with the topology of $M$. Denote $gp:=\varphi(g,p)$. Let $X$ be a compact subset of $M$. Then the isotropy subgroup of $X$ is the closed subgroup of $G$ defined as $H_X:=\{g\in G; gX=X\}$. The induced Hausdorff metric is a metric on the left coset manifold $G/H_X$ defined as $d_X(gH_X,hH_X)=d_H(gX,hX)$, where $d_H$ is the Hausdorff distance in $M$. Suppose that $\varphi$ is transitive and that there exists $p\in M$ such that $H_X=H_p$. Then $gH_X \mapsto gp$ is a diffeomorphism that identifies $G/H_X$ and $M$. In this work we define a discrete dynamical system of metrics on $M$. Let $d^1=\hat d_X$, where $\hat d_X$ stands for the intrinsic metric associated to $d_X$. We can iterate $\varphi: G \times (M\equiv G/H_X,d^1)\rightarrow (M\equiv G/H_X,d^1)$ in order to get $d^2, d^3$ and so on. We study the particular case where $M=G$, the left action $\varphi: G\times (G,d) \rightarrow (G,d)$ is the product of $G$, $d$ is bounded above by a right invariant intrinsic metric on $G$, and $X\ni e$ is a finite subset of $G$. We prove that the sequence $d^i$ converges pointwise to a metric $d^\infty$. In addition, if $d$ is complete and the semigroup generated by $X$ is dense in $G$, then $d^\infty$ is the distance function of a right invariant $C^0$-Carnot-Carath\'eodory-Finsler metric. The case where $d^\infty$ is $C^0$-Finsler is studied in detail.
I review new results on s-channel helicity nonconservation (SCHNC) in diffractive DIS. I discuss how by virtue of unitarity diffractive DIS gives rise to spin structure functions which were believed to vanish at x << 1. These include tensor polarization of sea quarks in the deuteron, strong breaking of the Wandzura-Wilczek relation and demise of the Burkhardt-Cottingham sum rule.
The concept of CAP representations was introduced by Piatetski-Shapiro to elucidate the Saito-Kurokawa representations of $PGSp(4)$. In this paper we present a family of CAP representations for the group $Sp_{4n}(\mathbb A)$ through the application of the theta correspondence and Howe duality to the reductive dual pair $(O_{2n}(\mathbb A) , Sp_{4n}(\mathbb A))$. The construction proceeds by building a non-trivial automorphic character of $O_{2n}(\mathbb A)$, lifting it to an irreducible cuspidal automorphic representation $\pi=\otimes'_{\nu}\pi_{\nu}$ of $Sp_{4n}(\mathbb A)$, and providing a detailed characterization of the representations $\pi_{\nu}$ of $Sp_{4n}(\mathbb F_\nu)$ at almost all places $\nu$.
We investigate a stochastic network composed of Integrate-and-Fire spiking neurons, focusing on its mean-field asymptotics. We consider an invariant probability measure of the McKean-Vlasov equation and establish an explicit sufficient condition to ensure the local stability of this invariant distribution. Furthermore, we provide a proof of a conjecture originally proposed by J. Touboul and P. Robert regarding the bistable nature of a specific instance of this neuronal model.
Entanglement does not correspond to any observable and its evaluation always corresponds to an estimation procedure where the amount of entanglement is inferred from the measurements of one or more proper observables. Here we address optimal estimation of entanglement in the framework of local quantum estimation theory and derive the optimal observable in terms of the symmetric logarithmic derivative. We evaluate the quantum Fisher information and, in turn, the ultimate bound to precision for several families of bipartite states, either for qubits or continuous variable systems, and for different measures of entanglement. We found that for discrete variables, entanglement may be efficiently estimated when it is large, whereas the estimation of weakly entangled states is an inherently inefficient procedure. For continuous variable Gaussian systems the effectiveness of entanglement estimation strongly depends on the chosen entanglement measure. Our analysis makes an important point of principle and may be relevant in the design of quantum information protocols based on the entanglement content of quantum states.
Explicit conditions are presented for the existence, uniqueness and ergodicity of the strong solution to a class of generalized stochastic porous media equations. Our estimate of the convergence rate is sharp according to the known optimal decay for the solution of the classical (deterministic) porous medium equation.
Edge and Fog computing paradigms overcome the limitations of cloud-centric execution for different latency-sensitive Internet of Things (IoT) applications by offering computing resources closer to the data sources. Small single-board computers (SBCs) like Raspberry Pis (RPis) are widely used as computing nodes in both paradigms. These devices are usually equipped with moderate-speed processors and provide support for peripheral interfacing and networking, making them well-suited to deal with IoT-driven operations such as data sensing, analysis, and actuation. However, these small Edge devices are constrained in facilitating multi-tenancy and resource sharing. The management of computing and peripheral resources through centralized entities further degrades their performance and service quality significantly. To address these issues, a fully distributed framework, named Con-Pi, is proposed in this work to manage resources in Edge or Fog environments. Con-Pi exploits the concept of containerization and harnesses Docker containers to run IoT applications as micro-services. Moreover, Con-Pi operates in a distributed manner across multiple RPis and enables them to share resources. The software system of the proposed framework also provides scope to integrate different IoT applications and resource and energy management policies for Edge and Fog computing. Its performance is compared with state-of-the-art frameworks through real-world experiments. The experimental results show that Con-Pi outperforms others in enhancing response time and managing energy usage and computing resources through its distributed offloading model. Further, we have developed an automated pest bird deterrent system using Con-Pi to demonstrate its suitability for developing practical solutions for various IoT-enabled use cases, including smart agriculture.
Intelligence analysts face a difficult problem: distinguishing extremist rhetoric from potential extremist violence. Many authors of extremist posts are content to express abuse against some target group, but only a few indicate a willingness to engage in violence. We address this problem by building a predictive model for intent, bootstrapping from a seed set of intent words and language templates expressing intent. We design both an n-gram and an attention-based deep learner for intent and use them as co-learners to improve both the basis for prediction and the predictions themselves. They converge to stable predictions in a few rounds. We merge predictions of intent with predictions of abusive language to detect posts that indicate a desire for violent action. We validate the predictions by comparing them to crowd-sourced labelling. The methodology can be applied to other linguistic properties for which a plausible starting point can be defined.
In this paper we study utility maximization with proportional transaction costs. Assuming extended weak convergence of the underlying processes we prove the convergence of the corresponding utility maximization problems. Moreover, we establish a limit theorem for the optimal trading strategies. The proofs are based on the extended weak convergence theory developed in [1] and the Meyer--Zheng topology introduced in [24].
The propagation of electromagnetic fields in matter has been the subject of intensive study since the discovery of its rich dynamics. Impedance measurement is one of the most widely used techniques available to study material properties as well as electromagnetic devices and circuits. Thus, novelties in device construction and circuit technology associated with new material properties and/or unusual field dynamics generally rely on results supported by impedance data. Recent advances in nanostructured materials explore astounding molecular properties arising at the nanoscale and apply them to studies focused on the generation of new devices. Accordingly, properties inherent to quantum dynamics can also generate unusual circuit elements not included in the original development of electromagnetic theory. On the same footing, advances in field dynamics could also determine the advent of new technologies, producing an immediate impact on our everyday life. In this work we present the results obtained by measuring the impedance of single loops and coils of specific geometry in the MHz range. They demonstrate that a new passive circuit element was found, which bears out the existence of an as yet unobserved propagation mode of the electromagnetic fields in matter. Our results also indicate that this effect is more evident using carbon-made loops.
We studied the use of deep neural networks (DNNs) in the numerical solution of the oscillatory Fredholm integral equation of the second kind. It is known that the solution of the equation exhibits certain oscillatory behaviors due to the oscillation of the kernel. It was pointed out recently that standard DNNs favour low-frequency functions, and as a result they often produce poor approximations of functions containing high-frequency components. We addressed this issue in this study. We first developed a numerical method for solving the equation with DNNs as an approximate solution, by designing a numerical quadrature tailored to computing oscillatory integrals involving DNNs. We proved that the error of the DNN approximate solution of the equation is bounded by the training loss and the quadrature error. We then proposed a multi-grade deep learning (MGDL) model to overcome the spectral bias issue of neural networks. Numerical experiments demonstrate that the MGDL model is effective in extracting multiscale information of the oscillatory solution and overcoming the spectral bias issue from which a standard DNN model suffers.
It is generally acknowledged that most complex diseases are affected in part by interactions between genes and/or between genes and environmental factors. Taking into account environmental exposures and their interactions with genetic factors in genome-wide association studies (GWAS) can help to identify high-risk subgroups in the population and provide a better understanding of the disease. For this reason, many methods have been developed to detect gene-environment (G*E) interactions. Despite this, few loci that interact with environmental exposures have been identified so far. Indeed, the modest effect of G*E interactions as well as confounding factors entail low statistical power to detect such interactions. In this work, we provide a simulated dataset in order to study methods for detecting G*E interactions in GWAS in the presence of a confounding factor and population structure. Our work applies a recently introduced non-subjective method for H1 simulations called waffect and exploits the publicly available HapMap project to build a dataset with real genotypes and population structure. We use this dataset to study the impact of confounding factors and compare the relative performance of popular methods such as PLINK, random forests and linear mixed models for detecting G*E interactions. The presence of a confounding factor is an obstacle to detecting G*E interactions in GWAS, and the approaches considered in our power study all have insufficient power to detect the strong simulated interaction. Our simulated dataset could help to develop new methods which account for confounding factors through latent exposures in order to improve power.
A fibered hyperbolic 3-manifold induces a map from the hyperbolic plane to hyperbolic 3-space, the respective universal covers of the fibre and the manifold. The induced map is an embedding that is exponentially distorted in terms of the individual metrics. In this article, we begin a study of the distortion along typical rays in the fibre. We verify that a typical ray in the hyperbolic plane makes linear progress in the ambient metric in hyperbolic 3-space. We formulate the proof in terms of some soft aspects of the geometry and basic ergodic theory. This enables us to extend the result to analogous contexts that correspond to certain extensions of closed surface groups. These include surface group extensions that are Gromov hyperbolic, the universal curve over a Teichm\"uller disc, and the extension induced by the Birman exact sequence.
The structure of integral manifolds in the Kovalevskaya problem of the motion of a heavy rigid body about a fixed point is considered. An analytic description of the bifurcation set is obtained, and bifurcation diagrams are constructed. The number of two-dimensional tori is indicated for each connected component of the complement of the bifurcation set in the space of the constants of the first integrals. The main topological bifurcations of the regular tori are described.
From a new perspective, we discuss the thermodynamic entropy of the $(n+2)$-dimensional Reissner-Nordstr\"om-de Sitter (RNdS) black hole and analyze the phase transition of the effective thermodynamic system. Considering the correlations between the black hole event horizon and the cosmological horizon, we conjecture that the total entropy of the RNdS black hole should contain an extra term besides the sum of the entropies of the two horizons. In the lukewarm case, the effective temperature of the RNdS black hole is the same as that of the black hole horizon and the cosmological horizon. Under this condition, we obtain the extra contribution to the total entropy. With the corrected entropy, we derive other effective thermodynamic quantities and analyze the phase transition of the RNdS black hole in analogy to the usual thermodynamic system.
We show that by "accelerating" relaxation enhancing flows, one can construct a flow that is smooth on $[0,1) \times \mathbb{T}^d$ but highly singular at $t=1$ so that for any positive diffusivity, the advection-diffusion equation associated to the accelerated flow totally dissipates solutions, taking arbitrary initial data to the constant function at $t=1$.
The detection of vascular structures from noisy images is a fundamental process for extracting meaningful information in many applications. Most well-known vascular enhancing techniques often rely on Hessian-based filters. This paper investigates the feasibility and deficiencies of detecting curve-like structures using a Hessian matrix. The main contribution is a novel enhancement function, which overcomes the deficiencies of established methods. Our approach has been evaluated quantitatively and qualitatively using synthetic examples and a wide range of real 2D and 3D biomedical images. Compared with other existing approaches, the experimental results prove that our proposed approach achieves high-quality curvilinear structure enhancement.
As graphical summaries for topological spaces and maps, Reeb graphs are common objects in the computer graphics and topological data analysis literature. Defining good metrics between these objects has become an important question for applications, where it matters to quantify the extent to which two given Reeb graphs differ. Recent contributions emphasize this aspect, proposing novel distances such as {\em functional distortion} or {\em interleaving} that are provably more discriminative than the so-called {\em bottleneck distance}, being true metrics whereas the latter is only a pseudo-metric. Their main drawback compared to the bottleneck distance is that they are comparatively hard (if at all possible) to evaluate. Here we take the opposite view on the problem and show that the bottleneck distance is in fact good enough {\em locally}, in the sense that it is able to discriminate a Reeb graph from any other Reeb graph in a small enough neighborhood, as efficiently as the other metrics do. This suggests considering the {\em intrinsic metrics} induced by these distances, which turn out to be all {\em globally} equivalent. This novel viewpoint on the study of Reeb graphs has a potential impact on applications, where one may not only be interested in discriminating between data but also in interpolating between them.
We present the results of a complete tree level calculation of the processes pp(\bar p) -> Wb\bar b and Wb\bar b + jet that includes the single top signal and all irreducible backgrounds simultaneously. In order to probe the structure of the Wtb coupling with the highest possible accuracy and to look for possible deviations from standard model predictions, we identify sensitive observables and propose an optimal set of cuts which minimizes the background compared to the signal. At the LHC, the single top and the single anti-top rates are different and the corresponding asymmetry yields additional information. The analysis shows that the sensitivity for anomalous couplings will be improved at the LHC by a factor of 2--3 compared to the expectations for the first measurements at the upgraded Tevatron. Still, the bounds on anomalous couplings obtained at hadron colliders will remain 2--8 times larger than those from high energy gamma-e colliders, which will, however, not be available for some time. All basic calculations have been carried out using the computer package CompHEP. The known NLO corrections to the single top rate have been taken into account.
The integration of Transparent Displays (TD) in various applications, such as Heads-Up Displays (HUDs) in vehicles, is a burgeoning field, poised to revolutionize user experiences. However, this innovation brings forth significant challenges in realtime human-device interaction, particularly in accurately identifying and tracking a user's gaze on dynamically changing TDs. In this paper, we present a two-fold robust and efficient systematic solution for realtime gaze monitoring, comprised of: (1) a tree-based algorithm for identifying and dynamically tracking gaze targets (i.e., moving, size-changing, and overlapping 2D content) projected on a transparent display, in realtime; (2) a multi-stream self-attention architecture to estimate the depth-level of human gaze from eye tracking data, to account for the display's transparency and preventing undesired interactions with the TD. We collected a real-world eye-tracking dataset to train and test our gaze monitoring system. We present extensive results and ablation studies, including inference experiments on System on Chip (SoC) evaluation boards, demonstrating our model's scalability, precision, and realtime feasibility in both static and dynamic contexts. Our solution marks a significant stride in enhancing next-generation user-device interaction and experience, setting a new benchmark for algorithmic gaze monitoring technology in dynamic transparent displays.
In their classical work (Proc. Natl. Acad. Sci. USA, 1981, 78:6840-6844), Goldbeter and Koshland mathematically analyzed a reversible covalent modification system which is highly sensitive to the concentration of effectors. Its signal-response curve appears sigmoidal, constituting a biochemical switch. However, the switch behavior only emerges in the "zero-order region", i.e. when the signal molecule concentration is much lower than that of the substrate it modifies. In this work we showed that the switching behavior can also occur under comparable concentrations of signals and substrates, provided that the signal molecules catalyze the modification reaction cooperatively. We also studied the effect of dynamic disorder on the proposed biochemical switch, in which the enzymatic reaction rates, instead of being constant, appear as stochastic functions of time. We showed that the system is robust to dynamic disorder at bulk concentrations. But if the dynamic disorder is quasi-static, large fluctuations of the switch response behavior may be observed at low concentrations. Such fluctuation is relevant to many biological functions. It can be reduced by either increasing the conformational interconversion rate of the protein, or correlating the enzymatic reaction rates in the network.
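For reference, the steady-state response of the original Goldbeter-Koshland cycle has the well-known closed form sketched below; this shows only the classical zero-order switch, not the cooperative comparable-concentration variant studied in this work.

# Python sketch of the standard Goldbeter-Koshland function.
import numpy as np

def goldbeter_koshland(v1, v2, J1, J2):
    """Steady-state modified fraction; v1, v2 are the maximal modification and
    demodification rates, J1, J2 the Michaelis constants scaled by total substrate."""
    B = v2 - v1 + J1 * v2 + J2 * v1
    return 2.0 * v1 * J2 / (B + np.sqrt(B**2 - 4.0 * (v2 - v1) * v1 * J2))

v1 = np.linspace(0.1, 2.0, 9)                     # signal-controlled rate
print(goldbeter_koshland(v1, 1.0, 0.01, 0.01))    # sharp, switch-like rise near v1 = v2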
This is a pedagogical guide to work on this subject, which began in the 1980s but has seen vibrant activity in the last decade. It aims to help orient readers, especially students, who wish to enter research but are bewildered by the vast and diverse literature on the subject. We describe the three main veins of activity, the Euclidean zero-mode dominance, the Lorentzian interacting quantum field theory and the classical stochastic field theory approaches, in some detail, explaining the underlying physics and the technicalities of each. We show how these approaches are interconnected, and highlight recent papers which contain germs of worthy directions for future developments.
The world's first samples of Ti+SS and Nb+SS joints were manufactured by an explosion welding technology, demonstrating high mechanical properties and the absence of leaks at 4.6 x 10^{-9} atm-cc/sec. Residual stresses in the bimetallic joints resulting from explosion welding, measured by the neutron diffraction method, are quite high (~1000 MPa). Thermal tempering of the explosion-welded Ti+SS and Nb+SS specimens leads to complete relaxation of the internal stresses in Ti, Nb and stainless steel and makes the transition elements quite serviceable.
Data sizes that cannot be processed by conventional data storage and analysis systems are referred to as Big Data. The term also refers to new technologies developed to store, process and analyze large amounts of data. Automatic information retrieval about the contents of a large number of documents produced by different sources, identifying research fields and topics, extraction of document abstracts, and discovering patterns are some of the topics that have been studied in the field of big data. In this study, the Naive Bayes classification algorithm is run on a data set consisting of scientific articles in order to automatically determine the classes to which these documents belong. We have developed an efficient system that can analyze Turkish scientific documents with a distributed document classification algorithm run on a Cloud Computing infrastructure. The Apache Mahout library is used in the study. The servers required for classifying and clustering distributed documents are
We present results of a microscopic density functional theory study of wedge filling transitions, at a right-angle wedge, in the presence of dispersion-like wall-fluid forces. Far from the corner the walls of the wedge show a first-order wetting transition at a temperature $T_w$ which is progressively closer to the bulk critical temperature $T_c$ as the strength of the wall forces is reduced. In addition, the meniscus formed near the corner undergoes a filling transition at a temperature $T_f<T_w$, the value of which is found to be in excellent agreement with macroscopic predictions. We show that the filling transition is {\it first-order} if it occurs far from the critical point but is {\it continuous} if $T_f$ is close to $T_c$ even though the walls still show first-order wetting behaviour. For this continuous transition the distance of the meniscus from the apex grows as $\ell_w\approx (T_f-T)^{-\beta_w}$ with critical exponent $\beta_w\approx 0.46 \pm 0.05$ in good agreement with the phenomenological effective Hamiltonian prediction. Our results suggest that critical filling transitions, with accompanying large scale universal interfacial fluctuation effects, are more generic than thought previously, and are experimentally accessible.
By a recent result of Livingston, it is known that if a knot has a prime power branched cyclic cover that is not a homology sphere, then there is an infinite family of non-concordant knots having the same Seifert form as the knot. In this paper, we extend this result to its full generality. We show that if a knot has nontrivial Alexander polynomial, then there exists an infinite family of non-concordant knots having the same Seifert form as the knot. As a corollary, no nontrivial Alexander polynomial determines a unique knot concordance class. We use Cochran-Orr-Teichner's recent result on the knot concordance group and Cheeger-Gromov's von Neumann rho invariants with their universal bound for a 3-manifold.
One of the important problems in federated learning is how to deal with unbalanced data. This contribution introduces a novel technique designed to deal with label-skewed non-IID data using adversarial inputs, created by the I-FGSM method. Adversarial inputs guide the training process and allow the Weighted Federated Averaging to give more importance to clients with 'selected' local label distributions. Experimental results, gathered from image classification tasks on the MNIST and CIFAR-10 datasets, are reported and analyzed.
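As context for the adversarial inputs mentioned above, a minimal I-FGSM sketch, assuming a PyTorch classifier with inputs in [0, 1]; the federated weighting scheme built on top of it is not reproduced here.

import torch
import torch.nn.functional as F

def i_fgsm(model, x, y, eps=0.03, alpha=0.005, steps=10):
    """Iterative FGSM: repeated signed-gradient ascent steps on the loss,
    clipped to an eps-ball around the clean input x and to [0, 1]."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
    return x_adv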
This paper focuses on the sparse subspace clustering problem, and develops an online algorithmic solution to cluster data points on-the-fly, without revisiting the whole dataset. The strategy involves an online solution of a sparse representation (SR) problem to build a (sparse) dictionary of similarities where points in the same subspace are considered "similar," followed by a spectral clustering based on the obtained similarity matrix. When the SR cost is strongly convex, the online solution converges to within a neighborhood of the optimal time-varying batch solution. A dynamic regret analysis is performed when the SR cost is not strongly convex.
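A batch (offline) sketch of the sparse-subspace-clustering pipeline that the paper turns into an online algorithm; illustrative only, using scikit-learn rather than the authors' solver.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc(X, n_clusters, lam=0.01):
    """X: (n_samples, n_features). Each point is sparsely regressed on the others;
    the coefficient magnitudes define the similarity matrix for spectral clustering."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(np.arange(n), i)
        C[i, others] = Lasso(alpha=lam, max_iter=5000).fit(X[others].T, X[i]).coef_
    W = np.abs(C) + np.abs(C).T            # symmetric similarity matrix
    return SpectralClustering(n_clusters, affinity="precomputed").fit_predict(W)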
Standard full-shape clustering analyses in Fourier space rely on a fixed power spectrum template, defined at the fiducial cosmology used to convert redshifts into distances, and compress the cosmological information into the Alcock-Paczynski parameters and the linear growth rate of structure. In this paper, we propose an analysis method that operates directly in the cosmology parameter space and varies the power spectrum template accordingly at each tested point. Predictions for the power spectrum multipoles from the TNS model are computed at different cosmologies in the framework of $\Lambda \rm{CDM}$. Applied to the final eBOSS QSO and LRG samples together with the low-z DR12 BOSS galaxy sample, our analysis results in a set of constraints on the cosmological parameters $\Omega_{\rm cdm}$, $H_0$, $\sigma_8$, $\Omega_{\rm b}$ and $n_s$. To reduce the number of computed models, we construct an iterative process to sample the likelihood surface, where each iteration consists of a Gaussian process regression. This method is validated with mocks from N-body simulations. From the combined analysis of the (e)BOSS data, we obtain the following constraints: $\sigma_8=0.877\pm 0.049$ and $\Omega_{\rm m}=0.304^{+0.016}_{-0.010}$ without any external prior. The eBOSS quasar sample alone shows a $3.1\sigma$ discrepancy compared to the Planck prediction.
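A toy sketch of such an iterative Gaussian-process sampling loop, assuming a user-supplied callable log_like(theta) and box bounds (illustrative; the eBOSS pipeline and its acquisition rule are not reproduced here).

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def sample_surface(log_like, bounds, n_init=20, n_iter=10, seed=0):
    """Fit a GP to the points evaluated so far, then add the candidate point
    where the GP posterior is most uncertain, and repeat."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    theta = rng.uniform(lo, hi, size=(n_init, len(bounds)))
    vals = np.array([log_like(t) for t in theta])
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(normalize_y=True).fit(theta, vals)
        cand = rng.uniform(lo, hi, size=(256, len(bounds)))
        _, std = gp.predict(cand, return_std=True)
        theta = np.vstack([theta, cand[np.argmax(std)]])
        vals = np.append(vals, log_like(theta[-1]))
    return gp, theta, vals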
In this paper, we discuss reduced order modelling approaches to bifurcating systems arising from continuum mechanics benchmarks. The investigation of beam deflection is a relevant topic with fundamental implications for beam design, structural analysis, and structural health. When beams are exposed to external forces, their equilibrium state can undergo a sudden variation. This happens when a compression, acting along the axial boundaries, exceeds a certain critical value. Linear elasticity models are not complex enough to capture the so-called beam buckling, and nonlinear constitutive relations, such as hyperelastic laws, are required to investigate this behavior, whose mathematical counterpart is represented by bifurcating phenomena. The numerical analysis of the bifurcating modes and of the post-buckling behavior is usually unaffordable by means of standard high-fidelity techniques (such as the Finite Element method), and the efficiency of Reduced Order Models (ROMs), e.g.\ based on Proper Orthogonal Decomposition (POD), is necessary to obtain consistent speed-ups in the reconstruction of the bifurcation diagram. The aim of this work is to provide insights regarding the application of POD-based ROMs for buckling phenomena occurring in 2-D and 3-D beams governed by different constitutive relations. The benchmarks involve multi-parametric settings with geometrically parametrized domains, where the buckling location depends on the material and geometrical properties induced by the parameters. Finally, we exploit the notions acquired from these toy problems to simulate a real case scenario coming from the Norwegian petroleum industry.
In the past decade, surveys of the stellar component of the Galaxy have revealed a number of streams from tidally disrupted dwarf galaxies and globular clusters. Simulations of hierarchical structure formation in LCDM cosmologies predict that the dark matter halo of a galaxy like the Milky Way contains hundreds of subhalos with masses of ~10^8 solar masses and greater, and it has been suggested that the existence of coherent tidal streams is incompatible with the expected abundance of substructure. We investigate the effects of dark matter substructure on tidal streams by simulating the disruption of a self-gravitating satellite on a wide range of orbits in different host models both with and without substructure. We find that the halo shape and the specific orbital path more strongly determine the overall degree of disruption of the satellite than does the presence or absence of substructure, i.e., the changes in the large-scale properties of the tidal debris due to substructure are small compared to variations in the debris from different orbits in a smooth potential. Substructure typically leads to an increase in the degree of clumpiness of the tidal debris in sky projection, and in some cases a more compact distribution in line-of-sight velocity. Substructure also leads to differences in the location of sections of debris compared to the results of the smooth halo model, which may have important implications for the interpretation of observed tidal streams. A unique signature of the presence of substructure in the halo which may be detectable by upcoming surveys is identified. We conclude, however, that predicted levels of substructure are consistent with a detection of a coherent tidal stream from a dwarf galaxy.
In the search for lightweight lightning protection materials that can be used as part of lightning protection systems, we investigate several types of electroconductive fabrics by applying several lightning impulse currents in the laboratory. Samples of four commercially available electroconductive textiles were analyzed, two rip-stop, a plain-weave and a nonwoven, and additionally a carbon-impregnated polymeric film. Under laboratory conditions, each sample was subjected to several lengthwise subsequent lightning-like currents of the 8/20 us standard waveform, recording both voltage and current signals. Optical and scanning electron microscope observations were performed after the tests, revealing some patterns or morphological changes on the fabric surface. Despite these changes, the investigated conductive textiles withstood the several lightning impulse currents applied. The results suggest that some conductive fabrics could be used in personal mobile shelters, to protect human beings against the earth potential rise caused by a close lightning discharge.
It is shown that there exist a probability space $(X,{\mathcal X},\mu)$, two ergodic measure preserving transformations $T,S$ acting on $(X,{\mathcal X},\mu)$ with $h_\mu(X,T)=h_\mu(X,S)=0$, and $f, g \in L^\infty(X,\mu)$ such that the limit \begin{equation*} \lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1} f(T^{n}x)g(S^{n}x) \end{equation*} does not exist in $L^2(X,\mu)$.
Acetaminophen (APAP), or Paracetamol, despite its wide and common use for pain and fever symptoms, shows a variety of side effects, toxic effects, and overdose effects. The most common form of toxic effect of APAP is in the liver, where phosphatidylcholine is the major component of the cell membrane with additional associated functionalities. Although this is the case, the effects of APAP on pure phospholipid membranes have been largely ignored. Here, we used DOPC, a commonly found phospholipid in mammalian cell membranes, to synthesize large unilamellar vesicles and investigate how the incorporation of APAP changes pure lipid vesicle structure, morphology, and fluidity at different concentrations. We used a combination of dynamic light scattering (DLS), small-angle neutron and X-ray scattering (SANS, SAXS) and cryo-TEM for structural characterization, and neutron spin-echo (NSE) spectroscopy to investigate dynamics. We showed that the incorporation of Acetaminophen in the lipid bilayer significantly impacts the spherical phospholipid self-assembly in terms of its morphology and influences the lipid content in the bilayer, causing a decrease in bending rigidity. We discuss how the overall impact of APAP molecules on the pure lipid membrane may play a significant role in the drug's mechanisms of action. Our results showed that the incorporation of APAP reduces membrane rigidity and changes the spherical unilamellar vesicles into much more irregularly shaped vesicles. Although the bilayer structure did not show much change when observed by SAXS, the NSE and cryo-TEM results showed that the lipid dynamics change with the addition of APAP in the bilayer, which causes the overall decreased membrane rigidity. A strong effect on the lipid tail motion was also observed.
The Generalized Operational Perceptron (GOP) was proposed to generalize the linear neuron model in the traditional Multilayer Perceptron (MLP); this model can mimic the synaptic connections of biological neurons that have nonlinear neurochemical behaviours. The Progressive Operational Perceptron (POP) is a multilayer network composed of GOPs which is formed progressively layer by layer. In this work, we propose major modifications that can accelerate as well as augment the progressive learning procedure of POP by incorporating an information-preserving, linear projection path from the input to the output layer at each progressive step. The proposed extensions can be interpreted as a mechanism that provides direct information, extracted from the previously learned layers, to the network, hence the term "memory". This allows the network to learn deeper architectures with better data representations. An extensive set of experiments shows that the proposed modifications can surpass the learning capability of the original POPs and other related algorithms.
Similar to other programming models, compilers for SYCL, the open programming model for heterogeneous computing based on C++, would benefit from access to higher-level intermediate representations. The loss of high-level structure and semantics caused by premature lowering to low-level intermediate representations and the inability to reason about host and device code simultaneously present major challenges for SYCL compilers. The MLIR compiler framework, through its dialect mechanism, makes it possible to model domain-specific, high-level intermediate representations and provides the necessary facilities to address these challenges. This work therefore describes practical experience with the design and implementation of an MLIR-based SYCL compiler. By modeling key elements of the SYCL programming model in host and device code in the MLIR dialect framework, the presented approach enables the implementation of powerful device code optimizations as well as analyses across host and device code. Compared to two LLVM-based SYCL implementations, this yields speedups of up to 4.3x on a collection of SYCL benchmark applications. Finally, this work also discusses challenges encountered in the design and implementation and how these could be addressed in the future.
Countries and cities are likely to enter economic activities that are related to those that are already present in them. Yet, while these path dependencies are universally acknowledged, we lack an understanding of the diversification strategies that can optimally balance the development of related and unrelated activities. Here, we develop algorithms to identify the activities that are optimal to target at each time step. We find that the strategies that minimize the total time needed to diversify an economy target highly connected activities during a narrow and specific time window. We compare the strategies suggested by our model with the strategies followed by countries in the diversification of their exports and research activities, finding that countries follow strategies that are close to the ones suggested by the model. These findings add to our understanding of economic diversification and also to our general understanding of diffusion in networks.
It is known that the loop space associated to a Riemannian manifold admits a quasi-symplectic structure. This article shows that this structure is not likely to recover the underlying Riemannian metric, by proving a result that is a strong indication of the "almost" independence of the quasi-symplectic structure with respect to the metric. Finally, conditions for having contact structures on these spaces are studied.
We have reconstructed the three-dimensional density fluctuation maps to z ~ 1.5 using the distribution of galaxies observed in the VVDS-Deep survey. We use this overdensity field to measure the evolution of the probability distribution function and its lower-order moments over the redshift interval 0.7<z<1.5. We apply a self-consistent reconstruction scheme which includes a complete non-linear description of galaxy biasing and which has been thoroughly tested on realistic mock samples. We find that the variance and skewness of the galaxy distribution evolve over this redshift interval in a way that is remarkably consistent with the predictions of first- and second-order perturbation theory. This finding confirms the standard gravitational instability paradigm over nearly 9 Gyr of cosmic time and demonstrates the importance of accounting for the non-linear component of galaxy biasing to consistently reproduce the higher-order moments of the galaxy distribution and their evolution.
In this paper, we consider the problem of model equivalence for quantum systems. Two models are said to be (input-output) equivalent if they give the same output for every admissible input. In the case of quantum systems, the output is the expectation value of a given observable or, more generally, a probability distribution for the result of a quantum measurement. We link the input-output equivalence of two models to the existence of a homomorphism of the underlying Lie algebra. In several cases, a Cartan decomposition of the Lie algebra su(n) is useful to find such a homomorphism and to determine the classes of equivalent models. We consider in detail the important cases of two-level systems with a Cartan structure and of spin networks. In the latter case, complete results are given, generalizing previous results to the case of networks of spin particles with any value of the spin. In treating this problem, we prove some instrumental results on the subalgebras of su(n) which are of independent interest.
We discuss the potential of photon-induced two-lepton final states at the LHC to explore the phenomenology of the Kaluza-Klein (KK) tower of gravitons in the scenarios of the Arkani-Hamed, Dimopoulos and Dvali (ADD) model and the Randall-Sundrum (RS) model. The sensitivity to model parameters can be improved compared to the present LEP or Tevatron sensitivity.
We investigate bond- and site-percolation models on several two-dimensional lattices numerically, by means of transfer-matrix calculations and Monte Carlo simulations. The lattices include the square, triangular, honeycomb, kagome and diced lattices with nearest-neighbor bonds, and the square lattice with nearest- and next-nearest-neighbor bonds. Results are presented for the bond-percolation thresholds of the kagome and diced lattices, and the site-percolation thresholds of the square, honeycomb and diced lattices. We also include the bond- and site-percolation thresholds for the square lattice with nearest- and next-nearest-neighbor bonds. We find that corrections to scaling behave according to the second temperature dimension $X_{t2}=4$ predicted by the Coulomb gas theory and the theory of conformal invariance. In several cases there is evidence for an additional term with the same exponent, but modified by a logarithmic factor. Only for the site-percolation problem on the triangular lattice does such a logarithmic term appear to be small or absent. The amplitude of the power-law correction associated with $X_{t2}=4$ is found to depend on the orientation of the lattice with respect to the cylindrical geometry of the finite systems.
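For orientation, a quick Monte Carlo estimate of the spanning probability for site percolation on the square lattice, in the spirit of (but far cruder than) the simulations above.

import numpy as np
from scipy.ndimage import label

def spans(p, L, rng):
    """True if an occupied, 4-connected cluster joins the top and bottom rows."""
    labels, _ = label(rng.random((L, L)) < p)
    top, bottom = np.unique(labels[0]), np.unique(labels[-1])
    return bool(np.intersect1d(top[top > 0], bottom[bottom > 0]).size)

rng = np.random.default_rng(1)
for p in (0.55, 0.5927, 0.65):                 # site threshold p_c ~ 0.5927
    print(p, np.mean([spans(p, 64, rng) for _ in range(200)]))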
Einstein was deeply puzzled by the success of natural science, and thought that we would never be able to explain it. He came to this conclusion on the ground that we cannot extract the basic laws of physics from experience using induction or deduction, and he took this to mean that they cannot be arrived at in a logical manner at all. In this paper I use Charles Peirce's logic of abduction, a third mode of reasoning different from deduction and induction, and show that it can be used to explain how laws in physics are arrived at, thereby addressing Einstein's puzzle about the incomprehensible comprehensibility of the universe. Interpreting Einstein's reflections in terms of Peirce's abduction also sheds light on abduction itself, by seeing it applied in an area where our common sense, and with that our intuitions, give us little or no guidance, and is even prone to lead us astray.
We study analytically the computational cost of the Generalised Hybrid Monte Carlo (GHMC) algorithm for free field theory. We calculate the Metropolis acceptance probability for leapfrog and higher-order discretisations of the Molecular Dynamics (MD) equations of motion. We show how to calculate autocorrelation functions of arbitrary polynomial operators, and use these to optimise the GHMC momentum mixing angle, the trajectory length, and the integration stepsize for the special cases of linear and quadratic operators. We show that long trajectories are optimal for GHMC, and that standard HMC is more efficient than algorithms based on Second Order Langevin Monte Carlo (L2MC), sometimes known as Kramers Equation. We show that contrary to naive expectations HMC and L2MC have the same volume dependence, but their dynamical critical exponents are z = 1 and z = 3/2 respectively.
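A toy illustration of the ingredients analysed above, namely leapfrog Molecular Dynamics plus a Metropolis accept/reject step, for a one-dimensional free (Gaussian) lattice field; this is plain HMC, not the generalised algorithm itself.

import numpy as np

rng = np.random.default_rng(0)
N, m2, dt, n_md = 32, 0.5, 0.1, 10             # sites, mass^2, stepsize, MD steps

def force(phi):                                 # -dS/dphi for the free action
    return np.roll(phi, 1) + np.roll(phi, -1) - (2.0 + m2) * phi

def hamiltonian(phi, pi):
    pot = 0.5 * np.sum((np.roll(phi, -1) - phi)**2) + 0.5 * m2 * np.sum(phi**2)
    return 0.5 * np.sum(pi**2) + pot

phi, acc = np.zeros(N), 0
for _ in range(1000):
    pi = rng.standard_normal(N)                 # momentum heatbath
    h_old, q, p = hamiltonian(phi, pi), phi.copy(), pi.copy()
    p += 0.5 * dt * force(q)                    # leapfrog: half, (full)*, half
    for _ in range(n_md - 1):
        q += dt * p
        p += dt * force(q)
    q += dt * p
    p += 0.5 * dt * force(q)
    if rng.random() < np.exp(h_old - hamiltonian(q, p)):   # Metropolis test
        phi, acc = q, acc + 1
print("acceptance:", acc / 1000)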
The parametric cubic van der Waals polynomial $\,\, p V^3 - (R T + b p) V^2 + a V - a b \,\,$ is analysed mathematically and some new generic features (theoretically, for any substance) are revealed: the temperature range for applicability of the van der Waals equation, $T > a/(4Rb)$, and the isolation intervals, at any given temperature between $a/(4Rb)$ and the critical temperature $8a/(27Rb)$, of the three volumes on the isobar-isotherm: $\,\, 3b/2 < V_A \le 3b$, $ \,\, 2b < V_B < 4b/(3 - \sqrt{5})$, and $\,\, 3b < V_C < b + RT/p$. The unstable states of the van der Waals model have also been generically localized: they lie in an interval within the isolation interval of $V_B$. In the case of unique intersection point of an isotherm with an isobar, the isolation interval of this unique volume is also determined. A discussion on finding the volumes $V_{A, B, C}$, on the premise of Maxwell's hypothesis, is also presented.
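The three volumes can be located numerically from the cubic itself; a small illustration in arbitrary reduced units a = b = R = 1 (values chosen only so that the isotherm-isobar intersection has three real roots).

import numpy as np

a = b = R = 1.0
T = 0.28 * a / (R * b)      # between a/(4Rb) = 0.25 and the critical 8a/(27Rb) ~ 0.296
p = 0.03 * a / b**2         # pressure low enough to give three real volume roots
roots = np.roots([p, -(R * T + b * p), a, -a * b])
V_A, V_B, V_C = np.sort(roots[np.isreal(roots)].real)
print(V_A, V_B, V_C)        # 2.0, 3.33..., 5.0, each inside its isolation interval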
The Steiner diameter $sdiam_k(G)$ of a graph $G$, introduced by Chartrand, Oellermann, Tian and Zou in 1989, is a natural generalization of the concept of classical diameter. When $k=2$, $sdiam_2(G)=diam(G)$ is the classical diameter. The problem of determining the minimum size of a graph of order $n$ whose diameter is at most $d$ and whose maximum degree is $\ell$ was first introduced by Erd\H{o}s and R\'{e}nyi. Recently, Mao considered the problem of determining the minimum size of a graph of order $n$ whose Steiner $k$-diameter is at most $d$ and whose maximum degree is at most $\ell$, where $3\leq k\leq n$, and studied this new problem when $k=3$. In this paper, we investigate the problem when $n-3\leq k\leq n$.
We give a characterization of alternating link exteriors in terms of cubed complexes. To this end, we introduce the concept of a "signed BW cubed-complex" and give a characterization for a signed BW cubed-complex to have underlying space homeomorphic to an alternating link exterior.
We provide a systematic analysis of a non-Hermitian PT-symmetric quantum impurity system both in and out of equilibrium, based on exact computations. In order to understand the interplay between non-Hermiticity and Kondo physics, we focus on a prototypical noninteracting impurity system, the resonant level model, with complex coupling constants. Explicitly constructing the biorthogonal basis, we study its thermodynamic properties as well as the Loschmidt echo starting from two initially disconnected free fermion chains. Remarkably, we observe universal crossover physics in the Loschmidt echo, both in the PT broken and unbroken regimes. We also find that the ground state quantities we compute in the PT broken regime can be obtained by analytic continuation. It turns out that Kondo screening ceases to exist in the PT broken regime, as previously predicted for the non-Hermitian Kondo model. All the analytical results are corroborated against biorthogonal free fermion numerics.
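As a minimal numerical illustration of constructing a biorthogonal basis, here is a standard PT-symmetric $2\times 2$ toy matrix (not the resonant level model itself; the couplings are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import eig

# Toy PT-symmetric matrix: eigenvalues +/- sqrt(g^2 - gamma^2),
# real in the PT-unbroken regime (g > gamma), complex pair when broken.
g, gamma = 1.0, 0.5
H = np.array([[1j * gamma, g],
              [g, -1j * gamma]])

w, vl, vr = eig(H, left=True, right=True)

# Biorthonormalize so that <L_i | R_j> = delta_ij.
d = np.diag(vl.conj().T @ vr)
vl = vl / d.conj()

print(np.allclose(vl.conj().T @ vr, np.eye(2)))   # True
print(w)   # real here; tune gamma > g to see the PT-broken pair
```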
The complete quasi-particle spectrum of a magnetized electromagnetic plasma is systematically explored at zero and nonzero temperatures. To this purpose, the general structure of the one-loop corrected propagator of magnetized fermions is determined, and the dispersion relations arising from the pole of this propagator are numerically solved. It turns out that in the lowest Landau level, where only one spin direction is allowed, the spectrum consists of one positively (negatively) charged fermionic mode with positive (negative) spin. In contrast, in higher Landau levels, as an indirect consequence of the double spin degeneracy of fermions, the spectrum consists of two massless collective modes with left- and right-chiralities. The mechanism through which these new collective excitations are created in a uniform magnetic field is similar to the production mechanism of dynamical holes (plasminos) at finite temperature and zero magnetic fields. Whereas cold magnetized plasminos appear for moderate magnetic fields and for all positive momenta of propagating fermions, hot magnetized plasminos appear only in the limit of weak magnetic fields and soft momenta.
This paper presents V-Guard, a new permissioned blockchain that achieves consensus for vehicular data under changing memberships, targeting the problem in V2X networks where vehicles are often only intermittently connected on the roads. To achieve this goal, V-Guard integrates membership management into the consensus process for agreeing on data entries. It binds a data entry with a membership configuration profile that describes the vehicles responsible for achieving consensus on that entry. As such, V-Guard produces chained consensus results of both data entries and their residing membership profiles, which enables consensus to be achieved seamlessly under changing memberships. In addition, V-Guard separates the ordering of transactions from consensus, allowing concurrent ordering instances and periodic consensus instances to order and commit data entries. These features make V-Guard efficient at achieving consensus under dynamic memberships, with high throughput and low latency.
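A toy sketch of the binding idea, with hypothetical names rather than V-Guard's actual data layout: each entry carries the membership profile responsible for it and chains to the previous consensus result:

```python
from dataclasses import dataclass
from hashlib import sha256

@dataclass(frozen=True)
class MembershipProfile:
    members: tuple      # vehicles responsible for consensus on bound entries
    epoch: int          # bumped whenever the membership changes

@dataclass(frozen=True)
class DataEntry:
    payload: bytes
    profile: MembershipProfile   # the binding: the entry travels with its profile
    prev_digest: bytes           # chains consensus results together

    def digest(self) -> bytes:
        h = sha256()
        h.update(self.payload)
        h.update(repr(self.profile).encode())
        h.update(self.prev_digest)
        return h.digest()

genesis = DataEntry(b"v2x-telemetry",
                    MembershipProfile(("car-A", "car-B", "car-C"), epoch=0),
                    b"\x00" * 32)
print(genesis.digest().hex())
```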
In this paper, we address the problem of estimating the positions of human joints, i.e., articulated pose estimation. Recent state-of-the-art solutions model two key issues, joint detection and spatial configuration refinement, together using convolutional neural networks. Our work mainly focuses on spatial configuration refinement by statistically reducing variations of human poses, which is motivated by the observation that the scattered distribution of the relative locations of joints (e.g., the left wrist is distributed nearly uniformly in a circular area around the left shoulder) makes the learning of convolutional spatial models hard. We present a two-stage normalization scheme, human body normalization and limb normalization, to make the distribution of the relative joint locations compact, resulting in easier learning of convolutional spatial models and more accurate pose estimation. In addition, our empirical results show that incorporating multi-scale supervision and multi-scale fusion into the joint detection network is beneficial. Experimental results demonstrate that our method consistently outperforms state-of-the-art methods on the benchmarks.
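A rough sketch of what a body-normalization stage might look like for 2D joints; the joint indices and the canonical frame chosen here are assumptions, not the paper's exact scheme:

```python
import numpy as np

def normalize_pose(joints, l_shoulder, r_shoulder, pelvis):
    """Translate, rotate and scale 2D joints (shape (J, 2)) into a canonical frame."""
    pts = joints - joints[pelvis]                 # translation: pelvis at origin
    d = pts[r_shoulder] - pts[l_shoulder]
    theta = -np.arctan2(d[1], d[0])               # rotate shoulders to horizontal
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    pts = pts @ R.T
    torso = np.linalg.norm(pts[l_shoulder] - pts[pelvis]) + 1e-8
    return pts / torso                            # scale: unit torso length

# Usage with random joints; indices are hypothetical for a 16-joint skeleton.
joints = np.random.rand(16, 2)
print(normalize_pose(joints, l_shoulder=5, r_shoulder=2, pelvis=0))
```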
BCH codes have been studied for over fifty years and widely employed in consumer devices, communication systems, and data storage systems. However, the dimension of BCH codes is settled only for a very small number of cases. In this paper, we study the dimensions of BCH codes over finite fields with three types of lengths $n$, namely $n=q^m-1$, $n=(q^m-1)/(q-1)$ and $n=q^m+1$. For narrow-sense primitive BCH codes with designed distance $\delta$, we investigate their dimensions for $\delta$ in the range $1\le \delta \le q^{\lceil\frac{m}{2}\rceil+1}$. For non-narrow sense primitive BCH codes, we provide two general formulas on their dimensions and give the dimensions explicitly in some cases. Furthermore, we settle the minimum distances of some primitive BCH codes. We also explore the dimensions of the BCH codes of lengths $n=(q^m-1)/(q-1)$ and $n=q^m+1$ over finite fields.
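For narrow-sense BCH codes, the dimension is $n$ minus the size of the union of the $q$-cyclotomic cosets of $1,\dots,\delta-1$ modulo $n$; a short sketch of that computation:

```python
def bch_dimension(q, m, delta, n=None):
    """Dimension of the narrow-sense BCH code of length n over GF(q)."""
    n = n or q**m - 1                  # primitive length by default
    defining = set()
    for i in range(1, delta):          # designed distance delta
        j = i % n
        while j not in defining:       # q-cyclotomic coset of i mod n
            defining.add(j)
            j = (j * q) % n
    return n - len(defining)

print(bch_dimension(2, 5, 5))   # 21: the classical [31, 21, 5] BCH code
```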
We present a polar coordinate lattice Boltzmann kinetic model for compressible flows. A method to recover the continuum distribution function from the discrete distribution function is indicated. Within the model, a hybrid scheme similar to, but different from, operator splitting is proposed. The temporal evolution is calculated analytically, and the convection term is solved via a modified Warming-Beam (MWB) scheme. Within the MWB scheme a suitable switch function is introduced. The current model works not only for subsonic flows but also for supersonic flows. It is validated and verified via the following well-known benchmark tests: (i) the rotational flow, (ii) the stable shock tube problem, (iii) the Richtmyer-Meshkov (RM) instability, and (iv) the Kelvin-Helmholtz instability. As an original application, we study the nonequilibrium characteristics of the system around three kinds of interfaces, the shock wave, the rarefaction wave, and the material interface, for two specific cases. In one of the two cases, the material interface is initially perturbed, and consequently the RM instability occurs. It is found that the macroscopic effects due to deviating from thermodynamic equilibrium around the material interface differ significantly from those around the mechanical interfaces. The initial perturbation at the material interface enhances the coupling of molecular motions in different degrees of freedom. The amplitude of the deviation from thermodynamic equilibrium around the shock wave is much higher than that around the rarefaction wave and the material interface. By comparing each component of the high-order moments with its value in equilibrium, we can qualitatively sketch the main behavior of the actual distribution function.
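The modified scheme and its switch function are specific to this model, but the second-order upwind Warming-Beam update it builds on is standard; a minimal sketch for linear advection $u_t + c\,u_x = 0$ with $c>0$ on a periodic domain:

```python
import numpy as np

def warming_beam_step(u, nu):
    """One Warming-Beam (second-order upwind) step; nu = c*dt/dx."""
    um1 = np.roll(u, 1)
    um2 = np.roll(u, 2)
    return u - nu * (u - um1) - 0.5 * nu * (1 - nu) * (u - 2*um1 + um2)

x = np.linspace(0, 1, 200, endpoint=False)
u = np.exp(-200 * (x - 0.3)**2)        # Gaussian pulse
for _ in range(100):
    u = warming_beam_step(u, nu=0.8)   # stable for 0 <= nu <= 2
```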
For $N$-point best-packing configurations $\omega_N$ on a compact metric space $(A,\rho)$, we obtain estimates for the mesh-separation ratio $\gamma(\omega_N,A)$, which is the quotient of the covering radius of $\omega_N$ relative to $A$ and the minimum pairwise distance between points in $\omega_N$. For best-packing configurations $\omega_N$ that arise as limits of minimal Riesz $s$-energy configurations as $s\to \infty$, we prove that $\gamma(\omega_N,A)\le 1$ and this bound can be attained even for the sphere. In the particular case when $N=5$ on $S^2$ with $\rho$ the Euclidean metric, we prove our main result that among the infinitely many 5-point best-packing configurations there is a unique configuration, namely a square-base pyramid $\omega_5^*$, that is the limit (as $s\to \infty$) of 5-point $s$-energy minimizing configurations. Moreover, $\gamma(\omega_5^*,S^2)=1$.
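The two quantities are easy to estimate numerically for the square-base pyramid with apex at a pole and base on the equator (presumably the configuration $\omega_5^*$; an assumption of this sketch). The sampled mesh-separation ratio approaches 1 from below:

```python
import numpy as np

# Square-base pyramid: apex at the north pole, base on the equator.
pts = np.array([[0, 0, 1], [1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]], float)

# Separation: minimum pairwise Euclidean distance (here sqrt(2)).
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
sep = d[np.triu_indices(5, 1)].min()

# Covering radius: maximize the distance to the nearest configuration
# point over a dense random sample of the sphere.
rng = np.random.default_rng(0)
s = rng.normal(size=(200_000, 3))
s /= np.linalg.norm(s, axis=1, keepdims=True)
cov = np.linalg.norm(s[:, None] - pts[None, :], axis=-1).min(axis=1).max()

print(sep, cov, cov / sep)   # both ~ sqrt(2); the ratio tends to 1
```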
We have obtained adaptive optics images and accurate radial velocities for 7 very low mass systems, in the course of a long-term effort to determine accurate masses for very low mass stars (M < 0.6 solar masses). We use the new data, together with measurements from the literature for some stars, to determine new or improved orbits for these 7 systems. They provide masses for 16 very low mass stars with accuracies that range between 0.2% and 5%, and in some cases a very accurate distance as well. This information is used in a companion paper to discuss the Mass-Luminosity relation for the V, J, H and K photometric bands.
We propose notions of "Noetherian" and "integral" for schemes over an abelian symmetric monoidal category $(\mathcal C,\otimes,1)$. For Noetherian integral schemes, we construct a "function field" that is a commutative monoid object of $(\mathcal C,\otimes,1)$. Under certain conditions, we show that a Noetherian scheme over $(\mathcal C,\otimes,1)$ is integral if and only if it is reduced and irreducible.
Existing methods, such as concept bottleneck models (CBMs), have been successful in providing concept-based interpretations for black-box deep learning models. They typically work by predicting concepts given the input and then predicting the final class label given the predicted concepts. However, (1) they often fail to capture the high-order, nonlinear interaction between concepts, e.g., correcting a predicted concept (e.g., "yellow breast") does not help correct highly correlated concepts (e.g., "yellow belly"), leading to suboptimal final accuracy; (2) they cannot naturally quantify the complex conditional dependencies between different concepts and class labels (e.g., for an image with the class label "Kentucky Warbler" and a concept "black bill", what is the probability that the model correctly predicts another concept "black crown"), therefore failing to provide deeper insight into how a black-box model works. In response to these limitations, we propose Energy-based Concept Bottleneck Models (ECBMs). Our ECBMs use a set of neural networks to define the joint energy of candidate (input, concept, class) tuples. With such a unified interface, prediction, concept correction, and conditional dependency quantification are then represented as conditional probabilities, which are generated by composing different energy functions. Our ECBMs address both limitations of existing CBMs, providing higher accuracy and richer concept interpretations. Empirical results show that our approach outperforms the state-of-the-art on real-world datasets.
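A toy sketch of the energy-composition idea with a deliberately simplified architecture (the layer shapes and energy terms here are assumptions, not the paper's networks): energies are defined over (input, concept, class) tuples, and $p(y\mid x,c)$ follows from a softmax over negated energies:

```python
import torch
import torch.nn as nn

class ToyECBM(nn.Module):
    """Simplified energy-based concept bottleneck: E(x, c, y) as a sum of terms."""
    def __init__(self, d_x, n_concepts, n_classes, d_h=64):
        super().__init__()
        self.xc = nn.Sequential(nn.Linear(d_x + n_concepts, d_h),
                                nn.ReLU(), nn.Linear(d_h, 1))
        self.cy = nn.Embedding(n_classes, n_concepts)  # class-concept affinity

    def energy(self, x, c, y):
        e_xc = self.xc(torch.cat([x, c], dim=-1)).squeeze(-1)
        e_cy = ((c - torch.sigmoid(self.cy(y)))**2).sum(-1)
        return e_xc + e_cy

    def class_probs(self, x, c):
        # p(y | x, c): compose energies over all candidate labels.
        ys = torch.arange(self.cy.num_embeddings)
        E = torch.stack([self.energy(x, c, y.expand(x.shape[0])) for y in ys], -1)
        return torch.softmax(-E, dim=-1)
```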
Recent advances to combine structured regression models and deep neural networks for better interpretability, more expressiveness, and statistically valid uncertainty quantification demonstrate the versatility of semi-structured neural networks (SSNs). We show that techniques to properly identify the contributions of the different model components in SSNs, however, lead to suboptimal network estimation, slower convergence, and degenerated or erroneous predictions. In order to solve these problems while preserving favorable model properties, we propose a non-invasive post-hoc orthogonalization (PHO) that guarantees identifiability of model components and provides better estimation and prediction quality. Our theoretical findings are supported by numerical experiments, a benchmark comparison as well as a real-world application to COVID-19 infections.
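A minimal sketch of the projection idea behind such an orthogonalization, assuming a structured design matrix $X$ with full column rank (the actual PHO procedure involves more than this):

```python
import numpy as np

def post_hoc_orthogonalize(X, structured_pred, nn_pred):
    """Reassign the part of the network output explained by the
    structured design matrix X back to the structured component."""
    # Hat matrix of the structured part (X assumed full column rank).
    H = X @ np.linalg.solve(X.T @ X, X.T)
    overlap = H @ nn_pred            # linear-in-X share of the NN output
    # Identifiable split: structured effect vs. genuinely nonlinear remainder.
    return structured_pred + overlap, nn_pred - overlap
```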
In recent years, space-borne experiments have delivered new measurements of high energy cosmic-ray (CR) $\bar p$ and $e^+$. In addition, unprecedented sensitivity to the CR composite anti-nuclei anti-d and anti-He is expected to be achieved in the near future. We report on the theoretical interpretation of these measurements. While CR antimatter is a promising discovery tool for new physics or exotic astrophysical phenomena, an irreducible background arises from secondary production by primary CR collisions with interstellar matter. Understanding this irreducible background or constraining it from first principles is an interesting challenge. We review the attempt to obtain such understanding and apply it to CR $\bar p,\, e^+,$ anti-d and anti-He. Based on state-of-the-art Galactic cosmic ray measurements, dominated currently by the AMS-02 experiment, we show that: (i) CR $\bar p$ most likely come from CR-gas collisions; (ii) $e^+$ data are consistent with, and suggestive of, the same secondary astrophysical production mechanism responsible for $\bar p$, dominated by proton-proton collisions. In addition, based on recent accelerator analyses, we show that the flux of secondary high energy anti-He may be observable with a few years' exposure of AMS-02. We highlight key open questions, as well as the role played by recent and upcoming space and accelerator data in clarifying the origins of CR antimatter.
Although there are millions of transgender people in the world, a lack of information exists about their health issues. This issue has consequences for the medical field, which only has a nascent understanding of how to identify and meet this population's health-related needs. Social media sites like Twitter provide new opportunities for transgender people to overcome these barriers by sharing their personal health experiences. Our research employs a computational framework to collect tweets from self-identified transgender users, detect those that are health-related, and identify their information needs. This framework is significant because it provides a macro-scale perspective on an issue that lacks investigation at national or demographic levels. Our findings identified 54 distinct health-related topics that we grouped into 7 broader categories. Further, we found both linguistic and topical differences in the health-related information shared by transgender men (TM) as compared to transgender women (TW). These findings can help inform medical and policy-based strategies for health interventions within transgender communities. Also, our proposed approach can inform the development of computational strategies to identify the health-related information needs of other marginalized populations.
We address the problem of network quantization, that is, reducing bit-widths of weights and/or activations to lighten network architectures. Quantization methods use a rounding function to map full-precision values to the nearest quantized ones, but this operation is not differentiable. There are mainly two approaches to training quantized networks with gradient-based optimizers. First, a straight-through estimator (STE) replaces the zero derivative of the rounding with that of an identity function, which causes a gradient mismatch problem. Second, soft quantizers approximate the rounding with continuous functions at training time, and exploit the rounding for quantization at test time. This alleviates the gradient mismatch, but causes a quantizer gap problem. We alleviate both problems in a unified framework. To this end, we introduce a novel quantizer, dubbed a distance-aware quantizer (DAQ), that mainly consists of a distance-aware soft rounding (DASR) and a temperature controller. To alleviate the gradient mismatch problem, DASR approximates the discrete rounding with the kernel soft argmax, which is based on our insight that the quantization can be formulated as a distance-based assignment problem between full-precision values and quantized ones. The controller adjusts the temperature parameter in DASR adaptively according to the input, addressing the quantizer gap problem. Experimental results on standard benchmarks show that DAQ outperforms the state of the art significantly for various bit-widths without bells and whistles.
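A simplified sketch of the soft-rounding idea: assignment weights from a softmax over negative squared distances to the quantization levels, sharpened by a temperature. (The paper's kernel soft argmax and input-adaptive temperature controller are richer than this.)

```python
import torch

def soft_round(x, levels, temperature):
    """Distance-aware soft rounding: a softmax over (negative squared)
    distances to the quantized levels; hardens to rounding as T -> 0."""
    d = -(x.unsqueeze(-1) - levels)**2            # negative squared distances
    w = torch.softmax(d / temperature, dim=-1)    # soft assignment weights
    return (w * levels).sum(-1)

levels = torch.tensor([-1.0, -1/3, 1/3, 1.0])     # illustrative 2-bit grid
x = torch.linspace(-1, 1, 9)
print(soft_round(x, levels, temperature=0.1))     # ~ nearest level
```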
We present a method to compute the full non-linear deformations of matrix factorizations for ADE minimal models. This method is based on the calculation of higher products in the cohomology, called Massey products. The algorithm yields a polynomial ring whose vanishing relations encode the obstructions of the deformations of the D-branes characterized by these matrix factorizations. This coincides with the critical locus of the effective superpotential which can be computed by integrating these relations. Our results for the effective superpotential are in agreement with those obtained from solving the A-infinity relations. We point out a relation to the superpotentials of Kazama-Suzuki models. We will illustrate our findings by various examples, putting emphasis on the E_6 minimal model.
In a recent paper~[Nature Catalysis 3, 573 (2020)], Robatjazi {\em et al.} demonstrate hydrodefluorination on Al nanocrystals decorated by Pd islands under illumination and under external heating. They conclude that photocatalysis accomplishes the desired transformation \ce{CH3F + D2 -> CH3D + DF} efficiently and selectively due to "hot" electrons, as evidenced by an illumination-induced reduction of the activation energy. Although some of the problems identified in prior work by the same group have been addressed, scrutiny of the data in~[Nature Catalysis 3, 573 (2020)] raises doubts about both the methodology and the central conclusions. First, we show that the thermal control experiments in~[Nature Catalysis 3, 573 (2020)] do not separate thermal from "hot electron" contributions, and therefore any conclusions drawn from these experiments are invalid. We then show that an improved thermal control implies that the activation energy of the reaction does not change, and that an independent purely thermal calculation (based solely on the sample parameters provided in the original manuscript) explains the measured data perfectly. For the sake of completeness, we also address technical problems in the calibration of the thermal camera, an unjustifiable disqualification of some of the measured data, as well as concerning aspects of the rest of the main results, including the mass spectrometry approach used to investigate the selectivity of the reaction, and claims about the stoichiometry and reaction order. All this shows that the burden of proof for involvement of hot electrons has not been met.
The paper is devoted to vector fields on the spaces R^2 and R^3, their flow and invariants. Attention is plaid on the tensor representations of the group GL(2,R) and on fundamental vector fields. The rotation group on R^3 is generalized to rotation groups with arbitrary quadrics as orbits.
It is predicted that nuclear spin conversion in molecules can be efficiently controlled by strong laser radiation resonant to rovibrational molecular transition. The phenomenon can be used for substantial enrichment of spin isomers, or for detection of very weak (10-100 Hz) interactions in molecules.
We present the study of $\bar{B}^{0} \to \Sigma_{c}(2455)^{0,++} \pi^{\pm} \bar{p}$ decays based on $772\times 10^{6}$ $B\bar{B}$ events collected with the Belle detector at the KEKB asymmetric-energy $e^+e^-$ collider. The $\Sigma_{c}(2455)^{0,++} $ candidates are reconstructed via their decay to $\Lambda_{c}^{+} \pi^{\mp}$ and $\Lambda_{c}^{+}$ decays to $pK^{-}\pi^{+},~pK_{S}^{0},$ and $\Lambda\pi^{+}$ final states. The corresponding branching fractions are measured to be ${\cal B}(\bar{B}^{0} \to \Sigma_{c}(2455)^{0} \pi^{+} \bar{p}) = (1.09 \pm 0.06 \pm 0.07)\times10^{-4}$ and ${\cal B}(\bar{B}^{0} \to \Sigma_{c}(2455)^{++} \pi^{-} \bar{p}) = (1.84\pm 0.11 \pm 0.12)\times 10^{-4}$, which are consistent with the world average values with improved precision. A new structure is found in the $M_{\Sigma_{c}(2455)^{0,++}\pi^{\pm}}$ spectrum with a significance of $4.2\sigma$ including systematic uncertainty. The structure is possibly an excited $\Lambda_{c}^{+}$ and is tentatively named $\Lambda_{c}(2910)^{+}$. Its mass and width are measured to be $(2913.8 \pm 5.6 \pm 3.8)$ MeV/$c^{2}$ and $(51.8\pm20.0 \pm 18.8)$ MeV, respectively. The products of branching fractions for the $\Lambda_{c}(2910)^{+}$ are measured to be ${\cal B}(\bar{B}^{0} \to \Lambda_{c}(2910)^{+}\bar{p})\times{\cal B}(\Lambda_{c}(2910)^{+} \to \Sigma_{c}(2455)^{0}\pi^{+}) = (9.5 \pm 3.6 \pm 1.6)\times 10^{-6}$ and ${\cal B}(\bar{B}^{0} \to \Lambda_{c} (2910)^{+}\bar{p})\times {\cal B}(\Lambda_{c}(2910)^{+} \to \Sigma_{c}(2455)^{++}\pi^{-}) = (1.24 \pm 0.35 \pm 0.10)\times 10^{-5}$. Here, the first and second uncertainties are statistical and systematic, respectively.
Finite-precision arithmetic computations face an inherent tradeoff between accuracy and efficiency. The points in this tradeoff space are determined, among other factors, by different data types but also evaluation orders. To put it simply, the shorter a precision's bit-length, the larger the roundoff error will be, but the faster the program will run. Similarly, the fewer arithmetic operations the program performs, the faster it will run; however, the effect on the roundoff error is less clear-cut. Manually optimizing the efficiency of finite-precision programs while ensuring that results remain accurate enough is challenging. The unintuitive and discrete nature of finite-precision makes estimation of roundoff errors difficult; furthermore the space of possible data types and evaluation orders is prohibitively large. We present the first fully automated and sound technique and tool for optimizing the performance of floating-point and fixed-point arithmetic kernels. Our technique combines rewriting and mixed-precision tuning. Rewriting searches through different evaluation orders to find one which minimizes the roundoff error at no additional runtime cost. Mixed-precision tuning assigns different finite precisions to different variables and operations and thus provides finer-grained control than uniform precision. We show that when these two techniques are designed and applied together, they can provide higher performance improvements than each alone.
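A two-line illustration (in float32) of how evaluation order alone changes the roundoff error, which is the kind of effect the rewriting search exploits:

```python
import numpy as np

a, b, c = np.float32(1e8), np.float32(-1e8), np.float32(1.0)
print((a + b) + c)   # 1.0: the large terms cancel first
print(a + (b + c))   # 0.0: the 1.0 is absorbed into -1e8 before cancelling
```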
Neural networks are a powerful class of nonlinear functions that can be trained end-to-end on various applications. While the over-parametrization of many neural networks provides the capacity to fit complex functions and the strong representation power to handle challenging tasks, it also leads to highly correlated neurons that can hurt generalization and incur unnecessary computation cost. As a result, how to regularize the network to avoid undesired representation redundancy becomes an important issue. To this end, we draw inspiration from a well-known problem in physics, the Thomson problem, where one seeks to find a state that distributes N electrons on a unit sphere as evenly as possible with minimum potential energy. In light of this intuition, we reduce the redundancy regularization problem to generic energy minimization, and propose a minimum hyperspherical energy (MHE) objective as a generic regularization for neural networks. We also propose a few novel variants of MHE, and provide some insights from a theoretical point of view. Finally, we apply neural networks with MHE regularization to several challenging tasks. Extensive experiments demonstrate the effectiveness of our intuition, by showing the superior performance with MHE regularization.
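A minimal sketch of an MHE-style regularizer as the Riesz $s$-energy of the normalized neuron weight vectors (a simplified form of the objective, under the stated Thomson-problem analogy):

```python
import torch

def mhe(weights, s=2, eps=1e-6):
    """Hyperspherical energy of a weight matrix (rows = neurons):
    Riesz s-energy of the normalized weight vectors."""
    w = weights / weights.norm(dim=1, keepdim=True).clamp_min(eps)
    d = torch.cdist(w, w)                          # pairwise distances
    i, j = torch.triu_indices(len(w), len(w), offset=1)
    return (d[i, j].clamp_min(eps) ** (-s)).sum()

# Usage: add lam * mhe(layer.weight) to the task loss as a regularizer.
```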
A contravariant pseudo-Hessian manifold is a manifold $M$ endowed with a pair $(\nabla,h)$, where $\nabla$ is a flat connection and $h$ is a symmetric bivector field satisfying a contravariant Codazzi equation. When $h$ is invertible we recover the known notion of pseudo-Hessian manifold. Contravariant pseudo-Hessian manifolds have properties similar to Poisson manifolds and, in fact, to any contravariant pseudo-Hessian manifold $(M,\nabla,h)$ we naturally associate a Poisson tensor on $TM$. We investigate these properties and study in detail many classes of such structures in order to highlight the richness of the geometry of these manifolds.
NASA's Transiting Exoplanet Survey Satellite (TESS) has begun a two-year survey of most of the sky, which will include lightcurves for thousands of solar-like oscillators sampled at a cadence of two minutes. To prepare for this steady stream of data, we present a mock catalogue of lightcurves, designed to realistically mimic the properties of the TESS sample. In the process, we also present the first public release of the asteroFLAG Artificial Dataset Generator, which simulates lightcurves of solar-like oscillators based on input mode properties. The targets are drawn from a simulation of the Milky Way's populations and are selected in the same way as TESS's true Asteroseismic Target List. The lightcurves are produced by combining stellar models, pulsation calculations and semi-empirical models of solar-like oscillators. We describe the details of the catalogue and provide several examples. We provide pristine lightcurves to which noise can be added easily. This mock catalogue will be valuable in testing asteroseismology pipelines for TESS and our methods can be applied in preparation and planning for other observatories and observing campaigns.
Distance measurements for extragalactic objects are a fundamental problem in astronomy and cosmology. In the era of precision cosmology, better measurements of cosmological distances are urgently needed to observationally test the growing $H_{0}$ tension between values of the Hubble constant measured with different tools. Using spectroastrometry, GRAVITY at the Very Large Telescope Interferometer successfully revealed the structure, kinematics and angular size of the broad-line region (BLR) of 3C 273 with unprecedented spatial resolution. Fortunately, reverberation mapping (RM) of active galactic nuclei (AGNs) reliably provides linear sizes of their BLRs. Here we report a joint analysis of spectroastrometry and RM observations to measure AGN distances. We apply this analysis to 3C 273, observed by both GRAVITY and an RM campaign, and find an angular distance of $551.5_{-78.7}^{+97.3}\, {\rm Mpc}$ and $H_{0}=71.5_{-10.6}^{+11.9}\,{\rm km\,s^{-1}\,Mpc^{-1}}$. The advantages of the analysis are that 1) it is a purely geometric measurement and 2) it simultaneously yields the mass of the central black hole in the BLR. Moreover, measurements of selected AGNs can conveniently be repeated to efficiently reduce the statistical and systematic errors. Future observations of a reasonably sized sample ($\sim 30$ AGNs) will provide distances of the AGNs and hence a new way of measuring $H_{0}$ with high precision $\left(\lesssim 3\%\right)$ to test the $H_{0}$ tension.
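The geometric core of the method is a single division: a linear BLR size from RM and an angular size from spectroastrometry give the angular-diameter distance. The sketch below uses purely hypothetical input values (not the measured ones) and a leading-order low-redshift relation for $H_0$:

```python
import numpy as np

# Hypothetical inputs, for illustration only (not the measured 3C 273 values):
R_blr_pc  = 0.12     # linear BLR radius from reverberation mapping, in parsecs
theta_uas = 46.0     # angular radius from spectroastrometry, in micro-arcsec

theta_rad = theta_uas * 1e-6 * np.pi / (180 * 3600)   # arcsec -> rad
D_A = R_blr_pc / theta_rad / 1e6                      # angular-diameter distance, Mpc

z = 0.158                             # redshift of 3C 273
c = 2.998e5                           # speed of light, km/s
H0 = c * z / ((1 + z) * D_A)          # leading-order low-z estimate, D_C ~ (1+z) D_A
print(D_A, H0)
```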
It is shown that the local axial anomaly in two dimensions emerges naturally if one postulates an underlying noncommutative fuzzy structure of spacetime. In particular, the Dirac-Ginsparg-Wilson relation on ${\bf S}^2_F$ is shown to contain an edge effect which corresponds precisely to the ``fuzzy'' $U(1)_A$ axial anomaly on the fuzzy sphere. We also derive a novel gauge-covariant expansion of the quark propagator in the form $\frac{1}{{\cal D}_{AF}}=\frac{a\hat{\Gamma}^L}{2}+\frac{1}{{\cal D}_{Aa}}$, where $a=\frac{2}{2l+1}$ is the lattice spacing on ${\bf S}^2_F$, $\hat{\Gamma}^L$ is the covariant noncommutative chirality, and ${\cal D}_{Aa}$ is an effective Dirac operator which has essentially the same IR spectrum as ${\cal D}_{AF}$ but differs from it on the UV modes. Most remarkable is the fact that both operators share the same continuum limit, so the above covariant expansion is not available in the continuum theory. Although the first term in this expansion, $\frac{a\hat{\Gamma}^L}{2}$, vanishes as it stands in the continuum limit, its contribution to the anomaly is exactly the canonical theta term. The contribution of the propagator $\frac{1}{{\cal D}_{Aa}}$, on the other hand, equals the topological Chern-Simons action, which in two dimensions vanishes identically.
We consider an integrable system in five unknowns having three quartic invariants. We show that the complex affine variety defined by setting these invariants equal to generic constants completes into an abelian surface, namely the Jacobian of a genus-two hyperelliptic curve. The system is algebraically completely integrable and can be integrated in terms of genus-two hyperelliptic functions.
In this paper, we extend our investigation of the validity of the cosmic no-hair conjecture within non-canonical anisotropic inflation. As a result, we obtain an exact Bianchi type I solution to a power-law {\it k}-inflation model in the presence of an unusual coupling between the scalar and electromagnetic fields of the form $-f^2(\phi)F_{\mu\nu}F^{\mu\nu}/4$. Furthermore, a stability analysis based on the dynamical system method indicates that the obtained solution admits stable and attractive hairs during an inflationary phase and therefore violates the cosmic no-hair conjecture. Finally, we show that the corresponding tensor-to-scalar ratio of this model is highly consistent with the Planck 2018 observational data.
Interactive reinforcement learning agents use human feedback or instruction to help them learn in complex environments. Often, this feedback comes in the form of a discrete signal that is either positive or negative. While informative, this information can be difficult to generalize on its own. In this work, we explore how natural language advice can be used to provide a richer feedback signal to a reinforcement learning agent by extending policy shaping, a well-known interactive reinforcement learning technique. Policy shaping usually employs a human feedback policy to help an agent learn more about how to achieve its goal. In our case, we replace this human feedback policy with one generated from natural language advice. We examine whether the generated natural language reasoning helps a deep reinforcement learning agent decide its actions successfully in a given environment. Our model therefore consists of three networks: an experience-driven network, an advice generator, and an advice-driven network. While the experience-driven reinforcement learning agent chooses its actions under the influence of the environmental reward, the advice-driven network selects actions based on the feedback produced by the advice generator for each new state, assisting the agent through better policy shaping.
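A minimal sketch of the policy-shaping combination step, with the human feedback policy replaced by an advice-derived distribution; the confidence exponent is an assumption of this sketch:

```python
import numpy as np

def shaped_policy(agent_probs, advice_probs, confidence=0.8):
    """Policy shaping: combine the agent's policy with an advice-derived
    policy by multiplying the distributions and renormalizing."""
    combined = agent_probs * advice_probs**confidence
    return combined / combined.sum()

agent  = np.array([0.4, 0.4, 0.2])   # from the experience-driven network
advice = np.array([0.1, 0.8, 0.1])   # derived from natural language advice
print(shaped_policy(agent, advice))  # mass shifts toward the advised action
```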
Deep Reinforcement Learning (DRL) sometimes needs a large amount of data to converge in the training procedure, and in some cases each action of the agent may produce regret. This barrier naturally motivates different data set or environment owners to cooperate, sharing their knowledge to train their agents more efficiently. However, directly merging the raw data from different owners raises privacy concerns. To solve this problem, we propose a new Deep Neural Network (DNN) architecture with both a global NN and a local NN, together with a distributed training framework. We allow the global weights to be updated by all the collaborating agents, while the local weights are only updated by the agent they belong to. In this way, the global weights can capture the common knowledge among the collaborators, while the local NN keeps the specialized properties and ensures that each agent remains compatible with its specific environment. Experiments show that the framework can efficiently help agents in the same or similar environments collaborate in their training process and gain a higher convergence rate and better performance.
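A minimal sketch of the split update; FedAvg-style averaging is assumed here for the shared part, and the paper's exact rule may differ:

```python
import numpy as np

def sync_global(agents):
    """Average the shared (global) weights across agents; local weights are
    updated only by their owning agent and are left untouched here."""
    avg = {k: np.mean([a["global"][k] for a in agents], axis=0)
           for k in agents[0]["global"]}
    for a in agents:
        a["global"] = {k: v.copy() for k, v in avg.items()}

# Each agent holds {"global": {...}, "local": {...}} weight dictionaries;
# call sync_global(agents) after each round of local training.
```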
In this work, we propose a classifier for distinguishing device-directed queries from background speech in the context of interactions with voice assistants. Applications include rejection of false wake-ups or unintended interactions as well as enabling wake-word free follow-up queries. Consider the example interaction: $"Computer,~play~music", "Computer,~reduce~the~volume"$. In this interaction, the user needs to repeat the wake-word ($Computer$) for the second query. To allow for more natural interactions, the device could immediately re-enter listening state after the first query (without wake-word repetition) and accept or reject a potential follow-up as device-directed or background speech. The proposed model consists of two long short-term memory (LSTM) neural networks trained on acoustic features and automatic speech recognition (ASR) 1-best hypotheses, respectively. A feed-forward deep neural network (DNN) is then trained to combine the acoustic and 1-best embeddings, derived from the LSTMs, with features from the ASR decoder. Experimental results show that ASR decoder, acoustic embeddings, and 1-best embeddings yield an equal-error-rate (EER) of $9.3~\%$, $10.9~\%$ and $20.1~\%$, respectively. Combination of the features resulted in a $44~\%$ relative improvement and a final EER of $5.2~\%$.
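A toy sketch of the fusion stage (dimensions and layers are assumptions, not the paper's exact configuration): the two LSTM embeddings and the decoder features are concatenated and passed to a feed-forward classifier:

```python
import torch
import torch.nn as nn

class DirectednessClassifier(nn.Module):
    """Fuse acoustic and 1-best embeddings with ASR decoder features."""
    def __init__(self, d_acoustic, d_onebest, d_decoder, d_h=128):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(d_acoustic + d_onebest + d_decoder, d_h),
            nn.ReLU(),
            nn.Linear(d_h, 1))

    def forward(self, acoustic_emb, onebest_emb, decoder_feats):
        z = torch.cat([acoustic_emb, onebest_emb, decoder_feats], dim=-1)
        return torch.sigmoid(self.ff(z))   # P(device-directed)
```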
Planets embedded within dust disks may drive the formation of large scale clumpy dust structures by trapping dust into resonant orbits. Detection and subsequent modeling of the dust structures would help constrain the mass and orbit of the planet and the disk architecture, give clues to the history of the planetary system, and provide a statistical estimate of disk asymmetry for future exoEarth-imaging missions. Here we present the first search for these resonant structures in the inner regions of planetary systems by analyzing the light curves of hot Jupiter planetary candidates identified by the Kepler mission. We detect only one candidate disk structure associated with KOI 838.01 at the 3-sigma confidence level, but subsequent radial velocity measurements reveal that KOI 838.01 is a grazing eclipsing binary and the candidate disk structure is a false positive. Using our null result, we place an upper limit on the frequency of dense exozodi structures created by hot Jupiters. We find that at the 90% confidence level, less than 21% of Kepler hot Jupiters create resonant dust clumps that lead and trail the planet by ~90 degrees with optical depths >~5*10^-6, which corresponds to the resonant structure expected for a lone hot Jupiter perturbing a dynamically cold dust disk 50 times as dense as the zodiacal cloud.
We study a complex analogue of a Wilson Loop, defined over a complex curve, in non-Abelian holomorphic Chern-Simons theory. We obtain a version of the Makeenko-Migdal loop equation describing how the expectation value of these Wilson Loops varies as one moves around in a holomorphic family of curves. We use this to prove (at the level of the integrand) the duality between the twistor Wilson Loop and the all-loop planar S-matrix of N=4 super Yang-Mills by showing that, for a particular family of curves corresponding to piecewise null polygons in space-time, the loop equation reduces to the all-loop extension of the BCFW recursion relations. The scattering amplitude may be interpreted in terms of holomorphic linking of the curve in twistor space, while the BCFW relations themselves are revealed as a holomorphic analogue of skein relations.
We prove that a monoid is sofic, in the sense recently introduced by Ceccherini-Silberstein and Coornaert, whenever the J-class of the identity is a sofic group, and the quotients of this group by orbit stabilisers in the rest of the monoid are amenable. In particular, this shows that the following are all sofic: cancellative monoids with amenable group of units; monoids with sofic group of units and finitely many non-units; and monoids with amenable Sch\"utzenberger groups and finitely many L-classes in each D-class. This provides a very wide range of sofic monoids, subsuming most known examples (with the notable exception of locally residually finite monoids). We conclude by discussing some aspects of the definition, and posing some questions for future research.
Side-channel attacks such as Spectre that utilize speculative execution to steal application secrets pose a significant threat to modern computing systems. While program transformations can mitigate some Spectre attacks, more advanced attacks can divert control flow speculatively to bypass these protective instructions, rendering existing defenses useless. In this paper, we present Venkman: a system that employs program transformation to completely thwart Spectre attacks that poison entries in the Branch Target Buffer (BTB) and the Return Stack Buffer (RSB). Venkman transforms code so that all valid targets of a control-flow transfer have an identical alignment in the virtual address space; it further transforms all branches to ensure that all entries added to the BTB and RSB are properly aligned. By transforming all code this way, Venkman ensures that, in any program wanting Spectre defenses, all control-flow transfers, including speculative ones, do not skip over protective instructions Venkman adds to the code segment to mitigate Spectre attacks. Unlike existing defenses, Venkman does not reduce sharing of the BTB and RSB and does not flush these structures, allowing safe sharing and reuse among programs while maintaining strong protection against Spectre attacks. We built a prototype of Venkman on an IBM POWER8 machine. Our evaluation on the SPEC benchmarks and selected applications shows that Venkman increases execution time to 3.47$\times$ on average and increases code size to 1.94$\times$ on average when it is used to ensure that fences are executed to mitigate Spectre attacks. Our evaluation also shows that Spectre-resistant Software Fault Isolation (SFI) built using Venkman incurs a geometric mean of 2.42$\times$ space overhead and 1.68$\times$ performance overhead.
KH 15D is a protostellar binary system that shows a peculiar light curve. In order to model it, a narrow circumbinary precessing disc has been invoked, but a proper dynamical model has never been developed. In this paper, we analytically address the issue of whether such a disc can rigidly precess around KH 15D, and we relate the precessional period to the main parameters of the system. Then, we simulate the disc's dynamics by using a 1D model developed in a companion paper, such that the warp propagates into the disc as a bending wave, which is expected to be the case for protostellar discs. The validity of such an approach has been confirmed by comparing its results with full 3D SPH simulations on extended discs. In the present case, we use this 1D code to model the propagation of the warp in a narrow disc. If the inner truncation radius of the disc is set by the binary tidal torques at {\sim} 1 AU, we find that the disc should extend out to 6-10 AU (depending on the models), and is therefore wider than previously suggested. Our simulations show that such a disc does reach an almost steady state, and then precesses as a rigid body. The disc displays a very small warp, with a tilt inclination that increases with radius in order to keep the disc in equilibrium against the binary torque. However, for such wider discs, the presence of viscosity leads to a secular decay of the tilt on a timescale of {\approx} 3000 ({\alpha}/0.05)^(-1) years, where {\alpha} is the disc viscosity parameter. The presence of a third body (such as a planet), orbiting at roughly 10 AU might simultaneously explain the outer truncation of the disc and the maintenance of the tilt for a prolonged time.
We study bundles on projective spaces that have vanishing lower cohomology, using their short minimal free resolutions. We partition the moduli space $\mathbf{M}$ according to the Hilbert function $H$ and classify all possible Hilbert functions $H$ of such bundles. For each $H$, we describe a stratification of $\mathbf{M}_H$ by quotients of rational varieties. We show that the closed strata form a graded lattice given by the Betti numbers.
We prove a uniqueness and periodicity theorem for bounded solutions of uniformly elliptic equations in certain unbounded domains.
Obesity is associated with a higher fatality risk and an altered distribution of occupant injuries in motor vehicle crashes. This is partly because of the increased depth of abdominal soft tissue, which limits and delays engagement of the lap belt with the pelvis and increases the risk of the pelvis submarining under the lap belt, exposing the occupant's abdomen to belt loading.
We study the magnetic properties of nanometer-sized graphene structures with triangular and hexagonal shapes terminated by zig-zag edges. We discuss how the shape of the island, the imbalance in the number of atoms belonging to the two graphene sublattices, the existence of zero-energy states, and the total and local magnetic moment are intimately related. We consider electronic interactions both in a mean-field approximation of the one-orbital Hubbard model and with density functional calculations. Both descriptions yield values for the ground state total spin, $S$, consistent with Lieb's theorem for bipartite lattices. Triangles have a finite $S$ for all sizes whereas hexagons have S=0 and develop local moments above a critical size of $\approx 1.5$ nm.
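For reference, Lieb's theorem for the repulsive Hubbard model on a bipartite lattice at half filling ties the ground-state spin to the sublattice imbalance, $S = \tfrac{1}{2}\left|N_A - N_B\right|$, which is why triangular islands, whose imbalance grows with size, carry a finite $S$, while balanced hexagonal islands have $S=0$.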
We make some remarks on earlier works on $R$-bisectoriality in $L^p$ of perturbed first-order differential operators by Hyt\"onen, McIntosh and Portal. They have shown that this is equivalent to a bounded holomorphic functional calculus in $L^p$ for $p$ in any open interval when suitable hypotheses are made. Hyt\"onen and McIntosh then showed that $R$-bisectoriality in $L^p$ at one value of $p$ can be extrapolated to a neighborhood of $p$. We give a different proof of this extrapolation and observe that the first proof bears on the splitting of the space into the kernel and range.