We study the probability distribution function (PDF) of the smallest eigenvalue of Laguerre-Wishart matrices $W = X^\dagger X$ where $X$ is a random $M \times N$ ($M \geq N$) matrix, with complex Gaussian independent entries. We compute this PDF in terms of semi-classical orthogonal polynomials, which can be viewed as a deformation of Laguerre polynomials. By analyzing these polynomials, and their associated recurrence relations, in the limit of large $N$, large $M$ with $M/N \to 1$ -- i.e. for quasi-square large matrices $X$ -- we show that this PDF can be expressed in terms of the solution of a Painlevé III equation, as found by Tracy and Widom by analyzing a Fredholm determinant built from the Bessel kernel. In addition, our method allows us to compute the first $1/N$ corrections to this limiting Tracy-Widom distribution (at the hard edge). Our computations corroborate a recent conjecture by Edelman, Guionnet and Péché. Joint work with Anthony Perret (University of Orsay, Paris-Sud).
CommonCrawl
Abstract: The purpose of this paper is to provide a brief review of some recent developments in quantum feedback networks and control. A quantum feedback network (QFN) is an interconnected system consisting of open quantum systems linked by free fields and/or direct physical couplings. Basic network constructs, including series connections as well as feedback loops, are discussed. The quantum feedback network theory provides a natural framework for analysis and design. Basic properties such as dissipation, stability, passivity and gain of open quantum systems are discussed. Control system design is also discussed, primarily in the context of open linear quantum stochastic systems. The issue of physical realizability is discussed, and explicit criteria for stability, positive real lemma, and bounded real lemma are presented. Finally for linear quantum systems, coherent $H^\infty$ and LQG control are described.
CommonCrawl
Bonus: Using a lambda function write a function which takes a unit vector $n$ and returns a reflection function. In order to do long term weather prediction, we'll iteratively apply $T$ to some initial data $x_0$. Generate a few choices of random initial data and compute the long term probability distribution. Does it depend on the initial state? Be careful to normalize the initial data x0 so the sum of the probabilities is 1! You can do this by dividing through by np.sum(x0)! As a consequence of conservation of energy, solutions to the system are constrained to level sets of $E(x,v)$. Compute $E$ on the domain $[-4 \pi, 4 \pi]\times [-2\pi,2\pi]$ then create a contour or filled contour plot using either plt.contour(E) or plt.contourf(E) respectively. It's your preference if you prefer the filled or unfilled version; see which one you like more! If you want to play around with the available colormaps to tweak the look, you can find them here! Can you tell what some of the different contour regions correspond to physically? Although we wrote our own method to read the data and convert it to floats, since the data is very uniform, we can easily leverage Numpy's np.loadtxt('some-file-name.txt') function to read in the data as an n x 3 array whose columns are the $t$, $x$ and $v$ entries. Give this a try and print the resulting array to verify that it matches what you have in your file. Notice that it will do both the reading and type conversion for you! Now, let $x$ be the slice of the $x$-column and $v$ be the slice of the $v$-column. Do the $(x,v)$-pairs roughly lie on a circle centered at the origin? This is useful when applicable, but not all data sets are amenable to this; they must have a certain amount of regularity. In cases where you're mostly working with arrays, you can pair this up with Numpy's np.savetxt('output-file-name.txt', A) to save and load data to file in an easy way. Use the affine map $A$ and the volume of the standard simplex to compute the volume of $\Delta$. You may find the det function in numpy.linalg useful for this! If the description of $A$ is tricky, try drawing a picture in 2d and 3d of what the column vectors look like.
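A minimal NumPy sketch of the mechanics above (the transition matrix T, the energy function E, and the affine map A are placeholders standing in for whatever the lab actually defines; only the normalization, iteration, loadtxt, and determinant/simplex-volume steps are the point):

import math
import numpy as np
import matplotlib.pyplot as plt

# Long-term weather distribution: normalize x0, then iterate T many times.
T = np.array([[0.7, 0.4],
              [0.3, 0.6]])            # placeholder column-stochastic transition matrix
x0 = np.random.rand(T.shape[0])
x0 = x0 / np.sum(x0)                  # the probabilities must sum to 1
x = x0
for _ in range(1000):
    x = T @ x
print("long-term distribution:", x)

# Contour plot of an energy function on [-4*pi, 4*pi] x [-2*pi, 2*pi].
X, V = np.meshgrid(np.linspace(-4 * np.pi, 4 * np.pi, 400),
                   np.linspace(-2 * np.pi, 2 * np.pi, 200))
E = 0.5 * V**2 + (1 - np.cos(X))      # placeholder pendulum-style energy E(x, v)
plt.contourf(X, V, E); plt.colorbar(); plt.show()

# Reading the t, x, v columns back in with np.loadtxt.
# data = np.loadtxt('some-file-name.txt')          # n x 3 array
# t, x_col, v_col = data[:, 0], data[:, 1], data[:, 2]

# Volume of Delta = A(standard simplex): |det A| / n!
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])       # placeholder linear part of the affine map
n = A.shape[0]
print("volume of Delta:", abs(np.linalg.det(A)) / math.factorial(n))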
CommonCrawl
Abstract: This talk focuses on characterizing the Christoffel pairs of timelike isothermic surfaces in the split-quaternions $H' = \mathbb R^4_2$. When we restrict the ambient space to the imaginary split-quaternions $Im H' = \mathbb R^3_2$, we also characterize that kind of pair through the existence of an integrating factor. This is a joint work with Prof. M. Magid. Abstract: This presentation will review a line of research investigating a theoretical model of interpersonal dynamics based on processes of self-organization and complexity theory. Empirical investigations of the complexity of patterns of repetition in verbal turn-taking behaviors during conversations using various measures of entropy (topological, information, and fractal dimension) have consistently demonstrated that interpersonal dynamics exhibit fractal patterns characteristic of far-from-equilibrium conditions at the "edge-of-chaos." For example, statistical fits between an inverse-power-law (IPL) model and long-sequence patterning in conversations have ranged from $R^2 = .87$ to $R^2 = .99$. These results have been found in conversations within group therapy, family therapy, and experimentally-created relationships among strangers. Furthermore, a statistical model of interpersonal closeness, conflict, and control accounted for a combined 48% of the variance in pattern repetition (structure) within the IPL above and beyond speaker base-rates (combinatorial probabilities). Finally, a series of experimental investigations has demonstrated that the fractal dimension of turn-taking patterns within small groups shifts significantly in the direction of rigidity depending upon levels of internal conflict within group members. In addition to some practical significance for applied psychology, these results provide for a number of possible theoretical integrations, both within psychology and between psychology and other scientific domains. Key areas for discussion will include: a) The potential to derive more formal and specific mathematical models (e.g., differential equation models) to simulate relationship development over time and under various conditions; b) Graphics programs that could more effectively display the fractals underlying these conversations; and c) improved statistical procedures that could better identify processes such as bifurcations, transients, and changes in the contributions of individuals to overall group structure during conversations. Abstract: In recent years, techniques from computational algebra have become important to render effective general results in the theory of Partial Differential Equations. Following our work with D.C. Struppa, I. Sabadini, F. Colombo, and M. Vajiac, we will present how these tools can be used to discover and identify important properties of several physical systems of interest such as Electromagnetism and Abelian Instantons. Abstract: We consider the Generalized Walras-Wald Equilibrium (GE) as an alternative to the Linear Programming (LP) approach for optimal resource allocation. There are two fundamental differences between the GE and LP approaches for the best resource allocation. First, the prices for goods (products) are not fixed as they are in LP; they are functions of the production output. Second, the factors (resources) used in the production process are not fixed either; they are functions of the prices for the resources. It was shown that under natural economic assumptions on both the prices and the factors vector functions the GE exists and is unique.
Finding the GE is equivalent to solving a variational inequality with a strongly monotone operator on the nonnegative orthants of the primal and dual spaces. For solving the variational inequality a projected pseudo-gradient method was introduced, its global convergence with a Q-linear rate was proven, and its computational complexity was estimated. The method can be viewed as a natural pricing mechanism for establishing an economic equilibrium. About this week's speaker: Roman A. Polyak received a Ph.D. in mathematics from the Moscow Central Institute of Mathematics and Economics at the Soviet Academy of Sciences. After emigration from the former Soviet Union he was a visiting scientist at the Mathematical Sciences Department of the IBM T.J. Watson Research Center. Since 1995 Dr. Polyak has been a Professor of Mathematics and Operations Research at George Mason University. He is an author and co-author of six monographs and chapters of books and has published more than sixty papers in refereed professional journals. His areas of expertise are Linear and Nonlinear Programming, game theory and mathematical economics. He received several NSF and NASA Awards as well as the Fulbright Scholarship Award for his work on Nonlinear Rescaling Theory and Methods in constrained optimization. Abstract: This work, in collaboration with A. Damiano, D. Struppa, and A. Vajiac, introduces the notion of antisyzygies, which studies the inverse problem of finding a system of PDEs, given compatibility conditions. The system obtained possesses the property of removability of compact singularities. We also write explicit computations in the cases of the Cauchy-Fueter system and Maxwell's system for electromagnetism, and we conclude with a study of systems of non-maximal rank. Title: Unweighted analysis of counter-matched case-control data. Abstract: Informative sampling based on counter-matching risk set subjects on an exposure variable has been shown to be an efficient alternative to simple random sampling when the counter-matching variable is correlated with the variable of interest; however, the opposite is true when the counter-matching variable is independent of the variable of interest. For given counter-matched data, we consider a naive analysis of the effect of a dichotomous covariate on the disease rates that ignores the underlying sampling design and its corresponding effect on the analytic expression of the partial likelihood. We provide analytical expressions for the bias and variance and show that under mild common conditions such an analysis is clearly advantageous over the standard "weighted" approach. The efficiency gains could be large and are inversely related to the prevalence of the counter-matching variable. Finally, we ascertain that departures from the required assumptions entail biased estimates and provide numerical values for the bias for common scenarios. Title: The interplay of symbolic computing with geometry in dynamical systems arising from theoretical electrical engineering problems. Abstract: The talk, accessible to nonprofessionals, will lead the audience into an exciting journey of the beauty of geometric structures obtained using computational mathematics derived from discontinuous dynamical systems that arise from problems in electrical engineering. We illustrate how rigorous computational mathematics and elementary number theory are used to produce the fractal structure of piecewise isometries.
Key words: fractals, computational mathematics, cyclotomic fields, dynamical systems, digital filters, piecewise isometries, computers producing publishable papers. The CV link: http://calculus.sfsu.edu/goetz/vitae.pdf Arek Goetz is professor of mathematics at San Francisco State University. An active researcher in dynamical systems, a software architect and an educator, he has delivered over 90 talks on 5 continents. He is a recipient of two National Science Foundation Grants, as well as numerous teaching grants. Title: The Basic Picture: an interactive introduction by questions & answers. Abstract: To define a topology on a set $X$ in a constructive way (where constructive here means both intuitionistic and predicative), one needs a second set $S$ and a family of subsets of $X$ indexed on $S$, that is, a relation between $X$ and $S$. I have called this structure a basic pair. One can show that interior and closure of a subset are defined in a basic pair by formulae which are strictly dual of each other (the former is of the form $\exists\forall$, the latter $\forall\exists$). Continuity between basic pairs is expressed by a commutative diagram of relations (up to a suitable notion of equality). The main pointfree structure has a primitive for closed subsets (positivity) which is dual to that for open subsets (formal cover), and they are linked by a condition (called compatibility) which is best expressed by using the notion of overlap between subsets (the existential dual of inclusion). These discoveries show that there is a clear and simple structure underlying topology, and that it is a sort of applied logic. I have called it the Basic Picture. Both traditional ("pointwise") and pointfree topology in the proper sense are obtained as a special case. In fact, a topological space is just a basic pair equipped with convergence (any two approximations of a point can be refined to a third), and continuous functions are just those relations which preserve convergence. Besides allowing for a fully constructive development of topology, this approach brings some technical improvements which are new also for the classical approach. In particular, one can prove that the category of topological spaces with continuous relations can be embedded in the category of formal topologies (i.e. pointfree topologies in a constructive sense), thus giving a mathematical form to the well-known claim that pointfree topology generalizes the traditional one with points. On all this, I am writing a book ("The Basic Picture. Structures for Constructive Topology", Oxford U.P., forthcoming). After around 20 minutes expanding on the above summary, I am ready to give an introduction to the content and general philosophy of the book, by discussing with the audience any questions they would like to put. Abstract: The formal analysis and verification of computing systems has so far been dominated by model checkers and other decision procedures which are fully automated, but limited in expressive power, and by interactive theorem provers which are quite expressive, but limited in automation. Due to improved hardware and theoretical developments, automated deduction is currently emerging as a third way in which expressivity and computational power are differently balanced and which conveniently complements the other approaches. I will present a new approach to formal verification in which computational algebras are combined with off-the-shelf automated theorem provers for first-order equational logic.
The algebras considered are variants of Kleene algebras and their extensions by modal operators. Particular strengths of these structures are syntactic simplicity, wide applicability, concise elegant equational proofs, easy mechanizability and strong decidability. I will sketch the axiomatization and calculus of Kleene algebras and modal Kleene algebras, discuss some computationally interesting models, such as traces, graphs, languages and relations, and point out their relationship to popular verification formalisms, including dynamic logic, temporal logic and Hoare logic. I will also report on some automation results in the areas of action system refinement, termination analysis and program verification that demonstrate the benefits and the potential of the algebraic approach. Abstract: Recently, P. Alegre, D. E. Blair and the speaker defined generalized Sasakian-space-forms as those almost contact metric manifolds with a Riemann curvature tensor satisfying the usual equation for a Sasakian-space-form, but with some differentiable functions $f_1, f_2, f_3$ instead of the well-known constant quantities $(c+3)/4$ and $(c-1)/4$. In this talk, we will review the main facts about generalized Sasakian-space-forms, such as the existence of interesting examples in any dimension, or the possible structures for these spaces. After that, we will present sharp inequalities involving $\delta$-invariants for submanifolds in this setting, with arbitrary codimension. In fact, $\delta$-invariants, introduced by B.-Y. Chen, have proven to be a key tool in Submanifolds Theory, providing new, very useful information concerning the immersion problem. Abstract: In this talk I'll speak about recent results obtained in collaboration with Emmanuele DiBenedetto and Ugo Gianazza. In the sixties Moser, using deep ideas of Nash, proved Harnack inequalities for nonnegative solutions of linear parabolic equations with $L^\infty$ coefficients. This approach, however, seems not to work in the nonlinear case (for instance in the case of the p-Laplacian and the porous medium equation). In recent papers published in Calc. Var., Acta Math. and Duke Math. J., we give an alternative proof of the Harnack inequality, with respect to Moser's one, based on De Giorgi's function classes. This approach is so flexible that it can be extended to the nonlinear case. Abstract: We propose to analyze model data 1) using an errors-in-variables (EIV) model and 2) using the assumption that the error random variables are subject to the influence of skewness, through a Bayesian approach. The use of EIV in models is necessary and realistic in studying many statistical problems, but their analysis usually mandates many simplifying and restrictive assumptions. Previous studies have shown the superiority of the Bayesian approach in dealing with the complexity of these models. In fitting statistical models for the analysis of growth data, many models have been proposed. We selected an extensive list of the most important growth curves and use some of them in our model analysis. Much research using the classical approach has clustered in this area. However, the incorporation of EIV into these growth models under a Bayesian formulation with skewness models has not yet been considered or studied. A motivating example is presented, in which we expose certain lacunae in the analysis previously done, as well as justify the applicability of our proposed general approach. In addition, auxiliary covariates, both qualitative and quantitative, can be added into our model as an extension.
These EIV growth curves with auxiliary covariates in the models render a very general framework for practical application. Another illustrative example is also available to demonstrate how the Bayesian approach, through MCMC (Metropolis-Hastings/slice sampling within a Gibbs sampler) techniques, as well as the Bayesian Information Criterion (BIC) for model selection, can be utilized in the analysis of these complex EIV growth curves with skewness in the models. Abstract: In my talk I will present an overview of some of the results from this year's collaboration with D. Struppa, and A. and M. Vajiac, together with some of the questions that still remain open. First, I will introduce a new concept of "antisyzygies", which constitutes a sort of inverse problem within the framework of algebraic analysis. Usually people start from a (non-homogeneous) system of PDEs, and hunt for the integrability conditions. We asked ourselves the opposite question: if we are given the compatibility laws, how can we reconstruct the system in some canonical way? What are the properties of the system that we get? In particular, I will mention the relation between the antisyzygy construction and Hartogs type of phenomena (removability of compact singularities from the solutions). On a more computational side, I will present an idea that D. Eelbode and I had during his stay at Chapman. The goal is to construct the syzygies for the Dirac system in several vector variables. I already touched upon some of the possible techniques to compute those syzygies in my previous talks. This time, I will show how to use the computer algebra software Singular and the structure of the super Lie algebra osp(1|2) to get the (minimal) free resolution of the system, in just a few command lines. Abstract: Biology is in a state of flux. The tremendous changes are due to rapidly advancing experimental technologies that are producing previously unheard of quantities of data. Storing these data and extracting information from them have caused an explosion in the field of biomedical informatics. A further quantitative emphasis results from the fact that these new experimental technologies are able to make many measurements from complex multi-scale biological systems in a single experiment. Our ability to effectively validate these data and hypothesize with them is dependent on our ability to produce predictive quantitative models of these systems. While the past decade has been truly exciting for biologists and others drawn to the field, it has resulted in a dire need for curricular reform in the biological sciences in order to prepare students for careers in post-genomic biology. In my talk I will survey the quantitative and computational landscape of contemporary biology, and I will discuss various organizations, programs and resources that have been developed in support of efforts to integrate quantitative and computational training into the undergraduate biology curriculum. Abstract: In this lecture, we will explain how one can fully exploit the framework of Clifford algebras and Clifford analysis in order to construct a function theory for higher spin Dirac operators. These are to be seen as far-reaching generalizations of the classical Dirac operator, possibly describing (exotic) elementary particles in higher dimensions. We will carefully explain how to describe the invariance with respect to the underlying Lie algebra so(m), and how to relate the associated theory to Clifford analysis in several vector variables.
Abstract: Association methods employed for finding deleterious mutations in human populations are categorized into two broad classes with respect to the structure of the analyzed data, case-control and family-based. Case-control studies are flexible, powerful and cost-efficient approaches that possess an inherent design disadvantage, susceptibility to inflated rates of false-positive results due to unaccounted-for population structure and hidden relatedness. Family-based tests for association (FBATs) provide a robust alternative to case-control methods, one that addresses the above-mentioned shortcomings by conditioning on the population information. New FBAT extensions for handling multiple correlated genes and a relatedness-adjusted case-control statistical method that accounts for stratified populations are proposed and studied through extensive simulations in various settings. Our results show that in most of the analyzed scenarios the new tests attain higher power than the currently existing approaches. Abstract: Superspaces are spaces with not only commuting variables but also anti-commuting variables. We will show how it is possible to extend harmonic and Clifford analysis to these superspaces by constructing a suitable representation of sl(2) and osp(1|2). Then we will use this representation to consider integration in superspaces and we will give a set of properties that uniquely determine the Berezin integral on the supersphere. Abstract: Gentzen systems have been used to present many logics, such as classical logic, intuitionistic logic, modal logics, substructural logics, and their corresponding algebraic axiomatizations, in modular ways that provide new insights and often lead to effective decision procedures. The aim of this talk is to show that Gentzen sequent calculi can be used in standard resolution theorem provers to improve their search space characteristics. This is mostly of use with (semi)lattice ordered algebras, and does not require cut-free Gentzen systems. For example, it is currently not known if there is a cut-free Gentzen system for residuated Kleene algebras or residuated Kleene lattices. However, if axiomatizations for these equational theories are presented in the style of Gentzen sequent rules, then a theorem prover such as Prover9 (www.prover9.org) can be quite an effective reasoning tool in these otherwise rather intractable theories. I will also discuss how cut-free Gentzen systems can be implemented effectively using standard rewriting tools such as Maude (http://maude.cs.uiuc.edu/). Abstract: We develop a "generators and relations" method of constructing directed complete partial orders (dcpos), and show that the results are indeed free in the usual sense. We then continue by considering the situation where the generator set is equipped with finitary operations, showing that the free construction yields a dcpo algebra (in which the operations are Scott continuous) in the same algebraic variety as the generating structure. We apply the results to the construction of co-products, and the characterization of sub-objects, in the category of "proto-frames". This is joint work with Achim Jung and Steve Vickers of Univ. of Birmingham, UK.
CommonCrawl
I am trying to find the solid angle taken up by a large set of triangles around a central point. I have normalized the vertices of the triangles to a unit sphere around the central point, but the triangles now are slightly inside of the sphere, because their vertices are on the sphere while the plane between the points is in the interior of the sphere. I would like a simple and fast way to project the triangles onto the unit sphere, so I can union the regions, then find the area of the union of all of the regions and compare that to $4\pi$, the surface area of the unit sphere. If anyone can help, that would be much appreciated. EDIT: I start with upwards of 3000 simple triangles in 3 dimensions, with vertices on the unit sphere. I want to find the solid angle taken up by all of the triangles combined. The part of this problem that I find most confusing is that the triangles may overlap, so simply summing their individual areas gives a solid angle that is much larger than it should be. Be sure to play around with the three-dimensional figure, rotating it as you like, to appreciate the answer. Note that there is some ambiguity in the definition of your triangle, and you'll have to refine the signs to select the smallest triangle defined by your three points (which is what I presume you seek).
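If it helps, here is a small sketch (my own, not from the thread) that computes the solid angle of a single spherical triangle directly from its three unit-vector vertices via the Van Oosterom-Strackee formula; it does not handle the overlap/union problem, but it does replace the flat-triangle area with the true spherical one:

import numpy as np

def triangle_solid_angle(r1, r2, r3):
    # Solid angle of the spherical triangle spanned by three unit vectors:
    # tan(Omega/2) = r1.(r2 x r3) / (1 + r1.r2 + r2.r3 + r3.r1)
    r1, r2, r3 = (np.asarray(v, dtype=float) for v in (r1, r2, r3))
    num = np.dot(r1, np.cross(r2, r3))
    den = 1.0 + np.dot(r1, r2) + np.dot(r2, r3) + np.dot(r3, r1)
    return abs(2.0 * np.arctan2(num, den))

# One octant of the sphere should give 4*pi/8 = pi/2:
print(triangle_solid_angle([1, 0, 0], [0, 1, 0], [0, 0, 1]))   # ~1.5708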
CommonCrawl
I had my students conduct a fermentation experiment manipulating sugar amounts (number of sugar packets in solution) and measuring CO₂ production via capture in a balloon and measuring circumference (a very common HS lab experiment). I also had them conduct a trial with two packets of Equal sweetener. To everyone's surprise two packets of the "zero calorie" sweetener produced as much CO₂ as four packets of sugar. Can anyone shed light on why this would have occurred? Equal contains maltodextrin and dextrose in addition to the aspartame. I'm assuming that yeast are able to metabolize one or more of these substances, but I haven't been able to find a more detailed explanation. Your sugar substrate was sucrose. Yeast cells metabolise this by secreting an enzyme, invertase, which splits the disaccharide into glucose and fructose, both of which can be fermented by yeast to produce CO2. Equal Original (blue packaging) is a zero calorie sweetener that contains aspartame and acesulfame potassium as its sweetening ingredients along with a bulking agent. This agrees with what you say - that the sweetener that you used contained dextrose - and leaves me very confused. Dextrose is another name for glucose, so how can this contain zero calories? The answer lies in the legal definition of zero calories = less than 5 calories per serving (presumably per package in this case). So let's assume that by adding two packs you were adding 8 calories equivalent of glucose. Two packs could contain 8 calories = 2 g glucose. If completely fermented to CO2, each mole of glucose -> 6 moles of CO2, so 12/180 moles of CO2 = 0.75 litres of CO2. I conclude that there is enough glucose in two packets of Equal to generate a reasonable amount of CO2. Yeast cannot ferment maltodextrins. It is possible that aspartame (aspartyl-phenylalanine-methyl ester) could be metabolised, but I think the dextrose/glucose solution is the best explanation. EDIT: I meant to say, just to illustrate that this is a significant amount of glucose, that standard rich yeast medium is 2 g glucose /100 ml (2 % w/v). EDIT #2: My back-of-an-envelope calculation has a fundamental error in it, but luckily I am the first to spot it. If glucose is being used fermentatively, then only one third of the carbon will end up in CO2, with the remainder going to ethanol. This means 0.25 litres instead of 0.75 litres. Yeast is so oriented towards fermentation that even in the presence of oxygen the fermentative pathway will dominate. Equal is marketed as a "zero calorie" sweetener, with respect to human digestion. The sweetening agents are aspartame (Asp-Phe; a dipeptide) and acesulfame K. The maltodextrin and dextrose are probably bulking agents to give the product a free-flowing, powder consistency. Dextrose: this is another name for glucose, and is the most easily metabolized sugar in the Equal packet. Maltodextrin is an oligomer of D-glucose linked by $\alpha$(1,4) linkages. If this were the disaccharide of glucose-glucose, or maltose, then yeast could readily ferment this. However, yeast cannot hydrolyze maltodextrin and therefore cannot metabolize it. Aspartame is a dipeptide which can easily be fermented. Therefore, yeast will happily metabolize dextrose and aspartame, but not the other two agents. I don't have a definitive answer to this, but a little over a decade ago I was in an undergraduate lab that had a similar thing happen - a small amount of metabolism of a "control" group of bacteria fed artificial, sugar/calorie free sweeteners instead of sugar. Contamination.
Always a problem in laboratory experiments, and especially undergraduate lab experiments. Trace contaminants. "Zero calorie" isn't actually 0, it's just small, and hardy microorganisms may be able to do just fine with small amounts of nutrients. If there's another factor limiting their growth, that may result in the identical CO₂ results. Some intermediate compound or the like that is useful for metabolism. As you said, possibly the ability to use maltodextrin or dextrose.
CommonCrawl
Abstract: We review the results of the algebro-geometric approach to $4\times 4$ solutions of the Yang–Baxter equation. We emphasize some further geometric properties, connected with the double-reflection theorem, the Poncelet porism and the Euler–Chasles correspondence. We present a list of classifications in Mathematical Physics with a similar geometric background, related to pencils of conics. In the conclusion, we introduce a notion of discriminantly factorizable polynomials as a result of a computational experiment with elementary $n$-valued groups.
CommonCrawl
Why is the degree of dissociation for a weak acid $\alpha$, but for a weak base it's $1-\alpha$? The degree of dissociation of a weak acid is $\alpha$. However, when it comes to a weak base, the degree of dissociation is $1-\alpha$. What is the explanation behind this?
CommonCrawl
With M. Argerami, M. Kalantar, M. Kennedy, M. Lupini, and M. Sabok. Research report, 2014. Abstract: The classification of separable operator spaces and systems is commonly believed to be intractable. We analyze this belief from the point of view of Borel complexity theory. On one hand we confirm that the classification problems for arbitrary separable operator systems and spaces are intractable. On the other hand we show that the finitely generated operator systems and spaces are completely classifiable (or smooth); in fact a finitely generated operator system is classified by its complete theory when regarded as a structure in continuous logic. In the particular case of operator systems generated by a single unitary, a complete invariant is given by the spectrum of the unitary up to a rigid motion of the circle, provided that the spectrum contains at least 5 points. As a consequence of these results we show that the relation on compact subsets of $\mathbb C^n$, given by homeomorphism via a degree 1 polynomial, is smooth. Further notes: This article is the product of the BIRS focused research group titled "Borel complexity and the classification of operator systems", in the Summer of 2014.
CommonCrawl
When limits are involved, the left Riemann sum approaches the right Riemann sum, as $n \to \infty$.
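A quick numerical illustration of this claim (my own sketch, using the monotone example $f(x)=x^2$ on $[0,1]$): with $n$ equal subintervals the two sums differ by exactly $(f(b)-f(a))\,\Delta x$, which goes to $0$ as $n \to \infty$.

import numpy as np

def left_right_riemann(f, a, b, n):
    # Left and right Riemann sums of f on [a, b] with n equal subintervals.
    x = np.linspace(a, b, n + 1)
    dx = (b - a) / n
    return np.sum(f(x[:-1])) * dx, np.sum(f(x[1:])) * dx

f = lambda x: x**2
for n in (10, 100, 1000, 10000):
    left, right = left_right_riemann(f, 0.0, 1.0, n)
    print(n, left, right, right - left)    # the gap is (f(1) - f(0)) / n -> 0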
CommonCrawl
The quadratic equation 2x^2+4x+1=0 has roots α and β. Find the quadratic equation with roots α^2 and β^2. Help with solving the question would be very much appreciated! 2. You can use the factorised result to obtain values for the roots. 4. Try doing the reverse of steps 1-3 using new roots, where the new roots are calculated by taking the square of the old roots. It is possible to get the answer without actually finding $\displaystyle \alpha$ and $\displaystyle \beta$. Google "elementary symmetric functions". As 2x² + 4x + 1 = 0 has roots $\alpha$ and $\beta$, 2x² - 4x + 1 = 0 has roots $-\alpha$ and $-\beta$. Hence ±$\alpha$ and ±$\beta$ are the roots of (2x² + 4x + 1)(2x² - 4x + 1) = 0, i.e. 4x⁴ - 12x² + 1 = 0. It follows that $\alpha$² and $\beta$² are the roots of 4x² - 12x + 1 = 0. Could you please share your thought process regarding how you made such an indirect approach to the required quadratic, by presuming it to be the product of the given quadratic and the one obtained by replacing x by -x? Last edited by skipjack; July 10th, 2017 at 12:26 PM. so 2x² - 4x + 1 = 0 is satisfied by x = -$\alpha$ and x = -$\beta$. so ±$\alpha$ and ±$\beta$ are roots of (2x² + 4x + 1)(2x² - 4x + 1) = 0. so $\alpha$² and $\beta$² are roots of 4x² - 12x + 1 = 0. It's often useful to start on a problem by considering what simple observations you can make, and whether they can help you find a way to tackle the problem. so if they are zeros of a quadratic in x, x = ±$\alpha$ and x = ±$\beta$ are zeros of a corresponding quadratic in x². Now, one can easily see how to find that quadratic in x² by considering its zeros in the order $\alpha$, $\beta$, -$\alpha$, -$\beta$. In effect, I had found a way to reduce the problem to the much easier problem of finding a quadratic equation with roots -$\alpha$ and -$\beta$. Last edited by skipjack; July 11th, 2017 at 03:54 PM.
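For completeness, here is the symmetric-function route hinted at above, worked out (my own summary of the standard method): from $2x^2+4x+1=0$ we have $\alpha+\beta=-2$ and $\alpha\beta=\tfrac{1}{2}$, so $\alpha^2+\beta^2=(\alpha+\beta)^2-2\alpha\beta=4-1=3$ and $\alpha^2\beta^2=(\alpha\beta)^2=\tfrac{1}{4}$. A quadratic with roots $\alpha^2$ and $\beta^2$ is therefore $x^2-3x+\tfrac{1}{4}=0$, or equivalently $4x^2-12x+1=0$, which agrees with the product trick above.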
CommonCrawl
Abstract: We study the entropy production of the sandwiched Rényi divergence under a primitive Lindblad equation with GNS-detailed balance. We prove that the Lindblad equation can be identified as the gradient flow of the sandwiched Rényi divergence for any order $\alpha\in (0,\infty)$. This extends a previous result by Carlen and Maas [Journal of Functional Analysis, 273(5), 1810--1869] for the quantum relative entropy (i.e., $\alpha=1$). Moreover, we show that the sandwiched Rényi divergence with order $\alpha\in (0,\infty)$ decays exponentially fast under the time-evolution of such a Lindblad equation.
CommonCrawl
Abstract: The standard one-dimensional generalized model of a viscoelastic body and some of its special cases (the Voigt, Maxwell, Kelvin and Zener models) are considered. Based on the V. Volterra hypothesis of a hereditarily elastically deformable solid body and the method of structural modeling, the fractional analogues of the classical rheological models listed above are introduced. It is shown that if an initial V. Volterra constitutive relation uses the Abel-type kernel, the fractional derivatives arising in the constitutive relations will be the Riemann–Liouville derivatives on the interval. It is noted that in many works dealing with mathematical models of hereditarily elastic bodies, the authors use some fractional derivatives convenient for the integral transforms, for example, the Riemann–Liouville derivatives on the whole real number line or Caputo derivatives. The explicit solutions of initial value problems for the model fractional differential equations are not given. The correctness of the Cauchy problem is shown for some linear combinations of functions of stress and strain for constitutive relations in differential form with Riemann–Liouville fractional derivatives. Explicit solutions of the problem of creep at constant stress in the steps of loading and unloading are found. The continuous dependence of the solutions on the model fractional parameter is proved, in the sense that these solutions transform into the well-known solutions for classical rheological models when $\alpha\to1$. We note the persistence of instantaneous elastic deformation in the loading and unloading process for the fractional Maxwell, Kelvin and Zener models. The theorems on the existence and asymptotic properties of the solutions of the creep problem are presented and proved. A computer system identifying the parameters of the fractional mathematical model of the viscoelastic body is developed, and the accuracy of the approximations for experimental data and the visualization of solutions of creep problems are evaluated. Test data with constant tensile stresses of a polyvinyl chloride tube were used for experimental verification of the proposed models. The results of the calculated data based on the fractional analogue of the Voigt model are presented. There is a satisfactory agreement between the calculated and experimental data.
CommonCrawl
Abstract: A wide class of binary-state dynamics on networks---including, for example, the voter model, the Bass diffusion model, and threshold models---can be described in terms of transition rates (spin-flip probabilities) that depend on the number of nearest neighbors in each of the two possible states. High-accuracy approximations for the emergent dynamics of such models on uncorrelated, infinite networks are given by recently-developed compartmental models or approximate master equations (AME). Pair approximations (PA) and mean-field theories can be systematically derived from the AME. We show that PA and AME solutions can coincide under certain circumstances, and numerical simulations confirm that PA is highly accurate in these cases. For monotone dynamics (where transitions out of one nodal state are impossible, e.g., SI disease-spread or Bass diffusion), PA and AME give identical results for the fraction of nodes in the infected (active) state for all time, provided the rate of infection depends linearly on the number of infected neighbors. In the more general non-monotone case, we derive a condition---that proves equivalent to a detailed balance condition on the dynamics---for PA and AME solutions to coincide in the limit $t \to \infty$. This permits bifurcation analysis, yielding explicit expressions for the critical (ferromagnetic/paramagnetic transition) point of such dynamics, closely analogous to the critical temperature of the Ising spin model. Finally, the AME for threshold models of propagation is shown to reduce to just two differential equations, and to give excellent agreement with numerical simulations. As part of this work, Octave/Matlab code for implementing and solving the differential equation systems is made available for download.
CommonCrawl
Even though I am one of the founders of the MathWorks, I only acted as an advisor to the company for its first five years. During that time, from 1985 to 1989, I was trying my luck with two Silicon Valley computer startup companies. Both enterprises failed as businesses, but the experience taught me a great deal about the computer industry, and influenced how I viewed the eventual development of MATLAB. The first of these startups developed the Intel Hypercube. In 1981, Caltech Professor Chuck Seitz and his students developed one of the world's first parallel computers, which they called the Cosmic Cube. There were 64 nodes. Each node was a single board computer based on the Intel 8086 CPU and 8087 floating point coprocessor. These were the chips that were being used in the IBM PC at the time. There was room on the board of the Cosmic Cube for only 128 kilobytes of memory. Seitz's group designed a chip to handle the communication between the nodes. It was not feasible to connect a node directly to each of the 63 other nodes. That would have required $64^2 = 4096$ connections. Instead, each node was connected to only six other nodes. That required only $6 \times 64 = 384$ connections. Express each node's address in binary. Since there are $2^6$ nodes this requires six bits. Connect a node to the nodes whose addresses differ by one bit. This corresponds to regarding the nodes as the vertices of a six-dimensional cube and making the connections along the edges of the cube. So, these machines are called "hypercubes". The graphic shows the 16 nodes in a four-dimensional hypercube. Each node is connected to four others. For example, using binary, node 0101 is connected to nodes 0100, 0111, 0001, and 1101. Caltech Professor Geoffrey Fox and his students developed applications for the Cube. Among the first were programs for the branch of high energy physics known as quantum chromodynamics (QCD) and for work in astrophysics modeling the formation of galaxies. They also developed a formidable code to play chess. The first startup I joined wasn't actually in Silicon Valley, but in an offspring in Oregon whose boosters called it the Silicon Forest. By 1984, Intel had been expanding its operations outside California and had developed a sizeable presence in Oregon. Gordon Moore, one of Intel's founders and then its CEO, was a Caltech alum and on its Board of Trustees. He saw a demo of the Cosmic Cube at a Board meeting and decided that Intel should develop a commercial version. Two small groups of Intel engineers had already left the company and formed startups in Oregon. One of the co-founders of one of the companies was John Palmer, a former student of mine and one of the authors of the IEEE 754 floating point standard. Palmer's company, named nCUBE, was already developing a commercial hypercube. Hoping to dissuade more breakoff startups, Intel formed two "intrapreneurial" operations in Beaverton, Oregon, near Portland. The Wikipedia dictionary defines intrapreneurship to be "the act of behaving like an entrepreneur while working within a large organization." Justin Rattner was appointed to head one of the new groups, Intel Scientific Computers, which would develop the iPSC, the Intel Personal Supercomputer. UC Berkeley Professor Velvel Kahan was (and still is) a good friend of mine. He had been heavily involved with Intel (and Palmer) on the development of the floating point standard and the 8087 floating point chip. He recommended that Intel recruit me to join the iPSC group, which they did.
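As a small aside (my own sketch, in Python rather than anything that ran on the Cube), the bit-flip addressing described above is easy to play with: a node's neighbors are just its address XOR-ed with each power of two.

def hypercube_neighbors(node, dim):
    # Addresses of the nodes directly connected to `node` in a dim-dimensional hypercube.
    return [node ^ (1 << b) for b in range(dim)]

# Node 0101 in a four-dimensional hypercube:
print([format(n, '04b') for n in hypercube_neighbors(0b0101, 4)])
# -> ['0100', '0111', '0001', '1101']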
At the time in 1984, I had been chairman of the University of New Mexico Computer Science Department for almost five years. I did not see my future in academic administration. We had just founded The MathWorks, but Jack Little was quite capable of handling that by himself. I was excited by the prospect of being involved in a startup and learning more about parallel computing. So my wife, young daughter, and I moved to Oregon. This involved driving through Las Vegas and the story that I described in my Potted Palm blog post. As I drove north across Nevada towards Oregon, I was hundreds of miles from any computer, workstation or network connection. I could just think. I thought about how we should do matrix computation on distributed memory parallel computers when we got them working. I knew that the Cosmic Cube guys at Caltech had broken matrices into submatrices like the cells in their partial differential equations. But LINPACK and EISPACK and our fledgling MATLAB stored matrices by columns. If we preserved that column organization, it would be much easier to produce parallel versions of some of those programs. So, I decided to create distributed arrays by dealing their columns like they were coming from a deck of playing cards. If there are p processors, then column j of the array would be stored on the processor with identification number mod(j, p). Gaussian elimination, for example, would proceed in the following way. At the k-th step of the elimination, the node that held the k-th column would search it for the largest element. This is the k-th pivot. After dividing all the other elements in the column by the pivot to produce the multipliers, it would broadcast a message containing these multipliers to all the other nodes. Then, in the step that requires most of the arithmetic operations, all the nodes would apply the multipliers to their columns. This column oriented approach could be used to produce distributed memory parallel versions of the key matrix algorithms in LINPACK and EISPACK. Introduced in 1985, the iPSC was available in three models, the d5, d6, and d7, for 5, 6, and 7-dimensional hypercubes. The d5 had 32 nodes in the one cabinet pictured here. The d6 had 64 nodes in two of these cabinets, and the d7 had 128 nodes in four cabinets. The list prices ranged from 170 thousand dollars to just over half a million. Each node had an Intel 80286 CPU and an 80287 floating point coprocessor. These were the chips used in the IBM PC/AT, the "Advanced Technology" personal computer that was the fastest available at the time. There was 512 kB, that's half a megabyte, of memory. A custom chip handled the hypercube communication with the other nodes, via the backplane within a cabinet and via ethernet between cabinets. It was possible to replace half the nodes with boards that were populated with memory chips, 4 megabytes per board, to give 4.5 megabytes per node. That would turn one cabinet into a d4 with a total of 72 megabytes of memory. A year later another variant was announced that had boards with vector floating point processors. A front end computer called the Cube Manager was an Intel-built PC/AT microcomputer with 4 MBytes of RAM, a 140 MByte disk, and a VT100-compatible "glass teletype". The Manager had direct ethernet connections to all the nodes. We usually accessed it by remote login from workstations at our desks. The Manager ran XENIX, a derivative of UNIX System III. There were Fortran and C compilers.
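Before moving on to how we used the machine: the column-dealt elimination described a few paragraphs up can be sketched serially in a few lines. This is my own illustration in Python, not the original iPSC Fortran, with the "nodes" simulated as lists of column indices.

import numpy as np

def column_dealt_lu(A, p):
    # Serial simulation of the column-oriented elimination sketched above:
    # column j "lives on" node j mod p; at step k the owner of column k finds
    # the pivot and the multipliers, and every node updates its own columns.
    A = A.astype(float).copy()
    n = A.shape[0]
    cols = [[j for j in range(n) if j % p == node] for node in range(p)]
    for k in range(n - 1):
        piv = k + int(np.argmax(np.abs(A[k:, k])))   # done by node k % p
        A[[k, piv], :] = A[[piv, k], :]              # swap pivot row
        m = A[k + 1:, k] / A[k, k]                   # the multipliers, "broadcast" to all
        A[k + 1:, k] = m
        for node in range(p):                        # each node updates its own columns
            for j in cols[node]:
                if j > k:
                    A[k + 1:, j] -= m * A[k, j]
    return A                                         # multipliers below the diagonal, U above

print(column_dealt_lu(np.random.rand(6, 6), p=3))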
We would compile code, build an executable image, and download it to the Cube. There was a minimal operating system on the cube, which handled message passing between nodes. Messages sent between nodes that were not directly connected in the hypercube interconnect would have to pass through intermediate nodes. We soon had library functions that included operations like global broadcast and global sum. If you look carefully in the picture, you can see red and green LEDs on each board. These lights proved to be very useful. The green light was on when the node was doing useful computation and the red light was on when the node was waiting for something to do. You could watch the lights and get an idea of how a job was doing and even some idea of its efficiency. One day I was watching the lights on the machine and I was able to say "There's something wrong with node 7. It's out of sync with the others." We removed the board and, sure enough, a jumper had been set incorrectly so the CPU's clock was operating at 2/3 its normal rate. At my suggestion, a later model had a third, yellow, light that was on when the math coprocessor was being used. That way one could get an idea of arithmetic performance. In the computer manufacturing business, the date of First Customer Ship, or FCS, is a Big Deal. It's like the birth of a child. Our first customer was the Computer Science Department at Yale University and, in fact, they had ordered a d7, the big, 128-node machine. When the scheduled FCS date grew near, the machine wasn't quite ready. So we had Bill Gropp, who was then a grad student at Yale and a leading researcher in their computer lab, fly out from Connecticut to Oregon and spend several days in our computer lab. So it was FCS all right, but of the customer, not of the equipment. It was during this time that President Ronald Reagan's administration had proposed the Strategic Defense Initiative, SDI. The idea was to use both ground-based and space-based missiles to protect the US from attack by missiles from elsewhere. The proposal had come to be known by its detractors as the "Star Wars" system. One of the defense contractors working on SDI believed that decentralized, parallel computing was the key to the command and control system. If it was decentralized, then it couldn't be knocked out with a single blow. They heard about our new machine and asked for a presentation. Rattner, I, and our head of marketing went to their offices near the Pentagon. We were ushered into a conference room with the biggest conference table I had ever seen. There were about 30 people around the table, half of them in military uniforms. The lights were dim because PowerPoint was happening. I was about halfway through my presentation about how to use the iPSC when a young Air Force officer interrupts. He says, "Moler, Moler, I remember your name from someplace." I start to reply, "Well, I'm one of the authors of LINPACK and ..." He interrupts again, "No, I know that... It's something else." Pause. "Oh, yeh, nineteen dubious ways!" A few years earlier Charlie Van Loan and I had published "Nineteen dubious ways to compute the exponential of a matrix." It turns out that this officer had a Ph.D. in Mathematics and had been teaching control theory and systems theory at AFIT, the Air Force Institute of Technology in Ohio. So for the next few minutes we talked about eigenvalues and Jordan Canonical Forms while everybody else in the room rolled their eyes and looked at the ceiling.
In my next post, part 2, I will describe how this machine, the iPSC/1, influenced both MATLAB, and the broader technical computing community.
CommonCrawl
Hyperbolic quadratic matrix polynomials $Q(\lambda) = \lambda^2 A + \lambda B + C$ are an important class of Hermitian matrix polynomials with real eigenvalues, among which the overdamped quadratics are those with nonpositive eigenvalues. Neither the definition of overdamped nor any of the standard characterizations provides an efficient way to test if a given $Q$ has this property. We show that a quadratically convergent matrix iteration based on cyclic reduction, previously studied by Guo and Lancaster, provides necessary and sufficient conditions for $Q$ to be overdamped. For weakly overdamped $Q$ the iteration is shown to be generically linearly convergent with constant at worst 1/2, which implies that the convergence of the iteration is reasonably fast in almost all cases of practical interest. We show that the matrix iteration can be implemented in such a way that when overdamping is detected a scalar $\mu < 0$ is provided that lies in the gap between the $n$ largest and $n$ smallest eigenvalues of the $n \times n$ quadratic eigenvalue problem (QEP) $Q(\lambda)x = 0$. Once such a $\mu$ is known, the QEP can be solved by linearizing to a definite pencil that can be reduced, using already available Cholesky factorizations, to a standard Hermitian eigenproblem. By incorporating an initial preprocessing stage that shifts a hyperbolic $Q$ so that it is overdamped, we obtain an efficient algorithm that identifies and solves a hyperbolic or overdamped QEP maintaining symmetry throughout and guaranteeing real computed eigenvalues.
CommonCrawl
Notice that algorithms from the first two groups have to be called multiple times to get the full distance matrix. Now, you should be able to run the Packman example. This scenario is prepared with a working example which finds the shortest tour (TSP solution) based on the Euclidean distance. However, the obstacles in the environment are not considered and, therefore, the final trajectory collides with the walls, see the image above. Our task is to replace the Euclidean distance by the shortest feasible trajectory in the maze. Since the original Pac-man can move only in 4 directions, we also utilize the same 4 moves during the planning phase and discard all diagonal moves. In the prepared codes, a naive implementation is provided using A* from previous lectures. The A* algorithm needs to be called $\mathcal{O}(n^2)$ times, which is the main drawback of this approach. Use a 4-neighborhood for planning. Solve the TSP instance of the created distance matrix using the existing LKH Solver. Create the final path from the found sequence by connecting the corresponding shortest paths between the visited dots. An example of a closed-loop path of 50 goals (left) and an open-loop path of 10 goals (right). Use an 8-neighborhood for planning. First, find the goal sequence by the ETSP. Then, find the final trajectory. First, find the goal sequence by the TSP utilizing the found shortest paths between the centres of the given regions. Then, find the final trajectory as the shortest tour connecting the neighborhood samples in the given sequence.
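A sketch of building that distance matrix (my own illustration; the grid representation, the goal list and the integration with the provided codes are assumptions): since the maze grid is unweighted, one breadth-first flood fill per goal with the 4-neighborhood gives the shortest-path distances to all other goals, which is cheaper than running A* separately for every pair.

from collections import deque

def bfs_distances(grid, start):
    # Shortest 4-neighborhood path lengths from `start` to every free cell;
    # grid[r][c] is True for a free cell and False for a wall.
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def distance_matrix(grid, goals):
    # One BFS per goal; unreachable goals get infinity.
    return [[bfs_distances(grid, g).get(h, float("inf")) for h in goals] for g in goals]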
CommonCrawl
Is it true that if a Sudoku puzzle has the following features there will be no repetitions in rows, columns and $3 \times 3$ subsquares? If so, why? Is there a mathematical proof? If not, why? Is there a case where these conditions are satisfied, but there is at least one repetition? If all the cells are distinct 1 through 9 then the sum is $1+2+\dots+9 = 45$. But there is utterly no reason on earth to assume the converse, that is, that if $a+b+\dots+i = 45$ then they are all distinct. For any $b,\dots,h$ with $b+\dots+h = N$ we can have $a$ be any $1 \le a \le 45-N$ and $i = 45-N-a$. And we can determine values for the other rows and columns. Yes, it takes a bit of thought to actually work this out, but there is no reason that keeping them distinct will be a requirement. Let's suppose for instance we have a grid labeled A1, ..., A9, ..., I1, ..., I9 where every row, column and quadrant adds up to 45. Then let's say we replace mk (where $A \le m \le I$ and $1\le k \le 9$) with mk + 1. Then we replace mj in the same row and quadrant with mj - 1, replace nk in the same column and quadrant with nk - 1, and nj with nj + 1. Then all the quadrants, columns and rows still add to 45, but the entries need no longer be distinct. Note, the sums must be the same but the values need not be distinct. Imagine each digit is a 5; then all the summations to 45 are met and we clearly have repetition. All that's necessary is a pattern with an average of 5 to pull this off.
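A quick check of the all-fives counterexample above (my own snippet; it only verifies the sums, which is the whole point):

import numpy as np

grid = np.full((9, 9), 5)                      # "imagine each digit is a 5"
print(np.all(grid.sum(axis=1) == 45),          # every row sums to 45
      np.all(grid.sum(axis=0) == 45),          # every column sums to 45
      all(grid[r:r+3, c:c+3].sum() == 45       # every 3x3 subsquare sums to 45
          for r in (0, 3, 6) for c in (0, 3, 6)))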
CommonCrawl
Abstract: We show that the eight-dimensional instanton solution, which satisfies the self-duality equation $F \wedge F = *_8 F \wedge F$, realizes the static Skyrmion configuration in eight dimensions through the Atiyah-Manton construction. The relevant energy functional of the Skyrme field is obtained by the formalism developed by Sutcliffe. By comparing the Skyrmion solution associated with the extremum of the energy with the Atiyah-Manton solution constructed from the instantons, we find that they agree with high accuracy. This is a higher-dimensional analogue of the Atiyah-Manton construction of Skyrmions in four dimensions. Our result indicates that the instanton/Skyrmion correspondence seems to be a universal property in $4k \ (k=1, 2, \ldots)$ dimensions.
CommonCrawl
Let $V$ be a finite-dimensional vector space, let $U_1,\dots,U_n$ be subspaces, and let $L$ be the lattice they generate; namely, the smallest collection of subspaces containing the $U_i$ and closed under intersections and sums. Is $L$ finite? This is well-known to hold if $n\le3$: if $n=3$ then there are at most $28$ elements in $L$, independently of $V$'s dimension. Note that I suspect the answer to be "no" if $V$ is allowed to be infinite-dimensional: there exist infinite modular lattices generated by $4$ elements; here $L$ is a bit more than modular ("arguesian", see Is the free modular lattice linear?) and finiteness of free arguesian lattices doesn't seem to be known. It would be nice to have an example. The answer is in fact "no", and appears in another MO post, How many subspaces are generated by three or more subspaces in a Hilbert space?: starting from four points in $P^2(\mathbb R)$, infinitely many points may be generated by intersecting lines and joining points.
CommonCrawl
How would I calculate $C_L$ and $C_D$ for a VTOL aircraft while it is in its takeoff/landing stage? The particular aircraft is a fixed-wing/quadcopter hybrid, so I know that I can treat this as a thin rectangular plate problem. However, because the aircraft takes off vertically, I do not understand which formulas to use to calculate these coefficients. If I use $C_L = 2\pi\alpha$ with $\alpha = \pi/2$, then I end up with $C_L = 9.87$, which just doesn't make sense.
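The thin-airfoil result $C_L = 2\pi\alpha$ is a small-angle linearization and is not meaningful anywhere near $\alpha = \pi/2$. Purely as an assumption-laden illustration (this is not from the question, and the constant is a placeholder), one rough flat-plate model sometimes used for post-stall and hover-transition estimates resolves a normal-force coefficient into lift and drag components:

import numpy as np

# Hedged sketch: a crude flat-plate model for large angles of attack.
# Assumptions (not from the original question):
#   - the wing is treated as a flat plate with normal-force coefficient
#     Cn(alpha) = Cd90 * sin(alpha), where Cd90 (~1.2-2.0, aspect-ratio dependent)
#     is the drag coefficient of the plate held normal to the flow;
#   - lift and drag are then the projections Cl = Cn*cos(alpha), Cd = Cn*sin(alpha).
# Real VTOL transition aerodynamics (rotor wash, unsteady effects) are far richer.
CD90 = 1.2  # placeholder value

def flat_plate_coeffs(alpha_rad):
    cn = CD90 * np.sin(alpha_rad)
    cl = cn * np.cos(alpha_rad)
    cd = cn * np.sin(alpha_rad)
    return cl, cd

for deg in (5, 30, 60, 90):
    cl, cd = flat_plate_coeffs(np.radians(deg))
    print(f"alpha={deg:2d} deg  Cl={cl:5.2f}  Cd={cd:5.2f}")
# At alpha = 90 deg this gives Cl ~ 0 and Cd ~ CD90, unlike the 2*pi*alpha formula.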
CommonCrawl
What I had in mind was to extract a series of frequency bands and map the power estimate to the intensity of one or many actuators. For instance, very low frequencies could control the center of the matrix, the range from C3 to B6 would be mapped to arbitrarily selected actuators, and very high frequencies would be spread all across the matrix, so that sounds with strong high-frequency content, such as synth sounds, would give a "buzz" effect. The strategy will have to be refined with some trial and error, I guess, and a lot of human interpretation.
1. Doing an STFT (or a moving DFT, as this will be real time): I could multiply the STFT result with a mapping matrix of size $N \times M$ (where $N=$ number of FFT bins and $M=$ number of actuators) to get a vector of intensity values that could be sent to the hardware controlling the actuators. The amount of calculation will of course depend on the number of non-zero values in the matrix, which I suspect will be high.
2. Wavelet transform: probably a more efficient way of doing #1, trading precision at high frequency for fewer CPU operations. Still, this would imply downsampling and filtering as well.
3. Doing a filter bank of bandpass filters: I did a quick estimate of the number of operations needed. With an IIR bandpass filter with 40 dB attenuation at the neighbouring semitone, I find roughly 16 multiplications and ~10 additions per sample at low frequency, and 30 multiplications and 20 additions at high frequency. This can be approximated as ~1350 multiplications and 720 additions per sample if we consider 48 actuators and the average of the low- and high-frequency requirements.
So, the question goes as follows: what technique should I mostly look at? Even if there are still requirements to define, is there a technique that should be avoided? Or maybe a similar application? Right now, the wavelet approach seems to me like the most appealing one. Note that I may move some load to an FPGA if I get too short on CPU power.
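As a rough illustration of option 1 (my own sketch, not from the post; the sample rate, FFT size and mapping weights are made-up placeholders), mapping each STFT power frame to actuator intensities is a single matrix-vector product per hop:

import numpy as np

# Illustrative sketch (assumed parameters): map STFT bin powers to actuator intensities.
fs = 48_000          # sample rate (assumption)
n_fft = 1024         # FFT size -> N = n_fft//2 + 1 bins
n_act = 48           # number of actuators (M)

rng = np.random.default_rng(0)
signal = rng.standard_normal(fs)          # stand-in for one second of audio

n_bins = n_fft // 2 + 1
# Hypothetical mapping matrix: each actuator listens to one contiguous band of bins.
mapping = np.zeros((n_bins, n_act))
edges = np.linspace(0, n_bins, n_act + 1, dtype=int)
for m in range(n_act):
    mapping[edges[m]:edges[m + 1], m] = 1.0

hop = n_fft // 2
window = np.hanning(n_fft)
for start in range(0, len(signal) - n_fft, hop):
    frame = signal[start:start + n_fft] * window
    power = np.abs(np.fft.rfft(frame)) ** 2      # N power values
    intensities = power @ mapping                # M actuator intensities
    # send `intensities` to the hardware controlling the actuators here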
CommonCrawl
I use the \mathabx package because I much prefer its mathematical symbols, except for the empty set symbol ($\emptyset$ in LaTeX). I would like to replace it with the standard Computer Modern symbol, but I don't know how to do it. There are some examples of importing particular symbols, but I don't understand them. It has been taken from the question The standard \cup vs. the mathabx \cup. The question Importing a Single Symbol From a Different Font also has similar code. How can I change it to get the standard empty set symbol? For completeness, I provide the original solution, on the assumption that the font needed is not already loaded by TeX. Afterward, I show a streamlined solution, since cmsy (the Computer Modern symbol font family) has already been loaded by default. Here, after mathabx is loaded, I declare and define the cmsy font family, found on p. 431 of the TeXbook. cmsy is the name by which the Computer Modern symbol font family is known to TeX. It is called a font "family" because the font is provided in different sizes, and \DeclareFontShape tells which glyph set (member of the family) to use depending on the font size that is requested. I create a new symbol font Xcmsy that points to the cmsy font family. I then declare the symbol \cmemptyset to be of category \mathord, found in slot 59 of the Xcmsy font. If you uncomment the two fonttable lines of code, you will see the cmsy font printed out in tabular form, and can verify that the empty set glyph is found at slot 59. The answer below is inspired by the way the author of the mathabx package, Anthony Phan, retains the old \emptyset symbol (which he calls \voidset) while loading his mathabx package (see the TeX source of its documentation).
CommonCrawl
I am running linear regression models and wondering what the conditions are for removing the intercept term. In comparing results from two different regressions where one has the intercept and the other does not, I notice that the $R^2$ of the function without the intercept is much higher. Are there certain conditions or assumptions I should be following to make sure the removal of the intercept term is valid? The shortest answer: never, unless you are sure that your linear approximation of the data generating process (linear regression model) either by some theoretical or any other reasons is forced to go through the origin. If not the other regression parameters will be biased even if intercept is statistically insignificant (strange but it is so, consult Brooks Introductory Econometrics for instance). Finally, as I do often explain to my students, by leaving the intercept term you insure that the residual term is zero-mean. For your two models case we need more context. It may happen that linear model is not suitable here. For example, you need to log transform first if the model is multiplicative. Having exponentially growing processes it may occasionally happen that $R^2$ for the model without the intercept is "much" higher. Screen the data, test the model with RESET test or any other linear specification test, this may help to see if my guess is true. And, building the models highest $R^2$ is one of the last statistical properties I do really concern about, but it is nice to present to the people who are not so well familiar with econometrics (there are many dirty tricks to make determination close to 1 :)). Removing the intercept is a different model, but there are plenty of examples where it is legitimate. Answers so far have already discussed in detail the example where the true intercept is 0. I will focus on a few examples where we may be interested in an atypical model parametrization. Example 1: The ANOVA-style Model. For categorical variables, we typically create binary vectors encoding group membership. The standard regression model is parametrized as intercept + k - 1 dummy vectors. The intercept codes the expected value for the "reference" group, or the omitted vector, and the remaining vectors test the difference between each group and the reference. But in some cases, it may be useful to have each groups' expected value. Example 2: The case of standardized data. In some cases, one may be working with standardized data. In this case, the intercept is 0 by design. I think a classic example of this was old style structural equation models or factor, which operated just on the covariance matrices of data. In the case below, it is probably a good idea to estimate the intercept anyway, if only to drop the additional degree of freedom (which you really should have lost anyway because the mean was estimated), but there are a handful of situations where by construction, means may be 0 (e.g., certain experiments where participants assign ratings, but are constrained to give out equal positives and negatives). Example 3: Multivariate Models and Hidden Intercepts. This example is similar to the first in many ways. In this case, the data has been stacked so that two different variables are now in one long vector. A second variable encodes information about whether the response vector, y, belongs to mpg or disp. In this case, to get the separate intercepts for each outcome, you suppress the overall intercept and include both dummy vectors for measure. 
This is a sort of multivariate analysis. It is not typically done using lm() because you have repeated measures and should probably allow for the nonindepence. However, there are some interesting cases where this is necessary. For example when trying to do a mediation analysis with random effects, to get the full variance covariance matrix, you need both models estimated simultaneously, which can be done by stacking the data and some clever use of dummy vectors. I am not arguing that intercepts should generally be removed, but it is good to be flexible. Several people make the point that you should be certain the intercept must be 0 (for theoretical reasons) before dropping it, and not just that it isn't 'significant'. I think that's right, but it's not the whole story. You also need to know that the true data generating function is perfectly linear throughout the range of $X$ that you are working with and all the way down to 0. Remember that it is always possible that the function is approximately linear within your data, but actually slightly curving. It may be quite reasonable to treat the function as though it were linear within the range of your observations, even if it isn't perfectly so, but if it isn't and you drop the intercept you will end up with a worse approximation to the underlying function even if the true intercept is 0. You shouldn't drop the intercept, regardless of whether you are likely or not to ever see all the explanatory variables having values of zero. There's a good answer to a very similar question here. If you remove the intercept then the other estimates all become biased. Even if the true value of the intercept is approximately zero (which is all you can conclude from your data), you are messing around with the slopes if you force it to be exactly zero. UNLESS - you are measuring something with a very clear and obvious physical model that demands intercept be zero (eg you have height, width and length of a rectangular prism as explanatory variables and the response variable is volume with some measurement error). If your response variable is value of the house, you definitely need to leave the intercept in. You can leave out the intercept when you know it's 0. That's it. And no, you can't do it because it's not significantly different from 0, you have to know it's 0 or your residuals are biased. And, in that case it is 0 so it won't make any difference if you leave it out... therefore, never leave it out. The finding you have with $R^2$ suggests the data are not linear. And, given that you had area as a predictor that particular one is probably definitely not linear. You could transform the predictor to fix that. In a simple regression model, the constant represents the Y-intercept of the regression line, in unstandardized form. In a multiple regression model, the constant represents the value that would be predicted for the dependent variable if all the independent variables were simultaneously equal to zero--a situation which may not physically or economically meaningful. If you are not particularly interested in what would happen if all the independent variables were simultaneously zero, then you normally leave the constant in the model regardless of its statistical significance. In addition to ensuring that the in-sample errors are unbiased, the presence of the constant allows the regression line to "seek its own level" and provide the best fit to data which may only be locally linear. 
the constant is redundant with the set of independent variables you wish to use. An example of case (1) would be a model in which all variables--dependent and independent--represented first differences of other time series. If you are regressing the first difference of Y on the first difference of X, you are directly predicting changes in Y as a linear function of changes in X, without reference to the current levels of the variables. In this case it might be reasonable (although not required) to assume that Y should be unchanged, on the average, whenever X is unchanged--i.e., that Y should not have an upward or downward trend in the absence of any change in the level of X. An example of case (2) would be a situation in which you wish to use a full set of seasonal indicator variables--e.g., you are using quarterly data, and you wish to include variables Q1, Q2, Q3, and Q4 representing additive seasonal effects. Thus, Q1 might look like 1 0 0 0 1 0 0 0 ..., Q2 would look like 0 1 0 0 0 1 0 0 ..., and so on. You could not use all four of these and a constant in the same model, since Q1+Q2+Q3+Q4 = 1 1 1 1 1 1 1 1 . . . . , which is the same as a constant term. I.e., the five variables Q1, Q2, Q3, Q4, and CONSTANT are not linearly independent: any one of them can be expressed as a linear combination of the other four. A technical prerequisite for fitting a linear regression model is that the independent variables must be linearly independent; otherwise the least-squares coefficients cannot be determined uniquely, and we say the regression "fails." A word of warning: R-squared and the F statistic do not have the same meaning in an RTO model as they do in an ordinary regression model, and they are not calculated in the same way by all software. See this article for some caveats. You should not try to compare R-squared between models that do and do not include a constant term, although it is OK to compare the standard error of the regression. Note that the term "independent" is used in (at least) three different ways in regression jargon: any single variable may be called an independent variable if it is being used as a predictor, rather than as the predictee. A group of variables is linearly independent if no one of them can be expressed exactly as a linear combination of the others. A pair of variables is said to be statistically independent if they are not only linearly independent but also utterly uninformative with respect to each other. In a regression model, you want your dependent variable to be statistically dependent on the independent variables, which must be linearly (but not necessarily statistically) independent among themselves. Full revision of my thoughts. Indeed dropping the intercept will cause a bias problem. Have you considered centering your data so an intercept would have some meaning and avoid explaining how some (unreasonable) values could give negative values? If you adjust all three explanatory variables by subtract the mean sqrft, mean lotsize and mean bath, then the intercept will now indicate the value (of a house?) with average sdrft, lotsize, and baths. This centering will not change the relative relationship of the independent variables. So, fitting the model on the centered data will still find baths as insignificant. Refit the model without the bath included. You may still get a large p-value for the intercept, but it should be included and you will have a model of the form y=a+b(sqrft)+c(lotsize). 
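As a quick numerical illustration of the $R^2$ comparison raised in the question (my own sketch with made-up data, not part of the original thread; it only needs numpy): the "no-intercept" $R^2$ is conventionally computed against a baseline of zero rather than the mean, so it can look much larger even when the fit is clearly worse.

import numpy as np

# Synthetic data with a clearly non-zero true intercept.
rng = np.random.default_rng(1)
x = rng.uniform(10, 20, 200)
y = 50 + 2 * x + rng.normal(0, 3, 200)

# With intercept: regress y on [1, x].
X1 = np.column_stack([np.ones_like(x), x])
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
resid1 = y - X1 @ b1
r2_with = 1 - np.sum(resid1**2) / np.sum((y - y.mean())**2)

# Without intercept: regress y on x alone; the usual "uncentered" R^2
# compares against a baseline of 0, not the mean of y.
b0, *_ = np.linalg.lstsq(x[:, None], y, rcond=None)
resid0 = y - x * b0
r2_without = 1 - np.sum(resid0**2) / np.sum(y**2)

print(f"R^2 with intercept   : {r2_with:.3f}")
print(f"R^2 without intercept: {r2_without:.3f}  (larger, yet the slope is badly biased)")
print(f"slopes: with={b1[1]:.2f}, without={b0[0]:.2f}, true=2.00")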
I just spent some time answering a similar question posted by someone else, but it was closed. There are some great answers here, but the answer I provide is a bit simpler. It might be more suited to people who have a weak understanding of regression. Q1: How do I interpret the intercept in my model? where y is the predicted value of your outcome measure (e.g., log_blood_hg), b0 is the intercept, b1 is the slope, x is a predictor variable, and ϵ is residual error. The intercept (b0) is the predicted mean value of y when all x = 0. In other words, it's the baseline value of y, before you've used any variables (e.g., species) to further minimise or explain the variance in log_blood_hg. By adding a slope (which estimates how a one-unit increase/decrease in log_blood_hg changes with a one unit increase in x, e.g., species), we add to what we already know about the outcome variable, which is its baseline value (i.e. intercept), based on change in another variable. Q2: When is it appropriate to include or not include the intercept, especially in regards to the fact that the models give very different results? For simple models like this, it's never really appropriate to drop the intercept. The models give different results when you drop the intercept because rather than grounding the slope in the baseline value of Y, it is forced to go through the origin of y, which is 0. Therefore, the slope gets steeper (i.e. more powerful and significant) because you've forced the line through the origin, not because it does a better job of minimizing the variance in y. In other words, you've artificially created a model which minimizes the variance in y by removing the intercept, or the initial grounding point for your model. There are cases where removing the intercept is appropriate - such as when describing a phenomenon with a 0-intercept. You can read about that here, as well as more reasons why removing an intercept isn't a good idea. Short answer: (almost) NEVER. In the linear regression model $$ y = \alpha + \beta x + \epsilon $$, if you set $\alpha=0$, then you say that you KNOW that the expected value of $y$ given $x=0$ is zero. You almost never know that. $R^2$ becomes higher without intercept, not because the model is better, but because the definition of $R^2$ used is another one! $R^2$ is an expression of a comparison of the estimated model with some standard model, expressed as reduction in sum of squares compared to sum of squares with the standard model. In the model with intercept, the comparison sum of squares is around the mean. Without intercept, it is around zero! The last one is usually much higher, so it easier to get a large reduction in sum of squares. Conclusion: DO NOT LEAVE THE INTERCEPT OUT OF THE MODEL (unless you really, really know what you are doing). Some exceptions: One exception is a regression representing a one-way ANOVA with dummies for ALL the factor levels (usually one is left out) (but that is only seemingly an exception, the constant vector 1 is in the column space of the model matrix $X$.) Otherwise, such as physical relationships $s=v t$ where there are no constant. But even then, if the model is only approximate (speed is not really constant), it might be better to leave in a constant even if it cannot be interpreted. There are also special models which leave out the intercept. One example is paired data, twin studies. Not the answer you're looking for? Browse other questions tagged regression linear-model r-squared intercept or ask your own question. 
CommonCrawl
This jar used to hold perfumed oil. It contained enough oil to fill granid silver bottles. Each bottle held enough to fill ozvik golden goblets and each goblet held enough to fill vaswik crystal spoons. Each day a spoonful was used to perfume the bath of a beautiful princess. For how many days did the whole jar last? The genie's master replied: Five hundred and ninety five days. What three numbers do the genie's words granid, ozvik and vaswik stand for? This cube is made up from $3\times 3\times 3$ little cubes whose faces are either all red or all yellow. The views from all sides of the cube look like this, and the little cube in the centre is red. How many little red cubes are used in total? How many little yellow cubes are used? Suppose the other views of the cube do not necessarily look like this, and the little cube in the centre is not necessarily red. What are the greatest and least numbers of little red cubes that could be used?
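For the genie puzzle, the three numbers must multiply to 595; a tiny brute-force check (my own sketch, not part of the original page) lists the candidate triples of whole numbers greater than 1:

# Find ordered triples (a, b, c) with a*b*c = 595 and every factor greater than 1.
target = 595
triples = [
    (a, b, target // (a * b))
    for a in range(2, target + 1)
    for b in range(2, target // a + 1)
    if target % (a * b) == 0 and target // (a * b) > 1
]
print(triples)  # permutations of (5, 7, 17), since 595 = 5 * 7 * 17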
CommonCrawl
in order to change the limit phase value to be 0. We first define the system of ODEs in terms of symbolic variables. m = 10; %% [m]: number of oscillators, which we may change later. th_eq = zeros(m,1); %% The Jacobian is evaluated at the equilibrium, [0;0;0;0]. We next set the physical parameters. The natural frequencies, however, we set to zero in order to apply the linear-quadratic control model. We then save it in the folder 'functions/coupling.mat'. The cost is only for the symmetric quadrature at $[0,0,0]$, since we want all of the phases to be 0. Practically, any $K$ with positive elements can make the limit point 0, e.g. K=[1,1,1]. It is stored in the functions folder, 'functions/ini.mat'. 1) Simulate the nonlinear dynamics without any control: $u = 0$. plot(timenc, statenc(:,1)/pi(),'-k','LineWidth',2) %% The first oscillator is black-colored. ylabel('Phases [\pi]') %% The unit is $\pi$. In Figure 1, the limit point is not zero since the mean value of the initial phases is nonzero. Figure 2 shows that the first oscillator keeps its phase above zero in order to make the final phases zero. The nonlinear model decays more slowly than the linearized model, but still goes to zero in Figure 3. Here the initial data is uniformly distributed on $[-\pi,\pi]$, so we expect the limit points to be separated on the real line by differences of $2\pi$. We can see that $\theta_1$ does not tend to zero in Figure 5, since $K\times\Theta$ is already near zero. Linear feedback control is not enough, or too weak, for this setting.
CommonCrawl
Abstract: We develop and test an automated technique to model the dynamics of interacting galaxy pairs. We use Identikit (Barnes & Hibbard 2009, Barnes 2011) as a tool for modeling and matching the morphology and kinematics of interacting pairs of equal-mass galaxies. In order to reduce the effect of subjective human judgement, we automate the selection of the phase-space regions used to match simulations to data, and we explore how the selection of these regions affects the random uncertainties of parameters in the best-fit model. In this work, we use an independent set of GADGET SPH simulations as input data to determine the systematic bias in the measured encounter parameters based on the known initial conditions of these simulations. We test both cold gas and young stellar components in the GADGET simulations to explore the effect of choosing HI vs. H$\alpha$ as the line-of-sight velocity tracer. We find that we can group the results into tests with good, fair, and poor convergence based on the distribution of parameters of models close to the best-fit model. For tests with good and fair convergence, we rule out large fractions of parameter space and recover merger stage, eccentricity, pericentric distance, viewing angle, and initial disc orientations within 3$\sigma$ of the correct value. All of the tests on prograde-prograde systems have either good or fair convergence. The results of tests on edge-on discs are less biased than those on face-on discs. Retrograde and polar systems do not converge and may require constraints from regions other than the tidal tails and bridges.
CommonCrawl
The square numbers $ 1^2, 2^2, 3^2, 4^2, \cdots, 100^2, 101^2 $ are written on the blackboard. Each minute any two numbers are wiped out, and the absolute value of their difference is written instead. At the end only one number remains. What is the smallest value that this final number can take? Is a lower number possible? The only other lower number would be a $0$. But this is not possible because we start with an odd count of odd numbers. We can only remove odd numbers in pairs, which leaves us with an odd number at the end. The lowest non-negative odd number is $1$. The answer is the number 1. This is the lowest number possible: a move does not change the parity of the number of odd integers on the blackboard. As the blackboard contains $51$ odd integers at the beginning, it will also contain an odd number of odd integers when the process terminates. Hence the final number will always be odd. Where the bolded area indicates the part that's different from Joe Z's method. If you erase $n^2$ and $(n+1)^2$, starting from $n = 2$ and going up by $2$, you end up with the numbers $1, 5, 9, \ldots, 201$, which is a total of 51 numbers. If you then erase pairs of consecutive numbers starting from $5$ and $9$, you end up with a single $1$ and 25 $4$'s. If you erase consecutive pairs of $4$'s until they become $0$, you're left with one $1$ and one $4$, which gives you a final number of $3$. The question is, what's the lowest number possible from subtracting pairs until one number remains... the series is $n^2$ for $n = 1$ to $101$. OK, I got it down to 1. So subtract pairs from $101^2$ down to $16^2$, then go through that list and match the pairs that are closest, e.g. $(99^2-98^2)-14^2$. Go through the list and reduce like so; you will find all the squares will cancel to 0 or 1. Then you can subtract the remainder by repetition. Iterate a few times and you can get down to 1... perhaps 0. No time for a mathematical proof, I used Excel.
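A quick computational sanity check of the parity argument (my own sketch, not from the thread): the count of odd numbers on the board never changes parity, and even a naive greedy simulation ends on a small odd number.

# The 101 squares written on the blackboard.
squares = [n * n for n in range(1, 102)]

# Parity invariant: an odd count of odd numbers forces an odd final value.
print(sum(s % 2 for s in squares))  # 51 odd squares, so the final number must be odd

# Greedy simulation: repeatedly replace the two largest numbers by their difference.
board = sorted(squares)
while len(board) > 1:
    a, b = board.pop(), board.pop()   # the two largest values
    board.append(abs(a - b))
    board.sort()
print(board[0])  # greedy play leaves a small odd number; a smarter pairing reaches 1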
CommonCrawl
19/05/2018 - How to Win at Connect 4. Connect 4 is a two-player strategy game that can be played on a computer or with a board and disks. The board is made up of horizontal and vertical columns that contain slots, and each player takes ... In the game Connect Four with a $7 \times 6$ grid like in the image below, how many game situations can occur? Rules: Connect Four [...] is a two-player game in which the players first choose a color and then take turns dropping colored discs from the top into a seven-column, six-row vertically-suspended grid. 25 Revision: Sounds like 4 teams would play while one team would sit out, so that would make 5 weeks for each team. Guess there would be something like 16 different outcomes without the "bye" week/game, and with the "bye" week/game that could freely move into any week/game the outcomes would bump up to 80. 25 just didn't sound right.
CommonCrawl
We study the geometry of germs of singular surfaces in $\mathbb R^3$ whose parametrisations have an $\mathcal A$-singularity of $\mathcal A_e$-codimension less than or equal to 3, via their contact with planes. These singular surfaces occur as projections of smooth surfaces in $\mathbb R^4$ to $\mathbb R^3$. We recover some aspects of the extrinsic geometry of these surfaces in $\mathbb R^4$ from those of the images of their projections.
CommonCrawl
Activation functions are widely used in almost every type of neural network. Feed-forward neural networks, Convolutional Neural Networks and Recurrent Neural Networks all use activation functions. There are a lot of different activation functions that can be used for different purposes. An activation function is used to separate active and inactive neurons depending on a rule or function; depending on their status, the neurons will modify their values or not. This activation function does not produce any change to the input value: the output is the same as the input. It's a simple piecewise function. It's very similar to the logistic function, but it tends to -1 at $-\infty$ and to 1 at $\infty$. Similar to ReLU, but with an exponential curve on the negative X axis. In this function the Y value always increases. Finally, we have the softmax function, which is mainly used for output layers, especially in the Convolutional Neural Network. It's explained in the Convolutional Neural Network tutorial in the Output layer section.
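The individual function names seem to have been lost from the descriptions above, but several of them match standard activations. As an illustrative sketch only (my guesses that the functions meant are the identity, tanh, ELU and softmax are assumptions), here are minimal NumPy implementations:

import numpy as np

def identity(x):
    # "will not produce any change to the input value"
    return x

def tanh(x):
    # "similar to the logistic function but tends to -1 at -inf and to 1 at +inf"
    return np.tanh(x)

def elu(x, alpha=1.0):
    # "similar to ReLU but with an exponential curve on the negative X axis"
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

def softmax(x):
    # "mainly used for the output layers"; subtract the max for numerical stability
    e = np.exp(x - np.max(x))
    return e / e.sum()

z = np.array([-2.0, -0.5, 0.0, 1.5])
print(identity(z), tanh(z), elu(z), softmax(z), sep="\n")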
CommonCrawl
Abstract: Particle capture by a slowly varying one-dimensional periodic potential is studied by the method of averaging. For large time intervals $t\sim 1/\alpha$ ($\alpha$ is the small parameter which characterizes the rate of change of the potential), including the point of intersection with the separatrix, the solution is constructed up to the first correction terms relative to the leading term. The increment $\Delta I$ of the action over a complete evolution interval is also calculated to leading order in $\alpha$.
CommonCrawl
Numerical data can be sorted in increasing or decreasing order. Thus the values of a numerical data set have a rank order. A percentile is the value at a particular rank. For example, if your score on a test is on the 95th percentile, a common interpretation is that only 5% of the scores were higher than yours. The median is the 50th percentile; it is commonly assumed that 50% the values in a data set are above the median. But some care is required in giving percentiles a precise definition that works for all ranks and all lists. To see why, consider an extreme example where all the students in a class score 75 on a test. Then 75 is a natural candidate for the median, but it's not true that 50% of the scores are above 75. Also, 75 is an equally natural candidate for the 95th percentile or the 25th or any other percentile. Ties – that is, equal data values – have to be taken into account when defining percentiles. You also have to be careful about exactly how far up the list to go when the relevant index isn't clear. For example, what should be the 87th percentile of a collection of 10 values? The 8th value of the sorted collection, or the 9th, or somewhere in between? In this section, we will give a definition that works consistently for all ranks and all lists. Before giving a general definition of all percentiles, we will define the 80th percentile of a collection of values to be the smallest value in the collection that is at least as large as 80% of all of the values. For example, let's consider the sizes of the five largest continents – Africa, Antarctica, Asia, North America, and South America – rounded to the nearest million square miles. The 80th percentile is a value on the list, namely 12. You can see that 80% of the values are less than or equal to it, and that it is the smallest value on the list for which this is true. Analogously, the 70th percentile is the smallest value in the collection that is at least as large as 70% of the elements of sizes. Now 70% of 5 elements is "3.5 elements", so the 70th percentile is the 4th element on the list. That's 12, the same as the 80th percentile for these data. The percentile function takes two arguments: a rank between 0 and 100, and a array. It returns the corresponding percentile of the array. Let $p$ be a number between 0 and 100. The $p$th percentile of a collection is the smallest value in the collection that is at least as large as p% of all the values. By this definition, any percentile between 0 and 100 can be computed for any collection of values, and it is always an element of the collection. Sort the collection in increasing order. Find p% of n: $(p/100) \times n$. Call that $k$. If $k$ is an integer, take the $k$th element of the sorted collection. If $k$ is not an integer, round it up to the next integer, and take that element of the sorted collection. The table scores_and_sections contains one row for each student in a class of 359 students. The columns are the student's discussion section and midterm score. According to the percentile function, the 85th percentile was 22. To check that this is consistent with our new definition, let's apply the definition directly. There are 359 scores in the array. So next, find 85% of 359, which is 305.15. That's not an integer. By our definition, the 85th percentile is the 306th element of sorted_scores, which, by Python's indexing convention, is item 305 of the array. That's the same as the answer we got by using percentile. In future, we will just use percentile. 
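The rule above translates directly into a few lines of code. Here is a minimal sketch of the definition (my own illustration, independent of the percentile function used in the text; the continent sizes are assumed values consistent with the example):

import math

def percentile_value(p, values):
    # The p-th percentile under the definition above: the smallest element that is
    # at least as large as p% of all the elements.
    ordered = sorted(values)                # step 1: sort in increasing order
    k = math.ceil(p * len(ordered) / 100)   # steps 2-3: p% of n, rounded up if fractional
    return ordered[k - 1]                   # step 4: take the k-th element (1-indexed)

sizes = [12, 5, 17, 9, 7]            # assumed continent sizes, millions of square miles
print(percentile_value(80, sizes))   # 12, as in the text
print(percentile_value(70, sizes))   # 12 as well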
The first quartile of a numerical collection is the 25th percentile. The terminology arises from the first quarter. The second quartile is the median, and the third quartile is the 75th percentile. Distributions of scores are sometimes summarized by the "middle 50%" interval, between the first and third quartiles.
CommonCrawl
Life Sciences have been established and widely accepted as a foremost Big Data discipline; as such they are a constant source of the most computationally challenging problems. In order to provide efficient solutions, the community is turning towards scalable approaches such as the utilization of cloud resources in addition to any existing local computational infrastructures. Although bioinformatics workflows are generally amenable to parallelization, the challenges involved are however not only computationally, but also data intensive. In this paper we propose a data management methodology for achieving parallelism in bioinformatics workflows, while simultaneously minimizing data-interdependent file transfers. We combine our methodology with a novel two-stage scheduling approach capable of performing load estimation and balancing across and within heterogeneous distributed computational resources. Beyond an exhaustive experimentation regime to validate the scalability and speed-up of our approach, we compare it against a state-of-the-art high performance computing framework and showcase its time and cost advantages. There is no doubt that Life Sciences have been firmly established as a Big Data science discipline, largely due to the high-throughput sequencers that are widely available and extensively utilized in research. However, when it comes to tools for analyzing and interpreting big bio-data, the research community has always been one step behind the actual acquisition and production methods. Although the amount of data currently available is considered vast, the existing methods and extensively used techniques can only hint at the knowledge that can be potentially extracted and consequently applied for addressing a plethora of key issues, ranging from personalized healthcare and drug design to sustainable agriculture, food production and nutrition, and environmental protection. Researchers in genomics, medicine and other life sciences are using big data to tackle fundamental issues, but actual data management and processing requires more networking and computing power . Big data is indeed one of today's hottest concepts, but it can be misleading. The name itself suggests mountains of data, but that's just the start. Overall, big data consists of three v's: volume of data, velocity of processing the data, and variability of data sources. These are the key features of information that require particular tools and methodologies to efficiently address them. The main issue with dealing with big data is the constantly increasing demands for both computational resources as well as storage facilities. This in turn, has led to the rise of large-scale high performance computing (HPC) models, such as cluster, grid and cloud computing. Cloud computing can be defined as a potentially high performance computing environment consisting of a number of virtual machines (VMs) with the ability to dynamically scale resources up and down according to the computational requirements. This computational paradigm has become a popular choice for researchers that require a flexible, pay-as-you-go approach to acquiring computational resources that can accompany their local computational infrastructure. The combination of public and privately owned clouds defines a hybrid cloud, i.e. an emerging form of a distributed computing environment. From this perspective, optimizing the execution of data-intensive bioinformatics workflows in hybrid clouds is an interesting problem. 
Generally speaking, a workflow can be described as the execution of a sequence of concurrent processing steps, or else computational processes, the order of which is determined by data interdependencies as well as the target outcome. In a data-intensive workflow, data and metadata, either temporary or persistent, are created and read at a high rate. Of course, a workflow can be both data and computationally intensive and the two are often found together in bioinformatics workflows. In such workflows, when scheduling tasks to distributed resources, the data transfers between tasks are not a negligible factor and may comprise a significant portion of the total execution time and cost. A high level of data transfers can quickly overwhelm the storage and network throughput of cloud environments, which is usually on the order of 10–20 MiB/s , while also saturating the bandwidth of local computational infrastructures and leading to starvation of resources to other users and processes. It is well known that a high level of parallelization can be achieved in a plethora of bioinformatics workflows by fragmenting the input of individual processes into chunks and processing them independently, thus achieving parallelism in an embarrassingly parallel way. This is the case in most evolutionary investigation, comparative genomics and NGS data analysis workflows. This fact can be largely taken advantage of in order to achieve parallelism by existing workflow management approaches emphasizing parallelization. The disadvantage of this approach however is that it creates significant data interdependencies, which in turn lead to data transfers that can severely degrade performance and increase overall costs. In this work, we investigate the problem of optimizing the parallel execution of data-intensive bioinformatics workflows in hybrid cloud environments. Our motivation is to achieve better time and cost efficiency than existing approaches by minimizing file transfers in highly parallelizable data-intensive bioinformatics workflows. The main contributions of this paper are twofold; (a) We propose a novel data management paradigm for achieving parallelism in bioinformatics workflows while simultaneously minimizing data-interdependency file transfers, and (b) based on our data management paradigm, we introduce a 2-stage scheduling approach balancing the trade-off between parallelization opportunities and minimizing file transfers when mapping the execution of bioinformatics workflows into a set of heterogeneous distributed computational resources comprising a hybrid cloud. Finally, in order to validate and showcase the time and cost efficiency of our approach, we compare our performance with Swift, one of the most widely used and state-of-the-art high performance workflow execution frameworks. The rest of the paper is organized as follows: a review of the state-of-the-art on workflow management systems and frameworks in general and in the field of bioinformatics in particular is presented in "Related work" section. "Methods" section outlines the general characteristics and operating principles of our approach. "Use case study" section briefly presents the driving use case that involves the construction of phylogenetic profiles from protein homology data. "Results and discussion" section provides the results obtained through rigorous experimentation, in order to evaluate the scalability and efficiency as well as the performance of our approach when compared against a high performance framework. 
Finally, concluding remarks and directions for future work are given in "Conclusions and future work" section. The aforementioned advantages of cloud computing have led to its widespread adoption in the field of bioinformatics. Initial works were mostly addressed on tackling specific, highly computationally intensive problems that outstretched the capabilities of local infrastructures. As the analyses became more complex and incorporated an increasing number of modules, several tools and frameworks appeared that aimed to streamline computations and automate workflows. The field of bioinformatics has also sparked the interest of many domain agnostic workflow management systems, some of the most prolific applications of which were bioinformatics workflows, thus leading to the development of pre-configured customized versions specifically for bioinformatics workflows . Notable works addressing well-known bottlenecks in computationally expensive pipelines, the most characteristic of which are Next Generation Sequencing (NGS) data analysis and whole genome assembling (WGA) include , Rainbow , CloudMap , CloudBurst , SURPI and RSD-Cloud . These works, although highly successful, lack a general approach as they are problem specific and are often difficult to setup, configure, maintain and most importantly integrate within a pipeline, when considering the experience of a non-expert life sciences researcher. Tools and frameworks aiming to streamline computations and automate standard analysis bioinformatics workflows include Galaxy , Bioconductor , EMBOSS and Bioperl . Notable examples of bioinformatics workflow execution in the cloud include [11, 33] and an interesting review on bioinformatics workflow optimization in the cloud can be found in . In the past few years, there is a significant trend in integrating existing tools into unified platforms featuring an abundance of ready to use tools, with particular emphasis on ease of deployment and efficient use of resources of the cloud. A platform based approach is adopted by CloudMan , Mercury , CLoVR , Cloud BioLinux and others [24, 32, 42, 44]. Most of these works are addressing the usability and user friendly aspect of executing bioinformatics workflows, while some of them also support the use of distributed computational resources. However, they largely ignore the underlying data characteristics of the workflow and do not perform any data-aware optimizations. Existing domain agnostic workflow management systems including Taverna , Swift , Condor DAGMan , Pegasus , Kepler and KNIME are capable of also addressing bioinformatics workflows. A comprehensive review of the aspects of parallel workflow execution along with parallelization in scientific workflow managements systems can be found in . Taverna, KNIME and Kepler mainly focus on usability by providing a graphical workflow building interface while offering limited to non-existent support, in their basic distribution, for use of distributed computational resources. On the other side, Swift, Condor DAGMan and Pegasus are mainly inclined over accomplishing parallelization on both local and distributed resources. Although largely successful in achieving parallelization, their scheduling policies are non data-aware and do not address minimizing file transfers between sites. Workflow management systems like Pegasus, Swift and Spark can utilize shared file systems like Hadoop and Google Cloud Storage. 
The existence of a high performance shared file system can be beneficial in a data intensive worfklow as data can be transferred directly between sites and not staged back and forth from the main site. However, the advantages of a shared file system can be outmatched by a data-aware scheduling policy which aims to minimize the necessity of file transfers to begin with. Furthermore, the existence of a shared file system is often prohibitive in hybrid clouds comprising of persistent local computational infrastructures and temporarily provisioned resources in the cloud. Beyond the significant user effort and expertise required in setting up a shared file system, one of the main technical reasons for this situation is that elevated user operating system privileges are required for this operation, which are not usually granted in local infrastructures. A Hadoop MapReduce approach is capable of using data locality for efficient task scheduling. However, its advantages become apparent in a persistent environment where the file system is used for long term storage purposes. In the case of temporarily cloud provisioned virtual machines, the file system is not expected to exist either prior or following the execution of the workflow and consequently all input data are loaded at the beginning of the workflow. There is no guarantee that all the required data for a specific task will be placed in the same computational site and even if that were the case, no prior load balancing mechanism exists for assigning all the data required for each task to computational sites while taking into account the computational resources of the site and the computational burden of the task. Additionally, a MapReduce approach requires re-implementation of many existing bioinformatics tools which is not only impractical but also unable to keep up to date with the vanilla and standardized versions. Finally, it is important to note that none of the aforementioned related work clearly addresses the problem of applying a data-aware optimization methodology when executing data-intensive bioinformatics workflows in hybrid cloud environments. It is exactly this problem that we address in this work, by applying a data organization methodology coupled with a novel scheduling approach. In this section we introduce the operating principles and the underlying characteristics of the data management and scheduling policy comprising our methodology. The fact that data parallelism can be achieved in bioinformatics workflows has largely been taken advantage of in order to accelerate workflow execution. Data parallelism involves fragmenting the input into chunks which are then processed independently. For certain tasks of bioinformatics workflows, such as sequence alignment and mapping of short reads which are also incidentally some of the most computationally expensive processes, this approach can allow for a very high degree of parallelism in multiprocessor architectures and distributed computing environments. However, prior to proceeding to the next step, data consistency requires that the output of the independently processed chunks be recombined. In a distributed computing environment, where the data is located on multiple sites, this approach creates significant data interdependency issues as data needs to be transferred from multiple sites in order to be recombined, allowing the analysis to proceed to the next step. The same problem is not evident in a multiprocessor architecture, as the data exists within the same physical machine. 
A sensible approach to satisfying data interdependencies with the purpose of minimizing, or even eliminating unnecessary file transfers would be to stage all fragments whose output must be recombined on the same site. Following that, the next step, responsible for processing the recombined output, can also be completed on the same site, and then the next step, that will operate on the output of the previous, also on the same site, further advancing this course until it is no longer viable. It is becoming apparent that this is a recursive process that takes into account the anticipated data dependencies of the analysis. In this way, segments of the original workflow are partitioned into workflow ensembles (workflows of similar structure but differing in their input data) that have no data interdependencies and can then be executed independently in an approach reminiscent of a bag-of-tasks. Undoubtedly, not all steps included in a workflow can be managed this way, but a certain number can, often also being the most computationally and data intensive. Instead of fragmenting the input of data parallelizable tasks into chunks arbitrarily, we propose fragmenting into chunks that can also sustain the data dependencies of a number of subsequent steps in the analysis. Future tasks operating on the same data can be grouped back-to-back into forming a pipeline. To accomplish the aforementioned, we model the data input space as comprising of Instances. An Instance (Inst) is a single data entry, the simplest form data can exist independently. An example of an Inst would be a single protein sequence in a .fasta file. Instances are then organized into organization units (OU), which are sets of instances that satisfy the data dependencies of one or more tasks. The definition of an OU is a set of Insts that can satisfy the data dependencies of a number of consecutive tasks, thus allowing the formation of an OU pipeline. However, before attempting to directly analyze the data involved, a key step is to preprocess the data instances in order to allow for a structured optimization of the downstream analysis process. A common occurrence in managing big data is the fact that their internal organization is dependent on its specific source. Our data organization model is applied through a preprocessing step that restructures the initial data organization into sets of Insts and OUs in a way reminiscent of a base transformation. The process involves identifying Insts in the input data, and grouping them together into OUs according to workflow data interdependencies. An identifier is constructed for each Inst that also includes the OU it belongs to. The identifier is permanently attached to the respective data and therefore is preserved indefinitely. The initial integrity of the input data is guaranteed to be preserved during workflow execution, thus ensuring the accessibility to this information in later stages of the analysis and allowing for the recombination process. The identifier construction process is defined as follows. At some point, some or all the pipelines may converge in what usually is a non parallelizable merging procedure. This usually happens at the end of the workflow, or in intermediate stages, before a new set of OU pipelines is formed and the analysis continues onward. It is obvious that this data organization approach although highly capable of minimizing data transfers, it severely limits the opportunities for parallelization, as each OU pipeline is processed in its entirety in a single site. 
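The text above announces that "the identifier construction process is defined as follows", but the definition itself (presumably a figure or formula) is missing from this copy. Purely as an assumption about what such a scheme could look like, and not the authors' actual definition, one might tag each instance with its OU and position so that the tag survives fragmentation and later recombination:

# Hypothetical identifier scheme (assumption, not from the paper): embed the OU id
# and the instance index into a tag attached permanently to the instance's header.
def make_identifier(ou_id, inst_index, prefix="OU"):
    return f"{prefix}{ou_id:05d}_INST{inst_index:07d}"

def parse_identifier(identifier):
    ou_part, inst_part = identifier.split("_")
    return int(ou_part.lstrip("OU")), int(inst_part.lstrip("INST"))

tag = make_identifier(ou_id=42, inst_index=1337)
print(tag)                    # OU00042_INST0001337
print(parse_identifier(tag))  # (42, 1337)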
In very small analyses where the number of OUs is less than the number of sites, obviously some sites will not be utilized, though this is a boundary case, unlikely to occur in real world analyses. In a distributed computing environment, comprised of multiprocessor architecture computational sites, ideally each OU pipeline will be assigned to a single processor. Given that today's multiprocessor systems include a significant number of CPU cores, the number of OU pipelines must significantly exceed, by a factor of at least 10, the number of sites in order to achieve adequate utilization. Unfortunately, even that would prove inadequate, as the computational load of OU pipelines may vary significantly, thus requiring an even higher number of them in order to perform proper load balancing. It is apparent that this strategy would be fruitful only in analyses where the computational load significantly exceeds the processing capabilities of the available sites, spanning execution times into days or weeks. In solely data-intensive workflows, with no computationally intensive component, under-utilization of multiprocessor systems may not become apparent as storage and network throughput are the limiting factors. Otherwise, it will most likely severely impact performance. Evidently, a mechanism for achieving parallelism in the execution of an OU pipeline in a single site is required. Furthermore, in a heterogeneous environment of computational sites of varying processing power and OU pipelines of largely unequal computational loads, load balancing must be performed in order to map the OU pipelines into sites. To address these issues we propose a novel 2-stage scheduling approach which combines an external scheduler at stage 1 mapping the OU pipelines into sites and an internal to each site scheduler at stage 2 capable of achieving data and task parallelism when processing an OU pipeline. The external scheduler is mainly concerned with performing load balancing of the OU pipelines across the set of computational resources. As both the OU pipelines and the computational sites are largely heterogeneous, the first step is performing an estimation regarding both the OU pipeline loads and the processing power of the sites. The second step, involves the utilization of the aforementioned estimations by the scheduling algorithm tasked with assigning the OU pipelines to the set of computational resources. In order to perform an estimation of the load of an OU pipeline, a rough estimation could be made based on the size of the OU input. A simple approach would be to use the disk file size in MB but that would most likely be misleading. A more accurate estimation could be derived by counting the number of instances, this approach too however is also inadequate as the complexity cannot be directly assessed in this way. In fact, the computational load can only be estimated by taking into account the type of information presented by the file, which is specific to its file type. For example, given a .fasta file containing protein sequences, the most accurate approach for estimating the complexity of a sequence alignment procedure would be to count the number of bases, rather than count the number of instances. Fortunately, the number of distinct file types found in the most common bioinformatics workflows is small, and therefore we have created functions for each file type that can perform an estimation of the computational load that corresponds to them. 
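As an illustration of this idea (a sketch of my own, not the authors' estimator), a per-file-type load estimate for FASTA input could count residues rather than records or bytes, since residue count tracks alignment cost more closely:

# Hedged sketch: estimate the relative computational load of an input chunk per file type.
# The choice of proxy per format is an assumption, not the paper's actual function.
def estimate_fasta_load(path):
    residues = 0
    with open(path) as handle:
        for line in handle:
            if not line.startswith(">"):      # skip headers, count sequence characters
                residues += len(line.strip())
    return residues

def estimate_tabular_load(path):
    # For plain ASCII similarity files, the number of rows is a reasonable proxy.
    with open(path) as handle:
        return sum(1 for _ in handle)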
We already support formats of .fasta, .fastq and plain ASCII (such as tab-delimited sequence similarity files) among others. In order to better match the requirements of the data processing tasks to the available computational resources, the computational processing power of each site must also be assessed. This is accomplished by running a generic benchmark on each site which is actually a mini sample workflow that aims to estimate the performance of the site for similar workflows. The benchmarks we currently use are applicable on comparative genomics and pangenome analysis approaches, and measure the multithreaded performance of the site, taking into account its number of CPU cores. We also use the generic tool UnixBench to benchmark the sites when no similar sample workflow is available. The problem can now be modeled as one of scheduling independent tasks of unequal load to processors of unequal computational power. As these tasks are independent, they can be approached as a bag of tasks. Scheduling bag of tasks has been extensively studied and many algorithms exist, derived from heuristic , list scheduling or metaheuristic optimization approaches . In this work we utilize one of the highest performing algorithms, the FPLT (fastest processor largest task) algorithm. According to FPLT, tasks are placed in descending order based on their computational load and each task, starting from the largest task, is assigned to the fastest available processor. Whenever a processor completes a task, it is then added to the list of available processors, the fastest of which is assigned the largest remaining task. FPLT is a straightforward and lightweight algorithm, capable of outperforming other solutions most of the time when all tasks are available from the start, as is the case here, without adding any computational burden. The disadvantage of FPLT is that when the computational power of processors is largely unequal, a processor might be assigned a task that severely exceeds its capabilities, thus delaying the makespan of the workflow. This usually happens when some processors are significantly slower than the average participating in the workflow. The external scheduler initially performs an assessment of the type and load of the OU pipelines. It then determines the capabilities of the available sites in processing the pipelines by retrieving older targeted benchmarks or completing new on the fly. The OU pipelines are then submitted to the sites according to FPLT and job failures are handled by resubmission. The pseudocode of the external scheduler is presented in Algorithmic Box 1. The internal scheduler is local to each site and is responsible for achieving data and task parallelism when processing an OU pipeline. Task parallelism involves executing independent tasks directly in parallel while data parallelism requires the identification of tasks whose input can be fragmented in chunks and processed in parallel. The second requires that such tasks are marked as suitable for fragmentation at the workflow description stage or maintaining a list of such tasks for automatic identification. Our approach supports both. The internal scheduler automatically identifies the number of CPUs on the computational site and sets the number of simultaneous processing slots accordingly. It receives commands from the master and assigns them to threads in order to execute them in parallel. 
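Before continuing with the internal scheduler, here is a minimal sketch of the FPLT policy described above (my own illustration in the spirit of Algorithmic Box 1, not the authors' code; the load and speed numbers are placeholders, and task duration is approximated as load divided by site speed):

import heapq

# Hedged sketch of FPLT (fastest processor, largest task) for a bag of OU pipelines.
def fplt_schedule(loads, speeds):
    tasks = sorted(range(len(loads)), key=lambda t: loads[t], reverse=True)  # largest first
    # priority queue of (time the site becomes free, -speed, site id): ties go to the fastest
    sites = [(0.0, -speeds[s], s) for s in range(len(speeds))]
    heapq.heapify(sites)
    assignment = {}
    for t in tasks:
        free_at, neg_speed, s = heapq.heappop(sites)      # fastest available site
        finish = free_at + loads[t] / -neg_speed
        assignment[t] = s
        heapq.heappush(sites, (finish, neg_speed, s))
    return assignment

print(fplt_schedule(loads=[90, 40, 75, 10, 60], speeds=[1.0, 2.5, 1.5]))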
In case it receives a task where data parallelism is possible, it will fragment the input into individual chunks, i.e., subsets, and then launch threads in order to process them in parallel. A decision must be made on the number of fragments a task must be split into, which involves a trade-off between process initialization overhead and load balancing between threads. Given the widely accepted assumption that the CPU cores of a given site have the same computational capabilities, a simple solution would be to launch a number of threads equal to the machine's CPU count and divide the total input data, i.e., the instances, evenly across them. This solution is in turn predicated on the assumption that the load assigned to a thread directly corresponds to the amount of data it has to process, and as such it is prone to variations. In our case, however, as all required data exists within the same site, it is no longer desirable to distribute the data processing load among the threads in advance, since the data can be accessed by any thread at any time without any additional cost, thus providing greater flexibility.

Therefore, when considering the situation within a single \(site_l\), our approach can be defined as splitting the superset of all m Insts of the OU pipeline into k subsets of fixed size n; the number of subsets is obtained by dividing m by n. Each \(Subset_i\) is assigned to a thread responsible for completing the respective task. Initially the subsets are placed into a list in random order. Each thread attempts to process the next available subset, and this continues repeatedly until all available subsets are exhausted. In order to synchronize this process and to ensure that no two threads process the same subset, a lock is established that guards the list of subsets. Every time a thread attempts to obtain the next available subset it must first acquire the lock. If the lock is unavailable, the thread is put to sleep in a waiting queue. If the lock is available, the thread acquires the requested subset and increments an internal counter that points to the next available subset. It then immediately releases the lock, an action that also wakes the first thread that may be waiting in the queue. The pseudocode describing the operation of the internal scheduler is presented in Algorithmic Boxes 2 and 3, and a minimal illustrative sketch of the dispatch loop is given below.

It becomes apparent that minimizing the totalDelay time is equivalent to minimizing the number of subsets k. The minimum value of k is equal to the number of threads, in which case the overhead penalty is suffered only once by each thread. However, it is unwise to set k equal to the number of threads, as the risk of unequally distributing the data between the threads far outweighs the delay penalty. We make the reasonable hypothesis that the execution times of chunks of fixed size \(n=1\) resemble a log-normal distribution, which is typically encountered in processing times. Our hypothesis was verified experimentally on an individual basis by running a BLAST procedure, as presented in Fig. 1. BLAST is the most computationally intensive task of our use case study workflow. Evidently, this does not apply to all tasks, but it is a reasonable hypothesis and a common observation for processing times. A log-normal distribution looks approximately like a right-skewed, positive-valued normal distribution. The particular distribution presented in Fig. 1 allows us to estimate that only 8.2 % of the processing times were more than twice as large as the average processing time. Moreover, less than 0.5 % of the processing times were larger than five times the average processing time. It can easily be asserted that, from a given subset size and below, it is highly unlikely for many of the slower processing times to appear within a single subset. Moreover, this already low probability is further reduced by the fact that this is a boundary situation, encountered towards the end of the workflow when other threads have terminated. After experimentation, we have established an empirical rule that practically eliminates this risk: set n equal to 0.01 % of the number m of instances. The \(delayTime\,\%\) defined by Eq. 7 is the total time wasted as a percentage of the actual processing time. Assuming that the average processing time, avgProcessingTime, of a single instance is at least two and a half times greater than the overhead time and the number of threads is at least eight, setting n to 0.01 % of m leads to a \(delayTime\,\%\) value of 0.05 %, which is considered insignificant. We conclude that a value of n approximating 0.01 % of m is a reasonable compromise. In practice, other limitations on the subset size n may exist that are related to the nature of the functions involved and must be taken into account. For example, in processes using hash tables extensively or having significant memory requirements, a relatively large subset size would not be beneficial, as there is a risk of the hash tables being overloaded, resulting in poor performance and high RAM usage. It is evident that an accurate subset size n cannot be easily calculated from a general formula, as it may have specific constraints due to the actual processes involved. However, a general rule of thumb of setting n around 0.01 % of m can be established and is expected to work reasonably well for the majority of cases. It is, however, a parameter that can be optimized, and thus its tuning is encouraged on a use case basis.

A number of requirements motivated us to implement a basic workflow execution engine that was used in our experiments for validating our approach. These requirements are the deployment of containers on sites that include all the necessary software and tools, graphical workflow description, secure connections over SSH tunneling and HTTPS, and not requiring elevated user privileges for accessing sites. The execution environment is composed of a number of computational sites running a UNIX-based operating system and a global, universally accessible cloud storage similar to Amazon S3, referred to as object storage. The object storage is used to download input data, upload final data and share data between sites. It is not used for storing intermediate data that temporarily exist within each site. We have implemented the proposed framework using Java 8 and shell scripting on Ubuntu Linux 14.04. The overall architecture is loosely based on a master/slave model, where a master node responsible for executing the external scheduler serves as the coordinator of actions from the beginning to the completion of a given workflow. The master node is supplied with basic information such as the description of the workflow and input data, the object storage and the computational sites.
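Returning to the internal scheduler, the following is the minimal illustrative sketch of the subset dispatch loop referred to above. The real scheduler is part of the authors' Java framework; the thread count, names and per-subset work function here are placeholders of ours, and in CPython a genuine parallel speed-up assumes the per-subset work runs outside the GIL, e.g. by invoking external tools.

import threading, os

def run_internal_scheduler(subsets, process_subset):
    """Each worker repeatedly grabs the next unprocessed subset under a lock,
    releases the lock, and processes the subset outside the critical section."""
    lock = threading.Lock()
    next_index = 0                      # points to the next available subset

    def worker():
        nonlocal next_index
        while True:
            with lock:                  # sleeping/waking is handled by the lock
                if next_index >= len(subsets):
                    return              # all subsets exhausted
                my_subset, next_index = subsets[next_index], next_index + 1
            process_subset(my_subset)   # heavy work happens without holding the lock

    threads = [threading.Thread(target=worker) for _ in range(os.cpu_count() or 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()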
The workflow can be described as a directed acyclic graph (DAG) in the GraphML language by specifying graph nodes corresponding to data and compute procedures and connecting them with edges as desired. To describe the workflow in a GUI environment, the user can use any of the available and freely distributed graph design software tools that supports exporting to GraphML. The only requirement for using a computational site is the existence of a standard user account and accessibility over the SSH protocol. Each site is initialized by establishing a secure SSH connection through which a Docker container equipped with the software dependencies required to execute the workflow is fetched and deployed. Workflow execution on each site takes place within the container. The object storage access credentials are transferred to the containers and a local daemon is launched for receiving subsequent commands from the master. The daemon is responsible for initiating the internal scheduler and passing all received commands to it. Communication between the master and the daemons running within the Docker container on each site is encrypted and takes place over SSH tunneling. File transfers between sites and the object storage are also encrypted and take place over the HTTPS protocol. The selected case study utilized in validating our approach is from the field of comparative genomics, and specifically the construction of the phylogenetic profiles of a set of genomes. Phylogenetic profiling is a bioinformatics technique in which the joint presence or joint absence of two traits across large numbers of genomes is used to infer a meaningful biological connection, such as involvement of two different proteins in the same biological pathway [35, 37]. By definition, a phylogenetic profile of a genome is an array where each line corresponds to a single sequence of a protein belonging to the genome and contains the presence or absence of the particular entity across a number of known genomes that participate in the study. The first step in building phylogenetic profiles involves the sequence alignment of the participating protein sequences of all genomes against themselves. It is performed by the widely used NCBI BLAST tool and the process is known as a BLAST all vs all procedure. Each protein is compared to all target sequences and two values are derived, the identity and the e-value. Identity refers to the extent to which two (nucleotide or amino acid) sequences have the same residues at the same positions in an alignment, and is often expressed as a percentage. E-value (or expectation value or expect value) represents the number of different alignments with scores equivalent to or better than a given threshold S, that are expected to occur in a database search by chance. The lower the E-value, the more significant the score and the alignment. Running this process is extremely computationally demanding, the complexity of which is not straightforward to estimate , but can approach \(O(n^2)\). For example, a simple sequence alignment between 0.5 million protein sequences, can take up to a week on a single high-end personal computer. Even when employing high-performance infrastructures, such as a cluster, significant time as well as the expertise to both run and maintain a cluster-enabled BLAST variant are required. Furthermore the output files consume considerable disk space which for large analyses can easily exceed hundreds of GBs. 
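As a rough sketch of how such a BLAST all-vs-all step can be driven (the file names, e-value cutoff and thread count below are placeholders of ours, not the framework's actual configuration, which launches BLAST from its Java/shell pipeline):

import subprocess

def blast_all_vs_all(all_proteins="all_proteins.fasta",
                     query="genome_X_proteins.fasta",
                     out="genome_X_hits.tsv", threads=8):
    """Build a protein BLAST database once, then align one OU's proteins
    against it, producing tabular output (identity and e-value per hit)."""
    subprocess.run(["makeblastdb", "-in", all_proteins, "-dbtype", "prot"],
                   check=True)
    subprocess.run(["blastp", "-query", query, "-db", all_proteins,
                    "-outfmt", "6 qseqid sseqid pident evalue",
                    "-evalue", "1e-5", "-num_threads", str(threads),
                    "-out", out],
                   check=True)
    return out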
Based on the sequence alignment data, each phylogenetic profile requires the comparison and identification of all homologues across the genomes in the study. The phylogenetic profiling procedure for each genome requires the sequence alignment data of all its proteins against the proteins of all other genomes. Its complexity is linear in the number of sequence alignment matches generated by BLAST. Different types of phylogenetic profiles exist, including binary, extended and best bi-directional, all three of which are constructed in our workflow procedure. According to our data organization methodology, in this case proteins correspond to Insts and are grouped into OUs, which here are their respective genomes. An independent pipeline is formed for each OU, consisting firstly of the BLAST process involving the sequence alignment of the proteins of the OU against all other proteins of all OUs, and secondly of the three phylogenetic profile creation processes, which utilize the output of the first in order to create the binary, extended and best bi-directional phylogenetic profile of the genome corresponding to the OU. These pipelines are then scheduled according to the scheduling policy described above.

A number of experiments have been performed in order to validate and evaluate our framework. Therefore, this section is divided into (a) the validation experiments further discussed in the "Validation" subsection, where the methods outlined in the "Methods" section are validated, and (b) the comparison against Swift, a high performance framework, further discussed in the "Comparison against a high performance framework" subsection, where the advantages of our approach become apparent. The computational resources used are presented in Table 1. Apart from the privately owned resources of our institution, the cloud resources consist of a number of virtual machines belonging to the European Grid Infrastructure (EGI) federated cloud and operated by project Okeanos of GRNET (Greek Research and Technology Network). Okeanos is based on the Synnefo open source cloud software (the word means "cloud" in Greek), which uses Google Ganeti and other third-party open source software. Okeanos is the largest academic cloud in Greece, spanning more than 5400 active VMs and more than 500,000 spawned VMs. As the resources utilize different processors of unequal performance, their performance was compared to the processors of the cloud resources, which served as a baseline reference. As such, the number of CPUs of each site was translated into a number of baseline CPUs, so that a direct comparison can be performed. In this way, non-integer numbers appear in the number of baseline CPUs of each site. This combination of local, privately owned computational resources with cloud-based resources represents the typical use case we are addressing: individuals or research labs that wish to extend their computational infrastructure by adopting resources of one or multiple cloud vendors.

The input data used in our experiments consists of an extended plant pangenome of 64 plant genomes including 39 cyanobacteria for which the complete proteome was available. The total size was 268 MB and includes 619,465 protein sequences and \(2.3\times 10^8\) base pairs. In order to accommodate our range of experiments, the data was divided into sub-datasets. It must be noted that, although the input data used may appear relatively small in file size, it can be very demanding to process, requiring weeks on a single personal computer.
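To illustrate the profiling step itself, a binary profile can be derived from tabular BLAST hits roughly as follows. This is a simplification of ours: the framework's actual extended and best bi-directional profiles require additional bookkeeping not shown here, and the e-value cutoff is an assumed parameter.

def binary_profile(blast_hits, genome_of, genomes, evalue_cutoff=1e-5):
    """blast_hits: iterable of (query_protein, subject_protein, evalue) rows,
    e.g. parsed from tabular BLAST output for one OU (genome).
    genome_of:  maps a protein id to the genome it belongs to.
    Returns {protein: {genome: 0/1}} marking presence of a homologue."""
    profile = {}
    for query, subject, evalue in blast_hits:
        row = profile.setdefault(query, {g: 0 for g in genomes})
        if evalue <= evalue_cutoff:
            row[genome_of[subject]] = 1     # homologue found in that genome
    return profile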
The particular challenge in this workflow is not the input size but the computational requirements in conjunction with the size of the output, as will become apparent in the following sections. The dataset consists of files downloaded from the online and publicly accessible databases of UniProt and PLAZA, and can also be provided by our repositories upon request. The source code of the proposed framework, along with the datasets utilized in this work, can be found in our repository https://www.github.com/akintsakis/odysseus.

In order to experimentally validate the optimal subset size value, as outlined in the "Internal scheduler" section, and the overall scalability performance of our approach, a number of experiments were conducted utilizing the phylogenetic profiling use case workflow. All execution times reported below involve only the workflow runtime and do not include site initialization and code and initial data downloads, as these require a nearly constant time irrespective of both problem size and number of sites, and as such they would distort the results and not allow for accurately measuring scaling performance. For reporting purposes, the total time for site initialization is approximately 3–5 min. The phylogenetic profiling workflow was executed with an internal scheduler subset size value n of 0.0010, 0.0025, 0.0050, 0.0100, 0.0250, 0.0500 and \(0.2500\,\%\), expressed as a percentage of the total number of protein sequences, on three distinct datasets comprising 189,378, 264,088 and 368,949 protein sequences. All sites presented in Table 1, except for the first one, participated in this experiment.

The site execution times for each subset size for all three datasets are presented in boxplot form in Fig. 2. They verify the hypothesis presented in the "Internal scheduler" subsection: we observe that the fastest execution time is achieved when the subset size n is set close to our empirical estimate of \(0.01\,\%\) of the total dataset size. It is apparent that smaller or larger values of n lead to increased execution times. Generally, in all three datasets analyzed we observe the same behavior and pattern of performance degradation when diverging from the optimal subset size. Smaller values of n lead to substantially longer processing times, mainly due to the delay effect presented in Eq. 7. As n increases, the effect gradually attenuates and is diminished for values larger than \(0.0050\,\%\) of the dataset. Larger subset sizes impact performance negatively, with the largest size tested, \(0.2500\,\%\), yielding the slowest execution time overall. This can be attributed to the fact that for larger subset sizes the load may not be optimally balanced, and some threads that were assigned a disproportionately higher load might prolong the overall execution time while other threads are idle. Additionally, large subset sizes can lead to reduced opportunities for parallelization, especially on smaller OUs that are broken into fewer chunks than the available threads on a site, thus leaving some threads idle.

The average memory usage of all execution sites for each subset size for all three datasets is presented in Fig. 3. It is apparent that both the subset size and the size of the dataset increase memory consumption. Between smaller subset sizes, differences in memory usage are insignificant and inconsistent, and thus difficult to measure. As we reach the larger subsets, the differences become more apparent. Since the current workflow is not memory intensive, the increases in memory usage are only minor.
However, in a memory demanding workflow these differences could be substantial. Although the size of the dataset to be analyzed cannot be tuned, the subset size can, and it should be taken into account in order to remain within the set memory limits. A subset size n of \(0.0100\,\%\) is again a satisfactory choice when it comes to keeping memory requirements on the low end. Although we have validated that an adequate and cost-effective approach is to set the value of n at \(0.0100\,\%\) of the total size of the dataset, we must state that the optimal selection of n is also largely influenced by the type of workflow, and thus its tuning is encouraged on a use case basis.

To evaluate scalability we use the standard notions of speed-up and efficiency, \(S(p) = T(1)/T(p)\) and \(E(p) = S(p)/p\), where T(1) is the execution time with one processor and T(p) is the execution time with p processors. These equations, as found in the literature, assume that p processors of equal computational power are used. As in our case we use resources of uneven computational performance, we translate their processing power into baseline processor units. Consequently, p can take continuous rather than discrete values, corresponding to the increase in computational power as measured in baseline processors. All sites presented in Table 1 participated in this experiment. The sites were sorted in ascending order according to their multithreaded performance, and the workflow was executed a number of times equal to the number of sites, increasing the number of participating sites one at a time.

The execution times of all sites are presented in boxplot form for all workflow runs in Fig. 4. The X axis represents the total computational power score of the sites participating in the workflow and the Y axis, in logarithmic scale, represents the site execution time in seconds. The dashed magenta line is the ideal workflow execution time (corresponding to linear speed-up) and it intersects the mean values of all boxplots. As we can see, variations in site execution time within each workflow run are consistent, with no large deviations present. There are outliers in some workflow runs towards the lower side, where one site terminates before the others because there are no more OU pipelines to process. Despite being outlier values, however, they do not lie too far away in absolute terms. Execution times fall consistently as new sites are added and computational resources are increased. A closer inspection of the results can be found in Table 2, where the execution times and the average and makespan speed-up and efficiency are analyzed. As expected, the average speed-up is almost identical to the ideal case, where the speed-up is equal to p and the efficiency approaches the optimal value of 1. This was to be expected, as our approach does not introduce any overhead and keeps file transfers to a minimum, almost as if all the processing took place on a single site. The minuscule variations observed can be attributed to random variations in the processing power of our sites and/or our benchmarking of the sites, and to random events controlled by the OS. It can be observed that when using a high number of CPUs the efficiency tends to drop marginally, to 0.97. This is attributed to the fact that the data-intensive part of the workflow is limited by disk throughput and cannot be accelerated by increasing the CPU count. Although the data-intensive part is approximately 3–5 % of the total workflow execution time when excluding potential file transfers, using such a high number of CPUs for this workflow begins to approach the boundaries of Amdahl's law.
On average, the makespan efficiency is 0.954 across all runs. It can be presumed that the makespan speed-up and efficiency tend to reach lower values when a higher number of sites is involved. This is to be expected, as some sites terminate faster than others when the pool of OU pipelines is exhausted, and as such their resources are no longer utilized. This effect becomes apparent mostly when using a very high number of CPUs for the given workflow, which results in a workflow completion time of less than 30 min. Although it is apparent in this experiment, we are confident it will not be an issue in real-world cases, as using 14 sites for this workflow can be considered overkill and therefore slightly inefficient. In general, the average speed-up and efficiency is the metric of interest when evaluating the system's cost efficiency and energy savings, as our approach automatically shuts down and releases the resources of sites that have completed their work. The makespan speed-up corresponds to the actual completion time of the workflow, when all sites have terminated and the resulting data is available. Our approach attempts to optimize the makespan speed-up but with no compromise in the average speed-up, i.e., the system's cost efficiency. We can conclude from this experiment that the average speed-up is close to ideal, and that the makespan speed-up is inferior to the ideal case by about 5 % on average and can approach 10 % when the number of resources used is high compared to the computational burden of the workflow.

To establish the advantages of our approach against existing approaches, we chose to execute our use case phylogenetic profiling workflow in Swift and perform a comparison. Swift is an implicitly parallel programming language that allows the writing of scripts that distribute program execution across distributed computing resources, including clusters, clouds, grids, and supercomputers. Swift is one of the highest performing frameworks for executing bioinformatics workflows in a distributed computing environment. The reason we chose Swift is that it is a well established framework that emphasizes parallelization performance and is in use in a wide range of applications, including bioinformatics. Swift has also been integrated into the popular bioinformatics platform Galaxy, in order to allow for the utilization of distributed resources. Although perfectly capable of achieving parallelization, Swift is unable to capture the underlying data characteristics of the bioinformatics workflows addressed in this work, thus leading to unnecessary file transfers that increase execution times and costs and may sometimes even become overwhelming to the point of causing job failures. The testing environment included all sites presented in Table 1 except for the first one, as we were unable to set the system environment variables required by Swift due to not having elevated-privilege access to it. In the absence of a pre-installed shared file system, the Swift filesystem was specified as local, where all data were staged from the site where Swift was executing. This is the default Swift option; it is compatible with all execution environments and does not require a preset shared file system. The maximum number of jobs on each site was set equal to the site's number of CPUs. Three datasets were chosen as input to the phylogenetic profiling workflow: the total of 64 plant genomes and its subsets of 58 and 52 genomes.
The datasets were chosen with the purpose of approximately doubling the execution time of each workflow run when compared to the previous one. Uptime, system load and network traffic among others were monitored on each site. In order to perform a cost analysis, we utilized parameters from the Google Cloud Compute Engine pricing model, according to which, the cost per hour to operate the computational resources is 0.232$ per hour per 8 baseline CPUs and the cost of network traffic is 0.12$ per GB as per Google's internet egress worldwide cheapest zone policy. The makespan execution time, total network traffic and costs of our approach against Swift when executing the phylogenetic profiling workflow for the three distinct datasets are presented in Table 3. The values presented are average values of 3 execution runs. As can be seen, for workflow runs 1 and 2, Swift is approximately 20 % slower in makespan and 16 % slower in the case of workflow run 3. This is attributed mostly to the time lost waiting for the file transfers to take place in the case of Swift. It must be noted that we were unable to successfully execute workflow 3 until termination with Swift, due to network errors near the end of the workflow that we attribute to the very large number of required file transfers. Had the workflow reached termination, we expect Swift to be about 17–18 % slower. As the particular use case workflow is primarily computationally intensive, an increase in the input size of the workflow increases the computational burden faster than the data intensive part, thus the performance gap is slightly smaller in the case of workflow 3. The total network traffic includes all inbound and outbound network traffic of all sites. It is apparent that it is significantly higher in Swift thus justifying the increased total execution time accounting to file transfers. Regarding the cost of provisioning the VMs, it was calculated by multiplying the uptime of each site with the per processors baseline cost of operation. The external scheduler of our approach will release available resources when the pool of OU pipelines is exhausted, thus leading to cost savings that can range from 10 to 25 % when compared to keeping all resources active until the makespan time. Oppositely, this feature is not supported by Swift and as such in this case all sites are active until makespan time, leading to increased costs. The cost savings of our approach regarding provisioning of VMs were higher than 40 % in all three workflow. The cost of network transfers is difficult to interpret as it is dependent on the locations and the providers of the computational resources. The cost presented here is a worst case estimate that would take place when all network traffic between sites were charged at the nominal rate. That is not always true, for example if all sites were located within the same cloud facility of one vendor there would be no cost at all for file transfers. However, they would still slow down the workflow leading to increased uptime costs, unless the sites were connected via a high speed link like InfiniBand often found in supercomputer configuration environments. In a hybrid cloud environment, which this work addresses, as computational sites will belong to different cloud vendors and private infrastructures, the file transfer cost can be significant and may even approach the worst case scenario. 
In total, our approach is significantly more cost effective than Swift, which can be anywhere from 40–47 % to more than 120 % more expensive, depending on the pricing of network file transfers.

To further analyze the behavior of our framework against Swift, in Fig. 5 we present the system load and network activity of all sites when executing the phylogenetic profiling workflow with the 64-genome input dataset for both our approach and Swift. The Swift system load and network activity are denoted by the blue and red lines respectively, while the system load and network activity of our approach are denoted by the green and magenta lines respectively. Figure 6 plots each line separately for site 0, allowing for increased clarity. A system load value of 1 means that the site is fully utilized, while values higher than 1 mean that the site is overloaded. A network activity value of 1 corresponds to a utilization of 100 MBps. The network activity reported is both incoming and outgoing, so the maximum value it can reach is 2, which means 100 MBps of incoming and outgoing traffic simultaneously, though this is difficult to achieve due to network switch limitations. Regarding our approach, the network traffic magenta line is barely visible, marking only a few peaks that coincide with drops in system load as denoted by the green line. This is to be expected, as network traffic takes place while downloading the input data of the next OU pipeline and simultaneously uploading the output of the just-processed OU pipeline, during which the CPU is mostly inactive. It is apparent that the number of sections between the load drops is equal to the number of OU pipelines, 64 in this case. Other than that, the system load is consistently at a value of 1. In the Swift execution case, load values are slightly higher than 1 on all sites except site 1, which has 12 instead of 8 CPUs. This can be attributed to the slightly increased computational burden of submitting the jobs remotely and transferring inputs and outputs to the main site; the internal scheduler of our approach operating on each site can be more efficient. Network traffic is constant and on the low end for the duration of the workflow, as data is transferred to and from the main site. However, near the end of the workflow, system load drops and network traffic increases dramatically, especially on site 0, which is the main site from which Swift operates and stages all file transfers to and from the other sites. As the computationally intensive part of most OU pipelines comes to an end, the data-intensive part then requires a high number of file transfers that overloads the network and creates a bottleneck. This effect significantly slows down the makespan and is mostly responsible for the increased execution times of Swift and the costly file transfers. In large workflows where the data to be moved amounts to hundreds of GBs, it can even lead to instability due to network errors.

In this work, we presented a versatile framework for optimizing the parallel execution of data-intensive bioinformatics workflows in hybrid cloud environments. The advantage of our approach is that it achieves superior time and cost efficiency compared to existing solutions through the minimization of file transfers between sites.
It accomplishes this through the combination of a data management methodology that organizes the workflow into pipelines with minimal data interdependencies, along with a scheduling policy for mapping their execution onto a set of heterogeneous distributed resources comprising a hybrid cloud. Furthermore, we compared our methodology with Swift, a state-of-the-art high performance framework, and achieved superior cost and time efficiency in our use case workflow. By minimizing file transfers, the total workflow execution time is reduced, directly decreasing costs based on the uptime of computational resources. Costs can also decrease indirectly, as file transfers can be costly, especially in hybrid clouds where resources are not located within the facility of a single cloud vendor. We are confident that our methodology can be applied to a wide range of bioinformatics workflows sharing similar characteristics with our use case study. We are currently working on expanding our use case basis by implementing workflows in the fields of metagenomics, comparative genomics, and haplotype analysis according to our methodology. Additionally, we are improving our load estimation functions so as to more accurately capture the computational load of a given pipeline through an evaluation of the initial input.

In the era of Big Data, cost-efficient high performance computing is proving to be the only viable option for most scientific disciplines. Bioinformatics is one of the most representative fields in this area, as the data explosion has overwhelmed current hardware capabilities. The rate at which new data is produced is expected to increase significantly faster than the advances, and the cost reductions, in hardware computational capabilities. Data-aware optimization can be a powerful weapon in our arsenal when it comes to utilizing the flood of data to advance science and to provide new insights.

AMK and FEP conceived and designed the study and drafted the manuscript. AMK implemented the platform as a software solution. PAM participated in the project design and revision of the manuscript. AMK and FEP analyzed and interpreted the results and coordinated the study. FEP edited the final version of the manuscript. All authors read and approved the final manuscript. This work used the European Grid Infrastructure (EGI) through the National Grid Infrastructure NGI_GRNET - HellasGRID. We also thank Dr. Anagnostis Argiriou (INAB-CERTH) for access to their computational infrastructure.
CommonCrawl
Abstract: In this paper we show a new way of constructing deterministic polynomial-time approximation algorithms for computing complex-valued evaluations of a large class of graph polynomials on bounded degree graphs. In particular, our approach works for the Tutte polynomial and independence polynomial, as well as partition functions of complex-valued spin and edge-coloring models. More specifically, we define a large class of graph polynomials $\mathcal C$ and show that if $p\in \cal C$ and there is a disk $D$ centered at zero in the complex plane such that $p(G)$ does not vanish on $D$ for all bounded degree graphs $G$, then for each $z$ in the interior of $D$ there exists a deterministic polynomial-time approximation algorithm for evaluating $p(G)$ at $z$. This gives an explicit connection between absence of zeros of graph polynomials and the existence of efficient approximation algorithms, allowing us to show new relationships between well-known conjectures. Our work builds on a recent line of work initiated by Barvinok, which provides a new algorithmic approach besides the existing Markov chain Monte Carlo method and the correlation decay method for these types of problems.
CommonCrawl
Is there a WZW string theory? This is a naive question. Is there a way to couple the WZW model to gravity to obtain a perturbatively consistent string theory? where $n_G$ is the Coxeter number of $G$. This $c$ is always positive, so the $WZW$ model must be coupled to gravity in some non-trivial way. Conformal field theories with negative central charges include bosonic and $N=1$ supersymmetric sigma models below the critical dimension. Thus, by judicious choice of target space $X$, we get a composite theory with target space $X \times G$. So far, the two theories are uncoupled. Maybe something interesting happens if I make the target space some non-trivial $G$ bundle over $X$? Look up coset models, you can make string theory on group manifolds, but the more interesting spaces for this purpose are cosets of groups, which are smaller manifolds, the dimensions of Lie groups go up fast. You get currents and stress tensors by Sugawara construction like in WZ models, it's an 80s method of finding vacua. (just a comment, proper answer later). You maybe interested in Urs Schreiber's article on WZW SFT. Here is something quick, as I don't have time. I didn't reply because it wasn't clear to me what the original question really is. But here are some quick comments. Of course the WZW model famously exists and describes strings propagating on a spacetime which is a group manifold. Via the FRS theorem and related facts the WZW model is one of the mathematically best understood string models. As such it has received a great deal of attention. Of course it is not critical and needs to be combined with more stuff to make a critical string background. I guess that's what the question is wondering about, and the answer is: sure! That's what one considers all the time. The only trouble is that KK-compactification on group manifolds is not quite realistic. On the other hand WZW-type superstring field theory is something different. This is not about strings propagating on group manifolds (well it is about strings generally, so it will also include strings on group manifolds) but is instead about a way of formulating "string field theory" (second quantized strings) in a form that exhibits a kind of "second quantized WZW term". Urs Schreiber has written a (currently stub) article on the \(n\)CatLab about "WZW-type superstring field theory" here. I hope @UrsSchreiber, if he finds time, may want to write an answer, since he's started to write an nLab article on it. Urs Schreiber explained why this is unrelated. @RonMaimon Yes you're right, I'm not sure if I should hide this answer?
CommonCrawl
As part of a CS course, Alice just finished programming her robot to explore a graph having $n$ nodes, labeled $1, 2, \ldots , n$, and $m$ directed edges. Initially the robot starts at node $1$. While nodes may have several outgoing edges, Alice programmed the robot so that any node may have a forced move to a specific one of its neighbors. For example, it may be that node $5$ has outgoing edges to neighbors $1$, $4$, and $6$ but that Alice programs the robot so that if it leaves $5$ it must go to neighbor $4$. We consider two sample graphs, as given in Figures 1 and 2. In these figures, a red arrow indicate an edge corresponding to a forced move, while black arrows indicate edges to other neighbors. The circle around a node is red if it is a possible stopping node. Figure 1: First sample graph. Figure 2: Second sample graph. In the first example, the robot will cycle forever through nodes $1$, $5$, and $4$ if it does not make a buggy move. A bug could cause it to jump from $1$ to $2$, but that would be the only buggy move, and so it would never move on from there. It might also jump from $5$ to $6$ and then have a forced move to end at $7$. In the second example, there are no forced moves, so the robot would stay at $1$ without any buggy moves. It might also make a buggy move from $1$ to either $2$ or $3$, after which it would stop. The first line contains two integers $n$ and $m$, designating the number of nodes and number of edges such that $1 \le n \le 10^3$, $0 \le m \le 10^4$. The next $m$ lines will each have two integers $a$ and $b$, $1 \le |a|, b \le n$ and $|a| \neq b$. If $a > 0$, there is a directed edge between nodes $a$ and $b$ that is not forced. If $a$ < 0, then there is a forced directed edge from $-a$ to $b$. There will be at most $900$ such forced moves. No two directed edges will be the same. No two starting nodes for forced moves will be the same.
CommonCrawl
Here is a square filled with numbers and holes. The square is divided in multiple invisible areas which all have their own specific logical sequence. Although every numbers in an area is found with the same logic, the numbers used to find the next number can come from another area. Can you find every positive integers missing by finding out the logical sequences? This is an new experimental puzzle for me, so let me know how it is in the comments. Hopefully it will go well. I used the word "sequence" quite loosely here. The correct solution will be found with a total of 4 areas. The shapes of the areas are quite basic. No weird awkward shapes. Equally divide the square into 4 3x3 quadrants. Top left quadrant - sum of the cell above and to the left. Bottom left - sum of all the cells above. Top right - sum of the cell 2 steps to the left and of the cell to the left. Bottom right- sum of the cell 3 steps to the left and of the cell 3 steps above. The numbers (as we fill the rows) are 0,1,2,3,5,8,1,2,4,6,10,16,2,4,8,12,20,32,3,7,14,6,12,22,6,14,28,12,24,44,12,28,56,24,48,88. Got a lot of help for this from Angzuril and mactro's answers. Green: Sum of cells 3 and 4 to the left. In case there is no cell 4 to the left, it's two times the 3rd cell. My solution, probably not intended solution. Red: Form of $2 \times (a - b^2)$ , where a is 3 cells to the left, and b is 3 cells above a. The complexity of this rule leads me to believe it is not intended.
CommonCrawl
Abstract: Field redefinitions occur in string compactifications at the one loop level. We review arguments for why such redefinitions occur and study their effect on moduli stabilisation and supersymmetry breaking in the LARGE volume scenario. For small moduli, although the effect of such redefinitions can be larger than that of the $\alpha'$ corrections in both the Kähler and scalar potentials, they do not alter the structure of the scalar potential. For the less well motivated case of large moduli, the redefinitions can dominate all other terms in the scalar potential. We also study the effect of redefinitions on the structure of supersymmetry breaking and soft terms.
CommonCrawl
The rule that a product of two or more factors equals zero only when at least one of the factors is zero is called the zero product property. The product of two or more factors is zero in some cases, and this is possible mathematically only if at least one of them is equal to zero. The property is used as a rule in mathematics to find the values of variables. Suppose $a$ and $b$ are two factors and their product is equal to zero. There are two possibilities for the product of the two factors to become zero. If $a \,=\, 0$, then $0 \times b \,=\, 0$. If $b \,=\, 0$, then $a \times 0 \,=\, 0$. In this case, either $a$ or $b$ is equal to zero. Therefore, the statement that $a \times b \,=\, 0$ implies $a$ is equal to zero or $b$ is equal to zero is known as the zero product property. The zero product property is not limited to two factors and can be applied to more than two factors as well.
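A short worked example of how the property is used to find the values of a variable (the particular equation is just an illustration): if $(x-2) \times (x+5) \,=\, 0$, then by the zero product property either $x-2 \,=\, 0$ or $x+5 \,=\, 0$, and therefore $x \,=\, 2$ or $x \,=\, -5$.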
CommonCrawl
An integer $n$ ($2 \leq n \leq 200000$) in the first line is the number of conveyor lanes. The lanes are numbered from 1 to $n$, and two lanes with their numbers differing with 1 are adjacent. All of them start from the position $x = 0$ and end at $x = 100000$. The other integer $m$ ($1 \leq m < 100000$) is the number of robot arms. The following $m$ lines indicate the positions of the robot arms by two integers $x_i$ ($0 < x_i < 100000$) and $y_i$ ($1 \leq y_i < n$). Here, $x_i$ is the x-coordinate of the $i$-th robot arm, which can pick goods on either the lane $y_i$ or the lane $y_i + 1$ at position $x = x_i$, and then release them on the other at the same x-coordinate. You can assume that positions of no two robot arms have the same $x$-coordinate, that is, $x_i \ne x_j$ for any $i \ne j$. Output $n$ integers separated by a space in one line. The $i$-th integer is the number of the manufacturing lines from which the storage room connected to the conveyor lane $i$ can accept goods.
CommonCrawl
Wang, W. and Zhou, X. (2018). A draw-down reflected spectrally negative Levy process. arXiv:1812.06923. Li, P.S. and Zhou, X. (2018). Integral functionals for spectrally positive Levy processes. arXiv:1809.05759. Foucart, C., Li P.S. and Zhou, X. (2019). Time-changed spectrally positive Levy processes starting from infinity. arXiv:1901.10689. Li, Z., Liu, H., Xiong, J. and Zhou, X. (2013). The irreversibility and an SPDE for the generalized Fleming-Viot processes with mutation. Stochastic Processes and their Applications 123, 4129-4155. Li, B. and Zhou X. (2013). The joint Laplace transforms for diffusion occupation times. Advances in Applied Probability 45,1049-1067. Zhou, X. (2014) On criteria of disconnectedness for $\Lambda$-Fleming-Viot support. Electronic Communications in Probability 19 no 53, 1-16. Loeffen R., Renaud, J.-F. and Zhou, X. (2014). Occupation times of intervals until first passage times for spectrally negative Levy processes. Stochastic Processes and their Applications 124, 1408-1435. Li, Y. and Zhou, X. (2014). On pre-exit joint occupation times for spectrally negative Levy processes. Statistics and Probability Letters 94, 48-55. Liu, H. and Zhou X. (2015). Some support properties for a class of $\Lambda$-Fleming-Viot processes. Annales de L'Institut Henri Poincare (B) Probabilites et Statistiques, 1076-1101. Li, Y., Zhou, X. and Zhu, N. (2015). Two-sided discounted potential measures for spectrally negative Levy processes. Statistics and Probability Letters 100, 67-76. Albrecher, H., Ivanovs, J. and Zhou, X.(2016). Exit identities for a Levy processes observed at Poisson arrival times. Bernoulli 22, 1364-1382. Wang, L., Yang, X. and Zhou, X. (2017). A distribution-function-valued SPDE and its applications. Journal of Differential Equations 262, 1085-1118. Yang, X. and Zhou, X. (2017). The pathwise uniqueness of solution to a SPDE driven by $\alpha$-stable noise with Holder continuous coefficient. Electronic Journal of Probability 22, no. 4, 1-48. Avram, F., Vu, N. L. and Zhou, X.(2017). On taxed spectrally negative Levy processes with draw-down stopping. Insurance, Mathematics and Economics 76, 69-74. Li, B. and Zhou, X. (2018) On weighted occupation times for refracted spectrally negative Levy processes. Journal of Mathematical Analysis and Applications 466, 215-237. Wang, W. and Zhou, X. (2018). General draw-down based de Finetti optimization for spectrally negative Levy risk processes. Journal of Applied Probability 55(2018), 1-30. Li, P. S., Yang, X. and Zhou, X. (2019). A general continuous-state nonlinear branching process. Accepted. Annals of Applied Probab. arXiv: 1708.01560. Li, B. and Zhou, X. (2019). Local times for spectrally negative Levy processes. Accepted. Potential Analysis. Zheng, J., Xiong, J. and Zhou X. (2019). Unique strong solutions of Levy processes driven stochastic differential equations with discontinuous coefficients. Accepted. Stochastics. Li, B., Vu, N. L. and Zhou, X. (2019). Exit problems for general draw-down times of spectrally negative Levy processes. Accepted. Journal of Applied Probability. arXiv: 1702.07259.
CommonCrawl
I am wondering how to solve the following problem efficiently. Other software for permutation groups (Magma, GAP) allows this to be done by specifying an additional option "on sets/on tuples" to compute the specified orbits. I am wondering how I could do the same in Sage, given a permutation group $G$ and an $S$ as described above. Are you saying that you want the orbits of $G$ on $A\times A$? Such orbits are usually called orbitals. GAP can do this for you, e.g. using its package called GRAPE. But can it be directly done in Sage?
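If I recall correctly (please double-check against your Sage version — the `action` keyword of `orbit()` is the part I am least sure about), recent versions of Sage expose GAP's set/tuple actions directly, so something along these lines should work; the group and subset below are placeholders. Failing that, one can always drop into GAP itself through Sage's libgap or gap interfaces.

# Toy example (generators and subset are placeholders):
G = PermutationGroup(['(1,2,3,4,5)', '(1,2)'])
S = (1, 3)                                        # the subset of points

orbit_on_sets   = G.orbit(S, action="OnSets")     # unordered images of S
orbit_on_tuples = G.orbit(S, action="OnTuples")   # ordered images of S
print(orbit_on_sets)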
CommonCrawl
How to find the additive inverse and additive identity of an element of the tensor-product vector space? (V3) there exists an element $0 \in V$ such that $x + 0 = x$ for every $x \in V$. (V4) for every $x \in V$ there exists an element $-x \in V$ such that $x + (-x) = 0$. How can one check the mentioned axioms of a vector space, i.e. (V3) and (V4), for the tensor-product vector space? PS I am learning the tensor product from this link, and because it claimed that the tensor product is a vector space "by force [=definition]", I tried to check that using what I had learnt from Robertson's Basic Linear Algebra. A more formal treatment of this could be done via equivalence relations, which I highly encourage you to read on. With this machinery, one takes the cartesian product $V \times W$ and then identifies some pairs as equivalent, which is what 'declaring' equality formally means in the article. Then, $v \otimes w$ is nothing more than the equivalence class of $(v,w)$.
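For what it is worth, here is how (V3) and (V4) can be checked concretely using only the bilinearity rules the construction imposes (a sketch of the standard argument, not a complete formal proof): the additive identity can be taken to be $0_V \otimes w$ for any $w \in W$ (equivalently $v \otimes 0_W$), since $v \otimes w + 0_V \otimes w = (v + 0_V) \otimes w = v \otimes w$, so adding it changes nothing. For (V4), the additive inverse of an elementary tensor is $-(v \otimes w) = (-v) \otimes w = v \otimes (-w)$, because $v \otimes w + (-v) \otimes w = (v + (-v)) \otimes w = 0_V \otimes w$, which is exactly the identity element just found. A general element of the tensor product is a finite sum of elementary tensors, so its inverse is obtained term by term.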
CommonCrawl
Abstract: The electrodynamics of topological insulators (TIs) is described by modified Maxwell's equations, which contain additional terms that couple an electric field to a magnetization and a magnetic field to a polarization of the medium, such that the coupling coefficient is quantized in odd multiples of $e^2 / 2 h c $ per surface. Here, we report on the observation of this so-called topological magnetoelectric (TME) effect. We use monochromatic terahertz (THz) spectroscopy of TI structures equipped with a semi-transparent gate to selectively address surface states. In high external magnetic fields, we observe a universal Faraday rotation angle equal to the fine structure constant $\alpha = e^2 / \hbar c$ when a linearly polarized THz radiation of a certain frequency passes through the two surfaces of a strained HgTe 3D TI. These experiments give insight into axion electrodynamics of TIs and may potentially be used for a metrological definition of the three basic physical constants.
CommonCrawl
We present the solution of the homogeneous fractional differential Euler-type equation on the half-axis in the class of functions representable by the fractional integral of order $\alpha$ with the density of $L_1(0; 1)$. Using the method of Hermitian forms (Lienard–Schipar's method), solvability conditions are obtained for the cases of two, three and a finite number of derivatives. It is shown that in the case when the characteristic equation has multiple roots original equation admits a solution with logarithmic singularities. Kilbas A. A., Srivastava H. M., and Trujillo J. J., Theory and Applications of Fractional Differential Equations, Elsevier, Amsterdam (2006). (Math. Stud.; V. 204). Podlubny I., Fractional Differential Equations, Acad. Press, San Diego (1999). Samko S. G., Kilbas A. A., and Marichev O. I., Fractional Integrals and Derivatives: Theory and Applications, Gordon and Breach Sci. Publ., Switzerland (1993). Krein M. G. and Naimark M. A., "The method of symmetric and Hermitian forms in the theory of separation of the roots of algebraic equations," Linear Multilin. Algebra, 10, No. 4, 265–308 (1981). Published online: 02 Apr 2008 DOI: 10.1080/03081088108817420. Matveev N. M., New Methods of Integration of Ordinary Differential Equations [in Russian], Vysshaya Shkola, Moscow (1967). Bateman H. and Erdelyi A., Higher Transcendental Functions, V. 1, McGraw–Hill (1953). Gantmakher F. R., Matrix Theory [in Russian], Nauka, Moscow (1988). Postnikov M. M., Stable Polynomials [in Russian], Nauka, Moscow (1981). Sitnik S. M., "Transmutations and applications: a survey," arXiv: 1012.3741 [math.CA] (2010). Katrakhov V. V. and Sitnik S. M., "Transmutation method and new boundary-value problems for singular elliptic equations [in Russian]," Modern Math. Fund. Res., 64, No. 2, 211–426 (2018). Sitnik S. M. and Shishkina E. L., Transmutation Method for Differential Equations with Bessel Operators [in Russian], Fizmatlit, Moscow (2018). Zhukovskaya, N. and Sitnik, S. ( ) "Applying Lienard–Schipar's method to solving of homogeneous fractional differential Euler-type equations on an interval", Mathematical notes of NEFU, 25(3), pp. 33-42. doi: https://doi.org/10.25587/SVFU.2018.99.16949.
CommonCrawl
@micsthepick Doing stated action, very painful. Surely speaking doubles cannot present something diffic-- dammit! Is this Sphinx's Lair 2 now? Wecan always runwords into pairsif weneed tospeak doubles. What about just using words having one greater or fewer syllables compared to other words beside it? thats easy, not very complicated in any way. That implies increasing difficulty, unfortunately. @Wen1now They increase every sequentially communicated syllabification. O, we can also (maybe) employ growing sentence phrasings singularly -- thenceforth backtracking, diminishing ostensible verbosity, reducing lengths before drops down low to 0. @Deusovi Wow. I'm genuinely impressed by that one. In awe, we're quite amazed Deusovi. Must've taken time for it. ...then again, that's the entire point. Sure, I'll add it. I do see that now, ok ! are you aware of a solution? I think it's fine. It's not a rote problem or anything. but not have a proof? oh, was it something like, "prove that this a works"? Are you sure it's not floating-point errors? @Wen1now how does onetimesecret.com/secret/le1j9ygshfpsx1xpmzt81q8e7jlvyuh look? are there others like it? (This is the question talking) Even I don't know the answer, Until I am posted. All I know is the length, 5 characters that is. Random it is, But less than the 64 base youtube id. This is what identifies me. What is he talking about? What is the answer? This is obviously correct. New C4, then? pff, you have some answer sets memorized? @Sp3000 oh hey! speaking of australia and watching others nut it out, gay marriage is now legal there! I've been staring at both of them occasionally. Not had much time though - been busy preparing for finals. @Deusovi oh, Good luck for your finals. I already made my Steins;Gate C4 - "Girl's endless tutturu is grating (8)" Every good joke has a small truth hidden inside. So you're not as hopeless as you claim to be. At least your crossword intuition is good. Q: Does there exist $a\geq0$ such that $\lfloor a\rfloor,\lfloor a^2\rfloor,\lfloor a^3\rfloor,\ldots$ alternate between even and odd? Does there exist $a\geq0$ such that $\lfloor a\rfloor,\lfloor a^2\rfloor,\lfloor a^3\rfloor,\ldots$ alternate between even and odd? I thought you had already solved it? @Deusovi you owe us a C4. Q: Which letter am I? Soul 61.2392559 46.6729113 Thunder 41.0257551 28.9742227 Rainbow 41.6947766 44.7778411 Marsh 25.2913771 51.5345946 Cascade 15.5297874 32.5621657 Boulder 60.187195 24.9250693 Follow the path in the correct order to solve this riddle. Which letter am I? Okay, someone tell me whether the thing that I have drawn in the recent answer that I gave is an "H" or a "K"? look much more H than K to me. @micsthepick Could you explain what you mean by "not all are telling the truth"? "The statement 'all are telling the truth' is false." @Sid Whoops, yes I do. Just a sec. @Deusovi Is it LITERATI in (BONG)* = OBLITERATING? have you got one ready @sid? @micsthepick nope. I will wait for confirmation, though. I think it would be wise to prepare now, but you do what you want to. @Sphinx a nice simple one. Can't right now but maybe in the not too distant future. Call to arms! Mithrandir is currently defending in Contact!
CommonCrawl
Of the following, which is the product formed when cyclohexanone undergoes aldol condensation followed by heating? A Rowland ring of mean radius $15\;cm$ has $3500$ turns of wire wound on a ferromagnetic core of relative permeability 800. What is the magnetic field B in the core for a magnetizing current of $1.2\;A$? An infinite line charge produces a field of $9 \times 10^4 \;N/C$ at a distance of $2\;cm$. Calculate the linear charge density. State whether the following statement is true or false and justify: The vertex of an equilateral triangle is (2, 3) and the equation of the opposite side is $x + y = 2$. Then the other two sides are $y - 3 = (2 \pm \sqrt3 ) (x - 2)$. Which of the following alkenes possesses the highest reactivity in cationic polymerisation? Which one of the following statements is correct for $CsBr_3$? Give the numbers of N and M? The standard reduction potentials at 298 K for the following half reactions are given against each. Which is the strongest reducing agent? Find the equation of the common tangent to the parabolas $y^2=4x$ and $x^2=4y$? Which one of the following is the highest melting halide? The equation of the plane through the line of intersection of the planes $ax+by+cz+d=0$ and $a'x+b'y+c'z+d'=0$ and parallel to the line $y=0,z=0$ is ?
CommonCrawl
Hansel's lemma (not Hensel's lemma) states that if the complete graph on $n$ vertices can be expressed as the union of $r$ bipartite graphs $B_1,B_2,\ldots,B_r$ such that $n_i$ is the number of non-isolated vertices in $B_i$, then $n_1+n_2+\cdots+n_r \geq n\log n$. The first published proof of the lemma is by Georges Hansel in 1964, but the lemma has seen several proofs in literature, many of them combinatorial and also an information theoretic proof. In this talk, we will prove Hansel's lemma using the probabilistic method, and follow it up with an application of the lemma to distance-preserving subgraphs. Specifically, we will show a separation between branching vertices and branching edges by exhibiting an interval graph on $n$ terminal vertices for which every distance-preserving subgraph has $O(n)$ branching vertices and $\Omega(n\log n)$ branching edges [joint work with Prof. Jaikumar].
CommonCrawl
A matrix whose elements are all arranged in a single row is called a row matrix. A row matrix is one type of matrix and is also called a row vector. In this type of matrix, all elements lie in one row but in different columns. $M$ denotes a row matrix of order $1 \times n$ written in general form; it can also be expressed in a simpler form. In a row vector every element sits in the single row, so the row index is $i = 1$ and the number of rows is $m = 1$. Therefore, a row matrix can be displayed in simple as well as general form as follows. The following matrices are typical examples of row matrices. $A$ is a row matrix of order $1 \times 1$: only one element, arranged in one row and one column. $B$ is a row matrix of order $1 \times 2$: two elements arranged in one row and two columns. $C$ is a row matrix of order $1 \times 3$: three elements arranged in one row and three columns. $D$ is a row matrix of order $1 \times 4$: four elements arranged in one row and four columns. Every row matrix consists of a single row, and its $1 \times n$ shape is rectangular; hence row matrices are a special case of rectangular matrices.
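A minimal NumPy sketch (not part of the original page; the element values are arbitrary) makes the orders above concrete:

    import numpy as np

    # Row matrices of order 1 x 1, 1 x 2, 1 x 3 and 1 x 4; the entries are arbitrary.
    A = np.array([[7]])
    B = np.array([[2, 5]])
    C = np.array([[1, 0, 4]])
    D = np.array([[3, 6, 9, 12]])

    for name, matrix in [("A", A), ("B", B), ("C", C), ("D", D)]:
        rows, cols = matrix.shape
        # Every row matrix has exactly one row (m = 1), whatever the number of columns n.
        print(name, "is a row matrix of order", rows, "x", cols)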
CommonCrawl
The hypothesis that antioxidant vitamins (ascorbate and tocopherols) along with urate protect blood plasma lipids from oxidation was tested. Dietary fat is also an important factor influencing plasma lipid peroxidation. The purpose of this study was to investigate the role of plasma antioxidants and dietary fat on low density lipoprotein (LDL) and plasma lipid oxidation. In the first part of this study, we compared the ability of urate and ascorbate to protect human LDL from in vitro oxidation. LDL oxidation was initiated by 15 mM of a water soluble azo-initiator in the presence or absence of ascorbate or urate. The rate of lipid hydroperoxide (LOOH) formation was increased after the LDL tocopherols were totally consumed, i.e., after the lag phase. Urate (50 $\mu$M) was more effective than ascorbate (50 $\mu$M) in extending the lag phase. Moreover, urate was consumed more slowly than ascorbate under identical oxidation conditions. The combination af 25 $\mu$M ascorbate and 25 $\mu$M urate was more effective in extending the lag phase than ascorbate alone but less effective than urate alone. An empirical mathematical model was developed to describe the oxidation kinetics of LDL tocopherols. In the second part of this study, we studied the role of dietary fat and dietary $\alpha$-tocopherol ($\alpha$-toc) levels on rat plasma oxidation. The fatty acid composition of plasma was found to be modulated by the type of dietary fat. Neither dietary fat nor $\alpha$-toc influenced the plasma levels of water soluble antioxidants (ascorbate, urate and sulfhydryl content). Rat plasma was oxidized either by a water soluble azo-initiator (25 mM) or a lipid soluble azo-initiator (10 mM). In both cases, the rate of LOOH formation in plasma from rats fed butter oil diets was markedly suppressed compared to the plasma from rats fed corn oil diets. When oxidation was initiated by a lipid soluble azo-initiator, plasma from rats fed $\alpha$-toc supplemented diets showed higher LOOH levels than plasma from rats fed $\alpha$-toc deficient diets. Surprisingly, when oxidation was initiated by water soluble azo-initiator, tocopherol appeared to act as a pro-oxidant. The results suggest that urate may be more significant than ascorbate in delaying the consumption of tocopherols in human LDL and that low dietary PUFAs levels are more important in preventing the in vitro oxidation of plasma lipids than high dietary levels of $\alpha$-tocopherol.
CommonCrawl
About the authors: Yuan Ba, male, is a master's student at the Institute of Computing Technology, Chinese Academy of Sciences; his research interests include digital signal processing and machine learning (E-mail: [email protected]). Yao Ping, female, is an associate researcher and master's supervisor at the Institute of Computing Technology, Chinese Academy of Sciences; her research interests include digital signal processing and embedded systems. Zheng Tianyao, male, is a senior engineer and master's supervisor at the Institute of Computing Technology, Chinese Academy of Sciences; his research interests include computer architecture, signal processing, and remote sensing imagery. Abstract: With the continuous advancement of modern technology, more types of radar and related technologies are continuously being developed, and the identification of radar emitter signals has gradually become a very important research field. This paper focuses on the identification of modulation types in radar emitter signal identification. We propose a weighted normalized Singular-Value Decomposition (SVD) feature extraction algorithm, which is based on the perspective of data energy and SVD. The filtering effect of complex SVD is analyzed, as well as the influence of the number of rows of the data matrix on the decomposition results and the recognition performance of different classification models. The experimental results show that the algorithm achieves better filtering and recognition performance on common radar signals. At –20 dB, the cosine similarity between the reconstructed and original signals remains at about 0.94, and the recognition accuracy remains above 97% under a confidence level $\alpha$ of 0.65. In addition, experiments show that the weighted normalized SVD feature extraction algorithm is more robust than the traditional Principal Component Analysis (PCA) algorithm.
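The abstract does not spell out the algorithm, so the sketch below is only a generic illustration of SVD-based filtering and the cosine-similarity check it mentions; it is not the authors' weighted normalized SVD feature extraction, and the trajectory-matrix construction, rank choice and test signal are assumptions.

    import numpy as np

    def svd_denoise(signal, num_rows=32, rank=2):
        # Embed the 1-D signal in a trajectory (Hankel-like) matrix, keep the
        # leading singular components, and average the anti-diagonals back
        # into a 1-D estimate. Generic SVD filtering, not the paper's method.
        n = len(signal)
        num_cols = n - num_rows + 1
        X = np.array([signal[i:i + num_cols] for i in range(num_rows)])
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
        rec = np.zeros(n)
        counts = np.zeros(n)
        for i in range(num_rows):
            rec[i:i + num_cols] += X_low[i]
            counts[i:i + num_cols] += 1
        return rec / counts

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Toy test: a sinusoid buried in strong noise (assumed example, not the paper's data).
    t = np.linspace(0, 1, 512)
    clean = np.sin(2 * np.pi * 25 * t)
    noisy = clean + 1.5 * np.random.default_rng(0).normal(size=t.size)
    denoised = svd_denoise(noisy)
    print("cosine similarity between clean and reconstructed signal:",
          cosine_similarity(clean, denoised))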
CommonCrawl
I am using Newton's method to solve $3\times3$ systems. In some particular cases, it turns out that at a given iteration the Jacobian matrix cannot be inverted and its determinant is very close to zero (looking at the matrix, some terms are around 1e+0 and others around 1e-15). After investigating, it is clear that one variable has no influence on the system when it is close to the solution. What is the cleverest way to deal with such an issue? I would like an algorithm that can adapt itself to such situations when they arise. It is an optical optimisation problem: the goal is to add a surface to an optical system so that it fits some optical properties. Newton's method finds the roots of a function that takes the parameters of the surface as input and outputs the differences between the computed optical properties and the targeted ones. I noticed that if the system is complex enough, then we have convergence. But if the system is too simple, the surface is closer to spherical and the Jacobian entries become very small, because some parameters, like the astigmatism axis, lose their influence. From what you describe, you have an ill-posed problem: the solution is not unique (not even locally). A standard way of dealing with this is the following: instead of trying to solve $F(x)=y$, where $x$ is your vector of geometrical parameters, $y$ the vector of optical parameters, and $F$ the mapping that computes the latter from the former, you minimize $$ J(x) = \frac12\|F(x)-y\|^2 + \frac\alpha2\|x\|^2$$ for some (small) $\alpha>0$. The last term ensures (local) uniqueness: among all solutions of $F(x)=y$, it will pick the one with minimal norm (in your case, if every angle gives the same surface, the minimizer will have zero angle). You can then compute a minimizer using any optimization method (e.g., BFGS with line search globalization), as described in the book Numerical Optimization by Nocedal and Wright.
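As a concrete illustration of the regularized formulation above, here is a minimal NumPy sketch of a damped Gauss-Newton iteration for $J(x)$ (the residual already absorbs $y$); the toy residual, the value of $\alpha$ and the stopping rule are assumptions, not part of the original question or answer.

    import numpy as np

    def gauss_newton_tikhonov(F, jac, x0, alpha=1e-6, tol=1e-10, max_iter=50):
        # Minimize 0.5*||F(x)||^2 + 0.5*alpha*||x||^2 with Gauss-Newton steps.
        # The alpha*I term keeps the normal equations solvable even when the
        # Jacobian is (nearly) singular.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r = F(x)
            Jx = jac(x)
            A = Jx.T @ Jx + alpha * np.eye(x.size)   # regularized Gauss-Newton matrix
            g = Jx.T @ r + alpha * x                 # gradient of J at x
            dx = np.linalg.solve(A, g)
            x = x - dx
            if np.linalg.norm(dx) < tol:
                break
        return x

    # Toy 3x3 residual whose third variable has (almost) no influence,
    # mimicking the near-singular Jacobian described in the question.
    def F(x):
        return np.array([x[0] ** 2 + x[1] - 3.0,
                         x[0] - x[1] ** 2 + 1.0,
                         1e-12 * x[2]])

    def jac(x):
        return np.array([[2.0 * x[0], 1.0, 0.0],
                         [1.0, -2.0 * x[1], 0.0],
                         [0.0, 0.0, 1e-12]])

    print(gauss_newton_tikhonov(F, jac, x0=[1.0, 1.0, 5.0]))

Note how the regularization drives the uninfluential third variable towards zero (the minimal-norm choice), exactly as described above.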
CommonCrawl
I am interested in where you have placed zero in both Venn diagrams, Cong. It is difficult to say whether 0 is odd or even - I can see why you have placed it as an even number, though, as it seems to extend the pattern $8, 6, 4, 2, ... 0$. It's easy to forget that $0$ is a square number - but certainly $0 \times 0=0$. In order to find the solution to this problem you first need to list the squares, which are $49, 36, 25, 16, 9, 4$ and $1$. There are $4$ odd square numbers, which are $49, 25, 9$ and $1$. So you put the remaining squares in the square-numbers section, the odd numbers in the odd section, and you leave everything else outside. Well done! Eliza, from Danebank School, also left $0$ out of the Venn diagram. What do you think? Is $0$ square and even?
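A few lines of Python (not part of the original page, just an illustrative check of the sorting described above) reproduce the lists for the numbers 1 to 49:

    # Classify 1..49 for the odd/square Venn diagram.
    squares = [n for n in range(1, 50) if int(n ** 0.5) ** 2 == n]
    odd_squares = [n for n in squares if n % 2 == 1]
    print("squares:", squares)          # [1, 4, 9, 16, 25, 36, 49]
    print("odd squares:", odd_squares)  # [1, 9, 25, 49]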
CommonCrawl
Abstract: In this paper we study a combinatorial optimization problem arising from on-board networks in satellites. In this kind of network the entering signals (inputs) should be routed to amplifiers (outputs). The connections are made via expensive switches with four links available, and the paths connecting inputs to outputs should be link-disjoint. More formally, we call a $(p,\lambda,k)$-network an undirected graph with $p+\lambda$ inputs, $p+k$ outputs and internal vertices of degree four. A $(p,\lambda,k)$-network is valid if it is tolerant to a restricted number of faults in the network, i.e. if for any choice of at most $k$ faulty inputs and $\lambda$ faulty outputs, there exist $p$ edge-disjoint paths from the remaining inputs to the remaining outputs. In the special case $\lambda=0$, a $(p,\lambda,k)$-network is already known as a selector. Our optimization problem consists of determining $N(p,\lambda,k)$, the minimum number of nodes in a valid $(p,\lambda,k)$-network. For this, we present validity certificates and a gluing lemma from which we derive lower bounds for $N(p,\lambda,k)$. We also provide constructions, and hence upper bounds, based on expanders. The problem is very sensitive to the order of $\lambda$ and $k$. For instance, when $\lambda$ and $k$ are small compared to $p$, the question reduces to avoiding certain forbidden local configurations. For larger values of $\lambda$ and $k$, the problem is to find graphs with a good expansion property for small sets. This leads us to introduce a new parameter called $\alpha$-robustness. We use $\alpha$-robustness to generalize our constructions to higher-order values of $k$ and $\lambda$.
CommonCrawl
The compound interest on a certain sum at the rate of 5% p.a. for 2 years is Rs. 287. What is the sum? A tank is normally filled in 9 hours, but takes 3 hours longer to fill because of a leak. If the tank is full, how long will the leak take to empty it? What is the average of the following set of scores? In how many different ways can the letters of the word POLICE be arranged? When 32 is added to 64% of a number, the result is 25% of 576. What is the number? The cost of a cricket bat after a 16% discount is Rs. 1134. What is the cost of the bat before the discount? An amount was distributed among A, B and C in the ratio 7 : 11 : 16. If A received Rs. 4986 less than C, then what was the share of B? A person spent 40% of his monthly salary on house rent and 30% of the remaining amount on food. If the person is left with Rs. 3591, then what is his monthly salary? $\sqrt{7.84} \times 10.24$ = ? What should come in place of the question mark (?) in the following question? 73174 - 29617 + 43156 - 31619 = ?
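As a worked sketch of the first question (the page gives no answers, so treat this as an illustration rather than an official solution): compound interest of Rs. 287 at 5% p.a. for 2 years means $P[(1.05)^2 - 1] = 287$, i.e. $P \times 0.1025 = 287$, so $P = 2800$ and the sum is Rs. 2800.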
CommonCrawl
I describe recent work with Stefan Hollands that establishes a new criterion for the dynamical stability of black holes in $D \geq 4$ spacetime dimensions in general relativity with respect to axisymmetric perturbations: Dynamic stability is equivalent to the positivity of the canonical energy, $\mathcal E$, on a subspace of linearized solutions that have vanishing linearized ADM mass, momentum, and angular momentum at infinity and satisfy certain gauge conditions at the horizon. We further show that $\mathcal E$ is related to the second order variations of mass, angular momentum, and horizon area by $\mathcal E = \delta^2 M - \sum_i \Omega_i \delta^2 J_i - (\kappa/8\pi) \delta^2 A$, thereby establishing a close connection between dynamic stability and thermodynamic stability. In particular, thermodynamically unstable black holes are dynamically unstable, as conjectured by Gubser and Mitra. We also prove that positivity of $\mathcal E$ is equivalent to the satisfaction of a ``local Penrose inequality,'' thus showing that satisfaction of this local Penrose inequality is necessary and sufficient for dynamical stability.
CommonCrawl
Hyperoperation is an area of mathematics that studies indexed families of binary operations (hyperoperation families) which generalize and extend the standard sequence of basic arithmetic operations: addition, multiplication and exponentiation. Why are addition and multiplication commutative, but not exponentiation? Continuum between addition, multiplication and exponentiation? What combinatorial quantity does the tetration of two natural numbers represent? Where can I learn more about commutative hyperoperations? What operation, repeated $n$ times, results in the addition operator? How to define $A\uparrow B$ with a universal property as well as $A\oplus B$, $A\times B$, $A^B$ in category theory? How exactly does Knuth's up-arrow notation work? Are there nontrivial equations for hyperoperations above exponentiation? Could someone tell me how large this number is? Has this phenomenon been discovered and named? Is there a way to write TREE(3) via $F^a(n)$? What is the geometric, physical or other meaning of tetration? What is the geometric, physical or other meaning of tetration or higher hyperoperations? Does it exist in general, or is it only a mathematical concept? Tighter bounds on the fast growing hierarchy? Are hyperoperations < 3 to a reciprocal of a positive integer equivalent to the 'root' inverse to that integer?
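The recursive definition behind several of these question titles can be sketched in a few lines of Python; this is the usual hyperoperation hierarchy in its common convention, offered only as an illustration and not tied to any particular question above.

    def hyper(n, a, b):
        # Hyperoperation H_n(a, b): n = 0 successor, n = 1 addition,
        # n = 2 multiplication, n = 3 exponentiation, n = 4 tetration, ...
        if n == 0:
            return b + 1
        if n == 1:
            return a + b
        if n == 2:
            return a * b
        if n == 3:
            return a ** b
        if b == 0:
            return 1  # base case for n >= 4
        return hyper(n - 1, a, hyper(n, a, b - 1))

    print(hyper(1, 2, 3))  # 5
    print(hyper(2, 2, 3))  # 6
    print(hyper(3, 2, 3))  # 8
    print(hyper(4, 2, 3))  # tetration: 2^(2^2) = 16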
CommonCrawl
Theoretical ex-rights price refers to the theoretical value of a company's share immediately after a right issue. After a right issue the price of a share falls below the prevailing price depending on the number of extra shares issued and the extent of discount at which the new shares are issued. In reality, the actual share price after a rights issue would be much lower or higher than the TERP based on investor's response towards the issue. However, since this aspect cannot be mathematically incorporated into the formula, only the theoretically considerable aspects (extra shares and discount) are included.This is the reason why it is called the 'Theoretical ex-rights price', rather than simply the ex-rights price. Before one can learn the formula to find the theoretical ex-rights price, it is necessary to know the concept 'market capitalisation'. The current price of a company's share is \(P\), and there are \(N\) number of shares. This means the market capitalisation is \(N\times P\) . The rights issue price is \(Q\). TERP is therefore the weighted average of the share price before rights issue and the rights issue prices. The current price of a company's share is \(\$200\). 1 for 1 rights issue will be made (1 new share for every 1 share held), at a rights issue price of \(\$100\). The TERP needs to be ascertained. Consider the same prices as in example 1 (current price\(\$200\) and rights issue price \(\$100\) ). Now, 1 for 5 rights issue is made. (1 new share is given for 5 shares held). Consider the same prices as in example 1 (current price\(\$200\) and rights issue price\(\$100\) ). Now, 1 for 10 rights issue is made. (1 new share is given for 10 shares held). All of the above given examples consider the same current share price and rights issue price (200 and 100 respectively). But still, all three examples give different values for TERP. This difference is due to the basis of rights issue. In example 1, a \(1-for-1\) rights issue was made. So, the number of new shares is \(100\%\) of the existing number of shares. This pushed the price down to a greater extent, i.e. the share price fell from \(\$200\) to \(\$150\). In example 2, a \(1-for-5\) rights issue was made. The number of new shares was only\(20\%\) of the existing number of shares. So, the price dropped to a lesser extent compared to example 1, i.e. the share price dropped from \(\$200\) to \(\$183.33\) . In example 3, a \(1-for-10\) rights issue was made, which means the number of additional shares raised was only \(10\%\) of the existing number of shares. So, the drop in share price was the lowest (\(\$200\) to \(\$190.91\) ) compared to examples 1 and 2. So, the additional number of shares issued through the rights issue would influence the value of TERP. Also, the amount of discount given on the rights issue will influence the value of TERP. Higher the discount on the rights issue price, lower the TERP would be.
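The formula implied by the setup and the examples above is the weighted average \(TERP = (N \times P + M \times Q) / (N + M)\), where \(M\) denotes the number of new shares issued; \(M\) is introduced here only for illustration, since the extracted text defines \(N\), \(P\) and \(Q\) but no explicit formula appears. A short Python sketch reproduces the three examples:

    def terp(current_price, rights_price, held, new):
        # Theoretical ex-rights price for a 'new-for-held' rights issue,
        # computed as the weighted average over old and new shares.
        return (held * current_price + new * rights_price) / (held + new)

    print(round(terp(200, 100, held=1, new=1), 2))   # 150.0   (1-for-1 issue)
    print(round(terp(200, 100, held=5, new=1), 2))   # 183.33  (1-for-5 issue)
    print(round(terp(200, 100, held=10, new=1), 2))  # 190.91  (1-for-10 issue)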
CommonCrawl
A low-profile dual-polarized wideband omnidirectional antenna with artificial magnetic conductor (AMC) reflector is proposed. The proposed antenna is operated in the long term evolution band (1.7–2.7 GHz), and has a compact size of 200 mm $\times200$ mm $\times $ 30.6 mm (about $0.25\lambda $ height at 2.7 GHz). The antenna structure consists of a horizontally polarized circular loop antenna, a vertically polarized low-profile monopole antenna, and an AMC reflector. By carefully designing the reflection characteristics of the AMC reflector, the profile height of the proposed antenna is significantly reduced as compared with those of antennas backed by conventional perfect electric conductor (PEC) ground planes. Simulated and measured results show that the proposed antenna is able to achieve over 45% impedance bandwidth (VSWR <1.8) with stable radiation patterns in the band of 1.7–2.7 GHz. Owing to the attractive wide bandwidth, low-profile configuration, and ease of fabrication, the proposed antenna is suitable for microbase station systems, especially for indoor ceiling antenna networking applications.
CommonCrawl
Complex genetic disorders often involve products of multiple genes acting cooperatively. Hence, the pathophenotype is the outcome of the perturbations in the underlying pathways, where gene products cooperate through various mechanisms such as protein-protein interactions. Pinpointing the decisive elements of such disease pathways is still challenging. Over the last years, computational approaches exploiting interaction network topology have been successfully applied to prioritize individual genes involved in diseases. Although linkage intervals provide a list of disease-gene candidates, recent genome-wide studies demonstrate that genes not associated with any known linkage interval may also contribute to the disease phenotype. Network based prioritization methods help highlighting such associations. Still, there is a need for robust methods that capture the interplay among disease-associated genes mediated by the topology of the network. Here, we propose a genome-wide network-based prioritization framework named GUILD. This framework implements four network-based disease-gene prioritization algorithms. We analyze the performance of these algorithms in dozens of disease phenotypes. The algorithms in GUILD are compared to state-of-the-art network topology based algorithms for prioritization of genes. As a proof of principle, we investigate top-ranking genes in Alzheimer's disease (AD), diabetes and AIDS using disease-gene associations from various sources. We show that GUILD is able to significantly highlight disease-gene associations that are not used a priori. Our findings suggest that GUILD helps to identify genes implicated in the pathology of human disorders independent of the loci associated with the disorders. Copyright: © Guney, Oliva. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: Departament d'Educació i Universitats de la Generalitat de Catalunya i del Fons Social Europeu (Department of Education and Universities of the Generalitat of Catalonia and the European Social Fons). Spanish Ministry of Science and Innovation (MICINN), FEDER (Fonds Européen de Développement Régional) BIO2008-0205, BIO2011-22568, PSE-0100000-2007, and PSE-0100000-2009; and by EU grant EraSysbio+ (SHIPREC) Euroinvestigación (EUI2009-04018). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Genetic diversity is augmented by variations in genetic sequence, however not all the mutations are beneficial for the organism. Coupled with environmental factors these variations can disrupt the complex machinery of the cell and cause functional abnormalities. Over the past few decades, a substantial amount of effort has been exerted towards explaining sequential variations in human DNA and their consequences on human biology . Linkage analysis , association studies and genome-wide association studies (GWAS) have achieved considerable success in identifying causal loci of human disorders, albeit with limitations , . Complex genetic disorders implicate several genes involved in various biological processes. Interactions of the proteins of these genes have helped extend our view of the genetic causes of common diseases –. 
Genes related to a particular disease phenotype (disease genes) have been demonstrated to be highly connected in the interaction network (e.g., in toxicity modulation and cancer , ). Yet, rather than having random connections through the network, the interactions of proteins encoded by genes implicated in such phenotypes involve partners from similar disease phenotypes –. Linkage analysis typically associates certain chromosomal loci (linkage interval) with a particular disease phenotype. Such analysis produces a set of genes within the linkage interval. Recent studies have confirmed the usefulness of network-based approaches to prioritize such candidate disease genes based on their proximity to known disease genes (seeds) in the network. These studies can be distinguished by the way they define proximity between the gene products in the network of protein-protein interactions. Thus, proximity is defined by considering direct neighborhood –, or by ranking with respect to shortest distance between disease genes , – or using methods based on random walk on the edges of the network , , . Making use of the global topology of the network, random walk based methods have been shown to perform better than local approaches , , . Two inherent properties of available data on protein-protein interactions (PPI) that affect the prioritization methods are incompleteness (false negatives) and noise (false positives). The bias towards highly connected known disease nodes in protein interaction networks has recently motivated statistical adjustment methods on the top of the association scores computed by prioritization algorithms where node scores are normalized using random networks . Furthermore, taking network quality into consideration, several approaches incorporate gene expression and data on functional similarity in addition to physical PPIs , –. Gene prioritization is then based on the integrated functional network, redefining "gene-neighborhood" at the functional level. Network-based approaches can also aid in identifying novel disease genes, even when the associated linkage intervals are not considered, for instance, to prioritize genes from GWAS , , . In fact, using the whole genome to prioritize disease-gene variants is expected to produce more robust results in identifying modest-risk disease-gene variants than using high-risk alleles . Nonetheless, existing prioritization methods substantially suffer from a lack of linkage interval information and depend on the quality of the interaction network . Thus, to identify genes implicated in diseases, stout methods that exploit interaction networks to capture the communication mechanism between genes involved in similar disease phenotypes are needed. Available network-topology based prioritization methods treat all the paths in the network equally relevant for the pathology. We hypothesize that the communication between nodes of the network (proteins) can be captured by taking into account the "relevance" of the paths connecting disease-associated nodes. Here, we present GUILD (Genes Underlying Inheritance Linked Disorders), a network-based disease gene prioritization framework. GUILD proposes four topology-based ranking algorithms: NetShort, NetZcore, NetScore and NetCombo. Additionally, several other state-of-the-art algorithms that use global network topology have been included in GUILD: PageRank with priors (as used in ToppNet ), Functional Flow , Random walk with restart and Network propagation . 
The framework uses known disease genes and interactions between the products of these genes. We show the effectiveness of the proposed prioritization methods developed under the GUILD framework for genome-wide prioritization. We also use several interaction data sets with different characteristics for various disease phenotypes to evaluate the classifier performance of these methods. As a proof of principle we use GUILD to pinpoint genes involved in the pathology of Alzheimer's disease (AD), diabetes and AIDS. GUILD is freely available for download at http://sbi.imim.es/GUILD.php. We tested the prioritization algorithms in GUILD using three sources of gene-phenotypic association and the largest connected components of five different protein-protein interaction networks (see "Methods" for details and names of these sets). The area under ROC curve (AUC) was used to compare each ranking method (four novel methods NetScore, NetZcore, NetShort and NetCombo; and four existing state-of-the-art methods, Functional Flow, PageRank with priors, Random walk with restart and Network propagation). The AUCs for each method averaged over all disorders in different disease data sets (OMIM, Goh and Chen) and interaction data sets (Goh, Entrez, PPI, bPPI, weighted bPPI) are given in Table 1 (see Table S1 for the AUC values averaged over diseases on each interaction network separately). We also compared the ratio of seeds covered (sensitivity) among the top 1% predictions of each method (Table 1). In general, our methods produced more accurate predictions and better sensitivity in genome-wide prioritization than the up-to-date algorithms with which we compared. NetCombo, the consensus method combining NetScore, NetZcore and NetShort, proved to be an effective strategy of prioritization independent of the data set used. NetCombo produced significantly better predictions than Network Propagation, the best of the state-of-art tested approaches, on each data set (P≤5.7e-6, see Table S2 for associated p-values). Also the improvement of NetScore versus Network Propagation was significant in Goh and Chen data sets (P≤8.2e-5). Figure S1 compares the significant improvements in AUC. We also tested alternative ways to combine prioritization methods. However, none of the combinations using other methods proved as effective as combining the three methods included in NetCombo. Details showing the average AUC and sensitivity among the top 1% high scoring genes of each disorder for each prioritization method using OMIM, Goh and Chen data sets on each interaction network can be found in Tables S3 and S4. In order to avoid bias towards highly studied diseases we used equal number of gold standard positive and negative instances via grouping all the non-seed scores in k groups, where k is the number of seeds associated with the disease under evaluation (see "Methods"). Considering that the distribution of disease associated genes among all the genes is not known a priori, this assumption provided a fair testing set to compare different prediction methods than using all non-seeds as negatives or using only a random subsample of non-seeds. We also compared the prioritization methods when all non-seeds were assumed as negatives. The AUC values increased for all methods on all data sets (up to 10%). In all tests NetCombo and NetScore outperformed existing prioritization methods (see Table S5). 
The prediction performance of these methods depended on the topology of the network and the quality of the knowledge of protein-protein interactions in regards to size and reliability. We grouped the AUCs of all disorders by network type to test these dependencies (see "Methods" for network definitions). The distribution of AUCs for each interaction data set using OMIM, Goh and Chen data sets is given in Figure 1 (see Figure S2 for the distribution of sensitivity values with the top 1% predictions). Interestingly, most of the methods produced their best results with the weighted bPPI network, which used the scores from the STRING database to weight the edges (see Table S1 for the average AUC). The improvement of the prediction performance using edge confidence values from STRING was significant for most methods (with the exception of NetShort and Random walk with restart algorithms, for which the performance improved but not significantly). These results justify the importance of network quality (i.e. using reliable binary interactions). Figure 1. Prediction performance of GUILD approaches on each interaction network over all phenotypes of OMIM, Goh and Chen data sets. The distribution of AUCs for different phenotypes in each network is represented with a box-plot of different color. Color legend: red, Goh network; yellow, Entrez network; green, PPI network; blue, bPPI network; purple, weighted bPPI network. Furthermore, we hypothesized that removing interactions detected by pull down methods, such as Tandem Affinity Purification (TAP), would filter the noise produced by false binary interactions, consequently increasing the AUC and the sensitivity among top ranked predictions when the bPPI network was used instead of the PPI network (see Table S1). Our results indicated that the network size was relevant too when binary interactions were used. The Goh network, which was smaller than the bPPI network, produced significantly lower AUC values for the majority of prioritization methods (all but NetShort). Thus, the use of the largest possible network with assessed binary interactions could improve the predictions. Based on the AUC values for each phenotype when the bPPI network is used, NetCombo, NetScore, NetZcore, and NetShort were significantly better than Functional Flow, PageRank with priors and Random walk with restart. NetCombo had an average AUC of 74.7% using the bPPI network on OMIM data set and this was the only method over 70% AUC (Table S1). However, when the weighted bPPI network was used to study the same data set, the AUCs of NetScore and NetZcore methods also surpassed this limit, with values around 74% and 72% respectively (NetCombo achieved 76.5% AUC in this case). Next, we questioned whether the prediction methods depended on the connectivity between seeds using OMIM, Goh and Chen data sets. Table 2 shows the correlation between the average AUC of the prioritization methods and the graph features involving seeds of each disease phenotype in the bPPI network (number of seeds, number of neighboring seeds, and average shortest path length between seeds). A small inverse correlation was found between the average length of the shortest paths connecting seeds and the prediction capacity for all methods. This correlation was observed when using any of the interaction networks; therefore, it was independent of the underlying network. 
Average number of neighboring seeds also correlated with prediction performance, but less than the average length of the shortest paths connecting the seeds. Table 2. Correlations between prediction performances of methods, measured as the average AUC over phenotypes, and seed connectivity values (associated p-values are included in parenthesis). We questioned whether our methods depended on the number of seeds associated with a disorder using OMIM, Goh and Chen data sets. We addressed the dependence on the number of seeds by splitting all disorders into two groups with respect to the number of seeds (i.e. using the median of the distribution of the seeds associated with the diseases). There were 65 disorders with less than 23 seeds (the median number of seeds) and 67 disorders with at least 23 seeds (2 disorders had exactly 23 seeds). Figure 2 shows the AUC distribution for the eight methods studied for these two groups using bPPI network. In general, the AUCs were similar in the two groups, supporting the lack of correlation between the number of seeds and AUC in Table 2. The differences between AUCs of the two groups were only significant for NetCombo, NetShort and Network propagation (all associated p-values are less than 0.009, assessed by non-paired Wilcoxon test). This was consistent with the anti-correlation observed between the number of seeds and AUC for these methods. Figure 2. Dependence on the number of seeds. Tests and evaluations were performed using the human bPPI network and genes from OMIM, Chen and Goh disease phenotypes. Box plots of the AUCs are based on the predictions of disease-gene associations for disorders with less than 23 seeds (light gray) and disorders with at least 23 seeds (dark gray) using each prioritization method. Using disease-gene association information in OMIM data set and the proposed consensus prioritization method (NetCombo) on the human interactome, we calculated the disease-association scores of all genes in the network for Alzheimer's Disease (AD), diabetes and AIDS, three phenotypes with relatively high prevalence in the society. In order to check the validity of these scores, we used disease-gene associations from the Comparative Toxicogenomics Database (CTD) , the Genetic Association Database (GAD) and available expert curated data sets (see Methods for details). Moreover, we analyzed the GO functional enrichment of the top-ranking genes. First, we used the disease-gene associations in CTD to confirm the biological significance of the scores calculated by the prioritization method in these three diseases. We retrieved direct and indirect disease-gene associations in CTD. We compared the distribution of the scores assigned by NetCombo in the "direct association group" with the distribution of these scores in the "no-association group" and with the distribution in the "indirect association group" (see methods for details). In the three examples, the scores were significantly higher for the direct disease-gene associations than indirect-associations or no-associations (see Figure 3 and Table S6). In the analysis of AD and AIDS, more than 40% of the CTD disease-genes had NetCombo score higher than 0.1. Moreover, only around 5% of the genes in the no-association group for each disease had scores higher than 0.1 and the mean of the direct association group was significantly higher than the mean of the indirect association group (Table S6). Figure 3. 
Cumulative percentage of disease-genes with direct associations in CTD (dark gray) and non associated genes (light gray) as a function of the NetCombo score for Alzheimer's disease (A), diabetes (B), and AIDS (C). Second, we checked how many of the gene-disease associations in GAD coincided with the top-ranking genes for each phenotype (AD, diabetes and AIDS). The top-ranking genes covered significant number of genes in GAD (Table 3). The rankings of the highest scoring genes for AD, diabetes and AIDS are given in Table S7. Then, we checked the GO functions enriched among the top-ranking genes (Table S8). GO enrichment in the subnetwork induced by the top-ranking genes in AD highlighted the role of the Notch signaling and amyloid processing pathways. The link between these pathways and the pathology of AD has been demonstrated recently . The enrichment of GO functions among the prioritized genes for AIDS and diabetes showed the relevance of biological process triggered by inflammatory response, such as cytokine and in particular chemokin activity. This result was also consistent with the literature , . Table 3. Number of genes (excluding seeds) in the top 1% using NetCombo score and its significance with respect to the number of genes in GAD and in the network. Finally, we further analyzed in detail the results for AD, showing that some well-ranked top genes were out of any known linkage interval associated with AD and still played a relevant role. Figure 4 shows the top-scoring genes for AD and the subnetwork induced by the interactions between their proteins. The 17 AD seeds (disease-gene associations from OMIM) and the 106 genes prioritized by NetCombo involved several protein complexes and signaling pathways such as the gamma-secretase complex, serine protease inhibitors, the cohesin complex, structural maintenance of chromosome (SMC) family, the short-chain dehydrogenases/reductases (SDR) family, adamalysin (ADAM) family, cytokine receptor family and Notch signaling pathway. Some genes within these families have been demonstrated to be involved in AD pathology –: ADAM10 (ADAM family), HSD17B10 (SDR family), and PSENEN, APH1A, APH1B, and NCSTN (gamma-secretase complex). It is worth mentioning that AD has been central to recent research efforts, but mechanisms underlying the disorder are still far from understood. The accumulation of senile plaques and neurofibrillary tangles is postulated as the main cause of the disease. The gamma-secretase is involved in the cleavage of the amyloid precursor protein. This process produces the amyloid beta peptide, the primary constituent of the senile plaques in AD. Interestingly, the six genes predicted by the method (pointed by arrows in Figure 4) were not associated with AD in OMIM. Remarkably, only APH1A (1q21–q22), and PSENEN (19q13.13) lied either under or close to a linkage interval associated with AD (i.e. 1q21, OMIM:611152; and 19q13.32, OMIM:107741) and none of the remaining four genes lied under or close to a known linkage interval associated with AD. Moreover, the subnetwork of top-ranking AD genes covered several genes in the expert curated data set reported by Krauthammer et al. such as APBB1, VLDLR, SERPINA1 and BACE1 (p-value associated with this event<1.3e-3). Figure 4. Alzheimer's disease-associated top-scored proteins and their interactions. AD-implicated proteins identified using NetCombo method on the weighted bPPI network with OMIM AD data. High-scored proteins were selected at the top 1% level using NetCombo scores. 
Proteins are labeled with the gene symbols of their corresponding genes. Edge thickness was proportional to the weight of the edge (assigned with respect to STRING score). Red nodes are associated with AD. Diamond and round rectangle nodes come from the OMIM AD set (seeds). Round rectangle and red circle nodes have been associated with AD using the analysis of differential expression. The nodes highlighted with arrows (ADAM10, HSD17B10, PSENEN, APH1A, APH1B, NCSTN) have been recently reported in the literature to be involved in the pathology of AD. The main contributions of this paper are twofold. First, we presented four novel methods that are comparable to, or outperform, state-of-the-art approaches on the use of protein-protein interactions to predict gene-phenotype associations at genome-wide scale, extending the set of relevant genes of a phenotype. Second, we demonstrated to which extent these prioritization methods could be used to prioritize genes on multiple gene-phenotype association and interaction data sets. We investigated the prediction capacity and robustness of the approaches by testing their performance against the quality and number of interactions. Typically, network-based methods consider the paths between nodes equally relevant for a particular disease. The prioritization methods proposed in this study differ from others in the way the information is transferred through the network topology. NetShort considered a path between nodes shorter if it contained more seeds (known-disease gene associations) in comparison to other paths. NetScore accounted for multiple shortest paths between nodes. NetZcore assessed the biological significance of the neighborhood configuration of a node using an ensemble of networks in which nodes were swapped randomly but the topology of the original network was preserved. Our results demonstrated that combining different prioritization methods could exploit better the global topology of the network than existing methods. The prediction performance of the prioritization methods depended on the quality and size of the underlying interaction network. Yet, this dependence affected the performance of the methods similarly. The improvement of the network quality also improved the predictions for all methods. On the other hand, the prediction accuracy of the prioritization methods showed a large variation depending on the phenotype in consideration, but this variation was reduced when a consensus method was used (NetCombo). On average, the prediction performance was better on Chen and OMIM data sets compared to the Goh data set. It can be argued that this is because the Goh data set contains gene-phenotype associations where the phenotype is defined in a broader sense (i.e. the physiological system affected). Still, the AUC values were consistent among different data sets for all the prioritization methods. Although network-based prioritization of whole genome provides a ranking of genes according to their phenotypic relevance, the interplay between genes in many diseases might not be captured by solely the PPI information. In fact, for several phenotypes in OMIM data set such as amyloidosis, myasthenic, myocardial and xeroderma the genes associated with the disease were predicted with high accuracy in our analysis, whereas for mitochondrial, osteopetrosis and epilepsy phenotypes, the network-based prioritization was less successful. 
The best AUC and coverage of disease genes among high-scored gene-products were obtained with the largest and highly confident network (in which interactions integrated from public repositories were filtered out if detected by TAP and edges were positively weighted using the scores provided by STRING database). This improvement was significant for all proposed approaches. The increased coverage and AUC when the bPPI network was used instead of the Goh and Entrez networks showed the benefit of integrating information from various data sources. Prioritization algorithms rely on the topology of the network; thus, increasing the number of known interactions should improve coverage. Nonetheless, interaction data integrated in this manner is prone to include false positives, and filtering possible non-binary interactions (e.g., complexes identified by TAP) can improve the use of integrated data. The hypothesis that we required the largest reliable set for the study of gene prioritization was supported by the increase of AUC when the bPPI network was used instead of the PPI network. The AUC values over dozens of different phenotypes that vary in number of initial gene-phenotype associations showed the applicability of the methods independent of the number of genes originally associated with the phenotype. Moreover, having more number of seeds associated with a pathophenotype did not necessarily improve the prediction accuracy. Most prioritization methods achieved better performance for disorders with low number of seeds. This difference in performance was significant for NetCombo, NetShort and Network propagation. In fact, the accuracy of the predictions was rather correlated with the average shortest path length between seeds, which shows the importance of the topology of the network. We applied the prioritization methods to study the implication of genes in AD, diabetes and AIDS. We claimed that the genes discovered in the high scoring portion of the network would be more likely to be involved in the pathology of these diseases. Therefore, we further analyzed the genes prioritized by NetCombo using the human bPPI network. We verified that some of these predictions were consistent with the literature and the scores assigned by GUILD distinguished between the genes associated with a specific disease and the rest of genes. We have to note that we merged the entries for diabetes type 1 and type 2 in OMIM and defined it as "diabetes phenotype". This may explain why 1) the top-ranking genes predicted for diabetes covered relatively less genes in GAD (assessed by hypergeometric p-value) than AD and AIDS; and 2) the genes with direct-associations were more easily segregated by NetCombo-scores for AD and AIDS than diabetes. Furthermore, we showed that the groups of genes predicted to be associated with these three phenotypes were enriched in biological processes related to the disease. In AD, top-ranking genes formed a subnetwork implying the Notch and amyloid pathways, while top-ranking genes for diabetes and AIDS were involved in the inflammatory response mechanisms. Our analysis on these diseases suggested that our approach in whole genome prioritization was a competent way to discover novel genes contributing to the pathology of diseases. Based on this study, we have shown that the new approaches (NetCombo, NetShort, NetScore, and NetZcore) improved the results of state-of-the-art algorithms, such as Functional Flow, PageRank with priors, Random walk with restart and Network propagation. 
It is worth mentioning that PageRank with priors and Random walk with restart have been adopted to address genome-wide disease-gene prioritization previously , . Furthermore, a variation of Random walk with restart algorithm that incorporates phenotypic similarity was recently proposed . Since our aim was to compare the algorithms with each other, here, we evaluated them on the same benchmarking data set using only the initial disease-gene associations and the interaction network. Finally, we made all eight methods publicly available in the GUILD framework. Overall, our results suggest that human diseases employ different mechanisms of communication through their interactions. Our analysis reveals a collective involvement of sets of genes in disorders and could be extended to identify higher order macromolecular complexes and pathways associated with the phenotype. However, the use of a single and generic prioritization scheme may not be sufficient for completing the set of pathways affected by a disease and may require the use of more than one method. Furthermore, network-based prioritization methods that use only PPI information fail to identify the disease-genes whose proteins do not interact with other proteins. Therefore, towards a comprehensive understanding of biological pathways underlying diseases, the network-based prioritization methods suggested here can be complemented by incorporating gene expression, functional annotations or phenotypic similarity profiles and by using functional association networks rather than PPI networks. We used three human interactomes: i) Goh network, the PPI network from the work of Goh et al. in which data was taken from two high quality yeast two-hybrid experiments , and PPIs obtained from the literature; ii) Entrez network, a compilation of interactions from BIND and HPRD provided by NCBI (ftp://ftp.ncbi.nih.gov/gene/GeneRIF/interactions.gz); and iii) PPI network, the set of experimentally known PPIs integrated as in Garcia-Garcia and colleagues using BIANA (see Methods S1 on the details of the integration protocol). Considering that high throughput pull down interaction detection methods introduce many indirect relationships (such as being involved in the same complex) in addition to direct physical interactions, we removed the subset of interactions obtained by TAP, resulting in the bPPI network. Furthermore, we have incorporated edge scores for the interactions between two proteins in this network using STRING database . We refer this network as weighted bPPI network. In all other networks, the edge weights have the default value of 1. When edge weights from STRING were used (in weighted bPPI network), the scores given by STRING were rescaled to range between 0 and 1 and then added to the default value of 1. We have to note that the algorithms being studied depend solely on the topology of the network, implying that unconnected nodes and very small components cannot effectively transfer the relevant information along the network. Consequently, only the largest connected component of the network was used for the evaluation (see Table S9 for the sizes of the remaining components in the interaction networks). Hereafter, the term "network" refers to the largest connected component of the network unless otherwise stated. See Table S10 for a summary of the data contained in these interaction networks. Genes and their associated disorders were taken from: 1) Online Mendelian Inheritance in Man (OMIM) database , 2) Goh et al. 
(referred as Goh data set throughout the text), and 3) Chen et al. (referred as Chen data set throughout the text). OMIM is one of the most comprehensive, authoritative and up-to-date repositories on human genes and genetic disorders. The information in OMIM is expert curated and provides the mutations on the genes associated with the disorders. Phenotypic associations for genes were extracted from the OMIM Morbid Map (omim.org/downloads retrieved on November 4, 2011) by merging entries using the first name as previously done , , . A disorder was considered if and only if it had at least 5 gene products in any of the interaction networks mentioned above (this data set is referred as OMIM hereafter). Having 5 proteins in the interaction network was required for a five-fold cross validation evaluation and also ensured that we tested the capacity to use global topology (in the case of few genes the amount of annotation transfer is limited, diminishing the benefit of using network based methods as opposed to direct neighborhood). In Goh data set , OMIM disorders (from December 2005) were manually classified in 22 disorder classes based on the physiological system affected (21 classes excluding the unclassified category). In Chen data set , a total of 19 diseases were collected from OMIM and GAD See Table S11 for a summary of the diseases used in this study. Additionally, we used an independent gene-phenotype association data set to optimize the required parameters of prioritization methods (see below) without over-fitting the available gene-disease associations. This data set contains gene-disease associations identified by text mining PubMed abstracts using SCAIView for aneurysm (168 genes, keyword search "intracranial aneurysm" and restricting the query to include entries with MeSH "genetics" term) and breast cancer (1588 genes, similar to aneurysm but using "breast cancer" as the keyword). These genes are listed in Table S12. Genes associated with a disorder were mapped to their products (proteins) in the protein-protein interaction network and assigned an initial score for their phenotypic relevance. Thus, proteins translated by genes known to be involved in a particular pathology were termed seeds and have the higher scores in the network. All other proteins in the network were assigned non-seed scores (lower scores in the network). The number of proteins (nodes) and interactions (edges) in all interaction networks used in this study are given in Table S10. Table S11 summarizes all diseases used under the context of this study, the number of genes associated with them and number of corresponding proteins translated by these genes covered in the largest connected component of the network. NetShort is motivated by the idea that a node important for a given phenotype would have shorter distances to other seed nodes in the network. As opposed to previous approaches that employ shortest paths, we incorporate "disease-relevance" of the path between a node and disease nodes by considering not only the number of links that reach to the disease-associated node but also number of disease-associated nodes that are included in the path. Thus, we modify the length (weight) of the links in shortest path algorithm such that the links connecting seed nodes are shorter than the links connecting non-seed nodes. Formally the score of a node, u, is defined as: where d(u,v) is the shortest path length between nodes u and v with weighted edges of graph G(V,E,f). 
The graph is defined by nodes V, edges E, and the edge weight mapping function, f, where f is defined as . The weight f(i,j) is given by the multiplication of edge score and average of the initial scores of both nodes as follows: This definition implies that the edge is short when the scores of the nodes forming the edge are high (e.g. when they are seeds) and long otherwise. NetZcore assesses the relevance of a node for a given phenotype by normalizing scores of nodes in a network with respect to a set of random networks with similar topology. Intuitively, NetZcore extends the direct neighborhood approach, where all the neighbors of the node contribute to the relevance of the node, to a normalized direct neighborhood. It highlights the relevance of the node compared to the background distribution of the relevance of neighboring nodes (using random networks). The score of a node is calculated as the average of the scores of its neighboring nodes. This score is then normalized using the z-score formula: where and are the mean and standard deviation of the distribution of scores in a set of random networks with the same topology as the original graph. Networks with the same topology are generated such that a node u having degree d is swapped with another node v in the network with the same degree d. In this study, we use a set of 100 random networks. The process of calculating node scores based on the neighbor scores using random networks is repeated by a number of times (iterations) specified by the user in order to propagate the information along the links of the network. The iteration number (k) varies from 1 to a maximum (MaxZ). MaxZ is a specific parameter of the method, and scorek(u) at iteration k is calculated as: where in a graph G(V,E,f) with nodes V, edges E, Nb(u) is the set of neighbors of node u, and f(u,v) = weight(u,v) is an edge weight mapping function. Note that, NetZcore incorporates the statistical adjustment method suggested by Erten and colleagues into the scoring by both normalizing and propagating scores at each iteration . NetScore is based on the propagation of information through the nodes in the network by considering multiple shortest paths from the source of information to the target and ignoring all other paths between them. To calculate the information passed through all the shortest paths in between two nodes, NetScore uses a message-passing scheme such that each node sends its associated information as a message to the neighbors and then iteratively to their neighbors (pseudo-code is given in Figure S3). Each message contains the node identity of the emitter and the path weight (defined as the multiplication of edge weights of the path that the message has traveled). Messages are stored in each node so that only the first messages arriving from a node are considered (i.e. the messages arriving through all the shortest paths from that node). At the end of each iteration, the score of a node is defined as the average score for the messages received. The score carried by a message is calculated as the score of the emitter multiplied by the path weight. Thus, at iteration k, a node has the score of the nodes reaching it from shortest paths of length k (more than once if multiple shortest paths exist) weighted by the edge weights in these paths. Considering that storing all the messages coming from the k-neighborhood introduces a memory and time penalty, we restrict the number of iterations during score calculation to a maximum (MaxS). 
To cover the whole diameter of the network, we repeat the scoring with updated scores after emptying the message arrays (resetting the node scores with the scores accumulated in the last iteration). Therefore, in addition to the number of iterations (MaxS), NetScore uses the number of repetitions (NR) as an additional parameter of the algorithm. NetCombo combines NetScore, NetShort and NetZcore in a consensus scheme by averaging the normalized score of each prioritization method. The normalized score of a prioritization method for a node n is calculated using the distribution of scores with this method. The mean of the scores of all nodes prioritized by this method is subtracted from the score of node n and then divided by the standard deviation of the distribution. In addition to the four methods above, four state-of-the-art algorithms have been included in GUILD for prediction performance comparison purposes. These methods are PageRank with priors (as used in ToppNet), Functional Flow, Random walk with restart and Network propagation. See Methods S1 for the details of the implementation of these methods. PageRank with priors has recently been proven to be superior to available topology-based prioritization methods. The method based on random walk with restart proposed by Kohler et al. and the propagation algorithm by Vanunu et al. are both conceptually similar to PageRank with priors and differ in the way that they incorporate link weights (edge scores). We also apply Functional Flow, a global network topology-based method that originally addressed the functional annotation problem. To evaluate the prioritization methods, we used five-fold cross validation on the three gene-phenotype annotation data sets mentioned above. Proteins known to be associated with a phenotype (seeds) were split into five groups; four of them were used as seeds for the prioritization methods and the remaining group was used to evaluate the predictions. This process was repeated five times, changing the group for evaluation each time. The area under the ROC curve (AUC) and sensitivity were averaged over the five folds. These averages and their standard deviations were used to assess the quality of the predictions and compare the methods. A ROC (receiver operating characteristic) curve plots the true positive rate (sensitivity) against the false positive rate (1-specificity) while the threshold for considering a prediction as a positive prediction is varied. The AUC is the area under this plot and corresponds to the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative one. The ROCR package was used to calculate these performance metrics, and the selection of positive and negative instance scores is explained in the next paragraph. In the context of functional annotation and gene-phenotype association studies, obtaining negative data (proteins/genes that have no effect on a disease, disorder, or phenotype) is a challenge. We tackled this problem with an alternative procedure. First, all proteins not associated with a particular disease (or phenotype) were treated as potential negatives. Then, we used a random sampling (without replacement) of the potential negatives to calculate an average score. This score was defined as the score of a negative instance. We calculated as many scores of negative instances as positive instances (seeds) in the evaluation set.
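The NetCombo combination step described above is simple to sketch (illustrative code of my own, assuming each method returns a dict mapping nodes to scores):

```python
import numpy as np

def z_normalize(scores):
    """Shift a method's score distribution to zero mean and unit variance."""
    values = np.array(list(scores.values()), dtype=float)
    mu, sigma = values.mean(), values.std()
    sigma = sigma if sigma > 0 else 1.0
    return {node: (s - mu) / sigma for node, s in scores.items()}

def netcombo(method_scores):
    """Average the z-normalized scores of several prioritization methods,
    e.g. netcombo([netscore_scores, netshort_scores, netzcore_scores])."""
    normalized = [z_normalize(s) for s in method_scores]
    return {u: float(np.mean([s[u] for s in normalized]))
            for u in normalized[0]}
```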
We ensured that each of the potential negatives was included in one of the random samples by setting the sample size equal to the number of all potential negatives divided by the number of seeds. Using this procedure, we had the same number of positive and negative scores, and the probability associated with choosing a positive instance by chance was 0.5. We used the aforementioned data sets for aneurysm and breast cancer to optimize the initial scores of seeds and non-seeds and the following parameters of the prioritization methods: MaxZ for NetZcore, MaxF for Functional Flow, and MaxS and NR for NetScore. For each of these parameters, the values that resulted in the largest average five-fold cross validation AUC were selected. The optimal values for initial scores of seeds and non-seeds were identified as 1.00 and 0.01 respectively, among the values we tested (1.00 or the text mining score associated with the seed, for seeds; and 0.01, 1.0e-3, 1.0e-5 or 0, for non-seeds). The number of iterations for NetZcore (MaxZ) and Functional Flow (MaxF) was 5. In the case of Functional Flow, 5 was also the limit specified by the authors. For NetScore, the optimized values were two iterations (MaxS) with three repetitions (NR). To test the significance of AUC differences between a pair of networks, or prioritization approaches, the one-sided Wilcoxon test was used. The alternative hypothesis was that the mean AUC of the network (or prioritization method) under consideration was greater than that of the other network (or prioritization method) under test. No assumption was made regarding the normality of the distribution of AUCs, and AUCs were paired over the variable in question (either network type or prioritization method); thus, a non-parametric paired test was applied. Alpha values were set to 0.05. The values for the samples of the random variable subject to the statistical test are given in the relevant supplementary tables. R software (http://www.r-project.org) was used to compute statistics. We investigated the relationship between the prediction performance of the prioritization methods and the connectivity of seeds in the network. We calculated the average number of neighbor seeds and the average shortest path distances between each pair of seeds for each phenotype as in Navlakha and Kingsford. The average number of neighbor seeds (Ns) is given by $N_s = \frac{1}{|S|}\sum_{s \in S}\sum_{u \in Nb(s)} X(u)$, where S is the set of seeds, Nb(s) is the set of nodes interacting with s (neighbors), and X(u) is 1 if u belongs to S and 0 otherwise. Similarly, the average shortest path distance (Ss) is given by $S_s = \frac{1}{|P|}\sum_{(s,v) \in P} d(s,v)$, where P is the set of all seed pairs and d(s,v) is the shortest distance between s and v. We used the weighted bPPI network and the products of AD, diabetes and AIDS seeds according to OMIM to investigate high-scoring nodes (top 1%) obtained with the NetCombo algorithm. We calculated the scores by applying NetCombo and then selected 113 proteins in the network (top 1% of 11250 proteins in the network). These proteins were uniquely mapped to their corresponding gene symbols, yielding 106, 110 and 109 genes for AD, diabetes and AIDS respectively. Next, we counted how many of these genes were listed in the Genetic Association Database (GAD) for each phenotype. GAD is a database that catalogs disease-gene associations curated from genetic association studies and collects findings of low significance in addition to those with high significance.
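The two seed-connectivity statistics described above can be computed directly with networkx; a short sketch of my own, with the convention that all seeds lie in the largest connected component so that every pairwise distance exists:

```python
import itertools
import networkx as nx

def avg_neighbor_seeds(G, seeds):
    """Ns: average number of seed neighbours per seed node."""
    seeds = set(seeds)
    return sum(sum(1 for u in G.neighbors(s) if u in seeds)
               for s in seeds) / len(seeds)

def avg_seed_distance(G, seeds):
    """Ss: average shortest-path distance over all pairs of seed nodes."""
    pairs = list(itertools.combinations(seeds, 2))
    return sum(nx.shortest_path_length(G, s, v) for s, v in pairs) / len(pairs)

# Example on a toy graph
G = nx.path_graph(6)                       # nodes 0-5 in a line
print(avg_neighbor_seeds(G, [0, 1, 4]))    # 2/3: only 0 and 1 are adjacent seeds
print(avg_seed_distance(G, [0, 1, 4]))     # (1 + 4 + 3) / 3 = 2.66...
```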
We considered only the records in GAD that reported a positive association and merged the entries using the first name of the disease, as we did for the OMIM data set. In this analysis we excluded the seeds (disease-gene associations in OMIM). The p-values shown in Table 3 correspond to the probability of identifying GAD disease-gene associations in the top-ranking portion of the network assuming a hypergeometric model. The level of significance was set to 0.05. For AD, we also checked whether the top-ranking genes covered the expert curated genes implicated in AD pathology reported in Krauthammer et al. We analyzed the GO functional enrichment of the top-ranking genes using the FuncAssociate 2.0 web service. The background consisted of all the genes in the network. A GO term was associated with a gene set if the adjusted p-value associated with the term was lower than 0.05. We used the disease-gene associations in the Comparative Toxicogenomics Database (CTD) to check the biological significance of the scores calculated by the prioritization method for AD, diabetes and AIDS. CTD contains both manually curated disease-gene associations (direct) and inferred disease-gene associations (indirect). Again, the entries were merged using the first name of the disease. The scores of the direct disease-genes, indirect disease-genes and no-association genes (not found in CTD) were grouped as the direct-association group, indirect-association group and no-association group. We tested the difference between the means of the distributions of scores using a one-tailed Student's t-test (assuming a higher score for the direct associations; the alpha value was set to 0.05 as before). Comparison of the significance in prediction performance between prioritization methods. Significance of the differences in average AUC performance (averaged over all interaction networks and disease data sets) is represented as a heatmap. A dark blue color in a cell (i, j) of the heatmap denotes that the p-value associated with the one-sided Wilcoxon test for the comparison of AUCs between the ith and jth methods (where the alternative hypothesis is that the mean of the first is greater than that of the second) is smaller than or equal to 0.05. Ratio of successful predictions among the top 1% scores obtained by each method on each interaction network over all phenotypes of the OMIM, Goh and Chen data sets. The color legend is the same as in Figure 1 in the manuscript. Pseudo-code of the NetScore algorithm. The repetition part is handled inside the first for-loop where message arrays are reset. The inner for-loop goes over the iterations, where only "new" messages are accepted. At the end of each iteration, the score of a node is calculated based on the messages it received. Average AUC of the prioritization methods on each data set of seeds (OMIM, Goh and Chen) using different interaction networks (Goh, Entrez, PPI, bPPI and weighted bPPI). P-values associated with the paired Wilcoxon signed rank test between Network Propagation and our two best prioritization methods on each data set using average AUCs over all networks. AUC of the prioritization methods for each disorder and network. Sensitivity values at top 1% predictions of the prioritization methods for each disorder and network. Five-fold AUC (%) for each method averaged over all diseases within the data set and all interaction networks considering all non-seeds (genes not associated with the diseases) as negatives.
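The hypergeometric model used for the GAD overlap p-values can be sketched with scipy (illustrative only; the GAD gene count and the overlap used below are placeholders of mine, not the numbers behind Table 3):

```python
from scipy.stats import hypergeom

def enrichment_pvalue(population, annotated, selected, overlap):
    """P(X >= overlap) when `selected` genes are drawn from a `population`
    containing `annotated` disease-associated genes (hypergeometric model)."""
    # survival function at overlap - 1 gives P(X >= overlap)
    return hypergeom.sf(overlap - 1, population, annotated, selected)

# 113 top-ranking proteins out of 11250 in the network (as in the text);
# the 500 GAD genes and the overlap of 20 are made-up example numbers.
print(enrichment_pvalue(11250, 500, 113, 20))
```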
The average NetCombo scores (the standard deviation is given in parenthesis) of CTD direct/indirect disease-genes and the genes with no-association in CTD and the p-value associated with the difference between these groups. Top ranking genes in Alzheimer's Disease (AD), diabetes and AIDS identified by NetCombo (the top 1% high scoring genes) using weighted bPPI network and OMIM associations. Functional enrichment of high scoring common genes in NetCombo for AD, diabetes and AIDS. Number and size of the connected components other than the largest connected component (LCC) in the network. Interaction data sets used in the analysis. Number of disease-gene associations covered in each network. Genes used for parameter optimization. Conceived and designed the experiments: EG BO. Performed the experiments: EG. Analyzed the data: EG BO. Contributed reagents/materials/analysis tools: EG. Wrote the paper: EG BO. 1. Altshuler D, Daly MJ, Lander ES (2008) Genetic Mapping in Human Disease. Science 322: 881–888 doi:https://doi.org/10.1126/science.1156409. 2. Broeckel U, Schork NJ (2004) Identifying genes and genetic variation underlying human diseases and complex phenotypes via recombination mapping. The Journal of Physiology 554: 40–45 doi:https://doi.org/10.1113/jphysiol.2003.051128. 3. Hirschhorn JN, Daly MJ (2005) Genome-wide association studies for common diseases and complex traits. Nat Rev Genet 6: 95–108 doi:https://doi.org/10.1038/nrg1521. 4. Wang WY, Barratt BJ, Clayton DG, Todd JA (2005) Genome-wide association studies: theoretical and practical concerns. Nat Rev Genet 6: 109–118. 5. Kann MG (2007) Protein interactions and disease: computational approaches to uncover the etiology of diseases. Brief Bioinform 8: 333–346. 6. Ideker T, Sharan R (2008) Protein networks in disease. Genome Res 18: 644–652. 7. Barabasi A-L, Gulbahce N, Loscalzo J (2011) Network medicine: a network-based approach to human disease. Nat Rev Genet 12: 56–68 doi:https://doi.org/10.1038/nrg2918. 8. Said MR, Begley TJ, Oppenheim AV, Lauffenburger DA, Samson LD (2004) Global network analysis of phenotypic effects: protein networks and toxicity modulation in Saccharomyces cerevisiae. Proc Natl Acad Sci U S A 101: 18006–18011. 9. Wachi SY (2005) Interactome-transcriptome analysis reveals the high centrality of genes differentially expressed in lung cancer tissues. Bioinformatics 21: 4205–4208. 10. Jonsson PF, Bates PA (2006) Global Topological Features of Cancer Proteins in the Human Interactome. Bioinformatics 22: 2291–2297 doi:https://doi.org/10.1093/bioinformatics/btl390. 11. Gandhi TKB, Zhong J, Mathivanan S, Karthick L, Chandrika KN, et al. (2006) Analysis of the human protein interactome and comparison with yeast, worm and fly interaction datasets. Nature Genetics 38: 285–293 doi:https://doi.org/10.1038/ng1747. 12. Lim J, Hao T, Shaw C, Patel AJ, Szabo G, et al. (2006) A protein-protein interaction network for human inherited ataxias and disorders of Purkinje cell degeneration. Cell 125: 801–814. 13. Goh KI, Cusick ME, Valle D, Childs B, Vidal M, et al. (2007) The human disease network. Proc Natl Acad Sci U S A 104: 8685. 14. Lage KK (2007) A human phenome-interactome network of protein complexes implicated in genetic disorders. Nature Biotechnology 25: 309–316. 15. Oti MS (2006) Predicting disease genes using protein-protein interactions. British Medical Journal 43: 691. 16. Pujana MA, Han JD, Starita LM, Stevens KN, Tewari M, et al. 
(2007) Network modeling links breast cancer susceptibility and centrosome dysfunction. Nat Genet 39: 1338–1349. 17. Wu X, Jiang R, Zhang MQ, Li S (2008) Network-based global inference of human disease genes. Mol Syst Biol 4: 189. 18. Xu JL (2006) Discovering disease-genes by topological features in human protein-protein interaction network. Bioinformatics 22: 2800–2805. 19. Kohler S, Bauer S, Horn D, Robinson PN (2008) Walking the Interactome for Prioritization of Candidate Disease Genes. The American Journal of Human Genetics 82: 949–958 doi:https://doi.org/10.1016/j.ajhg.2008.02.013. 20. Franke L, van Bakel H, Fokkens L, de Jong ED, Egmont-Petersen M, et al. (2006) Reconstruction of a functional human gene network, with an application for prioritizing positional candidate genes. Am J Hum Genet 78: 1011–1025. 21. Dezso Z, Nikolsky Y, Nikolskaya T, Miller J, Cherba D, et al. (2009) Identifying disease-specific genes based on their topological significance in protein networks. BMC Syst Biol 3: 36. 22. Vanunu O, Magger O, Ruppin E, Shlomi T, Sharan R (2010) Associating genes and protein complexes with disease via network propagation. PLoS computational biology 6: e1000641. 23. Chen J, Aronow B, Jegga A (2009) Disease candidate gene identification and prioritization using protein interaction networks. BMC bioinformatics 10: 73. 24. Navlakha S, Kingsford C (2010) The Power of Protein Interaction Networks for Associating Genes with Diseases. Bioinformatics 26: 1057–1063 doi:https://doi.org/10.1093/bioinformatics/btq076. 25. Erten S, Bebek G, Ewing RM, Koyuturk M (2011) DADA: Degree-Aware Algorithms for Network-Based Disease Gene Prioritization. Bio Data mining 4: 19. 26. Aerts S, Lambrechts D, Maity S, Van Loo P, Coessens B, et al. (2006) Gene prioritization through genomic data fusion. Nat Biotech 24: 537–544 doi:https://doi.org/10.1038/nbt1203. 27. Ala U, Piro RM, Grassi E, Damasco C, Silengo L, et al. (2008) Prediction of human disease genes by human-mouse conserved coexpression analysis. PLoS Comput Biol 4: e1000043. 28. Lee I, Blom UM, Wang PI, Shim JE, Marcotte EM (2011) Prioritizing candidate disease genes by network-based boosting of genome-wide association data. Genome Res advance online article. 29. Linghu B, Snitkin ES, Hu Z, Xia Y, Delisi C (2009) Genome-wide prioritization of disease genes and identification of disease-disease associations from an integrated human functional linkage network. Genome Biol 10: R91. 30. Perez-Iratxeta C, Bork P, Andrade MA (2002) Association of genes to genetically inherited diseases using data mining. Nature genetics 31: 316–319. 31. Aragues R, Sander C, Oliva B (2008) Predicting cancer involvement of genes from heterogeneous data. BMC Bioinformatics 9: 172. 32. Kitsios GD, Zintzaras E (2009) Genomic Convergence of Genome-wide Investigations for Complex Traits. Annals of human genetics 73: 514–519. 33. Akula N, Baranova A, Seto D, Solka J, Nalls MA, et al. (2011) A Network-Based Approach to Prioritize Results from Genome-Wide Association Studies. PloS one 6: e24220. 34. Carlson CS, Eberle MA, Kruglyak L, Nickerson DA (2004) Mapping complex disease loci in whole-genome association studies. Nature 429: 446–452. 35. White S, Smyth P (2003) Algorithms for estimating relative importance in networks. Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining. KDD '03. New York, NY, USA: ACM. pp. 266–275. Available:http://doi.acm.org/10.1145/956750.956782. Accessed 22 February 2012. 36. 
Nabieva E, Jim K, Agarwal A, Chazelle B, Singh M (2005) Whole-proteome prediction of protein function via graph-theoretic analysis of interaction maps. Bioinformatics 21: i302–i310 doi:https://doi.org/10.1093/bioinformatics/bti1054. 37. von Mering C, Jensen LJ, Kuhn M, Chaffron S, Doerks T, et al. (2007) STRING 7–recent developments in the integration and prediction of protein interactions. Nucleic Acids Res 35: D358–D362. 38. Davis AP, King BL, Mockus S, Murphy CG, Saraceni-Richards C, et al. (2010) The Comparative Toxicogenomics Database: update 2011. Nucleic Acids Research 39: D1067–D1072 doi:https://doi.org/10.1093/nar/gkq813. 39. Becker KG, Barnes KC, Bright TJ, Wang SA (2004) The Genetic Association Database. Nature Genetics 36: 431–432 doi:https://doi.org/10.1038/ng0504-431. 40. Woo HN, Park JS, Gwon AR, Arumugam TV, Jo DG (2009) Alzheimer's disease and Notch signaling. Biochem Biophys Res Commun 390: 1093–1097. 41. Wellen KE, Hotamisligil GS (2005) Inflammation, stress, and diabetes. Journal of Clinical Investigation 115: 1111–1119 doi:https://doi.org/10.1172/JCI25102. 42. Appay V, Sauce D (2007) Immune activation and inflammation in HIV-1 infection: causes and consequences. The Journal of Pathology 214: 231–241 doi:https://doi.org/10.1002/path.2276. 43. Yang SY, He XY, Miller D (2007) HSD17B10: a gene involved in cognitive function through metabolism of isoleucine and neuroactive steroids. Mol Genet Metab 92: 36–42. 44. He G, Luo W, Li P, Remmers C, Netzer WJ, et al. (2010) Gamma-secretase activating protein is a therapeutic target for Alzheimer's disease. Nature 467: 95–98. 45. Kim M, Suh J, Romano D, Truong MH, Mullin K, et al. (2009) Potential late-onset Alzheimer's disease-associated mutations in the ADAM10 gene attenuate $\alpha$-secretase activity. Human molecular genetics 18: 3987. 46. Krauthammer M, Kaufmann CA, Gilliam TC, Rzhetsky A (2004) Molecular triangulation: bridging linkage and molecular-network information for identifying candidate genes in Alzheimer's disease. Proc Natl Acad Sci U S A 101: 15148–15153. 47. Li Y, Patra JC (2010) Genome-wide inferring gene–phenotype relationship by walking on the heterogeneous network. Bioinformatics 26: 1219–1224. 48. Stelzl U, Worm U, Lalowski M, Haenig C, Brembeck FH, et al. (2005) A human protein-protein interaction network: a resource for annotating the proteome. Cell 122: 957–968. 49. Rual JF, Venkatesan K, Hao T, Hirozane-Kishikawa T, Dricot A, et al. (2005) Towards a proteome-scale map of the human protein-protein interaction network. Nature 437: 1173–1178. 50. Garcia-Garcia J, Guney E, Aragues R, Planas-Iglesias J, Oliva B (2010) Biana: a software framework for compiling biological interactions and analyzing networks. BMC Bioinformatics 11: 56 doi:https://doi.org/10.1186/1471-2105-11-56. 51. Hamosh A, Scott AF, Amberger JS, Bocchini CA, McKusick VA (2005) Online Mendelian Inheritance in Man (OMIM), a knowledgebase of human genes and genetic disorders. Nucleic acids research 33: D514–D517. 52. Chen J, Xu H, Aronow BJ, Jegga AG (2007) Improved human disease candidate gene prioritization using mouse phenotype. BMC Bioinformatics 8: 392. 53. Hofmann-Apitius M, Fluck J, Furlong L, Fornes O, Kolářik C, et al. (2008) Knowledge environments representing molecular entities for the virtual physiological human. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 366: 3091–3110. 54. Sing T, Sander O, Beerenwinkel N, Lengauer T (2005) ROCR: Visualizing Classifier Performance in R. 
Bioinformatics 21: 3940–3941 doi:https://doi.org/10.1093/bioinformatics/bti623. 55. Berriz GF, Beaver JE, Cenik C, Tasan M, Roth FP (2009) Next generation software for functional trend analysis. Bioinformatics 25: 3043–3044.
CommonCrawl
I'm pretty sure it is nonsense. I, however, would appreciate comments and your thoughts. No. Just the odd equation that happened. I know it is absurd, but... I think this has more to do with intuitive sense, more than mathematics or science...although intuitive insights are common to both art and science. I was hoping for some understanding of the equation in conventional terms. You'll find more details [on this wikipedia page](http://en.wikipedia.org/wiki/Ordinal_arithmetic) but in the most common way of modelling things, basically $\infty + 1 \ne \infty$, but you do have that $1+\infty = \infty$... I'd love to spend more time discussing this, but unfortunately I'm already behind on my contribution to a paper... (Edit: the "most common way of modelling" above was an attempt to avoid having to talk about the distinction between ordinal and cardinal infinities. The above is in terms of ordinals, and the non-commutativity of ordinal transfinite addition might have been intriguing and make readers want to find out more about the weird world. Todd has a great explanation about cardinal infinities below.) As Jacob says, Azimuth could definitely do with some interesting art. All of my art for the past four years is digital and can be found on my Facebook site: R Henry Nigl, there are some 2500 images 'abstractions' there open to the public, additionally, another 3000 images called 'FACES' available only to friends...all are freely available, that is free without cost for non-commercial use...attribution is necessary for all media presentation, Creative Commons licenses apply. Take your pick, let me know if you wish to download. Thank you for your input regarding the equation. There's a television commercial currently playing in the US (I think for AT&T) which I almost invariably find delightful: a man is seated at a table with a group of young children and asks them what is the biggest number they can think of. The first girl says, "a billion trillion zillion!", and the man says, "That's pretty big" and (turning to the next child), "how about you?" -- "Okay... How about you?" -- (To the next child) "Can you top that?" -- "Actually, we were looking for infinity plus infinity. Sorry." -- (The first girl) "What about infinity times infinity?!" Maybe you should just say "Pfft, he doesn't know about ordinals versus cardinals!" Thus planting a seed of curiosity (if done right). But you've probably realized you need to resist most of those urges to explain math... since I'm not a parent, I haven't learned that skill. Well, except for my wife.
Thanks John...But the Wikipedia entries for 'ordinals' and 'cardinals' gave a pretty good overview...not something I read daily, but I do have some understanding of what is going on here. Why would you discourage explanation? I have many other questions relevant to my personal curiosities involving natural and synthetic systems and formulations. BTW, I am seventy years old...not exactly a child...as your post might suggest to some. I suspect ∞ is a nominal number, just a name, and does not exist as either a cardinal or ordinal number in space-time. However, without 'space' and 'time', 'infinity' would be easily perceived...even understood, as a fish understands water. I have always considered 'infinity' as a simple identification that with efforts to describe mathematically seems to be about as perplexing as the 'Ship of Thebes Paradox' and of course any paradox, the study of which can be most frustrating and treacherous when treating a paradox, and thus 'infinity', as a number. Regarding the 'Ship of Thebes' paradox: it, the 'ship', always remains the same identity, no matter how many planks are changed or even if totally rebuilt, it will always be called 'The Ship of Thebes'. In the end I arrived at the notion, for myself at least, that the perception of 'paradox' is keyed to movement in space and thus time, thus, as all things concrete and all of consciousness, being in constant motion through space-time, so too is 'paradox', it is essentially language non-sense and so too is 'infinity'. To illustrate, 'infinity' as a 'number' used in any way other than as a simple identifier, that is as a symbol, results in a paradox. I hope you understand that this discussion is not relevant to the main aims of the Azimuth Project, which are listed [here](http://www.azimuthproject.org/azimuth/show/HomePage). I reviewed several Forum discussions and believed it conformed, I have read the mission guidelines and may have misunderstood those aims...it seemed topics were all over the place. In any event I am happy for the input I received. The Azimuth Forum is about environmental problems - and mathematics, science and engineering that might help solve those problems.
Got it, John...thank you greatly...it's not a problem, just a misunderstanding on my part, I've been a fan of yours for years and honored to be able to access this site...I think I started reading your posts actually way back when the internet was just text, I dunno when that was...but you maintained a mathematics forum bulletin board, even then (how do you find the time?)...my commercial background (re. how I maintained income over the years) is in architectural design, advertising and marketing...mathematics, especially the history and philosophy of, is for me simply a 'hobby'. Thank you. Okay, no problemo. I started running "sci.physics.research" sometime around 1993, I think.
CommonCrawl
Abstract: This paper presents a comprehensive analysis of the security of the Yi-Tan-Siew chaotic cipher proposed in [IEEE TCAS-I 49(12):1826-1829 (2002)]. A differential chosen-plaintext attack and a differential chosen-ciphertext attack are suggested to break the sub-key $K$, under the assumption that the time stamp can be altered by the attacker, which is reasonable in such attacks. Also, some security problems with the sub-keys $\alpha$ and $\beta$ are clarified, from both theoretical and experimental points of view. Further analysis shows that the security of this cipher is independent of the use of the chaotic tent map, once the sub-key $K$ is removed via the proposed differential chosen-plaintext attack.
CommonCrawl
This matrix equation has the trivial solution $x = 0$. In general the equation has either exactly one solution or infinitely many solutions. Now recall from linear algebra that a matrix equation of the form $Ax = 0$ has infinitely many solutions if and only if $\det A = 0$. For the matrix equation above, this means that $(A - \lambda I)x = 0$ has infinitely many solutions provided that $\det (A - \lambda I) = 0$. Definition: Let $A$ be an $n \times n$ matrix and consider the matrix equation $Ax = \lambda x$. The Characteristic Polynomial for the matrix $A$ is the polynomial $\det (A - \lambda I)$; setting it equal to zero gives the characteristic equation. The roots of the characteristic polynomial are called the Eigenvalues of $A$. In the first example there are two eigenvalues for $A$, namely $\lambda_1 = 1$ and $\lambda_2 = -5$. In the second example there are two distinct eigenvalues of $A$: $\lambda_1 = 3$ is an eigenvalue with multiplicity $2$, and $\lambda_2 = 5$ is an eigenvalue with multiplicity $1$.
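As a quick numerical illustration (the matrix below is my own example, chosen to have eigenvalues 1 and -5; it is not necessarily the matrix from the worked example above):

```python
import numpy as np

# A 2x2 matrix with eigenvalues 1 and -5 (illustrative choice).
A = np.array([[-2.0, 3.0],
              [3.0, -2.0]])

# Eigenvalues, i.e. the roots of det(A - lambda*I) = 0
eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)            # 1 and -5, in some order

# Verify that det(A - lambda*I) vanishes at each eigenvalue
for lam in eigenvalues:
    print(np.linalg.det(A - lam * np.eye(2)))   # approximately 0

# Coefficients of the characteristic polynomial det(lambda*I - A):
print(np.poly(A))             # [ 1.  4. -5.], i.e. lambda^2 + 4*lambda - 5
```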
CommonCrawl
Abstract: A new class $\mathcal H_q(\alpha, b, n)$ of Bazilevič functions of type $\alpha$ defined in terms of quasi-subordination is presented. We obtain the coefficient estimates as well as the Fekete-Szegő inequality of functions that are related to the new class. The improved results for the associated classes involving subordination and majorization are briefly discussed. Keywords: Analytic Function, Subordination, Fekete-Szegő Inequality, Coefficient Estimates, Majorization. S Olatunji, E J Dansu and A Abidemi, On a class of Bazilevic functions associated with quasi-subordination, International Journal of Advances in Mathematics, Volume 2017, Number 2, Pages 33-39, 2017.
CommonCrawl
In the classical Gaussian analysis the Clark-Ocone formula can be written in the form $$ F=\mathbf EF+\int\mathbf E_t\partial_t FdW_t, $$ where the function (the random variable) $F$ is square integrable with respect to the Gaussian measure and differentiable by Hida; $\mathbf E$ denotes the expectation; $\mathbf E_t$ denotes the conditional expectation with respect to the full $\sigma$-algebra that is generated by a Wiener process $W$ up to the time point $t$; $\partial_\cdot F$ is the Hida derivative of $F$; $\int\circ (t)dW_t$ denotes the Itô stochastic integral with respect to the Wiener process. This formula has applications in stochastic analysis and in financial mathematics. In this paper we generalize the Clark-Ocone formula to spaces of test and generalized functions of the so-called Meixner white noise analysis, in which instead of the Gaussian measure one uses the so-called generalized Meixner measure $\mu$ (depending on parameters, $\mu$ can be the Gaussian, Poissonian, Gamma measure etc.). In particular, we study properties of integrands in our (Clark-Ocone type) formulas. Using a general approach that covers the cases of Gaussian, Poissonian, Gamma, Pascal and Meixner measures on an infinite-dimensional space, we construct a general integration by parts formula for analysis connected with each of these measures. Our consideration is based on the constructions of the extended stochastic integral and the stochastic derivative that are connected with the structure of the extended Fock space. We introduce and study Hida-type stochastic derivatives and stochastic differential operators on the parametrized Kondratiev-type spaces of regular generalized functions of Meixner white noise. In particular, we study the interconnection between stochastic integration and differentiation. Our research is based on the general approach that covers the Gaussian, Poissonian, Gamma, Pascal and Meixner cases. We introduce an extended stochastic integral and construct elements of the Wick calculus on the Kondratiev-type spaces of regular and nonregular generalized functions, study the interconnection between the extended stochastic integration and the Wick calculus, and consider examples of stochastic equations with Wick-type nonlinearity. Our research is based on the general approach that covers the Gaussian, Poissonian, Gamma, Pascal and Meixner analyses. We introduce and study a generalized stochastic derivative on the Kondratiev-type space of regular generalized functions of Gamma white noise. Properties of this derivative are quite analogous to the properties of the stochastic derivative in the Gaussian analysis. As an example we calculate the generalized stochastic derivative of the solution of some stochastic equation with Wick-type nonlinearity.
CommonCrawl
Then we can represent this evolution via a labeled rooted binary tree: the root represents the root life-form at time 0, and each branching of the tree represents a different evolution. The labels mark which life-form is which. Of course this model isn't perfect (I can't find the word for it but it's a thing where two different species evolve separately from the same ancestor, then meet again and make one species. If we were to represent this information in a graph, it'd make a cycle and not be a tree), but it's been fruitful. The rooted binary tree of the wikipedia picture: node 0 is the root life-form, then 1-7 are the life-forms at our current time. Now let's mathify this. We'd like to encode the evolutionary information into our tree. We've already decided that all life-forms will end at the same time (now), so if we just assign lengths to each of the non-leaf edges this will automatically determine the lengths of the leaf edges. A leaf in a tree is a vertex with only one neighbor, and we call the edge leading to that vertex a leaf-edge. Let's call the non-leaf edges interior edges. In the picture above, we have 5 non-leaf edges, which determine a tree with 7 leaves. Using this exact configuration of labels and edges, we have five degrees of freedom: we can make those interior edges whatever lengths we want, as long as they are positive numbers. So in math-terms, the set of phylogenetic trees (aka rooted, binary, labeled trees) in this configuration forms a positive orthant of $\mathbb{R}^5$. You can smoothly change any one of the edges to a slightly longer or shorter length, and still have a phylogenetic tree with the same combinatorial data. This is from the paper I'm writing, but it does show that in 3D, there are 8 orthants cut by the three axes (red dot is the origin). The pink box represents a single orthant. Why aren't they different? Because they encode the same data for each life-form: reading from node 0 we see that first 1 branches off, then 2, then 3 and 4 in all three cases. There's some combinatorics here with partitions that you can do (one can label a tree with a set of partitions). However, changing the labels so that first 2 branches off, then 1, then 3 and 4 will be a different phylogenetic tree. In fact I can smoothly go from one to the other in the space that we're creating: first I shrink the length of the green edge below to zero, which takes us to the middle tree (not binary!), and then extend the blue edge. There are 15 different orthants glued together in this picture, because the number of labelled rooted binary trees with n leaves is (2n-3)!!. The double !! means you only multiply the odds, a.k.a. (2n-3)(2n-5)(2n-7)… This is also known as Schroeder's fourth problem, which as far as I can tell was open for 100 years. Pretty cool! If you truncate BHV(n) so it's not infinite (just pick some compact bound), then it forms a nonpositively curved cube complex, and we love those! CAT(0) cube complexes are great. I haven't blogged too much about them (first terrible post and then those truncated Haglund notes) but they are the basis of all that I do and the number one thing I talk about when I give math talks. Whoops! The gist is that you glue cubes together in not-terrible ways, and then the resulting complex has great and fun properties (like you can cut it in half the way you want to). That's about all I have to say about this!
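As a quick sanity check on that (2n-3)!! count (a small snippet of my own, not from the post):

```python
def num_tree_topologies(n_leaves):
    """(2n-3)!! = 3 * 5 * ... * (2n-3): the number of labelled rooted
    binary trees with n_leaves leaves, i.e. the number of orthants of BHV(n)."""
    count = 1
    for k in range(3, 2 * n_leaves - 2, 2):
        count *= k
    return count

for n in range(3, 8):
    print(n, num_tree_topologies(n))
# 3 -> 3, 4 -> 15 (the 15 orthants in the picture), 5 -> 105,
# 6 -> 945, 7 -> 10395 (the 7-leaf example from the start of the post)
```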
Gillian is working on some stuff about putting a probability measure on BHV(n) [you can't do it with certain conditions], embedding it into a small enough Euclidean space that still preserves some of its features, and finding an isometrically embedded copy of the phylogenetic tree inside BHV(n) instead of just the coordinate point. Also, fun fact to prove to yourself (actually please don't scoop my friend), find the automorphism group of BHV(n)! It's just the symmetric group on some number that has to do with n (n+1 or something like that; I can't remember and didn't take notes). Again, the main reference for this is the seminal paper that should also be accessible as it's meant for biologists and statisticians. What I'm reading right now: Special Cube Complexes, a 2008 paper by Frederic Haglund and Dani Wise. Recently Ian Agol proved the Virtual Haken Conjecture, which was a Big Deal in math (this link is LONG but a very well written non-math-person friendly summary of 30 years of math). In fact, one of my professors from undergraduate, Jesse Johnson, wrote a nice little blog post on what it might mean for the future of low dimensional topology. Basically, Agol used this special cube complex stuff to prove this Big Deal, which means that we might be able to use these to prove Lots of Big Deal and Little Deal theorems. So let's get into what these guys are. Update: I just found out that it's my turn to give the talk in our little colloquium this week. There's seven of us, four are my advisor's students (I guess he's not technically my adviser yet) and three are in related fields. So every two months or so, you have to give a half hour talk on some math you're learning about. We aren't supposed to talk about our research, but I think I get a pass since it's still my first year. So this post is a prelude to my talk! A cube complex is an object built by gluing a whole bunch of Euclidean cubes together. So a one-dimensional cube complex is built by gluing a bunch of lines together; that is, it's a (mathematical) graph. And a two-dimensional cube complex is built by gluing a bunch of squares together. The gluings can happen in funky ways though, and special cube complexes are objects where these pathologies don't happen. We'll define these pathologies in terms of hyperplanes. So if I have a square $[-1,1]\times[-1,1]$, I'll have two hyperplanes running through it: one at $0\times[-1,1]$ and one at $[-1,1]\times 0$. In the picture below, which is taken from the 11th page of this paper, the red lines are hyperplanes, and the gray lines represent the cubes they're cutting through. To be special, we need our cube complex to a) have no self-intersecting hyperplanes, b) have no one-sided hyperplanes, c) have no directly self-osculating hyperplanes, and e) have no two hyperplanes that inter-osculate. Turns out that case d) hyperplanes indirectly self-osculate is OK. Pathologies of cube complexes (these guys are not special, but don't tell their parents I said that). Really quick, notice that a cube complex is special if and only if its 2-skeleton (the part made up of filled-in squares) is. That's why we can just use this picture. So what's so special about special cube complexes? The ultimate idea is that given a cube complex, if none of these funky things happens, I'll be able to cut along the hyperplanes and have nice things happen. That's how we get to the Big Deal.
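One concrete way to play with hyperplanes in a square complex (my own encoding, not from the Haglund-Wise paper): a hyperplane is determined by the class of edges it crosses, where two edges are identified whenever they are opposite sides of some square. A tiny union-find recovers these classes:

```python
def find(parent, e):
    while parent[e] != e:
        parent[e] = parent[parent[e]]   # path compression
        e = parent[e]
    return e

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[ra] = rb

def hyperplane_classes(edges, squares):
    """squares: 4-tuples (e0, e1, e2, e3) of edge names in cyclic order,
    so that e0/e2 and e1/e3 are opposite sides of the square."""
    parent = {e: e for e in edges}
    for e0, e1, e2, e3 in squares:
        union(parent, e0, e2)   # the midcube dual to e0 also crosses e2
        union(parent, e1, e3)
    classes = {}
    for e in edges:
        classes.setdefault(find(parent, e), set()).add(e)
    return list(classes.values())

# A torus built from one square with opposite sides glued (edges a and b):
print(hyperplane_classes(["a", "b"], [("a", "b", "a", "b")]))
# -> [{'a'}, {'b'}]: two hyperplanes, one dual to each edge.
```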
But that's neither here nor there; this post is about a Smaller Deal: that a cube complex is special if and only if it corresponds to a right angled Artin group, that is, that there's some graph so that our cube complex has a local isometry into the Salvetti complex of that graph. Pub crawl that winds up back at the first bar invite idea: be there or be a torus! So far we have the standard 2-complex for the group, which is the definition of a right angled Artin group. Now to make the Salvetti complex, we attach an n-torus for every n-cycle in our graph. So if there was a triangle in our graph, we get a corresponding 3-torus in the Salvetti complex. And the fundamental group of our Salvetti complex is that right angled Artin group. To restate our deal, we're saying that special cube complexes always have some graph so that we can see our cube complex somewhere in the corresponding Salvetti complex, and everything still looks nice. More formally and rigidly, we're saying that X is special if and only if there's an immersion from X into some Salvetti complex, which is a local isometry on the 2-skeleton. Proof in one direction: Suppose I've got a local isometry from the 2-skeleton of my cube complex X to a Salvetti complex. Since the Salvetti complex is special from how we built it, and local isometries keep things tidy (you can't uncross intersecting hyperplanes, for instance), that means my 2-skeleton is also special. So from our remark above, X is special. Proof in the other direction: Say my cube complex is special. Then make a graph with vertices being the hyperplanes of X, and edges connecting intersecting hyperplanes. Now make the Salvetti complex of this graph. We can map X into the complex by sending an edge to the vertex of the hyperplane that crosses it, and then extend the rest of the map. It's a hop and skip (no jumping allowed) that this map is, in fact, a local isometry on the level of 2-skeleta. Right angled Artin groups have really nice properties and are fun stuff, so this little theorem can lead to a whole bunch of conclusions. Phew, first math blog post. I'll get better at these, I promise. I have to figure out what audience I'm writing for.
CommonCrawl
Two-dimensional turbulent flows host two disparate cascades: of enstrophy and of energy. The phenomenological theory of turbulence, which provides the theoretical underpinning of these cascades, assumes local isotropy. This assumption has been amply verified via computational, experimental and field data amassed to date. Local isotropy mandates that the streamwise ($u$) and transverse ($v$) velocity fluctuations partake in the same cascade; consequently, the attendant spectral exponents ($\alpha_u$ and $\alpha_v$) of the turbulent energy spectra are the same, $\alpha_u = \alpha_v$. Here we report experiments in soap-film flows where $\alpha_u$ corresponds to the energy cascade, but concurrently $\alpha_v$ corresponds to the enstrophy cascade, as if two mutually independent turbulent fields of disparate dynamics were concurrently active within the flow. This species of turbulent energy spectra, which we term the Janus spectra, has never been observed or predicted theoretically. Remarkably, the tools of phenomenological theory can be invoked to elucidate this manifestly anisotropic flow.
CommonCrawl
Abstract: In the limit of large central charge $c$ the 4-point Virasoro conformal block becomes a hypergeometric function. It is represented by a sum of chiral Nekrasov functions, which can also be explicitly evaluated. In this way the known proof of the AGT relation is extended from a special to a generic set of external states, but in the special limit of $c=\infty$.
CommonCrawl
Les Houches 2017: Physics at TeV Colliders New Physics Working Group Report. In Les Houches 2017: Physics at TeV Colliders New Physics Working Group Report. 2018. URL: http://lss.fnal.gov/archive/2017/conf/fermilab-conf-17-664-ppd.pdf, arXiv:1803.10379. Diogo Buarque Franzosi, Federica Fabbri, and Steffen Schumann. Constraining scalar resonances with top-quark pair production at the LHC. JHEP, 03:022, 2018. arXiv:1711.00102, doi:10.1007/JHEP03(2018)022. Mathieu Buchkremer, Giacomo Cacciapaglia, Aldo Deandrea, and Luca Panizzi. Model Independent Framework for Searches of Top Partners. Nucl. Phys., B876:376–417, 2013. arXiv:1305.4172, doi:10.1016/j.nuclphysb.2013.08.010. Andy Buckley, Jonathan Butterworth, Leif Lonnblad, David Grellscheid, Hendrik Hoeth, and others. Rivet user manual. Comput.Phys.Commun., 184:2803–2819, 2013. arXiv:1003.0694, doi:10.1016/j.cpc.2013.05.021. Andy Buckley and others. General-purpose event generators for LHC physics. Phys. Rept., 504:145–233, 2011. arXiv:1101.2599, doi:10.1016/j.physrep.2011.03.005. D. Buskulic and others. Quark and gluon jet properties in symmetric three jet events. Phys. Lett., B384:353–364, 1996. Giorgio Busoni and others. Recommendations on presenting LHC searches for missing transverse energy signals using simplified $s$-channel models of dark matter. Technical Report, CERN, 2016. arXiv:1603.04156. J. Butterworth and Herbert K. Dreiner. R-parity violation at HERA. Nucl. Phys., B397:3–34, 1993. arXiv:hep-ph/9211204, doi:10.1016/0550-3213(93)90334-L. J. M. Butterworth. BSM constraints from model-independent measurements: A Contur Update. In 5th Biennial Workshop on Discovery Physics at the LHC (Kruger2018) Hazyview, Mpumulanga, South Africa, December 3-7, 2018. 2019. arXiv:1902.03067. J. M. Butterworth, J. P. Couchman, B. E. Cox, and B. M. Waugh. Ktjet: a c++ implementation of the k(t) clustering algorithm. Comput. Phys. Commun., 153:85–96, 2003. arXiv:hep-ph/0210022. J. M. Butterworth, John R. Ellis, and A. R. Raklev. Reconstructing sparticle mass spectra using hadronic decays. JHEP, 05:033, 2007. arXiv:hep-ph/0702150. J. M. Butterworth, Jeffrey R. Forshaw, and M. H. Seymour. Multiparton interactions in photoproduction at hera. Z. Phys., C72:637–646, 1996. arXiv:hep-ph/9601371. Jonathan M. Butterworth, Adam R. Davison, Mathieu Rubin, and Gavin P. Salam. Jet substructure as a new Higgs search channel at the LHC. Phys.Rev.Lett., 100:242001, 2008. arXiv:0802.2470, doi:10.1103/PhysRevLett.100.242001. Jonathan M. Butterworth, David Grellscheid, Michael Krämer, Bjärn Sarrazin, and David Yallup. Constraining new physics with collider measurements of Standard Model signatures. JHEP, 03:078, 2017. arXiv:1606.05296, doi:10.1007/JHEP03(2017)078. G. Cacciapaglia, C. Csaki, G. Marandella, and A. Strumia. The Minimal Set of Electroweak Precision Parameters. Phys. Rev., D74:033011, 2006. arXiv:hep-ph/0604111, doi:10.1103/PhysRevD.74.033011. Giacomo Cacciapaglia, Csaba Csaki, Christophe Grojean, and John Terning. Curing the ills of higgsless models: the s parameter and unitarity. Phys. Rev., D71:035015, 2005. arXiv:hep-ph/0409126. Giacomo Cacciapaglia, Aldo Deandrea, Suzanne Gascon-Shotkin, Solène Le Corre, Morgan Lethuillier, and Junquan Tao. Search for a lighter Higgs boson in Two Higgs Doublet Models. JHEP, 12:068, 2016. arXiv:1607.08653, doi:10.1007/JHEP12(2016)068. Matteo Cacciari and Gavin P. Salam. Dispelling the n**3 myth for the k(t) jet-finder. Phys. Lett., B641:57–61, 2006. arXiv:hep-ph/0512210. Matteo Cacciari, Gavin P. Salam, and Gregory Soyez. 
The anti-$k_t$ jet clustering algorithm. Journal of High Energy Physics, 2008(04):063, 2008. Matteo Cacciari, Gavin P. Salam, and Gregory Soyez. FastJet User Manual. Eur.Phys.J., C72:1896, 2012. arXiv:1111.6097, doi:10.1140/epjc/s10052-012-1896-2. Robert N. Cahn, Stephen D. Ellis, Ronald Kleiss, and W. James Stirling. Transverse momentum signatures for heavy higgs bosons. Phys. Rev., D35:1626, 1987. John Campbell, R. Keith Ellis, and David L. Rainwater. Next-to-leading order QCD predictions for W + 2jet and Z + 2jet production at the CERN LHC. Phys. Rev., D68:094021, 2003. arXiv:hep-ph/0308195. John M. Campbell and R.K. Ellis. MCFM for the Tevatron and the LHC. Nucl.Phys.Proc.Suppl., 205-206:10–15, 2010. arXiv:1007.3492, doi:10.1016/j.nuclphysbps.2010.08.011. F. Caravaglios, Michelangelo L. Mangano, M. Moretti, and R. Pittau. A new approach to multi-jet calculations in hadron collisions. Nucl. Phys., B539:215–232, 1999. arXiv:hep-ph/9807570. Marcela Carena, Alejandro Daleo, Bogdan A. Dobrescu, and Timothy M. P. Tait. $Z^\prime $ gauge bosons at the Tevatron. Phys. Rev., D70:093009, 2004. arXiv:hep-ph/0408098, doi:10.1103/PhysRevD.70.093009. R. Casalbuoni, S. De Curtis, and M. Redi. Signals of the degenerate bess model at the lhc. Eur. Phys. J., C18:65–71, 2000. arXiv:hep-ph/0007097. S. Catani, Yuri L. Dokshitzer, M. H. Seymour, and B. R. Webber. Longitudinally invariant k(t) clustering algorithms for hadron hadron collisions. Nucl. Phys., B406:187–224, 1993. Joydeep Chakrabortty, Partha Konar, and Tanmoy Mondal. Constraining a class of B-L extended models from vacuum stability and perturbativity. Phys. Rev., D89(5):056014, 2014. arXiv:1308.1291, doi:10.1103/PhysRevD.89.056014. Mikael Chala. Direct bounds on heavy toplike quarks with standard and exotic decays. Phys. Rev., D96(1):015028, 2017. arXiv:1705.03013, doi:10.1103/PhysRevD.96.015028. Mikael Chala, Felix Kahlhoefer, Matthew McCullough, Germano Nardini, and Kai Schmidt-Hoberg. Constraining Dark Sectors with Monojets and Dijets. JHEP, 07:089, 2015. arXiv:1503.05916, doi:10.1007/JHEP07(2015)089. Michael S. Chanowitz. Quantum corrections from nonresonant w w scattering. Phys. Rept., 320:139–146, 1999. arXiv:hep-ph/9903522. Serguei Chatrchyan and others. Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC. Phys.Lett., B716:30–61, 2012. arXiv:1207.7235, doi:10.1016/j.physletb.2012.08.021. Serguei Chatrchyan and others. Measurement of differential top-quark pair production cross sections in $pp$ colisions at $\sqrt s=7$ TeV. Eur.Phys.J., C73(3):2339, 2013. arXiv:1211.2220, doi:10.1140/epjc/s10052-013-2339-4. Serguei Chatrchyan and others. Studies of jet mass in dijet and W/Z + jet events. JHEP, 05:090, 2013. arXiv:1303.4811, doi:10.1007/JHEP05(2013)090. Serguei Chatrchyan and others. Measurement of the ratio of inclusive jet cross sections using the anti-$k_T$ algorithm with radius parameters R=0.5 and 0.7 in pp collisions at $\sqrt s=7$ TeV. Phys. Rev., D90(7):072006, 2014. arXiv:1406.0324, doi:10.1103/PhysRevD.90.072006. Serguei Chatrchyan and others. Measurement of the triple-differential cross section for photon+jets production in proton-proton collisions at $\sqrt s$=7 TeV. JHEP, 06:009, 2014. arXiv:1311.6141, doi:10.1007/JHEP06(2014)009. Serguei Chatrchyan and others. Search for the standard model Higgs boson produced in association with a W or a Z boson and decaying to bottom quarks. Phys.Rev., D89:012003, 2014. arXiv:1310.3687, doi:10.1103/PhysRevD.89.012003. S. Chekanov and others. 
Substructure dependence of jet cross sections at hera and determination of alpha(s). Nucl. Phys., B700:3–50, 2004. arXiv:hep-ex/0405065. Chien-Yi Chen, S. Dawson, and Yue Zhang. Higgs CP Violation from Vectorlike Quarks. Phys. Rev., D92(7):075026, 2015. arXiv:1507.07020, doi:10.1103/PhysRevD.92.075026. Hsin-Chia Cheng and Ian Low. Tev symmetry and the little hierarchy problem. JHEP, 09:051, 2003. arXiv:hep-ph/0308199. CMS. Technical design report, CERN/LHCC/94-33. 1994. CMS Collaboration. Search for dark matter production in association with jets, or hadronically decaying W or Z boson at $\sqrt s = 13$ TeV. Technical Report, CERN, 2016. CMS Collaboration. Search for new physics in high mass diphoton events in $3.3~\mathrm fb^-1$ of proton-proton collisions at $\sqrt s=13~\mathrm TeV$ and combined interpretation of searches at $8 \mathrm TeV$ and $13 \mathrm TeV$. Technical Report, CERN, 2016. The ATLAS collaboration. Search for resonances in diphoton events with the ATLAS detector at $\sqrt s$ = 13 TeV. Technical Report, CERN, 2016. Eric Conte, Benjamin Fuks, and Guillaume Serret. MadAnalysis 5, A User-Friendly Framework for Collider Phenomenology. Comput. Phys. Commun., 184:222–256, 2013. arXiv:1206.1599, doi:10.1016/j.cpc.2012.09.009. G. Corcella and others. Herwig 6: an event generator for hadron emission reactions with interfering gluons (including supersymmetric processes). JHEP, 01:010, 2001. arXiv:hep-ph/0011363. Claudio Coriano, Luigi Delle Rose, and Carlo Marzo. Vacuum Stability in U(1)-Prime Extensions of the Standard Model with TeV Scale Right Handed Neutrinos. Phys. Lett., B738:13–19, 2014. arXiv:1407.8539, doi:10.1016/j.physletb.2014.09.001. Claudio Coriano, Luigi Delle Rose, and Carlo Marzo. Constraints on abelian extensions of the Standard Model from two-loop vacuum stability and $U(1)_B-L$. JHEP, 02:135, 2016. arXiv:1510.02379, doi:10.1007/JHEP02(2016)135. Glen Cowan, Kyle Cranmer, Eilam Gross, and Ofer Vitells. Asymptotic formulae for likelihood-based tests of new physics. Eur. Phys. J., C71:1554, 2011. [Erratum: Eur. Phys. J.C73,2501(2013)]. arXiv:1007.1727, doi:10.1140/epjc/s10052-011-1554-0, 10.1140/epjc/s10052-013-2501-z. Kyle Cranmer and Itay Yavin. RECAST: Extending the Impact of Existing Analyses. JHEP, 04:038, 2011. arXiv:1010.2506, doi:10.1007/JHEP04(2011)038. Csaba Csaki, Christophe Grojean, Hitoshi Murayama, Luigi Pilo, and John Terning. Gauge theories on an interval: unitarity without a higgs. Phys. Rev., D69:055006, 2004. arXiv:hep-ph/0305237. Csaba Csaki, Christophe Grojean, Luigi Pilo, and John Terning. Towards a realistic model of higgsless electroweak symmetry breaking. Phys. Rev. Lett., 92:101802, 2004. arXiv:hep-ph/0308038. Csaba Csaki, Jay Hubisz, Graham D. Kribs, Patrick Meade, and John Terning. Big corrections from a little higgs. Phys. Rev., D67:115002, 2003. arXiv:hep-ph/0211124. Michał Czakon, Paul Fiedler, and Alexander Mitov. Total Top-Quark Pair-Production Cross Section at Hadron Colliders Through $O(α\frac 4S)$. Phys.Rev.Lett., 110:252004, 2013. arXiv:1303.6254, doi:10.1103/PhysRevLett.110.252004. Arindam Das, Satsuki Oda, Nobuchika Okada, and Dai-suke Takahashi. Classically conformal U(1)$^′$ extended standard model, electroweak vacuum stability, and LHC Run-2 bounds. Phys. Rev., D93(11):115038, 2016. arXiv:1605.01157, doi:10.1103/PhysRevD.93.115038. Arindam Das, Nobuchika Okada, and Nathan Papapietro. Electroweak vacuum stability in classically conformal B-L extension of the Standard Model. Eur. Phys. J., C77(2):122, 2017. 
arXiv:1509.01466, doi:10.1140/epjc/s10052-017-4683-2. Celine Degrande, Claude Duhr, Benjamin Fuks, David Grellscheid, Olivier Mattelaer, and Thomas Reiter. UFO - The Universal FeynRules Output. Comput. Phys. Commun., 183:1201–1214, 2012. arXiv:1108.2040, doi:10.1016/j.cpc.2012.01.022. Celine Degrande, Olivier Mattelaer, Richard Ruiz, and Jessica Turner. Fully-Automated Precision Predictions for Heavy Neutrino Production Mechanisms at Hadron Colliders. Phys. Rev., D94(5):053002, 2016. arXiv:1602.06957, doi:10.1103/PhysRevD.94.053002. F. del Aguila, J. de Blas, and M. Perez-Victoria. Effects of new leptons in Electroweak Precision Data. Phys. Rev., D78:013010, 2008. arXiv:0803.4008, doi:10.1103/PhysRevD.78.013010. Frank F. Deppisch, P. S. Bhupal Dev, and Apostolos Pilaftsis. Neutrinos and Collider Physics. New J. Phys., 17(7):075019, 2015. arXiv:1502.06541, doi:10.1088/1367-2630/17/7/075019. Frank F. Deppisch, Wei Liu, and Manimala Mitra. Long-lived Heavy Neutrinos from Higgs Decays. JHEP, 08:181, 2018. arXiv:1804.04075, doi:10.1007/JHEP08(2018)181. Daniel Dercks, Nishita Desai, Jong Soo Kim, Krzysztof Rolbiecki, Jamie Tattersall, and Torsten Weber. CheckMATE 2: From the model to the limit. Comput. Phys. Commun., 221:383–418, 2017. arXiv:1611.09856, doi:10.1016/j.cpc.2017.08.021. Daniel Dercks, Herbi Dreiner, Manuel E. Krauss, Toby Opferkuch, and Annika Reinert. R-Parity Violation at the LHC. Eur. Phys. J., C77(12):856, 2017. arXiv:1706.09418, doi:10.1140/epjc/s10052-017-5414-4. Abdelhak Djouadi. Decays of the Higgs bosons. In Quantum effects in the minimal supersymmetric standard model. Proceedings, International Workshop, MSSM, Barcelona, Spain, September 9-13, 1997, 197–222. 1997. arXiv:hep-ph/9712334. Abdelhak Djouadi and Alexander Lenz. Sealing the fate of a fourth generation of fermions. Phys. Lett., B715:310–314, 2012. arXiv:1204.1252, doi:10.1016/j.physletb.2012.07.060. A. Dobado, M. J. Herrero, J. R. Pelaez, and E. Ruiz Morales. Lhc sensitivity to the resonance spectrum of a minimal strongly interacting electroweak symmetry breaking sector. Phys. Rev., D62:055011, 2000. arXiv:hep-ph/9912224. A. Dobado and J. R. Pelaez. The inverse amplitude method in chiral perturbation theory. Phys. Rev., D56:3057–3073, 1997. arXiv:hep-ph/9604416. Yuri L. Dokshitzer, G. D. Leder, S. Moretti, and B. R. Webber. Better jet clustering algorithms. JHEP, 08:001, 1997. arXiv:hep-ph/9707323. Yuri L. Dokshitzer, G.D. Leder, S. Moretti, and B.R. Webber. Better jet clustering algorithms. JHEP, 9708:001, 1997. arXiv:hep-ph/9707323. Manuel Drees, Herbi Dreiner, Daniel Schmeier, Jamie Tattersall, and Jong Soo Kim. CheckMATE: Confronting your Favourite New Physics Model with LHC Data. Comput. Phys. Commun., 187:227–265, 2015. arXiv:1312.2591, doi:10.1016/j.cpc.2014.10.018. R. Keith Ellis and Sinisa Veseli. Strong radiative corrections to W b anti-b production in p anti-p collisions. Phys. Rev., D60:011501, 1999. arXiv:hep-ph/9810489. Stephen D. Ellis and Davison E. Soper. Successive combination jet algorithm for hadron collisions. Phys. Rev., D48:3160–3166, 1993. arXiv:hep-ph/9305266. Christoph Englert, Matthew McCullough, and Michael Spannowsky. S-Channel Dark Matter Simplified Models and Unitarity. Phys. Dark Univ., 14:48–56, 2016. arXiv:1604.07975, doi:10.1016/j.dark.2016.09.002. B. I. Ermolaev and Victor S. Fadin. Log - Log Asymptotic Form of Exclusive Cross-Sections in Quantum Chromodynamics. JETP Lett., 33:269–272, 1981. Malcolm Fairbairn, John Heal, Felix Kahlhoefer, and Patrick Tunney. 
Constraints on Z' models from LHC dijet searches and implications for dark matter. JHEP, 09:018, 2016. arXiv:1605.07940, doi:10.1007/JHEP09(2016)018. Giancarlo Ferrera, Massimiliano Grazzini, and Francesco Tramontano. Associated WH production at hadron colliders: a fully exclusive QCD calculation at NNLO. Phys.Rev.Lett., 107:152003, 2011. arXiv:1107.1164, doi:10.1103/PhysRevLett.107.152003. Giancarlo Ferrera, Massimiliano Grazzini, and Francesco Tramontano. Associated $ZH$ production at hadron colliders: the fully differential NNLO QCD calculation. Phys.Lett., B740:51–55, 2015. arXiv:1407.4747, doi:10.1016/j.physletb.2014.11.040. Sylvain Fichet, Gero von Gersdorff, Eduardo Ponton, and Rogerio Rosenfeld. The Excitation of the Global Symmetry-Breaking Vacuum in Composite Higgs Models. JHEP, 09:158, 2016. arXiv:1607.03125, doi:10.1007/JHEP09(2016)158. Patrick J. Fox and Ciaran Williams. Next-to-Leading Order Predictions for Dark Matter Production at Hadron Colliders. Phys. Rev., D87(5):054030, 2013. arXiv:1211.6390, doi:10.1103/PhysRevD.87.054030. Stefano Frixione, Eric Laenen, Patrick Motylinski, Bryan R. Webber, and Chris D. White. Single-top hadroproduction in association with a W boson. JHEP, 0807:029, 2008. arXiv:0805.3067, doi:10.1088/1126-6708/2008/07/029. Stefano Frixione, Paolo Nason, and Bryan R. Webber. Matching nlo qcd and parton showers in heavy flavour production. JHEP, 08:007, 2003. arXiv:hep-ph/0305252. Stefano Frixione, Fabian Stoeckli, Paolo Torrielli, and Bryan R. Webber. NLO QCD corrections in Herwig++ with MC@NLO. JHEP, 1101:053, 2011. arXiv:1010.0568, doi:10.1007/JHEP01(2011)053. Stefano Frixione and Bryan R. Webber. Matching nlo qcd computations and parton shower simulations. JHEP, 06:029, 2002. arXiv:hep-ph/0204244. Robert Garisto. Editorial: theorists react to the cern 750 gev diphoton data. Phys. Rev. Lett., 116:150001, Apr 2016. URL: http://link.aps.org/doi/10.1103/PhysRevLett.116.150001, doi:10.1103/PhysRevLett.116.150001. G. F. Giudice and others. Searches for new physics. In 3rd CERN Workshop on LEP2 Physics Geneva, Switzerland, November 2-3, 1995, 463–524. 1996. [,463(1996)]. arXiv:hep-ph/9602207. P. Golonka and others. The tauola-photos-f environment for the tauola and photos packages, release ii. Comput. Phys. Commun., 174:818–835, 2006. arXiv:hep-ph/0312240. A. Gomez Nicola and J. R. Pelaez. Meson meson scattering within one loop chiral perturbation theory and its unitarization. Phys. Rev., D65:054009, 2002. arXiv:hep-ph/0109056. Eilam Gross and Ofer Vitells. Trial factors or the look elsewhere effect in high energy physics. Eur. Phys. J., C70:525–530, 2010. arXiv:1005.1891, doi:10.1140/epjc/s10052-010-1470-8. Ulrich Haisch, Felix Kahlhoefer, and Emanuele Re. QCD effects in mono-jet searches for dark matter. JHEP, 12:007, 2013. arXiv:1310.4491, doi:10.1007/JHEP12(2013)007. Roni Harnik, Joachim Kopp, and Pedro A. N. Machado. Exploring nu Signals in Dark Matter Detectors. JCAP, 1207:026, 2012. arXiv:1202.6073, doi:10.1088/1475-7516/2012/07/026. Julian Heeck. Unbroken B – L symmetry. Phys. Lett., B739:256–262, 2014. arXiv:1408.6845, doi:10.1016/j.physletb.2014.10.067. Jan Heisig, Michael Krämer, Mathieu Pellen, and Christopher Wiebusch. Constraints on Majorana Dark Matter from the LHC and IceCube. Phys. Rev., D93(5):055029, 2016. arXiv:1509.07867, doi:10.1103/PhysRevD.93.055029. JoAnne L. Hewett, Frank J. Petriello, and Thomas G. Rizzo. Constraining the littlest higgs. ((u)). JHEP, 10:062, 2003. arXiv:hep-ph/0211218. B. Holdom. 
T' at the lhc: the physics of discovery. JHEP, 03:063, 2007. arXiv:hep-ph/0702037. Agnieszka Ilnicka, Tania Robens, and Tim Stefaniak. Constraining Extended Scalar Sectors at the LHC and beyond. Mod. Phys. Lett., A33(10n11):1830007, 2018. arXiv:1803.03594, doi:10.1142/S0217732318300070. Philip Ilten, Yotam Soreq, Mike Williams, and Wei Xue. Serendipity in dark photon searches. JHEP, 06:004, 2018. arXiv:1801.04847, doi:10.1007/JHEP06(2018)004. K. Iordanidis and D. Zeppenfeld. Searching for a heavy higgs boson via the h –> l nu j j decay mode at the cern lhc. Phys. Rev., D57:3072–3083, 1998. arXiv:hep-ph/9709506. Thomas Jacques, Andrey Katz, Enrico Morgante, Davide Racco, Mohamed Rameez, and Antonio Riotto. Complementarity of DM searches in a consistent simplified model: the case of $Z'$. JHEP, 10:071, 2016. arXiv:1605.06513, doi:10.1007/JHEP10(2016)071. Thomas Junk. Confidence level computation for combining searches with small statistics. Nucl. Instrum. Meth., A434:435–443, 1999. arXiv:hep-ex/9902006, doi:10.1016/S0168-9002(99)00498-2. Felix Kahlhoefer, Kai Schmidt-Hoberg, Thomas Schwetz, and Stefan Vogl. Implications of unitarity and gauge invariance for simplified dark matter models. JHEP, 02:016, 2016. [JHEP02,016(2016)]. arXiv:1510.02110, doi:10.1007/JHEP02(2016)016. V. Khachatryan and others. Measurements of the associated production of a Z boson and b jets in pp collisions at $\sqrt s = 8\,\text TeV $. Eur. Phys. J., C77(11):751, 2017. arXiv:1611.06507, doi:10.1140/epjc/s10052-017-5140-y. Vardan Khachatryan and others. Differential cross section measurements for the production of a W boson in association with jets in proton–proton collisions at $\sqrt s=7$ TeV. Phys. Lett., B741:12–37, 2015. arXiv:1406.7533, doi:10.1016/j.physletb.2014.12.003. Vardan Khachatryan and others. Measurements of jet multiplicity and differential production cross sections of $Z +$ jets events in proton-proton collisions at $\sqrt s =$ 7 TeV. Phys. Rev., D91(5):052008, 2015. arXiv:1408.3104, doi:10.1103/PhysRevD.91.052008. Vardan Khachatryan and others. Precise determination of the mass of the Higgs boson and tests of compatibility of its couplings with the standard model predictions using proton collisions at 7 and 8 $\,\text TeV$. Eur. Phys. J., C75(5):212, 2015. arXiv:1412.8662, doi:10.1140/epjc/s10052-015-3351-7. Vardan Khachatryan and others. Search for heavy Majorana neutrinos in $\mu ^\pm \mu ^\pm +$ jets events in proton-proton collisions at $\sqrt s$ = 8 TeV. Phys. Lett., B748:144–166, 2015. arXiv:1501.05566, doi:10.1016/j.physletb.2015.06.070. Vardan Khachatryan and others. Measurement of the double-differential inclusive jet cross section in proton-proton collisions at $\sqrt s = 13\,\text TeV $. Eur. Phys. J., C76(8):451, 2016. arXiv:1605.04436, doi:10.1140/epjc/s10052-016-4286-3. Vardan Khachatryan and others. Measurement of the integrated and differential $t \bar t$ production cross sections for high-$p_t$ top quarks in $pp$ collisions at $\sqrt s =$ 8 TeV. Phys. Rev., D94(7):072002, 2016. arXiv:1605.00116, doi:10.1103/PhysRevD.94.072002. Vardan Khachatryan and others. Measurements of differential cross sections for associated production of a W boson and jets in proton-proton collisions at $\sqrt s =$ 8 TeV. Phys. Rev., D95:052002, 2017. arXiv:1610.04222, doi:10.1103/PhysRevD.95.052002. Vardan Khachatryan and others. Search for single production of a heavy vector-like T quark decaying to a Higgs boson and a top quark with a lepton and jets in the final state. Phys. Lett., B771:80–105, 2017. 
arXiv:1612.00999, doi:10.1016/j.physletb.2017.05.019. Jong Soo Kim, Daniel Schmeier, Jamie Tattersall, and Krzysztof Rolbiecki. A framework to create customised LHC analyses within CheckMATE. Comput. Phys. Commun., 196:535–562, 2015. arXiv:1503.01123, doi:10.1016/j.cpc.2015.06.002. Michael Klasen, Florian Lyonnet, and Farinaldo S. Queiroz. NLO+NLL collider bounds, Dirac fermion and scalar dark matter in the B–L model. Eur. Phys. J., C77(5):348, 2017. arXiv:1607.06468, doi:10.1140/epjc/s10052-017-4904-8. R. Kleiss and W. James Stirling. Tagging the higgs. Phys. Lett., B200:193, 1988. Sabine Kraml, Suchita Kulkarni, Ursula Laa, Andre Lessa, Wolfgang Magerl, Doris Proschofsky-Spindler, and Wolfgang Waltenberger. SModelS: a tool for interpreting simplified-model results from the LHC and its application to supersymmetry. Eur. Phys. J., C74:2868, 2014. arXiv:1312.4175, doi:10.1140/epjc/s10052-014-2868-5. Kenneth Lane and Stephen Mrenna. The collider phenomenology of technihadrons in the technicolor straw man model. Phys. Rev., D67:115011, 2003. arXiv:hep-ph/0210299. Manfred Lindner, Farinaldo S. Queiroz, Werner Rodejohann, and Xun-Jie Xu. Neutrino-electron scattering: general constraints on Z$^′$ and dark photon models. JHEP, 05:098, 2018. arXiv:1803.00060, doi:10.1007/JHEP05(2018)098. D. López-Val and T. Robens. $\Delta r$ and the W-boson mass in the singlet extension of the standard model. Phys. Rev., D90:114018, 2014. arXiv:1406.1043, doi:10.1103/PhysRevD.90.114018. Eamonn Maguire, Lukas Heinrich, and Graeme Watt. HEPData: a repository for high energy physics data. J. Phys. Conf. Ser., 898(10):102006, 2017. arXiv:1704.05473, doi:10.1088/1742-6596/898/10/102006. Fabio Maltoni and Tim Stelzer. Madevent: automatic event generation with madgraph. JHEP, 02:027, 2003. arXiv:hep-ph/0208156. Michelangelo L. Mangano, Mauro Moretti, Fulvio Piccinini, Roberto Pittau, and Antonio D. Polosa. Alpgen, a generator for hard multiparton processes in hadronic collisions. JHEP, 07:001, 2003. arXiv:hep-ph/0206293. Michelangelo L. Mangano, Mauro Moretti, and Roberto Pittau. Multijet matrix elements and shower evolution in hadronic collisions: w b anti-b + (n)jets as a case study. Nucl. Phys., B632:343–362, 2002. arXiv:hep-ph/0108069. Alberto Mariotti, Diego Redigolo, Filippo Sala, and Kohsaku Tobioka. New LHC bound on low-mass diphoton resonances. Phys. Lett., B783:13–18, 2018. arXiv:1710.01743, doi:10.1016/j.physletb.2018.06.039. A. D. Martin, W. J. Stirling, R. S. Thorne, and G. Watt. Parton distributions for the LHC. Eur. Phys. J., C63:189–285, 2009. arXiv:0901.0002, doi:10.1140/epjc/s10052-009-1072-5. A.D. Martin, W.J. Stirling, R.S. Thorne, and G. Watt. Parton distributions for the LHC. Eur.Phys.J., C63:189–285, 2009. arXiv:0901.0002, doi:10.1140/epjc/s10052-009-1072-5. M Mertens, J Grosse-Knetter, M Schumacher, and M Kobel. Monte carlo study on anomalous quartic couplings in the scattering of weak gauge bosons with the atlas detector. Technical Report ATL-PHYS-INT-2007-009. ATL-COM-PHYS-2007-021, CERN, Geneva, Apr 2007. Alfred H. Mueller. On the Multiplicity of Hadrons in QCD Jets. Phys. Lett., B104:161–164, 1981. Matthias Neubert, Jian Wang, and Cen Zhang. Higher-Order QCD Predictions for Dark Matter Production in Mono-$Z$ Searches at the LHC. JHEP, 02:082, 2016. arXiv:1509.05785, doi:10.1007/JHEP02(2016)082. A Neusiedl and S Tapproge. Measurement of the inclusive $b\overline b$ dijetmass cross section in early atlas data. Technical Report ATL-COM-PHYS-2011-039, CERN, Geneva, January 2011. J. A. Oller, E. 
Oset, and J. R. Pelaez. Meson meson and meson baryon interactions in a chiral non- perturbative approach. Phys. Rev., D59:074001, 1999. arXiv:hep-ph/9804209. Frank E. Paige, Serban D. Protopopescu, Howard Baer, and Xerxes Tata. ISAJET 7.69: A Monte Carlo event generator for pp, anti-p p, and e+e- reactions. Technical Report, various, 2003. arXiv:hep-ph/0312045. Michele Papucci, Kazuki Sakurai, Andreas Weiler, and Lisa Zeune. Fastlim: a fast LHC limit calculator. Eur. Phys. J., C74(11):3163, 2014. arXiv:1402.0492, doi:10.1140/epjc/s10052-014-3163-1. C. Patrignani and others. Review of Particle Physics. Chin. Phys., C40(10):100001, 2016. doi:10.1088/1674-1137/40/10/100001. Tilman Plehn and Michael Rauch. The quartic higgs coupling at hadron colliders. Phys. Rev., D72:053008, 2005. arXiv:hep-ph/0507321, doi:10.1103/PhysRevD.72.053008. Giovanni Marco Pruna. Phenomenology of the minimal $B-L$ Model: the Higgs sector at the Large Hadron Collider and future Linear Colliders. PhD thesis, Southampton U., 2011. URL: https://inspirehep.net/record/914976/files/arXiv:1106.4691.pdf, arXiv:1106.4691. J. Pumplin and others. New generation of parton distributions with uncertainties from global QCD analysis. JHEP, 07:012, 2002. arXiv:hep-ph/0201195. David L. Rainwater and D. Zeppenfeld. Observing $h \to w^(*)w^(*) \to e^\pm \mu ^\mp /\!\!\!p_t$ in weak boson fusion with dual forward jet tagging at the cern lhc. Phys. Rev., D60:113004, 1999. arXiv:hep-ph/9906218. Alexander L. Read. Presentation of search results: The CL(s) technique. J. Phys., G28:2693–2704, 2002. [,11(2002)]. doi:10.1088/0954-3899/28/10/313. Tania Robens and Tim Stefaniak. Status of the Higgs Singlet Extension of the Standard Model after LHC Run 1. Eur. Phys. J., C75:104, 2015. arXiv:1501.02234, doi:10.1140/epjc/s10052-015-3323-y. Gavin P. Salam and Gregory Soyez. A practical Seedless Infrared-Safe Cone jet algorithm. JHEP, 05:086, 2007. arXiv:arXiv:0704.0292 [hep-ph]. R. Sekhar Chivukula and others. A three site higgsless model. Phys. Rev., D74:075011, 2006. arXiv:hep-ph/0607124. Michael H. Seymour. Searches for new particles using cone and cluster jet algorithms: A Comparative study. Z. Phys., C62:127–138, 1994. Albert M Sirunyan and others. Measurement of the differential cross sections for the associated production of a $W$ boson and jets in proton-proton collisions at $\sqrt s=13$ TeV. Phys. Rev., D96(7):072005, 2017. arXiv:1707.05979, doi:10.1103/PhysRevD.96.072005. Albert M Sirunyan and others. Measurement of differential cross sections for the production of top quark pairs and of additional jets in lepton+jets events from pp collisions at $\sqrt s =$ 13 TeV. Phys. Rev., D97(11):112003, 2018. arXiv:1803.08856, doi:10.1103/PhysRevD.97.112003. Albert M Sirunyan and others. Search for high-mass resonances in dilepton final states in proton-proton collisions at $\sqrt s=$ 13 TeV. JHEP, 06:120, 2018. arXiv:1803.06292, doi:10.1007/JHEP06(2018)120. Albert M Sirunyan and others. Search for single production of vector-like quarks decaying to a b quark and a Higgs boson. JHEP, 06:031, 2018. arXiv:1802.01486, doi:10.1007/JHEP06(2018)031. Albert M Sirunyan and others. Search for vector-like T and B quark pairs in final states with leptons at $\sqrt s =$ 13 TeV. JHEP, 08:177, 2018. arXiv:1805.04758, doi:10.1007/JHEP08(2018)177. Albert M Sirunyan and others. Search for vector-like quarks in events with two oppositely charged leptons and jets in proton-proton collisions at $\sqrt s =$ 13 TeV. Submitted to: Eur. Phys. J., 2018. arXiv:1812.09768. 
Torbjorn Sjostrand, Stephen Mrenna, and Peter Skands. Pythia 6.4 physics and manual. JHEP, 05:026, 2006. arXiv:hep-ph/0603175. Peter Z. Skands and others. SUSY Les Houches accord: Interfacing SUSY spectrum calculators, decay packages, and event generators. JHEP, 07:036, 2004. arXiv:hep-ph/0311123, doi:10.1088/1126-6708/2004/07/036. Witold Skiba and David Tucker-Smith. Using jet mass to discover vector quarks at the lhc. Phys. Rev., D75:115010, 2007. arXiv:hep-ph/0701247. E. Stefanidis. UCL PhD thesis. 2007. Abraham Wald. An extension of Wilks' method for setting tolerance limits. Annals Math. Statist., 14(1):45–55, March 1943. URL: http://projecteuclid.org/euclid.aoms/1177731491, doi:http://dx.doi.org/10.1214/aoms/1177731491. James D. Wells, Zhengkang Zhang, and Yue Zhao. Establishing the Isolated Standard Model. Phys. Rev., D96(1):015005, 2017. arXiv:1702.06954, doi:10.1103/PhysRevD.96.015005. S. S. Wilks. The Large-Sample Distribution of the Likelihood Ratio for Testing Composite Hypotheses. Annals Math. Statist., 9(1):60–62, 1938. doi:10.1214/aoms/1177732360. © Copyright 2016, Jon Butterworth, David Grellscheid, Michael Krämer, David Yallup. Last updated on 04 Apr, 2019. Created using Sphinx 1.8.3.
CommonCrawl
We consider a Wigner-type ensemble, i.e. large Hermitian $N\times N$ random matrices $H=H^*$ with centered independent entries and with a general matrix of variances $S_{xy}=\mathbb{E}|H_{xy}|^2$. The norm of $H$ is asymptotically given by the maximum of the support of the self-consistent density of states. We establish a bound on this maximum in terms of norms of powers of $S$ that substantially improves the earlier bound $2\|S\|_\infty^{1/2}$. The key element of the proof is an effective Markov chain approximation for the contributions of the weighted Dyck paths appearing in the iterative solution of the corresponding Dyson equation.
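For readers who want to experiment numerically, here is a minimal sketch of how the self-consistent density of states mentioned above can be computed. It assumes the standard vector Dyson equation $m_x(z) = \big(-z - \sum_y S_{xy}\, m_y(z)\big)^{-1}$ for the Stieltjes transform (that specific form is not stated in the abstract, so treat it as an assumption), and simply iterates it to a fixed point for a spectral parameter $z$ slightly above the real axis.

```python
import numpy as np

def self_consistent_density(S, z, tol=1e-10, max_iter=10_000):
    """Fixed-point iteration for the vector Dyson equation
    m_x(z) = 1 / (-z - (S m)_x(z)),  Im z > 0,
    returning the self-consistent density Im<m>/pi at Re z."""
    n = S.shape[0]
    m = np.full(n, 1j, dtype=complex)       # initial guess in the upper half plane
    for _ in range(max_iter):
        m_new = 1.0 / (-z - S @ m)
        if np.max(np.abs(m_new - m)) < tol:
            m = m_new
            break
        m = 0.5 * m + 0.5 * m_new            # damped update for stability
    return np.mean(m.imag) / np.pi

# Example: the flat variance profile S_xy = 1/N reproduces the semicircle law,
# whose support edge (the asymptotic norm) sits at 2.
N = 200
S = np.full((N, N), 1.0 / N)
for E in [1.0, 1.9, 2.1]:
    print(E, self_consistent_density(S, E + 1e-3j))
```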
CommonCrawl
We study relations between the spectra of two operators that are connected to each other through some intertwining conditions. As an application, we obtain new results on the spectra of multiplication operators on $B(\mathcal H)$, relating them to the spectra of the restriction of the operators to the ideal $\mathcal C_2$ of Hilbert-Schmidt operators. We also solve one of the problems, posed in , about the positivity of the spectrum of multiplication operators with positive operator coefficients when the coefficients on one side commute. Using the Wiener-Pitt phenomenon, we show that the spectrum of a multiplication operator with normal coefficients satisfying the Haagerup condition might be strictly larger than the spectrum of its restriction to $\mathcal C_2$.
CommonCrawl
Let $(M, g) = (N_1, g_1) \times_f (N_2, g_2)$ be an Einstein warped-product manifold, with metric $g=g_1+f^2g_2$. What does it mean if the scalar curvature of its base manifold $(N_1, g_1)$ is equal to a multiple of the warping function $f$? Does it have any geometrical or physical meaning?
CommonCrawl
Looking to get your computer repaired? ITX Computer Repair is a fast, reliable Philadelphia computer repair and IT support company with reasonable rates. We offer on-site and in-shop desktop and laptop repair at a reasonable price. We pride ourselves on being very knowledgeable in all aspects of computer repair, including software tech support, PC repair, network installations, spyware and virus removal, data recovery, DSL and 4G Internet installation and consulting.

Applications of Statistics to Minimize and Quantify Measurement Error in Finite Element Model Updating: the test data can include displacements, tilts, and strains from static tests, as well as mode shapes and natural frequencies. The research utilizes a computer program developed at Tufts University called PARIS, short for PARameter Identification System. Social Media Audit: Measure for Impact (SpringerBriefs BWL/Mgmt), January 2013, author: Urs E. Gattiker.

I am looking for a measure that describes the error in the data $y_1,\ldots,y_n$, and I want the measure to take values between $0$ and $1$. In the above example, $p=2$, $q=\infty$. –A.S. Thanks for your suggestions, they look interesting. Now my question is: are there other (perhaps better) measures to describe a normalized error?

Normalized error is also used to identify outliers in proficiency test results. If you have participated in a proficiency test before, you may have noticed it in your final summary report, either by name or abbreviated 'En'. To calculate it, take the difference of the two measurement results and divide it by the combined uncertainty (i.e. the RSS of the uncertainties). Our consulting services are targeted to assist calibration and testing laboratories to attain and retain ISO/IEC 17025:2005 accreditation. I am going to calculate normalized error using data from one of my proficiency tests. So, my results were satisfactory. Conclusion: calculating normalized error is not common unless you are a proficiency testing provider.

In the case of normalization of scores in educational assessment, there may be an intention to align distributions to a normal distribution; for instance, one can exclude the best and worst sub-indicator scores from inclusion in the index. The RMSD represents the sample standard deviation of the differences between predicted values and observed values. RMSD is a good measure of accuracy, but only to compare forecasting errors of different models for a particular variable and not between variables, as it is scale-dependent. Submissions for the Netflix Prize were judged using the RMSD from the test dataset's undisclosed "true" values. The PDM performed satisfactorily well in simulating the flows of 17th January 2007, with an average Nash–Sutcliffe Efficiency Index (NSE) of 0.65.
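As a concrete illustration of the quantities discussed above, here is a small Python sketch. It assumes the usual definition of the normalized error used in proficiency testing, $E_n = (x_{\text{lab}} - x_{\text{ref}})/\sqrt{U_{\text{lab}}^2 + U_{\text{ref}}^2}$ with expanded uncertainties $U$ (with $|E_n| \le 1$ counting as satisfactory), together with a range-normalized RMSD; the numbers are invented for the example.

```python
import numpy as np

def normalized_error(x_lab, x_ref, U_lab, U_ref):
    """E_n = (x_lab - x_ref) / sqrt(U_lab^2 + U_ref^2); |E_n| <= 1 is satisfactory."""
    return (x_lab - x_ref) / np.sqrt(U_lab**2 + U_ref**2)

def nrmse(y_pred, y_obs):
    """RMSD scaled by the observed range (one common way to get a dimensionless error)."""
    rmsd = np.sqrt(np.mean((y_pred - y_obs) ** 2))
    return rmsd / (y_obs.max() - y_obs.min())

# Hypothetical proficiency-test result: lab value vs reference value.
print(normalized_error(x_lab=10.12, x_ref=10.00, U_lab=0.15, U_ref=0.10))  # ~0.67 -> satisfactory

# Hypothetical predictions vs observations.
y_obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8, 5.3])
print(nrmse(y_pred, y_obs))
```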
CommonCrawl
The contribution of the ion current in the lab frame to the total plasma current is studied in the Irvine Field Reversed Configuration (IFRC). A charge-exchange neutral particle analyzer chops the emitted neutrals at a rate of 13 kHz and shows that the peak energy is below the 20 eV minimum detectable energy threshold. A modified monochromator that is used to measure Doppler shifts of impurity lines indicates that there is a flow in the range of 5–7 km/s in IFRC. By evaluating the collision times between the impurities and hydrogen, the dominant plasma ion species, it is concluded that the ions rotate with an angular frequency of $\sim 4\times 10^4$ rad/s. Estimates of the ion current in the lab frame are accomplished by determining the ion density distribution using pressure balance, and by fitting the measured magnetic probe data to a theoretical equilibrium. The results from these estimates indicate that the ion current is 1–2 orders of magnitude larger than the measured plasma current of 15 kA. Calculations of electron drifts from the equilibrium fields show that the electrons cancel most of the ion current.
CommonCrawl
Abstract: We discuss general properties of $A_\infty$-algebras and their applications to the theory of open strings. The properties of cyclicity for $A_\infty$-algebras are examined in detail. We prove the decomposition theorem, which is a stronger version of the minimal model theorem, for $A_\infty$-algebras and cyclic $A_\infty$-algebras and discuss various consequences of it. In particular it is applied to classical open string field theories and it is shown that all classical open string field theories on a fixed conformal background are cyclic $A_\infty$-isomorphic to each other. The same results hold for classical closed string field theories, whose algebraic structure is governed by cyclic $L_\infty$-algebras.
CommonCrawl
Why does the sum of residuals equal 0 when we do a sample regression by OLS?

If the OLS regression contains a constant term, i.e. if in the regressor matrix there is a regressor of a series of ones, then the sum of residuals is exactly equal to zero, as a matter of algebra. In the simple two-variable case, the OLS estimator $(\hat a, \hat b)$ minimizes the sum of squared residuals $\sum_i (y_i - a - b x_i)^2$; the first-order condition with respect to the constant $a$ is $\sum_i (y_i - \hat a - \hat b x_i) = 0$, which says exactly that the residuals sum to zero. The above also implies that if the regression specification does not include a constant term, then the sum of residuals will not, in general, be zero. In matrix form, the residual vector is $\hat{\mathbf u} = \mathbf M \mathbf y$, where $\mathbf M = \mathbf I - \mathbf X(\mathbf X'\mathbf X)^{-1}\mathbf X'$. It is easily verified that $\mathbf M \mathbf X = \mathbf 0$. Also $\mathbf M$ is idempotent and symmetric. So we need the regressor matrix to contain a series of ones, so that we get $\mathbf M\mathbf i = \mathbf 0$ and hence $\mathbf i'\hat{\mathbf u} = \mathbf i'\mathbf M \mathbf y = 0$.

The accepted solution by Alecos Papadopoulos has a mistake at the end. I can't comment so I will have to submit this correction as a solution, sorry. It's true that a series of ones would do the job. But it's not true that we need it. We do not need the regressor to have a series of ones in order for $\mathbf M\mathbf i = \mathbf 0$: it is enough that $\mathbf i$ lies in the column space of $\mathbf X$.

The sum of residuals doesn't exactly equal $0$. However, it is a very reasonable assumption that the expectation of the residuals will be $0$. This is similar to the case of unbiased estimation, where we want the bias to be $0$. Here the residuals $y_i-\beta_0-\beta_1x_i$ are sometimes negative, sometimes positive, but we hope that their overall sum will be $0$, so that the estimation is good enough.
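A quick numerical illustration of the two cases discussed above (plain NumPy, added here for illustration; the data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)

# With a constant term: design matrix contains a column of ones.
X_const = np.column_stack([np.ones(n), x])
beta_const, *_ = np.linalg.lstsq(X_const, y, rcond=None)
resid_const = y - X_const @ beta_const

# Without a constant term: regression through the origin.
X_noconst = x.reshape(-1, 1)
beta_noconst, *_ = np.linalg.lstsq(X_noconst, y, rcond=None)
resid_noconst = y - X_noconst @ beta_noconst

print("sum of residuals with intercept   :", resid_const.sum())    # ~1e-13 (zero up to rounding)
print("sum of residuals without intercept:", resid_noconst.sum())  # generally far from 0
```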
CommonCrawl
Let $A, B, S, T$ be sets such that $A \subseteq B$ and $S \subseteq T$. Let $A, B, S$ be sets such that $A \subseteq B$. Let $A, S, T$ be sets such that $S \subseteq T$. Let $A, B, C$ be sets such that $B \ne \varnothing$. Let $A \times B \subseteq C \times C$.

First we show that $A \subseteq B \land S \subseteq T \implies A \times S \subseteq B \times T$. First, let $A = \varnothing$ or $S = \varnothing$. Then $A \times S = \varnothing$, and $\varnothing \subseteq B \times T$ holds trivially. Next, let $A, S \ne \varnothing$, and let $(a, s) \in A \times S$. Then $a \in A \subseteq B$ and $s \in S \subseteq T$, so $(a, s) \in B \times T$. Thus $A \times S \subseteq B \times T$ as we were to prove.

Now we show that if $A, S \ne \varnothing$, then $A \times S \subseteq B \times T \implies A \subseteq B \land S \subseteq T$. So suppose that $A \times S \subseteq B \times T$. Since $A$ and $S$ are nonempty, pick $a \in A$ and $s \in S$. For every $x \in A$ we have $(x, s) \in A \times S \subseteq B \times T$, so $x \in B$; hence $A \subseteq B$. Similarly, for every $y \in S$ we have $(a, y) \in A \times S \subseteq B \times T$, so $y \in T$; hence $S \subseteq T$.

First note that if $A = \varnothing$, then $A \times S = \varnothing \subseteq B \times T$, whatever $S$ is, so it is not necessarily the case that $S \subseteq T$. Similarly if $S = \varnothing$; it is not necessarily the case that $A \subseteq B$. So that explains the restriction $A, S \ne \varnothing$.
CommonCrawl
In this paper, an explicit method, hereby called an Exponential Method of variable order, is derived from the earlier published Exponential Method of orders 2 and 3. The present method of variable order commands higher accuracy since it obtains numerical solutions which coincide with the exact theoretical solutions, to eight or more decimal places, in virtually all stiff and nonstiff (linear and nonlinear) ODE systems. Numerical applications show that it has faster convergence and much higher accuracy than many existing methods. New formats are now introduced to make it easy to integrate any $K \times K$ system. Other remarkable features include the use of the exact Jacobians of nonlinear systems; implementation of a phase-to-phase integration of stiff systems, with exact formulas for determining the terminal points of phases; avoidance of matrix inversions, LU decompositions and the cumbersome Newton iterations, since the method is explicit; solving oscillatory systems without additional refinements; and a straightforward application of the method without starters. Implementations show that any program of the Exponential Method of variable order (e.g. the QBASIC program) produces a very fast or instant output in automatic computation.
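The abstract does not spell out the method's formulas, so the following is only a generic sketch of the simplest explicit exponential integrator (exponential Euler). It shares the features highlighted above, being explicit, using the exact Jacobian, and avoiding Newton iterations, but it is not the paper's Exponential Method of variable order.

```python
import numpy as np
from scipy.linalg import expm, solve

def exponential_euler_step(f, jac, y, h):
    """One explicit exponential-Euler step:
        y_{n+1} = y_n + h * phi1(h*J) * f(y_n),   phi1(A) = A^{-1} (e^A - I),
    with J = jac(y) the exact Jacobian (assumed nonsingular in this sketch)."""
    J = jac(y)
    return y + solve(J, (expm(h * J) - np.eye(len(y))) @ f(y))

# A small stiff linear test system: y1' = -1000*y1 + y2,  y2' = y1 - y2.
A = np.array([[-1000.0, 1.0], [1.0, -1.0]])
f = lambda y: A @ y
jac = lambda y: A

y = np.array([1.0, 1.0])
h = 0.1                     # far larger than the explicit-Euler stability limit (~0.002)
for _ in range(20):
    y = exponential_euler_step(f, jac, y, h)
print(y)                    # decays smoothly; for linear systems each step is exact
```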
CommonCrawl
The objective is to connect all pairs while covering the entire board, but in every puzzle there is always a unique solution connecting all pairs (even without the board-filling constraint). So in mathematical language, we are given $t$ pairs of vertices in an $n\times n$ grid, with the promise that there is only one collection of vertex-disjoint paths that connects each pair, and furthermore, this unique collection covers the entire grid. What is the complexity of this problem? Without the promise, the problem becomes NP-hard; here is the related question with many links. (Note that the reduction in the above paper does not work for our promise version.) Update: I've realized that in the hexagonal-grid version of the game that I'm playing, a unique solution practically (!) implies that the whole board is covered, so it's not really a big difference.
CommonCrawl
"""Spread points to different lattice positions""" # well as the point in the cell (corner, body center or face center). # Each lattice point has its own displacement from the ideal position. # Not checking that shapes do not overlap if displacement is too large. model_name = opts.type + "_paracrystal" # in this range 'cuz its easy. $\mathbf Q = [Q \sin\theta\cos\phi, Q \sin\theta\sin\phi, Q \cos\theta]^T$. $\mathrm d\alpha = -\sin\theta\,\mathrm d\theta$. detector. All interference effects are within the particle itself. sufficiently far apart that the interaction between them can be ignored. $\beta = \langle F \rangle \langle F \rangle^* \big/ \langle F F^* \rangle$. $n = V_f\big/\langle V \rangle_\mathbf\xi$. most $P(Q)$ models $V_f$ is not defined and **scale** is used instead. can leave **scale** at 1.0. $P@S$ models can leave **scale** at 1.0. The volume fraction of material. by that to get the absolute scaling on the final $I(Q)$). density $n$ used in $P@S$ to get the absolute scaling on the final $I(Q)$. The radial distance determining the range of the $S(Q)$ interaction. or other of these "size" parameters. also be specified directly, independent of the estimate from $P(Q)$. radii. Whether this makes any physical sense will depend on the system. Selects the **radius_effective** value to use. effective radius should be computed from the parameters of the shape. to 0 for user defined effective radius. the local monodisperse approximation is recovered. The type of structure factor calculation to use. where $P(Q) = \langle F(Q)^2 \rangle$. Call to create a new OpenCL context, such as after a change to SAS_OPENCL. # type: () -> "GpuEnvironment" Return a new OpenCL context, such as after a change to SAS_OPENCL.
CommonCrawl
We prove that every eigenvalue of a Robin problem with boundary parameter $\alpha$ on a sufficiently smooth domain behaves asymptotically like $-\alpha^2$ as $\alpha \to \infty$. This generalizes an existing result for the first eigenvalue. Differential Integral Equations, Volume 23, Number 7/8 (2010), 659-669.
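The $-\alpha^2$ asymptotics can already be seen in a one-dimensional model problem (this 1D illustration is added here and is not the paper's setting): on $(0,1)$ with $-u'' = \lambda u$ and Robin conditions $\partial u/\partial\nu = \alpha u$ at both endpoints, the even eigenfunctions $\cosh\big(k(x-\tfrac12)\big)$ give $\lambda = -k^2$ with $k\tanh(k/2) = \alpha$, so $\lambda \sim -\alpha^2$ as $\alpha\to\infty$.

```python
import numpy as np
from scipy.optimize import brentq

def lowest_even_robin_eigenvalue(alpha):
    """Lowest even-mode eigenvalue of -u'' = lambda*u on (0,1) with du/dnu = alpha*u:
    lambda = -k^2 where k*tanh(k/2) = alpha."""
    f = lambda k: k * np.tanh(k / 2) - alpha
    k = brentq(f, alpha, alpha + 10)   # root lies just above alpha, since tanh < 1
    return -k**2

for alpha in [1, 5, 20, 100]:
    lam = lowest_even_robin_eigenvalue(alpha)
    print(alpha, lam, lam / (-alpha**2))   # ratio -> 1 as alpha grows
```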
CommonCrawl
How many liters of pure acetic acid must be mixed with 3 liters of a 30% acetic acid solution to obtain a 65% mixture? You need to think carefully about what this question is asking and develop an algebraic expression. The original solution is 3 liters of a 30% acetic acid solution, so it contains $0.30 \times 3 = 0.9$ liters of pure acid. You are going to add $x$ liters of pure acid, so the volume of the resulting mixture will be $x + 3$ liters. 65% of this mixture is to be pure acid, so that's $0.65 \times (x + 3)$ liters of pure acid. Can you complete the problem now? Make sure you verify your answer.
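For reference, the algebra set up in the hint can be completed as follows (this is just the implied calculation, worked through):
$$0.9 + x = 0.65\,(x + 3) \;\Longrightarrow\; 0.9 + x = 0.65x + 1.95 \;\Longrightarrow\; 0.35x = 1.05 \;\Longrightarrow\; x = 3.$$
So 3 liters of pure acetic acid are needed; as a check, $(0.9 + 3)/(3 + 3) = 3.9/6 = 0.65$, i.e. the mixture is indeed 65% acid.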
CommonCrawl
Now if $(M,g_m)$ and $(N,g_n)$ are two Riemannian manifolds, we can construct the product $M\times N$ equipped with the Riemannian metric $g_m+g_n$. Is there a link between the "product metric" and the natural metric on $M\times N$, or are they two different things? For example, on $\mathbb{R}\times\mathbb{R}$ the product metric is $$ds^2 = dx^2 + dy^2,$$ which gives the Euclidean distance.
CommonCrawl
by Rudolf Kohulák. Published on 28 April 2016. Due to the immense popularity of tea, many heated arguments have been started in kitchens, at dinner tables and indeed on the internet concerning the proper way of making it. There is a lot of confusion surrounding tea-making, including—but not limited to—the question of the right brewing temperature, the amount of milk one should add and the correct order of adding milk and hot water. Clearly there's way too many variables and one can easily get lost in all that madness. On top of this, looking for advice on the internet results in an inundation of contradictory advice and very hysterical arguments. So let's put all the emotions aside and seek refuge in the realms of maths and science. Recently, students from the University of Leicester came up with a formula for the perfect brew. According to them, one should add 200ml of boiled water, let it brew for 2 minutes, then add 10ml of milk and wait 6 minutes for it to reach its optimum temperature of 60°C. Which is all very nice but not everyone can be bothered to measure the exact amount of milk and water and, ultimately, we don't live in an ideal world where one can start making a cup of tea safe in the knowledge that there will be no interruption halfway through. Since we at Chalkdust are mainly concerned with practical consumer advice, let us consider the scenario described below (in the discussion that follows we shall assume that milk is added after the water. Partly because if you don't, it would make the whole article meaningless, but mostly because any other practice is simply wrong and should not be encouraged in any way). We've all been there, you boil the water, pour it into the cup and suddenly the doorbell rings. It could be the postman, your neighbour asking for some sugar, or a religious enthusiast trying to save you from eternal damnation. Either way, it's going to be a few minutes before you can enjoy your tea. Obviously, you would like to maximise your chances of your tea still being nice and hot upon your return. What should you do? Add milk now and answer the door or pour the milk later once the nuisance business has been taken care of? To start answering the question, we need to have an idea of how an object cools down. According to Newton's law of cooling, the rate of cooling of an object is negatively proportional to the difference between its temperature and the temperature of the environment. In symbols, $\frac{\mathrm{d}T}{\mathrm{d}t} = -k\,(T - T_{\text{room}})$ for some constant $k > 0$. This model assumes that all the thermodynamic properties of the system (such as the heat capacity, thermal conductivity, etc.) can be 'hidden' in that one constant k. But what is its value? The best way to find out is to perform some experiments. So we bought some thermometers, put the kettle on and made some measurements. We measured the temperature of the room (which was quite cold, only 16°C!) and made a note of the temperature of the boiled water in 1-minute intervals. Then the next step is relatively simple. We need to fit a value for k that minimises the error between the measured temperatures and the values predicted by the model. For our purposes, we chose the sum squared error as our measure and asked Microsoft Excel to do the rest of the work for us. This gave us a value of k equal to roughly 0.03. When two liquids at temperatures $T_1$ and $T_2$ are mixed, the resulting temperature is $$T_{\text{mix}} = \alpha T_1 + (1-\alpha)\,T_2,$$ where $\alpha$ is the volume of liquid one relative to the whole mixture. But does this actually work? To find out we once again grabbed the thermometers and boiled some water. We tested three scenarios: $t_m = 0$, $5$ and $10$, where $t_m$ is the time (in minutes) at which the milk is added.
Due to lack of time and tiredness, we decided to perform all three of them at once. However, due to a lack of manpower and equipment we could not measure all three of them at exactly the same time, so we measured with a 20s time delay between the three cases. After having done that, we randomised the positions of the cups and their associated time delay and repeated the experiments. The results are plotted in the figure above. It is clear that the model gives a slightly wrong temperature for the smaller times. This is probably due to the fact that the cooling effects of the containers are not accounted for. However, putting that aside, the predictions are NOT TOO BAD. According to the measurements (and indeed the model), putting in the milk earlier resulted in higher final temperatures. So while the maths needed lots of constants and parameter values, our advice to you—unlike that of the students at the University of Leicester—is simple: the next time the doorbell rings while you are making your daily brew, add the milk immediately! Rudolf Kohulák is a PhD student at UCL working on the modelling of freeze-drying processes.
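Returning to the cooling model described in this article, here is a rough numerical sketch of the two strategies using the quantities quoted above ($k \approx 0.03$ per minute, room at 16°C, 200ml of water, 10ml of milk). The initial water and milk temperatures in the sketch are assumptions for illustration, not measurements from the article.

```python
import numpy as np

T_room, k = 16.0, 0.03          # degrees C, per minute (fitted value quoted in the article)
T_water, T_milk = 95.0, 5.0     # assumed initial temperatures (not from the article)
V_water, V_milk = 200.0, 10.0   # ml
alpha = V_water / (V_water + V_milk)

def cool(T0, minutes):
    """Newton's law of cooling: T(t) = T_room + (T0 - T_room) * exp(-k t)."""
    return T_room + (T0 - T_room) * np.exp(-k * minutes)

def mix(T1, T2):
    """Temperature of the mixture, weighted by the volume fraction alpha."""
    return alpha * T1 + (1 - alpha) * T2

away = 10.0   # minutes spent at the door

milk_first = cool(mix(T_water, T_milk), away)   # add milk, then answer the door
milk_later = mix(cool(T_water, away), T_milk)   # answer the door, then add milk
print(milk_first, milk_later)                   # milk-first ends up (slightly) hotter
```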
CommonCrawl
Equivalent formulation of complexity theory in Lambda Calculus?

In complexity theory, the definitions of time and space complexity both reference a universal Turing machine: respectively, the number of steps before halting and the number of tape cells touched. Given the Church-Turing thesis, it should be possible to define complexity in terms of lambda calculus as well. My intuitive notion is that time complexity can be expressed as the number of β-reductions (we can define away α-conversion by using De Bruijn indices, and η is barely a reduction anyway), while space complexity can be defined as the number of symbols (λ's, De Bruijn indices, "apply" symbols) in the largest term appearing in the reduction. Is this correct? If so, where can I get a reference? If not, how am I mistaken?

Is counting β-reduction steps a good complexity measure? To answer this question, we should clarify what we mean by a complexity measure in the first place. One good answer is given by the Slot and van Emde Boas thesis: any good complexity measure should have a polynomial relationship to the canonical notion of time complexity defined using Turing machines. In other words, there should be a reasonable encoding $tr(.)$ from λ-calculus terms to Turing machines such that, for each term $M$ of size $|M|$: $M$ reduces to a value in $\mathrm{poly}(|M|)$ steps exactly when $tr(M)$ reduces to a value in $\mathrm{poly}(|tr(M)|)$ steps.

For a long time, it was unclear whether this could be achieved in the λ-calculus. The main problems are the following. There are terms that produce normal forms in a polynomial number of steps which are of exponential size; see (1). Even writing down the normal forms then takes exponential time. The chosen reduction strategy plays an important role, too. For example, there exists a family of terms which reduces in a polynomial number of parallel β-steps (in the sense of optimal λ-reduction (2)), but whose complexity is non-elementary (3, 4).

The paper (1) clarifies the issue by showing a reasonable encoding that preserves the complexity class PTIME, assuming leftmost-outermost call-by-name reductions. The key insight appears to be that the exponential blow-up can only happen for uninteresting reasons, which can be defeated by proper sharing of sub-terms. Note that papers like (1) show that coarse complexity classes like PTIME coincide whether you count β-steps or Turing-machine steps. That does not mean lower complexity classes like $O(\log n)$ time also coincide. Of course, such complexity classes are also not stable under variation of the Turing machine model (e.g. 1-tape vs multi-tape).

(1) B. Accattoli, U. Dal Lago, Beta Reduction is Invariant, Indeed.
(2) J.-J. Lévy, Réductions correctes et optimales dans le lambda-calcul.
(3) J. L. Lawall, H. G. Mairson, Optimality and inefficiency: what isn't a cost model of the lambda calculus?
(4) A. Asperti, H. Mairson, Parallel beta reduction is not elementary recursive.
(5) D. Mazza, Church Meets Cook and Levin.

Counting $\beta$-reductions is one kind of complexity measure for the $\lambda$-calculus, but a more flexible and reasonable one is cost semantics, where the operational semantics is augmented by various notions of cost. A good starting point is the OPLSS 2018 lecture series on cost semantics by Jan Hoffmann (videos and lecture materials are available at the link).

A note about space complexity. While, as pointed out by Martin in his answer, the naive way to count time complexity turns out to work well, the definition of space complexity you suggest is easily seen to be inadequate.
Indeed, in the case of space you really want to be able to speak of sublinear complexity, e.g. you want to be able to recover the class $\mathsf L$ (deterministic logspace, which is to space a bit what $\mathsf P$ is to time), and your definition obviously does not allow you to do that: in any reduction $M\to^\ast N$, you are counting at least the size of $M$, which is linear in the input size. The moral of the story is that rewriting is not suitable for counting space.

Ulrich Schöpp and Ugo Dal Lago were the first to advocate the use of the so-called geometry of interaction (GoI) for dealing with sublinear space complexity (cf. their ESOP 2010 paper "Functional Programming in Sublinear Space"). As far as I know, the GoI is used in one way or another in all lambda-calculus-based characterizations of sublinear space classes. I do not want to get into what the GoI is here; let's just say that it is a way of executing a lambda-term without reducing it (i.e., without firing $\beta$-redexes) but by "travelling" through its syntactic tree with certain auxiliary information.
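To make the question's proposed time measure concrete, here is a small self-contained sketch (not from the original thread) of a leftmost-outermost evaluator for λ-terms in De Bruijn notation that counts β-reduction steps. It does no sharing of sub-terms, so it exhibits exactly the size blow-up that the encodings discussed above are designed to avoid; it is only meant to illustrate what "number of β-reductions" means operationally.

```python
from dataclasses import dataclass

# Lambda terms in De Bruijn notation
@dataclass
class Var:
    idx: int

@dataclass
class Lam:
    body: object

@dataclass
class App:
    fun: object
    arg: object

def shift(t, d, cutoff=0):
    """Shift the free variables of t by d (indices >= cutoff count as free here)."""
    if isinstance(t, Var):
        return Var(t.idx + d) if t.idx >= cutoff else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, d, cutoff + 1))
    return App(shift(t.fun, d, cutoff), shift(t.arg, d, cutoff))

def subst(t, j, s):
    """Substitute s for variable j in t."""
    if isinstance(t, Var):
        return s if t.idx == j else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, j + 1, shift(s, 1)))
    return App(subst(t.fun, j, s), subst(t.arg, j, s))

def step(t):
    """One leftmost-outermost (normal-order) beta step; returns None if t is normal."""
    if isinstance(t, App):
        if isinstance(t.fun, Lam):                      # the beta redex itself
            return shift(subst(t.fun.body, 0, shift(t.arg, 1)), -1)
        r = step(t.fun)
        if r is not None:
            return App(r, t.arg)
        r = step(t.arg)
        return App(t.fun, r) if r is not None else None
    if isinstance(t, Lam):
        r = step(t.body)
        return Lam(r) if r is not None else None
    return None

def normalize(t, fuel=10_000):
    steps = 0
    while (r := step(t)) is not None and steps < fuel:
        t, steps = r, steps + 1
    return t, steps

# Example: K = \x.\y.x applied to two arguments takes two beta steps.
K = Lam(Lam(Var(1)))
term = App(App(K, Lam(Var(0))), Lam(Lam(Var(0))))
print(normalize(term))   # (Lam(body=Var(idx=0)), 2): normal form λ.0 after 2 steps
```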
CommonCrawl
The continuation of the lecture course Algebraic Geometry held in the winter term 18/19. We will cover the theory of $\mathscr O_X$-modules and their cohomology. Date/time: Mon, 10-12, Wed, 10-12, S-U-3.03. First lecture: April 8. Notes on the lecture course: pdf (last updated: April 17). Date/time: Wed, 12-2pm, S-U-4.01. First meeting: Wed, April 17.
CommonCrawl
a solution exists (the problem is finding a closed-form expression). $a,b,v,u$ are parameters such that $0<a<b<1$, $v>0$, $u>0$. Even an approximation for the solution will help. Since an expression is needed, numerical methods are not helpful here.

For a numerical solution, I would start here (it's a reasonable expression where you can at least estimate the number and nature of the solutions). Since you are asking for an analytical solution, this won't help much, because $p$ can in principle be anything from $1$ to $\infty$. Unless you have other hints about the values: if $p$ is very big, you can probably ignore the first term and get an analytical approximation. If $p$ is very close to $1$ ($a$ and $b$ very close together), you can do a series expansion of all the terms, where of course you would have the painful problem of differentiating the $p$ term in the exponent. Unfortunately, I tried, and even this equation contains the combination $u\ln u$ and is therefore not solvable in terms of standard functions (you need the Lambert W function).
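Since the original equation is not reproduced above, the following is only a small illustration (an assumption-laden sketch, not the answer's actual computation) of the Lambert-W step the answer alludes to: an equation of the form $u\ln u = c$ is solved by $u = e^{W(c)} = c/W(c)$, which `scipy.special.lambertw` can evaluate numerically.

```python
import numpy as np
from scipy.special import lambertw

def solve_u_log_u(c):
    """Solve u * ln(u) = c for u > 0 via u = exp(W(c)); principal branch for c > 0."""
    return np.exp(lambertw(c).real)

c = 2.5
u = solve_u_log_u(c)
print(u, u * np.log(u))   # the second number should reproduce c
```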
CommonCrawl
Luigi De Pascale: Local solutions and existence of optimal transport maps for the $W_\infty$ Wasserstein distance. Extensions to more classical Monge problems.
Yifeng Yu: Asymptotic behaviour of infinity harmonic functions near isolated singularities.
Jeremy Kilpatrick: How do we do it? Teaching Mathematics to U.S.
Sybilla Beckmann: Mathematics for Elementary Teachers: A Focus on "Explaining Why"
Alexander Givental: Will Mathematics Ever Make Sense?
Bill Collins: Where do we go from here?
Burkhard Wilking: Manifolds with Positive Curvature Operators are Space Forms.
Steve Smale: Dynamics of Emergence and Flocking.
Kathleen Hoffman: Stability Results for Elastic Rods with Electrostatic Self-Repulsion.
Alice Jukes: Symmetric Homoclinic Bifurcation.
Aimee Johnson: The Relative Growth of Information in Two-Dimensional Partitions.
Jenny Harrison: Chainlet Theory and Dynamics.
Ana Dias: Coupled Cell Networks: ODE-Equivalence, Minimality and Quotients.
Mary Silber: Controlling Pattern Formation.
Claire Postlethwaite: Controlling Travelling Waves of the Complex Ginzburg-Landau Equation with Spatial Feedback.
Anna Ghazaryan: Traveling Waves in Porous Media Combustion: Uniqueness of Waves for Small Thermal Diffusivity.
Rachel Kuske: Multi-Scale Dynamics and Noise Sensitivity.
Ami Radunskaya: Stochastic Perturbations of Growth Models.
Lea Popovic: Degenerate Diffusion Limits in Gene Duplication.
Hans Kaper: What Goes Into a Good Proposal, Where Do I Send It, and What Happens to It?
CommonCrawl
Abstract: Multidimensional hypoelliptic diffusions arise naturally as models of neuronal activity. Estimation in these models is challenging because of the degenerate structure of the diffusion coefficient. We build a consistent estimator of the drift and variance parameters with the help of a discretized log-likelihood of the continuous process, when discrete-time observations of both coordinates are available on an interval of length $T = N\Delta$, with $\Delta$ the time step between the observations. We discuss the difficulties generated by the hypoellipticity and prove the consistency and asymptotic normality of the estimator in the asymptotic setting $T\to\infty$ as $\Delta\to 0$. We test our approach numerically on the hypoelliptic FitzHugh-Nagumo model, which describes the firing mechanism of a neuron.
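For readers who want to experiment, here is a minimal sketch of simulating a hypoelliptic FitzHugh-Nagumo-type diffusion with an Euler-Maruyama scheme — the kind of discretely observed data the abstract's estimator would be applied to. The specific parametrization and parameter values below are illustrative assumptions, not taken from the paper; the key feature is only that the noise enters through one coordinate, so the diffusion matrix is degenerate.

```python
import numpy as np

def simulate_fhn(T=100.0, delta=0.01, eps=0.1, gamma=1.5, beta=0.8,
                 sigma=0.3, s=0.0, x0=0.0, y0=0.0, seed=0):
    """Euler-Maruyama for a hypoelliptic FitzHugh-Nagumo-type SDE:
         dX_t = (1/eps) * (X_t - X_t**3 - Y_t + s) dt            (no noise term)
         dY_t = (gamma * X_t - Y_t + beta) dt + sigma dW_t
       All parameter values here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n = int(T / delta)
    x = np.empty(n + 1)
    y = np.empty(n + 1)
    x[0], y[0] = x0, y0
    for k in range(n):
        dw = rng.normal(0.0, np.sqrt(delta))
        x[k + 1] = x[k] + (x[k] - x[k]**3 - y[k] + s) / eps * delta
        y[k + 1] = y[k] + (gamma * x[k] - y[k] + beta) * delta + sigma * dw
    return x, y

x, y = simulate_fhn()
print(x[:5], y[:5])
```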
CommonCrawl
The aim of this article is to give a complete account of the Eichler-Brandt theory over function fields and the basis problem for Drinfeld-type automorphic forms. Given an arbitrary function field $k$ together with a fixed place $\infty$, the authors construct a family of theta series from the norm forms of "definite" quaternion algebras, and establish an explicit Hecke-module homomorphism from the Picard group of an associated definite Shimura curve to a space of Drinfeld-type automorphic forms. The "compatibility" of these homomorphisms with different square-free levels is also examined. These Hecke-equivariant maps lead to a nice description of the subspace generated by the authors' theta series, and thereby contribute to the so-called basis problem. Restricting the norm forms to pure quaternions, the authors obtain another family of theta series which are automorphic functions on the metaplectic group, and this results in a Shintani-type correspondence between Drinfeld-type forms and metaplectic forms.
CommonCrawl
It is quite hard to solve non-linear systems of equations, while linear systems are comparatively easy to study. There are numerical techniques which approximate nonlinear systems by linear ones, in the hope that the solutions of the linear systems are close enough to the solutions of the nonlinear ones.

Question: Consider the solution set $S$ of the linear equation $x_1 + 2x_2 + x_3 = 1$ in $\Bbb R^3$. Calculate the distance of the point $(1, 1, 1)$ from $S$.

Solve a Simultaneous Set of Two Linear Equations: this page will show you how to solve two equations with two unknowns. There are many ways of doing this, but this page uses the method of substitution.
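For the quoted question, the distance from a point $p$ to the plane $a\cdot x = b$ is $|a\cdot p - b|/\lVert a\rVert$; here that gives $|1 + 2 + 1 - 1|/\sqrt{6} = 3/\sqrt{6} = \sqrt{6}/2 \approx 1.22$. A short check with NumPy:

```python
import numpy as np

a = np.array([1.0, 2.0, 1.0])   # normal vector of the plane x1 + 2*x2 + x3 = 1
b = 1.0
p = np.array([1.0, 1.0, 1.0])

dist = abs(a @ p - b) / np.linalg.norm(a)
print(dist)                      # 1.2247... = sqrt(6)/2
```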
CommonCrawl
Noting links to the [`CyclotomicField`](http://www.sagemath.org/doc/reference/number_fields/sage/rings/number_field/number_field.html#sage.rings.number_field.number_field.CyclotomicField) and [`number_field_elements_from_algebraics`](http://www.sagemath.org/doc/reference/number_fields/sage/rings/qqbar.html?highlight=number_field_elements_from_algebraics#sage.rings.qqbar.number_field_elements_from_algebraics) documentation for reference. Great answer, very helpful! Although `CyclotomicField` looks better at first glance than at second glance, since it embeds naturally into $\mathbb C$, not $\mathbb R$: its elements don't support extracting, e.g., a real part.
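A hypothetical Sage session (written in plain Python-style Sage code, not taken from the discussion above) sketching the alternative route via `number_field_elements_from_algebraics`: starting from a real algebraic number such as $2\cos(2\pi/7)$, it builds a small number field containing it, rather than working inside the full `CyclotomicField(7)`, whose elements live naturally in $\mathbb C$. The exact keyword arguments and return format follow the documentation linked above and may differ between Sage versions.

```python
# Sketch of a Sage session; keyword names per the linked docs, may vary by version.
from sage.rings.qqbar import number_field_elements_from_algebraics

z = QQbar.zeta(7)                      # primitive 7th root of unity in QQbar
x = z + z.conjugate()                  # 2*cos(2*pi/7): a *real* algebraic number

# Ask for a (small) number field containing x, plus a homomorphism back into QQbar.
K, elts, hom = number_field_elements_from_algebraics([x], minimal=True)
print(K)          # expected: a degree-3 number field generated by 2*cos(2*pi/7)
print(elts[0])    # x expressed as an element of K
```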
CommonCrawl
Developing intuition about the derivative.

The derivative of a piecewise linear function.

The blue line segments are the graph of a function $f(x)$ that is linear along each of a bunch of small intervals in $x$. You can change $f$ by dragging the blue points, which move the ends of line segments up and down. You can also drag the red points to move a line segment up and down without changing its slope. The derivative $f'(x)$ of the function $f(x)$ is shown by the green horizontal line segments. The derivative $f'(x)$ indicates the slope of the function $f(x)$. Since, along each small interval of $x$, the function $f(x)$ has the same slope, the derivative $f'(x)$ is constant along each of those intervals. If two adjacent line segments of $f(x)$ have two different slopes, then the derivative $f'(x)$ jumps to a new value at the point between the corresponding intervals in $x$. To test your ability to estimate the derivative from the function, you can uncheck the "show derivative" checkbox and attempt to sketch what you think the derivative is. Alternatively, you can uncheck the "show function" checkbox to test your ability to sketch the function from its derivative.

Derivative of interpolating polynomial.

The blue curve is the graph of a polynomial $f(x)$. You can change $f$ by dragging the blue points, as $f$ is an interpolating polynomial through those points. The derivative $f'(x)$ of the function $f(x)$ is shown by the green curve. The derivative $f'(x)$ indicates the slope of the function $f(x)$, so that it is positive when $f$ is increasing, negative when $f$ is decreasing, and zero at the points where the tangent line to $f$ is horizontal. To test your ability to estimate the derivative from the function, you can uncheck the "show derivative" checkbox and attempt to sketch what you think the derivative is. Alternatively, you can uncheck the "show function" checkbox to test your ability to sketch the function from its derivative.
CommonCrawl
In General > s.a. foliations; Hypersurface; immersions.
$ Def: A map f : S → M between two differentiable manifolds is an embedding if it is an injective immersion.
* Idea: The map f is a globally one-to-one immersion, and f(S) does not intersect itself in M.
* In addition: Sometimes one wants S to be homeomorphic to f(S) in the topology induced from M.
* Whitney (strong) embedding theorem: Any smooth (Hausdorff, second-countable) n-dimensional manifold can be smoothly embedded in 2n-dimensional Euclidean space; > s.a. Wikipedia page.
@ General references: Skopenkov T&A(10) [classification of smooth embeddings of 4-manifolds in $\mathbb R^7$]; Daverman & Venema 09.
@ With metric: Carter CM(97)ht-fs, ht/97-ln [formalism]; Pavšič & Tapia gq/00 [references]; > s.a. membranes [dynamics].
@ Embedding diagrams: Romano & Price CQG(95)gq/94 [initial data for black hole collisions]; Lu & Suen GRG(03) [extrinsic-curvature-based]; Hledík et al AIP(06)ap/07; > s.a. reissner-nordström spacetime; schwarzschild geometry.
> Related topics: see knots; types of graphs [embedded in manifolds]; Whitney Duality Theorem; Wild Embeddings.

Embedding with Riemannian Metric > s.a. riemannian geometry / extrinsic curvature.
* Results: Any compact n-dimensional $C^1$ Riemannian manifold (with or without boundary) has a $C^1$ isometric embedding in 2n-dimensional Euclidean space; any non-compact one in 2n + 1 dimensions; however, if a compact one has a $C^1$ embedding in k > n dimensions, then it also has a $C^1$ isometric embedding there (thus any point has a neighborhood with a $C^1$ isometric embedding in n + 1 dimensions).
* Ideal embeddings: The embedded manifold receives the least amount of tension from the surrounding space.
* Results: Any compact n-dimensional $C^p$ Riemannian manifold with p > 2 has a $C^p$ isometric embedding in $\frac{1}{2}n(3n + 11)$-dimensional Euclidean space; any non-compact one in $\frac{1}{2}n(n + 1)(3n + 11)$ dimensions (often much less).
@ References: Greene 70; Arnlind et al a1001 [geometry and algebraic structure]; Arnlind et al a1003 [in terms of Nambu brackets].

Embedding with Lorentzian Metric > s.a. lorentzian geometry [hypersurfaces]; extrinsic curvature; formulations of general relativity.
* Remark: Obviously, the Lorentzian, global case in general is not so easy; for example, the metric may have closed timelike curves.
* In flat spaces: Any n-dimensional $C^k$ Lorentzian manifold, with 3 ≤ k < ∞, can be embedded in a (q + 2)-dimensional flat space (2 of the dimensions are timelike!), with $q = \frac{1}{2}n(3n + 11)$ in the compact case (46 for n = 4), and $q = \frac{1}{6}n(2n^2 + 37) + \frac{5}{2}n^2 + 1$ in the non-compact case (87 for n = 4); if the spacetime is globally hyperbolic, q + 1 is enough.
* In Ricci-flat spaces: (in 4D, the Campbell-Magaard theorem) Any n-dimensional (n ≥ 3) Lorentzian manifold can be isometrically and harmonically embedded in an (n + 1)-dimensional semi-Riemannian Ricci-flat space.
* Hyperspace: In general relativity, the space of embeddings of a hypersurface in spacetime (roughly!).
@ General references: Rosen RMP(65) [examples]; Clarke PRS(70); Greene 70; Mueller & Sánchez TAMS-a0812 [globally hyperbolic]; Kim CQG(09) [with a non-compact Cauchy surface]; Ponce de León CQG(15)-a1509.
@ Hyperspace: Kuchař JMP(76), JMP(76), JMP(76), JMP(77).
@ For 4D Ricci-flat spaces: Romero et al GRG(96), Lidsey et al CQG(97)gq/99 [4D solution in 5D]; Mashhoon & Wesson GRG(07) [with a 4D cosmological constant].
@ For 4D spaces with cosmological constant: Ponce de León G&C(08)-a0709 [in various 5D spaces].
@ Campbell-Magaard theorem: Dahia & Romero JMP(02); Anderson gq/04 [attack]; Dahia & Romero CQG(05)gq [interpretation]; Wesson gq/05 [apology]; Avalos et al JMP(17)-a1701 [extension to Weyl manifolds].
@ For n-dimensional Ricci-flat spaces: Seahra & Wesson CQG(03)gq; Chervon et al PLA(04); Anderson gq/04; Avalos et al a1708.
@ Codimension-1 embeddings: Anderson & Lidsey CQG(01)gq, Katzourakis mp/04, m.DG/05 [in Einstein spaces]; Dahia & Romero JMP(02) [with prescribed D+1 Ricci tensor]; Haesen & Verstraelen JMP(04)gq/03 [ideal embeddings]; Kuhfittig AP(18)-a1805 [applications to wormholes and galaxy rotation curves].
@ Codimension-2 embeddings: Dillen et al JGP(04) [inequalities intrinsic/extrinsic curvature].
* Results: For a $C^\infty$ compact manifold (with possibly degenerate metric), an embedding can be found in $2k = n(n+5)$ dimensions, signature (k, k), and $2k = 2(2n+1)(2n+6)$ dimensions, signature (k, k), in the non-compact case.
CommonCrawl
According to the first comment here, it should be, but the brief description of how this is proved is not so clear to me. I am looking to better understand this argument (or another), or locate a reference discussing this. Does it use the fact that $S^2$ has a unique complex structure?
CommonCrawl
The left-most index varies most rapidly if we look at how the loops change. For a two-dimensional array, the number of elements stored before a particular array location can be calculated as the column index of the element we are looking for, added to its row index times the number of columns. How does the above recurrence relation work?
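The statement as quoted mixes the two standard layouts ("left-most index varies fastest" describes column-major order, while the offset formula described is the row-major one), so here is a small sketch (not from the original question) of both address formulas: in row-major order (C-style, rightmost index fastest) the offset of element $(i, j)$ is $i \times n_{\text{cols}} + j$, while in column-major order (Fortran-style, leftmost index fastest) it is $j \times n_{\text{rows}} + i$.

```python
import numpy as np

rows, cols = 3, 4
A = np.arange(rows * cols).reshape(rows, cols)

def offset_row_major(i, j, n_cols):
    return i * n_cols + j    # all earlier rows, then j elements of row i

def offset_col_major(i, j, n_rows):
    return j * n_rows + i    # all earlier columns, then i elements of column j

i, j = 2, 1
print(offset_row_major(i, j, cols),
      A.flatten(order='C')[offset_row_major(i, j, cols)] == A[i, j])   # True
print(offset_col_major(i, j, rows),
      A.flatten(order='F')[offset_col_major(i, j, rows)] == A[i, j])   # True
```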
CommonCrawl
How exactly is a partial derivative different from the gradient of a function? In both cases, we are computing the rate of change of a function with respect to some independent variable. While I was going through gradient descent, the partial derivative and the gradient were also written and used separately there.

If you look at the definition of the gradient-descent method, it is completely defined in terms of the gradient: the vector of all the partial derivatives, $\nabla z = \left(\frac{\partial z}{\partial x_1}, \ldots, \frac{\partial z}{\partial x_n}\right)$, where $z = z(x_1, \ldots, x_n)$ is some function of the $x$s.
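A minimal sketch (illustrative, not from the original answer) making the distinction concrete: each partial derivative $\partial z/\partial x_i$ is a single number measuring the rate of change along one coordinate, while the gradient stacks them all into a vector, and a gradient-descent step updates every coordinate at once using that vector.

```python
import numpy as np

def z(x):                       # example function z(x1, x2) = x1**2 + 3*x2**2
    return x[0]**2 + 3*x[1]**2

def grad_z(x):                  # gradient = vector of the two partial derivatives
    dz_dx1 = 2*x[0]             # partial derivative w.r.t. x1 (a scalar)
    dz_dx2 = 6*x[1]             # partial derivative w.r.t. x2 (a scalar)
    return np.array([dz_dx1, dz_dx2])

x = np.array([1.0, 2.0])
eta = 0.1                       # learning rate
for _ in range(3):
    x = x - eta * grad_z(x)     # the descent step uses the whole gradient vector
    print(x, z(x))              # z decreases at each step
```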
CommonCrawl
I am interested in studying the implicit function $\alpha^*(y)$ so defined. Any tips to characterize it would be greatly appreciated.
CommonCrawl
Quantum Chromodynamics (QCD) is the quantum field theory believed to describe the strong nuclear force.
Why hasn't QCD fed back to nuclear engineering?
How many quark flavor quantum numbers are really needed?
How does the instanton break the $U(1)_A$ symmetry in QCD? The $U(1)_A$ symmetry in QCD is anomalous. It's supposed to be broken by the instantons. Can anyone physically describe how that happens?
How and why is the pentaquark stable if it consists of 4 quarks and an antiquark? If matter and antimatter annihilate each other, why is the pentaquark stable, since quarks are matter and the antiquark is antimatter?
Why does the approximate $\rm U(2)\times U(2)$ global symmetry of QCD have a special importance?
How do we know that gluons have no electric charge?
How can we have a quark condensate without a quark potential?
Why does the literature list the strong coupling at the scale of the Z boson's mass?
Why are quarks in the fundamental and gluons in the adjoint?
Why do quarks and gluons have colour?
Why must a hadronic decay of the $J/\psi$ meson include (at least) three gluons? Why is the decay mediated by a single gluon allowed for the $\rho^0$ meson?
Why is lattice QCD called non-perturbative?
Does the external leg contraction of a gluon in QCD carry a group generator index? What should I read to understand this question?
If quarks can't be isolated in the first place, how did they become confined in the early universe?
Why is color confinement a difficult problem?
What macroscopic fields exist around a color supercurrent?
What is the relation between the Gribov problem and color confinement?
Does it really make sense to talk about the color of gluons?
Isn't there a unique vacuum of the Yang-Mills quantum theory?
How do we show that gluon fields have color?
CommonCrawl